Already did. Key words: smart developer. Like I was saying, not every developer knows everything, man, and just because you say "go ask a developer" doesn't mean they know what they're talking about. No one knows everything, and a lot of developers don't know what they're doing when it comes to the Cell. Fact. Or they just don't want to take the time. The fact is the PS3 is perfectly capable for gaming, and even more so if developers take the time.
"In-order microprocessors suffer because as soon as you introduce a cache into the equation, you no longer have control over memory latencies. Most of the time, a well-designed cache is going to give you low latency access to the data that you need. But look at the type of applications that Cell is targeted at (at least initially) - 3D rendering, games, physics, media encoding etc. - all applications that aren't dependent on massive caches. Look at any one of Intel's numerous cache increased CPUs and note that 3D rendering, gaming and encoding performance usually don't benefit much beyond a certain amount of cache. For example, the Pentium 4 660 (3.60GHz - 2MB L2) offered a 13% increase in Business Winstone 2004 over the Pentium 4 560 (3.60GHz - 1MB L2), but less than a 2% average performance increase in 3D games. In 3dsmax, there was absolutely no performance gain due to the extra cache. A similar lack of performance improvement can be seen in our media encoding tests. The usage model of the Playstation 3 isn't going to be running Microsoft Office; it's going to be a lot of these "media rich" types of applications like 3D gaming and media encoding. For these types of applications, a large cache isn't totally necessary - low latency memory access is necessary, and lots of memory bandwidth is important, but you can get both of those things without a cache. How? Cell shows you how.
Each SPE features 256KB of local memory, more specifically, not cache. The local memory doesn't work on its own. If you want to put something in it, you need to send the SPE a store instruction. Cache works automatically; it uses hard-wired algorithms to make good guesses at what it should store. The SPE's local memory is the size of a cache, but works just like a main memory. The other important thing is that the local memory is SRAM based, not DRAM based, so you get cache-like access times (6 cycles for the SPE) instead of main memory access times (e.g. 100s of cycles).
What's the big deal then? With the absence of cache, but the presence of a very low latency memory, each SPE effectively has controllable, predictable memory latencies. This means that a smart developer, or smart compiler, could schedule instructions for each SPE extremely granularly. The compiler would know exactly when data would be ready from the local memory, and thus, could schedule instructions and work around memory latencies just as well as an out-of-order microprocessor, but without the additional hardware complexity. If the SPE needs data that's stored in the main memory attached to the Cell, the latencies are just as predictable, since once again, there's no cache to worry about mucking things up.
Making the SPEs in-order cores made a lot of sense for their tasks. However, the PPE being in-order is more for space/complexity constraints than anything else. While the SPEs handle more specialized tasks, the PPE's role in Cell is to handle all of the general purpose tasks that are not best executed on the array of SPEs. The problem with this approach is that in order to function as a relatively solid performing general purpose processor, it needs a cache - and we've already explained how cache can hurt in-order cores. If there's a weak element of the Cell architecture, it's the PPE, but then again, Cell isn't targeted at general purpose computing, despite what some may like to spin it as."
Walker34
On CUDA processors, you have multi-thousand register files (e.g. 32768 (32K) 32-bit registers for the GeForce 8600GT, i.e. 8192 32-bit registers per SP) stored next to the ALUs, and it has both software- and hardware-managed cache.