Have to say I'm impressed with Windows 7 so far... clean and fast. I've had one or two compatibility issues so far, but in general it's been excellent.
blue_hazy_basic
Can't wait to try it, but my computer's too full of stuff. Have to offload half a TB of stuff first.
I guess you didn't bother spending 5 minutes reading the link. The WARP rendering will allow people with integrated GPUs to leverage available CPU power to help render modern games. This could allow for a much broader base for PC developers to target.
I'm not sure you understand what this is about.
This isn't meant for games. It's meant for intensive applications that could benefit from GPU processing, like video/image editors or 3D modelling applications. Many of these applications don't bother implementing DirectX processing because it adds a layer of complexity and instability to their products. WARP10 could remove that instability layer.
But for gaming, this isn't likely to be used for anything beyond 2D games and very simple 3D ones.
This isn't going to do anything for PC gaming, really.
XaosII
I saw this a while back and I don't think it is useful at all; CPUs are so weak at graphics that it takes an i7 just to approach the performance you'd expect from the lowest-end integrated GPU. There is a reason we moved away from the CPU for rendering graphics.
If you want to see something from Windows 7 that will revolutionize gaming, try the compute shader in DX11.
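For anyone unfamiliar with the term: a compute shader runs general data-parallel work (physics, post-processing, and so on) across the GPU's many cores, dispatched in "thread groups". Real DX11 compute shaders are written in HLSL and run on the GPU; purely as an illustration of the dispatch model, here's a hedged CPU-threaded sketch where each group reduces its own slice of an array:

```cpp
#include <algorithm>
#include <cassert>
#include <thread>
#include <vector>

// Illustrative sketch only: mimics a compute-shader dispatch by giving
// each "thread group" a contiguous slice of the input to reduce.
// Each group writes only its own output slot, so no synchronization
// between groups is needed -- the same property a GPU dispatch relies on.
std::vector<float> dispatch_sum(const std::vector<float>& data, int groups) {
    std::vector<float> partial(groups, 0.0f);
    std::vector<std::thread> pool;
    const int n = static_cast<int>(data.size());
    const int chunk = (n + groups - 1) / groups;  // ceil(n / groups)
    for (int g = 0; g < groups; ++g) {
        pool.emplace_back([&, g] {
            const int begin = g * chunk;
            const int end = std::min(begin + chunk, n);
            for (int i = begin; i < end; ++i)
                partial[g] += data[i];  // group g writes only slot g
        });
    }
    for (auto& t : pool) t.join();
    return partial;  // a final pass (or second dispatch) reduces these
}
```

With 8 elements and 4 groups, each group sums 2 elements and a second pass combines the partial sums, much like a multi-pass GPU reduction.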
AnnoyedDragon
Thank you for not saying tessellation.
I'm seriously sick of that hyped up addition.
Thank you for not saying tessellation.
I'm seriously sick of that hyped up addition.
Irick_cb
Tessellation isn't going to revolutionize anything; it is just a form of mesh LOD that enables higher-resolution models up close.
[QUOTE="Irick_cb"]
Thank you for not saying tessellation.
I'm seriously sick of that hyped up addition.
AnnoyedDragon
Tessellation isn't going to revolutionize anything; it is just a form of mesh LOD that enables higher-resolution models up close.
Given what I've read on the subject, I don't know if Compute Shaders will be as strong as we hope, either. After all, they have to work across different architectures (think about resolving differences between CUDA and CAL, not to mention whatever Larrabee will be using) and will therefore lose some performance in an arena where this could be critical (for example, Folding@home insists on being closer to the metal, which is why they're against using either DX11 or OpenCL at this time).
1, though just 1... here it is. http://msdn.microsoft.com/en-us/library/dd285359.aspx
It's known as the "Windows Advanced Rasterization Platform" and it allows the CPU to render DirectX, meaning that pre-built computers (which often have powerful CPUs but low-end GPUs) will actually be able to play demanding games at basic settings.
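To give a feel for what "the CPU rendering DirectX" means: the heart of any rasterizer, hardware or software, is deciding which pixels each triangle covers. The toy C++ edge-function test below is not WARP's actual code, just a sketch of the kind of per-pixel work a software rasterizer moves onto the CPU:

```cpp
#include <cassert>

// Signed edge function: positive when point p is to one side of edge a->b.
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Toy coverage pass: test every pixel center against the triangle's three
// edges. A point is counted as covered when all three edge values are
// non-negative (assumes a consistent triangle winding).
int count_covered_pixels(int width, int height,
                         float x0, float y0, float x1, float y1,
                         float x2, float y2) {
    int covered = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel center
            if (edge(x0, y0, x1, y1, px, py) >= 0 &&
                edge(x1, y1, x2, y2, px, py) >= 0 &&
                edge(x2, y2, x0, y0, px, py) >= 0)
                ++covered;
        }
    return covered;
}
```

A real software rasterizer like WARP then runs shading on top of this, using SIMD and multiple cores; the brute-force loop here is only the idea, not the optimization.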
I know a handful of people using the RC and every single one of them is loving it. One of the first things people mention is how much quicker and "snappier" it is, even in boot time.
I've downloaded it and just haven't had the opportunity to burn a DVD and load it up yet.
That's good news. Does that mean computers that already have good GPUs and CPUs will run better too?
mayforcebeyou
I don't have any numbers, but a couple gamer friends of mine went from Vista to the Windows 7 RC and have noticed very favorable results on the same hardware.
However, I have no idea how it compares to XP in the same regard.
Given what I've read on the subject, I don't know if Compute Shaders will be as strong as we hope, either. After all, they have to work across different architectures (think about resolving differences between CUDA and CAL, not to mention whatever Larrabee will be using) and will therefore lose some performance in an arena where this could be critical (for example, Folding@home insists on being closer to the metal, which is why they're against using either DX11 or OpenCL at this time).
HuusAsking
I'm not sure what you mean by Folding@Home insisting on using ATI's CTM solution; my GPU2 client is working fine with Nvidia.
Anyway, regarding GPU computing, I heard OpenCL being slower than CUDA was a temporary issue they were resolving? Cross-architecture methods like OpenCL are bound to be slower than native solutions like CUDA; however, I don't think the performance difference is going to invalidate the method. The reason I'm looking forward to the compute shader in DX11 is that it will standardize the method across GPUs, making it easier for developers to implement while having it work on both ATI and Nvidia GPUs.
We've had a taste of GPU computing in a few games, but its application is not limited to just physics; we won't know what it is truly capable of until the install base is there to justify experimentation with it in games.
I guess you didn't bother spending 5 minutes reading the link. The WARP rendering will allow people with integrated GPUs to leverage available CPU power to help render modern games. This could allow for a much broader base for PC developers to target.[QUOTE="XaosII"]
I'm not sure you understand what this is about.
This isn't meant for games. It's meant for intensive applications that could benefit from GPU processing, like video/image editors or 3D modelling applications. Many of these applications don't bother implementing DirectX processing because it adds a layer of complexity and instability to their products. WARP10 could remove that instability layer.
But for gaming, this isn't likely to be used for anything beyond 2D games and very simple 3D ones.
This isn't going to do anything for PC gaming, really.
dc337
I guess you didn't bother spending 5 minutes thinking about it. This isn't going to mean much for gaming, especially when you consider that Windows 7 will be out late this year, it will take about 2 more years to reach any notable level of adoption, and about that same time for even a handful of games to bother implementing it.
Not to mention that such a technology already exists with DirectX's Reference Rasterizer. I'm just assuming WARP10 is the next version of it and should have far fewer performance issues.
This is likely going to benefit non-gaming applications far more than gaming. The largest contribution to creating a broader gaming base would be for Intel to up their standards in integrated GPUs; hopefully Larrabee can do that. There's not much more MS can do on their end without stirring up a big issue.
[QUOTE="HuusAsking"]Given what I've read on the subject, I don't know if Compute Shaders will be as strong as we hope, either. After all, they have to work across different architectures (think about resolving differences between CUDA and CAL, not to mention whatever Larrabee will be using) and will therefore lose some performance in an arena where this could be critical (for example, Folding@home insists on being closer to the metal, which is why they're against using either DX11 or OpenCL at this time).
AnnoyedDragon
I'm not sure what you mean by Folding@Home insisting on using ATI's CTM solution; my GPU2 client is working fine with Nvidia.
Anyway, regarding GPU computing, I heard OpenCL being slower than CUDA was a temporary issue they were resolving? Cross-architecture methods like OpenCL are bound to be slower than native solutions like CUDA; however, I don't think the performance difference is going to invalidate the method. The reason I'm looking forward to the compute shader in DX11 is that it will standardize the method across GPUs, making it easier for developers to implement while having it work on both ATI and Nvidia GPUs.
We've had a taste of GPU computing in a few games, but its application is not limited to just physics; we won't know what it is truly capable of until the install base is there to justify experimentation with it in games.
OpenCL will be the catalyst that brings us widespread GPGPU. Cross-platform, hardware-independent.
When I saw it
I forgot about games.
...
....
.....
for like 20 minutes.
[QUOTE="HuusAsking"]Given what I've read on the subject, I don't know if Compute Shaders will be as strong as we hope, either. After all, they have to work across different architectures (think about resolving differences between CUDA and CAL, not to mention whatever Larrabee will be using) and will therefore lose some performance in an arena where this could be critical (for example, Folding@home insists on being closer to the metal, which is why they're against using either DX11 or OpenCL at this time).
AnnoyedDragon
I'm not sure what you mean by Folding@Home insisting on using ATI's CTM solution; my GPU2 client is working fine with Nvidia.
Anyway, regarding GPU computing, I heard OpenCL being slower than CUDA was a temporary issue they were resolving? Cross-architecture methods like OpenCL are bound to be slower than native solutions like CUDA; however, I don't think the performance difference is going to invalidate the method. The reason I'm looking forward to the compute shader in DX11 is that it will standardize the method across GPUs, making it easier for developers to implement while having it work on both ATI and Nvidia GPUs.
We've had a taste of GPU computing in a few games, but its application is not limited to just physics; we won't know what it is truly capable of until the install base is there to justify experimentation with it in games.
Closer to the metal as in they prefer to use lower-level, higher-performance tools like CAL and CUDA. I don't think they'll make the jump unless either Larrabee spanks CAL and CUDA performance or they converge to a unified standard.
Hence why many of us have more than one GPU. A decent CPU goes a long way, but you really have to couple it with a decent card; both my CPU and GPUs get taxed almost to the limit as it is.
That's good, but CPUs do not have the architecture to handle the graphics of modern games...
Wasdie
You can't play games on only a CPU... even top-end CPUs nowadays can't play current games at ANY settings.
OpenCL will be the catalyst that brings us widespread GPGPU. Cross-platform, hardware-independent.
Irick_cb
[QUOTE="Irick_cb"]
OpenCL will be the catalyst that brings us widespread GPGPU. Cross-platform, hardware-independent.
Teufelhuhn
They've been up since last year!
If OpenCL flops, I'll be very sad.
Very sad.
Does this mean the Intel Larrabee cards will be able to utilize DirectX, since they don't natively? O_O
PC360Wii
I guess you didn't bother spending 5 minutes reading the link. The WARP rendering will allow people with integrated GPUs to leverage available CPU power to help render modern games. This could allow for a much broader base for PC developers to target.[QUOTE="dc337"]
[QUOTE="XaosII"]
I'm not sure you understand what this is about.
This isn't meant for games. It's meant for intensive applications that could benefit from GPU processing, like video/image editors or 3D modelling applications. Many of these applications don't bother implementing DirectX processing because it adds a layer of complexity and instability to their products. WARP10 could remove that instability layer.
But for gaming, this isn't likely to be used for anything beyond 2D games and very simple 3D ones.
This isn't going to do anything for PC gaming, really.
XaosII
I guess you didn't bother spending 5 minutes thinking about it. This isn't going to mean much for gaming, especially when you consider that Windows 7 will be out late this year, it will take about 2 more years to reach any notable level of adoption, and about that same time for even a handful of games to bother implementing it.
Not to mention that such a technology already exists with DirectX's Reference Rasterizer. I'm just assuming WARP10 is the next version of it and should have far fewer performance issues.
This is likely going to benefit non-gaming applications far more than gaming. The largest contribution to creating a broader gaming base would be for Intel to up their standards in integrated GPUs; hopefully Larrabee can do that. There's not much more MS can do on their end without stirring up a big issue.
You originally said: "This isn't meant for games."
Even though Microsoft clearly stated in the link that games are being targeted. Changing your analysis already?
The vast majority of applications that are GPU-dependent are games. The fact that MS used Crysis in their example tells you who their target demographic is.
WARP will improve every integrated GPU, which provides a better base for developers to target. Of course it will take a few years to be adopted, but that is true for most technologies. As for Larrabee, it is being targeted at the performance market, not the low end.