When I put a PCI-E 2.0 card (like an ATI HD 3870) in my mobo with a PCI-E 1.0 slot, will I still get the performance you see in benchmarks, or will it be only half of that (because the data transfer rate is cut in half)? Or will there only be a minor difference?
The difference is minor, if there is one at all. Current "PCI-E 2.0 compatible" cards are not fast enough to need anything more than a standard PCI-E x16 slot. For now, PCI-E 2.0 is just a way to make a system more future-proof.
PCI-E 1.* got its first GPUs with the GeForce 6 and Radeon X800 parts. Back then, it didn't matter which interface you used unless you wanted to run SLI or CrossFire, and it didn't matter whether the GPUs got x8 or x16 bandwidth either. Nowadays, with the 8800s in particular, it'd be pretty unwise to run them at anything other than x16 if you can help it, because they lose a good chunk of performance even dropping down to x8 - same with the HD 2900 and newer. Given this trend, it will probably take until the GeForce 10 series or the Radeon HD 5**** series before PCI-E 2.0 becomes necessary. That's not to say there isn't a difference, but with current GPU technology you're only seeing about a 20% improvement in the best-case scenario with PCI-E 2.0-ready GPUs and a supporting chipset, and very few games actually see that kind of boost - the rest gain much less than 10%, if anything at all.
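To put some numbers behind this, here's a quick sketch (not from the post itself) of the theoretical one-direction bandwidth for the slot configurations being discussed. It assumes the published PCI-E signaling rates (2.5 GT/s per lane for 1.x, 5 GT/s for 2.0) and the 8b/10b encoding both generations use:

```python
def pcie_bandwidth_gb_s(transfer_rate_gt_s, lanes):
    """Usable one-direction bandwidth in GB/s for 8b/10b-encoded PCI-E.

    8b/10b encoding means only 8 of every 10 bits carry data,
    and dividing by 8 converts bits to bytes.
    """
    return transfer_rate_gt_s * (8 / 10) / 8 * lanes

print(pcie_bandwidth_gb_s(2.5, 16))  # PCI-E 1.x x16 -> 4.0 GB/s
print(pcie_bandwidth_gb_s(2.5, 8))   # PCI-E 1.x x8  -> 2.0 GB/s
print(pcie_bandwidth_gb_s(5.0, 16))  # PCI-E 2.0 x16 -> 8.0 GB/s
```

So a 2.0 card in a 1.0 x16 slot still gets 4 GB/s, which is why today's cards lose little or nothing; the x8 drop to 2 GB/s is where the bigger performance hit shows up.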