http://www.dailytech.com/article.aspx?newsid=7052
Their new card is already flopping. :lol: It's the Radeon 8500 all over again.
*if this is old, whatever*
Using current drivers, vs an overclocked board.
And they are talking about the XTX board.
NobuoMusicMaker
Still not a good sign.
regardless, NVidia invested in the wrong console (PS3)Zaistev_basicNvidia is still the market leader.
[QUOTE="Zaistev_basic"]regardless, NVidia invested in the wrong console (PS3)haolsThey didn't ivnest :|
They were paid, sure. But they also lose potential revenue, because they get something for every PS3 sold. Fewer PS3s sold means less profit for NVidia from the PS3. ATI got lucky that they have two winning consoles.
[QUOTE="Zaistev_basic"]regardless, NVidia invested in the wrong console (PS3)musclesergeNvidia is still the market leader.
No one is denying that, but they chose the wrong side, so they stand to miss out on potential profit from PS3 sales. Again, fewer PS3s sold means less profit (not a loss) for NVidia. It's not about losing profit, it's about not gaining the potential profit NVidia expected from the PS3 (so much for that 100 million PS3 sales prediction).
[QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and its dragged them down.dgsag
The XTX does have GDDR4. :D
Face it, ATI got owned.
When the 2900XT outscores the 2900XTX, you know there's something fundamentally broken somewhere.[QUOTE="dgsag"][QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and it's dragged them down.Makari
The XTX does have GDDR4. :D
Face it, ATI got owned.
When the 2900XT outscores the 2900XTX, you know there's something fundamentally broken somewhere.Drivers most likely.
[QUOTE="Marka1700"]ATI usually makes the better card, poeple just dont want to deal wither thier terrible drivers.joeblak
Looks like the roles have switched.......
I was going to say.. people still give ATI a hard time for this, while Nvidia's facing a (mildly stupid, to be fair) class-action lawsuit for having had completely broken Vista drivers for the last 6 months. :P[QUOTE="Makari"][QUOTE="dgsag"][QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and it's dragged them down.NobuoMusicMaker
The XTX does have GDDR4. :D
Face it, ATI got owned.
When the 2900XT outscores the 2900XTX, you know there's something fundamentally broken somewhere.Drivers most likely.
We'll see, but for a chipset that's hitting the market way later than Nvidia's, ATI should be worried...
Don't you guys remember the Geforce 3 Ti series...And then ATI countered with the Radeon 8500...And then BLAM, Geforce 4 Ti series.
These numbers are highly questionable. The R600's technical specs point to performance at least a lot better than the X1950 XTX, so there has got to be something weird going on. I guarantee you they will be a lot better when real reviews come out in two to three weeks.TOAO_Cyrus1
Someone posted this at Dailytech:
"The R600 has 64x5 Vec5D units which maces each unit handle a maximum of 320 stream ops per second, but makes it's worst case scenario a lot worse with 64 stream ops per second (for the lack of a better term). You can think of that in the same manner as you see that SIMD units in our current processors can deliver huge amounts of processing power if, and only if, used correctly and optimized accordingly, otherwise we see no gains.
In my opinion AMD/ATI made a design compromise, they used this approach as it could prove to be way better in the dx10 world, and in a much more interesting way, in the GPGPU world.
Think about it, if you open up your architecture with CTM and give the people the power of 64x5 vec5d units you end up with an amazing amount of processing power. That's where I think they are focusing.
Nvidia has a much more favorable place in the gaming world. If you have 128 scalar units, in a worst case scenario you'd still issue 128 stream ops (all else constant, and given you have the bandwidth). But your best case scenario isn't that good.
I believe they delayed it because they were expecting dx10 games (of course, this is just speculation on my part). And I hope, for their sake, that it performs a lot better in that world.
Still, if I am somewhat right, drivers could provide better optimization for shader programs that aren't made with a simd architecture in mind, but then again, I could be entirely wrong.
http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2039097&frmKeyword=&STARTPAGE=8&FTVAR_FORUMVIEWTMP=Linear
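To make the best/worst-case arithmetic in that quote concrete, here is a rough per-clock sketch. The unit counts (64x5 vs 128 scalar) are the ones from the quote above; this only illustrates the counting, not how either chip actually schedules work.

# Per-clock issue rates for the two designs described in the quoted post.
# Illustration only; unit counts come from the post above.

def vliw_ops_per_clock(units=64, width=5, filled_slots=5):
    # Ops issued per clock when the compiler fills 'filled_slots'
    # of the 'width' slots in each unit.
    return units * min(filled_slots, width)

def scalar_ops_per_clock(units=128):
    # A pure scalar array issues one op per unit per clock.
    return units

print("R600-style best case :", vliw_ops_per_clock(filled_slots=5))  # 320
print("R600-style worst case:", vliw_ops_per_clock(filled_slots=1))  #  64
print("G80-style scalar     :", scalar_ops_per_clock())              # 128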
[QUOTE="TOAO_Cyrus1"]These numbers are highly questionable. The r600's technical specs point to a performance at least alot better then the x1950xtx so there has got to be something wierd going on. I guraentee you they will be alot better when real reviews come out in two-three weeks.noIinteam
Someone posted this at Dailytech:
"The R600 has 64x5 Vec5D units which maces each unit handle a maximum of 320 stream ops per second, but makes it's worst case scenario a lot worse with 64 stream ops per second (for the lack of a better term). You can think of that in the same manner as you see that SIMD units in our current processors can deliver huge amounts of processing power if, and only if, used correctly and optimized accordingly, otherwise we see no gains.
In my opinion AMD/ATI made a design compromise, they used this approach as it could prove to be way better in the dx10 world, and in a much more interesting way, in the GPGPU world.
Think about it, if you open up your architecture with CTM and give the people the power of 64x5 vec5d units you end up with an amazing amount of processing power. That's where I think they are focusing.
Nvidia has a much more favorable place in the gaming world. If you have 128 scalar units, in a worst case scenario you'd still issue 128 stream ops (all else constant, and given you have the bandwidth). But your best case scenario isn't that good.
I believe they delayed it because they were expecting dx10 games (of course, this is just speculation on my part). And I hope, for their sake, that it performs a lot better in that world.
Still, if I am somewhat right, drivers could provide better optimization for shader programs that aren't made with a simd architecture in mind, but then again, I could be entirely wrong.
http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2039097&frmKeyword=&STARTPAGE=8&FTVAR_FORUMVIEWTMP=Linear
Wow...very interesting...so in a DX9 environment it uses only 64 shader ops per clock, while in DX10 it can use up to 320 shader ops per clock...
Nvidia is still the market leader.[QUOTE="muscleserge"][QUOTE="Zaistev_basic"]regardless, NVidia invested in the wrong console (PS3)dilmos
So it is ok to use that logic if you are a PS3 fanboy, but when Wii lovers mention sales, it doesn't count right???
Haha, "Wii lovers." Haha. But you make a good point. It's sad, really.
[QUOTE="noIinteam"][QUOTE="TOAO_Cyrus1"]These numbers are highly questionable. The r600's technical specs point to a performance at least alot better then the x1950xtx so there has got to be something wierd going on. I guraentee you they will be alot better when real reviews come out in two-three weeks.jwcyclone15
Someone posted this at Dailytech:
"The R600 has 64x5 Vec5D units which maces each unit handle a maximum of 320 stream ops per second, but makes it's worst case scenario a lot worse with 64 stream ops per second (for the lack of a better term). You can think of that in the same manner as you see that SIMD units in our current processors can deliver huge amounts of processing power if, and only if, used correctly and optimized accordingly, otherwise we see no gains.
In my opinion AMD/ATI made a design compromise, they used this approach as it could prove to be way better in the dx10 world, and in a much more interesting way, in the GPGPU world.
Think about it, if you open up your architecture with CTM and give the people the power of 64x5 vec5d units you end up with an amazing amount of processing power. That's where I think they are focusing.
Nvidia has a much more favorable place in the gaming world. If you have 128 scalar units, in a worst case scenario you'd still issue 128 stream ops (all else constant, and given you have the bandwidth). But your best case scenario isn't that good.
I believe they delayed it because they were expecting dx10 games (of course, this is just speculation on my part). And I hope, for their sake, that it performs a lot better in that world.
Still, if I am somewhat right, drivers could provide better optimization for shader programs that aren't made with a simd architecture in mind, but then again, I could be entirely wrong.
http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2039097&frmKeyword=&STARTPAGE=8&FTVAR_FORUMVIEWTMP=Linear
Wow...very interesting...so in a DX9 environment it uses only 64 shader ops per clock, while in DX10 it can use up to 320 shader ops per clock...
Uh no. Either way it's 64 shader ops. The ATi card can do 64 vector operations per clock. If an instruction is scalar it uses up a whole vector ALU, which is inefficient. Nvidia has 128 scalar ALUs which are always 100% utilized. If there is a vector op, the compiler has to break it down into three scalar ops, which uses up 3 SPs. DX10 shaders will be more complex, giving ATi's extra power the advantage over the efficiency of Nvidia.
Still, the performance difference is way too big. They should be about even in DX9. The card they tested was the 12-inch OEM-only card, so it could really be just the OEM XT. That would fit the rumors that ATI is going to launch with just the HD 2900 XT (better than an 8800 GTS for the same price) and release the XTX version later with higher clock speeds and 65nm.
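To put the "about even in DX9" point in rough numbers: the crossover depends entirely on how many of the five slots the compiler fills on average. A minimal sketch, using the same illustrative unit counts as above (clock speeds deliberately ignored):

# Break-even point between a 64x5 VLIW array and 128 scalar units,
# counted per clock only (clock speeds ignored here).

VLIW_UNITS, VLIW_WIDTH = 64, 5
SCALAR_UNITS = 128

for filled in range(1, VLIW_WIDTH + 1):
    vliw_ops = VLIW_UNITS * filled
    print(f"avg {filled}/5 slots filled: {vliw_ops:3d} ops/clock "
          f"({vliw_ops / SCALAR_UNITS:.2f}x the 128-scalar array)")

# Below an average of 2 filled slots the VLIW array trails 128 scalar units;
# at exactly 2 it ties, and with good packing (4-5 slots) it pulls well ahead.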
[QUOTE="NobuoMusicMaker"][QUOTE="Makari"][QUOTE="dgsag"][QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and its dragged them down.TekkenMaster606
The XTX does have GDDR4. :D
Face it, ATI got owned.
When the 2900XT outscores the 2900XTX, you know there's something fundamentally broken somewhere.Drivers most likely.
We'll see, but for a chipset that's hitting the market way later than Nvidia's, ATI should be worried...
Don't you guys remember the Geforce 3 Ti series...And then ATI countered with the Radeon 8500...And then BLAM, Geforce 4 Ti series.
Yeah, but I also remember the 9700/9800s smoking the GF4s, the super-flop 5800, and the 5900s.
Anyway
Their article of 24/04/07 for the AMD ATI Radeon HD 2900 XT:
Gaming: Maximum Quality, 1280x1024
Call of Duty 2 73.5 FPS
Company of Heroes 92.1 FPS
F.E.A.R. 84.0 FPS
Half Life 2: Episode 1 112.0 FPS
Oblivion 47.9 FPS
3DMark06 11447
Article on 26/04/07 (XTX first, XT second):
Frames per second, 1280x1024
Company of Heroes 97.1 na
F.E.A.R. 84 79
Half Life 2: Episode 1 117.9 118.9
Elder Scrolls IV: Oblivion 100.3 101.4
So not only is the XT a faster card than the XTX, but the XT's fps has dropped since they reviewed it two days ago, which makes no sense given the resolution is the same and quality is maxed.....
Dailytech looks like a load of bollocks tbh...... their benchmarks make no sense and the editor is a bit of a laughing stock if he can get away with stuff like that....
Personally I couldn't give a toss which of AMD or Nvidia is faster; as long as I buy the faster of the two, that's all that matters. I was hoping to give AMD a try this time round, but if these scores are correct (unlikely) then I'll be getting a GTX...
First of all, I really don't care who wears the crown when it comes to video cards. Being a fanboy of one specific company is the dumbest thing ever. Everyone knows that it's back and forth.
I'll take this with a grain of salt until some other benchmarks come out. But for now, the GeForce that's on the market right now is looking better than an ATI card we have yet to see hit the shelves.
[QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and its dragged them down.dgsag
The XTX does have GDDR4. :D
Face it, ATI got owned.
the card they used for ATI was not even finished yet
http://dailytech.com/DailyTech+Digest+Making+Sense+of+R600/article7079.htm
Dailytech says you will never see this card. They overclocked the XT to 845 MHz, which almost equaled a GTX. I smell an XTX with at least a 900 MHz core clock.
Remember, G80's ALUs are clocked at 1400 MHz, so ATi stands to gain the most from a clock increase (rough numbers below). They already have more than enough bandwidth for DX9 games.
AMD is hurting now; they need support. We are screwed if we don't have competition in high-end graphics and CPUs.
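For what it's worth, the clock-scaling point can be roughed out like this. It is only a sketch: the unit counts are the ones discussed earlier in the thread, 1400 MHz is the G80 ALU figure from the post above, 742 MHz is an assumed stock XT core clock, and 845/900 MHz are the overclock and rumored-XTX numbers floated here.

# Back-of-the-envelope peak shader ops/sec scaling with clock speed.
# Unit counts as above; clock figures are the ones mentioned in this thread
# plus an assumed 742 MHz stock XT clock. Rough illustration only.

def peak_ops_per_sec(units, ops_per_unit_per_clock, clock_mhz):
    return units * ops_per_unit_per_clock * clock_mhz * 1e6

g80 = peak_ops_per_sec(128, 1, 1400)
for mhz in (742, 845, 900):
    best  = peak_ops_per_sec(64, 5, mhz)   # all five slots filled
    worst = peak_ops_per_sec(64, 1, mhz)   # only one slot filled
    print(f"R600 @ {mhz} MHz: {best / g80:.2f}x of G80 peak (best case), "
          f"{worst / g80:.2f}x (worst case)")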
Topic = Duh. Also, consoles > PC, since a PC video card costs more than a console but doesn't perform as well, lolz.wok7
Got a link to that claim? Show me where either the PS3 or the 360 outperforms the current DX10 lineups.
Those benchmarks are trash; I'll wait for a reliable site.Hammerofjustice
Dailytech is reliable. They are affiliated with Anandtech, one of the most reliable and trustworthy tech sites on the net.
Those tests are BS. I know for a fact ATI cards do better in HL2. I will wait for better tests and drivers to be released. PS3_3DO
It's not about the brand; it's about the card.
[QUOTE="Makari"][QUOTE="dgsag"][QUOTE="Zenkuso"]They should have put the GDDR4 in it, would have snowed under 8800's but they went cheap and its dragged them down.NobuoMusicMaker
The XTX does have GDDR4. :D
Face it, ATI got owned.
When the 2900XT outscores the 2900XTX, you know there's something fundamentally broken somewhere.Drivers most likely.
Could be. The drivers might be in beta just like NVIDIA's. However, the drivers used to test this card were the retail drivers that will be shipping with the card.
when is the R-600 coming out -_- ???venture00
May.
The NDA lifts May 2nd, I believe.