gpuguru's forum posts

#1  Edited By gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:

I didn't see the words "fastest GPU". I saw the words "Most Powerful Computer Chip".

You're clearly an alt account.

Lol, what? You think his account got banned? It doesn't look like it. If you look at his post from 2013, which someone linked in this thread, he clearly mentions that too. I also find it funny that you called that guy an idiot even though he pointed out that the HD 5970 was more powerful than the HD 7870, with graphs showing the HD 5970 being faster than the HD 7950. That makes you look like a bigger idiot, since the HD 5970, as he stated, "may lose in games if it doesn't have proper game support or driver support for certain games". LOL.

Regardless, if you look at the description of the video:

"This is the award for the most powerful consumer based computer chip not Professional Level Computer Chips (ie. AMD's Fire GL line or nVidia's Quadro GPU lines)".

They aren't saying it's the best GPU, but the most powerful consumer computer chip.

#2 gpuguru
Member since 2016 • 30 Posts

The company has PC roots; I think every game they've made so far has come to the PC.

#3 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:
@04dcarraher said:
@gpuguru said:

How was he wrong? He wasn't even talking about dual GPUs; he was talking about single GPUs.

Heck, I could go 4x CrossFire and say, "See, it has more teraflops," but it would still lose in some games because, you know, multi-GPU doesn't work in all scenarios.

Claiming that pure GFLOPS ratings show the whole picture of performance... when in fact raw numbers mean squat unless you use the same standards, i.e. you can't compare FLOP performance from one architecture to another and always expect the one with the higher rating to perform better.

The example used:

A 7870 with 2,560 GFLOPS vs a 5970 with 2,867 GFLOPS.

Of course not. But that wasn't what he was arguing. As post #4 mentioned, different supercomputers have different architectures, but one can still have more teraflops than another.

Also, one supercomputer may perform better than another in certain applications even though it may not have the most raw teraflops.

Wrong, he was arguing that the Fury X's TFLOPS rating makes it the fastest GPU, which it is not.

I didn't see the words "fastest GPU". I saw the words "Most Powerful Computer Chip".
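
For context on the GFLOPS figures being argued about in this thread: they are theoretical peaks, normally computed as 2 FLOPs per shader per clock (one FMA). Below is a minimal Python sketch of that arithmetic; the shader counts and clocks are approximate reference specs used purely for illustration, not numbers taken from anyone's post. It shows why an older architecture can post a higher theoretical peak yet still lose in games.

    # Minimal sketch: theoretical peak GFLOPS = 2 FLOPs (FMA) x shader count x clock in GHz.
    # Shader counts / clocks below are approximate reference specs, used only to illustrate
    # why a higher peak does not mean more real-world performance across architectures.
    def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
        return flops_per_clock * shaders * clock_ghz

    cards = {
        "HD 7870 (GCN, ~1280 shaders @ ~1.0 GHz)":    peak_gflops(1280, 1.0),
        "HD 5870 (VLIW5, ~1600 shaders @ ~0.85 GHz)": peak_gflops(1600, 0.85),
    }

    for name, gflops in cards.items():
        print(f"{name}: ~{gflops:.0f} GFLOPS theoretical peak")
    # The VLIW5 card shows the higher peak (~2720 vs ~2560 GFLOPS), yet the GCN card
    # is faster in games -- utilization, drivers and architecture matter more than the peak.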

#4  Edited By gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:
@04dcarraher said:
@gpuguru said:

It will only go to system RAM when it exceeds the 4GB. I don't see why this would be any worse than on other cards with 4GB of GDDR5; they both have a 4GB frame buffer.

When its buffer is full or oversaturated, the Fury sees massive frametiming issues that other 4GB cards don't see. Even an AMD GPU like the 290X with 4GB does not see the issue in the same games.

Source? Also, with the Fury and Fury X, AMD is doing special memory optimization via drivers, so it's also dependent on driver updates. That's why you saw some stuttering in Shadow of Mordor with the Fury X at 4K, everything maxed out: it exceeds the 4GB frame buffer. But with the newer drivers that problem has been contained, and you don't see the stuttering.

Lol, AMD can't fix the issue of games requiring more than 4GB; all they can do is soften the blow when a game isn't using the whole buffer.

False, the new drivers have not fixed the issues with the Fury's buffer; they mainly helped improve CrossFire framepacing in DX9 games.

Well, actually they have improved. Shadow of Mordor now runs much better at 4K maxed out:

[Embedded video]

But where is your proof that other 4GB cards don't have the same issues?
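
Since the back-and-forth above is about frametiming rather than average FPS, here is a minimal sketch of why the two can disagree. The frametime values are made up purely for illustration: a run with occasional long frames (the kind of spike attributed to an oversaturated buffer) can keep a similar average FPS while the 99th-percentile frametime clearly shows the stutter.

    # Minimal sketch: average FPS vs 99th-percentile frametime.
    # The frametime lists are invented for illustration, not measured data.
    def avg_fps(frametimes_ms):
        return 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))

    def p99_frametime(frametimes_ms):
        ordered = sorted(frametimes_ms)
        return ordered[int(round(0.99 * (len(ordered) - 1)))]

    smooth = [17.0] * 100                  # steady ~59 FPS, no spikes
    spiky  = [15.0] * 95 + [60.0] * 5      # similar average, but occasional 60 ms frames

    for label, ft in (("smooth", smooth), ("spiky", spiky)):
        print(f"{label}: {avg_fps(ft):.0f} FPS average, "
              f"{p99_frametime(ft):.0f} ms 99th-percentile frametime")
    # Both runs average ~58-59 FPS, but only the spiky one shows the 60 ms hitches
    # that frametime graphs (and players) notice.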

#5 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:

How was he wrong? He wasn't even talking about dual GPUs; he was talking about single GPUs.

Heck, I could go 4x CrossFire and say, "See, it has more teraflops," but it would still lose in some games because, you know, multi-GPU doesn't work in all scenarios.

Claiming that pure GFLOPS ratings show the whole picture of performance... when in fact raw numbers mean squat unless you use the same standards, i.e. you can't compare FLOP performance from one architecture to another and always expect the one with the higher rating to perform better.

The example used:

A 7870 with 2,560 GFLOPS vs a 5970 with 2,867 GFLOPS.

Of course not. But that wasn't what he was arguing. As post #4 mentioned, different supercomputers have different architectures, but one can still have more teraflops than another.

Also, one supercomputer may perform better than another in certain applications even though it may not have the most raw teraflops.

#6 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:

It will only go to system RAM when it exceeds the 4GB. I don't see why this would be any worse than on other cards with 4GB of GDDR5; they both have a 4GB frame buffer.

When its buffer is full or oversaturated, the Fury sees massive frametiming issues that other 4GB cards don't see. Even an AMD GPU like the 290X with 4GB does not see the issue in the same games.

Source? Also, with the Fury and Fury X, AMD is doing special memory optimization via drivers, so it's also dependent on driver updates. That's why you saw some stuttering in Shadow of Mordor with the Fury X at 4K, everything maxed out: it exceeds the 4GB frame buffer. But with the newer drivers that problem has been contained, and you don't see the stuttering.

#7 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:
@04dcarraher said:
@gpuguru said:

The xtasy guy is right: an HD 5970 would beat an HD 7870, granted the game fully supports the HD 5970. The HD 5970 is two Cypress dies mounted on a single, dual-slot video card. Anyone who knows about dual-GPU configurations knows they are highly dependent on CrossFire and SLI support. As such, some games will not take full advantage of the HD 5970, like the Arkham City example you pointed out. But as he pointed out, when the HD 5970 is taken full advantage of, it would beat an HD 7870. Just look at the graph he posted: on average it's beating the HD 7950, and the HD 7870 is slower than the HD 7950.

Oh, by the way, your graph doesn't include the HD 5970. His does, and it clearly shows the HD 5970 beating the HD 7950.

Even with proper CrossFire support a 5970 would not get 2x the performance of a 5870, while a 7870 is able to get more than 2x the framerate of a 5870 in virtually all DX11 games. So a 7870 would still outperform a 5970 even with proper support.

Of course the HD 5970 wouldn't get 2x the performance of a 5870; CrossFire and SLI don't scale linearly. You may sometimes get 70%-80%+ scaling depending on the game. It all depends on how well the game supports CrossFire, and when it does, the 5970 comes out ahead, as the graph shows.

Frankly, TC was talking about single GPUs. I don't even know why you dragged multi-GPU scenarios into the question. Anyone who knows about GPUs knows that multi-GPU configurations can be a hassle or not work properly.

It was to prove a point: that he was wrong once again.

How was he wrong? He wasn't even talking about dual GPUs; he was talking about single GPUs.

Heck, I could go 4x CrossFire and say, "See, it has more teraflops," but it would still lose in some games because, you know, multi-GPU doesn't work in all scenarios.
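
To put numbers on the scaling argument in this post, here is a rough back-of-the-envelope in Python using the 70%-80% CrossFire scaling range mentioned above. The single-GPU baseline FPS is a placeholder chosen for illustration, not a benchmark result.

    # Back-of-the-envelope for dual-GPU scaling. The baseline FPS is a placeholder.
    def dual_gpu_fps(single_gpu_fps, scaling):
        """Effective FPS of a dual-GPU card: one die's FPS plus a partially scaled second die."""
        return single_gpu_fps * (1.0 + scaling)

    baseline = 40.0                          # hypothetical single-die result in some game
    for scaling in (0.0, 0.5, 0.8):          # no CrossFire profile, mediocre, good scaling
        print(f"{scaling:.0%} scaling: ~{dual_gpu_fps(baseline, scaling):.0f} FPS")
    # With no profile the dual-GPU card behaves like a single die; with ~80% scaling
    # it lands far ahead of it -- which is exactly where the two sides here diverge.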

#8 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:

@gpuguru said:

For 1080p you can make that argument, but people still buy it because it's faster. If TC wants to buy it because it's faster then he can do so. Hell, I know one buddy who bought a GTX 970 and then upgraded to a GTX 980 just because.

And where in the world did you get the idea that 4GB of HBM is more of an issue than 4GB of GDDR5? They both have the same 4GB frame buffer. Given the choice I would go with 4GB of HBM because it's faster than 4GB of GDDR5. Hence R9 Fury Nitro > GTX 980.

That buddy was not too smart.

The problem with the HBM is the sudden, massive bottleneck that happens when the GPU has to go out to system RAM to dump data and grab new data; that creates massive framepacing issues.

It will only go to system RAM when it exceeds the 4GB. I don't see why this would be any worse than on other cards with 4GB of GDDR5; they both have a 4GB frame buffer.
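
The point both sides keep circling is what happens after the 4GB buffer is exceeded: spilled data has to travel over PCIe to system RAM, which is far slower than either local memory type. The bandwidth figures in the sketch below are approximate ballpark values assumed for illustration, not numbers quoted in this thread.

    # Rough sketch of the bandwidth cliff once a 4GB buffer overflows.
    # All figures are approximate ballpark values assumed for illustration.
    bandwidth_gb_s = {
        "4GB HBM (Fury X class, on-package)":  512.0,
        "4GB GDDR5 (GTX 980 class, 256-bit)":  224.0,
        "PCIe 3.0 x16 link to system RAM":      16.0,
    }

    for path, gbs in bandwidth_gb_s.items():
        print(f"{path}: ~{gbs:.0f} GB/s")

    hbm = bandwidth_gb_s["4GB HBM (Fury X class, on-package)"]
    pcie = bandwidth_gb_s["PCIe 3.0 x16 link to system RAM"]
    print(f"The spill path has ~{hbm / pcie:.0f}x less bandwidth than local HBM.")
    # Whether the local 4GB is HBM or GDDR5, anything that has to cross PCIe runs an
    # order of magnitude slower -- that cliff, not the memory type, causes the frametime spikes.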

#9 gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:
@gpuguru said:

The xtasy guy is right: an HD 5970 would beat an HD 7870, granted the game fully supports the HD 5970. The HD 5970 is two Cypress dies mounted on a single, dual-slot video card. Anyone who knows about dual-GPU configurations knows they are highly dependent on CrossFire and SLI support. As such, some games will not take full advantage of the HD 5970, like the Arkham City example you pointed out. But as he pointed out, when the HD 5970 is taken full advantage of, it would beat an HD 7870. Just look at the graph he posted: on average it's beating the HD 7950, and the HD 7870 is slower than the HD 7950.

Oh, by the way, your graph doesn't include the HD 5970. His does, and it clearly shows the HD 5970 beating the HD 7950.

Even with proper CrossFire support a 5970 would not get 2x the performance of a 5870, while a 7870 is able to get more than 2x the framerate of a 5870 in virtually all DX11 games. So a 7870 would still outperform a 5970 even with proper support.

Of course the HD 5970 wouldn't get 2x the performance of a 5870; CrossFire and SLI don't scale linearly. You may sometimes get 70%-80%+ scaling depending on the game. It all depends on how well the game supports CrossFire, and when it does, the 5970 comes out ahead, as the graph shows.

Frankly, TC was talking about single GPUs. I don't even know why you dragged multi-GPU scenarios into the question. Anyone who knows about GPUs knows that multi-GPU configurations can be a hassle or not work properly.

#10  Edited By gpuguru
Member since 2016 • 30 Posts

@04dcarraher said:

The Fury's price-to-performance just sucks, and the reason I mentioned the issue with the 4GB of HBM is that for $500+ you can get more than 4GB, and the 4GB of HBM is more of an issue when the buffer is oversaturated than it is on any other GPU with 4GB of GDDR5.

The only cards worth buying right now are the 390X/390, the 970, or the 980 Ti.

For 1080p you can make that argument, but people still buy it because it's faster. If TC wants to buy it because it's faster then he can do so. Hell, I know one buddy who bought a GTX 970 and then upgraded to a GTX 980 just because it's faster. A lot of people did so, as I mentioned, because of the 3.5GB issue with the GTX 970.

And where in the world did you get the idea that 4GB of HBM is more of an issue than 4GB of GDDR5? They both have the same 4GB frame buffer. Given the choice I would go with 4GB of HBM because it's faster than 4GB of GDDR5. Hence R9 Fury Nitro > GTX 980.
