Don't tell me that you believe this bullshit now too. -_- Everyone on this board is retarded.
I'll buy you one of these too if the rumor proves true and this is announced at E3. I'll buy every single person here one, even if I have to sell everything I own.
...
(don't listen to the idiot who made this thread, since he really isn't as good at figuring this stuff out as he thinks but keeps getting encouraged by lucky guesses
I don't buy this rumor either, but calling everyone on here a retard is out of line. Especially considering that I distinctly recall you telling ron there was no way FinFET would allow clock speeds in the 1.5GHz+ range, and according to the earliest benchmarks, the GTX 1080 comes with a 1.6GHz base clock and a 1.8GHz boost.
What are your thoughts in this case? Also, I will hold you to the free-Xbox-Next-if-it's-9TFLOPS thing, even though I didn't buy the rumor. *bookmarked*
I said that about AMD cards. For Pascal, I said that I believed that 1.6GHz base could happen, though it did indeed exceed my expectations. This rumor, however, is total bullshit. Even a forum dedicated to making up bullshit about how amazing Xbox is knows that it's bullshit.
On another note, I don't know why you posted a GTX 1080 rumor when the specs have already been officially announced...
No, it can't. Not without taking a heavy loss, and there's no reason to do that. $1000 is perhaps a bit much, but the minimum would still be $600 after factoring in the additional RAM and the much faster CPU needed. You're also looking at an Nvidia chip, and it's too soon to say if AMD has caught up to them in terms of performance per watt. Most likely, they have not. Then you have to consider the larger box with better cooling (don't listen to the idiot who made this thread, since he really isn't as good at figuring this stuff out as he thinks but keeps getting encouraged by lucky guesses; there won't be another dramatic increase in GPU efficiency next year without factoring in HBM), and it barely counts as a console anymore.
Fury Nano (GCN 1.2) and E8970 (GCN 1.2) have similar perf/watt to Maxwell, you fool. The difference is AMD couldn't build enough of them for top-to-bottom SKUs.
Okay, now you're just posting random pictures.
Against your "don't listen to the idiot who made this thread" bullshit
The FLOPS are similar when AMD GPUs are not gimped by geometry/tessellation issues (e.g. GameWorks) and when both AMD and NVIDIA use the same multi-threaded submission.
This is just 100% wrong in gaming. You're free to prove me wrong with actual calculations from actual gaming benchmarks though, and not just a bunch of slides.
To increase yields, current consoles have disabled CUs, i.e. the full chip carries extra CUs that are disabled.
Do you seriously think that I don't know this? They're not going to be able to disable CUs if they want to hit 10 TFLOPs in 2017 unless they use a GPU with something like 80 CUs on the full chip.
Intel doesn't have a monopoly on the tick-tock cycle.
Kepler 28 nm = Tick,
Maxwell 28 nm = Tock,
GCN 1.0 28 nm = Tick. GCN 1.1 doesn't have any perf/watt improvements, i.e. it only has functional improvements, e.g. ACE units.
GCN 1.2 28 nm = Tock. AMD didn't apply the full top-to-bottom GCN 1.2 refresh, hence GCN 1.0 SKUs didn't get any perf/watt gains.
--------------------------------
Polaris FinFET = Tick, 2.5X perf/watt
Vega FinFET = Tock, 4X perf/watt
No, they do have a monopoly on it, since the term was coined by them. What you're describing doesn't even fit the model. You basically made up a different version of the model. I guess you failed at researching this one because of a lack of images?
Also, looking at this timeline, wouldn't it make more sense if Navi and Volta are the "tock" in your version of the cycle? Or are you expecting two tocks for AMD? Or you could just use your brain and admit that HBM is a factor in Vega's improvements, but I know that you have too much pride to admit to being wrong.
@ronvalencia: You are living proof of this example. If it wasn't for the fact that you were posting on Gamespot forums, you'd be laughed off as the most blatant dope of a troll on this side of the Earth.
Against your "don't listen to the idiot who made this thread" bullshit
It's a flame war.
And I owned up to that, but informed you that it was just an offhand speculative comment of his and not one of his leaks. You continue to ignore this because you're either a troll or an idiot. It's not my fault that you have issues understanding context. Again, that wasn't even posted in the thread Emily was referring to.
You claim to be an adult, so I suggest that you act like one.
Emily Rogers killed 10K's hardware rumors. Emily specifically countered 10K's hardware rumors as NOT correct. Rumors = more than one rumor.
10K made multiple hardware rumors:
1. The NX will use a custom Polaris-like GPU. Likely will be on a FinFET 14nm fabrication node. The source told me it's on the same architecture with heavy customizations of course . It will contain the feature set of Polaris. It is "marginally better than the PS4" and theoretically could be "2x the power of PS4 GPU". I asked about PS4K being rumored to have a gpu 2x as powerful as the OG PS4 and how the theoretical performance of the NX would be and was told "Theoretically it could be close to the PS4K rumored specs". Of course, we know nothing of Polaris or the PS4K specs, but he gave that metric.
10K claimed Polaris ---> Emily claimed "GPU is wrong".
10K claimed 2.5 TFLOPS, i.e. 10K claimed "marginally better than the PS4" and "2x the power of PS4 GPU" ---> Emily claimed "Power level wrong".
Emily killed 10K's hardware rumors entirely, i.e. both "marginally better" and "2X the power".
Emily nuked both Polaris 11 and 10 from NX.
The only troll and idiot is YOU.
@shrek said:
@ronvalencia: You are living proof of this example. If it wasn't for the fact that you were posting on Gamespot forums, you'd be laughed off as the most blatant dope of a troll on this side of the Earth.
The 1080 blasted past 1600 MHz, overclocking to 2100 MHz (67C temps), from a 1607 MHz base clock and 1733 MHz reference boost.
If AMD doesn't match NVIDIA in this FinFET upgrade hardware cycle, AMD should go bust, and welcome to a PC dGPU monopoly. I have already switched to the 980 Ti camp and plan for a 1080.
Since you supported techhog89, the only dope is you.
Beating Titan X/Fury X in mid-2016 with a medium-size GPU has been done. For Xbox Next, it only needs another perf/watt improvement step, e.g. Vega's 4X perf/watt or Navi's 5X perf/watt.
GTX 980 Ti Strix at 1380 MHz has about 7.7 TFLOPS
R9 Fury Nitro at 1050 MHz has about 7.53 TFLOPS
HIS R9-390X has about 6.26 TFLOPS
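For anyone checking these numbers: they follow from the usual peak-FLOPS formula (2 FMA ops per shader per clock). A minimal sketch, assuming the published shader counts for these cards; the HIS card's ~1110 MHz factory clock is inferred from the quoted 6.26 figure:

```python
# Peak FP32 TFLOPS = 2 ops/clock (FMA) x shader count x clock (MHz) / 1e6
def peak_tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1e6

print(peak_tflops(2816, 1380))  # GTX 980 Ti Strix (2816 CUDA cores): ~7.77
print(peak_tflops(3584, 1050))  # R9 Fury Nitro (3584 SPs): ~7.53
print(peak_tflops(2816, 1110))  # HIS R9-390X (2816 SPs, clock inferred): ~6.25
```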
Hitman DX12 was designed with multi-threaded rendering and async compute.
Hitman DX12 minimizes current GCN's geometry/tessellation bottlenecks.
DX12 enables AMD GPUs to gain the MT rendering and async tasks which already exist in NVIDIA's DX11 GPU drivers.
From https://developer.nvidia.com/dx12-dos-and-donts
"On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible"
If you are going to play DX11 games, NVIDIA has driver-side MT rendering and asynchronous tasks, while AMD's DX11 driver doesn't have these features.
MT rendering and asynchronous tasks are the major DX12 speed-up methods.
With DX12, AMD catches up to NVIDIA on these speed-up methods.
@techhog89 said:
No, they do have a monopoly on it, since the term was coined by them. What you're describing doesn't even fit the model. You basically made up a different version of the model. I guess you failed at researching this one because of a lack of images?
Also, looking at this timeline, wouldn't it make more sense if Navi and Volta are the "tock" in your version of the cycle? Or are you expecting two tocks for AMD? Or you could just use your brain and admit that HBM is a factor in Vega's improvements, but I know that you have too much pride to admit to being wrong.
What you fail to understand is that different companies have their own versions of Intel's tick-tock improvement pattern, but the principle is the same.
On GPUs, both AMD and NVIDIA had two major perf/watt improvement instances on the 28 nm process node.
NVIDIA
1st gen 28nm, Kepler
2nd gen 28nm, Maxwell
AMD
1st gen 28nm, GCN 1.0
2nd gen 28nm, GCN 1.2. AMD didn't apply top-to-bottom updates with GCN 1.2, e.g. no 44 CU Fury design with reduced power consumption, no 20 CU Fury design with reduced power consumption.
@ronvalencia: Ugh. Let me spell it out for you:
Emily was saying that the hardware rumors posted by 10k were false. This part is the hardware rumors:
1. The NX will use a custom Polaris-like GPU. Likely will be on a FinFET 14nm fabrication node. The source told me it's on the same architecture with heavy customizations of course . It will contain the feature set of Polaris. It is "marginally better than the PS4" and theoretically could be "2x the power of PS4 GPU". I asked about PS4K being rumored to have a gpu 2x as powerful as the OG PS4 and how the theoretical performance of the NX would be and was told "Theoretically it could be close to the PS4K rumored specs". Of course, we know nothing of Polaris or the PS4K specs, but he gave that metric.
So, the stuff you underlined is ruled out.
This part is not a rumor, but instead a guess that 10k made before posting the above rumors:
Now, all of this is obviously 80% ruled out too, because it matches up with the other stuff he was talking about and 80% confirms that NX is weaker than PS4, but that's not the point. The point I was trying to make is that not everything posted by rumor starters is a rumor. A lot of people get annoyed by readers assuming that everything they post is some kind of leak. This was a guess based on what he heard, and it was in a different thread than the one she was talking about. Just read her quote:
"Here is what multiple sources close to Nintendo are telling me about 10k's hardware rumors: The gimmick is made up. GPU is wrong. Power level is wrong." "The specs on NX are good, but a lot of the information being shared in this thread is incorrect. I was told that NX has good specs, but the info in this thread on the GPU and power level is just not correct. Sorry to burst everyone's hype."
She was talking specifically about that thread. This SPECULATIVE (AS IN, NOT A RUMOR BUT A GUESS) post from 10k is unrelated because it is neither a rumor nor a post in that thread.
Now, why did I say 80%? Because that's her self-proclaimed track record.
People like this are why I returned to Twitter. I would say my track record on rumors is 80% correct, 20% wrong.
In other words, don't use one rumor to say for certain that another is false. Take everything with a grain of salt.
Yeah, let's wait and see if this holds up with Pascal and Polaris/Vega. Spoiler alert: It won't.
It's hilarious how you just refuse to admit to being wrong! Instead, you just post the same information over and over hoping that it'll magically turn out to be correct. But, okay. Give me a source on AMD and Nvidia mentioning their own tick-tock model, and you win! If you can't find one though, you have to admit to being wrong. And you can't just post a bunch of images to piece together a theory either. You need to post actual quotes or slides from AMD and Nvidia with the exact phrase "tick-tock" on them. If you can't do that (which you obviously can't since you haven't done so already), you have to admit that the model you keep posting for them is something that you completely made up. If you can be the bigger man and admit to being wrong, I'll apologize for every insult I've aimed at you.
I see this topic as being on the same level as the guy who was saying that the X-1 uses alien technology that MS has kept hidden
You're talking about MisterXMedia. He has his own forums, and the source of the rumor in this thread is someone who was banned from those forums. In other words, it's even worse.
This is going to be an amazing year. Nintendo fans disappointed by NX, Playstation fans disappointed when they see that Neo isn't the huge jump they were expecting, and naive Xbox fans disappointed by this fake leak. I love it!
No, more-so reliant on "the cloud" special powas.
lol, I totally forgot about "the cloud"
@techhog89 said:
@freedom01 said:
I see this topic as being on the same level as the guy who was saying that the X-1 uses alien technology that MS has kept hidden
You're talking about MisterXMedia. He has his own forums, and the source of the rumor in this thread is someone who was banned from those forums. In other words, it's even worse.
This is going to be an amazing year. Nintendo fans disappointed by NX, Playstation fans disappointed when they see that Neo isn't the huge jump they were expecting, and naive Xbox fans disappointed by this fake leak. I love it!
The tears of the disappointed will fill the ocean!!!!!
But really, I just hope that in the end everyone is happy, though this supposed system is just wishful thinking
AMD has stated their perf/watt improvement was 70 percent from FinFET.
The 980 Ti's reference base clock is 1000 MHz. Applying FinFET's 70 percent improvement to 1000 MHz yields 1700 MHz.
The reference 1080's boost clock speed is 1733 MHz.
You lose.
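(Spelled out as a sketch, under ron's contested assumption that the 70 percent perf/watt figure translates one-for-one into clock speed:)

```python
base_980ti_mhz = 1000  # 980 Ti reference base clock
finfet_gain = 1.70     # AMD's claimed 70% from FinFET, read here as a pure clock multiplier (contested)
print(base_980ti_mhz * finfet_gain)  # 1700 MHz, vs the 1080's 1733 MHz reference boost
```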
Uh, why are you comparing the base clock of the 980 Ti to the boost clock of the 1080? That makes no sense. You should be comparing it to the boost clock of its predecessor, the 980, in which case it's a 40% increase.
I can't tell if you think that I'm an idiot or if you simply made a mistake. I also don't know what you were trying to prove with that.
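(A quick check of that ~40% figure, assuming the 980's 1216 MHz reference boost:)

```python
gtx980_boost_mhz = 1216   # GTX 980 reference boost clock
gtx1080_boost_mhz = 1733  # GTX 1080 reference boost clock
print(gtx1080_boost_mhz / gtx980_boost_mhz - 1)  # ~0.43, i.e. roughly a 40% increase
```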
No fanboy war here. Just me trying to knock a kid down a peg or two.
Do you actually think that's the case, because from where I'm standing it isn't going to work for you...
"Emily or 10k never said 2.5tflops" = proven wrong
"We won't see core clock speeds of 1.6GHz from FinFET" = proven wrong
---
I believe ron is wrong from a historical and pricing perspective, but you just seem intent on flaming. That's cool.
Both of those are only half right, since Ronnie-boy likes to leave out or manipulate context to make it look like he's always right. Again, that 10k post was just a random speculative post, not a rumor. The only way that I would have seen it is if I were there at the time or I was looking through literally every single one of his posts, and it's irrelevant since it wasn't one of his rumors. (I do admit, however, that I might have been there, but the post didn't stick in my mind because, again, it was speculation and not him posting a rumored spec list.) Everything that he posted as a rumor was clearly labeled as such. And as for the clock speed, I was referring specifically to AMD cards with that. I said early on that it could happen with Nvidia. Meanwhile, this guy fell for an April Fool's joke a month later and did everything in his power to deny that fact...
Wrong, I made my statement with respect to transistor count.
980 = 5.2 billion
980 Ti = 8 billion
1080 = 7.2 billion
In terms of transistor count, the 1080 is closer to the 980 Ti than to the 980.
Perf/watt estimation vs clock speed can only be made like-for-like, i.e. the 1080's transistor count is closer to the 980 Ti's.
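(The "closer to" claim is easy to verify from those counts:)

```python
transistors = {"980": 5.2, "980 Ti": 8.0, "1080": 7.2}  # billions
gap_980 = abs(transistors["1080"] - transistors["980"])       # ~2.0
gap_980ti = abs(transistors["1080"] - transistors["980 Ti"])  # ~0.8 -> the 1080 sits nearer the 980 Ti
print(gap_980, gap_980ti)
```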
There's a high-end Polaris 10 SKU targeting Fury X's market segment.
1050 MHz at 40 CUs yields only 5.376 TFLOPS.
Applying AMD's 70 percent improvement from FinFET to 1050 MHz yields 1785 MHz.
1785 MHz at 40 CUs yields 9.139 TFLOPS, which is enough to beat Fury X in raw TFLOPS compute, i.e. non-gaming workloads need raw TFLOPS and are generally not bottlenecked by geometry/tessellation.
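(A worked version of that arithmetic, assuming GCN's 64 stream processors per CU and, again, treating the 70 percent figure as a pure clock multiplier, which is the disputed step:)

```python
def peak_tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1e6  # 2 FMA ops per shader per clock

sp = 40 * 64                        # 40 CUs x 64 SPs/CU = 2560 stream processors
print(peak_tflops(sp, 1050))        # 5.376 TFLOPS at 1050 MHz
print(peak_tflops(sp, 1050 * 1.7))  # ~9.14 TFLOPS at 1785 MHz
```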
The point you missed is that AMD attributed their Polaris perf/watt improvements mostly to FinFET, i.e. 70 percent FinFET, 30 percent architecture, i.e. the design of the GPU.
There's no way in hell Polaris 10 at 5.376 TFLOPS would come close to the GTX 1080, unless you claim Polaris has a large IPC jump over NVIDIA's Pascal.
Both Pascal and GCN have the same register file storage vs stream processor ratio at the SM/CU level!!
Btw, "Vega" also has top-to-bottom SKUs.
I estimated Polaris and general FinFET GPU clock speeds based on AMD's 70 percent from FinFET. The 1080's clock speed is what I expected from FinFET.
Games like Ashes of the Singularity DX12 enable Fury X to beat the R9-390X, which is different from Hitman DX12's results.
A gaming PC can afford Polaris 10 at high clock speeds, i.e. better cooling, better power regulators, better power supply, etc. Factory overclocks usually come with better power regulators/caps.
NEO remains console-constrained like the Xbox 360, while PC can afford 180 watts like the 8800 GTX.
@ronvalencia: I didn't say that I expected it to stay at 1050MHz either. In fact, YOU were the one saying that it would in one of our arguments earlier and that only Polaris 11 would see high clocks. Rekt?
After seeing Pascal, I expect 1400-1500MHz, and I don't expect them to beat the 1080.
There's already a factory-overclocked R7-360 at 1.2 GHz on the old 28 nm process.
From http://www.overclockers.com/amd-r7-260x-graphics-card-review/
Beyond 1.25 GHz, the R7-260X lacks end-user voltage control.
@techhog89 said:
After seeing Pascal, I expect 1400-1500MHz, and I don't expect them to beat the 1080.
1. With 16 nm FinFET, NVIDIA can scale GM200's chip geometry into nearly 300 mm^2.
2. FinFET's perf/watt improvements. From http://wccftech.com/nvidia-pascal-gpu-gtc-2015/
The article, from a year ago, commented on 2X perf/watt for Pascal. AMD has divided their perf/watt improvements between 70 percent FinFET and 30 percent architecture.
I have already stated both AMD and NVIDIA will NOT ignore high-spending PC gamers.
You're comparing different classes of chips again. You're also reaching, since the main focus of our previous arguments was what could be in a console. Consoles use GPUs running at speeds well below reference clocks, so it doesn't make sense to compare that to highly overclocked chips. Hell, it's a stretch to talk about overclocks at all. Now, Polaris 11 may be able to reach 1600MHz at its reference clock. I think that might be possible. However, your logic for clocks increasing by 70% is baseless. You just look for numbers to support your guesses and magically apply those numbers how you want, even if there's evidence disputing your conclusions. You also change your story when necessary in order to "win" this "flame war," which is beyond childish. I guess that nothing will change until you're proven wrong though. sigh
EDIT: I just came to a shocking conclusion which turns this argument on its head. P11 and P10 are actually Oland and Bonaire replacements respectively, given the rumored power consumption. The P10 480X is expected to use 130W, compared to the 260X's 115W. Applying the 2.5x efficiency increase, we should expect something like 5.6 TFLOPs out of the 480X, which points to a clock speed of 1.1GHz. Factory OCs should be higher, likely hitting 1.3GHz.
That's just my hypothesis though. Vega 11 replaces Pitcairn and will have a 160-180W TDP, Vega 10 replaces Tonga and will have a 200-225W TDP. Navi will have 3+ chips and be a new series.
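(A sketch of the arithmetic behind this hypothesis, assuming the R7-260X's reference specs of 896 SPs at 1100 MHz and 115W as the baseline, AMD's advertised 2.5x perf/watt, and the rumored 40 CU Polaris 10 configuration:)

```python
def peak_tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1e6

baseline = peak_tflops(896, 1100)                 # R7-260X: ~1.97 TFLOPS at 115W
efficiency = baseline / 115 * 2.5                 # TFLOPS per watt after the claimed 2.5x gain
projected = efficiency * 130                      # at the rumored 130W: ~5.6 TFLOPS
implied_clock_mhz = projected * 1e6 / (2 * 2560)  # assuming 40 CUs: ~1090 MHz
print(projected, implied_clock_mhz)
```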
@chikenfriedrice: Nvidia already has a card that's doing around 10 tflops, it's $600, and it comes out at the end of this month. Now, if the new Xbox One comes out in 2017 and it's made to last 4 or 5 years, this could be true. The system would cost around $400 in 2017, not $1000. It would be an AMD card, and it would be built for the future. The fact that you can get a 10tflop card for $600 today shows that will be the standard from 2017 until around 2021.
EDIT: I just came to a shocking conclusion which turns this argument on its head. P11 and P10 are actually Oland and Bonaire replacements respectively, given the rumored power consumption. The P10 480X is expected to use 130W, compared to the 260X's 115W. Applying the 2.5x efficiency increase, we should expect something like 5.6 TFLOPs out of the 480X, which points to a clock speed of 1.1GHz. Factory OCs should be higher, likely hitting 1.3GHz.
That's just my hypothesis though. Vega 11 replaces Pitcairn and will have a 160-180W TDP, Vega 10 replaces Tonga and will have a 200-225W TDP. Navi will have 3+ chips and be a new series.
No.
Roy Taylor seems to have made a typo with "3".
2816 SP = 44 CU
2304 SP = 36 CU
Middle SKU's 6GB indicates a 384-bit GDDR5 card. R9-280X's PCB has 384-bit.
Lower SKU's 4GB indicates a 256-bit GDDR5 card. R9-380X's PCB has 256-bit.
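(Those SP-to-CU conversions follow from GCN's 64 stream processors per CU:)

```python
for sp in (2816, 2304):
    print(sp, "SP =", sp // 64, "CU")  # 44 CU and 36 CU
```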
If Roy Taylor's "3" wasn't a typo, then the entire scenario changes.
According to Roy Taylor, there's a Polaris 10 SKU with HBMv2.
Again...
AMD Kaveri APU's chip size is 245 mm^2 (28 nm process node) and the AMD A6-7400K SKU's price tag is $50.99 retail, which indicates GloFo's chip price is very low.
Like the DirectX 12 and Vulkan ecosystem changes, AMD is attempting to change the software ecosystem for multiple GPUs, i.e. multi-GPU across top-to-bottom SKUs.
Xbox Next could follow AMD's push for multi-GPU on a chip package, i.e. multi-chip HBM + multi-GPU on the chip package.
Game consoles are important to change the ecosystem on PCs.
You really love comparing apples and oranges.
And you can post that chart as many times as you like, but you still read it wrong. There will not be a Fury-branded Polaris card. Polaris 10 will be the 480X or maybe the 490X. The 1080 is the successor to the 980, not the 980 Ti, even though it's faster. All of these are facts. Deal with it.
No, it's you who is reading it wrong. There's a Polaris 10 SKU replacing the Fury SKUs. You are now ignoring Roy Taylor's post.
@TheWalkingGhost: They serve the exact same purpose, but one is interactive. That's the only difference. Just because of that, games are inferior? Ridiculous. Next you'll say that they don't count as art or a form of expression.
Well, whatever. We have different opinions, clearly. Yours is just stuck in the 80s, but we can agree to disagree.