More Polaris10 and Polaris11 Specifications Revealed


#1  Edited By Coseniath
Member since 2004 • 3183 Posts

From TechpowerUp:

More Polaris10 and Polaris11 Specifications Revealed

Industry sources revealed to TechPowerUp some pretty interesting specifications of AMD's two upcoming GPUs based on the 4th generation Graphics CoreNext "Polaris" architecture. The company is preparing a performance-segment GPU and a mainstream one. It turns out that the performance-segment chip, which the press has been referring to as "Ellesmere," could feature 32 compute units (CUs), and not the 40 previously thought.

Assuming that each CU continues to consist of 64 stream processors (SP), you're looking at an SP count of 2,048. What's more, this chip is said to offer a single-precision floating point performance of 5.5 TFLOP/s, as claimed by AMD. To put this into perspective, the company had claimed 5.2 TFLOP/s for the "Hawaii"/"Grenada" based FirePro W9100, which launched earlier this February, and that SKU featured all 2,816 SP present on the chip. So this chip is definitely faster than most "Hawaii" based SKUs.
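
On GCN, each stream processor retires one fused multiply-add per clock, i.e. 2 single-precision FLOPs, so a TFLOP/s claim pins down the implied core clock. A quick sanity check in Python (the ~1.34 GHz result is my own arithmetic from the leaked numbers, not part of the leak):

```python
# GCN: 1 FMA = 2 single-precision FLOPs per stream processor per clock.
FLOPS_PER_SP_PER_CLOCK = 2

def implied_clock_ghz(tflops, stream_processors):
    """Core clock (GHz) implied by a TFLOP/s figure for a given SP count."""
    return tflops * 1e3 / (stream_processors * FLOPS_PER_SP_PER_CLOCK)

# "Ellesmere": 32 CUs x 64 SPs = 2,048 SPs at a claimed 5.5 TFLOP/s
print(implied_clock_ghz(5.5, 32 * 64))  # ~1.34 GHz
```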

While "Hawaii" based SKUs feature TDP of no less than 250W, the new chip has a TDP rated no higher than 150W. AMD could pull off a "single 8-pin power connector" feat like NVIDIA, with quite some headroom to spare. The chip features a 256-bit wide GDDR5/GDDR5X memory interface, and 8 GB could be its standard memory amount. The first SKUs based on this chip could feature 7 Gbps GDDR5 memory.

AMD will upgrade the feature-set to include HEVC/H.265 hardware encode/decode acceleration, DisplayPort 1.3, and HDMI 2.0a outputs.

The smaller "Polaris" chip scheduled for 2016, which the press has been referring to as "Baffin," could feature 14 compute units, working out to a stream processor count of 896. It will be a mainstream chip, succeeding the "Tobago" silicon, which drives the current R7 360 series SKUs, although it wouldn't surprise us if it outperformed bigger chips, such as the "Trinidad" based R7 370 series. This chip has its peak single-precision floating-point performance rated at 2.5 TFLOP/s. Its TDP is rated at just 50W, and it is expected to feature a 128-bit wide GDDR5 memory interface, holding 4 GB of memory.

edit: Guru3D and VideoCardZ as well:

New Polaris10 and Polaris11 Specifications

Rumor: Possible Polaris 10 and Polaris 11 specifications emerge

PS: VideoCardZ also added this comment:

Our comment: We received the exact same specifications just a few days ago. Although it was noted that those are specifications for mobile GPUs, so you may want to take that into consideration.

----

Many have said before that the new Polaris GPUs will only offer R9 390 and 390X levels of performance, but at a small price.

I thought they were trash-talking. Now, I wouldn't be surprised if the new Polaris GPU lands around R9 390X performance...


#2 SuperClocks
Member since 2009 • 334 Posts

http://wccftech.com/amd-polaris-10-polaris-11-launch-event/


#3 Coseniath
Member since 2004 • 3183 Posts

@SuperClocks: Apart from the fact that WCCFTech is not a reliable source, this was 4 days ago.

Check what they posted today:

:P


#4 Hydrolex
Member since 2007 • 1648 Posts

Weak!!!!!


#5  Edited By SuperClocks
Member since 2009 • 334 Posts

@Coseniath: I wonder which one is correct. Either way, I'm much more interested in Vega than Polaris, as are most gamers. Hopefully, the rumors of Vega coming in October are correct...


#6 deactivated-579f651eab962
Member since 2003 • 5404 Posts

All I can think is too little too late.

AMD exiting the high-end GPU space means Nvidia will reign supreme and prices will get even higher :(


#7 Coseniath
Member since 2004 • 3183 Posts

@SuperClocks: Well, according to TechPowerUp, Guru3D, VideoCardZ and WCCFTech, the latest one is correct.

AMD is scheduled to provide more Polaris info on May 18th.

About Vega... the rumors saying there will be 2 Vega chips (Vega 10 and Vega 11) seem more accurate to me.

'Cause the full Vega chip will be very big, and there will be a huuuge difference between Vega and the small 232mm² Polaris10 chip.

The small Vega should compete with GP104 and the big Vega should compete with GP100.

So... I think AMD regrets that GP104 would sell unrivaled, and we might see the small Vega with GDDR5X in October to compete with GP104.


#8 SuperClocks
Member since 2009 • 334 Posts

@Hydrolex said:

Weak!!!!!

Polaris 10 has already been shown to maintain a constant 60fps in Hitman at 1440p using Ultra settings. That's Fury X territory. Considering the GPU's low TDP and great performance, Polaris 10 will be a great value if the price is right.


#9  Edited By SuperClocks
Member since 2009 • 334 Posts

@Coseniath: I'm not saying that you're wrong, but why do you think that Vega will release with GDDR5X? Samsung began mass production of HBM2 several months ago, and Hynix is supposed to begin mass production anytime now. Considering that AMD has priority access to HBM2 supplies, due to them being co-developers of the technology, I'm hoping that Vega 10 does indeed launch in October using HBM2.


#10 dxmcat
Member since 2007 • 3385 Posts

@klunt_bumskrint said:

All I can think is too little too late.

AMD exiting the high-end GPU space means Nvidia will reign supreme and prices will get even higher :(

I'd say Nvidia is doing some decent reign'ing right now, and the pricing on the 1070 / 1080.......well...........


#11 Coseniath
Member since 2004 • 3183 Posts
@SuperClocks said:

@Coseniath: I'm not saying that you're wrong, but why do you think that Vega will release with GDDR5X? Samsung began mass production of HBM2 several months ago, and Hynix is supposed to begin mass production anytime now. Considering that AMD has priority access to HBM2 supplies, due to them being co-developers of the technology, I'm hoping that Vega 10 does indeed launch in October using HBM2.

No, I agree that Vega 10 will have HBM2.0. I was talking about the rumored smaller Vega, which can benefit from GDDR5X (HBM2.0 might be overkill for it): AMD could minimize production costs (HBM2.0 is much more expensive than GDDR5X) while maximizing profit, and all that without losing performance, since GDDR5X can go as high as 896GB/s (I doubt they will even use half of it...).


#12 horgen  Moderator
Member since 2006 • 127732 Posts
@Coseniath said:
@SuperClocks said:

@Coseniath: I'm not saying that you're wrong, but why do you think that Vega will release with GDDR5X? Samsung began mass production of HBM2 several months ago, and Hynix is supposed to begin mass production anytime now. Considering that AMD has priority access to HBM2 supplies, due to them being co-developers of the technology, I'm hoping that Vega 10 does indeed launch in October using HBM2.

No, I agree that Vega 10 will have HBM2.0. I was talking about the rumored smaller Vega, which can benefit from GDDR5X (HBM2.0 might be overkill for it): AMD could minimize production costs (HBM2.0 is much more expensive than GDDR5X) while maximizing profit, and all that without losing performance, since GDDR5X can go as high as 896GB/s (I doubt they will even use half of it...).

Yeah, and power usage will go up as well.


#13 Coseniath
Member since 2004 • 3183 Posts
@horgen said:
@Coseniath said:
@SuperClocks said:

@Coseniath: I'm not saying that you're wrong, but why do you think that Vega will release with GDDR5X? Samsung began mass production of HBM2 several months ago, and Hynix is supposed to begin mass production anytime now. Considering that AMD has priority access to HBM2 supplies, due to them being co-developers of the technology, I'm hoping that Vega 10 does indeed launch in October using HBM2.

No, I agree that Vega 10 will have HBM2.0. I was talking about the rumored smaller Vega, which can benefit from GDDR5X (HBM2.0 might be overkill for it): AMD could minimize production costs (HBM2.0 is much more expensive than GDDR5X) while maximizing profit, and all that without losing performance, since GDDR5X can go as high as 896GB/s (I doubt they will even use half of it...).

Yeah, and power usage will go up as well.

HBM2.0 already has fairly high power usage. It's not like the first-gen HBM.

The difference would be minimal, like 10 watts...


#14 horgen  Moderator
Member since 2006 • 127732 Posts

@Coseniath: There, GDDR5 peaks at almost 80 watts with a bandwidth below 500GB/s... HBM2 peaks at 60 to 120 watts with 2-4 times the bandwidth.
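
One way to make those figures comparable is energy per bit moved: power divided by bandwidth in bits/s. A rough sketch using the numbers from this post (the exact power/bandwidth pairings are my assumption for the comparison, not measurements):

```python
def pj_per_bit(watts, bandwidth_gb_s):
    """Energy to move one bit, in picojoules: power / (bandwidth in bits/s)."""
    return watts / (bandwidth_gb_s * 8e9) * 1e12

print(pj_per_bit(80, 500))    # GDDR5: ~20 pJ/bit at 80 W and 500 GB/s
print(pj_per_bit(120, 1000))  # HBM2: ~15 pJ/bit at 120 W and 2x the bandwidth
```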


#16 Coseniath
Member since 2004 • 3183 Posts
@horgen said:

@Coseniath: There, GDDR5 peaks at almost 80 watts with a bandwidth below 500GB/s... HBM2 peaks at 60 to 120 watts with 2-4 times the bandwidth.

Yes, but GDDR5X will have double the bandwidth.

According to Micron it can go up to 14Gbps, and with a 512-bit bus it can reach 896GB/s.
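
That 896GB/s figure checks out from the stated numbers, reading the rate as 14 Gbps per pin:

```python
print(512 / 8 * 14)  # 512-bit bus at 14 Gbps per pin = 896.0 GB/s
```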

@SuperClocks: I am very curious about AMD's next gen VRAM too.

In the beginning I thought it was 3D memory (HBM is 2.5D), but then this theory came along...

We will see...


#18 SuperClocks
Member since 2009 • 334 Posts

@Coseniath: After reading an article about CPU and GPU interaction, I now know that ALUs in the VRAM of a dedicated card would still have to receive instructions from the driver through the CPU, and would still have to access system RAM in order to properly instruct the GPU. I'm guessing now that the processors in the next-gen memory that AMD is developing are most likely there to offload non-GPU-related tasks from the CPU in their APUs with a shared memory pool.


#19 horgen  Moderator
Member since 2006 • 127732 Posts

@Coseniath: GDDR5X is still using more power.


#20  Edited By Coseniath
Member since 2004 • 3183 Posts
@horgen said:

@Coseniath: GDDR5X is still using more power.

Yes, but I think AMD will prioritize profits over saving a few watts... :P

PS: The other big advantage of HBM over GDDR5 is that it takes fewer mm² of die, and this is vital when you go over 600mm² and are limited by die size.

@SuperClocks said:

@Coseniath: After reading an article about CPU and GPU interaction, I now know that ALU's in the VRAM of a dedicated card would still have to receive instructions from the driver through the CPU, and would still have to access system RAM in order to properly instruct the GPU. I'm guessing now that the processors in the next gen memory that AMD is developing are most likely to offload non-GPU related tasks from the CPU in their APU's with a shared memory pool.

I think Fury (X) with HBM is already doing this.

The AMD Radeon R9 Fury X VRAM behavior does make sense though if you look toward the dynamic VRAM. It seems the onboard dedicated VRAM was mostly pegged at or near its 4GB of capacity. Then it seems the video card is able to shift its memory load out to system RAM, by as much as almost 4GB at 4K with 4X SSAA.

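
A toy model of that behavior: keep hot allocations in the 4GB HBM pool and spill the least-recently-used ones to system RAM when it fills up. This is only an illustrative sketch of the general oversubscription idea, not AMD's actual driver logic:

```python
from collections import OrderedDict

VRAM_CAPACITY_MB = 4096  # Fury X's 4GB of HBM

class ToyVramManager:
    """Illustrative LRU spillover from VRAM to system RAM (not AMD's algorithm)."""

    def __init__(self):
        self.vram = OrderedDict()  # allocation name -> size in MB, by recency
        self.system_ram = {}       # spillover pool ("dynamic VRAM")
        self.used_mb = 0

    def touch(self, name, size_mb):
        """Access an allocation, paging it into VRAM and evicting LRU entries."""
        if name in self.vram:
            self.vram.move_to_end(name)  # mark as most recently used
            return
        self.system_ram.pop(name, None)  # page back in if it was spilled
        while self.used_mb + size_mb > VRAM_CAPACITY_MB and self.vram:
            victim, victim_mb = self.vram.popitem(last=False)  # evict LRU
            self.system_ram[victim] = victim_mb
            self.used_mb -= victim_mb
        self.vram[name] = size_mb
        self.used_mb += size_mb
```

With a working set bigger than 4GB, recently touched resources stay in VRAM while cold ones pile up in system RAM, which is the behavior the review describes.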


#21 horgen  Moderator
Member since 2006 • 127732 Posts

@Coseniath: Yeah, smaller size is a big plus.

About VRAM: if Fury X offloads to RAM, then increased RAM speed becomes much more important. Suddenly the benefits of 2400MHz++ are bigger than before.


#22 GeryGo  Moderator
Member since 2006 • 12810 Posts

Polaris and Vega should feature 14nm chips by Samsung, compared to Nvidia's 16nm.

That should give them some kind of performance advantage.


#23  Edited By 04dcarraher
Member since 2004 • 23858 Posts
@PredatorRules said:

Polaris and Vega should feature 14nm chips by Samsung, compared to Nvidia's 16nm.

That should give them some kind of performance advantage.

The difference between 14 and 16nm is very minor. The only thing that might benefit from using 16nm is a bit better production yields.

Also, there is a bit of info about the Apple A9 using both Samsung 14nm and TSMC 16nm, and the 16nm version performed a bit better and didn't run as warm as the Samsung chip.


#24 Coseniath
Member since 2004 • 3183 Posts
@horgen said:

@Coseniath: About VRAM: if Fury X offloads to RAM, then increased RAM speed becomes much more important. Suddenly the benefits of 2400MHz++ are bigger than before.

Quad-channel from the extreme platform as well...


#25 horgen  Moderator
Member since 2006 • 127732 Posts
@Coseniath said:
@horgen said:

@Coseniath: About VRAM: if Fury X offloads to RAM, then increased RAM speed becomes much more important. Suddenly the benefits of 2400MHz++ are bigger than before.

Quad-channel from the extreme platform as well...

Yup. That as well... I guess it will be time for 32GB when I upgrade... And a Windows Pro license as well :P


#26  Edited By SuperClocks
Member since 2009 • 334 Posts

@Coseniath: More than likely, AMD cards have been able to do this for several years. I know that nVidia has had the feature for at least 10 years. I discovered that my card could do it with GTA IV. Vanilla GTA IV would not let me increase the graphics settings beyond what would fill the amount of VRAM that my card had, which was 512MB. After seeing posts about others doing it, I edited one of the GTA config files to get rid of the graphics settings VRAM ceiling. Using graphics settings that required 1GB or so, my framerate was essentially unaffected, and I had no problems with pop-in artifacts. But increasing the settings to the point that 1.5GB of VRAM was required caused major texture pop-in problems. Surprisingly, my framerate was still unaffected while using 1GB of system RAM as VRAM.

As for the whole memory controller co-processor thing from before, I always knew that the CPU had to keep the GPU well fed, but I also always assumed that the GPU and CPU had a ton of back-and-forth communication. Apparently though, the CPU runs the graphics API, puts what the GPU needs into VRAM, and then sends the order (draw call?) for the GPU to perform a task. I only read a couple of paragraphs on the CPU and GPU interaction subject, but having the CPU handle the API and issue commands to the GPU is definitely the best option. It would just add a ton of latency for ALUs (integer processors?) to access RAM and CPU resources across the PCIe bus before doing exactly what the CPU does without that extra step. It might as well be a one-way trip from the CPU side.
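
That "one-way trip" can be illustrated with a simple producer/consumer model: the CPU records commands into a queue and moves on, and the GPU drains the queue asynchronously, so no per-command round trip ever crosses the bus. A generic sketch of the idea, not any particular driver's implementation:

```python
import queue
import threading

commands = queue.Queue()  # stand-in for a driver-managed command buffer

def gpu_worker():
    """The 'GPU': drains commands asynchronously, never replying per command."""
    while True:
        cmd = commands.get()
        if cmd is None:  # end-of-stream marker
            break
        print("GPU executing:", cmd)

gpu = threading.Thread(target=gpu_worker)
gpu.start()

# The 'CPU' runs the API: upload resources, then fire-and-forget draw calls.
commands.put("upload vertex buffer to VRAM")
commands.put("draw call: 1,024 triangles")
commands.put(None)
gpu.join()
```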


#27 Coseniath
Member since 2004 • 3183 Posts

@SuperClocks: I don't think that Nvidia has this feature; I think this is something new that AMD developed with Fury in order to try to avoid the limitations of 4GB of VRAM. I think it doesn't just use the RAM after HBM is full: it chooses what to send to RAM and keeps the intensive tasks in the fast VRAM.

Your situation is different. GTA IV might have been a good game, but it was crap-coded.

And so is every game that limits your options depending on what "it" calculates your graphics card would use...

Back then I had a GTX570. The GTX570 was a bit faster than the GTX480. The GTX480 had 1536MB of VRAM, the GTX570 had 1280MB. Just no comment about the crappy GTA IV...

Oh, about the memory controller: from what you are saying, this is something that can be implemented in the API, without needing a special type of VRAM.

Maybe this is the reason the dynamic memory Fury was using was as much as 10x higher in DX12 than in DX11.


#28  Edited By SuperClocks
Member since 2009 • 334 Posts

@horgen said:
@Coseniath said:
@horgen said:

@Coseniath: About VRAM: if Fury X offloads to RAM, then increased RAM speed becomes much more important. Suddenly the benefits of 2400MHz++ are bigger than before.

Quad-channel from the extreme platform as well...

Yup. That as well... I guess it will be time for 32GB when I upgrade... And a Windows Pro license as well :P

Do you do some type of design work on your PC? Or maybe use it as a web server? Or are you just future-proofing? Zen and Vega are both supposedly coming in October, so I may upgrade then. 16GB of DDR4 will be plenty for me, though, lol.

Speaking of 32GB of RAM, Zen, and Vega, though, have you seen those sick APUs that AMD is releasing? 32x Zen cores, 32GB of HBM2, and a Vega 10 GPU all on one chip. I don't think this particular version will be a mainstream product, but I'm sure that they're considering something along those lines for the consumer market. Imagine the price difference of a high-end, mainstream APU compared to buying an Intel chip and an nVidia card separately. They could likely hit a price-to-performance ratio that would undercut the current behemoths by a pretty large margin, and with performance equal to or greater than that of Intel and team green, especially after they incorporate more of the SoC (or APU) upgrades that they're working on.


#29 Coseniath
Member since 2004 • 3183 Posts

@SuperClocks: @horgen: Nah, don't worry. After the crap they did with the Win7 Home (and Premium) editions, limiting RAM to 8 (and 16) GB, from Windows 8 onward the limit for the Windows Home edition is 128GB of RAM.

Comparison chart:

Comparison of Windows 10 editions:

- Home (desktop): IA-32 / x86-64; OEM, Retail; N edition: Yes; max RAM: 4 GB on IA-32, 128 GB on x86-64; Continuum: Yes
- Pro (desktop): IA-32 / x86-64; OEM, Retail, Volume licensing; N edition: Yes; max RAM: 4 GB on IA-32, 2048 GB on x86-64; Continuum: Yes
- Enterprise (desktop): IA-32 / x86-64; Volume licensing, OEM; N edition: Yes; max RAM: 4 GB on IA-32, 2048 GB on x86-64; Continuum: Yes
- Education (desktop): IA-32 / x86-64; Volume licensing; N edition: Yes; max RAM: 4 GB on IA-32, 2048 GB on x86-64; Continuum: Yes
- Mobile: IA-32 / ARMv7; OEM; N edition: No; max RAM: ?; Continuum: Yes
- Mobile Enterprise: IA-32 / ARMv7; Volume licensing; N edition: No; max RAM: ?; Continuum: Yes
- IoT Core: ARM; OEM; N edition: No; max RAM: ?; Continuum: ?

:)


#30 SuperClocks
Member since 2009 • 334 Posts

@Coseniath: Prioritizing what stays in VRAM is a good move. AMD seems to have smart people in the right places again. I've been seeing some really good ideas from them lately, which is very nice after the whole Bulldozer fiasco.

And I almost got a GTX 570, but I ended up getting a Palit GTX 560 w/ 2048MB of VRAM instead, since Aliens vs Predator was already using 2GB of VRAM. I put an Apogee GT CPU waterblock and a custom VRM heatsink on it so I could at least get near the performance of a GTX 570. One really good thing about buying those obscure cards with extra VRAM, though, is that I got almost all of my money back when I sold it on eBay to buy an AMD Tahiti card. The upgrade was a no-brainer, especially since AMD was giving away some really good games with their cards at the time. I got Tomb Raider, Far Cry 3, and Far Cry 3 Blood Dragon.

Oh, and I know that I'm in the minority, but I honestly don't like GTA IV. I just couldn't get into playing the role of a hardcore criminal gangster. The game was okay for what it was, and exploring the open cities was cool at first, but it just wasn't my cup of tea, tbh. But, then again, there aren't many games that have really struck a good chord with me for several years now. I guess that I've just been in a bit of a gaming rut.


#31  Edited By SuperClocks
Member since 2009 • 334 Posts

@Coseniath said:

@SuperClocks: @horgen: Nah, don't worry. After the crap they did with the Win7 Home (and Premium) editions, limiting RAM to 8 (and 16) GB, from Windows 8 onward the limit for the Windows Home edition is 128GB of RAM.

Comparison chart:

[Windows 10 editions comparison table snipped; see the post above]

:)

Nice, Microsoft needs to work on keeping as many people using Windows as they can. The mobile market is slowly eating into the Windows PC market.

But, tbh, I'm getting sick and tired of Android phones and tablets. I'm anxious to see what Microsoft does with their Nokia phones running Windows 10 and using the DirectX API. I've pretty much already decided that I'm switching to a Microsoft phone with Windows 10, but I would fall in love with the thing if I could play some DirectX games on it.


#32  Edited By Coseniath
Member since 2004 • 3183 Posts
@SuperClocks said:

Nice, Microsoft needs to work on keeping as many people using Windows as they can. The mobile market is slowly eating into the Windows PC market.

But, tbh, I'm getting sick and tired of Android phones and tablets. I'm anxious to see what Microsoft does with their Nokia phones running Windows 10 and using the DirectX API. I've pretty much already decided that I'm switching to a Microsoft phone with Windows 10, but I would fall in love with the thing if I could play some DirectX games on it.

Microsoft will no longer make Nokia phones. They will focus on Surface phone(s).

On the other hand...

The sleeping giant has just opened his eyes.

From GSMArena:

Connecting people once more: Nokia phones, tablets are coming back


#33  Edited By SuperClocks
Member since 2009 • 334 Posts

@Coseniath: The switch to Android OS is probably good for their bottom line, as most people love it, but I'm looking to stop using Android OS, personally. All but a couple of Android games are complete garbage, the software of all types that you can download is bare-bones minimum type stuff, there are a ton of annoying little quirks with every build that I've used, and there is no reason that so many games on my kids' phones should require access to the mic, camera, GPS, etc.

If desktop apps and DX games work on mobile Win10 and Surface phones, that one feature alone would make me switch to, and keep me on, the Windows 10 mobile platform. An AMD SoC w/ HBM2 and a nice little chunk of Vega cores would be amazing, especially with USB-C and a 4k HDMI adapter. I would never need another mobile gaming platform.


#34 horgen  Moderator
Member since 2006 • 127732 Posts
@SuperClocks said:

Do you do some type of design work on your PC? Or maybe use it as a web server? Or are you just future-proofing? Zen and Vega are both supposedly coming in October, so I may upgrade then. 16GB of DDR4 will be plenty for me, though, lol.

Speaking of 32GB of RAM, Zen, and Vega, though, have you seen those sick APUs that AMD is releasing? 32x Zen cores, 32GB of HBM2, and a Vega 10 GPU all on one chip. I don't think this particular version will be a mainstream product, but I'm sure that they're considering something along those lines for the consumer market. Imagine the price difference of a high-end, mainstream APU compared to buying an Intel chip and an nVidia card separately. They could likely hit a price-to-performance ratio that would undercut the current behemoths by a pretty large margin, and with performance equal to or greater than that of Intel and team green, especially after they incorporate more of the SoC (or APU) upgrades that they're working on.

I got 16GB now. My next one shall have more. Just because I want more :P

Granted, if I get 32GB I will look into what else I can do with the RAM. I rarely turn off my computer, so I might try to create a RAM disk. I am waiting for reviews and pricing on Zen, and I really want to compare it with Skylake-E from Intel, which should release next year. My Ivy Bridge i5 has lasted me 4 years already, might be 5. If I go up a segment I'm looking for 6 years or more, probably. Or till it fails me.

I also hope Intel goes back to more decent pricing schemes. $1500 for the top tier, and having 4 HEDT processors for Broadwell-E, is too much I think. I'm thinking about getting the Skylake successor to either the 5820K or 5930K. I also wonder if that chipset/socket will have PCIe gen 4 or still only gen 3.

@Coseniath said:

@SuperClocks: @horgen: Nah, don't worry. After the crap they did with the Win7 Home (and Premium) editions, limiting RAM to 8 (and 16) GB, from Windows 8 onward the limit for the Windows Home edition is 128GB of RAM.

Comparison chart:

[Windows 10 editions comparison table snipped; see the post above]

:)

I remember I had to go for Home Premium to get 16GB support. Really a dick move by MS.

Also, Nokia had a deal with MS about not making phones. I'm glad the time limit on that deal is over now, or getting close to an end.