AMD Vega falls between 1080 Ti and Titan X running Doom 2016 (Fudzilla.com)

#1  Edited By ronvalencia

http://www.fudzilla.com/news/graphics/42350-amd-vega-handles-doom-like-a-champ

Hits almost 70 FPS at 4K/UHD

AMD's next-generation Vega GPU is coming soon and according to the newest information, it packs enough punch to run Doom at 4K/UHD resolution at almost 70 FPS.

Revealed in a set of pictures published by Golem.de, the Vega GPU, with the 687F:C1 device ID and 8GB of HBM2 memory, was running Doom at 4K/UHD resolution and Ultra settings, scoring an average of 68 FPS.

Bear in mind that this is probably an early engineering sample of a Vega-based graphics card, likely running early alpha-stage drivers, so the result should be even higher when it finally hits the market, sometime in Q1 or even Q2 2017.

As expected, the Vulkan API was enabled, which probably pushed the score a bit higher. The 68 FPS result puts it ahead of the GeForce GTX 1080 Ti and close to the Titan X.

Apparently, AMD did not share a lot of information regarding the prototype, but some information suggests that it will provide compute performance of 25 TFLOPs (FP16) and 12.5 TFLOPs (FP32).

In any case, it appears that Vega GPU is on track and should be eventually coming to the market, although AMD was not keen to reveal a lot of precise details.
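A back-of-the-envelope check on those TFLOPS figures: with packed FP16 running at double rate, 25 TFLOPS FP16 is exactly 2x the 12.5 TFLOPS FP32 number. The sketch below assumes a rumored 4096-stream-processor configuration (the article states neither the shader count nor the clock) and works out the clock that figure would imply.

```python
# Back-of-the-envelope check of the leaked 12.5 TFLOPS FP32 / 25 TFLOPS FP16 figures.
# The 4096 stream processors are an assumption (a commonly rumored full Vega config);
# the article states neither the shader count nor the clock.

stream_processors = 4096        # assumption, not from the article
flops_per_clock = 2             # one fused multiply-add counts as 2 FLOPs

target_fp32_tflops = 12.5
implied_clock_ghz = target_fp32_tflops * 1e12 / (stream_processors * flops_per_clock) / 1e9
packed_fp16_tflops = target_fp32_tflops * 2   # packed FP16 runs at double rate

print(f"implied clock : {implied_clock_ghz:.2f} GHz")      # ~1.53 GHz
print(f"packed FP16   : {packed_fp16_tflops:.1f} TFLOPS")  # 25.0, matching the leak
```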

#2 schu

@ronvalencia: What's the average FPS? I heard a lot about how great the 1080 is and it is a pretty beastly card, but it can't quite take the heat with 4k and settings maxed. It does a competent job, but I want more power.

#3 Juub1990

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

#4  Edited By ronvalencia

@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

The 1080 Ti is just a cut-down GP102 (Titan X Pascal), which could have a slightly faster core clock speed and slightly higher memory bandwidth.

http://wccftech.com/nvidia-gtx-1080-ti-launch-january/

My 980 Ti is slightly inferior to Titan X Maxwell.

GP102 is already obsolete since it doesn't support Shader Model 6's native 16-bit FP and lacks the double-rate FP16 feature.

Just a quick update on the non-Tesla Pascal cards: the GeForce 1080, 1070, 1060, the new Pascal-based Titan X (no "GTX", to differentiate it from the Maxwell-based GTX Titan X) and the new Quadro P series.

FP16 compute on these cards is horribly crippled, even worse than FP64. For every shader module of 128 FP32 units, there are 4 FP64 units (a 1/32 rate) and a single vec2 FP16 unit (a 1/64 rate) on the GP102 and GP104 that power the GeForces, Quadros, and Titan X. Basically they are only there for debugging programs meant to run on the Pascal-based Teslas.

Try again.
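Restating the quoted per-SM unit counts as arithmetic (the unit counts come from the quote above; nothing here is measured):

```python
# Per-SM throughput ratios for consumer Pascal (GP102/GP104) as quoted above:
# 128 FP32 lanes, 4 FP64 units, and a single vec2 FP16 unit per SM.

fp32_fma_per_clock = 128   # FP32 FMA operations per SM per clock
fp64_fma_per_clock = 4     # 4 dedicated FP64 units
fp16_fma_per_clock = 2     # one vec2 FP16 unit handles 2 FP16 FMAs per clock

print(f"FP64 rate: 1/{fp32_fma_per_clock // fp64_fma_per_clock} of FP32")  # 1/32
print(f"FP16 rate: 1/{fp32_fma_per_clock // fp16_fma_per_clock} of FP32")  # 1/64
```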

#5  Edited By GarGx1

What's the overclocking headroom like on it? AMD haven't exactly been great in that department.

@ronvalencia said:
@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

1080 Ti is just cut down GP102 Titan X Pascal which could have slightly faster core clock speed and slightly higher memory bandwidth.

My 980 Ti is slightly inferior to Titan X Maxwell.

Try again.

So they're speculating?

#6 EducatingU_PCMR

So this is baby Vega? Looks good if so; the cut-down version should be around 1070 level or faster. Give it to me now!

#7 Juub1990

@ronvalencia: Show me 1080 Ti benchmarks.

#8  Edited By ronvalencia

@GarGx1 said:

What's the overclocking headroom like on it? AMD haven't exactly been great in that department.

@ronvalencia said:
@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

1080 Ti is just cut down GP102 Titan X Pascal which could have slightly faster core clock speed and slightly higher memory bandwidth.

My 980 Ti is slightly inferior to Titan X Maxwell.

Try again.

So they're speculating?

1080 Ti GP102 is no better than the full GP102 Titan X Pascal.

GP100 is missing the quad-rate 8-bit integer feature, i.e. it wasn't designed for 8-bit integer color processing.

#9  Edited By ronvalencia

@Juub1990 said:

@ronvalencia: Show me 1080 Ti benchmarks.

Refer to Titan X Pascal full GP102 benchmarks.

#10 Juub1990

@ronvalencia: It's not a 1080 Ti. Benchmarks or GTFO.

#11 GarGx1

@ronvalencia said:
@GarGx1 said:

What's the overclocking headroom like on it? AMD haven't exactly been great in that department.

@ronvalencia said:
@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

1080 Ti is just cut down GP102 Titan X Pascal which could have slightly faster core clock speed and slightly higher memory bandwidth.

My 980 Ti is slightly inferior to Titan X Maxwell.

Try again.

So they're speculating?

1080 Ti GP102 is no better than the full GP102 Titan X Pascal.

GP100 is missing quad rate 8 bit Integer feature i.e. wasn't design for 8 bit integer color processing.

So it is speculation then?

#12  Edited By ronvalencia

@Juub1990 said:

@ronvalencia: It's not a 1080 Ti. Benchmarks or GTFO.

You GTFO. Titan X Pascal is the full GP102 chip you stupid clown.

https://www.overclock3d.net/news/gpu_displays/nvidia_gtx_1080ti_specifications_leak/1

The 1080 Ti has a 384-bit bus of normal GDDR5 instead of the Titan X Pascal's 384-bit GDDR5X.

The 1080 Ti slots between the Titan X Pascal and the GTX 1080 non-Ti.

Try again.

#13 deactivated-63d2876fd4204

Lol AMD

#14 superbuuman

Oh God! its Vega!!...that is quite impressive. :P

#15  Edited By ronvalencia

@goldenelementxl said:

Lol AMD

Competition is good for the consumer market.

fudzilla.com is mostly a green (Nvidia-leaning) site.

My purchase plans.

RX-490 replaces my R9-390X PC box

1080 Ti replaces my 980 Ti PC box

My house is a double storey, hence the need for two gaming PCs.

#16 04dcarraher

The fact that the test is using Vulkan voids the comparison, since Nvidia's performance gain with Vulkan is nil.

#17 Juub1990

@ronvalencia: Waiting for the 1080 Ti benchmark. Prove it or GTFO.

#18 Commiesdie

I have no loyalty to brand alone; it's just that AMD CPUs have sucked from 2005 onward.

#19 ellos

Here we go. I'm not gonna get suckered in this time. I'll wait for actual proven targets.

#20 04dcarraher

@Juub1990:

lol, the 1080 Ti will most likely fit into the same bracket as the last three generations of Ti cards: 80 < 80 Ti < Titan. But even still, a report suggesting it lands between the 1080 Ti and Titan X means squat, since the 1080 Ti isn't even out and nobody knows what it can do.

Plus, the fact that they're using Vulkan, against Nvidia's near-nil performance gain with that API, also makes the comparison a joke, since it would suggest that this Vega sample needs Vulkan/DX12 async to perform as well as or better than a GTX 1080. What happens with a game that uses DX11, or a poorly coded DX12 title that offers little to nothing? Where would it sit then?

#21  Edited By Juub1990

@04dcarraher: That's what I told him. "Between 1080 Ti and Titan X." As if we knew exactly how fast the 1080 Ti is.

#22 Jereb31

@superbuuman said:

Oh God! its Vega!!...that is quite impressive. :P

Cue "Ultra" from KMFDM.

#23 Juub1990

@ronvalencia: Benchmark. Speculation from a geek isn't good enough.

#24 DragonfireXZ95

@Juub1990 said:

@ronvalencia: Show me 1080 Ti benchmarks.

I was just about to say, we don't have 1080 Ti benchmarks.

#25  Edited By dynamitecop

The 980 Ti is about a 35% increase in performance over the 980 overall, and the 780 to 780 Ti is about a 30% increase. It's pretty easy to speculate where the 1080 Ti will fall, given that the Titan X is only about 25% faster than the 1080 and a Ti variant is not going to be faster than the Titan; we're likely looking at a 15-20% gain over the 1080 at most.

Simple.
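The same scaling argument as a quick calculation, with the GTX 1080 as the baseline; every figure is the post's rough estimate, not a benchmark result:

```python
# The scaling argument above as numbers, with the GTX 1080 as the baseline (1.00).
# Every figure is the post's rough estimate, not a measured result.

gtx_1080 = 1.00
titan_x_pascal = 1.25                   # "~25% faster than the 1080"
ti_gain_low, ti_gain_high = 0.15, 0.20  # assumed 15-20% uplift for a Ti part

projected_low = gtx_1080 * (1 + ti_gain_low)
projected_high = gtx_1080 * (1 + ti_gain_high)
print(f"projected 1080 Ti: {projected_low:.2f}-{projected_high:.2f} "
      f"(below the Titan X at {titan_x_pascal:.2f})")
```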

#26 mjebb

Hopefully the Vega GPU won't be released yet, 'cos I'm broke after overspending for Christmas.

Sounds great, but I need cash.

#27 JasonOfA36

Speculation can only go so far, especially since there are differences between architectures from different generations. We need benchmarks to properly gauge the performance of these cards.

#28 tormentos

@04dcarraher said:

Fact that the test is using Vulkan voids the comparison since Nvidia's performance with Vulkan is nil.

Wait, so just because Nvidia's performance sucks with Vulkan, the comparison is somehow not valid? How about the hundreds of games that favor Nvidia based on GameWorks alone? Do you cry about it when those get compared on AMD GPUs?

#29 ArunsunK

@ronvalencia said:
@Juub1990 said:

@ronvalencia: It's not a 1080 Ti. Benchmarks or GTFO.

You GTFO. Titan X Pascal is the full GP102 chip you stupid clown.

https://www.overclock3d.net/news/gpu_displays/nvidia_gtx_1080ti_specifications_leak/1

1080 Ti has 384 bit normal GDDR5 instead of Titan X Pascal's 384 bit GDDR5X.

1080 Ti slots between Titan X Pascal and GTX 1080 non-Ti

Try again.

Aren't those specs supposed to be fake?

http://www.guru3d.com/news-story/nvidia-gtx-1080ti-specifications-surface.html?noredirect=1#noredirect

#30 BassMan

We still don't know what each card is capable of. Let's wait until all cards are on the table (pun intended).

#31  Edited By 04dcarraher

@tormentos said:
@04dcarraher said:

Fact that the test is using Vulkan voids the comparison since Nvidia's performance with Vulkan is nil.

Wait so just because Nvidia performance sucks with Vulkan somehow is not valid.? How about the hundred of games that favor Nvidia based on gameworks alone do you cry about it when they get compare to AMD GPU.?

GameWorks is not an API. Look at when AMD did Mantle-sponsored games, or TressFX 1.0 (before Nvidia got the source code to optimize), or any other AMD-sponsored projects or games that favor AMD by playing to their strengths, or features that hurt Nvidia's performance. It goes both ways.

The fact you seem to miss is that using Vulkan (cough... cough... Mantle) does not show what the GPU can do on equal ground. Vulkan is basically Mantle, which throws AMD a bone. If they had tested with an open, mature API, i.e. DX11 or in this case OpenGL 4.5, there would not be any doubt about its overall performance.

GameWorks does not itself directly hurt AMD's performance; it's the devs who overuse features like tessellation, which earlier GCN parts don't handle as well even though they support the feature. AMD handles some GameWorks features like HBAO+ just fine, and in games that include CUDA-based physics (PhysX), it can be turned off.

#32 Wasdie  Moderator

Seems like a lot of speculation given it's being compared to cards we don't have benchmarks of yet. Not saying it's not true, just saying this is a questionable source.

AMD needs a high end GPU they can put against Nvidia's top end stuff. The Steam Hardware Survey shows the GTX 1060, GTX 1070, and GTX 1080 all beating the RX 400 series cards. That's not good for AMD.

They need a line of more powerful GPUs that can be competitively priced against Nvidia's top end if they want to start seeing larger market penetration. I would also love to see them force Nvidia to drop some prices.

#33 aroxx_ab

But does it burn motherboards?

#34  Edited By ronvalencia

@04dcarraher said:
@tormentos said:
@04dcarraher said:

Fact that the test is using Vulkan voids the comparison since Nvidia's performance with Vulkan is nil.

Wait so just because Nvidia performance sucks with Vulkan somehow is not valid.? How about the hundred of games that favor Nvidia based on gameworks alone do you cry about it when they get compare to AMD GPU.?

Gameworks is not an API...... Lets look when AMD did Mantle sponsored games or TressFX 1.0(before Nvidia got source code to optimize) or any other AMD sponsored projects or games that favor AMD using their strengths or features that hurt Nvidia's performance. It goes both ways....

Fact that you seem to miss is that using Vulkan cough.... cough.... Mantle does not show what the gpu can do on equal ground. Vulkan is basically Mantle which throws AMD a bone. If they tested with an open mature API ie DX11 or in this case Open GL 4.5 there would not be any doubt in its performance overall.

Gameworks does not itself directly hurt AMD's performance its the devs that over use features like tessellation which earlier GCN's dont perform as well even though they supports the feature.. AMD handles some gameworks features like HBAO+ just fine. the games that include CUDA based physics(physx) can be turned off.

NVIDIA GameWorks includes NVAPI, which exposes NVIDIA's GPU intrinsic functions to the Direct3D APIs.

https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl

None of the intrinsics are possible in standard DirectX or OpenGL. But they have been supported and well-documented in CUDA for years. A mechanism to support them in DirectX has been available for a while but not widely documented. I happen to have an old NVAPI version 343 on my system from October 2014 and the intrinsics are supported in DirectX by that version and probably earlier versions. This blog explains the mechanism for using them in DirectX.

Unlike OpenGL or Vulkan, DirectX unfortunately doesn't have a native mechanism for vendor-specific extensions. But there is still a way to make all this functionality available in DirectX 11 or 12 through custom intrinsics. That mechanism is implemented in our graphics driver and accessible through the NVAPI library.

NVIDIA publicly revealed their existing GPU intrinsic functions when AMD created their own intrinsic function access method around Doom's Vulkan release.

NVIDIA has had customised GPU intrinsic access alongside Direct3D since the 2008 Far Cry 2 DX10.1 kitbash: http://www.bit-tech.net/news/hardware/2008/10/22/nvidia-gpus-support-dx10-1-features-in-far-cry-2/1

Mantle still uses MS HLSL, while Doom's Vulkan AMD code path has hardware-specific GCN intrinsic shaders.

Xbox One officially supports access to AMD GCN intrinsic functions.

Doom 2016's OpenGL 4.5 code path uses specific NVIDIA extensions, e.g. compare the frame rate difference between the alpha builds and the RTM build.

#35  Edited By 04dcarraher

@ronvalencia:

NVAPI is a driver interface framework. It doesn't replace OpenGL or DirectX, unlike Mantle/Vulkan, which incorporate native GCN hardware features. NVAPI allows low-level access to the GPU, but only for some functions. It does not help the GPU render graphics any better; it's a middleman that lets devs enable and access features through the driver more easily, but everything still has to go through OpenGL/DX. There's a big difference between NVAPI and Mantle or Vulkan, since those APIs were originally designed around GCN hardware and don't have to go through the same channels as NVAPI.

#36 ronvalencia

@Juub1990 said:

@ronvalencia: Waiting for the 1080 Ti benchmark. Prove it or GTFO.

Let's see... the 1080 Ti's 384-bit GDDR5 vs the Titan XP's 384-bit GDDR5X. Most PC games are effectively memory bound. Guess which GP102 is faster... hint: it's not the 1080 Ti.

Like the stock 980 Ti vs Titan X Maxwell, the stock 1080 Ti wasn't designed to beat the Titan X Pascal.

#37 Juub1990

@ronvalencia: Yawn. Wake me up when you have the benchmarks.

#38  Edited By ronvalencia

@Juub1990 said:

@ronvalencia: Yawn. Wake me up when you have the benchmarks.

A stock 1080 Ti will be slower than the Titan X in the following areas:

1. The 1080 Ti's physical 384 GB/s memory solution is slower than the Titan X's physical 480 GB/s. The 1080 non-Ti has 320 GB/s of physical memory bandwidth.

2. The 1080 Ti has fewer active SM units than the Titan XP.

I speculated that Kepler's compute is shit (e.g. the register file vs CUDA core count ratio argument) and I was correct.
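A quick check of the bandwidth figures in point 1, using bandwidth = bus width / 8 x per-pin data rate. The 8 Gbps (GDDR5) and 10 Gbps (GDDR5X) per-pin rates are assumptions based on typical parts of the time, not numbers from the leak:

```python
# Bandwidth check for point 1: bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps).
# The per-pin rates (8 Gbps GDDR5, 10 Gbps GDDR5X) are assumptions, not from the leak.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 8))    # rumored 1080 Ti: 384-bit GDDR5  -> 384 GB/s
print(bandwidth_gb_s(384, 10))   # Titan X Pascal : 384-bit GDDR5X -> 480 GB/s
print(bandwidth_gb_s(256, 10))   # GTX 1080       : 256-bit GDDR5X -> 320 GB/s
```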

#39 Juub1990

@ronvalencia: Quiet down everyone...hear that? That's the sound of someone talking out of an ass.

#40  Edited By ronvalencia

@04dcarraher said:

@ronvalencia:

NVAPI is a driver interface framework. It doesn't replace Opengl or Direct x. Big difference there comparing it to Mantle or Vulkan etc. Which incorporates native GCN hardware features. NVAPI allows low access to the GPU but only for some functions. It does not help the GPU render graphics any better.

Do you realize Mantle is still broken/slower on GCN 1.2 than on GCN 1.0/1.1?

OpenGL's NVIDIA "Approaching Zero Driver Overhead" API extensions was the alternative to Vulkan API. There was more votes for AMD's Mantle/Vulkan API than NVIDIA's "Approaching Zero Driver Overhead" API extensions.

Again, Mantle still uses MS HLSL and Doom 2016 Vulkan is first PC title that used AMD's specific GCN intrinsic shaders.

So you completely ignored Far Cry 2's DX10.X kitbash access that speeds up rendering process (impacts ROPS's MSAA areas)?

https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl

None of the intrinsics are possible in standard DirectX or OpenGL. But they have been supported and well-documented in CUDA for years. A mechanism to support them in DirectX has been available for a while but not widely documented. I happen to have an old NVAPI version 343 on my system from October 2014 and the intrinsics are supported in DirectX by that version and probably earlier versions. This blog explains the mechanism for using them in DirectX.

Unlike OpenGL or Vulkan, DirectX unfortunately doesn't have a native mechanism for vendor-specific extensions. But there is still a way to make all this functionality available in DirectX 11 or 12 through custom intrinsics. That mechanism is implemented in our graphics driver and accessible through the NVAPI library.

OpenGL is a rendering API. NVIDIA is talking about exposing similar OpenGL-style extended functions under DirectX via NVAPI.

https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl

Extending HLSL Shaders

In order to use the intrinsics, they have to be encoded as special sequences of regular HLSL instructions that the driver can recognize and turn into the intended operations. These special sequences are provided in one of the header files that comes with the NVAPI SDK: nvHLSLExtns.h.

One important thing about these instruction sequences is that they have to pass through the HLSL compiler without optimizations, because the compiler does not understand their true meaning and therefore could modify them beyond recognition, change their order, or even completely remove them. To prevent the compiler from doing that, the sequences use atomic operations on a UAV buffer. The HLSL compiler cannot optimize away these instructions because it is unaware of possible dependencies (even though there are none). That UAV buffer is basically a fake and it will not be used by the actual shader once it's passed through the NVIDIA GPU driver. But the applications still have to allocate a UAV slot for it and tell the driver which slot that is.

You did not read https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl, where it shows an example of pixel shader hardware intrinsics.

#41  Edited By QuadKnight

@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

^ ^ This.

1080Ti benchmarks aren't out yet. This is a BS clickbait article.

#42  Edited By Frank_Castle

Just bought a 1080 FTW (basically an early tax refund purchase) that will have me totally set for 1080p/1440p gaming for the next 4-5 years.

Until then, I couldn't care less about 4K gaming on a 23-32 inch monitor.

If you have the extra money to spare and the compulsive desire to acquire the latest tech, then that's totally fine and understandable.

But otherwise, I don't see the real benefit in upgrading if you've already put together a quality rig within the last year or so.

#43  Edited By ronvalencia

@quadknight said:
@Juub1990 said:

@ronvalencia: "Between 1080 Ti and Titan X"

Lost all credibility. Nobody knows what the 1080 Ti runs like.

Clickbait article if I ever seen one. Very disappointed in you @ronvalencia.

^ ^ This.

1080Ti benchmarks aren't out yet. This is a BS clickbait article.

980 Ti vs Titan X Maxwell SKU relationship says Hi.

1080 Ti falls somewhere between 1080 and Titan X.

#44 Juub1990

@ronvalencia: You really don't get it do you? Benchmarks or GTFO.

#45  Edited By ronvalencia

@Juub1990 said:

@ronvalencia: You really don't get it do you? Benchmarks or GTFO.

You GTFO. The 1080 Ti is less than the Titan X Pascal (the full GP102), you stupid clown. The 1080 Ti is NOT GP100 (which is missing quad-rate 8-bit integer for color integer processing), nor is it GV102 (Volta, with a double-rate FP16 mode).

http://www.guru3d.com/news-story/shipping-manifest-shows-nvidia-gp102-with-10gb-memory-geforce-gtx-1080-ti-spotted.html

FOC / PG611 SKU0010 GPU / 384-BIT 10240MB GDDR COMPUTER GRAPHICS CARDS, 699-1G611-0010-000

Interesting is that the card would be fitted with 10GB of memory, which is 2GB less than the Titan X, since the 1080 Ti is rumored to have two SMs fewer than a full Titan X.

Based on earlier speculative information the 1080 Ti would get 52 shader clusters (SM) totalling towards 3,328 shader processors and GDDR5 memory. The GeForce GTX 1080 Ti will be using the GP102 silicon, similar to that used for the Pascal Titan X, however it has 4 out of 30 shader processor clusters disabled, so that is 3,328 shader processors. If you do the math then your TMU count would get to 208 with a ROP count of 96. The numbers add up towards a 10.8 TFLOP/s single-precision performing product.

....

The product is rumored to get regular GDDR5 memory, not GDDR5X - GDDR5 memory would bring in 384GB/s of memory bandwidth, which would be the deal-breaker versus a Titan X with its 480GB/s of bandwidth. It is still more than the 1080 though.
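A quick sanity check of the quoted 10.8 TFLOP/s single-precision figure: working backwards from the rumored 3,328 shader processors (2 FLOPs per core per clock for an FMA, a standard assumption; the clock itself is not part of the leak), the implied boost clock is plausible for GP102:

```python
# Sanity check of the quoted 10.8 TFLOP/s single-precision figure: what boost clock
# does it imply for the rumored 3,328 shader processors? (2 FLOPs per core per clock
# for an FMA; the clock itself is not part of the leak.)

shader_processors = 3328
fp32_tflops = 10.8
implied_boost_clock_ghz = fp32_tflops * 1e12 / (shader_processors * 2) / 1e9
print(f"implied boost clock: {implied_boost_clock_ghz:.2f} GHz")  # ~1.62 GHz
```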

#46 Juub1990

@ronvalencia: Nobody doubts that. We just doubt that whatever AMD will offer will beat a 1080 Ti.

#47 EducatingU_PCMR

@aroxx_ab said:

But does it burn motherboards?

[Embedded video]

#48  Edited By 04dcarraher

@ronvalencia said:

Do you realize Mantle is still broken/slower on GCN 1.2 than on GCN 1.0/1.1?

OpenGL's NVIDIA "Approaching Zero Driver Overhead" API extensions was the alternative to Vulkan API. There was more votes for AMD's Mantle/Vulkan API than NVIDIA's "Approaching Zero Driver Overhead" API extensions.

Again, Mantle still uses MS HLSL and Doom 2016 Vulkan is first PC title that used AMD's specific GCN intrinsic shaders.

So you completely ignored Far Cry 2's DX10.X kitbash access that speeds up rendering process?

Mantle has not been updated and was dropped by AMD, duh.

lol... AZDO??? That was a goal for OpenGL back in 2014, and yet here we are in 2016 using OpenGL 4.5 and, ding ding ding, Vulkan based on Mantle. That's where your lower-overhead API extensions went.

Far Cry 2 and NVAPI were a bypass for an artificially imposed DX10.1 feature restriction that was not allowed on Nvidia GPUs, since they were only DX10.0, even though the hardware could use the feature. It's not the same as having an API built from the ground up for a set architecture. NVAPI is a middleman that lets devs enable and access features through the driver, but it still has to go through the OpenGL/DX base, which is not designed around a set architecture like Mantle/Vulkan is.

#49  Edited By ronvalencia

@04dcarraher said:
@ronvalencia said:

Do you realize Mantle is still broken/slower on GCN 1.2 than on GCN 1.0/1.1?

OpenGL's NVIDIA "Approaching Zero Driver Overhead" API extensions was the alternative to Vulkan API. There was more votes for AMD's Mantle/Vulkan API than NVIDIA's "Approaching Zero Driver Overhead" API extensions.

Again, Mantle still uses MS HLSL and Doom 2016 Vulkan is first PC title that used AMD's specific GCN intrinsic shaders.

So you completely ignored Far Cry 2's DX10.X kitbash access that speeds up rendering process?

Mantle has not been updated and was dropped by AMD duh.

lol..... AZDO??? which was a goal for opengl and back in 2014. and yet we are in 2016 using opengl 4.5 and ding ding ding Vulkan based on Mantle.... There is your lower overhead API extensions went.

farcry 2 and nvapi was a bypass for DX10.1 feature set that was not allowed on Nvidia gpus since they only DX10.0 . Even though the hardware could use the feature. its not the same as having an an api built from ground up for a set architecture. NVAPI is a middleman to allows devs to enable and access features through the driver easier but still has to go through opengl/DX base which is not designed around for a set architecture like Mantle/vulkan.

"There is no image quality or performance difference between the two implementations."

G8X is not fully DX10.1 compliant, but it has similar DX10.1 MSAA features. My point with Far Cry 2 is the existence of NVIDIA's custom API access back in 2008.

A more complete quote

FarCry 2 reads from a multisampled depth buffer to speed up antialiasing performance. This feature is fully implemented on GeForce GPUs via NVAPI. Radeon GPUs implement an equivalent path via DirectX 10.1. There is no image quality or performance difference between the two implementations.

http://www.hardocp.com/article/2008/12/01/farcry_2_dx9_vs_dx10_performance/#.WFH53HnauUk

"There is no image quality or performance difference between the two implementations" context is for multisampled depth buffer to speed up antialiasing performance.

As for AZDO, Hint: NV_command_list, Bindless, Bindless Buffers, Bindless Textures, Bindless Constants (UBO).

As an example

Bindless textures require OpenGL 4.4, i.e. NV_bindless_texture renamed to ARB_bindless_texture.

https://www.reddit.com/r/Amd/comments/4j7e48/doom_benchmarks_970_73_faster_than_390_the_way/?st=iwpqcuo1&sh=a5410c5b

Doom 2016 OpenGL... programmers restricted AMD GPUs to OpenGL 4.3.

#50  Edited By 04dcarraher

We need to see more tests and examples using a slew of APIs (DX11, OpenGL, DX12, Vulkan, etc.). If the new Vega GPU performs as well as or better than a 1070/1080, especially in DX11 and OpenGL 4.5, then we can safely say it's as powerful as or better than a 1080. But until a 1080 Ti is out and has been tested, the claim of performance between the 1080 Ti and Titan means nothing.