Intel is entering the GPGPU market - how it will affect consoles


This topic is locked from further discussion.

#51 rimnet00
Member since 2003 • 11003 Posts

It clearly says "GPGPU", not GPU. So I don't see how you two are confused.

rexoverbey

After a title edit. On top of that, it's a VPU, since it is not a real general-purpose GPU.

No, I didn't edit the title... maybe you misread it. While you may call it a VPU, Intel has marked it as a GPGPU on its PowerPoint slides.

#52 LibertySaint
Member since 2007 • 6500 Posts
Hasn't Intel had GPUs for a long time? The Intel GMA series and the like?
#53 rimnet00
Member since 2003 • 11003 Posts



You can't say one is faster than the other without giving some sort of basis. How many objects, and are they dynamic? What kind of acceleration structures are you using? (I assume your "exponentially faster" algorithm is using some sort of acceleration structure, because without them ray-tracing is basically rasterization but performed with every single pixel. And I'd love for you to explain how that is somehow faster.) And I hope you're not going to argue that ray-tracing is somehow faster in practical scenarios, given the huge gap between traditional HW-assisted rasterization and any real-time ray-tracing implementation in existence.

Please don't take offense at this, but most computer science departments don't exactly have their finger on the pulse of the professional industry. Teufelhuhn

When the graphical complexity of 3D scenes reaches the point where ray tracing and rasterization are equally efficient, ray tracing from there on out will see an exponential leap in performance in comparison to rasterization. This is what I mean by jumping over the "ray tracing constant": the ray tracing algorithm itself is linear - O(n) - but has a very large constant attached to it, whereas rasterization algorithms are of complexity O(n^2). Of course, it's a little more complex than that, but that is generally the case. This does not require any kind of 'acceleration structure', since it's theoretically what will happen, which can be seen when looking at the two sets of algorithms side by side.
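(Purely as an illustration of the cost model described above, and with made-up constants: a minimal host-only sketch, in C++-style CUDA code, comparing a hypothetical C_RT * n ray-tracing cost against a hypothetical C_RAS * n^2 rasterization cost. The constants are not measurements, and the reply further down disputes these exponents.)

#include <cstdio>

int main() {
    // Hypothetical constants: ray tracing pays a huge per-object constant,
    // rasterization a tiny one, per the cost model claimed in this post.
    const double C_RT  = 50000.0;
    const double C_RAS = 1.0;
    for (long n = 1000; n <= 100000000L; n *= 10) {
        double rt  = C_RT  * (double)n;              // claimed O(n) ray tracing
        double ras = C_RAS * (double)n * (double)n;  // claimed O(n^2) rasterization
        printf("n=%9ld  raytrace=%.2e  raster=%.2e  -> %s\n",
               n, rt, ras, rt < ras ? "ray tracing cheaper" : "rasterization cheaper");
    }
    // Under these assumed costs the curves cross at n = C_RT / C_RAS = 50,000
    // objects; past that point the claimed model favors ray tracing.
    return 0;
}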

As for asking me whether I would argue that it's faster in practical scenarios, I would need some clarification as to what you mean. Do I think it is practical for the future of games? Yes, of course. Today, no. However, as CPU parallelization continues to become the future direction of computation, and three-dimensional scenes become increasingly complex, rasterization will make less sense going forward, and a scalable approach to rendering will have to replace it. I'm not saying how soon it will be, but I think the transition will happen sooner than people realize.

I would argue this is precisely why nVidia is pushing SLI as the next step forward. It's their way to survive in the future, by having each GPU render a section of the scene. However, going forward this is only going to work so well before it becomes impractical. Rasterization techniques do not scale well with parallelization.

I'm not going to take offense at your comment regarding "most computer science departments", considering UMass Amherst's graduate program is ranked as one of the top in the world. Just about every professor there has multi-million-dollar research contracts with the professional industry. Heck, most of the rasterization algorithms were formulated by research groups at universities. Theories and algorithms start at the universities, and somewhere along the road to implementation they move over to corporations.

With that said, the concepts surrounding rasterization versus ray tracing don't require you to be working as an architect for Intel or Nvidia. The concepts themselves are really not all that difficult -- it's theory; the thing that is difficult is getting things like this implemented.

#55 Teuf_
Member since 2004 • 30805 Posts

When the graphical complexity of 3D scenes reaches the point where ray tracing and rasterization are equally efficient, ray tracing from there on out will see an exponential leap in performance in comparison to rasterization. This is what I mean by jumping over the "ray tracing constant": the ray tracing algorithm itself is linear - O(n) - but has a very large constant attached to it, whereas rasterization algorithms are of complexity O(n^2). Of course, it's a little more complex than that, but that is generally the case. This does not require any kind of 'acceleration structure', since it's theoretically what will happen, which can be seen when looking at the two sets of algorithms side by side.

rimnet00



I have no idea where you're getting your math from. The naive implementation of rasterization is O(N) * number of pixels. This is exactly the same for ray-tracing with only primary rays, and it gets much worse as you add secondary rays (and as I'm sure you know, you need lots of secondary rays to deal with the aliasing problems inherent in ray-tracing). And contrary to what you said, this is not "generally" the case... it's almost never the case. Acceleration structures can get both techniques down to O(log(n)) levels, which is why they're always used. The practical realities of the technology used to implement these techniques render the theory useless quite quickly. Unless of course you've somehow implemented a ray-tracer that has coherent memory access patterns.
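(Again just an illustrative sketch with hypothetical numbers, showing the cost shapes described here: brute-force primary rays testing every triangle per pixel, versus roughly log2(N) node tests per ray with a BVH-style acceleration structure.)

#include <cstdio>
#include <cmath>

int main() {
    const double pixels = 1280.0 * 720.0;  // assumed 720p frame, primary rays only
    for (double tris = 1e3; tris <= 1e8; tris *= 10) {
        double brute = pixels * tris;             // O(pixels * N): every ray tests every triangle
        double accel = pixels * std::log2(tris);  // ~O(pixels * log N) with an acceleration structure
        printf("triangles=%.0e  brute=%.2e  accelerated=%.2e\n", tris, brute, accel);
    }
    // The log-N curve is why acceleration structures (BVHs, kd-trees) are
    // always used in practice, for ray tracing and for hierarchical culling
    // in rasterizers alike.
    return 0;
}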



As for asking me whether I would argue that it's faster in practical scenarios, I would need some clarification as to what you mean. Do I think it is practical for the future of games? Yes, of course. Today, no. However, as CPU parallelization continues to become the future direction of computation, and three-dimensional scenes become increasingly complex, rasterization will make less sense going forward, and a scalable approach to rendering will have to replace it. I'm not saying how soon it will be, but I think the transition will happen sooner than people realize.

rimnet00


By practical scenarios I mean interactive framerates on (current) consumer hardware. Like I've already said, I see no reason why rasterization won't continue to be the fastest means of calculating primary rays for the foreseeable future.




I would argue this is precisely why nVidia is pushing SLI as the next step forward. It's their way to survive in the future, by having each GPU render a section of the scene. However, going forward this is only going to work so well before it becomes impractical. Rasterization techniques do not scale well with parallelization.

rimnet00

All I see is Nvidia trying to push further into the GPGPU space. They obviously bought Ageia as a way to deliver physics through CUDA, and in the future I wouldn't be surprised if they make some acquisitions that allow them to integrate x86 capabilities into their hardware in order to combat Fusion and Larrabee.

the thing that is difficult is getting things like this implemented.

rimnet00


That has been exactly my point all along.


By the way, I apologize for my comment about CS departments; I was out of line.
#57 subrosian
Member since 2005 • 14232 Posts

The engineering / CS answer: a long explanation of technologies, with big-O notation for computational complexity and a discussion of efficiency techniques in real-world versus theoretical scenarios.

The business answer: Intel is "throwing down" a term that basically means "we use graphics cards for some general computing" in order to charge a higher consulting fee. It's the same reason information systems doesn't call rows and columns in databases just "rows and columns", or why it's called a "capital gain" and not "you made some money from stocks" :P.

As cynical as it is to say - a really detailed discussion of techniques, hardware, and the companies involved is essentially meaningless. You want to know whether or not Intel will play a part in the hardware of next-gen consoles? Look at their partnering, and look at their bottom-line costs - can they deliver the hardware spec in 2011 for less than AMD, IBM, or nVidia?

Further - look at their licensing - will they be willing to turn this thing over to MS / Sony / Nintendo - and is it even going to be the product they want - or are they going to pull an nVidia?

-

Common sense says maybe: if Intel sees money to be made in the console market and is moving in the direction of higher-performance GPUs, then it makes sense for them to gun for it. If they're only jumping in in, say, late 2010, though, I don't know whether Intel will play a part in any of the next consoles' GPUs - at this point it's pure speculation.

-

Business Answer - Intel will get into consoles if there's money to be made and they can negotiate a deal. GPGPU is just a technical buzzword; it has nothing to do with answering the question of "will Intel be involved heavily in the next-gen console hardware?"

#58 hamidious
Member since 2007 • 1537 Posts

This is bad news for Nvidia and AMD/ATI.

Greedy Intel is gonna monopolize computer hardware.

#59 Meu2k7
Member since 2007 • 11809 Posts

This has gone way over my head even when it was explained directly to me ....

Make it simple, how does this benefit me?:twisted:

#60 skrat_01
Member since 2007 • 33767 Posts

Interesting to see what the future holds.

#61 jaisimar_chelse
Member since 2007 • 1931 Posts
[QUOTE="mjarantilla"]

[QUOTE="rimnet00"][QUOTE="MrGrimFandango"]Man thats sweet, Ray-tracing, thats the new gameplay of the 21st century right?Senor_Kami


Gameplay? You mean graphics, eh?

To most PS3/360 gamers, graphics = gameplay. :)

Whoa, take the 360 out of that and add PC. Go into a thread about PC games and all you'll see is people mentioning framerates and polygons per second.

And yet who argues that CoD4 on the Xbox 360 is the best console version and that Fable 2 is better looking than Crysis?

#62 razu_gamer2
Member since 2007 • 491 Posts
Who cares? Any decent gamer plays on a console for the GAMES.
#63 HuusAsking
Member since 2006 • 15270 Posts
[QUOTE="rimnet00"]Lastly, once we step over the 'ray tracing constant' problem with fast enough CPUs, GPUs as we think of them today will likely not exist. Teufelhuhn


There's nothing that says ray tracing is the holy grail of rendering (despite what Intel might claim). It's just a technique that happens to have certain benefits and certain (big) drawbacks. Many pro-ray-tracing arguments like to say how new techniques will make ray-tracing faster, but they don't mention that those new techniques almost always make traditional rasterization faster as well.

Most signs point to ray-tracing and rasterization being combined in certain scenarios, since they can very happily co-exist.

Right now, the big advantage is that raytracing scales better as the resolution goes up (it scales at less than a 1:1 ratio compared to current scanline rendering techniques). It's also better equipped to handle more complicated lighting effects (think realtime soft shadowing and light refraction).
#64 Innovazero2000
Member since 2006 • 3159 Posts

Intel has been making onboard graphics for years and they have always sucked. I think this is just marketing hype much like the 128-teracore processor.

rexoverbey

No, this is not hype, especially if you knew what their upcoming Larrabee architecture is capable of, at least on paper. I'm sure some of that technology is being integrated.

#65 Innovazero2000
Member since 2006 • 3159 Posts

[QUOTE="rimnet00"]I never realized how vaguely defined GPGPUs really were. More precisely, I was refering to GPGPUs which have dedicated components for doing both linear and vector calculations, as opposed to mapping to across paradigms which is very ineffecient -- ie CPUs doing vector calculations, and GPUs doing hardcore linear computation. Teufelhuhn


GPGPU isn't a kind of hardware (although I guess you could have hardware designed for it); it's just a branch of programming where GPUs are used for general-purpose calculations. It's been going on since before Nvidia came up with an API for it.

Yeah, but it's never been very efficient or had much use... this is the first time I think we'll really see it blossom.

#66 subrosian
Member since 2005 • 14232 Posts

People seem to be missing the point here:

Discussing the technical details of GPGPU, rendering techniques, and computational efficiency to "determine" whether Intel will be a major graphics hardware player in the next-gen console race is like having a debate on eschatology to find out if your friend is coming with you to church on Sunday.

#67 jaisimar_chelse
Member since 2007 • 1931 Posts

Who cares? Any decent gamer plays on a console for the GAMES. razu_gamer2

Correction: gamers like YOU play on consoles.

#68 anasbouzid
Member since 2004 • 2340 Posts
Intel's got some serious $#!+ going on at their labs... insane processors... and IBM as well (IBM said they have devices that are able to transfer a whole Blu-ray disc of data a second... like you can download full-length movies in seconds).
#69 HuusAsking
Member since 2006 • 15270 Posts
Intel's got some serious $#!+ going on at their labs... insane processors... and IBM as well (IBM said they have devices that are able to transfer a whole Blu-ray disc of data a second... like you can download full-length movies in seconds). anasbouzid
It's probably part of the Internet2 initiative, designed specifically for the education and research sectors to test the limits of communication technology.
#70 HuusAsking
Member since 2006 • 15270 Posts

[QUOTE="Teufelhuhn"][QUOTE="rimnet00"]I never realized how vaguely defined GPGPUs really were. More precisely, I was refering to GPGPUs which have dedicated components for doing both linear and vector calculations, as opposed to mapping to across paradigms which is very ineffecient -- ie CPUs doing vector calculations, and GPUs doing hardcore linear computation. Innovazero2000



GPGPU isn't a kind of hardware (although I guess you could have hardware designed for it); it's just a branch of programming where GPUs are used for general-purpose calculations. It's been going on since before Nvidia came up with an API for it.

Yeah, but it's never been very efficient or had much use... this is the first time I think we'll really see it blossom.

It's still a relatively new idea. Furthermore, GPUs have only recently gotten broad enough to make the approach worthwhile (this has mostly come about through the evolution of programmable shaders -- essentially little programs in themselves). Microsoft is further advancing the idea through its DirectX 10 spec. Why else would DX10-compliant GPUs need to incorporate an integer instruction set if not to advance the idea of GPUs evolving into GPGPUs?

What people are trying to do now is figure out just which kinds of programs would benefit from a GPGPU approach. Currently, the most favored routines are anything that involves a lot of simple floating-point calculation and benefits from being done in parallel. But like we both said, they're still probing the limits.
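(Not from the thread, just a hedged sketch of the kind of routine being described - lots of simple floating-point work done in parallel. A minimal SAXPY kernel using CUDA, the Nvidia API mentioned earlier; the array size and launch configuration are arbitrary.)

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y): embarrassingly parallel floating-point work,
// the classic example of a routine that maps well to a GPGPU approach.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host data
    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device data
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}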

#71 oldvander
Member since 2008 • 295 Posts

1. I'm glad that companies like Intel and Sony come up with new technology that tries to change/push the industry forward. Not in an unnecessary DVD-to-Blu-ray way, but in a SNES-to-PS1 way.

2. A lot of people say graphics can't get much better than they are today. But with brand-new technology, surely someone is going to make a breakthrough with all the R&D that goes into these projects. It will be interesting to see how this goes.