PC visuals in games will change by leaps and bounds when graphics cards die off.


This topic is locked from further discussion.


#1 Gangans
Member since 2007 • 1273 Posts

I have been reading up on things like programmable shaders, and basically even DirectX 10 with its Shader Model 4 features doesn't come close to the flexibility of software rendering. Remember playing Half-Life or Quake for the first time, without even needing a graphics card?

The possibilities were, in theory, unlimited; in reality, CPUs were just too slow at the time.

But welcome to 2007: the mass-produced and affordable Intel Core 2 Quad processor (just one example) could do so much with software rendering. Dedicate two whole cores to graphics computation with smart, highly efficient voxel-based rendering techniques and you could achieve Crysis-like scenes on a CPU at a playable framerate. (OK, I'm exaggerating ;) )

Unlike programmable GPUs, software rendering is unlimited in its possibilities. Pixar and other studios use software rendering to produce their highly detailed 3D films; graphics cards don't have the flexibility for techniques such as ray tracing in real time.

So, here's what I'm getting at: about 20 years from now, GPUs are going to become so flexible, so general-purpose with their shader implementations, that they will in fact have all the capabilities of CPUs. Then the logical step is to merge the remaining differences and create a hybrid CPU/GPU design so powerful and so efficient that specialised graphics processing boards become obsolete.

Why bother optimizing your game for a particular GPU API like DirectX 10 or OpenGL 2.0 when you could just write all your shaders in software and achieve any effect you wish, on any scale, given enough CPU power to work with?
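Just to illustrate what I mean by writing shaders in software (a rough sketch I'm making up on the spot, not code from any real engine; the resolution, the Color struct and the plasma-style effect are all arbitrary), the "shader" below is just an ordinary C++ function that a CPU renderer calls for every pixel, and you can change it to do literally anything:

#include <cmath>
#include <cstdint>
#include <vector>

struct Color { uint8_t r, g, b; };

// The "shader": an ordinary function evaluated once per pixel. Swap the maths
// for anything you like -- there is no fixed instruction set to target.
Color shade(float u, float v, float time) {
    float value = 0.5f + 0.5f * std::sin(10.0f * u + time) * std::cos(10.0f * v - time);
    return Color{ static_cast<uint8_t>(255.0f * value),
                  static_cast<uint8_t>(255.0f * u),
                  static_cast<uint8_t>(255.0f * v) };
}

int main() {
    const int width = 640, height = 480;
    std::vector<Color> framebuffer(width * height);
    // One full-screen pass; a real renderer would do this every frame and
    // hand the buffer to the display (or split the rows across CPU cores).
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] =
                shade(x / float(width), y / float(height), 0.0f);
    return 0;
}

No driver, no API, no fixed feature set; just a function and a loop.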

To further simplify my point: we will only get true-to-life graphics, and leave the uncanny valley, when CPUs become our graphics and physics processors as well. From the beginning it has been about integration. We are headed towards some sort of processing singularity, when the core features of a whole PC will fit on a single fingernail-sized chip. I guarantee it, and it's sooner than you think.

And with it will come games that will be so realistic, they will be games no more. Convincing AI will have the chance to flourish when such supercomputing power is in the homes of mainstream consumers. We are headed for a fusion of hardware. When that fusion occurs, developers will only need to worry about one thing, software. They will be limited no more.

What does this mean for the future of specialised games consoles?

Nothing sinister. They will simply become simplified entertainment extensions/appendages of the greater processing hub of the 2027 home, which will be the PC.

Couple all this with new peripherals that allow user interaction on an entirely new level (direct brain-to-PC interfaces), and we're living in ancient history, people.

Remember the games of 1987? We live in an entirely new era. Imagine what 2027 will look like...

I think we should look ahead. Instead of focusing on DirectX 11 or the GeForce 9800, the GPU makers and CPU makers should experiment, work together and delve into entirely new and radical technologies. Let's skip this time-consuming, sequential evolution of computer graphics and move by leaps and bounds. To do this, the CPU and GPU manufacturers should pool their resources and technology.

GPUs are starting to share so many aspects of the CPU that it is becoming wasteful; we gamers are ready for a union NOW. No separate physics cards, no more graphics cards, just one single processing unit smack bang in the heart of our machines. No need for specialisation or fancy visual tricks if the CPU is fast enough.

If nano/quantum computing ever takes off and mainstream PCs hit the terahertz range, specialised hardware will become obsolete and the possibilities endless. I look forward to that day.

~Gangans~


#2 Platearmor_6
Member since 2004 • 2817 Posts
From reading that, I think my school failed me. But I'm understanding what you're saying: graphics handled by a GPU and a processor together would be far more forgiving on the system than if you just had the processor doing it all.

#3 druglord6
Member since 2005 • 1030 Posts

nice post man.


#5 trenchermanX
Member since 2006 • 25 Posts
I share your enthusiasm, and we can only hope technology advances enough in our lifetime that gene therapy and nanotechnology become words in common use, with slang for them and everything. Anyway, I saw a video today of a guy controlling objects in what looked to be an HL2 environment with his mind, so I guess we're almost at THAT point where we can control machines with thought alone. Villains will have a field day with this. But back to the point: I agree with the poster that a single card/processor should meet all our computational needs, but the trend, if you take Ageia into account, seems to be more and more cards that cater to very specific areas of gaming. Whether this is good or bad, and whether we should be cutting down on peripherals instead of multiplying them, who knows? Let's hope they find that simplicity is the key to progress in gaming. If not, I can't wait until 2027 when I need to buy an Ageia emotion card or whatever just to run a game... wait, no; that'll suck. :P

#6 Gangans
Member since 2007 • 1273 Posts

From reading that, I think my school failed me. But I'm understanding what you're saying: graphics handled by a GPU and a processor together would be far more forgiving on the system than if you just had the processor doing it all. Platearmor_6

Yeah, you got the gist of it. Basically what I'm saying is: integrate CPUs, GPUs and just about everything else onto a single multicore chip, and you greatly simplify the process for games developers. You have one set of tools to tap into all this power, not a whole bunch of different compilers and tools for GPU-specific things like Direct3D and OpenGL, and then having to worry about physics calculations on the CPU, or even a separate PPU (physics processing unit) with yet another set of tools to define those calculations.


#7 Gangans
Member since 2007 • 1273 Posts

I share your enthusiasm, and we can only hope technology advances enough in our lifetime that gene therapy and nanotechnology become words in common use, with slang for them and everything. Anyway, I saw a video today of a guy controlling objects in what looked to be an HL2 environment with his mind, so I guess we're almost at THAT point where we can control machines with thought alone. Villains will have a field day with this. But back to the point: I agree with the poster that a single card/processor should meet all our computational needs, but the trend, if you take Ageia into account, seems to be more and more cards that cater to very specific areas of gaming. Whether this is good or bad, and whether we should be cutting down on peripherals instead of multiplying them, who knows? Let's hope they find that simplicity is the key to progress in gaming. If not, I can't wait until 2027 when I need to buy an Ageia emotion card or whatever just to run a game... wait, no; that'll suck. :P trenchermanX

Functional simplicity is always desirable.

And lol at ageia emotion cards :lol:


#8 Gangans
Member since 2007 • 1273 Posts

nice post man.

druglord6

Glad you like it. These random essays just happen sometimes. :)

#9 G013M
Member since 2006 • 6424 Posts

But welcome to 2007: the mass-produced and affordable Intel Core 2 Quad processor (just one example) could do so much with software rendering. Dedicate two whole cores to graphics computation with smart, highly efficient voxel-based rendering techniques and you could achieve Crysis-like scenes on a CPU at a playable framerate.

Gangans

While I agree with your post as a whole, and in 20 or so years I'd love to have an integrated CPU & GPU, the statement that I've quoted above isn't true.

The reason that graphics cards are so fast is that they are extremely specialised at what they do.

While I can't be 100% sure on what I'm about to say, and I'm completely fine if anyone decides to completely own me, two cores of a quad-core COULD NOT produce Crysis-like graphics at a PLAYABLE frame rate. It just isn't possible.

Maybe in 20 years, but not right now.


#10 tribalTox
Member since 2006 • 803 Posts
*Bravo*... This is one post that I actually enjoyed reading.

#11 Gangans
Member since 2007 • 1273 Posts
[QUOTE="Gangans"]

But welcome to 2007: the mass-produced and affordable Intel Core 2 Quad processor (just one example) could do so much with software rendering. Dedicate two whole cores to graphics computation with smart, highly efficient voxel-based rendering techniques and you could achieve Crysis-like scenes on a CPU at a playable framerate.

G013M

While I agree with your post as a whole, and in 20 or so years I'd love to have an integrated CPU & GPU, the statement that I've quoted above isn't true.

The reason that graphics cards are so fast is that they are extremely specialised at what they do.

While I can't be 100% sure on what I'm about to say, and I'm completely fine if anyone decides to completely own me, two cores of a quad-core COULD NOT produce Crysis-like graphics at a PLAYABLE frame rate. It just isn't possible.

Maybe in 20 years, but not right now.

I'm talking highly optimized, efficient code running in the voxel (not polygon) rendering regime. THIS game: http://en.wikipedia.org/wiki/Outcast_%28game%29 had BUMP MAPPING, FSAA and DEPTH OF FIELD EFFECTS in 1999, before GPUs even had pixel shaders... because it didn't use a GPU! Sure, it doesn't look like much now, but it outdid the GPUs of its time with software rendering.
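For anyone who wants to see roughly how that kind of renderer works, here is a bare-bones sketch of the heightmap column-marching idea that "voxel space" terrain engines like Outcast's are built around (to be clear, this is my own simplified illustration with made-up map data, screen size and camera numbers, not Outcast's actual code):

#include <algorithm>
#include <cstdint>
#include <vector>

const int kMapSize = 1024, kScreenW = 320, kScreenH = 200;
std::vector<uint8_t> heightmap(kMapSize * kMapSize, 0);  // terrain altitude samples
std::vector<uint8_t> colormap(kMapSize * kMapSize, 0);   // terrain colour indices
std::vector<uint8_t> screen(kScreenW * kScreenH, 0);     // output framebuffer

uint8_t sampleHeight(float x, float y) {
    // Wrap around the map edges so the terrain repeats.
    int xi = int(x) & (kMapSize - 1), yi = int(y) & (kMapSize - 1);
    return heightmap[yi * kMapSize + xi];
}

void renderTerrain(float camX, float camY, float camZ, float horizon, float maxDist) {
    for (int col = 0; col < kScreenW; ++col) {
        float rayDirX = (col - kScreenW / 2) / float(kScreenW); // fan of rays across the view
        int yBuffer = kScreenH;                                 // lowest screen row still uncovered
        // March the ray away from the camera, drawing vertical slices front to back.
        for (float dist = 1.0f; dist < maxDist; dist += 1.0f) {
            float px = camX + rayDirX * dist;
            float py = camY + dist;                             // camera looks along +Y
            float terrainH = sampleHeight(px, py);
            // Perspective: higher terrain and closer distance both push the slice up the screen.
            int screenY = int((camZ - terrainH) / dist * 240.0f + horizon);
            screenY = std::max(0, std::min(screenY, kScreenH));
            uint8_t colour = colormap[(int(py) & (kMapSize - 1)) * kMapSize + (int(px) & (kMapSize - 1))];
            // Occlusion: only the part not already covered by nearer terrain gets drawn.
            for (int y = screenY; y < yBuffer; ++y)
                screen[y * kScreenW + col] = colour;
            yBuffer = std::min(yBuffer, screenY);
        }
    }
}

int main() {
    renderTerrain(512.0f, 512.0f, 120.0f, 100.0f, 600.0f);
    return 0;
}

The whole thing is just arithmetic over arrays, which is exactly why it runs on a plain CPU with no graphics card at all.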


#12 Greyhound222
Member since 2005 • 2899 Posts
But everybody would need a Q6800 to run it maxed. I'll wait for quad-cores to drop in price. Also, I doubt you could get Crysis quality, as the game right now requires a dual-core processor.

#13 Gangans
Member since 2007 • 1273 Posts
[QUOTE="G013M"][QUOTE="Gangans"]

But welcome to 2007: the mass-produced and affordable Intel Core 2 Quad processor (just one example) could do so much with software rendering. Dedicate two whole cores to graphics computation with smart, highly efficient voxel-based rendering techniques and you could achieve Crysis-like scenes on a CPU at a playable framerate.

Greyhound222

While I agree with your post as a whole, and in 20 or so years I'd love to have an integrated CPU & GPU, the statement that I've quoted above isn't true.

The reason that graphics cards are so fast is that they are extremely specialised at what they do.

While I can't be 100% sure on what I'm about to say, and I'm completely fine if anyone decides to completely own me, two cores of a quad-core COULD NOT produce Crysis-like graphics at a PLAYABLE frame rate. It just isn't possible.

Maybe in 20 years, but not right now.

What he said.

Once again I refer you to the game called Outcast.

Also, I think most of you underestimate the Core 2 Duo line of processors.

I have a low-end Core 2 Duo 5300 in my laptop and a P4 3.4 GHz in my desktop. I did some benchmarks and, lo and behold, my little laptop is essentially 4x faster in real-world applications! It's as if my P4 were running at 13.6 GHz, the slow bastard.

It's at this point that I'd like to highlight a relevant fact about Crysis in particular: it will use voxel-based software rendering for its highly detailed terrain! I got this from a recent PC hardware magazine; I think the Crysis wiki also mentions it.

The reason I believe they did this is that traditional rendering of tessellated, near-photorealistic terrain would simply consume too much bandwidth, so they went with a CPU-assisted method that is much faster in execution.


#14 hamidious
Member since 2007 • 1537 Posts
AMD is already planning an architecture that combines the CPU and GPU on one chip; sounds like your prediction is coming true.

#15 kyrieee
Member since 2007 • 978 Posts

Even if you do away with GPUs you still need an API.

GPUs are ridiculously fast when it comes to certain types of calculations, and you probably won't see them go away for quite some time. It's not like Pixar doesn't use GPUs for the real-time apps they do all their authoring in. Throwing hundreds of CPUs at something thinking you'll get unlimited processing power is not a viable solution. The design problems associated with it are monstrous, and having that many CPUs creates power problems. We're in the greenhouse age, after all.

Voxels are great for some things, yes, but they won't replace polygons.

Quantum computers are something radically different so don't bring them into the discussion.


#16 Gangans
Member since 2007 • 1273 Posts

AMD is already planning an architecture that combines the CPU and GPU on one chip; sounds like your prediction is coming true. hamidious

These sound like baby steps to me, but ATI and AMD may make the winning combination.


#17 rimnet00
Member since 2003 • 11003 Posts

The thing is, they are already working on CPU/GPU unification. This is precisely why AMD bought ATI: because both mindsets are needed in order to research the area. Nvidia is also working closely with Intel to achieve the same thing. However, a unified CPU/GPU architecture would be more like a CPU and GPU combined, rather than a super-fast CPU. Also, this unification process is going to take a long time to perfect.

Even then, they aren't simply going to get rid of graphics APIs like DX - they are needed to simplify development. DX10 in itself is a beast, and it took Microsoft three years to create the first release. Development houses aren't going to want to write their own APIs, considering how much overhead would be involved.

While I agree software rendering allows people to use more graphical techniques, it is not practical for the real world in the least. That is why hardware is created: to speed up graphical techniques. Using Quake 2 as an example: many people played that game in software rendering mode, but at a much lower framerate and quality than someone who bought a 3dfx card with OpenGL support. This is because the hardware sped up the processing involved.

In fact, all the DX10 techniques we see today were once done in software. Many of its new techniques are very old. The only difference is, research brought us the ability to break the software down into circuitry.

I agree, one day we will see techniques like ray tracing become possible in real time (in games). In fact, ray tracing is actually very efficient; however, it has a very big 'constant' in its algorithm, which means we have to wait for a GPU that can handle it.

The bottom line is, though, that hardware will always be used to replace software techniques (in gaming). Software is painfully slow, because the logic being implemented has to pass through a vast number of components before it is finally executed, whereas with hardware, once the instruction is fired, it is almost instantly completed. The only time you will see a software implementation is with non-time-critical rendering, i.e. rendering a 3D scene for a movie. Even then, most "render farms" have been using GPUs to handle much of their processing.


#18 Gangans
Member since 2007 • 1273 Posts

Even if you do away with GPUs you still need an API.

GPUs are ridiculously fast when it comes to certain types of calculations, and you probably won't see them go away for quite some time. It's not like Pixar doesn't use GPUs for the real-time apps they do all their authoring in. Throwing hundreds of CPUs at something thinking you'll get unlimited processing power is not a viable solution. The design problems associated with it are monstrous, and having that many CPUs creates power problems. We're in the greenhouse age, after all.

Quantum computers are something radically different so don't bring them into the discussion.

kyrieee

I am aware that a general-purpose CPU would struggle with the massively parallel calculations needed to fulfil GPU tasks; that is why I've been peddling the hugely efficient yet horribly neglected 'voxel' rendering regime. But more importantly, in my fusion of all the components I stressed a multicore design: certain cores specifically designed around the massively parallel workload of the GPU, to maximize graphical output, others optimized for traditional logic computation, and yet others designed with physics in mind. Combine all these cores on a single die and you get what I want.
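In code terms the idea is nothing fancier than giving each subsystem its own core-sized worker on the one chip, all sharing the same memory instead of talking across a bus to a separate card. A very rough sketch (the subsystem bodies are empty placeholders, and actual core pinning is left to the OS):

#include <atomic>
#include <thread>

std::atomic<bool> running{true};

// Each worker stands in for a specialised block of cores on the one die.
void renderFrames() { while (running) { /* software rasterise / shade a frame */ } }
void stepPhysics()  { while (running) { /* integrate rigid bodies, resolve contacts */ } }

void runGameLogic() {
    for (int frame = 0; frame < 1000; ++frame) { /* AI, input, scripting */ }
    running = false;  // tell the workers to stop (just so this sketch terminates)
}

int main() {
    // On a quad-core chip the OS scheduler (or explicit affinity calls) can
    // keep each worker on its own core; they all share the same memory.
    std::thread graphics(renderFrames);
    std::thread physics(stepPhysics);
    runGameLogic();   // the main thread plays the "general purpose" core
    graphics.join();
    physics.join();
    return 0;
}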


#19 captalchol
Member since 2006 • 643 Posts
Not for a long time, my friend. Have you ever run the 3DMark06 CPU test? It's like 640x480 with just a few frames per second, and that's quad-core optimized. Maybe when we start seeing 80-core CPUs it might be possible, but not with 4 or even 8 cores.

#20 G013M
Member since 2006 • 6424 Posts

Once again I refer you to the game called Outcast.

Also, I think most of you underestimate the Core 2 Duo line of processors.

I have a low-end Core 2 Duo 5300 in my laptop and a P4 3.4 GHz in my desktop. I did some benchmarks and, lo and behold, my little laptop is essentially 4x faster in real-world applications! It's as if my P4 were running at 13.6 GHz, the slow bastard.

It's at this point that I'd like to highlight a relevant fact about Crysis in particular: it will use voxel-based software rendering for its highly detailed terrain! I got this from a recent PC hardware magazine; I think the Crysis wiki also mentions it.

The reason I believe they did this is that traditional rendering of tessellated, near-photorealistic terrain would simply consume too much bandwidth, so they went with a CPU-assisted method that is much faster in execution.

Gangans

Well, according to Wikipedia, they used voxels because:

"Voxel Object: Allows creating geometry a heightmap system wouldn't support to create cliffs, caves, canyons and alien shapes. Voxel editing is as easy as heightmap editing and fast in rendering."

Because normal heightmaps don't support the things they want to do, not because GPUs don't have enough "bandwidth".

Although I'll admit that they do say "fast in rendering".


#21 fenriz275
Member since 2003 • 2394 Posts

I think at a certain point the limitation in games will not be the hardware but the software. Most PC games now really don't benefit much from having top-of-the-line, screaming-fast processors. I think it's more up to the developers to actually take advantage of the hardware available now. My computer is mid-range and I have yet to play any game that actually taxes it, even when I have two or three other programs running in the background. I'm more interested in seeing where in-game AI goes rather than graphics. It's nice if the enemies in my FPS have individually rendered eyelashes, but who cares if they're too stupid to duck and cover when I shoot at them? Still, I agree that at some point the cores of a CPU will be assigned to specific tasks like graphics, sound, etc. Very good thread.


#22 Gangans
Member since 2007 • 1273 Posts

The thing is, they are already working on CPU/GPU unification. This is precisely why AMD bought ATI: because both mindsets are needed in order to research the area. Nvidia is also working closely with Intel to achieve the same thing. However, a unified CPU/GPU architecture would be more like a CPU and GPU combined, rather than a super-fast CPU. Also, this unification process is going to take a long time to perfect.

Even then, they aren't simply going to get rid of graphics APIs like DX - they are needed to simplify development. DX10 in itself is a beast, and it took Microsoft three years to create the first release. Development houses aren't going to want to write their own APIs, considering how much overhead would be involved.

While I agree software rendering allows people to use more graphical techniques, it is not practical for the real world in the least. That is why hardware is created: to speed up graphical techniques. Using Quake 2 as an example: many people played that game in software rendering mode, but at a much lower framerate and quality than someone who bought a 3dfx card with OpenGL support. This is because the hardware sped up the processing involved.

In fact, all the DX10 techniques we see today were once done in software. Many of its new techniques are very old. The only difference is, research brought us the ability to break the software down into circuitry.

I agree, one day we will see techniques like ray tracing become possible in real time (in games). In fact, ray tracing is actually very efficient; however, it has a very big 'constant' in its algorithm, which means we have to wait for a GPU that can handle it.

The bottom line is, though, that hardware will always be used to replace software techniques (in gaming). Software is painfully slow, because the logic being implemented has to pass through a vast number of components before it is finally executed, whereas with hardware, once the instruction is fired, it is almost instantly completed. The only time you will see a software implementation is with non-time-critical rendering, i.e. rendering a 3D scene for a movie. Even then, most "render farms" have been using GPUs to handle much of their processing. rimnet00

I agree with everything you say. However, in my initial post I was speaking of 20 years into the future. One can't predict what will happen in 5 years when it comes to GPUs and CPUs, let alone 20. I was just postulating a hypothetical based on the constant trend of unification/integration of various components into ever smaller spaces in CPU and GPU design.

I don't believe we will still have separate graphics cards in 20 years if such trends hold. The humble audio accelerator card has basically died now that CPUs can easily emulate all of its hardware functions in software. I think this is where GPUs are headed - not soon, mind you, but in 20 or more years, which is an era when it comes to computer technology.


#23 anandram
Member since 2007 • 1537 Posts
Although my mind nearly spasmed reading your post, I got the gist, and am glad there are some intelligent posters around GS :) Your insightful post alone earns you the post-of-the-day award... even if it gave me brain freeze. :?

#24 rimnet00
Member since 2003 • 11003 Posts
[QUOTE="rimnet00"]

The thing is, they are already working on CPU/GPU unification. This is precisely why AMD bought ATI: because both mindsets are needed in order to research the area. Nvidia is also working closely with Intel to achieve the same thing. However, a unified CPU/GPU architecture would be more like a CPU and GPU combined, rather than a super-fast CPU. Also, this unification process is going to take a long time to perfect.

Even then, they aren't simply going to get rid of graphics APIs like DX - they are needed to simplify development. DX10 in itself is a beast, and it took Microsoft three years to create the first release. Development houses aren't going to want to write their own APIs, considering how much overhead would be involved.

While I agree software rendering allows people to use more graphical techniques, it is not practical for the real world in the least. That is why hardware is created: to speed up graphical techniques. Using Quake 2 as an example: many people played that game in software rendering mode, but at a much lower framerate and quality than someone who bought a 3dfx card with OpenGL support. This is because the hardware sped up the processing involved.

In fact, all the DX10 techniques we see today were once done in software. Many of its new techniques are very old. The only difference is, research brought us the ability to break the software down into circuitry.

I agree, one day we will see techniques like ray tracing become possible in real time (in games). In fact, ray tracing is actually very efficient; however, it has a very big 'constant' in its algorithm, which means we have to wait for a GPU that can handle it.

The bottom line is, though, that hardware will always be used to replace software techniques (in gaming). Software is painfully slow, because the logic being implemented has to pass through a vast number of components before it is finally executed, whereas with hardware, once the instruction is fired, it is almost instantly completed. The only time you will see a software implementation is with non-time-critical rendering, i.e. rendering a 3D scene for a movie. Even then, most "render farms" have been using GPUs to handle much of their processing. Gangans

I agree with everything you say. However, in my initial post I was speaking of 20 years into the future. One can't predict what will happen in 5 years when it comes to GPUs and CPUs, let alone 20. I was just postulating a hypothetical based on the constant trend of unification/integration of various components into ever smaller spaces in CPU and GPU design.

I don't believe we will still have separate graphics cards in 20 years if such trends hold. The humble audio accelerator card has basically died now that CPUs can easily emulate all of its hardware functions in software. I think this is where GPUs are headed - not soon, mind you, but in 20 or more years, which is an era when it comes to computer technology.

My main issue, though, is that your hypothesis is talking about devolving the entire field of computer graphics. Like I mentioned before, CPUs and GPUs are in the process of being unified - in other words, multiple CPU and GPU dies on one chip. This does not translate to CPUs in the future emulating graphical techniques. Ray tracing, as you mention in your first post, is actually a 20-some-odd-year-old creation, and it still isn't possible to do efficiently in real time. Imagining the CPU taking over graphics processing, even in 20 years, is far-fetched, and once again devolving. This isn't like audio techniques being emulated on the CPU; the audio emulation example you refer to is much less complicated.

It is nice to see that there are people who are looking forward. I am just letting you know, as someone who has studied in this field, I know it's not a feasible notion.


#25 beckoflight
Member since 2006 • 848 Posts
Well, that was a good post, and I'm happy to see people who think ahead. In fact, ATI/AMD have kept things secret since they merged, so I think they will be the first to create this kind of hybrid. So that you know, ATI could have released the XTX version with 1 GB of DDR memory, but the supplier of the DDR wasn't up to the task. Anyway, I'm happy to see the 2900 XT with new drivers, and MAYBE it will eat the GTX too; we will see. Until now Nvidia has had the larger share of the PC market and ATI the remaining 46% of it, but keep in mind that ATI controls the console/notebook market with the GameCube / Wii / X360, and despite the fact that most PC games carry the Nvidia logo, some of them work better on ATI cards with even better image quality. Even GameSpot has proven that with the grudge match between the 1950 XTX and the 79.... GTX. So if one thing is for sure, Nvidia knows how to make advertisements!

#26 Greyhound222
Member since 2005 • 2899 Posts

The thing is, they are already working on CPU/GPU unification. This is precisely why AMD bought ATI: because both mindsets are needed in order to research the area. Nvidia is also working closely with Intel to achieve the same thing. However, a unified CPU/GPU architecture would be more like a CPU and GPU combined, rather than a super-fast CPU. Also, this unification process is going to take a long time to perfect.

Even then, they aren't simply going to get rid of graphics APIs like DX - they are needed to simplify development. DX10 in itself is a beast, and it took Microsoft three years to create the first release. Development houses aren't going to want to write their own APIs, considering how much overhead would be involved.

While I agree software rendering allows people to use more graphical techniques, it is not practical for the real world in the least. That is why hardware is created: to speed up graphical techniques. Using Quake 2 as an example: many people played that game in software rendering mode, but at a much lower framerate and quality than someone who bought a 3dfx card with OpenGL support. This is because the hardware sped up the processing involved.

In fact, all the DX10 techniques we see today were once done in software. Many of its new techniques are very old. The only difference is, research brought us the ability to break the software down into circuitry.

I agree, one day we will see techniques like ray tracing become possible in real time (in games). In fact, ray tracing is actually very efficient; however, it has a very big 'constant' in its algorithm, which means we have to wait for a GPU that can handle it.

The bottom line is, though, that hardware will always be used to replace software techniques (in gaming). Software is painfully slow, because the logic being implemented has to pass through a vast number of components before it is finally executed, whereas with hardware, once the instruction is fired, it is almost instantly completed. The only time you will see a software implementation is with non-time-critical rendering, i.e. rendering a 3D scene for a movie. Even then, most "render farms" have been using GPUs to handle much of their processing.

rimnet00
Actually, people have integrated ray tracing into old games such as Quake 2, so it may be in the near future.

#27 Funkyhamster
Member since 2005 • 17366 Posts
But the whole reason graphics cards are better at processing visuals than processors is that they're built with a very specific architecture... rendering entirely with processors is possible, but they would have to increase in power much faster than they are right now. Which, if processor techniques are refined, or some new computing technology comes along in 20 years, could very well happen.

#28 rimnet00
Member since 2003 • 11003 Posts
Actually, people have integrated ray tracing into old games such as Quake 2, so it may be in the near future. Greyhound222
Considering that only recently some researchers used 4 PS3s to do real-time ray tracing on a single car model, and it was considered a big deal, making such a statement is a huge stretch. The truth is, what you are referring to was a proof-of-concept and was not real-time ray tracing.

#29 Macolele
Member since 2006 • 534 Posts
I don't think so. A CPU always runs behind an OS, which never allows it to max out its potential. Instead, they will put all the AI, physics, graphics and so on onto a card with a closed architecture. The OS simply sends data or input to this card. A strong CPU won't be necessary.

#31 elmertheowl
Member since 2005 • 62 Posts

Very good post. I felt like I was reading a Pop Sci article or something. What's your line of work?

Let's skip this time-consuming, sequential evolution of computer graphics and move by leaps and bounds.

Gangans

In reference to this, it makes sense that when companies make a leap in the advancement of their technology, they release it when it's profitable. It's the same across all markets: what something can offer matters less than how much money it can make. They'll want to squeeze their cash cows till the milk runs dry, then advance in increments, and not introduce the latest technology (at least at affordable prices) till they stop making money on everything in between.


#32 LouieV13
Member since 2005 • 7604 Posts
AMD is already planning an architecture that combines the CPU and GPU on one chip; sounds like your prediction is coming true. hamidious
It's good to be an AMD fanboy, ain't it?

#33 whgresiak
Member since 2005 • 1889 Posts
From reading that, I think my school failed me. Platearmor_6
Haha, I think that an integrated CPU/GPU that uses software rendering is a great idea; too bad no one has the technology to make one that could run a demanding game yet. You mentioned Quake and Half-Life - well, we might be able to get Halo (the original)-level graphics with the technology we have today. I look forward to the future as a PC gamer, though.

#35 Greyhound222
Member since 2005 • 2899 Posts
[QUOTE="Greyhound222"]Actually,people have integrated ray-tracing into old games such as Quake 2,so it may be in the near future.rimnet00
Considering only recently some researchers used 4 PS3's to conduct real-time ray tracing on a single car model, and it was considering a big deal, making such a statement is vastly unbelievable. The truth is, what you are refering to was a "proof in concept" model and was not real time ray tracing.

http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/ There you go,Ray Tracing in Quake 3.Well,20 FPS @ 512x512 with a 36 Ghz CPU.:P Highlights realtime 3d raytrace engine possible resolution up to 16384x16384x32 colored realtime per-pixel shadows realtime per-pixel shadows Now all we have to do is wait for 9Ghz Quad-Cores to become common.:P Happy?

#36 Deamon321
Member since 2005 • 1568 Posts
Hmm, you do make a very good point, because CPU frequencies are MUCH higher than GPU frequencies, and correct me if I'm wrong, but I don't think there is a GPU out there that hits a gigahertz. One thing that might be a problem is that a graphics card is like its own computer, not just a processor: it has its own board, processor (the GPU) and memory. Excellent post though, a truly revolutionary idea.

#37 Funkyhamster
Member since 2005 • 17366 Posts

Hmm, you do make a very good point, because CPU frequencies are MUCH higher than GPU frequencies, and correct me if I'm wrong, but I don't think there is a GPU out there that hits a gigahertz. One thing that might be a problem is that a graphics card is like its own computer, not just a processor: it has its own board, processor (the GPU) and memory. Excellent post though, a truly revolutionary idea. Deamon321

Graphics cards also have very specific designs that work better for processing 3D graphics.


#38 Deamon321
Member since 2005 • 1568 Posts
Well, I think the first step they'll take in all of this is to keep the graphics cards and make two cores of the processor work with them, because let's face it, Nvidia and ATI do not want to lose their money and they'll find a way to still make us pay.

#39 Gimano
Member since 2007 • 25 Posts

I always have the tendency to think that marketing and hunger for maximum profit block the evolution of technology, especially computer technology. I have the feeling that the biggest computer science companies are always 4 or 5 steps ahead of what we call reality, in order to make their research fully profitable.

What I see in posts like these is that the human mind can push the limits very far... if you put 20 like-minded people together it goes even further and faster. Yet in our reality it evolves relatively slowly: baby steps, one after another.


#40 rimnet00
Member since 2003 • 11003 Posts

[QUOTE="rimnet00"][QUOTE="Greyhound222"]Actually,people have integrated ray-tracing into old games such as Quake 2,so it may be in the near future.Greyhound222
Considering only recently some researchers used 4 PS3's to conduct real-time ray tracing on a single car model, and it was considering a big deal, making such a statement is vastly unbelievable. The truth is, what you are refering to was a "proof in concept" model and was not real time ray tracing.

http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/ There you go,Ray Tracing in Quake 3.Well,20 FPS @ 512x512 with a 36 Ghz CPU.:P Highlights realtime 3d raytrace engine possible resolution up to 16384x16384x32 colored realtime per-pixel shadows realtime per-pixel shadows Now all we have to do is wait for 9Ghz Quad-Cores to become common.:P Happy?


That project proves what I mentioned in my posts above. Of course, if you are grasping at straws, I should have mentioned that it is impractical and impossible on a single machine. The research project you linked me to is using a cluster of computers to render a very low-poly environment, at an extremely low resolution, at 20 fps, with extremely low-resolution textures -- this, once again, only shows how far away ray tracing is from being practical. This research project pretty much demonstrates exactly what I was trying to get across with the 4 PS3 example.

Note that I have actually written a few ray tracers in college; I'm not exactly a noob in this field talking out of my ass. In fact, writing a ray tracer is easy; it's just that the hardware is extremely far from being there yet.

If you want to see some of the stuff I worked with: http://www.cs.umass.edu/~ruiwang/


#41 FragMonkey09
Member since 2005 • 1543 Posts

After reading all the utter crap in the E3 forum, this post WOKE UP MY MIND!

Seriously though, you have a great writing style.

On to the point: at the moment, I do not think a Core 2 Quad could render Crysis-like graphics, although I am sure your theory is correct, and in the not-too-distant future we will have combined GPUs and CPUs into one powerful processing unit. This might actually come sooner than expected: just last month or so, Nvidia released a supercomputer of sorts to be used in medical research, and it seems they are gaining experience in fields other than just GPUs.


#42 Hondo189
Member since 2005 • 272 Posts
This idea would seem even more feasible once manufacturers move to man-made diamond and phase out silicon.

#43 Gangans
Member since 2007 • 1273 Posts

rimnet00 wrote:
My main issue, though, is that your hypothesis is talking about devolving the entire field of computer graphics. Like I mentioned before, CPUs and GPUs are in the process of being unified - in other words, multiple CPU and GPU dies on one chip. This does not translate to CPUs in the future emulating graphical techniques. Ray tracing, as you mention in your first post, is actually a 20-some-odd-year-old creation, and it still isn't possible to do efficiently in real time. Imagining the CPU taking over graphics processing, even in 20 years, is far-fetched, and once again devolving. This isn't like audio techniques being emulated on the CPU; the audio emulation example you refer to is much less complicated.



I think, as one poster mentioned in this thread, Halo 1-like graphics could be achieved in software mode on the fastest of today's CPUs at a playable FPS. Which goes to prove your point, I guess: CPUs are WAY behind specialised GPUs when it comes to rendering graphics in real time.

HOWEVER, you assume sequential, evolutionary progress in the field of CPUs over the eternity of computer evolution that is a 20-year period. That could very well be the case!
Nonetheless, given this timeframe (which I believe is vast), I have been urging and hoping for a few revolutions along the way, in light of some nifty new technologies being demonstrated in small-scale applications. There is no reason why we can't build another flying machine or admire the vistas of the moon firsthand in a spacesuit - both revolutionary achievements that took only a few years from theoretical conception to physical realisation.
Here is a small example of a new technology being tested that could have huge implications for how CPUs are designed and built: http://archives.cnn.com/2001/TECH/science/05/17/quantum.computer/ Mind you, the article is old, but I don't think they abandoned this work just for the hell of it.

I think we should bring up radical new ideas if we want to instigate revolutionary change. If we let market trends and baby steps reign, we will simply take longer to get to point B. I guess I'm just a visionary, and all too eager to see the future. ;)

rimnet00 wrote:
It is nice to see that there are people who are looking forward. I am just letting you know, as someone who has studied in this field, I know it's not a feasible notion.



Not yet, I agree.

elmertheowl wrote:
Very good post. I felt like I was reading a Pop Sci article or something. What's your line of work?



Full time university student, flat-out financially broke too. :(

elmertheowl wrote:
In reference to this, it makes sense that when companies make a leap in the advancement of their technology, they release it when it's profitable. It's the same across all markets: what something can offer matters less than how much money it can make. They'll want to squeeze their cash cows till the milk runs dry, then advance in increments, and not introduce the latest technology (at least at affordable prices) till they stop making money on everything in between.



My sentiments exactly. We should not allow mere market trends to be the only things dictating technological progression in these areas.


FragMonkey09 wrote:
After reading all the utter crap in the E3 forum, this post WOKE UP MY MIND!

Seriously though, you have a great writing ****

On to the point: at the moment, I do not think a Core 2 Quad could render Crysis-like graphics, although I am sure your theory is correct, and in the not-too-distant future we will have combined GPUs and CPUs into one powerful processing unit. This might actually come sooner than expected: just last month or so, Nvidia released a supercomputer of sorts to be used in medical research, and it seems they are gaining experience in fields other than just GPUs.



Yes, I agree; my Crysis example was perhaps an over-enthusiastic exaggeration.

As to your other point, AMD/Nvidia are actively tinkering with CPU/GPU fusion. I refer you to this article, which is only 4 hours old (as of this post): http://www.dailytech.com/AMD+Talks+GPGPU+and+Fusion/article8066c.htm


#44 Gangans
Member since 2007 • 1273 Posts

This idea would seem even more feasible once manufacturers move to man-made diamond and phase out silicon. Hondo189

I think nanotubes in place of copper wires and photons in place of electrons would be even more interesting to try: a light-driven nano-processor that doesn't even require electrical input and barely generates any heat, yet is millions of times faster.


#45 FragMonkey09
Member since 2005 • 1543 Posts

Ah, thanks for that link. My prediction is that this will start out slowly. Once AMD releases this new GPGPU, it will be something like the Ageia PhysX processor, which nobody is currently buying because games do not support it. Given enough time, probably not as much as the 20 years you predict, this could become mainstream.

Another thing: the GPGPU AMD is working on is not really just a CPU that handles graphics too; it seems like it's basically a GPU "fused" into a CPU, hence the project name "Fusion." If this does become a reality, though, how does one upgrade the GPU part of the CPU? Buy a whole new CPU/GPU Fusion all over again?

EDIT: It seems to have censored the word "style" when you quoted me... :/


#46 Gangans
Member since 2007 • 1273 Posts

Ah, thanks for that link. My prediction is that this will start out slowly. Once AMD releases this new GPGPU, it will be something like the Ageia PhysX processor, which nobody is currently buying because games do not support it. Given enough time, probably not as much as the 20 years you predict, this could become mainstream.

Another thing: the GPGPU AMD is working on is not really just a CPU that handles graphics too; it seems like it's basically a GPU "fused" into a CPU, hence the project name "Fusion." If this does become a reality, though, how does one upgrade the GPU part of the CPU? Buy a whole new CPU/GPU Fusion all over again?

EDIT: It seems to have censored the word "style" when you quoted me... :/

FragMonkey09

One can always get a separate GPU and stick it into a PCI-E slot, as we do now, if the GPU accompanying the CPU becomes obsolete. The two could even be made to work in unison. But I agree it wouldn't really be a great change.

What AMD/ATI are doing seems to be more like an onboard integrated video solution - in other words, more baby steps.

But it would be good if one could get moderate 3D acceleration with just a cheap CPU.


#47 onemic
Member since 2003 • 5616 Posts
So what exactly is ray tracing? From reading the description on Wikipedia, it seems like it's just an accurate representation of light being cast, based on the light that reaches your eyes.

#48 gbarules2999
Member since 2006 • 390 Posts

This whole post is very interesting; here's my noobish input.

First off, stop trying to use very big words in gigantic paragraphs to try and scare each other off. It's not working, and it makes the entire thread harder to grasp. I got the point, being a moderately computer-literate person. But when you start rattling off your blah blah blah in your debates... settle down, you're all smart, let it go.

Now, as for fast processors: twenty years is a long time. Did anybody in the year 1987 think of 3D games? Heck no! We were all too busy playing Zelda, wondering how they could make the world bigger. Graphics, schmaphics. We were not interested in hardware, we were interested in ideas: a new age of whatever Nintendo could bring to our TVs. I don't think anyone conceived of the whole notion of 3D graphics until everything got more powerful and Wolfenstein 3D decided to take it further.

But that's the thing I'm thinking: we have no idea what's coming next. Most Mario gamers didn't understand 3D anyway, much less guess that their favorite plumber would be bouncing off polygonal walls a decade later. Who are we to guess what will be coming around the mountain? Holograms? VR?

As for the hardware, that's the last of my worries. Graphics will keep improving, but the improvement will always be a diminishing curve. Every gap between the PS6 and the Xbox 2160 will be minuscule, because developers will be chasing that little increase in the graphics. Every drop of sweat will cost a new processor by the time we're finished, and the whole thing will just suddenly end. And where will the industry be when there aren't any graphics left to chase, just one life-perfect engine that is the only thing that needs to exist any more?

That's where the industry is going. Perhaps people will not find a need for graphics cards when CPUs can run everything with style, but I doubt it. As processors go up, the games and programs follow suit. Do you think Halo 2 needs the system it supposedly requires to run? Heck no, I might be able to run the damn thing on my crappy box if they did a good job with it. But they're taking initiative with the possibilities, and so we get... well, shinier suits and a demand for the next faster unit.

Besides, I'm not the only one on this forum who would defend a gameplay-awesome game that looks like crap. I play StarCraft all the time, and I love my DS, even though it isn't very good for graphics. I'm a sucker for pretty faces, sure, but for StarCraft II they could do it in a slightly better 2D engine and I'd be happy. Make the AI awesome and behave like real people, and I don't care how nice it looks.

One last thing: why is it that we are so amazed at computer graphics over real life? Crysis footage makes me gasp, but the bluffs around my area bore me. Huh.


#49 frizzyman0292
Member since 2007 • 2855 Posts
Nice read, thanks for posting.

#50 rimnet00
Member since 2003 • 11003 Posts

So what exactly is ray tracing? From reading the description on Wikipedia, it seems like it's just an accurate representation of light being cast, based on the light that reaches your eyes. onemic

To explain it simply, it is a simulation of how light works in the real world. Surfaces in the real world absorb and reflect light, which is what gives them their colours. Ray tracing works by shooting rays into the 3D scene for every pixel in the viewable image and following them between the surfaces they hit and the light sources, calculating the exact colour each bounce should produce.
The lighting in games today is rasterization-based instead. That is a lot faster, but it doesn't look nearly as good as ray tracing.
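To make that concrete, here is a stripped-down sketch of the core of a ray tracer: intersect an eye ray with a surface, then point a ray at the light to shade the hit. One hard-coded sphere, one point light, one ray; all the numbers are made up purely for illustration.

#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec normalize(Vec a) { double len = std::sqrt(dot(a, a)); return a * (1.0 / len); }

// Ray/sphere intersection: distance along the ray to the hit, or -1 on a miss.
double hitSphere(Vec origin, Vec dir, Vec center, double radius) {
    Vec oc = origin - center;
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    return disc < 0.0 ? -1.0 : (-b - std::sqrt(disc)) / 2.0;
}

int main() {
    Vec eye{0, 0, 0}, sphereCenter{0, 0, -5}, light{5, 5, 0};
    Vec dir = normalize(Vec{0, 0, -1});        // one eye ray; a full tracer fires one per pixel
    double t = hitSphere(eye, dir, sphereCenter, 1.0);
    if (t > 0.0) {
        Vec hit = eye + dir * t;
        Vec normal = normalize(hit - sphereCenter);
        Vec toLight = normalize(light - hit);  // secondary ray towards the light source
        // Lambertian term: how directly the surface faces the light.
        double brightness = std::fmax(0.0, dot(normal, toLight));
        std::printf("hit at t=%.2f, brightness=%.2f\n", t, brightness);
    }
    return 0;
}

Doing that for every pixel, every bounce and every light, many times a second, is what makes real-time ray tracing so expensive.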