This was posted last year, and it's one of the better explanations of why MS chose such a strange architecture.
"So, could the X1's secret sauce be voxel cone ray tracing?"
I did the legwork; I'll just present verifiable evidence, and you decide. I'm going to try to keep it much simpler this time. I've written it in a question-and-answer format so that members who aren't up to speed can follow along, but if you already know what these things are, just skip ahead to the next question.
What is Ray Tracing?
Some would say it's the holy grail of computer graphics lighting. It gives you realistic dynamic reflections, lights, shadows, and materials, plus huge increases in geometry. It's the future. If you don't know what it is, you should probably stop right here and go look it up.
Why is ray tracing awesome?
Ray tracing rendered with POV-Ray, a 3D graphics program:
A true ray tracing engine gives you true global illumination, shadows, lights, reflections, specular maps, and realistic material surfaces, all as standard. Energy conservation is easy to do, and the lighting looks amazing. Materials can finally look realistic without a whole bunch of texture tricks. You get refractions and transparencies (glass looks like glass; water looks like water and bends and distorts light as it should). It removes from development all the hacks that never quite look as good.
Developers don't have to create a whole bunch of reflection maps to do reflections, water, and many other materials. They're a pain in the ass and never quite look good enough anyway. Ray tracing also offers a huge increase in geometry. Rasterized graphics are easy to get up and running, but past a certain point, the more polygons you have, the slower it gets. Ray tracing's cost depends mainly on resolution, and it behaves in the complete opposite way: it takes a lot of power, more than is currently available, to get it up and running, but once you cross that point you can have lots and lots of geometry at very little additional cost. That means more actual 3D detail in objects, especially organic ones like plants and trees. So a switch is expected in the industry shortly; well-known figureheads like John Carmack are already gearing their studios up in preparation for it.
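To make that cost argument concrete, here's a minimal, self-contained C++ sketch (my own illustration, not code from any shipping engine): one primary ray per pixel, tested against every object. Brute force costs pixels × objects, which is why real tracers add an acceleration structure (a BVH or octree) so the per-ray cost grows only logarithmically with scene size.

```cpp
// Minimal ray tracer sketch (illustrative only): one primary ray per
// pixel, intersected against a list of spheres. Note the cost shape:
// pixels * objects. With an acceleration structure, per-ray cost grows
// roughly logarithmically with scene size, which is why ray tracing
// scales so well with geometry.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y, z; };
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec center; double radius; };

// Returns distance along the (normalized) ray to the nearest hit, or -1 on miss.
double intersect(Vec origin, Vec dir, const Sphere& s) {
    Vec oc = sub(origin, s.center);
    double b = dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

int main() {
    const int W = 64, H = 32;
    std::vector<Sphere> scene = {{{0, 0, 5}, 1.5}, {{2, 1, 8}, 1.0}};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Build a normalized ray through this pixel.
            Vec dir = {(x - W / 2.0) / H, (y - H / 2.0) / H, 1.0};
            double len = std::sqrt(dot(dir, dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};
            double nearest = -1;
            for (const Sphere& s : scene) {  // brute force: O(objects) per ray
                double t = intersect({0, 0, 0}, dir, s);
                if (t > 0 && (nearest < 0 || t < nearest)) nearest = t;
            }
            std::putchar(nearest > 0 ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```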
Can we have it?
Not yet. We're actually really close on the PC, but still nowhere near being able to run it on next-gen consoles. Real ray tracing is around the corner on the PC, though. See the Brigade ray tracing engine, and keep an eye on it; it's going to be awesome. The only thing confirmed for consoles is screen space reflections, which are NOT to be confused with the real thing.
What's the difference between ray tracing and screen space reflections?
SSR is a standard feature of Crytek's CryEngine, Frostbite, Unreal Engine 4, Guerrilla's KZ:SF engine, and a lot of others, but it's not the same thing. They're not even remotely similar. SSR is a hack that gives you dynamic reflections, and only that; it's not a lighting engine. It's just used to get dynamic reflections onto rasterized graphics, but that's all: no transparency, no lighting and shadows, no global illumination, no refraction benefits, none of that. And the reflections themselves only take into consideration objects on screen, not objects that may be out of your view but whose reflections you should still see when you're at an angle. They just won't be there.
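For the curious, this is roughly the idea behind SSR. The following is a stripped-down C++ sketch (illustrative only, assuming a simple camera-space depth buffer and a trivial pinhole projection): march the reflected ray in small steps, project each point to screen coordinates, and compare against the depth buffer. The early-out when the ray leaves the screen is exactly why off-screen objects never show up in the reflection.

```cpp
// Stripped-down sketch of the screen space reflection idea (illustrative).
// We march the reflected ray in steps and, at each step, project to screen
// coordinates and compare against the depth buffer. If the projected point
// leaves the screen, we simply give up, which is why off-screen objects
// never appear in the reflection.
#include <cstdio>

const int W = 320, H = 240;
float depthBuffer[H][W];   // camera-space depth of the nearest surface per pixel

struct V3 { float x, y, z; };

// Returns true and the hit pixel if the ray strikes geometry recorded
// in the depth buffer; false if it exits the screen first.
bool traceScreenSpace(V3 pos, V3 dir, int& hitX, int& hitY) {
    const float step = 0.05f;
    for (int i = 0; i < 512; ++i) {
        pos = {pos.x + dir.x * step, pos.y + dir.y * step, pos.z + dir.z * step};
        // Trivial pinhole projection into pixel coordinates.
        int sx = int((pos.x / pos.z) * H + W / 2.0f);
        int sy = int((pos.y / pos.z) * H + H / 2.0f);
        if (sx < 0 || sx >= W || sy < 0 || sy >= H || pos.z <= 0.1f)
            return false;                    // left the screen: no reflection data
        if (pos.z >= depthBuffer[sy][sx]) {  // passed behind a visible surface: hit
            hitX = sx; hitY = sy;
            return true;
        }
    }
    return false;
}

int main() {
    // Fill the depth buffer with a flat wall at z = 5 for demonstration.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            depthBuffer[y][x] = 5.0f;
    int hx, hy;
    if (traceScreenSpace({0, 0, 1}, {0.2f, 0.1f, 1.0f}, hx, hy))
        std::printf("hit at (%d, %d)\n", hx, hy);
    else
        std::printf("no hit\n");
}
```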
Screen space reflections:
Brigade 2 ray tracing lighting engine:
Note how close it gets to images you typically only see in 3D animation programs or Pixar movies. Except it's in real time, running at roughly 30fps, though it currently requires roughly a Titan to run and it's still very noisy in motion.
The entire lighting engine, the shadows, and the material creation are driven by the ray tracing engine. There isn't a lot of texturing going on here; the look of the materials doesn't depend so much on the original texture as on how light bounces off the defined material surface. Procedural textures work fantastically with ray tracing for creating realistic materials. Notice the reflections at the top, which include objects not on screen, which is beyond the capabilities of screen space reflections.
Most developers still use reflection maps for their reflections, which are just textures taken from the point of view of the reflective object. It's as fake as it gets, and next-gen racing games like Forza 5 and Driveclub are both still using this age-old hack.
What is Voxel Cone Ray Tracing?
Ray tracing done cheap. It works in a similar way, but instead of individual rays (lines) being shot from each pixel, it uses cones and voxels. Each cone covers a larger area, so only a few are needed; prior implementations used somewhere along the lines of 9-12 cones, which obviously saves a lot of power compared to the roughly a million rays needed at 1080p. That makes ray tracing almost practical on current GPU power. The triangle/polygon data in the scene is converted and stored as voxels, which the cones are then traced against.
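To make that less abstract, here's a toy C++ sketch of the core cone tracing loop (my own illustration; real implementations like Crassin's store filtered color and opacity and sample with trilinear interpolation, which this skips). The key idea: as the cone widens with distance, it samples coarser mip levels of a prefiltered voxel grid and composites front to back, so a handful of cones can replace millions of individual rays.

```cpp
// Toy sketch of the core voxel cone tracing loop (illustrative).
// As the cone widens, we sample coarser mip levels of the prefiltered
// voxel grid, stepping proportionally to the cone's width.
#include <cmath>
#include <cstdio>
#include <vector>

const int N = 64;                  // base grid resolution (mip 0)
const int LEVELS = 7;              // 64, 32, 16, 8, 4, 2, 1
std::vector<float> mips[LEVELS];   // prefiltered occupancy per level

int res(int level) { return N >> level; }

float sample(int level, float x, float y, float z) {
    int r = res(level);
    int ix = int(x * r), iy = int(y * r), iz = int(z * r);
    if (ix < 0 || ix >= r || iy < 0 || iy >= r || iz < 0 || iz >= r) return 0;
    return mips[level][(iz * r + iy) * r + ix];
}

// Front-to-back accumulation along one cone.
float coneTrace(float px, float py, float pz,
                float dx, float dy, float dz, float apertureTan) {
    float occlusion = 0, t = 1.0f / N;
    while (t < 1.0f && occlusion < 0.99f) {
        float diameter = 2 * apertureTan * t;   // cone width at distance t
        float level = std::log2(diameter * N);  // mip whose voxels match that width
        int lvl = level < 0 ? 0 : (level >= LEVELS - 1 ? LEVELS - 1 : int(level));
        float a = sample(lvl, px + dx * t, py + dy * t, pz + dz * t);
        occlusion += (1 - occlusion) * a;       // front-to-back compositing
        t += diameter * 0.5f;                   // step scales with cone width
    }
    return occlusion;
}

int main() {
    // Build mip 0: a solid box of voxels, then downsample by averaging.
    mips[0].assign(N * N * N, 0.0f);
    for (int z = 24; z < 40; ++z)
        for (int y = 24; y < 40; ++y)
            for (int x = 24; x < 40; ++x)
                mips[0][(z * N + y) * N + x] = 1.0f;
    for (int l = 1; l < LEVELS; ++l) {
        int r = res(l), R = res(l - 1);
        mips[l].assign(r * r * r, 0.0f);
        for (int z = 0; z < r; ++z)
            for (int y = 0; y < r; ++y)
                for (int x = 0; x < r; ++x) {
                    float sum = 0;
                    for (int k = 0; k < 8; ++k)  // average the 8 child voxels
                        sum += mips[l - 1][((2 * z + (k >> 2)) * R
                                 + 2 * y + ((k >> 1) & 1)) * R + 2 * x + (k & 1)];
                    mips[l][(z * r + y) * r + x] = sum / 8;
                }
    }
    // One diffuse-style cone aimed at the box.
    std::printf("occlusion = %.3f\n", coneTrace(0.1f, 0.5f, 0.5f, 1, 0, 0, 0.3f));
}
```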
It's an approximation, but you can't argue with the results:
What are voxels?
Unlike polygons, where the basic unit is the triangle, a voxel is a 3D cube: they're volume based. Voxels have never really been that popular in videogames, apart from the well-known 90s game Outcast, but now they're making a comeback: Project Spark and EverQuest Next are using them for their graphics engines. They're great at creating organic-looking graphics, and they have superior scalability (zooming in and out to great draw distances) and performance compared to polygons.
Ever notice how you can't typically make out the individual polygons in Project Spark?
That's because those aren't typical polygon graphics made out of triangles you're looking at; they're voxels. That's why those objects scale up and down so nicely. Kodu, Project Spark's father, uses a voxel engine as well.
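If you're wondering how triangles become voxels in the first place, here's a toy voxelization sketch (my own illustration; real engines voxelize conservatively on the GPU, but the outcome is the same): scatter sample points across each triangle and mark the grid cells they land in.

```cpp
// Toy voxelization sketch (illustrative): scatter sample points across a
// triangle using barycentric coordinates and mark the voxels they land in.
// The end result is the same as GPU voxelization: triangles become entries
// in a regular 3D grid.
#include <cstdio>
#include <vector>

const int N = 32;
std::vector<bool> grid(N * N * N, false);   // occupancy grid over [0,1)^3

struct P { float x, y, z; };

void voxelizeTriangle(P a, P b, P c) {
    const int S = 64;   // samples per barycentric axis
    for (int i = 0; i <= S; ++i)
        for (int j = 0; j <= S - i; ++j) {
            float u = float(i) / S, v = float(j) / S, w = 1 - u - v;
            float x = u * a.x + v * b.x + w * c.x;
            float y = u * a.y + v * b.y + w * c.y;
            float z = u * a.z + v * b.z + w * c.z;
            int ix = int(x * N), iy = int(y * N), iz = int(z * N);
            if (ix >= 0 && ix < N && iy >= 0 && iy < N && iz >= 0 && iz < N)
                grid[(iz * N + iy) * N + ix] = true;
        }
}

int main() {
    voxelizeTriangle({0.1f, 0.1f, 0.5f}, {0.9f, 0.2f, 0.5f}, {0.5f, 0.9f, 0.5f});
    int count = 0;
    for (bool b : grid) count += b;
    std::printf("occupied voxels: %d\n", count);
}
```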
So those voxels are somehow now being used to do ray tracing?
Yes.
Who came up with Voxel Cone Ray Tracing?
Cyril Crassin is typically credited with it. Here's a paper outlining a course on it, which was the big clue in this research. You can try to decipher it, or you can take my word for it and save yourself a headache and some time. Up to you.
Why didn't it make it big? Why didn't I hear about it?
It did. It was a big hit at SIGGRAPH, and Unreal Engine 4 was initially based on it. It turns out, some are saying, that they eventually had to strip it out (quietly) because they couldn't get it up to speed on next-generation consoles and mid-range PCs. However, there's hope: there's also a plugin for Unity, and it runs quite well.
What was the problem with it?
The data was being stored in a sparse voxel octree. Don't worry if you don't know what that means; let's just say it's a 3D, layered voxel grid. What's important is that traversing this structure is very slow.
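Here's a sketch of why that's slow (my own illustration): every single voxel lookup has to walk from the root down to a leaf, following a pointer at each level. These are dependent memory reads, which GPUs hate.

```cpp
// Sketch of why sparse voxel octree lookups are slow (illustrative).
// A fetch at depth D costs D dependent pointer chases: each read must
// finish before the next address is even known.
#include <cstdio>

struct OctreeNode {
    OctreeNode* children[8];   // null where space is empty ("sparse")
    float occupancy;           // filtered value stored at this node
};

// Descend to the requested depth, re-mapping coordinates into the
// chosen child's unit cube at each step.
float lookup(const OctreeNode* node, float x, float y, float z, int depth) {
    for (int d = 0; d < depth; ++d) {
        int cx = x >= 0.5f, cy = y >= 0.5f, cz = z >= 0.5f;
        const OctreeNode* child = node->children[(cz << 2) | (cy << 1) | cx];
        if (!child) return 0.0f;   // empty region: early out
        node = child;
        x = x * 2 - cx; y = y * 2 - cy; z = z * 2 - cz;
    }
    return node->occupancy;
}

int main() {
    // Two hand-built levels: root with one child in the (+x,+y,+z) octant.
    OctreeNode leaf{{}, 0.75f};
    OctreeNode root{{}, 0.0f};
    root.children[7] = &leaf;
    std::printf("%.2f\n", lookup(&root, 0.9f, 0.9f, 0.9f, 1));  // 0.75
    std::printf("%.2f\n", lookup(&root, 0.1f, 0.1f, 0.1f, 1));  // 0.00 (empty)
}
```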
How did they fix it?
Instead of storing the voxel data in an octree, they store it in a 3D texture: a cube of voxels, laid out as an array of 2D textures. Now it's fast, but this had some problems of its own.
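Contrast the octree walk above with the 3D texture approach, sketched below (again my own illustration): the voxel data lives in a dense array of 2D slices, so a lookup is one address computation and one read, with no tree walk at all. The price, as the next question covers, is that empty space still takes memory.

```cpp
// The 3D-texture alternative, sketched (illustrative): voxels live in a
// dense array of 2D slices, so a lookup is constant time. The downside
// is that empty space is stored too, which is what makes it big.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Volume {
    int w, h, d;
    std::vector<float> slices;   // d slices of w*h texels, contiguous

    Volume(int w_, int h_, int d_) : w(w_), h(h_), d(d_), slices(size_t(w_) * h_ * d_, 0.0f) {}

    // Direct, constant-time lookup from normalized coordinates.
    float sample(float x, float y, float z) const {
        int ix = std::clamp(int(x * w), 0, w - 1);
        int iy = std::clamp(int(y * h), 0, h - 1);
        int iz = std::clamp(int(z * d), 0, d - 1);
        return slices[(size_t(iz) * h + iy) * w + ix];
    }
};

int main() {
    Volume v(64, 64, 64);
    v.slices[(size_t(32) * 64 + 32) * 64 + 32] = 1.0f;     // light up one voxel
    std::printf("%.1f\n", v.sample(0.5f, 0.5f, 0.5f));     // 1.0
}
```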
What was the problem with using 3D textures for cone ray tracing?
The 3D textures were big and required a lot of memory.
This demo served both as a means to familiarize myself with voxel cone tracing and as a testbed for performance experiments with the voxel storage: plain 3D textures, real-time compressed 3D textures, and 3D textures aligned with the diffuse sample rays were tested. Sparse voxel octrees were not implemented due to time constraints, but would have been nice to have as a baseline reference. Compared to SVO in the context of voxel cone tracing (as opposed to ray casting, where SVO is a clear winner), 3D textures allow for easier filtering, direct lookups without evaluating the octree structure, and potentially better cache and memory bandwidth utilization (depending on cone size and scene density). The clear downside is the space requirement: 3D textures can’t scale to larger scenes or smaller, more detailed voxels. There may be ways to work around this deficiency: sparse textures (GL_AMD_sparse_texture), compression, or hybrid schemes that mix tree structures with 3D textures.
http://www.geeks3d.com/20121214/voxel-cone-tracing-global-illumination-in-opengl-4-3/
How did they fix that?
Using partially resident textures.
What the hell are partially resident textures?
They chop up an enormous texture into tiny tiles and stream in only what's needed, saving both RAM and bandwidth.
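Here's a toy model of the idea (illustrative only; the real mechanism is the GPU's page tables, exposed through things like DirectX 11.2's tiled resources): the full texture exists only virtually, and physical 64 KB tiles are allocated on demand as they're touched.

```cpp
// Toy sketch of the partially resident texture idea (illustrative).
// The full texture exists only virtually; physical 64 KB tiles are
// allocated on demand as they are touched.
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

const int TILE = 128;   // 128x128 texels * 4 bytes = 64 KB per tile

struct SparseTexture {
    int width, height;   // virtual size, may be enormous
    std::map<uint64_t, std::vector<uint32_t>> residentTiles;  // only what's needed

    SparseTexture(int w, int h) : width(w), height(h) {}

    uint32_t fetch(int x, int y) {
        uint64_t key = (uint64_t(y / TILE) << 32) | uint32_t(x / TILE);
        auto it = residentTiles.find(key);
        if (it == residentTiles.end()) {
            // Tile fault: stream the 64 KB tile in (here, just allocate it).
            it = residentTiles.emplace(key,
                     std::vector<uint32_t>(TILE * TILE, 0)).first;
        }
        return it->second[(y % TILE) * TILE + (x % TILE)];
    }

    size_t residentBytes() const { return residentTiles.size() * TILE * TILE * 4; }
};

int main() {
    SparseTexture tex(16384, 16384);   // ~1 GB if fully resident at 4 B/texel
    tex.fetch(100, 200);               // touches one tile
    tex.fetch(9000, 12000);            // touches another
    std::printf("resident: %zu KB of a virtual %lld MB texture\n",
                tex.residentBytes() / 1024,
                16384LL * 16384 * 4 / (1024 * 1024));
}
```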
Left: Original texture.
Middle: It gets stored in a tile pool.
Right: How it's stored in memory.
The original texture on the left is big. The combined 64 KB tiles on the right take up very little space in RAM. It's so powerful you can store textures as big as 3 GB in 16 MB of RAM (or eSRAM?).
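A quick back-of-the-envelope check of that claim (assuming 64 KB tiles and 4 bytes per texel):

```cpp
// Back-of-the-envelope check (assumptions: 64 KB tiles, 4 B per texel).
// A 16 MB pool holds 16 MB / 64 KB = 256 resident tiles, while a 3 GB
// virtual texture spans 49,152 tiles, so well under 1% of the texture
// needs to be physically resident at any moment.
#include <cstdio>

int main() {
    const long long tileBytes    = 64 * 1024;
    const long long poolBytes    = 16LL * 1024 * 1024;        // a 16 MB pool
    const long long virtualBytes = 3LL * 1024 * 1024 * 1024;  // a 3 GB texture
    std::printf("resident tiles: %lld\n", poolBytes / tileBytes);     // 256
    std::printf("virtual tiles:  %lld\n", virtualBytes / tileBytes);  // 49152
}
```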
Why is this important in reference to the X1?
Because DirectX 11.2 and the X1's chip architecture are built to do partially resident resources in hardware. That removes the limitations that held back earlier software implementations, such as the texture streaming in John Carmack's Rage.
The X1's architecture and data move engines have tile and untile features natively, in hardware:
Doesn't the PS4 have this too?
Both AMD GPUs support partially resident textures, but we do know for a fact that Microsoft added additional dedicated hardware in the X1 architecture to focus on this area beyond AMD's standard implementation.
Did Sony?
Ask Sony.
How come no one's talked about this?
They have. MS talked about partially resident resources at their DirectX Build conference, and they explained the move to partially resident resources as a solution in this paper. It just might end up being even more important than originally believed. More recently, an unnamed third-party developer touted better ray tracing capabilities on the X1:
Xbox One does, however, boast superior performance to PS4 in other ways. “Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will be likely be faster,” said one developer.
http://www.edge-online.com/news/pow...erences-between-ps4-and-xbox-one-performance/
When might we hear something about it?
DirectX 11.2 was only unveiled earlier this year, so no launch games would have been designed around it. Partially resident textures are still a fairly new technique, only now getting hardware support. Voxel cone ray tracing is also a fairly new technique, and the alternative of using 3D textures along with partially resident textures is even newer; not many have attempted it. Developers will certainly need time to start experimenting with both.
So what now?
We don't know to what extent partially resident resources will be used on the X1, but if it's enough to pull off cone ray tracing using partially resident 3D textures, it's going to be a pretty big deal. It was a big blow when Epic had to yank it out of UE4, but the Unity plugin and the 3D texture implementations still give hope, and even Epic are considering re-introducing it at some point. Perhaps second- and third-generation games will attempt to use it. If possible, I'd expect to hear more about it soon.