If the main problem is memory consumption, then they should be able to implement non-flat reflections. Maybe not ray-tracing quality, but seriously, how many polygon games have reflective surfaces at all? Sure, it's maybe not as fantastic as they claim, but if they can release a downloadable demo in a few years with animations, lighting, etc., I don't see why you're judging them so hard. Sure, it's maybe not "unlimited", but you could still make an AAA game with this if it works.

If it has the performance they say it has, it would be amazing. A software renderer running at 25 FPS on a quad-core laptop is pretty f*cking amazing considering the output. He also said the demo could easily be optimized to 3x the performance (questionable, but whatever). The real question is how well it will scale across many processors, because that decides how well a GPU version would work. A Radeon HD6970 has 1536 stream processors running at 880MHz; compared to a quad-core laptop CPU like the i7-2630QM in the demo, a graphics card has vastly more theoretical processing power. For a quick (maybe really inaccurate) comparison, look at Bitcoin mining: https://en.bitcoin.it/wiki/Mining_hardware_comparison
i7-2635QM (closest match): 2.93 million hashes/sec
Radeon HD6950: 272 million hashes/sec
NVidia GTX560 Ti: 67.7 million hashes/sec
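To make that gap explicit, here are the ratios from the numbers above (and remember, integer hashing is nothing like rendering, so treat this only as a rough feel for the raw throughput difference):

[code]
# Quick ratios from the mining numbers above (Mhash/s); integer hashing is not
# rendering, so this is only a very rough indication of the raw throughput gap.
cpu = 2.93        # i7-2635QM
hd6950 = 272.0    # Radeon HD6950
gtx560ti = 67.7   # Nvidia GTX560 Ti

print(f"HD6950 vs CPU:   {hd6950 / cpu:.0f}x")    # ~93x
print(f"GTX560Ti vs CPU: {gtx560ti / cpu:.0f}x")  # ~23x
[/code]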
Even if the renderer scales very badly compared to that raw throughput gap (roughly 23-93x in the mining numbers), a 10x speedup over the CPU version still seems achievable. Multiply that by the promised optimizations, say a 2x increase as a worst-case scenario, and we still get a 20x increase in performance. 20 times the 15-25 FPS achieved in the demo would be 300-500 FPS, for the content in that demo. That would be insane geometry performance compared to current AAA games no matter how you look at it, even without advanced shaders, lighting, etc.
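Spelled out as a back-of-the-envelope calculation (every factor here is my own assumption, not a measurement):

[code]
# My projection, spelled out. Both factors are assumptions, not measurements:
# ~10x from a GPU port (deliberately pessimistic vs the ~23-93x raw gap above)
# and ~2x from the optimizations they claim are still possible.
gpu_scaling = 10
optimizations = 2
demo_fps = (15, 25)   # what the software renderer managed in the demo

total = gpu_scaling * optimizations                        # 20x
print(f"{demo_fps[0] * total}-{demo_fps[1] * total} FPS")  # 300-500 FPS
[/code]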
Now before you start flaming me:
I know I compared a laptop CPU to a high-end desktop graphics card, but really, isn't that the target hardware for most AAA games? If a game looks that good, people can't really complain that it runs badly on cheap hardware. xd
The memory problem would be even worse on a GPU. Graphics cards usually don't have 8GB of memory... more like 1-2GB. And each "atom" would need more data than in that demo once you add lighting (do they need normals? I think they do...), shader data, etc.
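Just to get a feel for the scale, here's a rough count of how many atoms with full shading data would even fit in VRAM. The field sizes are pure guesses on my part (their octree format is presumably much more compact), but the order of magnitude is the point:

[code]
# Hypothetical per-"atom" footprint once you add the attributes lighting needs.
# These field sizes are my guesses, not anything Euclideon has published.
position = 3 * 4   # xyz as 32-bit floats (an octree encoding could shrink this a lot)
normal   = 3 * 1   # packed 8-bit normal
color    = 4       # RGBA8
material = 2       # shader/material index
bytes_per_atom = position + normal + color + material  # 21 bytes

for vram_gb in (1, 2, 8):
    atoms = vram_gb * 1024**3 // bytes_per_atom
    print(f"{vram_gb} GB -> ~{atoms / 1e6:.0f} million atoms resident")
[/code]

That comes out to only ~50 million atoms per gigabyte, which is not exactly "unlimited" unless something clever keeps most of the data off the card.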
Great. Now I have even more questions I want them to answer:

- Can they extract motion vectors for motion blur?
- How fast can they compute the distance from a point to the camera (needed for SSAO/HBAO and shadow mapping; remember, they did have some buggy shadow mapping in the first demo)?
- What resolution was the demo running at? Judging by the clearly aliased edge when he accidentally moved into something, I think it was quite low...
- What about antialiasing? Can it support anything faster than supersampling? Would supersampling even be slower than in today's engines? Lighting would effectively be done like deferred rendering in the engines we use today, and since a good MSAA implementation shades every sample anyway, jittered supersampling on top of this kind of geometry performance might not cost much more than deferred shading does now. And how do they not have huge aliasing problems already? Can their solution be used for antialiasing?
- Memory usage? How much data always has to be resident on the graphics card in a GPU implementation? If they have geometry "mipmaps", they could keep the whole world in RAM and only send the needed "mipmaps" to the GPU (something like the sketch below), problem solved. The same principle works between RAM and a hard drive, if the drive/SSD is fast enough (streamed of course, but I couldn't see any pop-in, which they even bragged about having eliminated... Geh, I dunno!!!)
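For that "geometry mipmaps" idea, here's a minimal sketch of what I mean, in the spirit of texture mipmap streaming. Everything in it (the level-selection rule, the node layout, the numbers) is my own assumption, not anything they've described:

[code]
import math

# A minimal sketch of the "geometry mipmap" streaming idea: keep the full dataset
# in system RAM (or on disk), and only upload the detail level each node actually
# needs for its distance to the camera, like texture mipmap streaming.
# All names and numbers here are illustrative assumptions, not Euclideon's design.

LEAF_SIZE = 0.001        # finest atom spacing in world units (assumed)
PIXELS_PER_ATOM = 1.0    # target: roughly one atom per screen pixel

def required_level(distance, fov_y, screen_height, max_level):
    """Pick the coarsest level whose atom spacing still projects to <= 1 pixel."""
    # World-space size covered by one pixel at this distance.
    pixel_size = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    level = 0
    while level < max_level and LEAF_SIZE * (2 ** (level + 1)) <= pixel_size * PIXELS_PER_ATOM:
        level += 1
    return level  # 0 = finest data, higher = coarser "mipmap"

def stream(nodes, camera_pos, fov_y, screen_height, gpu_cache):
    """Upload only the node/level pairs that aren't already resident on the GPU."""
    for node in nodes:
        dist = max(1e-6, math.dist(camera_pos, node["center"]))
        lvl = required_level(dist, fov_y, screen_height, node["max_level"])
        key = (node["id"], lvl)
        if key not in gpu_cache:
            gpu_cache.add(key)          # stand-in for the actual upload
            print(f"upload node {node['id']} at level {lvl} (dist {dist:.1f})")

# Toy usage: two chunks of the world, one near and one far.
world = [
    {"id": "statue", "center": (0.0, 0.0, 2.0), "max_level": 12},
    {"id": "island", "center": (0.0, 0.0, 500.0), "max_level": 12},
]
stream(world, camera_pos=(0.0, 1.7, 0.0), fov_y=math.radians(60),
       screen_height=768, gpu_cache=set())
[/code]

The nearby chunk gets its fine levels, the distant one only a coarse level, so the amount of resident data depends on the view rather than the size of the world. Whether their format actually allows something like this is exactly what I'd like them to answer.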
I understand some people are skeptical of the demo. So much is left to speculation, and not much is actually proven. But don't look at what it isn't, look at what it is! It's a different approach to rendering that could actually rival polygons for realtime applications. The fact that they got this far with about 10 people is insane, considering polygon rendering has evolved over many, many years. Think about what this could become with the funding and research that has gone into polygon rendering. I WANT this to work, because I want to see/use the end result. I WANT them to deliver a working demo when they're done with it in a year or two. If it turns out they can't, then shit happens, but it has potential. For the moment I'm gonna assume they can pull it off. Maybe they won't.