  Unlimited Detail Update  (Read 18536 times)
Offline Addictman

Senior Devvie

Medals: 3
Projects: 1

Java games rock!

« Reply #60 - Posted 2011-08-11 14:43:16 »

Oh stop being so jealous ;) It looks awesome.
Offline Eli Delventhal

JGO Kernel

Medals: 42
Projects: 11
Exp: 10 years

Game Engineer

« Reply #61 - Posted 2011-08-11 14:51:53 »

I thought it was interesting but BS last year, and I think the same this year. The number of questions they are avoiding (especially animation, as many have already pointed out) leads me to believe that they haven't solved those issues.

And in this most recent video, what really got me was the "ugly" polygon grass they showed, which was like 3 different faces shaped like a * but which moved and swayed in the wind, giving the world a very alive feeling. Then he shows his grass, which has plenty of detail but just sits there and looks dead.

The big, important fact: I play a game to feel enveloped in the world, whether it's ugly blocks like in Minecraft, simple retro 2D graphics, or super pretty polygon Valve graphics. And his demo world is static and dead; it feels like a giant clay model.

See my work:
OTC Software
Offline CommanderKeith
« Reply #62 - Posted 2011-08-11 15:15:14 »

Wow, this tech is pretty impressive, even without animation.

I think everyone's being overly critical because that tech guy isn't modest enough, the way he says "unlimited" all the time. But it's pretty clear that he's made something quite incredible.

And he's from Queensland, Australia! That's amazing in itself, since Queenslanders are bogans who can't do anything but get drunk and be crass. These guys at Euclideon seem organised and have brains.

Offline bobjob

JGO Knight

Medals: 12
Projects: 4

David A M

« Reply #63 - Posted 2011-08-11 15:18:22 »

I honestly think they are avoiding questions because the technology is relatively simple.
It's just an advanced culling algorithm done on each pixel (instead of on the frustum). It's the logical next step. Geometry is no longer the bottleneck.
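Euclideon never published their algorithm, but the per-pixel hierarchical-search idea described above can be sketched with a sparse octree: each query descends a single path from the root, so the cost per pixel is logarithmic in the world resolution rather than linear in the point count. Everything here (the class names, the color payload) is a hypothetical illustration, not their implementation:

```java
// Toy sparse octree illustrating "search, don't render": each
// pixel-sized query descends one path, so the cost is O(depth)
// regardless of how many points the world contains.
public class SparseOctree {
    static final class Node {
        Node[] children;          // null for a leaf
        int color;                // payload stored at leaf voxels
        Node(int color) { this.color = color; }        // leaf
        Node() { this.children = new Node[8]; }        // interior
    }

    final Node root;
    final int depth;              // world is 2^depth voxels per axis

    SparseOctree(Node root, int depth) { this.root = root; this.depth = depth; }

    /** Returns the color of the voxel containing (x,y,z), or -1 for empty space. */
    int sample(int x, int y, int z) {
        Node n = root;
        for (int level = depth - 1; level >= 0 && n != null; level--) {
            if (n.children == null) return n.color;    // hit a leaf early
            // Pick the child octant from one bit of each coordinate.
            int octant = (((x >> level) & 1) << 2)
                       | (((y >> level) & 1) << 1)
                       |  ((z >> level) & 1);
            n = n.children[octant];
        }
        return n == null ? -1 : n.color;
    }
}
```

A real renderer would walk this tree along a ray per pixel and stop at the first occupied node whose projected size is below a pixel, but the descent above is the core of why the cost stays flat as the data grows.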
Offline ShannonSmith
« Reply #64 - Posted 2011-08-11 16:02:45 »

The problem when you're shown one demo is that you have to assume the limits of the demo are the limits of the engine, and the demo is pretty limited. Any one of the limitations could be a deal breaker for doing anything useful.
It looks like a cube world made up of only about eight or so different cube types, which makes sense, because it's hard to imagine how the memory requirements for each cube are not massive. The only reflective surface shown is a perfectly flat mirror, and I can't see any shadows, so it's quite probable they aren't doing any dynamic lighting at all and it's all baked in: no shadows, no shiny objects, no moving lights.
If they really don't have any memory issues (as they claim), they should forget about making a voxel engine, because the compression tech alone would be worth much more.
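The memory worry is easy to make concrete: a dense voxel grid grows cubically with resolution, while storing only the visible surface (all a cube-world demo actually needs) grows quadratically, which is why the compression is the interesting part. A back-of-envelope sketch, assuming a hypothetical 4 bytes of color per voxel:

```java
// Rough voxel memory arithmetic. The 4-bytes-per-voxel figure and the
// 6*n^2 surface estimate (six faces of a solid block) are assumptions
// for illustration, not Euclideon's numbers.
public class VoxelMemory {
    /** Dense grid: every voxel stored. */
    static long denseBytes(long n) { return n * n * n * 4L; }

    /** Solid objects expose only surface voxels: roughly 6 * n^2 of them. */
    static long surfaceBytes(long n) { return 6L * n * n * 4L; }

    public static void main(String[] args) {
        long n = 1024;  // voxels per axis -- a modest scene by their claims
        System.out.printf("dense:   %d MiB%n", denseBytes(n) >> 20);   // 4096 MiB
        System.out.printf("surface: %d MiB%n", surfaceBytes(n) >> 20); // 24 MiB
    }
}
```

Even at a mere 1024 voxels per axis the dense grid is 4 GiB, while the surface alone is 24 MiB; at the "atoms per cubic millimetre" densities they talk about, only surface storage plus aggressive compression could possibly fit.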

Online theagentd
« Reply #65 - Posted 2011-08-12 08:48:35 »

If the main problem is memory consumption, then they should be able to implement non-flat reflections. Maybe not full ray tracing, but seriously, how many polygon games have reflective surfaces at all? Sure, it's maybe not as overly fantastic as they say, but if they can release a downloadable demo in a few years with animations, lighting, etc., I don't see why you're judging them so hard. Sure, it's maybe not unlimited, but you could still make an AAA game with this if it works. If it has the performance they claim, it would be amazing. A software renderer running at 25 FPS on a quad-core laptop is pretty f*cking amazing considering the output. He also said the demo could easily be optimized to 3x the performance (questionable, but whatever). The real question is how well it scales with multiple processors, since that will decide how well a GPU version works. A Radeon HD6970 has 1600 stream processors running at 880MHz. Compare a quad-core CPU, in this case a laptop i7-2630QM, to a graphics card, and the card has vastly more theoretical processing power. For a quick (maybe really inaccurate) comparison, look at Bitcoin mining:
i7-2635QM (closest match): 2.93 million hashes/sec
Radeon HD6950: 272 million hashes/sec
NVidia GTX560 Ti: 67.7 million hashes/sec
Even assuming very bad scaling relative to theoretical performance (30-100x), we can still assume a 10x speedup over the CPU version is achievable. Multiply that by the promised optimizations, say a 2x increase as a worst-case scenario, and we still have a 20x increase in performance. 20 times the 15-25 FPS achieved in the demo would be 300-500 FPS, for the content in that demo. That would be insane geometry performance compared to current AAA games no matter how you look at it, even without advanced shaders, lighting, etc.
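The arithmetic in that estimate, written out (every scaling factor here is the post's own assumption, not a measurement):

```java
// Back-of-envelope FPS projection for a hypothetical GPU port.
// The 10x GPU scaling and 2x optimization factors are assumed
// worst-case guesses from the discussion, not benchmarks.
public class SpeedupEstimate {
    static double estimatedFps(double demoFps, double gpuScaling, double optimization) {
        return demoFps * gpuScaling * optimization;
    }

    public static void main(String[] args) {
        // Bitcoin hash rates quoted above, as a crude proxy for raw throughput:
        double cpuHashes = 2.93e6;   // i7-2635QM
        double gpuHashes = 272e6;    // Radeon HD6950
        System.out.printf("raw throughput ratio: %.0fx%n", gpuHashes / cpuHashes);
        System.out.printf("low:  %.0f FPS%n", estimatedFps(15, 10, 2));
        System.out.printf("high: %.0f FPS%n", estimatedFps(25, 10, 2));
    }
}
```

The raw hash-rate ratio is about 93x, so assuming only 10x of it survives the port is indeed conservative; even so, 15-25 FPS times 20 gives the 300-500 FPS range claimed above.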

Now before you start flaming me:
I know I compared a laptop CPU to a high-end desktop graphics card, but really, isn't that the target hardware for most AAA games? If a game runs badly on cheap hardware, people can't really complain if it looks that good. xd
Memory problems would be even more insane. Graphics cards usually don't have 8GB of memory... more like 1-2GB. And each "atom" would require more data than in that demo if they had lighting (do they need normals? I think they do...), shader data, etc.

Great. Now I have even more questions I want them to answer:
- Can they extract motion vectors for motion blur?
- How fast can they calculate the distance of a point to the screen (for SSAO/HBAO/shadow mapping; remember, they did have some buggy shadow mapping in the first demo)?
- What resolution was the demo running at? Considering the clearly aliased edge when he happened to move into something, I think it was quite low, actually.
- What about antialiasing? Can it support anything faster than supersampling? Will supersampling even be slower than in today's game engines? Since the lighting would effectively be done just like deferred rendering in current engines, and since a good MSAA implementation does the lighting per sample, would jittered supersampling even cost much with this kind of geometry performance, compared to deferred shading? How do they not have huge aliasing problems now? Can the solution itself be used for antialiasing?
- Memory usage? How much data needs to stay resident on the graphics card in a GPU implementation? If they have geometry "mipmaps", they could keep the whole world in RAM and only send the needed "mipmaps" to the GPU, problem solved. The same principle applies between RAM and a hard drive, if a hard drive/SSD is fast enough (streamed of course, but I couldn't see any "pops", which they even bragged about having eliminated... Geh, I dunno!)
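The geometry-"mipmap" streaming idea can be sketched: pick the octree level whose voxels project to roughly one pixel, and keep only that level resident on the GPU. The parameter names and the one-pixel threshold here are hypothetical choices, not anything Euclideon has described:

```java
// Distance-based LOD ("geometry mipmap") selection sketch.
// pixelsPerUnit would be screenWidth / (2 * tan(fov / 2)) for a
// perspective camera; voxelSize is the finest voxel edge length.
public class LodSelect {
    /**
     * Returns the octree level (0 = finest) whose voxels project to
     * at least one pixel at the given distance, so coarser data can
     * be streamed for far geometry without visible popping.
     */
    static int levelFor(double distance, double voxelSize,
                        double pixelsPerUnit, int maxLevel) {
        double projected = voxelSize * pixelsPerUnit / distance; // px per finest voxel
        int level = 0;
        while (projected < 1.0 && level < maxLevel) {
            projected *= 2;   // each level up doubles the voxel edge length
            level++;
        }
        return level;
    }
}
```

Near geometry (projected size already >= 1 px) stays at level 0; something 100x farther away only needs a level whose voxels are 16x coarser, so the resident working set stays roughly constant per frame.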

I understand some people are skeptical of the demo. So much is left to speculation, and not much is actually proven. But don't look at what it isn't, look at what it is! It is a different approach to rendering that could actually rival polygons for realtime applications. The fact that they got this far with about 10 people is insane, considering polygon rendering has evolved over many, many years. Think about what this could become with the funding and research polygon rendering has had. I WANT this to work, because I want to see and use the end result. I WANT them to deliver a working demo when they are done with it in a year or two. If it turns out they can't, then shit happens, but it has potential. For the moment I'm going to assume they can do this. Maybe they can't.  :P

Offline Bonbon-Chan

JGO Coder

Medals: 12

« Reply #66 - Posted 2011-08-12 12:43:10 »

I often work with researchers (mainly medical, though). They are usually great and enthusiastic. They often manage to get something working, but when they want to sell it... it is a big failure (when they work alone).

I'm not against this engine, nor this technique. But if you think about it, there is so much working against them:
- New technique: a lot of unknowns.
- Habits: it is always difficult to change people's habits. Studios would have to change engines, designs, workflows... (that's a lot of work).
- Prejudice: just look at how many people are far too enthusiastic (a bad thing) or too pessimistic (also a bad thing), and at all the technical assumptions that have been made.
- API competitors: it's not like 10 years ago... OpenGL and DirectX are really advanced APIs.
- Hardware competitors: some people have said they want to build a GPU or CPU. A full graphics card is a no-go; they would have to build an OpenGL-compatible card that can compete with NVidia and AMD. An add-on card is not really viable either (very few people would buy one). And in both cases, if there is no software equivalent, no company will adopt the engine.
- Small company: few human and financial resources.

I wish them a lot of courage and luck.

By the way, I have heard a lot of optimization, optimization and optimization. I don't know about you, but my teachers always said: "Don't optimize too early; make something almost complete and then start optimizing. Otherwise you will lose a lot of time redoing most of your code."
With this in mind, for me, the best plan is to stay on the CPU (or maybe OpenCL... it could be interesting that way; multi-threading is the way to go at the very least) at a low resolution (640x480) and a low FPS (even 5 FPS is fine), but with maximum functionality and quality. There is very little risk at the optimization stage (and by then, the hardware will be even more powerful).

Online theagentd
« Reply #67 - Posted 2011-08-12 12:52:07 »

You're right, and I'm not encouraging them to optimize it now. I just mentioned that they said they could optimize it to 3x the performance in the video.
When it comes to hardware, I don't see why they shouldn't be able to implement it in OpenCL or something similar. If they can't, I don't think they can really solve it even with dedicated hardware. Maybe their official point cloud graphics card will be a modded NVidia or ATI card with 16GB of VRAM...  :D
