Java-Gaming.org
  Show Posts
1  Discussions / Miscellaneous Topics / Re: I don't have the lollipop update yet? on: 2014-11-27 08:30:48
Or you can do it yourself especially since you have a Nexus 5:
https://developers.google.com/android/nexus/images

2  Game Development / Newbie & Debugging Questions / Re: I want to scale ParticleEffect - Libgdx on: 2014-11-23 22:58:11
What is the value of "Values.Scalar_Width"? You likely need to take screen density into account, and the S4 isn't the highest-density screen out there; I just got a Nexus 6, for instance. These are Android system values from DisplayMetrics:

Nexus 6:
density: 3.5
densityDpi: 560
scaledDensity: 3.5
xdpi: 494.27
ydpi: 492.606
widthPixels: 2413
heightPixels: 1440

Nexus 5:
density: 3.0
densityDpi: 480
scaledDensity: 3.0
xdpi: 442.451
ydpi: 443.345
widthPixels: 1794
heightPixels: 1080

You get them by getting a DisplayMetrics instance for the desired screen:

DisplayMetrics displayMetrics = new DisplayMetrics();
((WindowManager) getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay()
    .getMetrics(displayMetrics);


You can use any Context (Activity / Application) to call getSystemService.

Granted, this is the solution for Android. You'll want to derive a value from some of the factors returned by DisplayMetrics and set "scaleEffect" accordingly.
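As a rough sketch of what that could look like (the class, method, and "authored density" baseline here are my own invention, not part of the libgdx ParticleEffect API), you could derive the scale from DisplayMetrics.density relative to the density the effect was designed for:

```java
// Hypothetical helper: derive a particle-effect scale factor from
// DisplayMetrics.density, relative to the density the effect was authored at.
public final class EffectScale {
    private EffectScale() {}

    // density: DisplayMetrics.density (e.g. 3.0 on a Nexus 5, 3.5 on a Nexus 6)
    // authoredDensity: the density the particle effect was designed for
    public static float scaleFor(float density, float authoredDensity) {
        return density / authoredDensity;
    }

    public static void main(String[] args) {
        // Effect authored at mdpi (density 1.0), running on a Nexus 5:
        System.out.println(scaleFor(3.0f, 1.0f)); // 3.0
    }
}
```

You'd then feed that value to "scaleEffect" once when the effect loads.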
3  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-11-19 02:30:14
Yeah, the Galaxy Nexus was released in the fall of '11, though there wasn't that much of a delay if I recall. I happened to attend Google I/O in '12 and got one along with the original Nexus 7. It replaced and was slightly better than the G2X / Tegra 2. I think that was the last year of good device giveaways for Android developers. I skipped the Nexus 4 (not that great of a GPU) and went straight to the Nexus 5 / Snapdragon 800.

I started building my video engine on the Galaxy Nexus; har har har... About one filter (only some of them; forget blur!) running at ~24FPS while encoding, and one couldn't instantiate a MediaCodec encoder / decoder at the same time or one after another because native resources weren't being released / ran out of memory. The Snapdragon 800 / 801 and the Adreno 330 GPU blew the door wide open for what I'm doing today: up to 8 effects plus plenty of window-based ones (blur) at 30FPS (OK, multiple blurs / or kuwahara turned up can kill things). It's what I call the "FBO bonanza", as the Adreno 330 was the first phone form factor GPU that wasn't heavily penalized for multiple FBO renders in a single pass.

The Snapdragon 805 / 810 is like the tock of a tick; ~30% and ~80% faster, respectively, than the Snapdragon 800.
4  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-11-18 23:11:58
Goodness things have been busy... Last week I attended the Samsung Developers Convention and ran into Phil Nickinson of Android Central... On demoing the video capture / effects composition app he pulled out his video camera and we recorded an impromptu demo that was posted to Android Central (first outside press):
http://www.androidcentral.com/typhonrt-video-suite-looks-be-amazing-real-time-renderingrecording-platform

I'm definitely looking forward to AnDevCon the rest of the week!  For anyone attending, I'm giving a lightning talk on Wednesday evening about modern OpenGL on Android. I also have the opportunity to run a demo booth at the Intel-sponsored after party Thursday evening, 7:30 to 10pm. Stop by and check out the TyphonRT Video Suite. I'll have an LCD TV on hand and let folks get hands-on with the video capture / effects composition app and have some fun.  I wish I had time to put together a green screen backdrop for photo / video fun, but I'm working hard on the demo code for my presentation.

My presentation on OpenGL ES 3.1 / compute shaders is on Friday at 10am. I am finishing up the source code takeaways today using the Nexus 9. What I'm demoing and providing utility code for is using OpenGL ES 3.1 from Java / Android Studio. There is presently no information out there on how to do this, and so far I've found several non-obvious tricks for integrating compute shaders with standard Android mechanisms, like loading textures from resources so they can also be used in compute shaders. I'm working hard on converting the advanced ComputeParticles demo (http://docs.nvidia.com/gameworks/index.html#gameworkslibrary/graphicssamples/opengl_samples/computeparticlessample.htm) from the NVidia Gameworks samples to Java... Don't know if I'll make it as it's complex, but I'm giving it a try. Source code and slides will be available on GitHub soon.

If you are planning to make it to AnDevCon give me a holler and let's hang out and chat all things Android + modern graphics / video!

Also, some darn cool updates. I just installed Lollipop on the NVidia Shield tablet and yeah yeah yeah yeah YEAH, MediaCodec works fine and my video engine is stable on the K1 / Lollipop. NVidia shipped the Shield w/ KitKat and a broken implementation of MediaCodec. So moving to Lollipop as the base OS for the video engine is really going to help with stability, and hopefully I won't have to build as huge a device testing lab.

And OMG Nexus 6 in the mail.. I'm trying to reroute it to a UPS station, so I can pick it up tomorrow morning!!! I still am amazed that a year ago I was using the Galaxy Nexus and now the Nexus 6 which is light years apart from the GN. 

Booyah!
5  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-11-13 23:33:22
Didn't mean to insult Smiley If the audio doesn't fit the video, maybe that colored my perception of the audio, ending up a tad harsh Smiley

No worries.. That is one of the only few tracks of mine that would be at all appropriate or close to appropriate for a general audience. All the rest are outsider / eccentric / experimental music.

Anyway, if anything, that 8 second 'intermission' is hopefully easy/cheap to rip out. That will surely help retention rates.

That I can take a look at, and I'm plenty skilled at editing. Maybe there's a way I could mash up the monologue and the intro sizzle reel. I think refining the textual content is much more pertinent at this point.

As things go it's practically impossible for me to get anyone to be in a film / reshoot scenes without involving other creatives on the clock. I've essentially locked myself in my studio coding for the past year+ and most old acquaintances won't be in a video or even back the KS, so yeah. So everything has a dollar sign attached to it.
6  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-11-13 23:02:03
Definitely a live and learn experience I suppose..  As things go the current video cost a lot to produce. I don't have the resources / equipment to shoot a better one myself and was engaged full time as a contractor when things were being filmed for half of it at least (then that client ran out of funds and ahhh...). I'm taking on a short term contract Dec / January to survive at this point, thus also pushed out the release date a month for the KS.

A comparison KS is Looksery's:
https://www.kickstarter.com/projects/looksery/looksery

Sure the video is a bit... Hmm.. I guess I'll call it "consumer friendly". I dunno...

then 8 seconds of fade out (bye bye!) followed by a monologue without background audio.

Dang, I thought it would seem over produced with background music during the monologue.

that background music is so generic it would be better to switch to audio that

Hey man.. I produced that track.. Albeit back in 2002... ;P
7  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-11-13 22:34:30
Good points.. I do appreciate the feedback because I'm not getting much. I'll try and figure out verbiage changes that make sense.

I see all sorts of projects being transparent in where the funding goes. I guess there are some that don't list it either and lean toward the direction you bring up. A big problem is just getting out of the friends and family category of backers. During the first campaign I was counting on conference attendance to pass out flyers and demo in person to Android enthusiasts. This turned out to be not so good.

I'm at the Samsung Developers Conference right now though and got very lucky to get featured on Android Central via Phil Nickinson, whom I've bumped into at conferences since the early days of Android:
http://www.androidcentral.com/typhonrt-video-suite-looks-be-amazing-real-time-renderingrecording-platform

Getting further coverage in Android blogs / any press is seemingly the only thing that will make it tip.
8  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-11-13 21:57:14
So, I rebooted the Kickstarter. I lowered the funding goal from $45k to $20k and did this without lowering quality expectations. Basically I needed to fund a significant device testing lab if I was going to release for KitKat due to MediaCodec API instability across the ecosystem. I'm bumping the min Android OS requirements to Lollipop which received more testing from Google. There are also other benefits like defaulting to the Camera2 API.

Check out the new Kickstarter:
https://www.kickstarter.com/projects/85808410/typhonrt-video-suite-next-gen-video-apps-for-andro-0
9  Game Development / Newbie & Debugging Questions / Re: Ways of keeping track of classes... on: 2014-11-13 21:51:46
Packages are for organization and the default access (package private) modifier can be used to provide a non-public API across classes, but other than that packages don't mean anything.

Packages have a role in general regarding modularity. In particular they can be useful for metadata describing the import / export requirements of a module. This is seen in OSGi, for instance.

The link below has some details then describes OSGi and how packages are used as meta-data:
http://branchandbound.net/blog/java/2013/07/java-modularity-story/
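A minimal sketch of that first point (the class names here are hypothetical): a package-private class is visible to everything in its own package but invisible outside it, so helper classes can collaborate without becoming public API.

```java
// Both classes live in the same package; EngineInternals carries no access
// modifier, so only classes in this package can touch it.
class EngineInternals {          // package-private: not part of the public API
    int frameCount = 0;          // package-private field, reachable in-package

    void tick() { frameCount++; }
}

public class Engine {
    private final EngineInternals internals = new EngineInternals();

    public void update() { internals.tick(); }   // public API delegates inward
    public int frames()  { return internals.frameCount; }

    public static void main(String[] args) {
        Engine e = new Engine();
        e.update();
        e.update();
        System.out.println(e.frames()); // 2
    }
}
```

Code in any other package sees only Engine's public methods; EngineInternals doesn't exist as far as it's concerned.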
10  Game Development / Newbie & Debugging Questions / Re: Perlin Noise clouds & CPU-Usage on: 2014-11-13 21:43:49
After some testing, I've proved that the Perlin noise doesn't use a lot of CPU; it's the creation of the bitmaps...

Then don't create a new Bitmap every time; recycle Bitmaps. Make two Bitmaps (mask and target) parameters to "generate clouds". Handle the error case when mask and target are not the same size. Certainly the "clouds" bitmap can be recycled. I don't know how many mask images you load or how often this method is called.
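The reuse idea, sketched with plain int[] pixel buffers standing in for Android Bitmaps (the class and method names are made up for illustration): allocate the working buffer once and write into it on every call instead of allocating a fresh bitmap per frame.

```java
// Sketch: reuse one preallocated pixel buffer across calls instead of
// allocating per call (an int[] stands in for an Android Bitmap here).
public final class CloudRenderer {
    private final int[] target;   // allocated once, reused every call

    public CloudRenderer(int width, int height) {
        this.target = new int[width * height];
    }

    // mask must match the target's dimensions -- handle the error case.
    public int[] generateClouds(int[] mask) {
        if (mask.length != target.length)
            throw new IllegalArgumentException("mask and target sizes differ");
        for (int i = 0; i < mask.length; i++)
            target[i] = mask[i] & 0x00FFFFFF;  // example op: strip the alpha byte
        return target;                          // no allocation per call
    }

    public static void main(String[] args) {
        CloudRenderer r = new CloudRenderer(2, 2);
        int[] out = r.generateClouds(new int[]{0xFF112233, 0, 0, 0});
        System.out.println(Integer.toHexString(out[0])); // 112233
    }
}
```

With real Bitmaps the equivalent is keeping them as fields and drawing into them via a reused Canvas, rather than calling Bitmap.createBitmap() inside the generate method.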
11  Game Development / Newbie & Debugging Questions / Re: Ways of keeping track of classes... on: 2014-11-10 21:35:30
Consistency is king when it comes to organizing code.  I work on various engine middleware and have ~1500 classes to organize. It's long past where I can memorize every single class name.

As far as code architecture is concerned, the MVC pattern and inheritance eventually break down and really make things too rigid. Most large libraries (Java & Android SDK included) that base themselves solely in this direction go towards being a "kitchen sink" or "big ball of mud":
http://en.wikipedia.org/wiki/Big_ball_of_mud

You are not going to face that problem yet. I'd say things start to buckle around 500 classes.

So some tips:
- I'm partial to naming interfaces with a leading capital 'I'. This makes it obvious when looking at the file that it is an interface and not a class.

- For package naming I put interfaces and other common utility code that doesn't have dependencies using a "commons" tag in the package name. Here is the package name for IEventBus (interface):
org.typhon.commons.java6.sdk.opencore.events.bus.cop.systems

"commons" indicating it stores interfaces, java6 to indicate language level, sdk (in my middleware efforts it's either 'sdk' or 'rt' for runtime), opencore indicating this will be a freely available package ('core' is another variation for indicating for pay), then the actual specific naming portion for the code at hand.

And one implementation of IEventBus is BasicEventBus; here is its package:
org.typhon.java6.sdk.opencore.events.bus.basic.cop.systems

When you have a bunch of code, and even when you don't, a big help in organizing is to think about module separations from the get-go. At least with IDE tooling like IntelliJ IDEA & Android Studio (based on IDEA) there is a strong module concept in the IDE. I understand Eclipse might be a bit different, but it should have a similar construct.

What I have done is aggressively modularize my middleware and am now at ~750 modules / independent IDE projects. Essentially a module should expose one main core package and a few more if necessary. You don't split packages between modules! IE don't have the same package in two different modules. The really cool thing that this does is that in the IDE and build files you can explicitly see any other modules that any given module may depend upon.

I name the modules based on the core package name exposed.

For IEventBus this is the module it is in:
shared-java6-sdk-opencore-events-cop-bus-commons

For BasicEventBus this is the module it is in:
shared-java6-sdk-opencore-events-cop-bus-basic

"shared" indicates that it runs on any platform where Java SE is supported; the other parts of the module name mirror the package name internally. I tail things off with "commons" so it's easier to see modules that are interfaces / utility code with no dependencies.

So, now when I look through the entire middleware code base I have ~750 modules that are named according to package and utility areas. I've ended up now with about ~50 utility areas with X amount of modules under them. Entity system, physics, opengl, opengl es, etc. are all major groupings.

Basically you just have to get meta on organizing your codebase. With 27 classes you are fine. At some point though putting off strongly organizing your code will have a negative consequence in maintenance and productivity. So for the time being just be as consistent as possible. Depending on how long the project / effort goes and how big it gets will determine if you need to consider doing the kinds of things I describe above.

The direction I'm travelling down has been summed up in the Modularity Maturity Model:
http://www.infoq.com/news/2011/09/mmm-osgi
http://www.slideshare.net/mfrancis/ibm-sponsorship-keynote-towards-a-modularity-maturity-model-graham-charters

12  Game Development / Game Play & Game Design / Re: How to save/load game data with Entity/Component based Engine via JSON on: 2014-11-09 11:04:24
I'd recommend checking out existing JSON / Java serializers like Jackson or Boon. I haven't worked with either of those libraries, so I don't know how extensible they are or if you can replace how they might walk object graphs. I have no idea offhand if either of those would work out of the box and properly serialize an Entity or if you'll have to provide bespoke serialization which means you'd have to manage the process. If the latter is the case perhaps you can make tag, mask, compMap package private and have a "JsonEntityUtil" class in the same package that manages the serialization / deserialization which can directly access tag, mask, compMap.
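A bare-bones sketch of that last option (all names are hypothetical, and the JSON is hand-built rather than going through Jackson or Boon): keep the entity's fields package-private and let a utility class in the same package read them directly for serialization.

```java
// Entity keeps its state package-private; JsonEntityUtil, living in the
// same package, accesses those fields directly to serialize them.
class Entity {
    String tag;      // package-private: hidden from other packages,
    long mask;       // but visible to JsonEntityUtil below
    Entity(String tag, long mask) { this.tag = tag; this.mask = mask; }
}

public final class JsonEntityUtil {
    private JsonEntityUtil() {}

    // Hand-rolled JSON purely for illustration; a real serializer would also
    // handle string escaping, the component map, and deserialization.
    public static String toJson(Entity e) {
        return "{\"tag\":\"" + e.tag + "\",\"mask\":" + e.mask + "}";
    }

    public static void main(String[] args) {
        System.out.println(toJson(new Entity("player", 5L)));
        // {"tag":"player","mask":5}
    }
}
```

The point is the access pattern, not the string building: nothing outside the package can touch tag or mask, yet serialization needs no public getters.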
13  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-09 04:42:59
Why not use a scripting language like Lua or even Javascript for your effects? Then you could alter your effects at runtime.

There is fast (GLSL) and then there is slow: everything else...   

BTW you can recompile GLSL code on the fly. It's how a lot of the convolution / kernel based shader code is manipulated. IE programmatically recompiling shader code based on various settings that expand / contract the window applied.

Just wondering, I am using the image kernal for a lot of effects like blur and emboss... But what about effects like sepia, instant and enhance?

Think about a "kernel" operation (not "a" BTW Cheesy) for a color matrix. I'm sure there are other ways of doing a sepia tone image like an image lookup, but here is how things are implemented in GPUImage w/ a color matrix.

See:
https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageSepiaFilter.m

https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageColorMatrixFilter.m

As mentioned previously by @SHC if you have a proper matrix / vector library you'd multiply the color matrix by the color vector for the pixel essentially. Look at the GLSL code and translate that to Java basically.
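To get a concrete, CPU-side feel for what that GLSL does, here is the matrix-times-color-vector step in plain Java, using widely cited sepia coefficients (GPUImage's matrix may differ slightly, so treat these exact numbers as illustrative):

```java
// Multiply a 3x3 color matrix by an RGB vector and clamp, mirroring what a
// color-matrix fragment shader (e.g. GPUImageColorMatrixFilter) does per pixel.
public final class Sepia {
    // Commonly cited sepia matrix (row-major); values are illustrative.
    static final float[] M = {
        0.393f, 0.769f, 0.189f,
        0.349f, 0.686f, 0.168f,
        0.272f, 0.534f, 0.131f,
    };

    public static float[] apply(float r, float g, float b) {
        float[] out = new float[3];
        for (int row = 0; row < 3; row++) {
            float v = M[row * 3] * r + M[row * 3 + 1] * g + M[row * 3 + 2] * b;
            out[row] = Math.min(1.0f, v);   // clamp to [0, 1]
        }
        return out;
    }

    public static void main(String[] args) {
        float[] sepia = apply(1f, 1f, 1f);  // pure white in
        System.out.println(sepia[0] + " " + sepia[1] + " " + sepia[2]);
    }
}
```

In the shader this is a single mat3 * vec3 per fragment; the clamp falls out of the render target's format.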
14  Discussions / Miscellaneous Topics / Re: simplifying getter/setter classes on: 2014-11-08 01:42:52
I think this conversation is interesting and I meant to jump in sooner. The real problem that I see with traditional getters / setters is that they become awfully fragile across module boundaries in terms of code changes (3rd party library for instance). Architecturally speaking they are a horrible way to retrieve various resources from a main model or control (re: say a custom Activity in Android). Dependency Injection is a partial solution, but still pretty weak IMHO (especially constructor injection!) and has inherent maintenance concerns too.

Yes yes.. Component architectures with implicit lookup comes to the rescue IMHO. This is one of the super strong points of this approach such that across module boundaries lookups are implicit and don't require specialized getter / setter methods to connect everything together at runtime. It's great too because the component architecture API itself is super standardized across all modules that use it and dependencies aren't leaked between module boundaries (exposed in traditional getter / setter method signatures).  JGO will hate me for this, but I'll get buzz wordy and say it's essentially "In-process microservices".

Going beyond implicit lookup is the EventBus pattern (not Guava / Otto, etc.). At least my EventBus implementation allows setting up adhoc message passing between producer / consumer relationships, and one can set up two-way messaging (input / output ports). This allows passing messages across module boundaries that would otherwise be method invocations. For instance, in my video engine efforts I have a camera manager. When I want to start a video stream for the camera, I send a "start video" message to the input port defined by a series of extensible enums that define a nested category indicating some sort of camera manager is receiving the message. And then camera-frame-received messages start streaming over the output port.
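To make the message-passing idea concrete, here's a toy bus (a few dozen lines, nothing like a full implementation and not my actual IEventBus API; topic strings stand in for the extensible-enum categories): producers post messages to a topic, consumers subscribe by topic, and neither side holds a direct reference to the other.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy event bus: subscribers register per topic; publishers post messages
// without holding a reference to any subscriber (implicit decoupling).
public final class TinyEventBus {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<Object> handler) {
        subscribers.computeIfAbsent(topic, k -> new ArrayList<>()).add(handler);
    }

    public void post(String topic, Object message) {
        for (Consumer<Object> h : subscribers.getOrDefault(topic, List.of()))
            h.accept(message);
    }

    public static void main(String[] args) {
        TinyEventBus bus = new TinyEventBus();
        List<Object> received = new ArrayList<>();
        bus.subscribe("camera/startVideo", received::add);  // the "input port"
        bus.post("camera/startVideo", "1080p");             // producer side
        System.out.println(received); // [1080p]
    }
}
```

A real implementation adds typed messages, two-way ports, and threading concerns, but the decoupling across module boundaries is the same.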

I could go on in this direction, but in the past JGO has been overly critical for little reason on the directions I've taken even though they solve a lot of problems with traditional OOP; especially the getter / setter debacle.



15  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-03 01:24:12
Then I will have a shader object which has the compiled shader, and all the shaders it requires...

(Lol, my Safari on my iPad froze twice while I was typing this... Srsly, how do people think iOS is amazing?)

Technically you can do whatever you want in regard to pre-processing your own shader code before linking it. You might be able to do some sort of limited chaining of what should be independent shader passes that don't require doing any sort of convolution kernel operations on intermediate state, but that would really make your post processing pipeline rather rigid and with no practical speed improvements. There are other fish to fry basically for performance.

By building a composable pipeline of independent image operations where each one writes to a successive FBO you can then allow the user to mix and match on the fly image operations to perform.

One of the really cool things my post-processing pipeline allows is a built-in preset editor where users can interactively drag horizontally on the screen and the GL component shows dragging between shaders / image operations by rendering the pipeline twice and combining. There is also interactive dragging between complete presets (renders the pipeline twice with each preset and combines).

Indeed... The GLSL direction is a lot to take in for what is a simple image processing API. Take what you learned from the Java side of things though and consider a GLSL implementation as it's useful in game dev or general media based apps.
16  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-03 00:24:15
So can I have some sort of shader with something like:
#use <shader here> <args>
#use <another shader> <more args>
And then when my program reads the shaders it checks for lines like that and it will pass the image through those shaders first?

No.. There is no embedding or continuation of shader code that is controlled GPU side*. You need to control the succession of shaders in a post-processing pipeline on the CPU side in addition to whatever variables / textures are set up / shared across executing various shader code.

* OpenGL ES 3.1 introduced indirect draw functionality that can be paired up nicely with compute shaders which allows drawing to occur after computation without CPU intervention, but this is not the same as bespoke GPU continuation like you are asking about.

At this point it may be good to get into trying some OpenGL yourself. There is a learning curve no doubt, but it will be worth it. If you happen to have an iOS device perhaps take a look at GPUImage example apps and source code. I guess for some examples there (modifying a static image) the simulator could work. I guess GPUImage also works on the desktop. I've never actually run any GPUImage apps... Just snarfed* all the fragment shader code, then improved it for OpenGL ES 3.0+ and my own custom pipeline.. Cheesy *will attribute of course!

17  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-03 00:10:04
Is there a max size on the matrix in OpenGL? So can I do something like 21x21 or something?

For built in variable types the max matrix size is 4x4. It's handy to review the reference cards for OpenGL ES 3.0 & 3.1 for built in types and GLSL functions:
https://www.khronos.org/files/opengles3-quick-reference-card.pdf
https://www.khronos.org/files/opengles31-quick-reference-card.pdf

also

https://www.opengl.org/wiki/Data_Type_(GLSL)

Also there is a max array size for uniform variables depending on the OpenGL implementation. More to the point there is max uniform storage across all data types defined in a shader. See this link and "Implementation limits" section on how to query for the max sizes; usually 512 and greater is supported:
https://www.opengl.org/wiki/Uniform_(GLSL)

You can aggregate basic types in arrays and structs. For a "21x21" matrix, use an array of 441 floats, but you'll have to handle the indexing yourself, and of course the built-in matrix operations don't apply to a bespoke array. Not all OpenGL implementations support multidimensional arrays in GLSL; I understand OpenGL 4.3+ supports them, or wherever the ARB_arrays_of_arrays extension is available, and OpenGL ES 3.1 also has multidimensional array / ARB_arrays_of_arrays support.
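The flat-array indexing for that hypothetical 21x21 kernel looks like this on the Java side (the same row * width + col arithmetic applies inside the shader when indexing the uniform float array):

```java
// Store a 21x21 kernel as a flat float[441] and index it manually,
// since GLSL's built-in matrix types stop at 4x4.
public final class FlatKernel {
    public static final int SIZE = 21;
    public final float[] weights = new float[SIZE * SIZE]; // 441 floats

    public float get(int row, int col) {
        return weights[row * SIZE + col];   // manual 2D -> 1D indexing
    }

    public void set(int row, int col, float w) {
        weights[row * SIZE + col] = w;
    }

    public static void main(String[] args) {
        FlatKernel k = new FlatKernel();
        k.set(10, 10, 1.0f);                           // center tap
        System.out.println(k.get(10, 10));             // 1.0
        System.out.println(k.weights[10 * SIZE + 10]); // same slot: 1.0
    }
}
```

The weights array would then be uploaded in one call with glUniform1fv (or via a UBO) and indexed the same way in the fragment shader.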
18  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-02 21:37:32
1. But how can the developer tell they want to use another shader before or after with just "a shader". Also I want to send data from one shader to the other.
2. I mean number arrays. So I can have an image kernal. Like so (in Java):
This will do some sort of edge detect. Again, sry if I'm using the wrong term here.

1. You have CPU side control over the whole pipeline one builds for post-processing. You don't control the pipeline between shaders in shader code itself as it's done on the CPU side. You can share data directly between shaders via CPU side control such as FBO (textures) which are shared between shaders and you can share uniform data (variables in the shader code) between shaders. OpenGL ES 3.0 also enables a lot more in regard to general buffer objects as input / output from shaders (PBO is an example). But in the case of uniform data there are UBO (Uniform Buffer Objects). These are interesting because they allow setting uniform data of a shader efficiently in one call, but can also share data across multiple shaders. See the following for (OpenGL ES 3.0 feature):
https://www.opengl.org/wiki/Uniform_Buffer_Object
http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/3490-2/

2. Most definitely you can do convolution based kernels. See this code example for fragment shaders which do a Laplacian edge detection:
https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageLaplacianFilter.m

The line in the fragment shader "uniform mediump mat3 convolutionMatrix;" stores the 3x3 matrix and would be filled with the desired coefficients.

It should be noted that doing convolution based filters is generally expensive due to dependent texture reads. IE setting the color of the current pixel being processed is dependent on reading other pixel values surrounding it. You'll want to be careful in your implementation to reduce "dependent texture reads" in your fragment shaders. A good article on optimizations for this from Brad Larson regarding the Gaussian blur filter in GPUImage:
http://www.sunsetlakesoftware.com/2013/10/21/optimizing-gaussian-blurs-mobile-gpu

In short.. Yes, you can do convolution based filtering with OpenGL. There is nothing you can do in Java / CPU side that you can't do in OpenGL.
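One of the key tricks in that blur article, the separable pass, is easy to see on the CPU: an NxN Gaussian decomposes into a horizontal and a vertical 1D pass, cutting reads per pixel from N*N to 2N. Here is the horizontal pass on a grayscale float "image" in plain Java (purely illustrative; a real shader does this per fragment with texture fetches):

```java
// Separable blur idea: blur rows, then blur columns. Each 1D pass reads
// N samples per pixel instead of N*N for a full 2D kernel.
public final class SeparableBlur {
    // 1D horizontal pass with a 3-tap kernel, clamping at the image edges.
    public static float[] blurRows(float[] img, int w, int h, float[] k) {
        float[] out = new float[img.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float sum = 0;
                for (int t = -1; t <= 1; t++) {
                    int xx = Math.max(0, Math.min(w - 1, x + t)); // clamp-to-edge
                    sum += k[t + 1] * img[y * w + xx];
                }
                out[y * w + x] = sum;
            }
        return out;
    }

    public static void main(String[] args) {
        float[] kernel = {0.25f, 0.5f, 0.25f};   // 1D Gaussian-ish taps
        float[] img = {0, 0, 1, 0, 0};           // 5x1 image: single impulse
        float[] blurred = blurRows(img, 5, 1, kernel);
        for (float v : blurred) System.out.print(v + " ");
        // the impulse spreads: 0.0 0.25 0.5 0.25 0.0
    }
}
```

Running the same kernel down the columns of this output completes the 2D blur; on the GPU each pass is one FBO render.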
19  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-11-01 02:59:31
I'm starting to consider GLSL... I just want to know two things first:
- can you somehow 'extend' other shaders? In the Java code I have a 'KernalEfffect' which other effects (like blur and edge detect) can extend and add a kernal (sry if I'm not using the right terms)
- can you have arrays? I would need an array for the kernal shader


1. You don't extend shaders, but you can make multiple passes via FBO rendering to an intermediate texture. Some of the "effects" I expose to users of my video engine are composite effects themselves made up of two or more passes internally. Basically you can make whatever architecture makes sense on the Java / control side as long as you are making separate passes rendering to a texture for each shader / IE "kernel".
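The CPU-side control amounts to folding the image through an ordered list of passes, where each pass's output texture (FBO) becomes the next pass's input. Sketched in Java with functions standing in for shader passes (no GL calls, just the composition structure; names are illustrative):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Each "pass" maps an input buffer to an output buffer, like a shader
// rendering into an FBO; the pipeline chains them in order.
public final class PassPipeline {
    public static float[] run(float[] input, List<UnaryOperator<float[]>> passes) {
        float[] current = input;
        for (UnaryOperator<float[]> pass : passes)
            current = pass.apply(current);   // this pass's output feeds the next
        return current;
    }

    public static void main(String[] args) {
        UnaryOperator<float[]> brighten = img -> {
            float[] out = img.clone();
            for (int i = 0; i < out.length; i++) out[i] = Math.min(1f, out[i] + 0.2f);
            return out;
        };
        UnaryOperator<float[]> invert = img -> {
            float[] out = img.clone();
            for (int i = 0; i < out.length; i++) out[i] = 1f - out[i];
            return out;
        };
        float[] result = run(new float[]{0.5f}, List.of(brighten, invert));
        System.out.println(result[0]); // ~0.3
    }
}
```

Letting the user reorder or swap entries in the pass list is exactly the "mix and match on the fly" part; the GL version just replaces clone-and-loop with a draw into the next FBO.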

2. Not exactly sure what you mean by "can you have arrays"... So describe it better. That can mean many things I suppose.

For instance, in OpenGL ES 3.0 you can have "texture arrays". I'm soon implementing that in the pipeline / engine to store the last X output frames rendered. This will allow easy access in shader code to, say, do real motion blurring with past frame data without having to pass in bespoke texture data as separate samplers; IE just one texture array is passed in..


Be careful chaining many effects serially: if you output them to an 8-bit texture you will lose precision of the original image every time, and these errors just accumulate and might cause horrible banding. Using +- half a bit of dither and/or sRGB / higher precision after every effect will help with this.
http://loopit.dk/banding_in_games.pdf

Good to keep in mind. I haven't gotten there yet as there are other engine upgrades to make first, but OpenGL ES 3.0 has sRGB textures and framebuffers so that you can address this problem.. OpenGL ES 3.0 is well worth moving to if you can since it is a major update.
20  Game Development / Newbie & Debugging Questions / Re: Image Effects on: 2014-10-31 21:54:15
The post processing in my photo / video engine is of course all GLSL based. Consider backing the Kickstarter... http://kck.st/1sk0lN4

I highly recommend this approach, especially if applying more than one filter in series is desirable. My efforts are pretty neat: you can have on- and off-screen buffers and apply up to 8 effects in series, which can be used for blending and other compound effects, all controlled by a simple GUI that fits on an Android phone screen, running at 30FPS or better on modern Android devices depending on the camera.

Big hint here for those interested in GLSL based image processing.. Take a gander at the iOS GPUImage open source project. There is a treasure trove of GLSL image processing shaders there. Of course most of the work is done in fragment shaders.
https://github.com/BradLarson/GPUImage

GPUImage gave me a head start in building my video engine / post processing pipeline. There is no shared software architecture between my efforts and GPUImage, but the shader code helped a bit. I have since upgraded everything to OpenGL ES 3.0 and built much more comprehensive support for modern GL into my post processing pipeline, which GPUImage does not have, but I wouldn't have been able to focus on all of that if I didn't have all the fundamental image operation shader code on hand out of the gate. It also gave me plenty of time to wrestle with the Android MediaCodec API.

GPUImage is well worth checking out for those interested in GLSL image processing (just for the shader code!)

Also another big hint for Android. With the Snapdragon 800 / Adreno 330 this was the first mobile GPU in phone form factor to unleash the "FBO bonanza". I do upwards of 30 FBO passes in the post processing pipeline without significant penalties. Previously mobile GPUs had harsh performance penalties on using many FBO passes. Even 2 would cripple performance. Not so anymore!
21  Game Development / Newbie & Debugging Questions / Re: All about arrays! on: 2014-10-29 20:10:48
>Everything is about optimization

You should see me drive a car.. ;P

Now, I wouldn't go as far as @lcass per se though the enthusiasm is noted, but the majority of traditional Java "best practices" are like cake and we all know cake is a.. yeah..

22  Game Development / Newbie & Debugging Questions / Re: All about arrays! on: 2014-10-28 19:58:36
So one thing that no one has mentioned yet (@lcass was closest) and is the most pertinent is that working with arrays of primitive types with Java is the only way you are guaranteed contiguous block of memory. This is important for optimizing algorithms in Java quite often.

Instead of an array or ArrayList of objects, one unpacks the member data of the object into multiple primitive arrays. In the example below the object in the Android API is "MediaCodec.BufferInfo". I also store extra relational data across frames that a single MediaCodec.BufferInfo object does not hold, like an index of video key frames, which allows me to quickly update the display when a user scrubs video, for instance.

The reason my video engine for Android (which you should totally back on Kickstarter: http://kck.st/1sk0lN4) is so darn fast and efficient is that I track everything, including frame metadata, in primitive arrays. Here is the top of "MediaBuffer" (it can hold audio / video data or anything, really). "MediaFrameInfoArray" holds the metadata arrays.

/**
 * MediaBuffer
 *
 * H264 stream: you should put the SPS in a ByteBuffer with the key "csd-0" and the PPS with the key "csd-1"
 */

public final class MediaBuffer extends AbstractDataManager<IComponentManager> implements IMediaBuffer
{
   public final String              mimeType;

   public final MediaFrameInfoArray frameInfoArray;

   public final byte                bufferEncoded[];
   public final int                 bufferEncodedLength;

   public int                       bufferEncodedLimit;


And MediaFrameInfoArray...

/**
 * MediaFrameInfoArray -- Stores arrays of primitive data normally associated with a single MediaCodec.BufferInfo
 * instance. There are helper methods to manipulate the data stored in addition to relative data between frames.
 *
 * %0, 1, 2, 3, 4, %5, 6, 7, 8, 9, %10       frameInfoLength         // % indicates key frame
 *  0, 0, 0, 0, 0,  1, 1, 1, 1, 1, 2         frameToKeyFrame         // stores index to previous key frame
 *  T, F, F, F, F,  T, F, F, F, F, T         frameIsKeyFrame         // stores if frame is a key frame
 *  0, 5, 10                                 keyFrameToFrameIndex    // stores frame index for key frame
 *                                           bufferEncodedPosition   // stores the position of the frame in an associated IMediaBuffer
 */

public final class MediaFrameInfoArray implements IData
{
   public final int        frameContentSize[];
   public final int        frameHeaderOffset[];
   public final int        frameFlags[];
   public final long       framePresentationTimeUs[];

   public final int        frameToKeyFrame[];
   public final boolean    frameIsKeyFrame[];

   public final int        keyFrameToFrameIndex[];

   public final int        bufferEncodedPosition[];

   public final int        frameInfoLength;

   public int              frameInfoLimit;
   public int              keyFrameLimit;
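To make the unpacking idea concrete, here's a minimal struct-of-arrays sketch in the same spirit. This is illustrative code, not the engine's actual classes; the class and field names are hypothetical, loosely modeled on the snippet above. Each field of the "object" lives in its own primitive array, so a tight loop over one field walks contiguous memory.

```java
// Illustrative struct-of-arrays container (hypothetical; not the engine's code).
// Instead of an ArrayList of per-frame objects, each field gets its own
// primitive array, guaranteeing contiguous storage per field.
public class FrameInfoSoA
{
   public final int[]  contentSize;
   public final long[] presentationTimeUs;

   public int limit;   // number of frames currently stored

   public FrameInfoSoA(int capacity)
   {
      contentSize = new int[capacity];
      presentationTimeUs = new long[capacity];
   }

   // Appends one frame's metadata; returns the index it was stored at.
   public int add(int size, long timeUs)
   {
      contentSize[limit] = size;
      presentationTimeUs[limit] = timeUs;
      return limit++;
   }

   // Sums all frame sizes -- a tight loop over one contiguous int[].
   public long totalContentSize()
   {
      long total = 0;
      for (int i = 0; i < limit; i++) { total += contentSize[i]; }
      return total;
   }
}
```

The payoff is cache-friendly iteration: scanning `presentationTimeUs` to find a scrub target never drags the other fields through the cache.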
23  Game Development / Newbie & Debugging Questions / Re: How can I optimize my code: multiple boolean checks on: 2014-10-26 23:27:26
I recommend using an enum for the levels (let's call it Levels) and an EnumSet for the tracking.

EnumSet<Levels> completedLevels = EnumSet.noneOf(Levels.class);

Just like all the other collections there are many familiar operations:

completedLevels.size() gives the count.

completedLevels.add(Levels.LEVEL_1);

You can get an iterator for enums in the set.

You can get a complement set via:

EnumSet<Levels> unfinishedLevels = EnumSet.complementOf(completedLevels);

And lots more, without having to write any of the custom tracking code the other solutions in this thread require.

You can even create an EnumSet for special conditions and easily check it against completedLevels:

EnumSet<Levels> mediumDifficultyLevels = EnumSet.range(Levels.LEVEL_10, Levels.LEVEL_20);

if (completedLevels.containsAll(mediumDifficultyLevels))
{
// etc etc etc.
}
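Pulling the pieces above together, here's a minimal runnable sketch. The Levels enum and its five constants are hypothetical placeholders for a real game's level list.

```java
import java.util.EnumSet;

// Minimal sketch of level tracking with EnumSet.
// The Levels enum here is hypothetical -- a real game defines its own constants.
public class LevelTracking
{
   enum Levels { LEVEL_1, LEVEL_2, LEVEL_3, LEVEL_4, LEVEL_5 }

   // Levels the player has finished.
   static final EnumSet<Levels> completedLevels =
      EnumSet.of(Levels.LEVEL_1, Levels.LEVEL_2);

   // Everything not yet finished: the complement set.
   static EnumSet<Levels> unfinished()
   {
      return EnumSet.complementOf(completedLevels);
   }

   // Has the player cleared a contiguous range of levels?
   static boolean clearedRange(Levels from, Levels to)
   {
      return completedLevels.containsAll(EnumSet.range(from, to));
   }

   public static void main(String[] args)
   {
      System.out.println(completedLevels.size());                        // count of completed levels
      System.out.println(unfinished().size());                           // count of remaining levels
      System.out.println(clearedRange(Levels.LEVEL_1, Levels.LEVEL_2));  // containment check
   }
}
```

Under the hood an EnumSet is backed by a bit vector, so all of these operations are cheap bitwise math.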
24  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-10-14 06:37:39
The promo video is like a tech demo. Where's the motivating narration, something to lure in the potential funder and pitch your product. Imagine the first iPhone to be marketed without Steve screaming 'amazing!', 'fascinating!', 'best phone!', 'revolutionary!', 'unparalleled!', 'beautiful!', 'masterpiece!', at the willing audience.

Good point.. I made this the first update rather than putting it at the end of the sizzle video. I'll probably add it to the end of the sizzle reel... Here is the video:
https://www.kickstarter.com/projects/85808410/typhonrt-video-suite-next-gen-video-apps-for-andro/posts/1016236

In the snippet with the local news reporter, the visual effect is not clear... and it's anti-Putin, which can be a turn off for people. Keep your video politically neutral Smiley

Wait, someone likes Putin? ;P OK, US / west coast bias here. I did think about this, but time ran out before I could reshoot. I'll see if I can swap out that section. The creative team I was working with advised against putting the text caption "Edit lower thirds in the field" before the news reporter, as I suggested. They said it was too technical a term. I dunno.. The clip is supposed to show that you can do lower thirds in the app without having to go back to desktop editing to get a reasonable news clip for posting online.

Thanks for your feedback for sure!
25  Games Center / WIP games, tools & toy projects / Re: TyphonRT Video Suite (Kickstarter) on: 2014-10-14 06:21:56
If your highest tier is $50, you're going to have a hard time. Where is the "$25,000 to have dinner with the dev and access to a private forum" tier? Tongue But seriously, there may be people/enthusiasts/diehards willing to spend more than you're asking for. Even if nobody is going to end up paying >$50, having higher tiers means you've got the guts to ask more - which might convince people funding just a tad more.

Good eye.. I'm adding those expanded reward tiers tomorrow morning before I blast the rest of the world. I actually had them all edited, but messed up saving them and they all disappeared..  Cry I've got to get the demo code for this weekend's presentation working tonight. :: sigh :: 16+ hour day here, if not longer!   Some of the higher reward offerings will include dinner at the studio (I'm actually a good cook too) and one-night and weekend-long immersive gaming sessions at the studio. I'm also going to offer a 10k reward (the max value possible) to custom-code a photo app using the effects preset editor with the HTC M8 depth camera. I've already got all the depth-mask effects code written in general. Sadly the HTC M8 only works with photos, but the backer will have exclusive access to that app for at least 6 months and can brag to all their friends.   Roll Eyes  There is also a launch party, with tickets starting at the $100 tier.

With any luck this weekend at the Big Android BBQ I'll be getting an in person demo in front of several decent Android blog folks and any posts like that will definitely help get eyes on the site.

Also have www.typhonvideo.com up... First time getting things up on a CDN and getting video backgrounds working.

26  Discussions / General Discussions / Big Android BBQ (Texas) & AnDevCon (SF) conferences on: 2014-10-14 04:31:25
Hey folks...  Anyone going to the following conferences?

Big Android BBQ - http://www.bigandroidbbq.com/  (This week!)

or

AnDevCon - http://www.andevcon.com/ (November)

I'll be at both and am presenting on modern graphics / video engine development with OpenGL. The topics are of course Android related, and I'll be covering a lot of broad material, from OpenGL ES 3.0 support (the foundation of modern engine development) to OpenGL full profile on mobile. Given time for a deeper dive, I'll highlight compute shaders, the OpenGL ES 3.1 functionality that is imminently available on Android.

The Big Android BBQ presentations might be streamed / recorded. I'm speaking there if the schedule doesn't change Saturday at 12:30.  I present Friday late morning at AnDevCon in November.

You better believe I'll be beating the street promoting my recently posted Kickstarter at these two events: https://www.kickstarter.com/projects/85808410/typhonrt-video-suite-next-gen-video-apps-for-andro

I made 5k business-card-sized promo flyers and will hearken back to my past as an underground electronic music event promoter with a bit of hustle.
27  Games Center / WIP games, tools & toy projects / TyphonRT Video Suite (Kickstarter) on: 2014-10-14 03:51:22
I've finally launched the Kickstarter for the next-gen suite of video apps I'm building for Android. The marquee launch app combines creative video capture with an open-ended effects composition preset editor. I spent a good deal of time and effort refining the info & promo video. Be sure to check out the Kickstarter here for expanded info:

https://www.kickstarter.com/projects/85808410/typhonrt-video-suite-next-gen-video-apps-for-andro-0

Check out the Kickstarter page for the promo video! Let me know what you guys think. Cheesy

Some promo shots:

28  Discussions / Miscellaneous Topics / Re: Something like libgdx for business logic on: 2014-10-10 08:17:32
Just use RoboVM directly. They had a session at this year's JavaOne and mention a test app for Android / iOS that is in their GitHub samples repo.
http://blog.robovm.org/2014/10/robovm-at-javaone-2014.html
29  Discussions / Miscellaneous Topics / Re: Is drinking half a bottle of vodka safe? on: 2014-10-10 03:18:41
A wise and literary-minded roommate of mine in college wrote many funny quotes about Java on my whiteboard back in '98... The top-right-most one, though, applies to vodka...

"Vodka is like a debugging tool for the soul; it takes it apart line by line and shows you what to fix."

-- Luke Peacock


30  Discussions / General Discussions / Re: Worried on: 2014-10-09 16:43:59
Learning multiple languages is handy because it reveals the patterns common to programming. For instance, one of the basic things you do in most languages is read data in from a file / file system. Once you have a catalog of the ~10 most common things you need to do to accomplish X, picking up a new language and getting to X is usually a matter of a few checks in a book or online reference, and off you go. Syntax may change, but the essential programming concepts are shared across languages of a given class (imperative, functional, etc.); only the implementation details differ.
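As a tiny illustration of one of those common tasks, here's reading a text file in modern Java (the file name is hypothetical; the same concept maps to a one- or two-liner in most languages, so it's exactly the kind of thing a quick reference lookup covers):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Reading a text file -- one of the ~10 common tasks shared across languages.
public class ReadFileDemo
{
   public static void main(String[] args) throws IOException
   {
      Path path = Path.of("example.txt");            // hypothetical file name
      Files.writeString(path, "line one\nline two\n");

      List<String> lines = Files.readAllLines(path); // the whole file, one entry per line
      System.out.println(lines.size());

      Files.delete(path);                            // clean up the demo file
   }
}
```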