Show Posts


1

Discussions / Miscellaneous Topics / Re: What I did today

on: 2016-10-26 07:53:48

Started work on a logic engine for use in (my) games. I got the main design done and unification working.

  Parser p = new Parser();
  Term f1 = p.parseTerm("foo(X,b)");
  Term f2 = p.parseTerm("foo(a,Y)");
  System.out.println(f1.unify(f2));
  System.out.println(f1);
  System.out.println(f2);

gives:

  true
  foo(a,b)
  foo(a,b)

and Prolog-style lists:

  Term t1 = p.parseTerm("[Y,b,c]");
  Term t2 = p.parseTerm("[a|[X|[c|[]]]]");
  t1.unify(t2);
  System.out.println(t1);

gives:

  [a|[b|[c]]]

Next up are knowledge bases, rules and backtracking.



2

Game Development / Newbie & Debugging Questions / Re: Spotlights and decorator

on: 2016-09-14 20:40:33

It feels a bit mixed up.
I don't think the subclass RotatingSpotLight should exist. Rotation is a behaviour, so everything can be put in a "RotatingLightDelegate" that could be assigned an angle on construction.
The RotatingLightDelegate would replace the RotatingSplotLightDelegate, which is misnamed anyway, as it can apply to any light, not just spotlights.
RotatingSplotLightDelegate is probably more accurately named "MoveOnRenderDelegate". As an aside, updating on render is a bad idea (but not really relevant to the design pattern discussion).
The core idea that's missing is that any delegate can wrap any light (including other delegates). If you ever start making code that relies on specific pairs of delegates/lights being used then the design has gone wrong.
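The wrap-anything idea can be sketched roughly like this. All names here (Light, BasicLight, RotatingLightDelegate and its fields) are illustrative, not the thread's actual code; the point is only that the delegate implements the same interface it wraps, so delegates can stack on any light.

```java
interface Light {
    double getAngle();
    void update(double dt);
}

class BasicLight implements Light {
    private final double angle;
    BasicLight(double angle) { this.angle = angle; }
    public double getAngle() { return angle; }
    public void update(double dt) { /* a plain light does nothing on update */ }
}

// The delegate wraps ANY Light (including other delegates) and adds rotation.
class RotatingLightDelegate implements Light {
    private final Light inner;
    private final double speed; // radians per second (assumed units)
    private double offset;
    RotatingLightDelegate(Light inner, double speed) {
        this.inner = inner;
        this.speed = speed;
    }
    public double getAngle() { return inner.getAngle() + offset; }
    public void update(double dt) {
        inner.update(dt);          // forward to the wrapped light first
        offset += speed * dt;      // then apply this delegate's behaviour
    }
}
```

Because RotatingLightDelegate only knows about the Light interface, `new RotatingLightDelegate(new RotatingLightDelegate(someLight, 0.5), 0.25)` works too, which is exactly the "any delegate can wrap any light" property.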



3

Java Game APIs & Engines / OpenGL Development / Re: *AMD testing needed!* Vertex cache shenanigans

on: 2016-09-12 20:16:43

Some errors, but it seemed to run on an AMD APU:

  Vertex shader log: 0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES
  Fragment shader log: 0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES
  Link log: error: linking with uncompiled shader
  error: linking with uncompiled shader
  Batch size test invocations: 8129 / 3145728
  Calculated vertex cache batch size: 387
  Cache size 1 invocation test: 8129 / 3145728
  Cache size 2 invocation test: 16258 / 3145728
  Cache size 3 invocation test: 24387 / 3145728
  Cache size 4 invocation test: 32516 / 3145728
  Cache size 5 invocation test: 40645 / 3145728
  Cache size 6 invocation test: 48774 / 3145728
  Cache size 7 invocation test: 56903 / 3145728
  Cache size 8 invocation test: 65032 / 3145728
  Cache size 9 invocation test: 73161 / 3145728
  Cache size 10 invocation test: 81290 / 3145728
  Cache size 11 invocation test: 89419 / 3145728
  Cache size 12 invocation test: 97548 / 3145728
  Cache size 13 invocation test: 105677 / 3145728
  Cache size 14 invocation test: 113806 / 3145728
  Cache size 15 invocation test: 398303 / 3145728
  Cache size 16 invocation test: 699062 / 3145728
  Cache size 17 invocation test: 3145728 / 3145728

  Results:
  Renderer: Gallium 0.4 on AMD KAVERI (DRM 2.43.0, LLVM 3.8.0)
  Calculated vertex cache batch size: 387
  Cache size: 16



4

Discussions / Miscellaneous Topics / Re: What I did today

on: 2016-09-05 08:33:08

It's very realistic. Olaf Ragnarson lives there and he's one scary dude with a big axe. It's safest to take the scenic route.

A simple brute-force way would be to run the path generation several times and then pick the result with the least number of paths. Another way would be to prune paths with connections to mandatory roads last. If I calculate the total distance to a point in the graph, I can then add paths back in and see which addition lowers the total distance most. But hey, the Vangardians don't even walk on the paths yet.

You're looking for something between a Shortest Path Tree (efficient movement) and a Minimum Spanning Tree (least road construction). If your villages are small enough for a quadratic algorithm then I'd do:
- for every pair of locations
- add the shortest path between them, where existing paths cost 0.1 and non-paths cost 1.0
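The pair-wise scheme described above could look something like this minimal sketch. The RoadBuilder name, graph encoding and exact costs are my assumptions; it just runs Dijkstra where already-built road edges cost 0.1 and new ground costs 1.0, so later pairs prefer to reuse earlier roads.

```java
import java.util.*;

public class RoadBuilder {
    // Undirected edge -> canonical long key, so a HashSet<Long> can mark
    // which edges already carry a road.
    static long key(int a, int b) {
        return (long) Math.min(a, b) * 1_000_000L + Math.max(a, b);
    }

    // Dijkstra where existing roads cost 0.1 and unbuilt edges cost 1.0.
    static List<Integer> shortestPath(int n, int[][] edges, Set<Long> roads,
                                      int src, int dst) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        double[] dist = new double[n];
        int[] prev = new int[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        Arrays.fill(prev, -1);
        dist[src] = 0;
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        pq.add(new double[]{0, src});
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int u = (int) top[1];
            if (top[0] > dist[u]) continue;           // stale queue entry
            for (int v : adj.get(u)) {
                double w = roads.contains(key(u, v)) ? 0.1 : 1.0;
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    prev[v] = u;
                    pq.add(new double[]{dist[v], v});
                }
            }
        }
        LinkedList<Integer> path = new LinkedList<>();
        for (int at = dst; at != -1; at = prev[at]) path.addFirst(at);
        return path;                                   // src..dst, inclusive
    }
}
```

To build the whole network, you would loop over all location pairs, take the returned path, and add its edges to `roads` before handling the next pair.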



5

Games Center / Contests / Re: The OpenTafl Computer Tafl Open - An Abstract Strategy AI Tournament

on: 2016-08-18 08:54:09

Eeeeeeeeek! My entry will serve the purpose of making everyone (else) a winner. The fellow who runs playtaflonline.com whipped up a bot in a day or two which is reliably better than OpenTafl, so I'll be right there with you at the back of the pack. Also, your AI might just end up being the kryptonite to someone else's. Right now my AI looks like it makes smart moves, but if you just aggressively capture pieces it really starts to fall apart.



6

Games Center / WIP games, tools & toy projects / Re: OpenTafl: old terminal graphics meet Old Norse board games

on: 2016-07-27 08:36:54

> Could you explain the AI that goes behind this? The class structure and methods, really. How are you going about this?
That would be quite a long explanation... but in brief, the important classes:
- Board: maintains game state (positions of pieces) using minimal memory. Provides useful functions that make some assumptions about rules, e.g. get all open spaces vertically and horizontally from a given piece, or from all pieces of a certain colour. Also tests whether the simple win conditions have been met: white in a corner, or king captured.
- "Heavy" Board: a subclass of Board that adds more information that can be inferred from the game state, e.g. counts of pieces, whether white is surrounded, which pieces are threatened (can be captured in one turn) etc. Ideally the extra information should be able to be modified incrementally when a piece is moved.
- Transition/HeavyTransition: classes that can apply moves to a Board and can undo moves.
- Evaluator: an interface that provides a function for evaluating who is winning in a given Board position. The simplest one says 0 when black has won, 1 when white has won, and 0.5 otherwise. This is where heuristics can be implemented.
- Search: an interface that takes an initial board state, an Evaluator and a Transition factory (that produces valid transitions for a given Board), and returns the move that produces the best-looking Board. I have two implementations:
  - IDDFS (iterative deepening depth-first search) using min/max with alpha/beta pruning. As this is depth-first, adding and undoing moves as it goes, memory is no issue. You can leave this running for days to evaluate positions if you want, so it is good for testing the correctness of other search strategies.
  - MCTS (Monte Carlo tree search). This can be much better than IDDFS if you can get good playouts. That requires selecting fairly reasonable moves from any board position. Without lots of examples or good knowledge of the game this is pretty hard, so I decided to shelve this strategy.

There are other bits that get added to improve things. For example, visited sets so you can reuse values when you encounter the same state multiple times, and a function for classifying moves as interesting/uninteresting that can be used either for MCTS playouts or for ordering the search in IDDFS. Plus plenty of heuristics that appear as concrete Evaluators.

The lightweight states almost always lose out to heavyweight states coupled with computationally expensive heuristics. Basically you should always do more work and search less. That seems to hold for Tafl, at least for me. On my anemic PC I get 400k nodes per second searched using a vanilla board, and about 30k/second with heavy boards and heuristics. The slower search absolutely clobbers the faster one in games. It only gives up about 1 or 2 depth but has far better evaluation of position.
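The min/max-with-alpha/beta core, stripped of all the Tafl-specific classes, looks roughly like this. A toy explicit game tree stands in for Board/Transition/Evaluator here; the Node name and tree encoding are mine, not the actual OpenTafl-bot code.

```java
import java.util.*;

class Node {
    double value;                        // leaf evaluation (Evaluator's job)
    List<Node> children = new ArrayList<>();
    Node(double v) { value = v; }
    Node(Node... kids) { children.addAll(Arrays.asList(kids)); }
}

public class AlphaBeta {
    // Classic minimax with alpha/beta pruning over an explicit tree.
    // In the real thing, "children" would be generated by a Transition
    // factory and undone after each recursive call.
    static double search(Node n, double alpha, double beta, boolean maximizing) {
        if (n.children.isEmpty()) return n.value;     // evaluate leaf
        if (maximizing) {
            double best = Double.NEGATIVE_INFINITY;
            for (Node c : n.children) {
                best = Math.max(best, search(c, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (beta <= alpha) break;             // prune remaining siblings
            }
            return best;
        } else {
            double best = Double.POSITIVE_INFINITY;
            for (Node c : n.children) {
                best = Math.min(best, search(c, alpha, beta, true));
                beta = Math.min(beta, best);
                if (beta <= alpha) break;
            }
            return best;
        }
    }
}
```

Iterative deepening then just calls this with increasing depth limits, keeping the best move from the deepest completed pass.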



9

Games Center / Contests / Re: The OpenTafl Computer Tafl Open - An Abstract Strategy AI Tournament

on: 2016-05-31 14:05:40

I've encountered a bug in the latest build (the one after 0.2.5b). When playing human (attackers) vs AI (defenders), the board state sent with opponentmove commands stops including the king after the first capture.
E.g.

  move /4tttt3/5t5/3t7/t4T4t/t3TTT3t/tt1TTKTT1tt/t1T3T3t/t4T4t/4Tt5/11/3ttttt3/
  opponentmove d11d9 /4tttt3/5t5/3t7/t4T4t/t3TTT3t/tt1TT1TT1tt/t1T3T3t/t4T4t/3t1t5/11/4tttt3/
The opponentmove captured a piece at d10 (not the king), but the king still disappeared from the next board state.



11

Game Development / Artificial Intelligence / Re: Hidden Markov Models, how to train using Baum-Welch?

on: 2015-10-19 19:45:36

> The entire problem is taking in a stream of emissions/observations and predicting the next emission. The underlying states are irrelevant.

You mean it's irrelevant to your post because they've given you a structure? Out of interest, how many states are there, and how many transitions with non-zero probability? If it's up to you to make the structure then this is a pretty critical step.

> Baum-Welch seems to suck. If I repeatedly train my HMM with the entire sequence of emissions/observations, it eventually explodes into a HMM filled with 0s. The scarce information I've found says that this is due to overfitting.

I don't think that should be possible. The transition probabilities out of each state have to sum to 1, though that includes the "stay" transition back into the state itself. Overfitting can generate some states with only a single non-zero-probability transition in and out, but you'd either need to have tons of states or be training it on the same sequence again and again.

> What is the optimal way of training an HMM given a stream of data?

Baum-Welch is pretty good. I'd stick with that until you get it to work.

> Given the sequence of emissions [a, b, c], to calculate the probability of each possible next emission I compute the probability of the emission [a, b, c] occurring given the current HMM state and then the probabilities of [a, b, c, <X>] occurring, where X is each possible emission. The conditional probability of each emission is then p([a, b, c, <X>]) / p([a, b, c]). Is this the correct way of calculating the probability of the next emission?

After seeing [a,b,c] you could be in any of your states. Call the set of states <N>. You first need to calculate the probability of being in each n in <N>: P([a,b,c,n]). This is the "forward" step that you'll find documented (Wikipedia is fine for this). In each of the states there will also be a probability of emitting each x in <X>: P(x|n). That's straight from the current model parameters.
For an emission x, P([a,b,c,x]) is the sum over all the ways x could be emitted, i.e. the sum of P([a,b,c,n])*P(x|n). If you follow through with the method above to predict the next state I think you'll be doing Forward/Backward. If all you want is the single most likely state then it's much simpler to do Viterbi, which is just plain old dynamic programming applied to the HMM (and, for me at least, is easier to understand).
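A rough numeric sketch of the forward step and the next-emission sum: the 2-state, 2-symbol model below is invented for illustration, and I've taken one transition step before emitting (conventions differ on whether a state emits before or after transitioning, so treat this as one possible reading).

```java
public class HmmForward {
    // Made-up 2-state, 2-symbol model for illustration only.
    static double[][] A  = {{0.7, 0.3}, {0.4, 0.6}};  // transition probs
    static double[][] B  = {{0.9, 0.1}, {0.2, 0.8}};  // emission probs
    static double[]   pi = {0.5, 0.5};                // initial state probs

    // alpha[n] = P(observations so far, current state = n): the forward step.
    static double[] forward(int[] obs) {
        double[] alpha = new double[pi.length];
        for (int n = 0; n < pi.length; n++) alpha[n] = pi[n] * B[n][obs[0]];
        for (int t = 1; t < obs.length; t++) {
            double[] next = new double[pi.length];
            for (int m = 0; m < pi.length; m++)
                for (int n = 0; n < pi.length; n++)
                    next[m] += alpha[n] * A[n][m];    // sum over previous states
            for (int m = 0; m < pi.length; m++) next[m] *= B[m][obs[t]];
            alpha = next;
        }
        return alpha;
    }

    // P(next emission = x | observations): normalise alpha, take one
    // transition step, then sum the emission probabilities.
    static double[] predictNext(int[] obs) {
        double[] alpha = forward(obs);
        double total = 0;
        for (double a : alpha) total += a;
        double[] p = new double[B[0].length];
        for (int x = 0; x < p.length; x++)
            for (int m = 0; m < pi.length; m++)
                for (int n = 0; n < pi.length; n++)
                    p[x] += (alpha[n] / total) * A[n][m] * B[m][x];
        return p;
    }
}
```

Note that the normalisation by `total` plays the role of dividing by p([a,b,c]) in the quoted question.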



13

Discussions / Miscellaneous Topics / Re: advanced 'Knapsack problem'

on: 2015-04-22 12:46:33

> This also sounds like something you could potentially solve with a theorem prover like Z3.
Yeah, this is an SMT problem: http://en.wikipedia.org/wiki/Satisfiability_modulo_theories. What you are doing is called constraint programming. If you really want to roll your own, I'd do it this way:
- first make a solver that finds a solution to the hard (boolean) constraints
- generate a single valid solution
- traverse the space of valid solutions, testing the overall value of the soft constraints as you go
- after some time, or after exhausting the search space, return the best result found so far

From your description of the problem it shouldn't be too hard to define a function that takes a valid solution and generates similar, valid solutions for the traversal, so this kind of approach should be OK. If you can make the first step non-deterministic then keep repeating with greedy searches from new valid start points: this is much less prone to getting stuck in local optima than evolutionary algorithms and will be quicker too.
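Those steps might look something like this on a toy instance. The weights, values and cap are made up: the hard constraint is staying under the weight cap, the soft objective is total value, and neighbouring solutions are single-item flips.

```java
import java.util.Random;

public class ConstraintSearch {
    // Made-up toy instance: pick items under a weight cap.
    static final int[] WEIGHT = {3, 4, 5, 2, 6};
    static final int[] VALUE  = {4, 5, 6, 3, 8};
    static final int CAP = 10;

    // Hard constraint: total weight must not exceed the cap.
    static boolean valid(boolean[] pick) {
        int w = 0;
        for (int i = 0; i < pick.length; i++) if (pick[i]) w += WEIGHT[i];
        return w <= CAP;
    }

    // Soft objective: total value of picked items.
    static int score(boolean[] pick) {
        int v = 0;
        for (int i = 0; i < pick.length; i++) if (pick[i]) v += VALUE[i];
        return v;
    }

    static int search(int restarts, long seed) {
        Random rnd = new Random(seed);
        int best = 0;
        for (int r = 0; r < restarts; r++) {
            // 1) generate a single valid solution (empty first, then random)
            boolean[] pick = new boolean[WEIGHT.length];
            if (r > 0) {
                for (int i = 0; i < pick.length; i++) {
                    pick[i] = rnd.nextBoolean();
                    if (!valid(pick)) pick[i] = false;  // stay valid throughout
                }
            }
            // 2) greedily traverse neighbouring valid solutions (single flips)
            boolean improved = true;
            while (improved) {
                improved = false;
                int cur = score(pick);
                for (int i = 0; i < pick.length; i++) {
                    pick[i] = !pick[i];
                    if (valid(pick) && score(pick) > cur) {
                        cur = score(pick);
                        improved = true;
                    } else {
                        pick[i] = !pick[i];             // revert the flip
                    }
                }
            }
            // 3) keep the best result found across restarts
            best = Math.max(best, score(pick));
        }
        return best;
    }
}
```

The random restarts are exactly the "non-deterministic first step, then greedy search" idea from the post.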



14

Game Development / Shared Code / Re: Curve fitting

on: 2015-04-01 16:08:27

I find the easiest way to understand the DFT is as least squares: http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29. You're finding the best coefficients for a set of sinusoids so that their sum is closest to your data. By using sufficient sinusoids of different frequencies these form a basis for all 1D signals, so you can get a perfect answer. The X in that page is the matrix of all the sinusoids (one per column). beta holds the coefficients; they're complex, so you get magnitude and phase. y is the data (your 1D signal).
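A tiny sketch of that view: because the sinusoid "columns" are orthogonal, the least-squares coefficient for each one reduces to projecting the data onto it, which is exactly the DFT sum. The signal and sizes below are invented for illustration.

```java
public class DftAsLeastSquares {
    // One DFT coefficient: the projection of y onto the complex sinusoid at
    // frequency k. Returns {real, imaginary}.
    static double[] dftCoefficient(double[] y, int k) {
        int n = y.length;
        double re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            double ang = 2 * Math.PI * k * t / n;
            re += y[t] * Math.cos(ang);
            im -= y[t] * Math.sin(ang);
        }
        return new double[]{re, im};
    }

    // Made-up test signal: amplitude 2 at frequency k = 3, n samples.
    static double[] makeSignal(int n) {
        double[] y = new double[n];
        for (int t = 0; t < n; t++)
            y[t] = 2.0 * Math.cos(2 * Math.PI * 3 * t / n);
        return y;
    }
}
```

For a real signal, amplitude at frequency k is recovered as 2*|coefficient|/n, and the projection onto any other integer frequency comes out (numerically) zero, which is the orthogonality that makes the least-squares solution exact.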



17

Discussions / Miscellaneous Topics / Re: What I did today

on: 2015-02-16 22:26:51

> I finally have a solid initial implementation of a goal planning system. It allows for entities to have goals and subgoals and to replan as circumstances change. A lot of the GOAP material available is fairly complicated from an implementation side, and it took a while for me to get to a simpler system that meets my needs.

I worked on a planning system too, but mine's overly complicated. Today I fixed some bugs exposed by the fall-in-love and decide-sexuality actions.



19

Game Development / Game Play & Game Design / Re: The impossible 15-Puzzle

on: 2015-02-12 21:52:02

My apologies, there was a bug after all. Using "0" to represent the gap makes the indexing a bit confusing, and the even/odd state check was wrong.

  public static boolean isSolvable(int[] state){
      int[] positions = new int[16];
      for(int i = 0; i < state.length; i++){
          positions[state[i]] = (i+1)%16;
      }
      int row = (positions[0]-1)/4;
      int col = (positions[0]-1) - row*4;
      boolean isEvenState = positions[0] == 0 || row % 2 == col % 2;
      int evenCount = 0;
      boolean[] visited = new boolean[16];
      for(int i = 0; i < positions.length; i++){
          if(visited[i]) continue;
          int cycleLength = 0;
          int nextTile = i;
          while(!visited[nextTile]){
              cycleLength++;
              visited[nextTile] = true;
              nextTile = positions[nextTile];
          }
          if(cycleLength % 2 == 0) evenCount++;
      }
      return isEvenState == (evenCount % 2 == 0);
  }
If you have code that makes a whole lot of random swaps from the goal that would be a good way to test the correctness.



20

Discussions / Miscellaneous Topics / Re: What I did today

on: 2015-02-12 14:02:18

> For someone normal, living is food and a warm house and time for doing what he loves to do.

Aged 17 to 20? At that age, you might be at uni and all you need is enough money for a roof over your head, some food and beer. You get a little older and you want to travel. You just need enough money for a roof over your head, food, beer, a plane ticket per year and some hostel fees. You want to go out a little more with your girlfriend or boyfriend. Roof, food, beer, travel, and a bit of entertainment each week. You get a little older and want to have a house (no point paying rent forever). So you need to add a few thousand savings per year for a deposit. You get a little older and have a kid. Drop the beer and travel. Roof, food, savings, family entertainment and a good education for your child. If you want to be an indie game developer, chances are you'll have to go back through that last list and choose at least two things to throw out. That's "half a living".



21

Game Development / Game Play & Game Design / Re: The impossible 15-Puzzle

on: 2015-02-12 08:52:34

One thing I should have mentioned: I was assuming the blank space would be at the bottom right. Everything above is still correct, but the rule "even number of even cycles -> solvable" holds when the blank space is at an even position (e.g. bottom right is position 16; in your other example the blank space is at position 2). When the blank space is in an odd position, the rule is "odd number of even cycles -> solvable". If I find time today I'll put it in code.

Edit: here you go:

  public static boolean isSolvable(int[] state){
      int[] positions = new int[16];
      for(int i = 0; i < state.length; i++){
          positions[state[i]] = i;
      }
      boolean isEvenState = positions[0] % 2 == 0;
      int evenCount = 0;
      boolean[] visited = new boolean[16];
      for(int i = 0; i < positions.length; i++){
          if(visited[i]) continue;
          int cycleLength = 0;
          int nextTile = i;
          while(!visited[nextTile]){
              cycleLength++;
              visited[nextTile] = true;
              nextTile = positions[nextTile];
          }
          if(cycleLength % 2 == 0) evenCount++;
      }
      return isEvenState == (evenCount % 2 == 0);
  }

Examples:

Is the goal state solvable?

  System.out.println(isSolvable(new int[]{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0}));

"true"

Is one move from the goal state solvable (having moved tile 15)?

  System.out.println(isSolvable(new int[]{1,2,3,4,5,6,7,8,9,10,11,12,13,14,0,15}));

"true"

If you swap tiles 14 and 15, is it solvable?

  System.out.println(isSolvable(new int[]{1,2,3,4,5,6,7,8,9,10,11,12,13,15,14,0}));

"false"

Is your other example solvable?

  System.out.println(isSolvable(new int[]{1,0,3,4,6,2,11,10,5,8,7,9,14,12,15,13}));

"false"

Those are the only tests I've done, but the code seems to be bug-free.



22

Game Development / Game Play & Game Design / Re: The impossible 15-Puzzle

on: 2015-02-11 14:23:08

Here's how the counting goes:
- The cycle starting at 1 is (1), because it is already in the correct location. Even cycle count is still 0.
- The cycle starting at 2 is (2 6 5 9 12 14 13 *). Even cycle count is now 1.
- The cycle starting at 3 is (3). Even cycle count is still 1.
- The cycle starting at 4 is (4). Even cycle count is still 1.
- The cycle starting at 7 is (7 11). Even cycle count is now 2.
- The cycle starting at 8 is (8 10). Even cycle count is now 3.
- The cycle starting at 15 is (15). Even cycle count is still 3.

There are no more tiles that aren't in a cycle, so we are done. The number of even cycles is 3, which is odd, so the arrangement is not solvable.

To get the long cycle, you look at the first tile, which is 2. Its position in the arrangement is the one that 6 should be in in the solution, so 6 is next in the cycle. Then we look at tile 6. Its position in the arrangement is the one that 5 should be in in the solution, etc. When we get to the blank spot *, it is in the position that 2 should be in in the solution, but we've already processed 2, so the cycle has finished.

Hope that helps.

Edit: Your last example has cycles (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14 15)(*). There is one even cycle. That's an odd number, so not solvable either.



23

Game Development / Game Play & Game Design / Re: The impossible 15-Puzzle

on: 2015-02-10 20:40:50

The first link you posted with the permutations is probably the easiest test. In their first 15-puzzle example they describe the board position as:

  (1 11 10 9 2 3 4 7 12 13 8 * 6 15)(5 14)

The first set of parentheses says that tile 1 is in the position that would have 11 in the solution, 11 is in the position that 10 would have, 10 is in position 9, etc. The second set of parentheses says that tile 5 is in position 14 and tile 14 is in position 5. You can count the size of these cycles by picking one tile to start with and just following the trail of permutations, then picking an unvisited tile and repeating until no tiles are left unvisited. Count the number of cycles that have an even number of tiles in them. If this count is even, your arrangement is solvable.



25

Game Development / Newbie & Debugging Questions / Re: Moving player tile-to-tile (A* Pathfinding finally done)

on: 2014-09-08 21:10:36

> (Near-)zero cost is not required; the effect is already observable if we have a parallel dirt road and paved road. Let's say the edges of the dirt road have cost 1.1 and the paved road has cost 0.9, and all other edges (grass) have a cost of 2.0.

Hmm.. actually I think we're in disagreement on this bit. The heuristic distance I'd use is Manhattan*0.9, because the most optimistic estimate of the distance is that there is some direct path by paved road. A* using this will expand more nodes than if it used unweighted Manhattan distance, but it will be optimal again. Regardless, at least the OP decided on a solution in the meantime.



27

Game Development / Newbie & Debugging Questions / Re: Moving player tile-to-tile (A* Pathfinding finally done)

on: 2014-09-08 14:41:00

> Oh, and A* is not guaranteed to yield the fastest path, as it is heavily biased to a path that lies on the line src→dst; it might not even visit the 'instant portal' only a few cells away (in the opposite direction). Dijkstra would handle this odd case just fine, at the cost of worse performance in the general case.
If there are portals then instead of just using Euclidean distance from A to B for the heuristic, use (Euclidean distance from A to nearest portal) + (Euclidean distance from B to nearest portal) if that is shorter. Then A* won't be incorrectly biased and will still guarantee the shortest path.
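That heuristic is only a few lines. The sketch below assumes a single portal pair with free travel through it and checks both directions through the pair; names and signature are mine, not from the thread.

```java
public class PortalHeuristic {
    // Admissible A* heuristic with one portal pair at (pax,pay) <-> (pbx,pby):
    // the cheapest conceivable route is either straight to the goal, or
    // straight to one portal end and out of the other (portal travel free).
    static double h(double ax, double ay, double bx, double by,
                    double pax, double pay, double pbx, double pby) {
        double direct = Math.hypot(ax - bx, ay - by);
        double viaPortal = Math.min(
            Math.hypot(ax - pax, ay - pay) + Math.hypot(bx - pbx, by - pby),
            Math.hypot(ax - pbx, ay - pby) + Math.hypot(bx - pax, by - pay));
        return Math.min(direct, viaPortal);
    }
}
```

With several portals you would take the minimum over all pairs; it never overestimates the true cost, so A* stays optimal.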



30

Game Development / Newbie & Debugging Questions / Re: Stuck in recursion hell!

on: 2014-08-28 07:36:36

I'd suggest you move to using a queue and perform a breadth-first fill.
Tail recursion is just depth-first traversal where you use the program stack instead of explicitly using a stack object. This makes the code more succinct and maps nicely to recurrence relations, but doesn't buy you much else.
For the breadth-first fill, start by putting all the sources in the queue, then just dequeue and process them one by one as you are now.
Edit: you can do fancier stuff with non-tail recursion, but that doesn't really apply here.
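The queue-based fill described above might be sketched like this; the grid encoding and names are assumptions. Seeding the queue with all sources at once gives each cell its distance to the nearest source, with no recursion depth to worry about.

```java
import java.util.*;

public class FloodFill {
    // Multi-source breadth-first fill. Returned grid: -1 = unreachable,
    // otherwise distance in steps from the nearest source.
    static int[][] fill(int rows, int cols, int[][] sources) {
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        for (int[] s : sources) {          // seed the queue with ALL sources
            dist[s[0]][s[1]] = 0;
            queue.add(s);
        }
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {         // dequeue and process one by one
            int[] c = queue.poll();
            for (int[] d : dirs) {
                int r = c[0] + d[0], col = c[1] + d[1];
                if (r >= 0 && r < rows && col >= 0 && col < cols
                        && dist[r][col] == -1) {
                    dist[r][col] = dist[c[0]][c[1]] + 1;
                    queue.add(new int[]{r, col});
                }
            }
        }
        return dist;
    }
}
```

Blocked cells, if you have them, would just be marked visited up front so the fill flows around them.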






