Looking at your normalized top 20, I might agree that it should be higher than some of the other games there, or that some of those games should be lower.
Before the judges submit their final scores, they should be able to view a sorted list of their individual results. Or why not let the judges simply order the games, as opposed to assigning a number to each? The ordering itself could establish a score. For instance, with 50 games, the lowest in the sequence would be assigned 0% and the highest 100%, with everything else scoring about 2% higher than its predecessor. I think that would have prevented Frog Solitaire and Rainbow Road from sharing the coveted 20th place.
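To make that concrete, here's a minimal sketch of the mapping I mean, assuming evenly spaced scores from worst to best. The function and game names are just illustrative, not part of any real contest tooling:

```python
def rank_to_scores(ordered_games):
    """Map a judge's ordering of games (worst first) to scores from 0% to 100%."""
    n = len(ordered_games)
    if n == 1:
        return {ordered_games[0]: 100.0}
    step = 100.0 / (n - 1)  # with 50 games this is ~2.04% per step
    return {game: round(i * step, 2) for i, game in enumerate(ordered_games)}

if __name__ == "__main__":
    ranking = ["Game C", "Game A", "Game B"]  # worst to best
    print(rank_to_scores(ranking))  # {'Game C': 0.0, 'Game A': 50.0, 'Game B': 100.0}
```

With this scheme no two games in one judge's list can tie, since every position gets a distinct percentage.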
I'm not sure what "20th place" you're talking about.
Rainbow Road got 2nd place from the judges, not 20th, and scored only 0.9% less than the game in 1st place. If your game scored 90%+, you have a superb game. I don't see how that is unjust. (Perhaps this is the reason we don't show normalized scores from individual judges: one judge can be more wrong than a handful of them. That's why some judiciary systems have multiple judges, so the "right" judges can cancel out the "wrong" ones.)
You have to realize the grades are subjective and the games are reviewed over a period of a couple of weeks, but most of all, there were almost 70 games. Keeping knowledge of all of them at the tip of your mind so you can quickly order them, as you suggest, is pretty hard. You might have a handful of favorite games, but we have to weigh nearly 70.
I have already suggested a new judging method based on classification buckets, which I may implement for next year. So there's no need to create more complex grading rules that, at the end of the day, may not change the final results at all.
Also (not speaking of Rainbow Road here), I've re-played many of the games that some thought were mistreated by the judges, and I can't say there were many mistakes that would affect the top 10. Maybe places 10-20, and most likely 20-40. It's easier to say which game is best and which is worst than to arrange the games in the middle in absolute order of how good they are.
Of course mistakes do happen, but with games like 4k games, the first impression is often the right one. You also have to consider that we have to view these games as a casual player would... if you have to read the manual, if it just doesn't play intuitively from the start, then casual players won't bother with it.
Judges are also under time constraints, allocating only limited time to each game.
The only reason for a judge to spend more than 20-30 minutes on a game is that it's so fun and addictive. If he's struggling to understand it and spending a lot of time on it for that reason, then there's something wrong with the game, not the judge. That's the reality: even if you understand your own game perfectly, the same isn't true for others, whether judges or casual players. (And in my case with Rainbow Road, it was easy to start playing, but after a while it became quite pointless, with no encouragement or challenge to continue.)
I say: if you have to read the instructions to play, you can improve the presentation and how intuitive your game is. If you have to keep reading the instructions while playing, you've done something wrong.
A few of Apo's games were probably the most "mistreated", but other than that, not much else would affect the final result.
I'll be sure to invite you onto the judging panel for next year.
