Someone at work once set up a Robocode tournament, and we spent a paid day playing with it.
Most people went for an "if-based" strategy: if close to a wall, turn; if pointing towards an enemy, shoot; and so on. Once that works, you layer on more rules and geometric calculations.
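Such an "if-based" strategy boils down to a priority list of rules checked each tick. A minimal sketch (thresholds and names are invented for illustration, not tied to the real Robocode API):

```java
// A minimal "if-based" bot brain: a priority list of rules, evaluated
// top to bottom each tick. All thresholds are arbitrary illustrations.
public class RuleBot {
    // distToWall: distance to the nearest wall; enemyBearing: degrees off
    // our gun heading, 0 = dead ahead. Returns the action for this tick.
    static String decide(double distToWall, double enemyBearing) {
        if (distToWall < 50.0) return "turn";            // if close to wall, turn
        if (Math.abs(enemyBearing) < 5.0) return "fire"; // if pointing at enemy, shoot
        return "ahead";                                  // otherwise keep moving
    }

    public static void main(String[] args) {
        System.out.println(decide(20.0, 90.0));  // turn
        System.out.println(decide(200.0, 2.0));  // fire
        System.out.println(decide(200.0, 90.0)); // ahead
    }
}
```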
I built mine using neural nets instead: one net that would learn opponents' movement patterns (offline) and predict their positions so I could shoot where they were going to be (since bullets take time to travel). That actually worked fairly well with a really simple net.
For movement, however, deep RL wasn't big yet, so I struggled a bit: what would I even train on? In the end I used genetic algorithms to train that part of the net. Not the best results, but it worked. Mind you, these were simple homemade nets; I'm not sure "deep learning" was even a term back then.
It's hard to write a good fitness function. I tried deducting fitness when the bot hit a wall; then it would just stand still. Then I tried rewarding movement; then it would just oscillate in one spot. And so on. GAs exploit every loophole, which is pretty interesting.
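As a toy illustration of that loophole (all weights and names invented): rewarding raw distance travelled lets an oscillating bot score highly, whereas rewarding net displacement over a window scores it near zero.

```java
// Hypothetical fitness sketch: a naive term a GA will exploit, and a
// patched version. Weights are made up for illustration.
public class Fitness {
    // Naive: reward raw distance travelled, penalize wall hits. A GA can
    // maximize this by oscillating on the spot: lots of "movement",
    // zero wall hits -- exactly the loophole described above.
    static double naive(double distanceTravelled, int wallHits) {
        return distanceTravelled - 50.0 * wallHits;
    }

    // Patched: reward *net displacement* over a time window instead of
    // raw movement, so oscillating in place scores roughly zero.
    static double patched(double netDisplacement, int wallHits) {
        return netDisplacement - 50.0 * wallHits;
    }

    public static void main(String[] args) {
        // Oscillating bot: travels 400 units back and forth, ends where it started.
        System.out.println("naive oscillator:   " + naive(400.0, 0));  // 400.0
        System.out.println("patched oscillator: " + patched(0.0, 0));  // 0.0
    }
}
```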
Great fun. Great tool to explore programming at all levels.
At my alma mater we, the students, used to organize a one week "Introduction To Programming" course based on Robocode. It was meant to teach freshmen the very basics of Java before they'd have their first classes.
They'd have four days to come up with strategies and develop their robots and on Friday there was a tournament between all the developed robots. The best teams got a non-trivial prize (and bragging rights).
It was really interesting to see which strategies the students came up with and how they changed from year to year. The amount of material provided really made a much larger difference than one might expect.
Among the interesting challenges of Robocode is that the bullets you shoot are so slow that your opponent has ample time to dodge, so you need to predict their movement. On the other hand you can't see where your opponent is shooting so you need to guess and move elsewhere. It's really quite fun.
I have very, very fond memories of playing https://en.wikipedia.org/wiki/ChipWits on an original 128K Mac at school. One of the best programming games I've ever had the pleasure of playing.
This reminds me of Omega [1] from 1989. One of the more fun games I've played. I was thinking about giving it a go again and seeing how well I do.
[1] - https://en.wikipedia.org/wiki/Omega_(video_game) - https://corewar.co.uk/omega/files/misc/omegamanual.pdf
I was always impressed with Robocode coders, but I always wished the team meta had developed further.
One tank is hard enough, but team games where radar-free drones have more HP seem like they could produce interesting strategies.
The radar system in this game is clever, covering at most 22.5 degrees per tick. Tracking one enemy is easy enough, but getting a good view of the battlefield as a whole seems like a different game entirely.
I guess single-bot games and 1v1 are easier to test and write AIs for, and were more fun for a larger part of the community.
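A basic single-target radar lock under such a per-tick limit can be sketched as standalone math (not the actual Robocode API; the 22.5-degree figure is taken from the comment above and is an assumption):

```java
// Sketch of a simple radar lock: each tick, turn the radar toward the
// target's bearing, clamped to the per-tick limit. Angles in degrees.
public class RadarLock {
    static final double MAX_RADAR_TURN = 22.5; // per tick, as cited above

    // Normalize an angle to (-180, 180] so we always turn the short way.
    static double normalize(double angle) {
        double a = angle % 360.0;
        if (a > 180.0) a -= 360.0;
        if (a <= -180.0) a += 360.0;
        return a;
    }

    // How far to turn the radar this tick to point at targetBearing.
    static double clampTurn(double radarHeading, double targetBearing) {
        double offset = normalize(targetBearing - radarHeading);
        return Math.max(-MAX_RADAR_TURN, Math.min(MAX_RADAR_TURN, offset));
    }

    public static void main(String[] args) {
        // Target 90 degrees away: we can only close 22.5 degrees this tick.
        System.out.println(clampTurn(0.0, 90.0));    // 22.5
        // Target just 5 degrees off: snap straight onto it.
        System.out.println(clampTurn(350.0, 355.0)); // 5.0
    }
}
```

Keeping a lock on one bot this way is easy; sweeping the whole battlefield while doing it is where the "different game" begins.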
I loved Robocode; it was one of my first introductions to programming, as we used it in my high school programming class. I've had a pipe dream for a while now to build a browser-based, Robocode-inspired game. Not an exact clone, but a similar idea of programming bots to fight each other.
Maybe we can use it as a self-improvement exercise:
Carve out a day every year to write a brand new bot (no reuse) to see how it fares against older bots. Try to outsmart ourselves.
Oh man, I spent hours and hours on a similar game in the 90s: RoboWar. I learned so much playing that game.
See also BattleCode https://www.youtube.com/watch?v=x3a5dXaj-XA
A programming competition at MIT; it looks like they swap the engine out every couple of years. When I first encountered it, it was a StarCraft-style game. Now it looks like agents competing in a resource-constrained environment.
Alternative? - Rocket Bot Royale - https://rocketbotroyale.winterpixel.io
I have a little self-written version I use as an assignment for Scala students at UNE (Aus).
At the moment, using classic actors but it'll probably shift to typed actors next time around.
https://github.com/UNEcosc250/2018assignment3
I also put in a bit about "tankfighting with insults" a la Monkey Island, to try to give a little exercise in streams.
(Relatively safe to link because I'll be updating it next year anyway)
Some years ago, a colleague and I ran a software studio course at UQ where we used the original Robocode codebase as the starter project, and had teams adding action-replay, Call of Duty style killstreak rewards and all sorts of other odd features.
(Though the pain of the original Robocode Java codebase was that it had a 1,000-line class so central to everything that by the time students were done with it, it was a 3,000-line class. Our hope that it would be a "prime target for students to refactor" was thwarted by "turns out, students don't do that".)
It's not clear from the website's home page what the difference is from the original version hosted at SourceForge. This page explains it: https://robocode-dev.github.io/tank-royale/articles/tank-roy...
I won an unofficial competition at university back in the 1990s, where we competed using something called PascalRobots. The game was played as rounds of 1v1 matches. My victory came thanks to a simple trick: I noticed that certain algorithms did well against most algorithms but were very vulnerable to a few others, which in turn were very vulnerable to most of the rest. Basically a rock-paper-scissors problem. So I did a very simple thing: I programmed a robot that used one algorithm for the first 50% of its health, then switched to a very different one. The second one simply ran along the edges semi-randomly while scanning and shooting backwards. I remember the rules were such that every action, like scanning, took time, so you couldn't build a robot that pinpointed the enemy's exact location in the allotted time; you'd waste too many cycles on that.
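The switch itself can be sketched in a few lines (the 50% cutoff comes from the story; the names, the latch, and the use of Java rather than Pascal are my own invention):

```java
// Sketch of a health-threshold strategy switch.
public class SwitchBot {
    public enum Mode { PRIMARY, EDGE_RUNNER }

    private boolean switched = false;

    // Latch onto the second algorithm once health first drops to half,
    // even if health later recovers.
    public Mode pickMode(double health, double maxHealth) {
        if (health <= 0.5 * maxHealth) switched = true;
        return switched ? Mode.EDGE_RUNNER : Mode.PRIMARY;
    }

    public static void main(String[] args) {
        SwitchBot bot = new SwitchBot();
        System.out.println(bot.pickMode(100.0, 100.0)); // PRIMARY
        System.out.println(bot.pickMode(45.0, 100.0));  // EDGE_RUNNER
        System.out.println(bot.pickMode(60.0, 100.0));  // still EDGE_RUNNER (latched)
    }
}
```

The point of the latch is that an opponent tuned to beat the first algorithm suddenly faces a second, unrelated one mid-match.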
See also, with a similar concept: RoboForge [0][1] (2001), where you designed and programmed a fighting robot.
RoboForge also did another thing that I still haven't seen again, even two decades later in this microtransaction-laden world: paid tournaments!
You could enter your bot into online tournaments for free, or you could pay something like $5 to enter a paid tournament, and if you won you'd win a real-money prize. There's probably some huge legal issue around what pretty much amounts to gambling, but it was brilliant.
[0] http://www.roboforge.altervista.org/
[1] https://en.wikipedia.org/wiki/Roboforge
I have fond memories of Color Robot Battle from my childhood: setting up two robots and heading off to school while they spent hours fighting.
At university I got into AI, neural nets, and evolutionary algorithms. I wanted to develop my skills with these, so for a laboratory project I chose to build a Robocode bot. The idea was to learn how to evolve it rather than hardcode it. This video was quite an inspiration: https://youtu.be/Hp6bhARBGc4
I used a Java GA library called Watchmaker and trained my bot against the built-in bots AND against a previous year's world no. 1 bot.
At that time I could barely code cleanly and was mostly fighting the frameworks, so at one point evolution over three dimensions (acceleration, bodyTurn, turretTurn) was implemented, but perhaps the most important one, shooting, was not.
I ran out of time; I had to give a presentation the next day, so I just let it run. Surprisingly, within minutes the bot evolved to a point where it beat the ex-world-champion bot. First its win rate went above 50%, which was a shock, but later it reached 60%+ and sometimes even 80%. But how? How do you win without firing a single shot?
It turns out evolution and randomness can find things so counterintuitive you could basically never come up with them yourself. (I had a sense of that; that's why I chose this domain. ;D)
What I saw was that my bot had evolved a really simple but weird technique to dodge the champ's bullets: it found some kind of frequency, and at that frequency it just... accelerated back and forth (into negative velocity) and HIT THE WALL AGAIN AND AGAIN. Wat?
The champ bot's prediction was just too good, but it wasn't prepared for an enemy as stupid as my bot. It was basically impossible to predict that my bot would drop from V to 0 in zero time (by slamming into the wall), so the champ was almost constantly shooting ahead of or behind my bot, hitting the wall instead of the bot. If the wall hadn't been there, it would have hit almost 100% of the time.
Shooting costs energy, and after a given time both bots start losing energy at the same rate. My bot could win just by "backing off", not getting hit too often, and letting the enemy waste its energy.
Since that experience I couldn't care less about NNs (except neuroevolution); evolutionary algorithms ftw! :D
(I hope no one will create the AI killing us all thanks to my comment. :D )
Reminds me of sc2ai https://sc2ai.net/ where you can write bots for StarCraft II and let them battle it out against other bots.
Brings back a lot of good highschool memories :)
Wow, what a throwback! This game is the reason I got into programming. Thank you for reminding me that this exists!
You can program a tank, or team of tanks, to fight each other.
Each tank can move, scan other tanks and shoot.
Your tank has hit points and shooting a bullet decreases your hit points by the amount of firepower you used.
Bullets are particles that travel over time, so you have to shoot not at where other tanks are, but at where they are going to be, based on their last position seen on radar.
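That prediction can be sketched with simple iterative linear targeting. This assumes the enemy holds its last observed velocity, and uses the classic Robocode bullet-speed formula (20 − 3 × firepower); treat both as assumptions if your engine version differs.

```java
// Minimal linear-targeting sketch: assume the enemy keeps its last seen
// velocity, and lead the shot so bullet and tank arrive together.
public class Targeting {
    // Classic Robocode formula: heavier shots fly slower.
    static double bulletSpeed(double firepower) {
        return 20.0 - 3.0 * firepower;
    }

    // Returns the predicted (x, y) aim point, iterating a few times:
    // estimate travel time, advance the target, re-estimate.
    static double[] aimPoint(double myX, double myY,
                             double ex, double ey, double evx, double evy,
                             double firepower) {
        double speed = bulletSpeed(firepower);
        double px = ex, py = ey;
        for (int i = 0; i < 10; i++) {
            double t = Math.hypot(px - myX, py - myY) / speed;
            px = ex + evx * t;
            py = ey + evy * t;
        }
        return new double[] { px, py };
    }

    public static void main(String[] args) {
        // Enemy 170 units to the right, moving straight up at 8 units/tick;
        // firepower 1 -> bullet speed 17, so roughly 11 ticks of travel time.
        double[] aim = aimPoint(0, 0, 170, 0, 0, 8, 1.0);
        System.out.printf("aim at (%.1f, %.1f)%n", aim[0], aim[1]);
    }
}
```

Of course, any opponent worth beating does not move in a straight line, which is where the guessing games in this thread begin.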
"Please notice that Robocode contains no gore, no blood, no people, and no politics. The battles are simply for the excitement of the competition that we love so much." If only everything could be so pure.
Do games like this help improve game development skills?
I remember this from years back.
A friend of mine came up with a fantastic strategy based around the fact that shot strength was a float that could be modified.
He used this to have his team fire extremely weak shots at each other, with the value of the float encoding different messages. He then used these to coordinate his bots against the enemy as a team instead of as individuals.
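A sketch of how such a side channel might work (the exact encoding here is invented; the only assumption from the game is that firepower is a float in roughly the 0.1 to 3.0 range and is visible to the bot that gets hit):

```java
// Sketch of a bullet-power side channel: pack a small integer message
// into the firepower float, then recover it on the receiving end.
public class BulletChannel {
    static final double MIN_POWER = 0.1;   // weakest legal shot
    static final double STEP = 0.001;      // sub-damage granularity carries the data

    // Message 0..N maps to a distinct, barely-lethal power level.
    static double encode(int message) {
        return MIN_POWER + message * STEP;
    }

    static int decode(double power) {
        return (int) Math.round((power - MIN_POWER) / STEP);
    }

    public static void main(String[] args) {
        int msg = 42; // e.g. a hypothetical "focus fire on enemy #42"
        double power = encode(msg);
        System.out.println("fire at power " + power + " -> decoded " + decode(power));
    }
}
```

The rounding in decode absorbs floating-point noise, so thousands of distinct messages fit below the power of a single serious shot.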