Tiledriver – Making Wolfenstein 3D Levels with Machine Learning
This is the second post in my series on the value of personal projects. See the first one for more context.
Marrying the Old with the New
Tiledriver was an attempt to create levels for the classic 1992 game Wolfenstein 3D using machine learning. I thought this would be a fun challenge since I didn’t know anything about machine learning going into it.
The reason I thought this was feasible was that there are existing machine learning methods for generating completely synthetic images. A Wolfenstein 3D level happens to be a 64×64 grid of tiles – not really that different from an image, it’s just that a “pixel” has a somewhat different meaning. And there are thousands of levels to use as input, since fans have been making Wolfenstein 3D levels since 1992.
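To make the image analogy concrete, here’s a minimal sketch of treating a level as a 2D array, just like a single-channel image (the tile codes here are made up for illustration – the real Wolfenstein 3D format uses its own numeric IDs):

```python
import numpy as np

# Hypothetical tile codes -- the real format uses its own numeric IDs.
EMPTY, WALL, DOOR = 0, 1, 2

# A level is just a 64x64 grid of tile codes, analogous to a
# 64x64 single-channel image whose "pixel" values are tile types.
level = np.full((64, 64), EMPTY, dtype=np.uint8)
level[0, :] = level[-1, :] = WALL   # outer walls
level[:, 0] = level[:, -1] = WALL
level[0, 32] = DOOR                 # a door in the north wall

print(level.shape)        # (64, 64)
print(np.unique(level))   # [0 1 2]
```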
A (Very) Rough Welcome to Machine Learning
As it turns out, going from “I know nothing about ML” to “trying to use some of the fringes of deep learning in a novel context” involved, putting it mildly, a massive learning curve. The core technology used in Tiledriver was Deep Convolutional Generative Adversarial Networks (DCGANs).
In a GAN, two neural networks are put into adversarial roles (hence the name). The first network, called the Generator, tries to turn random noise into realistic fakes. The second network, called the Discriminator, tries to distinguish real data from fake data. By pitting the two against each other for thousands of iterations, the Generator gets better at fooling the Discriminator, while the Discriminator gets better at rooting out the fakes. By the end you hopefully have generated data that can convincingly fool the ultimate discriminator – you!
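To illustrate the loop (this is my own toy sketch, not Tiledriver’s actual code – the real project used DCGANs in Keras), the adversarial training boils down to two alternating gradient steps. Here both “networks” are just a pair of scalar parameters each, and the “real data” is numbers drawn from a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c): probability that x is "real".
w, c = 0.1, 0.0
# Generator G(z) = a*z + b: maps random noise z to a fake sample.
a, b = 1.0, 0.0

lr = 0.01
real_mean = 4.0  # "real data" is drawn from N(4, 1)

for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = real_mean + rng.standard_normal()
    z = rng.standard_normal()
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.standard_normal()
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(fake)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

samples = a * rng.standard_normal(10000) + b
print(round(samples.mean(), 1))  # drifts toward the real mean of 4
```

A real GAN replaces these affine maps with deep networks and the hand-derived gradients with backpropagation, but the push-and-pull structure of the loop is the same.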
Did It Work?
To cut to the chase… no, it didn’t work. DCGANs require a lot of tuning to get useful output, and this is something you only learn how to do with a lot of experience. This was a bit too steep of a mountain to climb as my first machine learning project.
But I did learn a lot along the way!
- I learned the importance of cleaning and normalizing the data being fed into ML systems. This part of the project went very well and involved some fun problems like connected-component labeling.
- I learned that my data set was too small. Even with some tricks to amplify the amount of data, there simply weren’t enough unique levels to do much learning on. I only ended up with a few thousand levels; ideally I would probably have had at least an order of magnitude more.
- I learned why a Wolfenstein 3D map isn’t really equivalent to an image. An image is composed of continuous data. For example, we can say that a pixel is a mix of blue and red. A Wolfenstein 3D level, on the other hand, is very discrete – what is halfway between “a wall” and “empty space”? The question doesn’t really make sense, and DCGANs, which assume continuous pixel values, don’t do well on problems like this.
- I learned a lot about technologies along the way:
- Python, along with NumPy
- Jupyter notebooks – interactive notebooks mixing notes and code, commonly used in scientific circles
- Keras – a neat deep learning framework that, among other things, hides the complexity of GPU computing from you. I was kind of shocked to learn that other frameworks require a lot more work to take advantage of GPUs. This was completely transparent when using Keras, and I would certainly look into using this framework again in the future.
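The connected-component labeling mentioned in the data-cleaning step is worth a quick sketch. The idea is to find contiguous regions of like tiles (e.g. which floor tiles form one room). Here’s a minimal BFS flood-fill version on a toy grid – my own illustration, not Tiledriver’s code; in practice something like `scipy.ndimage.label` does this in one call:

```python
from collections import deque

def label_components(grid):
    """Label 4-connected regions of nonzero cells; return (labels, count)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1                     # found a new component
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:                     # BFS flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two separate "rooms" of floor tiles (1 = floor, 0 = wall).
grid = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
_, count = label_components(grid)
print(count)  # 2
```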
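On the discreteness problem above, one standard workaround (mentioned for completeness – I’m not claiming it would have rescued Tiledriver) is to one-hot encode the tile types, so each tile becomes a vector of per-type channels that a network can treat as continuous values. The tile codes below are hypothetical:

```python
import numpy as np

# Hypothetical tile codes for illustration: 0 = empty, 1 = wall, 2 = door.
NUM_TILE_TYPES = 3

level = np.array([[0, 1],
                  [2, 1]])  # a tiny 2x2 "level" of discrete tile IDs

# One-hot encode: each tile becomes a channel vector, turning discrete
# categories into continuous-valued "image" channels.
one_hot = np.eye(NUM_TILE_TYPES)[level]   # shape (2, 2, 3)

# A generator can emit continuous channel values; snapping back to
# discrete tiles is an argmax over the channel axis.
recovered = one_hot.argmax(axis=-1)

print(one_hot.shape)               # (2, 2, 3)
print((recovered == level).all())  # True
```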
Even though I couldn’t get this to work, I do think the idea is feasible. While I was working on this, a team of people at an Italian university managed to accomplish what I set out to do using the game Doom. I actually think Wolfenstein 3D is much better suited for this than Doom since the level format is much more regular. This might be something I revisit in the future…
The last personal project I’ll talk about actually does involve the game Doom, but from the opposite direction of Tiledriver…