Idea & Inspiration
There is a certain kind of magic in choosing through a seemingly random process. Tarot cards are one example: a series of cards from which we draw at random and try to divine meaning. If the idea of tarot cards is to generate meaning out of randomized chaos, I figured it would be a good fit to machine-generate a deck of tarot card images and their meanings. Another benefit of using tarot cards is the established rhythm of how the cards are drawn: they are usually portrait-oriented, a bit long in format, and the names are usually written at the bottom. They have a certain mystical atmosphere, and I was hoping that synthetic media would be able to make sense of the card images I would feed it.
Process
This project started out as an experiment for my Electronic Rituals, Oracles and Fortune Telling class, but I decided to explore further since I was learning so much from it.
For this project, I collected about 700 tarot card images from the internet (source links [1], [2], [3] and [4]) and used StyleGAN on RunwayML to train a tarot card model. Then I compiled interpretations of readings and ran them through GPT-2 on RunwayML to generate new meanings. Finally, for the names, I settled on picking 22 entries at random from a list of feelings (see the sketch below); the number 22 matches the 22 cards of the Major Arcana. I encountered many pitfalls throughout this process, which I will list below.
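Before getting into those pitfalls, here is a minimal sketch of the name-picking step. The filename feelings.txt is a placeholder of my own, not something from the original project:

```python
import random

# "feelings.txt" is a placeholder name: assume one feeling word per line.
with open("feelings.txt") as f:
    feelings = [line.strip() for line in f if line.strip()]

# 22 names, one per card, matching the 22 cards of the Major Arcana.
card_names = random.sample(feelings, 22)  # sample without replacement

for name in card_names:
    print(name.title())
```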
Attempt #1 at Training StyleGAN
Attempt #2 at Training StyleGAN
Attempt #3 at Training StyleGAN
Attempt #4 at Training StyleGAN
Attempt to Train GPT-2
In an attempt to train GPT-2 on ‘fortune-telling’ sentences, I collected data from fortune cookie message archives (links [1], [2], [3]), collections of proverbs and sayings from the internet, and tarot reading data (from Allison Parrish). Unfortunately, it ended up being only about 167 KB worth of text (it takes a LOT of text to get over 1 MB, I realized!), and GPT-2 ended up only repeating the data it already had.
So in the end, I used the same dataset, but selected random samples of meanings instead of generating new ones (a sketch of this fallback is below).
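This fallback amounts to sampling lines from the compiled corpus. A minimal sketch, assuming the corpus lives in a placeholder file called meanings.txt with one meaning per line:

```python
import random

# "meanings.txt" is a placeholder for the compiled corpus of fortune
# cookie messages, proverbs, and tarot interpretations, one per line.
with open("meanings.txt") as f:
    meanings = [line.strip() for line in f if line.strip()]

# Draw one meaning per card without replacement, so no two of the
# 22 cards share the same text.
card_meanings = random.sample(meanings, 22)

for meaning in card_meanings:
    print(meaning)
```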
Final Outcome
After being disappointed with the earlier attempts, I tried to debug the dataset, meaning that I cut it down to a smaller number of samples that were more similar to each other in artistic quality and resolution. I scraped this Japanese blog, which had four editions of the Rider-Waite-Smith tarot deck, and added one more set that I already had. It wasn’t quite 500 images, but I hoped that would be enough. I also found that the StyleGAN in Runway had a second, improved version, so I tried it out.
Here is a gist of the scraping Jupyter Notebook.
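The full notebook is in the gist above; as a condensed illustration, the scraping step might look roughly like this. The blog URL and output folder here are placeholders I'm assuming, not the actual values from the notebook:

```python
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Placeholder URL: the real blog address is in the gist above.
BLOG_URL = "https://example.com/tarot-archive"

resp = requests.get(BLOG_URL, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

os.makedirs("cards", exist_ok=True)  # placeholder output folder
count = 0
for img in soup.find_all("img"):
    src = img.get("src")
    if not src:
        continue
    src = urljoin(BLOG_URL, src)  # resolve relative links
    if not src.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    data = requests.get(src, timeout=30).content
    ext = os.path.splitext(src)[1]
    with open(os.path.join("cards", f"card_{count:03d}{ext}"), "wb") as f:
        f.write(data)
    count += 1

print(f"Downloaded {count} card images")
```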
To my pleasant surprise, this model turned out more successful than I expected! It produced much more variety than the previous models, and more predictable styles with fewer blobs. Still not varied enough, but a bit better than the previous attempts.
Final Thoughts
I’m happy with where it ended up, although I can’t get over my frustration of wanting the images to be more different and varied. The model seems to run into overfitting when it trains for more than 4,000 steps. If only I had more tarot cards and data!