This is a small project exploring a computer's ability to create abstract visual representations of real-world objects through genetic image generation.
Get started below.
Check out the project on GitHub
The image generation is done genetically. This means we generate many random images, then choose the best two and "breed" them by combining their image data. The resulting child image is used as the seed for a new set of random images, and the process iterates from there. Over time this produces an increasingly accurate image, one that eventually triggers the AI's label detection more strongly than a normal picture of that label.
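As a rough illustration of that loop, here is a minimal Python sketch. It is not the project's actual code: `random_image`, `crossover`, `evolve`, and the `fitness` callback are assumed names, and the population size, canvas size, and mutation rate are arbitrary placeholder values.

```python
import random

# Illustrative constants -- not taken from the project.
POPULATION_SIZE = 32
IMAGE_BYTES = 64 * 64 * 3  # a small RGB canvas, just for the sketch

def random_image(seed=None, mutation_rate=0.05):
    """Return a fully random image, or a lightly mutated copy of a seed image."""
    if seed is None:
        return [random.randrange(256) for _ in range(IMAGE_BYTES)]
    return [random.randrange(256) if random.random() < mutation_rate else px
            for px in seed]

def crossover(parent_a, parent_b):
    """'Breed' two images by taking each pixel from one parent or the other."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def evolve(fitness, generations=100):
    """Run the loop described above: generate, score, breed, reseed, repeat."""
    seed = None
    for _ in range(generations):
        # Generate a population of random images (mutations of the current seed).
        population = [random_image(seed) for _ in range(POPULATION_SIZE)]
        # Score every image and keep the two the classifier likes best.
        best_two = sorted(population, key=fitness, reverse=True)[:2]
        # The child image becomes the seed for the next generation.
        seed = crossover(*best_two)
    return seed
```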
While exploring how artificial intelligence might shape the future of art and design, we found two projects especially inspiring: Chris Cummins's genetic algorithms for generative art, and Tom White's Perception Engines. We combined the two techniques to generate a fully original image that is the internal representation of what our neural network identifies as its label. There is a large list of labels to choose from, covering anything that humans can identify visually; plenty of animals and household objects are available for users to pick.
The neural network is based on Darknet, an open source neural network framework written in C and CUDA.
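To give a sense of how the label detection could plug into the loop sketched above, the fitness callback might simply return the network's confidence for the chosen label. This is only a sketch under stated assumptions: `darknet_classify` is a hypothetical wrapper (for example, a ctypes binding around Darknet's C API), not something defined by the project.

```python
def make_fitness(target_label, darknet_classify):
    """Build a fitness function that rewards confidence in the target label.

    `darknet_classify(image)` is assumed to return (label, confidence) pairs;
    the actual binding to the Darknet network is not shown here.
    """
    def fitness(image):
        scores = dict(darknet_classify(image))  # label -> confidence
        return scores.get(target_label, 0.0)
    return fitness

# Usage with the evolve() sketch above, assuming such a wrapper exists:
# best = evolve(make_fitness("goldfish", darknet_classify))
```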