Lore Machine Meets Unlimited Dream Co.
From Eyeball Dog Pagodas to Brain-Scanning GANs
Unlimited Dream Co. is a UK-based designer and artist camped way out on the frontiers of AI-collaborative creation. The quality and volume of his output are like wt actual f. We sat down to discuss process, text-to-video pipelines, image-to-rug pipelines, ethics and more. Oh, and stick around for a weapons-grade list of premo AI artists.
Lore Machine: Who are you IRL?
Unlimited Dream Co.: I currently work as a product designer, but I’ve done a wide range of things from graphic design to web development to product management. I also like to draw and create abstract art, which have become major components in my AI work.
I first started paying attention to the creative potential of AI with the release of Google’s Deep Dream and all its weird eyeball dog pagodas back in 2015, but it wasn’t until I discovered VQGAN+CLIP by Katherine Crowson about 18 months ago that I was really hooked. Watching a brand-new image emerge out of nowhere was absolutely mind-blowing and I basically haven’t stopped using it since.
Your site greets visitors with a description of your creations: “Strange organic forms, both biological and mechanical. Creatures of smoke, fire and glass. Half-remembered monsters last seen in a dream.” Where was your mind at when you conjured these words?
I wanted to give a feeling of the sort of dreamlike images that AI tools produce - familiar but not, and often quite strange.
Your Twitter account identifies your location as “Latent Space”. What’s that part of the world all about? Is it worth visiting?
Latent space is a sort of machine unconscious, a vast probability cloud containing the potential for pretty much any image you can imagine. Creating art with AI is the process of exploring this space to see what’s there – and monsters lurk around every corner.
What first drew me to your work was the aspect ratio. It’s cool to see some height! Any reason for those dimensions?
I’m glad you like it! It’s fairly prosaic, but I chose that ratio because my work tends to be viewed on a mobile phone and I wanted a size that made the most of the vertical space. I’ll occasionally mix it up and go square or landscape, especially if I’m making a video, but I think the portrait shape has sort of become part of my style now.
There’s a coherence and continuity to your work that feels more like a sentient being wielding a powerful machine than a happy cyborg accident. What is the connective tissue between the pieces you’ve been making?
That’s a good description of what I do. One of the things I love about AI art is its serendipity and unexpectedness, but I think I’m quite methodical in how I work. I’m interested in color, composition, materials and texture. VQGAN+CLIP is very responsive to these and can reproduce them well. So my pictures tend to be an exploration of what works and what doesn’t. I also use my drawings as initial images, so rather than starting with random noise, the AI uses them as a kickoff for its generation. This gives the results a structure and composition they wouldn’t have otherwise.
AI seems to ignite the imagination of people from all disciplines. Have you had any conversations or crossovers with folks outside of the art space?
Beyond creating album artwork for a few different bands and labels, and a collaboration with a handmade rug company, I haven’t had the opportunity to work with anyone outside the art space so far. I’m very interested in doing so though, because I believe that AI is going to become a foundational technology affecting every part of our lives.
I was reading a research paper the other day about an AI model that can recreate what someone is seeing from a scan of their brain activity with startling accuracy. The future’s here and the code is available on GitHub.
There’s something almost architectural about your works. Naturally, art is in the eye of the beholder. Am I picking up on an intentional theme?
It’s absolutely intentional. I’m a big fan of modernist architecture and I find the shapes and materials work really well when contrasted with organic or botanical forms. It’s a theme I draw from a lot.
Have you ever thought about how your work might look in the real world, say as a 3D-printed sculpture? If you had to pick one work that you could bring into the physical world, which one is it?
Yeah that would be fun. It’s something I’ve thought about a lot, but I have absolutely no idea how to go about it. As for a piece to pick, it’d have to be this one.
This was the first time I managed to create an image of something that looked truly three-dimensional. It was a big breakthrough in my style. It’s also got everything from fruit and vegetables to sea monsters and molded plastic, so it’s a fun image.
Is video of interest to you?
I’ve made a few videos and would like to make more, but I do like the immediacy and rapid experimentation that still images allow.
It’s worth following what’s happening with AI video though, as other artists are creating some incredible work. There’s also a lot of research into text-to-video, where you can create a seconds- or minutes-long video from a text prompt. Models like CogVideo can do it already. Procedural video generation is going to be huge for everything from traditional filmmaking to gaming to VR/AR and much more.
Can you take us through your process?
I mostly use a text-to-image AI called VQGAN+CLIP (VQC), though I do occasionally use others including StyleGAN and Stable Diffusion. VQC and SD work in similar ways, in that they’re actually made up of two different AI networks. The first is the discriminator, which understands the text prompt you give it, e.g. ‘a painting of a dog by Picasso’: it knows what a painting looks like, what a dog looks like, and what Picasso’s style looks like. The second is the generator, which creates the image. Starting with random noise, the generator and discriminator go back and forth until the discriminator agrees that the generated picture does indeed look like a painting of a dog. A third component is the dataset - this is the information the model learns from. The dataset is created by ‘training’ on lots of images, and the more images you give it, the better the result will be. As an example, Stable Diffusion is trained on a dataset of over 5 billion images, each tagged with subject, style, source, medium and more.
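That generate-and-score back-and-forth can be sketched as a toy optimization loop. Everything below is illustrative, not real VQGAN+CLIP code: a plain numpy vector stands in for the image, and `score` is a hypothetical stand-in for the discriminator’s judgment of how well the image matches the prompt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the "discriminator" (CLIP): it scores how
# close an image (a plain vector here) is to the concept in the prompt.
# Higher is better; 0 would be a perfect match.
target = rng.normal(size=64)  # the concept, e.g. "a painting of a dog"

def score(image):
    return -float(np.linalg.norm(image - target))

# The "generator" starts from random noise and keeps any random nudge
# the scorer approves of: a crude, gradient-free version of the
# generator/discriminator back-and-forth described above.
image = rng.normal(size=64)
for _ in range(2000):
    candidate = image + rng.normal(scale=0.05, size=64)
    if score(candidate) > score(image):
        image = candidate

# A random 64-dim start scores around -11; after the loop the image
# scores much closer to 0.
print(round(score(image), 2))
```

The real tools replace the random nudges with gradient descent and the distance check with a learned text–image similarity, but the shape of the loop is the same: propose, score, keep what the critic likes.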
During training, the AI works out the statistical similarities and differences between the source images and gradually learns what dogs are, what Cubism is, or what paintings are. This takes weeks and needs a huge amount of processing power. At the end of the process the resulting model doesn’t contain any of the actual source images, but it does have the potential to recreate any of the concepts they contained. This is why it’s known as ‘latent space.’
As for my own images, I start with one of my drawings, or a section of a drawing as they’re quite large. I think about the colors, styles and materials I want to include. The AI uses the drawing as a starting point, so the generation follows its structure and composition. After about an hour of processing I’ll check the result to see how it’s come out. I may need to tweak the prompt a number of times before I get a result I’m happy with. If I find a really good prompt combination, I’ll spend time exploring variations of colors and materials to see what else is there.
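Why the init drawing matters can be shown with a toy numerical sketch, assuming nothing about the real tools: a numpy hill-climb stands in for the generator, and a short run started from a ‘drawing’ vector stays much closer to that drawing’s structure than a run started from unrelated noise.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=64)  # hypothetical "concept" the scorer rewards

def score(v):
    return -float(np.linalg.norm(v - target))

def generate(start, steps=300, scale=0.05, seed=1):
    """Hill-climb toward the scorer's target from a given starting vector."""
    r = np.random.default_rng(seed)
    v = start.copy()
    for _ in range(steps):
        candidate = v + r.normal(scale=scale, size=v.shape)
        if score(candidate) > score(v):
            v = candidate
    return v

drawing = rng.normal(size=64)  # stands in for a scanned drawing
noise = rng.normal(size=64)    # the default random starting point

from_drawing = generate(drawing)
from_noise = generate(noise)

# A short run started from the drawing ends up far closer to the
# drawing's "structure" than one started from unrelated noise.
print(np.linalg.norm(from_drawing - drawing) < np.linalg.norm(from_noise - drawing))
```

The same intuition carries over to the real pipeline: with a limited number of update steps, the generation can only drift so far from its starting point, so the drawing’s composition survives into the result.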
Who are some folks you really appreciate in the scene?
Too many to count! There are so many talented artists out there who constantly amaze me with the quality of their work. Here’s a by-no-means-comprehensive list:
RiversHaveWings – Katherine Crowson created the main AI tools I use, so if it wasn’t for her I wouldn’t be speaking to you today.
KaliYuga_AI is an amazing artist and has done a huge amount of work creating and sharing her own custom models.
DehiscenceArt is a true artist with a very strong point of view. She’s really pushing the boundaries and demonstrating how creative it’s possible to be with AI.
@ManekiNekoAIArt – Beautiful, intricate and over the top.
MademoisellleK - I love their AI-generated cyanotypes. The mix of a highly digital process creating such an organic result is fantastic.
Ganbrood is a huge inspiration, his work constantly amazes.
images_ai first introduced me to AI art and they’ve been a huge promoter for me and the AI art scene in general.
VJ_DMTFL – Love his vivid colorful vaporwave exploration of art history.
Jeremy Torman’s custom trained models are amazing and directly inspired me to create my own.
makeitrad – Love his mid-century architecture and custom models and the way he mixes AI with traditional 3D and motion graphics.
Are there any tools you’re just starting to dive into that you’re particularly excited about?
I’ve only just started using Stable Diffusion seriously as it’s taken me a while to get used to its prompt structure, but I’m really excited about its potential, especially when combined with Dreambooth. This is a feature that trains a Stable Diffusion model on your own images so that it can understand them as concepts. I’ve trained a bunch of models with this to further personalize and customize the output.
AI art can be divisive. It feels like an inevitable phase in early adoption. Why do you think that is?
I think because AI art is new, there’s a certain amount of fear and misunderstanding about what it means and how it works. People are worried about having to change how they work and possibly finding their skills have become obsolete. I’m also reminded of Douglas Adams’ three rules for our reactions to new technology:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
However for artists, generative AI models are _the_ most powerful creative tools that have been invented. We’re only just beginning to scratch the surface of what they can do and they’re getting better at an incredible rate.
I fully believe that learning how to use them properly and exploring their potential is going to be the defining creative movement of the next decade at least. It’s an exciting time to be an artist.
The recent controversy around whether AI art is real art, or somehow stealing, is understandable, but in my view it rests on a misunderstanding of both what art is and how AI models work. Firstly, most of art history can be seen as an argument about whether something is art or not (it is). Secondly, training AI models isn’t the stealing or copyright theft a lot of people think it is.
An AI model doesn’t actually contain any of the source images – when creating an image, it’s not assembling a collage out of bits of images. Instead it has learned the concepts in the images and is doing its best to replicate the concepts it identifies in the prompt it’s given. It’s rather like how we as humans learn to draw. And just like a human drawing an image, it’s perfectly possible to create an image that, say, Disney’s lawyers might take exception to – but it’s the result that’s problematic, not the act of learning or drawing.
I’m not sure if I’m succeeding, but one of the things I’m trying to do is demonstrate that generative AI tools can be a creative medium in their own right. I’d prefer to develop my own style and push it as far as I can than just replicate other styles. It’s novel to create a picture that looks like it was by Picasso but it’s much more fun and rewarding to make something that could only have been created by yourself.
Thanks for reading Lore Machine!