A digital art project comprising a set of handmade digital paintings and an image generation model trained on them. On titles.xyz you can try out the trained model to make your own Bucket Art, and mash it up with models by other artists. You'll need an Ethereum wallet to play around, but you don't need to pay unless you want to publish or mint something.
The technique for handcrafted Bucket Art starts with laying down an interesting base layer using grids, textured brushes, and patterns of random strokes, while watching for interesting image possibilities. I then add more deliberate scaffolding and use the fill tool to tease out those possibilities. The trick is to lay down a base layer that interacts well with the fill algorithm (playing with the fill tolerance is critical here; see the sketch below). You can learn more on the How to make Bucket Art page.

The 41 handcrafted images you see here were then used to train the image model, which you can use to generate images in this style. The model works pretty well so long as you stay relatively close to the images in the training set below.

This set is also published as an NFT collection on highlight.xyz. It's my first foray into "second wave" NFTs, which are closer to cheap laptop stickers than fine art. If there are any left, you can snag one for 1 USDC. I had to publish as an NFT collection because the AI models on Titles rely on crypto rails not just to route payments to artists, but also to track the provenance of generated images back to the artists whose models were used. It's a very interesting blend of AI and blockchain tech.
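To make the fill-tolerance point concrete, here is a minimal sketch of a generic tolerance-based flood fill. This is an assumed, textbook-style implementation for illustration, not the actual fill tool in the painting app I use: the fill spreads only across neighbouring pixels whose colour stays within the tolerance of the colour under the seed point, so a noisy, textured base layer fragments the fill into organic shapes rather than flat regions.

```python
# A generic sketch of a tolerance-based flood fill -- an assumption about how
# such a fill works, not the painting app's actual implementation.
from collections import deque

import numpy as np


def bucket_fill(image, seed, fill_color, tolerance=30.0):
    """Flood-fill `image` (H x W x 3 uint8 array) starting at `seed` = (row, col).

    Every connected pixel whose RGB distance from the seed pixel's colour is
    within `tolerance` is replaced with `fill_color`. A low tolerance breaks
    the fill apart on textured base layers; a high tolerance floods across them.
    """
    src = image.astype(np.float32)   # read colours from the untouched original
    out = image.copy()               # write the fill into a copy
    h, w = src.shape[:2]
    seed_color = src[seed]
    visited = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    while queue:
        y, x = queue.popleft()
        out[y, x] = fill_color
        # 4-connected neighbours: up, down, left, right
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if np.linalg.norm(src[ny, nx] - seed_color) <= tolerance:
                    visited[ny, nx] = True
                    queue.append((ny, nx))
    return out
```

With a sketch like this, bucket_fill(base, (120, 200), (255, 230, 40), tolerance=18) produces a tighter, more fragmented region than the same call with tolerance=60, and that difference is exactly the lever the technique above plays with.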
Some motifs -- ships, waterfalls, skylines -- proved easy to generate in reliably repeatable ways. One impulse behind this project was to explore an approach to protocol art, via the tension between human seeing/pareidolia and algorithmic "painting kernels" (like the fill tool, grid tool, and texture tool). Another goal was to try to "see like an image generator", working with low-level kernels of the sort used by AI image generators, such as diffusion and stochastic field primitives. Training the image model on these images was partly a test of whether it would pick up on the underlying protocol.
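As an illustration of what such low-level kernels can look like (again an assumed example, not the project's actual grid or texture tools), here is a small sketch that combines a stochastic field primitive, built from repeatedly smoothed Gaussian noise, with a simple grid kernel to produce the kind of base layer the fill tool can then react to.

```python
# Illustrative "painting kernel" primitives -- assumed examples, not the
# project's actual grid/texture tools.
import numpy as np


def stochastic_field(h, w, smoothing_passes=8, rng=None):
    """Gaussian noise repeatedly blurred with a 3x3 box filter, giving a
    smooth random field normalised to [0, 1]."""
    rng = rng or np.random.default_rng()
    field = rng.normal(size=(h, w))
    for _ in range(smoothing_passes):
        # Cheap 3x3 box blur built from shifted copies of the field.
        field = sum(np.roll(np.roll(field, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    field -= field.min()
    return field / field.max()


def grid_kernel(h, w, spacing=24, thickness=2):
    """Binary grid mask: 1.0 on grid lines, 0.0 elsewhere."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (((ys % spacing) < thickness) | ((xs % spacing) < thickness)).astype(float)


def base_layer(h=512, w=512):
    """Blend the two kernels into a greyscale base layer in [0, 1]."""
    return 0.7 * stochastic_field(h, w) + 0.3 * grid_kernel(h, w)
```

The point of primitives like these is that they know nothing about ships or waterfalls; any recognisable image has to be teased out of their output by a human eye and a fill tool.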
A more high-concept name for this style might be Generative Impressionism, where the goal is to recognize familiar things emerging in a stochastic ontogenic process governed by the strange rules of a latent space.