Just how good are the new wave of AI image generation tools?

AI-generated imagery is here. Type a simple description of what you want to see into a computer and beautiful illustrations, sketches, or photographs pop up a few seconds later. By harnessing the power of machine learning, high-end graphics hardware is now capable of creating impressive, professional-grade artwork with minimal human input. But how could this affect video games? Modern titles are extremely art-intensive, requiring countless pieces of texture and concept art. If developers could harness this tech, perhaps the speed and quality of asset generation could radically increase.

However, as with any groundbreaking technology, there’s plenty of controversy too: what role does the artist play if machine learning can generate high-quality imagery so quickly and so easily? And what of the data used to train these AIs – since they learn from vast collections of human-made images, is there an argument that their output effectively passes off the work of human artists? These are major ethical questions to grapple with once the technology reaches a certain degree of effectiveness – and based on the rapid pace of improvement I’ve seen, they may need to be addressed sooner rather than later.

In the meantime, the focus of this piece is to see just how effective these technologies are right now. I tried three of the leading AI generators: DALL-E 2, Stable Diffusion and Midjourney. You can see the results in the embedded video below (and indeed in the collage at the top of this page), but to be clear, I generated every image myself, either through each tool’s web portal or by running the software directly on local hardware.

At the moment, the default way of using AI image generators is through something called ‘prompting’: you write what you’d like the AI to generate and it does its best to create it for you. Using DALL-E 2, for example, the most effective prompts seem to combine a simple description of the subject with some sort of stylisation – an indication of how you’d like the image to look. Attaching a string of descriptors to the end of a prompt often helps the AI deliver a high-quality result.
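To make that concrete, here’s what the same kind of prompt looks like when sent through an API rather than a web portal. This is a minimal sketch assuming OpenAI’s official Python client and an API key in the environment; the prompt text and descriptors are illustrative, not the exact ones I used for this article.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A simple subject description followed by a string of style descriptors,
# which tends to steer the model towards a higher-quality result.
response = client.images.generate(
    model="dall-e-2",
    prompt=(
        "a ruined castle on a cliff at sunset, digital art, "
        "highly detailed, dramatic lighting, concept art"
    ),
    n=1,               # number of images to generate
    size="1024x1024",  # DALL-E 2 accepts 256x256, 512x512 and 1024x1024
)

print(response.data[0].url)  # a temporary URL for the generated image
```

The descriptors after the subject do most of the stylistic work: swapping ‘digital art’ for ‘pencil sketch’ or ‘35mm photograph’ changes the whole character of the output while the subject stays the same.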

There’s another form of prompting – often called image-to-image – that involves giving the software a base image to work from, along with a text prompt that guides it towards a new image, as sketched below. Of the three tools I tested, this is currently only available in Stable Diffusion.

Like many other AI techniques, AI image generation works by training on a huge variety of inputs – in this case, vast databases of images – and deriving model parameters from that data. In broad strokes, it’s similar to the way DLSS or XeSS work, or other machine learning applications like the text generator GPT-3. On some level, the AI is ‘learning’ how to create art, then applying that learning with superhuman versatility and speed.
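For a sense of how the image-plus-prompt workflow looks in practice, here’s a short sketch assuming Hugging Face’s diffusers library, a Stable Diffusion 1.5 checkpoint and a CUDA-capable GPU; the filenames, prompt and parameter values are placeholders rather than settings from my actual tests.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint in half precision for consumer GPUs.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The base image guides composition; dimensions should be multiples of 8.
init_image = Image.open("rough_sketch.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="a moody sci-fi corridor, volumetric lighting, concept art",
    image=init_image,
    strength=0.75,       # 0-1: how far the output may diverge from the base
    guidance_scale=7.5,  # how strongly the text prompt steers generation
).images[0]

result.save("corridor.png")
```

The strength value is the interesting dial here: low values keep the output close to the source image, while values near 1.0 treat it as little more than a loose compositional hint.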