They’re both text-to-image generation tools, which means when you’re in the interface of either DALL-E or Midjourney, you put text inputs, usually called prompts, into the tool. And those prompts define what the output is, what those images are that it’s going to come back to you with. So a lot of what’s going to get you really great results is being quite specific about what you’re looking for. The more detail you put into those text prompts, the more likely you are to get something that’s attuned to what you’re after.
So DALL-E and Midjourney are broadly similar, but there are a few subtle differences between them as well. For example, DALL-E offers the ability to do what they call in-painting and out-painting. In-painting is when you upload an image and recreate specific parts of it with AI-generated imagery. So say you wanted to edit an image of a circus, but you wanted more elephants in there, and elephants are going to be the topic of conversation today, maybe even throughout the entire podcast.
But say you wanted a couple of elephants in that circus shot. You could paint those in through DALL-E. And say you wanted to zoom out on that image and recreate the audience for that circus, which isn’t in the original image. You can use what they call out-painting for that. That’s where the AI uses everything it knows from the machine learning it’s been trained on to fill in the gaps, painting beyond the edges of the image to expand it to be larger than it originally was when you put it in there.
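For listeners who want to try in-painting programmatically, OpenAI exposes it through the image-edits endpoint: you send the original image plus a mask whose transparent pixels mark the region to repaint, along with a text prompt. Here’s a minimal sketch of what such a request looks like; the `build_inpaint_request` helper and the file names are hypothetical, shown only to illustrate the inputs in-painting requires (a real call also needs an API key and a multipart upload).

```python
# Hedged sketch: the shape of an in-painting request to OpenAI's
# image-edits endpoint. build_inpaint_request is a hypothetical helper;
# the field names (model, prompt, n, size, image, mask) mirror the API.
def build_inpaint_request(prompt: str, image: str, mask: str) -> dict:
    # The mask's transparent pixels mark the region DALL-E should repaint.
    return {
        "endpoint": "https://api.openai.com/v1/images/edits",
        "fields": {
            "model": "dall-e-2",
            "prompt": prompt,
            "n": "1",
            "size": "1024x1024",
        },
        # Both files are uploaded; only the masked region gets regenerated.
        "files": {"image": image, "mask": mask},
    }

req = build_inpaint_request(
    "the same circus ring, with a couple of elephants performing",
    "circus.png",        # hypothetical original image
    "circus_mask.png",   # hypothetical mask with transparent elephant area
)
print(req["fields"]["prompt"])
```

The key idea is that the prompt describes the whole desired scene, not just the new element, and the mask tells the model where it’s allowed to change pixels.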