Thread: AI Art Generation |OT| Midjourney and beyond
Official Thread
qHhJg0.jpeg

qHh6c3.jpeg

qHhkOF.jpeg

qHhEEr.jpeg
 
I'm starting to think we're living in a simulation. Just think about it: in ten years, this will be more real than real.
 
Guys, I need this answered: can you actually create porn with Stable Diffusion? I've searched for it, but all I can find is websites saying "do this to create NSFW pics"; I can't find anyone actually talking about doing it, nor AI porn pics themselves. So now I wonder whether it's actually possible. This extends to non-porn stuff, ofc. I'd hope there are no limitations, but I wanna be sure before spending 2000 bucks on a new PC. :eek:

Any answer is appreciated, thx.
 
  • Funny
Reactions: Kadayi
Guys, I need this answered: can you actually create porn with Stable Diffusion? I've searched for it, but all I can find is websites saying "do this to create NSFW pics"; I can't find anyone actually talking about doing it, nor AI porn pics themselves. So now I wonder whether it's actually possible. This extends to non-porn stuff, ofc. I'd hope there are no limitations, but I wanna be sure before spending 2000 bucks on a new PC. :eek:

Any answer is appreciated, thx.

Can you guys answer this? I literally can't find a trace of it with Google. It'd be a real bummer if even offline SD was censored.
 
  • Coffee
Reactions: Kadayi
Can you guys answer this? I literally can't find a trace of it with Google. It'd be a real bummer if even offline SD was censored.

Sorry mate, I have no idea. Given the push for online safety that most major image generators preach, I'd imagine they would mostly censor that sort of stuff. There are probably tools out there to make your fantasy hentai a reality though.
 
Imagine being suspended for an hour on Microsoft's Bing Image Creator for trying to create an image of Abe Lincoln fighting a vampire. Could never be me.
 
  • Funny
Reactions: Mirabilis
I've been able to test DALL-E 3, and I must say it has come a long way from DALL-E 2.

It's impressive how good it is at producing cartoons, anime and certain art styles, though it seemingly gets thrown a bit when referencing certain artists (like I previously mentioned about my failed Yoji Shinkawa experiment).


Here's some images of Mario in the style of Cuphead, and a couple of him as a dictator of the Mushroom Kingdom; the first is "Golden Age of American Animation", so of course they gave him a gun.


EDIT:
Is there a reason the slider carousel doesn't display Imgur previews? I guess I can just paste the links in instead...

EDIT 2, ELECTRIC BOOGALOO: I swapped over to Img.bb instead and they have direct BBCode stuff. Much simpler!
 
Sorry mate, I have no idea. Given the push for online safety that most major image generators preach, I'd imagine they would mostly censor that sort of stuff. There are probably tools out there to make your fantasy hentai a reality though.

I'm pretty sure he's trolling at this juncture. Civitai is like the biggest online repository/community for custom Stable Diffusion models on the planet and has been mentioned umpteen times in this thread already (it's even in the threadmarks), and almost every YouTuber who talks about Stable Diffusion links to it. It's free to sign up and free to download the models from. There's everything from photorealistic models to anime-style ones to be found there. If Bavarian wobblechops was looking through the lint in his navel I could understand why he's had no luck, but not so much a simple goddamn web search.
 
  • Funny
Reactions: Amorous Biscuit
I'm pretty sure he's trolling at this juncture. Civitai is like the biggest online repository/community for custom Stable Diffusion models on the planet and has been mentioned umpteen times in this thread already (it's even in the threadmarks), and almost every YouTuber who talks about Stable Diffusion links to it. It's free to sign up and free to download the models from. There's everything from photorealistic models to anime-style ones to be found there. If Bavarian wobblechops was looking through the lint in his navel I could understand why he's had no luck, but not so much a simple goddamn web search.

Wtf, I'm not trolling. I know about Civitai, but assumed there were no porn checkpoints. So there are?

On another topic: does AI art have difficulties with multiple defined characters in one picture? At least on PixAI, it's super hard to reliably get pictures with 2 or 3 people in it.
 
  • Coffee
Reactions: Kadayi
Hey @Kadayi

Since you're using SD, could you please try creating a basic porn scene? Just 2 people having sex. I'm just curious if it's actually possible. Thx!
 
  • Coffee
Reactions: Kadayi
Hey @Kadayi

Since you're using SD, could you please try creating a basic porn scene? Just 2 people having sex. I'm just curious if it's actually possible. Thx!

How about no. This isn't an NSFW thread. If you want AI Porn, there's a tonne of it at Civitai if you go to the images section.
 
How about no. This isn't an NSFW thread. If you want AI Porn, there's a tonne of it at Civitai if you go to the images section.

Don't know why you're so touchy. I don't "want" porn, I'd just like to know whether it's possible. I cannot find any in Civitai's images section.
 
Wow, just read Civitai's terms of service. They're literally a woke SJW community, filled with "we want an inclusive, safe environment for everyone", literally treating fiction like it's real. No wonder they don't allow porn images. Even "be respectful to religion", lol.
 
Wow, just read Civitai's terms of service. They're literally a woke SJW community, filled with "we want an inclusive, safe environment for everyone", literally treating fiction like it's real. No wonder they don't allow porn images. Even "be respectful to religion", lol.

LOL. The fact that you're complaining about the TOS suggests you haven't signed up, which may perhaps explain why you can't see any NSFW images at the site: anything like that (including models) is hidden from public view. :rolleyes:
 
LOL. The fact that you're complaining about the TOS suggests you haven't signed up, which may perhaps explain why you can't see any NSFW images at the site: anything like that (including models) is hidden from public view. :rolleyes:

Alright, will try to subscribe then. Still, could you answer my question about multiple people in one image?
 
  • Brain
Reactions: Kadayi
Stable Diffusion: Rough explanation of In-painting.
Alright, will try to subscribe then. Still, could you answer my question about multiple people in one image?

If you're on about having, say, an image with 2 distinct-looking people in it (for instance Scarlett Johansson and Kirsten Dunst), a baseline AI, no matter the model, cannot produce that natively. It might have an awareness of what the people look like if they are famous, but what it will likely produce is an amalgamation of them.

As you can see below, the output has given me two distinct-looking characters, but the faces are similar looking and we're not seeing much of Kirsten, but a lot of Scarlett.

zhOaTSG.png


To overcome that, you need to send your text-to-image output (txt2img) to image-to-image (img2img) and do some inpainting, which basically involves masking the part of the image you want to change and adjusting the prompt.

So here I've done that, selecting the left-hand face and updating the image. I wouldn't say it's a perfect likeness of Kirsten (this is a stylized model I'm using), but it's definitely a more distinct face, and I also changed the lipstick to pink in the prompt.

dfTX5Gs.png


I could then repeat the exercise with the right-hand face, and make it a bit more Scarlett looking.

oYxTQuR.png


However, maybe I decide to change Scarlett to Sasha Grey instead (again, not a great likeness, but this is a very stylized model I'm using).

hoNynas.png


The thing to understand with AI is that when you are running a prompt, the image is being generated from noise through the model, so it's not painting with an awareness of where the end result is going; it's just pulling from a learned sense of what something looks like visually. Faces and bodies it is generally good at, as well as objects, buildings, etc., but hands, for instance, AI tends to suck at, as they are quite complex.
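To make that "generated by noise" bit concrete, here's a toy numpy sketch of the idea. The fake "noise predictor" here just nudges toward a known target, standing in for the real U-Net; it's an illustration of the loop shape, nothing like actual Stable Diffusion code:

```python
import numpy as np

def toy_denoise(target, steps=50, seed=0):
    """Toy diffusion-style loop: start from pure Gaussian noise and
    repeatedly subtract a fraction of the 'predicted noise', drifting
    toward the target. The real U-Net learns its prediction from data;
    here we fake it so the loop is self-contained."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from pure noise
    for _ in range(steps):
        predicted_noise = x - target       # stand-in for the U-Net's output
        x = x - 0.2 * predicted_noise      # remove a fraction each step
    return x

target = np.array([1.0, -2.0, 0.5])
result = toy_denoise(target)
print(np.abs(result - target).max())  # tiny: the loop has converged
```

The point is only that the image emerges step by step out of noise; the model never "sees" the finished picture in advance.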

Hope that helps, but honestly, there are a bunch of YouTube channels I've listed previously that will give you a better understanding of how to do much of this. This post is more demonstrating the flexibility of something like Stable Diffusion.
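If it helps to see the masking step in code terms, the core of inpainting is a compositing blend: only the masked region takes the newly generated pixels, everything else is kept from the original. A toy numpy sketch (made-up values, not the actual Automatic1111 implementation):

```python
import numpy as np

# Toy inpainting blend: repaint only where the mask is 1, keep the rest.
original = np.array([[1.0, 1.0], [1.0, 1.0]])     # stand-in: the original image
regenerated = np.array([[9.0, 9.0], [9.0, 9.0]])  # stand-in: fresh model output
mask = np.array([[1.0, 0.0], [0.0, 0.0]])         # 1 = repaint, 0 = keep as-is

blended = mask * regenerated + (1.0 - mask) * original
print(blended)  # only the top-left pixel took the new value
```

That's why you can swap one face at a time without disturbing the rest of the picture.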
 
@Baylan Skoll Nice outputs Grish. Are you just using randoms for the faces or mixing celeb names? Shades of Margot Robbie and Emma Stone in some of them.

The first two images' prompt was Ana de Armas, the others Emma Stone. All DALL-E 3, then upscaled. Prompts might not work today, since celebrity names are usually blocked (although there are workarounds).
 
  • Brain
Reactions: Kadayi
If you're on about having, say, an image with 2 distinct-looking people in it (for instance Scarlett Johansson and Kirsten Dunst), a baseline AI, no matter the model, cannot produce that natively. It might have an awareness of what the people look like if they are famous, but what it will likely produce is an amalgamation of them.

As you can see below, the output has given me two distinct-looking characters, but the faces are similar looking and we're not seeing much of Kirsten, but a lot of Scarlett.

zhOaTSG.png


To overcome that, you need to send your text-to-image output (txt2img) to image-to-image (img2img) and do some inpainting, which basically involves masking the part of the image you want to change and adjusting the prompt.

So here I've done that, selecting the left-hand face and updating the image. I wouldn't say it's a perfect likeness of Kirsten (this is a stylized model I'm using), but it's definitely a more distinct face, and I also changed the lipstick to pink in the prompt.

dfTX5Gs.png


I could then repeat the exercise with the right-hand face, and make it a bit more Scarlett looking.

oYxTQuR.png


However, maybe I decide to change Scarlett to Sasha Grey instead (again, not a great likeness, but this is a very stylized model I'm using).

hoNynas.png


The thing to understand with AI is that when you are running a prompt, the image is being generated from noise through the model, so it's not painting with an awareness of where the end result is going; it's just pulling from a learned sense of what something looks like visually. Faces and bodies it is generally good at, as well as objects, buildings, etc., but hands, for instance, AI tends to suck at, as they are quite complex.

Hope that helps, but honestly, there are a bunch of YouTube channels I've listed previously that will give you a better understanding of how to do much of this. This post is more demonstrating the flexibility of something like Stable Diffusion.

Ah, I think I've watched a video about inpainting. You use that ControlNet plugin for that, right? Anyway, thx.
 
The first two images' prompt was Ana de Armas, the others Emma Stone. All DALL-E 3, then upscaled. Prompts might not work today, since celebrity names are usually blocked (although there are workarounds).

I was right about Emma at least. Nice outputs though. I should have a dip at DALL-E 3 at some point, but I like the flexibility of SD, plus there are a lot of models to play around with.

I'm really into this fashion magazine/advertising photoshoot vibe at present, ever since I came across that dude's photography prompting video that I linked to in an earlier post. Helmut Newton, Bruce Weber, etc. style. These were all done using the Absolute Reality model, which is still SD 1.5. Although SDXL does produce excellent outputs, I presently find it a bit too clunky to work with.


r3DA7Rn.png
5p6Amc0.png

SCSEzO6.png
arjhqcX.png
 
Ah, I think I've watched a video about inpainting. You use that ControlNet plugin for that, right? Anyway, thx.

ControlNet is more about taking an existing image and using it as the framework for a new prompt, versus regional inpainting per se. However, it's always expanding in terms of what you can do with it. It's a very powerful, versatile tool, but can be a bit complicated, although there are plenty of tutorials on YouTube about it.
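Conceptually (and this is a rough sketch with made-up numbers, not ControlNet's real code), the control branch looks at the conditioning image (edges, pose, depth, etc.) and adds its features into the main model as a weighted residual, which is what steers the output toward the reference's structure:

```python
import numpy as np

def apply_control(unet_features, control_features, weight=1.0):
    """Toy ControlNet idea: the control branch's features are added to
    the main U-Net's features as a weighted residual, steering the
    generation toward the structure of the conditioning image."""
    return unet_features + weight * control_features

feats = np.zeros(4)                          # stand-in U-Net features
control = np.array([1.0, 2.0, 3.0, 4.0])     # stand-in control features
half = apply_control(feats, control, weight=0.5)   # half-strength guidance
off = apply_control(feats, control, weight=0.0)    # weight 0 = ControlNet off
print(half, off)
```

That weight is essentially the "control strength" slider you see in the web UI extension.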
 
BTW @Kadayi, just for better understanding SD: would it be correct to say that checkpoints and LoRAs are basically smaller instances of the internet? In other words, an AI that accesses the internet has the most data to pull from; a checkpoint is a large collection of data that can create a lot, but not everything; and a LoRA is a small collection of data tailor-made for a specific task. Correct?
 
  • Brain
Reactions: Kadayi
BTW @Kadayi, just for better understanding SD: would it be correct to say that checkpoints and LoRAs are basically smaller instances of the internet? In other words, an AI that accesses the internet has the most data to pull from; a checkpoint is a large collection of data that can create a lot, but not everything; and a LoRA is a small collection of data tailor-made for a specific task. Correct?

You always need a checkpoint, as that is your base model. Checkpoints are trained on vast amounts of image data.

LoRAs are effectively trained mini-models of a particular subject or style that interface with the main checkpoint model; these are usually from 8 to 200 MB in size, depending on the training.

On top of checkpoints & LoRAs you also have embeddings, which are generally quite small files (normally under 1 MB) and oftentimes are codified collections of negative prompts, or people's likenesses.
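Roughly speaking (toy sketch, made-up vectors), an embedding file is just a learned vector registered under a new trigger token, which the prompt encoder then looks up like any ordinary word, which is why the files are so tiny:

```python
import numpy as np

# Toy textual-inversion idea: the token table maps words to vectors.
token_table = {
    "girl": np.array([0.1, 0.9]),   # ordinary vocabulary tokens
    "hat":  np.array([0.7, 0.2]),
}
# "Installing" an embedding just adds one more entry under its trigger
# word (the vector itself was learned offline, hence the sub-1 MB file):
token_table["3mmaston3"] = np.array([0.42, -0.3])

def encode_prompt(prompt):
    """Look up each prompt token, trigger words included."""
    return np.stack([token_table[t] for t in prompt.split()])

vectors = encode_prompt("3mmaston3 girl hat")
print(vectors.shape)  # one vector per token
```

So the embedding never adds knowledge to the checkpoint itself; it just gives the encoder a very precise "word" to steer with.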

Let's take the image that I posted earlier on. This was generated using the Rev Animated model which is quite a popular stylised model.

zhOaTSG.png


For the sake of expediency, to illustrate the inpainting issue, I just used a simple prompt that I cribbed from Civitai:

Prompt: pop art, 2 Girls, Scarlett Johansson and Kirsten Dunst, upper body, hair, red lips, looking at viewer, hat, <lora: pop_art_v2:0.7>

^ As you can see, the prompt has a style LoRA reference to it between the < >, with a weight of 0.7 (generally, so as not to overpower the checkpoint, you want to go for 0.3–0.9), and "pop art" is the associated trigger word.

Negative prompt: EasyNegativeV2 ng_deepnegative_v1_75t bad_prompt_version2

^ These are all embeddings, which you should be able to find quite easily using search at Civitai. Just install them to your Embeddings folder.


Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 874390029, Size: 768x768, Model hash: f8bb2922e1, Model: revAnimated_v122, VAE hash: 551eac7037, VAE: vae-ft-mse-840000-ema-pruned.ckpt, Clip skip: 2, Token merging ratio: 0.2,

^ These are the image settings plus the image seed. Using the same prompt plus seed, you should end up with a very similar image (although that can vary depending on your SD settings).
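The reason the seed matters is that it fixes the starting noise; with identical settings, the sampler is deterministic from there, which is why the same prompt plus seed reproduces (near enough) the same image. A toy numpy illustration of that determinism (the latent shape here is made up):

```python
import numpy as np

def initial_latents(seed, shape=(4, 96, 96)):
    """The seed pins down the starting noise tensor; everything after
    that in the sampler is deterministic given the same settings."""
    return np.random.default_rng(seed).standard_normal(shape)

a = initial_latents(874390029)   # the seed from the settings above
b = initial_latents(874390029)
c = initial_latents(874390030)
print(np.array_equal(a, b))      # same seed, identical starting noise
print(np.array_equal(a, c))      # different seed, different noise
```

Change the seed and you get a completely different starting point, hence a completely different image from the same prompt.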


LoRAs can be quite good, especially for things like artistic styles or people; however, they can overpower a model somewhat, and you need to balance that out in terms of the weight. Embeddings are pretty good for likenesses and operate off of keywords, usually something like 3mmaston3 for instance.
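For the curious, the LoRA weight in the prompt is, roughly, a scale on a small low-rank update that gets added onto the checkpoint's weights at load time; that's also why the files are megabytes rather than gigabytes. A toy numpy sketch of the maths (made-up sizes, not actual SD code):

```python
import numpy as np

def apply_lora(W, A, B, weight=0.7):
    """Toy LoRA maths: the LoRA file stores two small low-rank factors
    (A and B); their product is added to the checkpoint's weight matrix,
    scaled by the <lora:...:weight> value from the prompt."""
    return W + weight * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # stand-in for a checkpoint weight matrix
A = rng.standard_normal((2, 8))   # rank-2 factors: far fewer numbers
B = rng.standard_normal((8, 2))   # than the full 8x8 matrix they adjust

W_eff = apply_lora(W, A, B, weight=0.7)
print(np.allclose(apply_lora(W, A, B, weight=0.0), W))  # weight 0 = LoRA off
```

So dialling the weight down towards 0.3 literally shrinks the nudge the LoRA applies to the base model, which is why lower weights stop it overpowering the checkpoint.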

Same prompt as before, save for "1 girl", using the Rev Animated checkpoint and the same seed, but using embeds for Alison Brie, Emma Stone and Anya Taylor-Joy respectively:

lWhElB0.png
ugoUJu0.png
0dgQexo.png


Don't bother with Hypernetworks, as they are barely used. LyCORIS and LoRA are interchangeable, effectively the same thing, and operate in the same way; LyCORIS is newer tech though, and generally the file sizes are a bit smaller compared to LoRA.
 
  • Like
Reactions: FactsAreDead