Testing AI for Super Deepthroat

Felldude

Content Creator
Joined
May 3, 2014
So the obvious use would be backgrounds, as those would be easy and require almost no work.
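For backgrounds it's basically just plain txt2img. A minimal diffusers sketch, assuming an SD 1.5 checkpoint (the model ID and prompt here are just placeholders, swap in whatever you actually use):

import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; any SD 1.5 model works the same way
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Example prompt; 768x512 gives a wide background at SD 1.5 scale
bg = pipe(
    "anime bedroom interior, empty room, flat colors, no people",
    width=768, height=512, num_inference_steps=25,
).images[0]
bg.save("background.png")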

Second to that in ease, I think, would be hair generation. It's easy for AI to draw and usually a problem for artists.
Here is an example where I took the hair guide PNG and kept it to scale, but scaled it way up (easier to go down than up).
Lol.png
Face 4.png
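Roughly what I'm doing, in diffusers terms. This is only a sketch: the file names and checkpoint are placeholders, and the strength value is just a starting point to tweak.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Work at double scale, then shrink back down to the template size
guide = Image.open("hair_guide.png").convert("RGB")
big = guide.resize((guide.width * 2, guide.height * 2), Image.LANCZOS)

hair = pipe(
    prompt="long anime hair, side view, flat color, simple shading",
    image=big,
    strength=0.6,        # how far the model is allowed to wander from the guide
    guidance_scale=7.0,
).images[0]

hair.resize(guide.size, Image.LANCZOS).save("hair_out.png")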

Lastly, there's the possibility of training a LoRA to draw the individual body parts (as a group). I'm not sure if any value would come from this, as highly realistic models don't really fit the game.


My theoretical template would look like this (You would have to double the legs)

Maybe.png

I'm not sure of the value of painting faces yet; staying within the bounds set by the pattern makes it tough to do something like a celebrity or even an anime character with AI.
Face2.png

In this case the ear got rotated
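For anyone who wants to try it, this is the kind of inpainting call I mean: the mask is white only inside the face area of the template, so the model can't paint outside those bounds. File names are placeholders, and the prompt is just an example.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Template with the face region to repaint; the mask is white where the
# model is allowed to paint and black everywhere else
face = Image.open("face_template.png").convert("RGB").resize((512, 512))
mask = Image.open("face_mask.png").convert("L").resize((512, 512))

out = pipe(
    prompt="anime girl face, side profile, flat shading",
    image=face,
    mask_image=mask,
    guidance_scale=7.0,
).images[0]
out.save("face_out.png")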
 

Graceus

Content Creator
Joined
Jan 5, 2021
I've spent some time with SD models. Models like AOM3 can maintain the anime style while still adding some variation to the default SDT (really just a result of low denoising and a very good model). The "side view" of SDT is a heavy limitation: at higher denoising, even 0.4-0.5 at a time, the models/latent space just aren't well specified for these side-view scenes. "Niche" models without tons of training look really bad, even piece by piece.

Doing it part by part is cumbersome, and even with the same seed you will get incoherent body parts. It takes a lot of manual fixing of seams, color, lighting, shading, and texture to get it to look right (at least on par with the vanilla body).
ControlNet (lineart_anime, canny -- vanilla sprites as masks) makes this viable.
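A minimal sketch of the canny variant with diffusers, assuming an SD 1.5 base and the stock canny ControlNet; the lineart_anime ControlNet works the same way, just with a lineart preprocessor instead of cv2.Canny. File names are placeholders.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Canny edges from the vanilla sprite act as the structural guide
sprite = cv2.imread("vanilla_body.png")
edges = cv2.Canny(sprite, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    "anime girl, side view, flat shading",
    image=control,
    num_inference_steps=25,
).images[0]
out.save("controlled_body.png")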

There are two approaches. First, you feed in the whole body, then cut out the body parts (again, using the vanilla sprites as masks) and paste them into a mod template (like ExtraMod).
The second approach is going body part by body part. The models don't really understand disembodied body parts; they're trained on whole people in whole scenes, and are usually overfit (a no-prompt, mid-CFG txt2img pops out generic anime girls; it's baked in, low versatility). Results are disappointing, but I haven't explored the entire breadth of models.
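The cut-and-paste half of the first approach is just alpha masking. Something like this with Pillow; the file names are made up, and it assumes the generated render, the vanilla sprite, and the template all share the same canvas size.

from PIL import Image

generated = Image.open("generated_body.png").convert("RGBA")
part_sprite = Image.open("vanilla_arm_sprite.png").convert("RGBA")

# Reuse the vanilla sprite's alpha channel so the cut-out keeps the
# exact silhouette of the original part
alpha = part_sprite.split()[3]
cutout = Image.new("RGBA", generated.size, (0, 0, 0, 0))
cutout.paste(generated, (0, 0), mask=alpha)

# Drop the cut-out into the mod template (ExtraMod-style sheet)
template = Image.open("mod_template.png").convert("RGBA")
template.alpha_composite(cutout)
template.save("template_with_arm.png")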

Looking at your examples, I'd be interested to know what models you've been trying. There's something to it.

The LoRA idea is 100% the way to go if you're mass producing; it would help nudge the models toward the right style and get over that side-view speed bump.
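Once someone trains one, dropping a LoRA into the pipeline is the easy part; recent diffusers versions can load a .safetensors LoRA directly. The LoRA file name here is hypothetical, and the base checkpoint is a placeholder.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA trained on SDT-style side-view parts
pipe.load_lora_weights("sdt_sideview_lora.safetensors")

img = pipe(
    "anime girl, side view, sdt style, flat shading",
    num_inference_steps=25,
).images[0]
img.save("lora_test.png")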

Also, you have to develop a series of frames for the breasts, the jaw, and any of the "tweened" components. EBSynth makes this very easy (it can usually be done in one or two attempts with a single keyframe), but again, it's more of a learning curve.


I actually followed this exact process for my Ruby mod. Human beings should be easier. Just takes some time and patience.
 

Felldude

Content Creator
Joined
May 3, 2014

So I used both a Realistic Vision merge and an OrangeMix anime checkpoint, both SD 1.5, not XL.

With inpainting, drawing over a full model definitely produces better results.

I do think you could train a LoRA with the segments and get similar results, but I'm not sure how much value it would bring given the static nature of the game.
 
