

I’m not sure exactly what you’re asking, but there are ControlNet models that can be conditioned on poses pulled from videos or images and used to guide how a compatible generative AI model poses figures, and there may be something that does functionally the same thing for animating bones on a 3D model. I have to stress, though, that a lot of this tech is at once surprisingly capable and still complete dogshit in practice. Generative AI research has been hard focused on making the shitty little black boxes less bad at churning out slop from simple prompts, which has meant that a whole bunch of the attendant tooling that might make it less bad via human curation and guidance just hasn’t been built.
The hobbyist sector wants its slop gacha, the management sector wants a fully autonomous worker replacer, and the whole thing’s such a grift no thought’s been given to how to actually make genuinely useful toolkits involving the tech.
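For context on what the image-side version of this looks like today: with something like Hugging Face’s diffusers library, the usual recipe is to extract a pose skeleton from a reference photo and hand it to a pose ControlNet as a conditioning image. Treat the below as a rough sketch, not gospel: the model IDs are the common public checkpoints, the reference image URL is a placeholder, and you’d need a GPU plus diffusers and controlnet_aux installed for it to actually run.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image
from controlnet_aux import OpenposeDetector

# Pull a pose skeleton out of a reference photo (placeholder URL).
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
reference = load_image("https://example.com/reference_pose.jpg")
pose_map = openpose(reference)

# Load a pose ControlNet and bolt it onto a Stable Diffusion pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# The pose map constrains where the figure's limbs end up; the prompt fills in the rest.
result = pipe(
    "a knight standing in a courtyard",
    image=pose_map,
    num_inference_steps=25,
).images[0]
result.save("posed_knight.png")
```

Even when it works, this only gives you pose control over 2D generations; it does not hand you bone animations for an actual 3D rig, which is the part that still mostly doesn’t exist as usable tooling.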

And it’s just the same model as the second worst car, but with the muffler and catalytic converter removed, a bad lift job, and some razor blades welded to the front bumper “in case of pedestrian impact”. These are the only differing features that let the second worst car claim that title over this one.