I have a library of videos and photos featuring my own characters, and I want to breathe new life into that footage with AI. Your task is to build a script (Python preferred, but I'm open to suggestions) that can take any of my existing images or clips and output fresh videos and stills featuring the same characters in a realistic yet stylized look.

Core goals
• Input: my current JPG/PNG photos and MP4/MOV videos
• Output: new videos and photos, automatically rendered with realistic, stylized versions of the original characters
• Single-command workflow: choose a source file, set the desired length or frame count, and generate results with no manual frame-by-frame editing

What I'm picturing
– Face/character extraction and consistent identity preservation across frames
– Style transfer or a fine-tuned diffusion model that keeps proportions and likeness while adding creative, cinematic polish
– Optional batch mode for processing an entire folder of assets in one run

Deliverables
1. Fully commented script (or notebook) covering model loading, the inference pipeline, and the export routine
2. Clear README with setup steps, dependencies, and example commands
3. One sample output video and one sample image created from my test assets to prove everything works

I'm comfortable installing CUDA toolkits and large models locally, but if you can streamline things with a hosted API such as Stability AI's or Runway's, spell that out in the instructions. Accuracy, speed, and reproducibility matter more to me than flashy GUIs; I want a solid, reusable workflow that I own and can extend later.
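To make the single-command workflow and batch mode concrete, here is a minimal sketch of the CLI shell such a script might have. Everything in it is an assumption for illustration: the script name, the `--out` and `--frames` flags, and the `run_pipeline` stub (where the real model loading and diffusion inference would go) are all hypothetical, not a working implementation. It uses only the standard library so the plumbing is testable before any models are installed.

```python
# Hypothetical CLI skeleton for the requested workflow (stdlib only).
# The diffusion/identity-preservation stage is deliberately a stub.
import argparse
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}   # input photo formats from the brief
VIDEO_EXTS = {".mp4", ".mov"}            # input video formats from the brief


def collect_assets(source: Path) -> list[Path]:
    """Return supported files: the file itself, or every match in a folder (batch mode)."""
    if source.is_file():
        return [source] if source.suffix.lower() in IMAGE_EXTS | VIDEO_EXTS else []
    return sorted(p for p in source.rglob("*")
                  if p.suffix.lower() in IMAGE_EXTS | VIDEO_EXTS)


def run_pipeline(asset: Path, out_dir: Path, frames: int) -> Path:
    """Stub for the model stage: extract identity, run stylized img2img/vid2vid, export.

    A real implementation would load the fine-tuned model here and write the
    rendered output; this placeholder only computes the output path.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / f"{asset.stem}_styled{asset.suffix}"


def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Restyle character photos/videos (single file or whole folder).")
    parser.add_argument("source", type=Path, help="input file, or a folder for batch mode")
    parser.add_argument("--out", type=Path, default=Path("outputs"), help="output directory")
    parser.add_argument("--frames", type=int, default=120,
                        help="desired frame count for video outputs")
    args = parser.parse_args(argv)
    for asset in collect_assets(args.source):
        print(run_pipeline(asset, args.out, args.frames))


if __name__ == "__main__":
    main()
```

With this shape, `python restyle.py ./assets --frames 240` would process every supported file under `assets/` in one run, matching the one-command, batch-capable workflow described above.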