Python App for Automated Devotional Content

Client: AI | Published: 15.10.2025

Project Goal: Develop a local Python application that turns existing devotional content into engaging daily short videos and either uploads them automatically to YouTube, Instagram (Reels), TikTok, Facebook and WhatsApp Business channels/chats, or saves them for manual approval. The workflow should be fully automated but include optional checkpoints after text, image and video generation.

Background: An existing automation (code on GitHub) partially implements this workflow. It needs to be modernized: instead of relying on external cloud models, it should use local multimodal models such as Wan 2.2, Hunyuan or other available models. The uploaded documents ("Social Media Devotional Workflows") define example processes with prompts, LoRAs, sampling settings and social media tips.

Tasks / Deliverables:
- Data import: Read devotional data from Excel/CSV (titles, Bible verses, devotional text, English/German verses, positive/negative prompts for each scene).
- Text generation / adaptation: Optionally use local LLMs (e.g. Wan 2.2-based text models) to create or adapt the devotional texts in German/English. Integrate a TTS service (e.g. ElevenLabs) to generate voice-over audio.
- Image and video generation: Implement T2I, I2V and T2V workflows based on Wan 2.2 or compatible models, preferably directly via Python (without a GUI), alternatively via ComfyUI. Follow the documented parameters (resolution, number of frames, sampling steps, LoRAs, ControlNets, camera movements).
- Video post-processing: Combine the generated clips into a finished short; add subtitles, optional music, hooks and CTA elements. Use a library such as moviepy or ffmpeg.
- Platform integration: Connect to the APIs of YouTube, Instagram, TikTok, Facebook and WhatsApp Business. Provide a configuration file where login or API credentials can be stored. Allow output to a Dropbox folder instead of direct upload.
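The data-import step above could be sketched as follows. This is a minimal illustration using only the standard library; the column names (title, verse, devotional_text, prompt_positive, prompt_negative) are assumptions and would need to match the real spreadsheet, and an Excel variant would typically use pandas/openpyxl instead of the csv module.

```python
import csv
import io

def load_devotionals(fh):
    """Read devotional rows from a CSV file object into a list of dicts."""
    rows = list(csv.DictReader(fh))
    # Hypothetical required columns -- adjust to the actual sheet layout.
    required = {"title", "verse", "devotional_text"}
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"missing columns: {sorted(missing)}")
    return rows

# Inline sample standing in for the real Excel/CSV export:
sample = io.StringIO(
    "title,verse,devotional_text,prompt_positive,prompt_negative\n"
    "Hope,John 3:16,God so loved the world,sunrise over hills,blurry\n"
)
rows = load_devotionals(sample)
```

Validating columns up front keeps a bad spreadsheet from failing hours later, mid-render.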
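For the post-processing step, one hedged option is to drive ffmpeg directly via its concat demuxer and subtitles filter rather than moviepy. The sketch below only builds the command list (filenames and codec choices are illustrative); a real pipeline would pass it to subprocess.run and check the return code.

```python
def build_concat_command(clip_list_file, subtitles, output):
    """Assemble an ffmpeg command: concatenate clips and burn in subtitles.

    `clip_list_file` is a concat-demuxer text file listing the generated
    clips; `subtitles` is an .srt/.ass file. All paths here are examples.
    """
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", clip_list_file,
        "-vf", f"subtitles={subtitles}",   # burn subtitles into the video
        "-c:v", "libx264", "-c:a", "aac",  # widely compatible codecs
        output,
    ]

cmd = build_concat_command("clips.txt", "subs.srt", "short.mp4")
```

Building the command as a list (not a shell string) avoids quoting bugs and makes the step easy to log and unit-test without actually rendering.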
- Manual checkpoints: Introduce optional pauses after (a) the devotional text is generated, (b) images/videos are generated and (c) the final video is compiled. At each stage the user can review, abort or continue.
- Scripting & automation: Enable scheduled execution (e.g. running for several hours overnight), logging and robust error handling. Make it easy to swap in new models (e.g. Hunyuan).
- Documentation: Provide detailed instructions for installation (including GPU requirements), configuration, using the app and extending it with new models or platforms.

Requirements for the freelancer:
- Strong Python skills and experience with multimodal AI models (diffusion/T2I/T2V, LoRA, ControlNet).
- Experience with Wan 2.2/NextDiffusion models, or willingness to get up to speed quickly.
- Familiarity with TTS APIs (e.g. ElevenLabs) and social media APIs (YouTube, Instagram, TikTok, Facebook, WhatsApp Business).
- Experience in video post-production (moviepy, ffmpeg) and optionally ComfyUI.
- Ability to build a modular, well-documented solution that can be used by non-developers.

Timeline: Flexible; there is no fixed deadline. An iterative approach is desired so that feedback can be incorporated after each phase.
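The three manual checkpoints could be implemented as a single reusable pause function, sketched below. The stage names, the `interactive` flag and the injectable `ask` callable are assumptions for illustration, not part of the existing codebase.

```python
def checkpoint(stage, interactive=True, ask=input):
    """Return True to continue past `stage`, False to abort.

    When `interactive` is False the pipeline runs fully automated and
    every checkpoint is skipped. `ask` is injectable so the prompt can
    be replaced by a GUI dialog or a test stub.
    """
    if not interactive:
        return True
    answer = ask(f"[{stage}] continue? (y/n) ").strip().lower()
    # Plain Enter counts as "continue" so unattended review is quick.
    return answer in ("y", "yes", "")

# The three review points named in the brief:
stages = ["devotional text generated", "images/videos generated",
          "final video compiled"]
```

Injecting `ask` also makes the checkpoint logic trivially unit-testable without a terminal.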
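Scheduled overnight execution with logging can be handled with the standard library alone; the sketch below computes the delay until a target hour (the hour value and logger name are illustrative), after which a loop would sleep and then run the pipeline.

```python
import datetime
import logging

# Basic logging setup; a real deployment would also log to a file.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("devotional-pipeline")

def seconds_until(hour):
    """Seconds from now until the next occurrence of `hour`:00 local time."""
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # already past today -> tomorrow
    return (target - now).total_seconds()

delay = seconds_until(2)  # e.g. start the overnight batch at 02:00
log.info("next run in %.0f seconds", delay)
```

For production use, cron or systemd timers are usually more robust than a long-running sleep loop, but this keeps scheduling inside the app as the brief requests.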