Editing develop

Client: AI | Published: 17.11.2025
Budget: $250

Here is the final requirement summary. Full automation: analysis → generate → schedule → publish (1hr+ videos). Add basic chart generation (utilizing APIs). Advanced script system: topic extraction, multi-version generation, and an editable prompt/template structure. UI/UX in Figma + full .env + a complete working system. GPT/Claude can generate most of the chart code, so you only need simple integration.

I need to confirm a few things to avoid scope issues:
1) Scene splitting accuracy — can you show 1 example script → scene breakdown?
2) Is B-roll matching fully automatic or “auto suggestions + manual pick”?
3) Which free-footage sources will be used (API + non-API)?
4) Which models will be used for NER, embeddings, keyword extraction, and mood classification?
5) What editing automation is included? (cuts, silence detection, captions, BGM, transitions)
6) Can I edit templates myself later? (captions, fonts, cut structure, b-roll design)
7) How will the .md training structure work, and can I replace files anytime?
8) Tech stack (backend, ffmpeg version, n8n integration)?
9) Total cost + exact timeline + maintenance period?
10) Provide one sample: my short script → footage-matched mini demo.

I don’t need another AI video generation website. There are already many tools that can generate AI videos. What I need is an automated editing pipeline, not just generation. That means scene detection, smart cutting, footage matching, subtitles, audio, rendering, and template assembly. AI generation is optional — the core is automated video editing, similar to InVideo, Pictory, or Descript. I can also give you material to make development very easy, with GPT prompts. Then I want you to develop the YouTube analysis first.

I also have a small feature upgrade request that fits smoothly into our current pipeline. I want to add basic chart generation to the system, with two output types:
1. Static transparent PNG chart
2. Animated chart (GIF or short MP4)

Both are based on simple numeric data (TradingView / Coingecko / FRED API — I can provide any key). It’s basically: fetch the numbers, auto-generate the chart with Matplotlib/Plotly, and overlay the PNG/MP4 using the same ffmpeg process we already use. So this doesn’t create a new workflow — it’s just adding one small step inside the existing Python → ffmpeg pipeline (a sketch of this step follows the core requirements below). GPT/Claude can generate 90% of the chart code, so you only need to integrate it.

Core Requirements (English)
1. Full Automation: Generation → Scheduling → Publishing. Connect any platform via API → auto-generate, process, and publish. Must support videos over 1 hour in length.
2. Script System. Learn all storytelling/marketing types I provide. Extract topics from news / YouTube captions and analyze what’s good. Download and use the original source text. Classify the topic → auto-generate multiple script versions per type. Step-by-step generation (X-step, short-form, long-form up to X0,000+ characters) → linked directly to video production. Prompts, templates, and SNS-specific formats must be editable by me without touching code.
3. UI/UX (Figma). We refine the style together during the design process.
4. Deliverables: .env configuration, screenshots of progress, and the fully working final system.
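To make the chart step concrete, here is a minimal sketch under the assumptions above: it pulls daily BTC/USD prices from CoinGecko's public market-chart endpoint (no key required), renders a transparent PNG with Matplotlib, and overlays it using the same kind of ffmpeg overlay call the pipeline already relies on. File names such as base.mp4 are placeholders, not part of the real pipeline.

```python
# Sketch: fetch numbers -> transparent chart PNG -> ffmpeg overlay.
import subprocess
import requests
import matplotlib.pyplot as plt

def fetch_btc_prices(days=30):
    # CoinGecko public endpoint: daily BTC/USD prices, no API key needed
    url = "https://api.coingecko.com/api/v3/coins/bitcoin/market_chart"
    resp = requests.get(url, params={"vs_currency": "usd", "days": days})
    resp.raise_for_status()
    return [point[1] for point in resp.json()["prices"]]

def render_transparent_png(prices, out_path="chart.png"):
    # Transparent background so the chart can sit on top of video
    fig, ax = plt.subplots(figsize=(6, 3))
    ax.set_facecolor("none")
    ax.plot(prices, color="white", linewidth=2)
    ax.set_title("BTC/USD (30d)", color="white")
    ax.tick_params(colors="white")
    fig.savefig(out_path, transparent=True, dpi=150)
    plt.close(fig)

def overlay_on_video(base_video, chart_png, out_video):
    # Same ffmpeg overlay pattern the existing render step uses:
    # pin the PNG to the top-right corner with a 20px margin.
    subprocess.run([
        "ffmpeg", "-y", "-i", base_video, "-i", chart_png,
        "-filter_complex", "[0:v][1:v]overlay=W-w-20:20",
        "-c:a", "copy", out_video,
    ], check=True)

if __name__ == "__main__":
    render_transparent_png(fetch_btc_prices())
    overlay_on_video("base.mp4", "chart.png", "with_chart.mp4")  # placeholder files
```

The animated variant would render a sequence of frames (or use Plotly/Matplotlib animation) and feed the resulting short MP4 into the same overlay filter.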
1) Scene splitting accuracy: LLM-based segmenter + rule-based timestamps; works reliably for ~95% of short/medium scripts. An example breakdown will follow your chosen style template.
2) B-roll matching: fully automatic by default, with an option to override manually in the UI.
3) Free-footage sources: Pexels API, Pixabay API, Videvo (scrape/index), and possibly Archive.org public-domain sources.
4) Models used: NER: spaCy / small LLM. Embeddings: OpenAI text-embedding-3-small. Keyword extraction: RAKE + LLM refinement. Mood classification: small LLM classifier. (See the embedding-matching sketch at the end of this post.)
5) Editing automation: auto cuts, silence detection, B-roll placement, subtitles, BGM leveling, transitions, and final FFmpeg rendering. (See the silence-detection sketch at the end of this post.)
6) Template editing: yes, all templates (fonts, captions, cut logic, overlays) are editable via JSON/YAML files.
7) Markdown training structure: all prompts + examples are stored as .md files; you can replace or edit them anytime and the pipeline reloads automatically. (See the reload sketch at the end of this post.)
8) Tech stack: backend: Python (FastAPI) + workers (Celery). FFmpeg: latest stable 6.x. Automation: n8n workflow triggers + custom microservices.
9) Cost + timeline + maintenance: cost: $450 or more, depending on video-length support. Timeline: 7–10 days. Maintenance: 15 days of free support.
10) Sample mini demo: short script → scenes → matched Pexels clips (semantic similarity). E.g., “a man was walking in a quiet city morning” → returns urban sunrise walk, slow-motion street, and morning skyline clips.

Questions for applicants:
1) Do you have experience developing AI video creation/editing systems, including free stock footage search, automatic B-roll, and script-based AI video generation?
2) Please share any portfolio links or examples showing video automation work (ffmpeg, Whisper subtitles, scene detection, or automatic cut editing).
3) How do you plan to implement automatic free-footage matching based on script keywords?
4) Can you build the script/image training and generation structure so that I can modify and update the prompts myself later?
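A minimal sketch of the semantic matching behind answers 4 and 10, assuming the OpenAI Python SDK (v1+) and a tiny hand-written list of clip descriptions standing in for a real footage index:

```python
# Sketch: match a scene sentence to stock-clip descriptions by embedding similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative clip descriptions; a real index would come from Pexels/Pixabay metadata
CLIPS = [
    "urban sunrise walk",
    "slow-motion street",
    "morning skyline",
    "crowded night market",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def match_broll(scene_text, top_k=3):
    # Cosine similarity between the scene sentence and every clip description
    vecs = embed([scene_text] + CLIPS)
    query, clips = vecs[0], vecs[1:]
    sims = clips @ query / (np.linalg.norm(clips, axis=1) * np.linalg.norm(query))
    ranked = sims.argsort()[::-1][:top_k]
    return [(CLIPS[i], float(sims[i])) for i in ranked]

print(match_broll("a man was walking in a quiet city morning"))
```

In production the clip embeddings would be precomputed and cached, so each scene lookup is one embedding call plus a vector search.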
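For the silence detection in answer 5, one standard approach is ffmpeg's built-in silencedetect filter, which logs silence spans to stderr. The noise threshold and minimum duration below are illustrative defaults, not agreed pipeline settings:

```python
# Sketch: find silence spans in a voiceover track with ffmpeg silencedetect.
import re
import subprocess

def detect_silences(path, noise_db=-35, min_dur=0.5):
    # silencedetect writes its results to stderr as log lines
    proc = subprocess.run(
        ["ffmpeg", "-i", path, "-af",
         f"silencedetect=noise={noise_db}dB:d={min_dur}",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", proc.stderr)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", proc.stderr)]
    return list(zip(starts, ends))

# Each (start, end) pair becomes a candidate cut point for the auto-cut step.
print(detect_silences("voiceover.wav"))  # placeholder file name
```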
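And for answer 7, a minimal hot-reload sketch using the watchdog package; the prompts/ directory name is an assumption for illustration:

```python
# Sketch: reload .md prompt/template files whenever they change on disk.
import time
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

PROMPTS = {}

def load_prompts(directory="prompts"):
    # Each .md file becomes one named prompt/template
    for f in Path(directory).glob("*.md"):
        PROMPTS[f.stem] = f.read_text(encoding="utf-8")

class PromptReloader(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith(".md"):
            load_prompts()  # the pipeline picks up edits without a restart

if __name__ == "__main__":
    load_prompts()
    observer = Observer()
    observer.schedule(PromptReloader(), "prompts", recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

This is what lets you replace or edit the .md training files at any time without touching code.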