AI-POWERED BLOCKCHAIN NEWS TRUTH ENGINE

Client: AI | Published: 18.12.2025
Budget: $1,500

DEVELOPER REQUIREMENTS DOCUMENT

1. Functional Requirements

1.1 Fixed News Source List
- The system uses a predefined list of blockchain news websites.
- No UI or API for managing sources.
- The source list is editable only via configuration files.
- Each source includes: name, URL, country, credibility score.

1.2 Automated News Scraping
- Automatically scrape all configured news sources.
- Extract: title, body text, publish date, URL, source metadata.
- Run scraping on scheduled intervals.
- Save scraped articles to the database.

1.3 Article Normalization
- Convert HTML to plain text.
- Remove ads, menus, and other irrelevant content.
- Normalize the content and store the cleaned text.

1.4 Topic Clustering
- Generate embeddings for each article.
- Assign articles to topic clusters using similarity search.
- Create a new cluster when no match exists.
- Store cluster IDs.

1.5 Claim Extraction
- Extract factual claims from articles.
- Each claim includes: text, type (fact / prediction / opinion / speculation), sentiment.
- Store extracted claims.

1.6 Cross-Source Claim Comparison
- Compare claims within each topic cluster.
- Detect supporting vs. contradicting claims.
- Count supporting articles per claim.
- Store comparison data.

1.7 Truth Evaluation Engine
- Compute the most likely true outcome for each topic cluster.
- Generate: final truth summary, confidence score, supporting claims, contradicting claims.
- Save the truth summary to the database.

1.8 Sentiment & Opinion Classification
- Classify each claim as positive, negative, or neutral.
- Distinguish facts vs. predictions vs. opinions.

1.9 Data Storage Requirements
Tables required:
- Articles: id, title, content, url, publish_date, source, country, credibility_score, topic_cluster_id.
- Claims: id, article_id, claim_text, claim_type, sentiment.
- Truth_Clusters: id, topic_summary, final_truth_summary, confidence_score.
- Claim_Support: id, cluster_id, claim_id, support_type.
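The clustering step in 1.4 can be sketched as a nearest-centroid lookup over article embeddings. This is a minimal illustration, not the mandated implementation: the function names, the 0.85 similarity threshold, and the centroid-per-cluster representation are all assumptions a bidder would tune against real embedding models.

```python
import math

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune against the real embedding model


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def assign_cluster(embedding, clusters):
    """Return the id of the best-matching cluster, or create a new one.

    `clusters` maps cluster_id -> representative embedding. If no existing
    cluster is similar enough, a fresh cluster is seeded from this article,
    matching requirement 1.4 ("create new clusters when no match exists").
    """
    best_id, best_sim = None, 0.0
    for cluster_id, centroid in clusters.items():
        sim = cosine(embedding, centroid)
        if sim > best_sim:
            best_id, best_sim = cluster_id, sim
    if best_id is not None and best_sim >= SIMILARITY_THRESHOLD:
        return best_id
    new_id = max(clusters, default=0) + 1
    clusters[new_id] = embedding  # new cluster seeded by this article
    return new_id
```

In production the linear scan would be replaced by a vector index (e.g. an ANN search), which is what keeps the "<5s per article" budget in 2.1 realistic at scale.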
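The spec in 1.7 requires a confidence score but does not define it. One plausible reading, sketched below, weights each supporting or contradicting source by its credibility score from the source list (1.1); the formula itself is an assumption, not a spec requirement.

```python
def confidence_score(supporting, contradicting):
    """Credibility-weighted fraction of sources supporting the truth summary.

    `supporting` and `contradicting` are lists of per-source credibility
    scores in [0, 1]. Returns a value in [0, 1]; 0.5 means the sources are
    evenly split. NOTE: this weighting scheme is an assumption -- the spec
    only requires *a* confidence score per cluster.
    """
    support = sum(supporting)
    total = support + sum(contradicting)
    return support / total if total else 0.0
```

For example, two supporting sources with credibility 0.9 and 0.8 against one contradicting source with credibility 0.4 yield 1.7 / 2.1 ≈ 0.81.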
1.10 Automation Requirements
- Automated pipelines for scraping, embeddings, clustering, claim extraction, and truth evaluation.
- All tasks must retry on failure.

1.11 Public API Requirements
- GET /truth/latest – return the latest truth summaries.
- GET /truth/{cluster_id} – truth details for a cluster.
- GET /article/{id} – raw article.
- GET /opinions/{cluster_id} – sentiment & opinion breakdown.

1.12 Logging Requirements
- Log scraping runs.
- Log claim extraction.
- Log truth evaluation.
- Log errors and API events.

2. Non-Functional Requirements

2.1 Performance
- Support 100–1,000 articles per day.
- Clustering: < 5 s per article.
- Truth evaluation: < 60 s per topic.

2.2 Reliability
- Retry failed AI or scraping jobs.
- The system continues operating if one source fails.

2.3 Scalability
- Must support expanding the number of sources via configuration.
- Must support distributed scraping and AI workers.

2.4 Security
- HTTPS required.
- Public APIs must be rate-limited.

2.5 Maintainability
- Adding a new news source requires only configuration changes.
- The AI model provider must be swappable via configuration.

3. Deliverables
- Backend service implementation.
- Automated scraping pipeline.
- Topic clustering module.
- Claim extraction module.
- Truth evaluation engine.
- Database schema.
- Public REST API.
- Deployment configuration.
- Developer documentation.

4. Acceptance Criteria
- The system scrapes all sources automatically.
- Articles are cleaned and stored.
- Claims are extracted accurately.
- Contradictions are identified.
- Truth summaries are stored and accessible.
- No admin interface for source management.
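The retry requirement in 1.10 and 2.2 ("all tasks must retry on failure") is commonly met with a retry wrapper around each pipeline task. A minimal sketch, assuming exponential backoff; the attempt count and delays are illustrative defaults, not values from the spec.

```python
import functools
import time


def retry(attempts=3, base_delay=1.0, backoff=2.0):
    """Retry a pipeline task on any exception, with exponential backoff.

    Defaults are illustrative, not spec-mandated. The last failure is
    re-raised so the scheduler can log it per requirement 1.12.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # retries exhausted: surface the failure
                    time.sleep(delay)
                    delay *= backoff
        return wrapper
    return decorator
```

A scraping job would then be declared as, e.g., `@retry(attempts=5)` on its entry function, and one failing source only fails its own task, satisfying 2.2.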
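The `GET /opinions/{cluster_id}` endpoint in 1.11 can be backed by a simple aggregation over the Claims rows of a cluster. The response shape below is a hypothetical suggestion, not part of the spec.

```python
from collections import Counter


def opinion_breakdown(claims):
    """Aggregate sentiment and claim-type counts for one topic cluster.

    `claims` is a list of (claim_type, sentiment) tuples as stored in the
    Claims table (1.9). The returned dict is an assumed response shape for
    GET /opinions/{cluster_id}; the spec does not mandate a format.
    """
    sentiments = Counter(sentiment for _, sentiment in claims)
    types = Counter(claim_type for claim_type, _ in claims)
    return {
        "sentiment": dict(sentiments),
        "claim_types": dict(types),
        "total_claims": len(claims),
    }
```

This keeps the endpoint a pure read over already-stored classification results (1.8), so no AI call is needed at request time.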