OpenSearch Ingest Pipeline, Queries, and Indices Setup

Client: AI | Published: 13.12.2025
Budget: $25

I am ready to stand up a production-grade OpenSearch environment and want your help wiring the full ingest pipeline. The core stack will be OpenSearch with Fluentd as the log-management and routing layer, together with Data Prepper (or an equally effective OpenSearch component) for enrichment and trace analytics. Source variety is high: classic log files from several services, change-data-capture (CDC) streams coming off our databases, and a real-time event feed. Most of what flows in is semi-structured, think JSON snippets with the occasional free-form field, so careful parsing, field mapping, and transformation will be essential before anything lands in an index.

What I need from you
• A reproducible configuration (YAML / Docker Compose or Helm is fine) that spins up OpenSearch plus the required Fluentd and Data Prepper pieces.
• Fluentd pipelines that reliably ingest the three source types above, apply helpful tags, and forward to Data Prepper.
• Data Prepper processors that normalize the semi-structured payloads, drop noise, and output clean documents ready for search and dashboarding.
• A short README explaining the flow and any environment variables I must set.

Acceptance is straightforward: stand up the stack on my test machine, push sample data from each source, and show the records landing in OpenSearch with the expected fields populated. If you have recent experience tuning Fluentd buffers, crafting Grok or Ruby filters, or optimizing OpenSearch ingest performance, your insight will be immediately valuable. Let's get this pipeline humming.
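To make the request concrete, here is a minimal Docker Compose sketch of the three services. Image tags, ports, and the mounted file names (`pipelines.yaml`, `fluent.conf`) are assumptions for a single-node test machine, not a hardened production layout; versions should be pinned and reviewed before real use.

```yaml
# docker-compose.yml — single-node test sketch (assumed versions and ports)
version: "3.8"
services:
  opensearch:
    image: opensearchproject/opensearch:2.11.0
    environment:
      - discovery.type=single-node
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_ADMIN_PASSWORD}
    ports:
      - "9200:9200"

  data-prepper:
    image: opensearchproject/data-prepper:2.6.0
    volumes:
      # Data Prepper reads pipeline definitions from this directory
      - ./pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml
    ports:
      - "2021:2021"   # HTTP source that Fluentd forwards into
    depends_on:
      - opensearch

  fluentd:
    image: fluent/fluentd:v1.16-1
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf
      - ./logs:/var/log/services:ro   # sample service log files
    ports:
      - "24224:24224"  # forward-protocol input for the real-time event feed
    depends_on:
      - data-prepper
```

The only environment variable this sketch requires is `OPENSEARCH_ADMIN_PASSWORD`, which the README deliverable would document alongside any others the final configuration introduces.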
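A Fluentd configuration covering the three source types could be sketched as follows. The paths, ports, tags, and the assumption that the CDC stream arrives as JSON over HTTP are all placeholders; a custom Fluentd image with extra plugins may be needed depending on the actual CDC transport.

```
# fluent.conf — sketch; paths, ports, and tags are assumptions

# 1. Classic log files from several services
<source>
  @type tail
  path /var/log/services/*.log
  pos_file /fluentd/log/services.pos
  tag logs.services
  <parse>
    @type json
  </parse>
</source>

# 2. CDC stream (assumes Debezium-style JSON posted over HTTP)
<source>
  @type http
  port 9880
  tag cdc.db
</source>

# 3. Real-time event feed via the forward protocol
<source>
  @type forward
  port 24224
  tag events.realtime
</source>

# Forward all tagged records to Data Prepper's HTTP source
<match {logs,cdc,events}.**>
  @type http
  endpoint http://data-prepper:2021/log/ingest
  <format>
    @type json
  </format>
  <buffer>
    @type file
    path /fluentd/buffer
    flush_interval 5s
    retry_max_interval 30
    chunk_limit_size 8MB
  </buffer>
</match>
```

The file buffer with bounded chunks and retry backoff is a starting point for the buffer-tuning work mentioned below; the right `flush_interval` and chunk size depend on observed throughput.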
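On the Data Prepper side, a pipeline that normalizes the semi-structured payloads and drops noise might look like the sketch below. The field names (`message`, `level`, `ts`, `free_text`) are hypothetical stand-ins for whatever the real payloads contain, and `insecure: true` is acceptable only on the test machine.

```yaml
# pipelines.yaml — sketch; field names and processor choices are assumptions
log-pipeline:
  source:
    http:
      port: 2021
  processor:
    - parse_json:
        source: "message"          # unwrap the JSON body Fluentd forwards
    - grok:
        match:
          free_text: ["%{GREEDYDATA:details}"]   # hypothetical free-form field
    - drop_events:
        drop_when: '/level == "DEBUG"'           # drop noisy debug records
    - rename_keys:
        entries:
          - from_key: "ts"
            to_key: "@timestamp"                 # normalize the timestamp field
  sink:
    - opensearch:
        hosts: ["https://opensearch:9200"]
        username: "admin"
        password: "${OPENSEARCH_ADMIN_PASSWORD}"
        index: "app-logs-%{yyyy.MM.dd}"
        insecure: true   # test environment only; use proper TLS in production
```

With this in place, the acceptance check reduces to POSTing sample records from each source and querying the daily `app-logs-*` indices for the expected fields.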