I’m putting together a comprehensive backtest focused exclusively on NYSE and NASDAQ stock data spanning the last 10 years. To do this effectively I need a well-structured historical dataset I can drop straight into my models without hours of manual cleanup. Here’s what I’m after:

• Daily (or finer) OHLCV prices, fully adjusted for splits, dividends, and other corporate actions.
• Consistent symbol mapping so delistings, mergers, and ticker changes don’t break the series.
• A single, tidy delivery format: CSV files are fine, but a lightweight SQL or Parquet database also works if you prefer.
• A short README explaining field definitions, the adjustment methodology, and any known data caveats.

If you already have a reliable data pipeline in Python, R, or another language, feel free to leverage it; just make sure the final output is easy to import into pandas. Accuracy and completeness over the full decade matter far more than ultra-low latency. Once I confirm everything loads and reconciles against a spot check of raw exchange data, I’ll sign off on the deliverable.
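For reference, here is the kind of validation pass I’d run on arrival before wiring the data into the backtest. It is a minimal sketch: the column names (`date`, `symbol`, `open`, `high`, `low`, `close`, `adj_close`, `volume`) and the inline sample rows are hypothetical placeholders, not real quotes; the real file would be read with `pd.read_csv(...)` or `pd.read_parquet(...)` per whatever schema the README specifies.

```python
import io
import pandas as pd

# Hypothetical two-row sample standing in for the delivered CSV.
# In practice this would be pd.read_csv("prices.csv") or pd.read_parquet("prices.parquet").
sample = io.StringIO(
    "date,symbol,open,high,low,close,adj_close,volume\n"
    "2015-01-02,AAPL,100.0,102.0,99.0,101.0,100.5,1000\n"
    "2015-01-05,AAPL,101.0,103.5,100.5,103.0,102.4,1500\n"
)

df = pd.read_csv(sample, parse_dates=["date"])

def validate_ohlcv(df: pd.DataFrame) -> pd.DataFrame:
    """Basic sanity checks before backtesting: OHLC ordering,
    non-negative volume, and no duplicate (symbol, date) rows."""
    assert (df["low"] <= df[["open", "close"]].min(axis=1)).all(), "low above open/close"
    assert (df["high"] >= df[["open", "close"]].max(axis=1)).all(), "high below open/close"
    assert (df["volume"] >= 0).all(), "negative volume"
    assert not df.duplicated(subset=["symbol", "date"]).any(), "duplicate symbol-date rows"
    # Return a canonical ordering so downstream code can rely on it.
    return df.sort_values(["symbol", "date"]).reset_index(drop=True)

clean = validate_ohlcv(df)
```

Checks like these catch most delivery problems (inverted OHLC fields, duplicated rows after a ticker change) cheaply, before the slower reconciliation against raw exchange data.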