I have a collection of structured datasets delivered in both JSON and XML files, and I need to turn them into something I can quickly explore. The task is to build a clean, well-commented Python workflow that ingests each format, normalises the fields into a common schema, pushes everything into a pandas DataFrame, and then runs a small set of descriptive statistics: row counts, means, medians, simple groupings, that sort of thing. Once the data are consolidated, I'd like the script to export a tidy CSV and generate a couple of basic visuals with Matplotlib or Seaborn so I can spot trends at a glance. Please keep dependencies lightweight and stick to Python 3.10.

Deliverables
• Single .py file or Jupyter notebook, fully documented
• README with setup and usage steps
• Example output CSV and any generated charts

Acceptance criteria
• JSON and XML samples load without error and produce identical row counts in the final DataFrame
• Statistics and plots reproduce on my machine with the instructions provided
• Code remains easy to extend for new columns or additional structured sources
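To make the expected shape of the workflow concrete, here is a minimal sketch of the ingest-normalise-summarise-export pipeline. The field names (`id`, `category`, `value`), the `<record>` element tag, and the inline sample data are all hypothetical placeholders; a real implementation would read the actual files and map their fields onto whatever common schema the datasets share. Only pandas and the standard library are used, and plotting is left out here since the chart columns depend on the real data.

```python
import json
import xml.etree.ElementTree as ET

import pandas as pd

# Hypothetical samples standing in for the real files; a real script
# would read these with open("data.json") / open("data.xml") instead.
JSON_SAMPLE = """[
  {"id": 1, "category": "a", "value": 10.0},
  {"id": 2, "category": "b", "value": 12.5}
]"""

XML_SAMPLE = """<records>
  <record><id>3</id><category>a</category><value>11.0</value></record>
  <record><id>4</id><category>b</category><value>9.5</value></record>
</records>"""

# Assumed common schema: column name -> type coercion.
COLUMNS = {"id": int, "category": str, "value": float}


def normalise(raw):
    """Coerce one raw record (dict of strings/values) onto the common schema."""
    return {col: cast(raw[col]) for col, cast in COLUMNS.items()}


def load_json(text):
    """Parse a JSON array of objects into schema-conformant records."""
    return [normalise(obj) for obj in json.loads(text)]


def load_xml(text):
    """Parse <record> elements into schema-conformant records."""
    root = ET.fromstring(text)
    return [
        normalise({child.tag: child.text for child in rec})
        for rec in root.iter("record")
    ]


json_records = load_json(JSON_SAMPLE)
xml_records = load_xml(XML_SAMPLE)
df = pd.DataFrame(json_records + xml_records)

# Descriptive statistics: row count, mean/median, a simple grouping.
print("rows:", len(df))
print("mean value:", df["value"].mean())
print("median value:", df["value"].median())
print(df.groupby("category")["value"].agg(["count", "mean"]))

# Tidy export for downstream exploration.
df.to_csv("combined.csv", index=False)
```

A chart could then be added in a couple of lines, e.g. `df.groupby("category")["value"].mean().plot(kind="bar")` followed by `plt.savefig(...)`, keeping Matplotlib as the only extra dependency. Extending to new columns means touching only the `COLUMNS` mapping; adding another structured source means writing one more `load_*` function that returns schema-conformant dicts.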