## Goal

Create a web-based, walkable interior viewer of a building using 3D Gaussian Splatting, generated from my capture data:

- FJD Trion P2 scan outputs (project + point cloud)
- Insta360 X5 video recorded during/around the scan

The viewer must allow users to navigate room-to-room and up stairs, with collision constraints so users cannot walk through major objects (walls, tables, large furniture) and can only move through valid openings (doors/stairs).

Reference quality/behavior to match — the krpano 3DGS example (render quality + web navigation feel):
https://krpano.com/releases/1.23/viewer/krpano.html?xml=examples/3d-gaussian-splatting/shaders-demo.xml&antialias=false

## Inputs I will provide

- Trion scan project file(s): `.fjdslam` (and/or compressed project output)
- Point cloud export(s): `.las` (plus optional `.ply`/`.e57` if needed)
- Insta360 X5 footage: `.insv` and/or exported equirectangular `.mp4`
- Notes on which areas to keep/remove, and which rooms/stairs must be navigable

## Required deliverables

### Processed 3D Gaussian Splat model

- Photorealistic, stable perspective, crisp walls/edges where possible
- Cleaned: people/ghosting removed; unwanted areas removed (outside, mirrors if messy, clutter zones as directed)
- Output format suitable for a web viewer (e.g., `.splat`/`.ply`/`.ksplat` or krpano-compatible, depending on the chosen viewer)

### Collision / navigation system

- A proxy collision mesh derived from LiDAR (LAS) and/or a simplified mesh authored manually (an illustrative LAS-to-mesh sketch follows the requirements below)
- Must block walking through:
  - walls/ceilings/floors
  - tables/counters/large furniture (at least the main ones)
- Must support:
  - stairs traversal and multi-room navigation
  - movement constraints (no "fly mode" unless explicitly allowed)
- Bonus: doorway "funnels" or a nav graph to keep users on valid paths

### Web viewer

Choose one of:

- krpano 1.23+ implementation using its 3D Gaussian Splatting support, OR
- Three.js viewer using a proven 3DGS renderer (e.g., GaussianSplats3D or similar) with collision integration

Note: krpano has native 3DGS support examples. Three.js has well-known web 3DGS viewers; some examples show first-person physics with a collision mesh.

### Hosting-ready output

- A folder I can upload to a web server (or a simple Node static server) that runs the viewer
- Includes all assets + a single entry URL (`index.html` or krpano html/xml)
- Clear instructions: "upload folder → visit URL"

### Documentation + reproducibility

- A short written pipeline: how you went from inputs → final splat + collision mesh + web package
- A list of tools used (e.g., COLMAP / 3DGS training tool / Postshot / custom scripts / CloudCompare / Blender)
- Any scripts you wrote (Python/JS), with instructions on how to run them

## Technical requirements & expectations

- Use LiDAR (LAS) for geometry truth, and video for texture/photorealism
- Remove people/ghosts: use masking/cleaning and/or retraining/refinement
- Keep scale correct (match LiDAR; see the alignment sketch below)
- Optimize for web performance:
  - reasonable load time
  - stable framerate on a modern laptop and recent phones (state which devices you targeted)
- Viewer controls:
  - WASD + mouse look (desktop)
  - touch joystick / drag-look (mobile)
  - optional minimap or hotspot navigation (nice-to-have)
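To make the collision deliverable concrete, below is a minimal sketch of one possible LAS-to-proxy-mesh path, assuming laspy and Open3D. The file names, voxel size, Poisson depth, and triangle budget are placeholders to tune against the actual scan — a possible starting point, not a prescribed implementation.

```python
# Sketch: derive a simplified collision proxy mesh from the Trion LAS export.
# Assumes laspy (2.x) and Open3D; paths and parameters are placeholders.
import laspy
import numpy as np
import open3d as o3d

las = laspy.read("scan.las")  # hypothetical path to the Trion P2 export
points = np.vstack([las.x, las.y, las.z]).T

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Downsample to a coarse grid -- collision geometry needs far less density
# than the visual splat.
pcd = pcd.voxel_down_sample(voxel_size=0.05)  # 5 cm grid
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.25, max_nn=30))

# Closed-ish surface via Poisson reconstruction, then aggressive decimation.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim low-density artifacts: Poisson tends to hallucinate surfaces where
# the point cloud is sparse, e.g. across doorways.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))

mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
o3d.io.write_triangle_mesh("collision_proxy.ply", mesh)
```

A manual cleanup pass in CloudCompare or Blender (both already acceptable tools per the list above) is still expected afterwards — in particular to verify that doorways and the stair run stay open while tables and counters remain solid.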
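Likewise, for the keep-scale-correct requirement: a minimal sketch of a scale/alignment check between splat and LiDAR, assuming the Gaussian centers can be exported as a PLY point cloud and using Open3D's ICP with scale estimation. File names and the initial transform are placeholders.

```python
# Sketch: estimate the splat-to-LiDAR similarity transform (incl. scale).
# Assumes the splat's Gaussian centers were exported as a PLY point cloud.
import numpy as np
import open3d as o3d

splat = o3d.io.read_point_cloud("splat_centers.ply")    # hypothetical export
lidar = o3d.io.read_point_cloud("lidar_reference.ply")  # converted from LAS

init = np.eye(4)  # replace with a rough manual alignment if the clouds start far apart
result = o3d.pipelines.registration.registration_icp(
    splat, lidar,
    max_correspondence_distance=0.10,  # 10 cm; tighten on a second pass
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(
        with_scaling=True))

T = result.transformation
scale = np.cbrt(np.linalg.det(T[:3, :3]))  # uniform scale folded into the rotation block
print(f"fitness={result.fitness:.3f}  rmse={result.inlier_rmse:.3f}  scale={scale:.4f}")
```

A recovered scale close to 1.0 (with low RMSE) would indicate the splat is already metric; any other factor should be baked back into the splat before the collision mesh is authored against it.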
## Acceptance criteria (how I will test)

I can open the web viewer and:

- walk around without falling through floors or walking through walls
- walk up the main stairs and into the key rooms
- tables/walls block movement (no passing through)
- people are removed (no obvious ghost humans)
- unwanted areas are removed (as per my notes)
- visual quality is comparable to the krpano demo's level of stability and clarity (within the limits of my capture)

## Milestones

1. Feasibility + plan (1–2 days)
   - confirm the chosen viewer approach (krpano vs Three.js)
   - confirm what additional captures (if any) would help
2. First splat draft
   - initial splat generated from video/frames, aligned/validated against LiDAR (see the frame-extraction sketch at the end of this brief)
3. Cleaning pass
   - masks, remove people, crop unwanted geometry, improve crispness
4. Collision + navigation
   - collision mesh generation + first-person navigation constraints
5. Final web package
   - optimized assets + deployment instructions
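For the first-splat-draft milestone, here is a minimal frame-extraction sketch, assuming an equirectangular MP4 exported from Insta360 Studio and ffmpeg (with its v360 filter) on PATH. COLMAP and common 3DGS trainers expect perspective (pinhole) images, so each equirectangular frame is reprojected into several pinhole views; the sample rate, FOV, and yaw set below are starting values only.

```python
# Sketch: sample perspective views from the equirectangular export for
# COLMAP / 3DGS training. Assumes ffmpeg with the v360 filter is installed;
# all paths and parameters are placeholders.
import subprocess
from pathlib import Path

SRC = "insta360_equirect.mp4"  # hypothetical Insta360 Studio export
FPS = 2                        # frames per second to sample
YAWS = [0, 90, 180, 270]       # four horizontal look directions per position

for yaw in YAWS:
    out_dir = Path(f"frames/yaw_{yaw:03d}")
    out_dir.mkdir(parents=True, exist_ok=True)
    # fps picks frames; v360 reprojects equirectangular -> 90-degree pinhole view
    vf = (f"fps={FPS},"
          f"v360=input=e:output=flat:h_fov=90:v_fov=90:yaw={yaw}:w=1600:h=1600")
    subprocess.run(
        ["ffmpeg", "-i", SRC,
         "-vf", vf,
         "-qscale:v", "2",  # high-quality JPEG output
         str(out_dir / "frame_%05d.jpg")],
        check=True)
```

The per-yaw folders are only bookkeeping; all extracted frames would be pooled for COLMAP. Frames showing the scan operator or the camera nadir are exactly the ones the cleaning pass (milestone 3) would mask or exclude.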