I have an incoming standard-definition live stream that I need converted, on the fly, to full-HD (1920 × 1080) HDR. The pipeline must rely on an AI upscaler rather than simple interpolation so that fine detail is genuinely recovered. The upscaled picture then has to be tone-mapped or re-graded to HDR10 (PQ transfer) and encoded as H.264 before being pushed straight to my existing custom streaming server.

Here is the workflow I'm trying to achieve:

- real-time ingest of the SD signal;
- AI super-resolution (Real-ESRGAN, SRGAN, Topaz, or any alternative you feel is best) running on a single NVIDIA GPU;
- a colour-space / dynamic-range conversion stage to produce the HDR grade;
- FFmpeg (or an equivalent encoder) producing the H.264 output, with latency low enough for live broadcast.

What I need from you is a turnkey solution: the scripts or container that wire all of this together, guidance on GPU requirements, and concise documentation so we can replicate the setup on other machines. The final acceptance test will be a quick live demo or recorded proof that the pipeline sustains 25/30 fps without dropped frames.
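For reference, the dynamic-range conversion stage ultimately comes down to re-encoding linear light with the SMPTE ST 2084 (PQ) curve that HDR10 specifies. The sketch below shows just that encoding function; the function name is my own, and it is a single-channel illustration under the assumption of absolute luminance in cd/m², not a full colour-management pipeline.

```python
import math

# SMPTE ST 2084 (PQ) constants, as defined in the standard
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_oetf(nits: float) -> float:
    """Map absolute luminance in cd/m2 (0..10000) to a PQ signal value in [0, 1].

    Illustrative name; a real pipeline applies this per channel after
    the inverse-tone-mapping step has produced linear HDR light.
    """
    y = min(max(nits / 10000.0, 0.0), 1.0)  # normalise to the 10 000-nit PQ ceiling
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2
```

As a sanity check, SDR reference white (100 nits) lands near PQ code value 0.508, and the 10 000-nit ceiling maps to exactly 1.0. In an FFmpeg-based pipeline this stage would typically not be hand-rolled: a filter such as `zscale` with `transfer=smpte2084` can perform the transfer-function conversion, with static HDR10 metadata attached at the encoder.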