I need a compact scripting language that lets me control my VRM scene on the Meta Quest 3 using plain-English sentences. The focus is object manipulation (specifically moving objects), so a command such as "Move the blue sphere two metres to the right" must translate directly into the corresponding action inside the headset.

The language should:
• read natural-language movement commands (no keyword memorisation or graphical editors)
• parse them with an AI model or NLP pipeline of your choice
• generate or call the low-level VRM/Unity† methods required to move the targeted object in real time

Deliverables
• Language specification and grammar rules (PDF or Markdown)
• Interpreter or compiler code with clear build/run instructions
• A Meta Quest 3 demo scene showing at least five distinct movement commands working end-to-end
• Sample scripts plus brief user-level documentation

Acceptance checkpoint
If the sentence "Move the red cube three metres forward" correctly animates the cube, we are on the right track. Please keep the architecture modular so we can later extend it to resizing and rotation.

†Unity is named only as a likely underlying engine; if you prefer another runtime that runs natively on Quest 3, propose it.
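To make the acceptance checkpoint concrete, here is a minimal sketch of the kind of intermediate representation the interpreter could target before any engine call is made. This is an illustration only, not part of the brief: the regex-based fallback parser, the vocabulary tables, and field names such as `distance_m` are all hypothetical, and a real implementation would likely put an AI/NLP model in front of (or instead of) this rule layer.

```python
import re
from typing import Optional

# Illustrative vocabulary tables; a real system would use a richer lexicon
# or an NLP model rather than hand-written lists.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
DIRECTIONS = {"forward", "backward", "left", "right", "up", "down"}

# Matches commands like "Move the red cube three metres forward"
# or "Move the blue sphere two metres to the right".
PATTERN = re.compile(
    r"move the (?P<color>\w+) (?P<shape>\w+) "
    r"(?P<amount>[\w.]+) met(?:er|re)s? (?:to the )?(?P<direction>\w+)",
    re.IGNORECASE,
)

def parse_move(sentence: str) -> Optional[dict]:
    """Parse a plain-English move command into a structured action, or None."""
    m = PATTERN.search(sentence)
    if not m:
        return None
    amount = m.group("amount").lower()
    distance = NUMBER_WORDS.get(amount)
    if distance is None:
        try:
            distance = float(amount)  # also accept digits, e.g. "2 metres"
        except ValueError:
            return None
    direction = m.group("direction").lower()
    if direction not in DIRECTIONS:
        return None
    return {
        "action": "move",
        "target": {"color": m.group("color").lower(),
                   "shape": m.group("shape").lower()},
        "distance_m": float(distance),
        "direction": direction,
    }
```

A structured action like this keeps the architecture modular: the Unity (or other runtime) layer only consumes the dictionary, so adding "resize" or "rotate" actions later means extending the parser output, not the engine bindings.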