• Bridging the Sim-to-Real Gap: Architected end-to-end PBR synthetic data pipelines for human-centric ML models, reducing error on real-world data by matching the statistical distribution of real-world sensor noise and lighting.
• Sensor & Lens Fidelity: Engineered a proprietary library for extracting ray-accurate sensor models from CodeV/Zemax, enabling the simulation of cameras through pancake lens systems.
• Neural Ground Truth Generation: Developed hybrid data workflows using 3DGS and NeRFs to reconstruct high-fidelity "digital twins," providing pixel-perfect ground truth for depth, occlusion, and semantic segmentation.
• Scalable Rendering Architecture: Optimized large-scale rendering workflows (OpenCue, custom render farms) to accelerate synthetic dataset throughput, ensuring consistent multimodal metadata across RGB, depth, and IR sensors.
• Research Collaboration: Partnered with research teams to support peer-reviewed submissions in human motion synthesis and generative 3D modeling, with accepted papers at ECCV, WACV, and FG.
• Spatial Computing & Real-Time Rendering: Contributed asset/shader loading and animation components (OpenXR, Vulkan) to the Meta Spatial SDK, and prototyped experimental algorithms for high-fidelity VR headset passthrough.