• Designed and iterated on multi-modal perception models combining 2D object detection and semantic segmentation, progressing to end-to-end transformer-based architectures with direct 3D bounding box prediction.
• Optimized model latency for on-vehicle deployment through architecture redesign, bottleneck profiling (GPU flame graphs), and quantization via TensorRT - balancing detection accuracy against real-time safety-critical inference requirements.
• Helped architect and maintain the internal training framework (a PyTorch Lightning wrapper) - became a go-to engineer for model training issues including data throughput, architecture sizing, and optimization bottlenecks.
• Evaluated and integrated state-of-the-art research into production perception models - benchmarking and adapting novel architectures to meet on-vehicle compute and latency constraints.
• Verified and validated perception models for safe on-vehicle deployment, ensuring compliance with autonomous vehicle safety cases.
• Named inventor on 3 granted U.S. patents spanning scene embedding-based detection, construction zone perception, and collision avoidance trajectory prediction - covering sensing through planning across the full autonomy stack.