Led a team of 20+ engineers in the research and development of advanced computer vision systems for autonomous vehicles.
Designed a deep learning training framework on top of PyTorch to standardize shared libraries for loss functions, detectors, and backbones; integrated MLflow and Weights & Biases for experiment tracking.
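A minimal sketch of the kind of component registry such a framework might use to commonize backbones and losses behind a single config-driven entry point (the names `BACKBONES`, `register`, and `build_backbone` are illustrative assumptions, not the actual framework API):

```python
# Hypothetical component-registry pattern for a training framework.
# Names and signatures are illustrative, not the real internal API.

BACKBONES = {}

def register(name):
    """Decorator that records a backbone class under a string key."""
    def wrap(cls):
        BACKBONES[name] = cls
        return cls
    return wrap

@register("resnet18")
class ResNet18:
    def __init__(self, pretrained=False):
        self.pretrained = pretrained

def build_backbone(cfg):
    """Instantiate a backbone from a config dict, e.g. {"name": "resnet18"}."""
    cfg = dict(cfg)                 # copy so the caller's config is untouched
    name = cfg.pop("name")
    return BACKBONES[name](**cfg)

model = build_backbone({"name": "resnet18", "pretrained": True})
```

The same registry pattern extends naturally to loss functions and detectors, which is what lets every project share one training loop while swapping components via config.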
Led hardware requirements definition and design for the domain ADAS embedded controller for L2+ applications.
Set realistic expectations with upper management on software feature deliveries, and increased management visibility into development activities by deploying Scrum and requirements traceability.
Designed a deep learning-based lane detection system, improving detection accuracy by 35%.
Led sensor fusion algorithm development using JPDAF, extended Kalman filtering, track management, plausibility checks, and existence probabilities.
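A simplified (linear) Kalman predict/update step of the kind underlying such a tracker, sketched for a 1D constant-velocity track; the matrices and noise values here are illustrative placeholders, not the production tuning:

```python
import numpy as np

# Simplified linear Kalman predict/update for a constant-velocity track.
# One building block of a fusion stack, shown as a sketch only.

def kf_predict(x, P, F, Q):
    """Propagate state mean and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Fuse a measurement z into the state estimate."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # sensor observes position only
Q = 0.01 * np.eye(2)                   # process noise (placeholder value)
R = np.array([[0.25]])                 # measurement noise (placeholder value)

x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.12]), H, R)
```

In the full system, the JPDAF layer decides which measurements to associate with which track before this update runs, and the plausibility checks and existence probabilities gate track confirmation and deletion.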
Oversaw the end-to-end deployment of a sensor fusion system, integrating LiDAR, radar, and camera data to enhance environmental perception.
Established repositories and enforced Git flow with regression testing and a Jenkins CI/CD pipeline; integrated unit test frameworks and QAC static analysis to meet production quality standards.
Mentored junior engineers and conducted technical training sessions, improving team skill sets and productivity.
Collaborated with external partners and academic institutions on cutting-edge research projects, resulting in invention disclosures.
Established an interview process with HackerRank coding questions and a screening pipeline to identify highly qualified candidates, building a cross-functional team.
Researched state-of-the-art neural network architectures such as transformers, BERT, and GPT. Trained and deployed 3D object detection models including PointNet, VoxelNet, and segmentation transformers on GPGPUs.
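The core idea behind PointNet-style models is a shared per-point transform followed by a symmetric max-pool, which makes the global feature invariant to point ordering. A toy numpy sketch with random stand-in weights (not a trained model):

```python
import numpy as np

# Sketch of PointNet's key property: shared per-point MLP + symmetric
# max-pooling yields an order-independent global feature.
# Weights are random stand-ins for illustration only.

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 16)), rng.standard_normal(16)

def pointnet_feature(points):
    """points: (N, 3) array -> (16,) permutation-invariant global feature."""
    per_point = np.maximum(points @ W + b, 0.0)   # shared linear layer + ReLU
    return per_point.max(axis=0)                  # symmetric pooling over N

pts = rng.standard_normal((100, 3))
f1 = pointnet_feature(pts)
f2 = pointnet_feature(pts[::-1])                  # same cloud, reversed order
```

Because the pooling is symmetric, `f1` and `f2` are identical, which is what lets the network consume raw, unordered LiDAR point clouds.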