United States
Designed and developed safety and quality monitoring systems for CHEP facilities.
Led the end-to-end design, development, and management of an Ergonomic Risk Assessment System, encompassing proof-of-concept camera design, installation, model development, and module deployment. Introduced a novel REBA scoring methodology integrating multi-view cameras, precisely identifying ergonomic risks in real time.
Enhanced a human object detection model, reducing false positives through a novel background-sampling method. Implemented a log-server reporting script, cutting manual checking time by 5-8 minutes per day and increasing overall productivity.
Optimized production code, increasing throughput via a multi-processing pipeline with ONNX Runtime-optimized inference. Streamlined deep learning inference reporting into a single script, eliminating redundant operations for engineers.
Orlando, Florida, United States
Research Assistant at the Center for Research in Computer Vision, supporting R&D under Professor Abhijit Mahalanobis. Worked primarily on scalable, optimized surveillance systems for outdoor and infrared (depth) imagery, based on instance and action detection and recognition using deep learning.
For Elbit Systems
* Developed multi-scene detection pipelines covering 5 different outdoor scenes for Elbit Systems. Created a 4-stage pipeline using a motion-cue-based filtering and tracking approach, analyzing multiple frames at once to suppress false positives. Reduced false positives by 30% while gaining improvements of 25%, 19%, 18%, and 9% in accuracy, precision, recall, and F1 score, respectively.
* Devised an algorithm to interpolate ground truth for upsampling the training dataset, halving annotation time and reducing the manpower required for the annotation pipeline.
* Developed an autoencoder model with a decoder and classification head to suppress background noise; the approach, however, failed to generalize when ported to the COCO dataset.
For DRS Leonardo
* Developed a training and testing framework for tracking and recognizing object instances (for action recognition) for DRS Leonardo. Used a Faster R-CNN object detection model with a ResNet-50 backbone, pretrained on COCO, to generate training and testing data under constraints, and an I3D network to classify action classes on consecutively tracked object ROIs. Built a data loader to minimize class imbalance.
* Developed deployments using TensorRT and ONNX for TCRNet1 and TCRNet2 for DRS Leonardo.
* Developed object detection and recognition on infrared images using Detectron2.
* Deployed and benchmarked C++ code for a Caffe-converted PyTorch model for detection on infrared images.
Developed a proof of concept for a sports-tech system using a Raspberry Pi and a Sony IMX camera with system-level camera settings control. Architected and completed the first iteration of software integration. In the second-generation iteration, expanded the architecture and directed the pipelines/APIs for server integration and computer vision technology as an iPhone application, leveraging the high-quality camera hardware and CoreML available for development. Communicated specifications with multiple vendors to develop optics, camera hardware, and embedded systems for standalone hardware. Actively managed timelines and resolved technical challenges throughout development.
Phase 1
* Prototyped the first proof-of-concept model, building on a Raspberry Pi with IMX219/IMX298 sensors.
* Designed and architected the technology pipeline and backend integration, and developed hardware requirements.
Phase 2
* Worked closely with the development team on a web-based interface for data acquisition, using a Wi-Fi connection and a Django backend for the computer vision algorithm.
* Extensively tested multiple cameras, camera settings, and off-the-shelf options for the best field of view.
* Directed development toward a phone-based platform for quick migration of pilot users.
Phase 3 (Alpha prototype)
* Compiled technical requirements for backend/hardware and computer vision functionality for the iPhone application (found to be a good fit for the market and the least fragmented device segment providing access to high computation).
* Designed the architecture and required set of APIs, feature architectures, and research for backend/frontend (iPhone) development.
* Developed the algorithm and supported Core ML integration in the iPhone app.
* Directed standups for a team of 6 developers, prioritizing development and course corrections to fit business needs.
* Developed and compiled requirements for the supporting hardware.
2018 — 2019
Bengaluru Area, India
Developed CNN models and architectures for a large-scale Intelligent Video Analytics solution for Smart City applications. Spearheaded organization-wide efforts to adopt Nvidia's DeepStream framework for optimization, leveraging GStreamer and TensorRT.
Part of the diverse, rich ecosystem of one of the pioneers in the cognitive media processing space.
Education
University of Central Florida
Master's degree
TKM College of Engineering, Kollam
Bachelor of Technology (BTech)
Udemy Alumni
Foundation degree
fast.ai