# Abraham Jose

> Computer Vision Research Engineer | Full Lifecycle Development | Rapid R&D Iteration | Unity | Blender

Location: San Francisco, California, United States
Profile: https://flows.cv/abrahamjose

I graduated from the University of Central Florida with a Master's in Computer Science, with research experience in computer vision at UCF's Center for Research in Computer Vision, working on video surveillance and advanced vision processing technologies. Prior to my Master's, I earned a Bachelor of Technology in Electronics & Communication Engineering with a minor in Signal Processing, which proved very useful for understanding and leveraging cross-platform hardware/software engineering projects and for a theoretical grounding in information theory.

Having thought pythonically for the past couple of years, I can sift through, debug, and get familiar with anything in Python faster than ever. I am no stranger to other programming languages; I just haven't had a day-to-day need for them. I am professionally competent with frameworks including PyTorch, TensorFlow, and Keras; model optimization using TensorRT, ONNX, and DeepStream; libraries such as OpenCV, Pandas, NumPy, Pillow, and scikit-learn; tooling such as Bash and Slurm; and collaboration using Git, Slack, and Jira. I have also deployed to multiple embedded hardware platforms, including but not limited to Raspberry Pi, NVIDIA Jetson TX1, TX2, and Jetson Nano, Arduino, and ATmega microcontrollers.

Born a late talker, I have always been keen on everything in visual perception. A heightened sense of visual cognition from childhood fed my curiosity for understanding, passionately pursuing, and experiencing the field of imaging and image understanding. In a nutshell, I would present myself as someone who enjoys the pursuit of craft, and it fills my heart.
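As a flavour of the throughput optimization work described below, here is a minimal sketch of a parallel inference pipeline. The `infer` function is a hypothetical stand-in for an ONNX Runtime call such as `session.run(...)`; everything else (frame format, worker count) is illustrative, not taken from any production system.

```python
from multiprocessing.pool import ThreadPool  # same API as Pool; swap in Pool for separate processes

def infer(frame):
    # Hypothetical stand-in for an ONNX Runtime session call;
    # here it just sums the "pixels" as a dummy prediction.
    return sum(frame)

def run_pipeline(frames, workers=4):
    # Fan frames out across workers and collect predictions in input order.
    with ThreadPool(processes=workers) as pool:
        return pool.map(infer, frames)

if __name__ == "__main__":
    frames = [[i, i + 1, i + 2] for i in range(8)]
    print(run_pipeline(frames))
```

A thread pool is often sufficient here because ONNX Runtime releases the GIL during inference; a process pool trades pickling overhead for full CPU isolation.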
## Work Experience

### Senior AI Software Engineer @ Spot AI
Jan 2024 – Present | United States

### Computer Vision Engineer, Process Engineering @ CHEP
Jan 2021 – Jan 2023

Designed and developed safety and quality monitoring systems for CHEP facilities.

* Led the end-to-end design, development, and management of an Ergonomic Risk Assessment System, encompassing proof-of-concept camera design, installation, model development, and module deployment.
* Introduced a novel REBA scoring methodology by integrating multi-view cameras, precisely identifying ergonomic risks in real time.
* Enhanced a human object detection model, reducing false positives through a novel background-sampling method.
* Implemented a log-server reporting script, cutting manual checking time by 5-8 minutes per day and increasing overall productivity.
* Optimized production code, increasing throughput with a multiprocessing pipeline and ONNX Runtime optimization for inference.
* Streamlined deep learning model inference reporting into a single script, reducing redundant operations for engineers.

### Graduate Research Assistant @ UCF Center for Research in Computer Vision
Jan 2020 – Jan 2021 | Orlando, Florida, United States

Research Assistant at the Center for Research in Computer Vision, supporting R&D under Professor Abhijit Mahalanobis. Primarily worked on R&D of scalable, optimized surveillance systems for outdoor and infrared (depth) imagery, based on instance and action detection and recognition using deep learning.

For Elbit Systems

* Developed pipelines for multi-scene detection across 5 different outdoor scenes. Created a 4-stage, motion-cue-based filtering and tracking pipeline that analyzes multiple frames at once to suppress false positives, reducing them by 30% while gaining 25%, 19%, 18%, and 9% improvements in accuracy, precision, recall, and F1 score respectively.
* Devised an algorithm to interpolate ground truth for upsampling the training dataset, halving the time required for the annotation pipeline.
* Developed an autoencoder model with a decoder and classification head to suppress background noise; the approach, however, failed to transfer to the COCO dataset.

For DRS Leonardo

* Developed a training and testing framework for tracking and recognizing object instances for the purpose of action recognition. An object detection model (Faster R-CNN with a ResNet-50 backbone) pretrained on COCO generated training and testing data under constraints, and an I3D network classified action classes on consecutively tracked object ROIs. Also developed a data loader to minimize class imbalance.
* Developed TensorRT and ONNX deployments for TCRNet1 and TCRNet2.
* Developed object detection and recognition on infrared images using Detectron2.
* Deployed and benchmarked C++ code for a Caffe-converted PyTorch model for detection on infrared images.

### Director Of Technology @ Stealth Startup
Jan 2020 – Jan 2021

Developed a sports-tech system's proof of concept with a Raspberry Pi and a Sony IMX camera with system-level camera settings control. Architected the first iteration of the software integration and completed the system. Expanded on that architecture and directed the pipelines/APIs for server integration and computer vision technology in the second-generation iteration, delivered as an iPhone application leveraging the high-quality camera hardware and Core ML available for development. Communicated specifications with multiple vendors to develop optics, camera hardware, and embedded systems for standalone hardware, and actively managed timelines and technical challenges throughout development.

Phase 1

* Prototyped the first proof-of-concept model, hacked together from a Raspberry Pi and IMX219/IMX298 sensors.
* Designed the architecture of the technology pipeline and the backend integration, and developed hardware requirements.

Phase 2

* Worked closely with the development team on a web-based interface for acquiring data, using a Wi-Fi connection and a Django backend for the computer vision algorithm.
* Extensively tested multiple cameras, camera settings, and off-the-shelf options for the best field of view.
* Directed development toward a phone-based approach for quick migration of pilot users.

Phase 3 (Alpha prototype)

* Compiled technical requirements for backend/hardware and computer vision functionality for the iPhone application (found to be a good intersection with the market and the least fragmented device segment providing access to high computation).
* Designed the architecture and the set of APIs required, the feature architectures, and the research for backend/frontend (iPhone) development.
* Developed the algorithm and support for Core ML integration in the iPhone app.
* Directed standups for a team of 6 developers to tailor development to the business and prioritize work and corrections as required.
* Developed and compiled hardware requirements for the supporting hardware.

### Algorithm Engineer @ Graymatics, Inc
Jan 2018 – Jan 2019 | Bengaluru Area, India

Developed CNN models and architectures for an Intelligent Video Analytics solution for smart-city applications, delivering the technology at large scale. Spearheaded efforts within the organization to adopt NVIDIA's DeepStream framework for optimization, leveraging GStreamer and TensorRT. Part of the diverse, rich ecosystem of one of the pioneers in the cognitive media processing space.

### Advisor - Computer Vision @ Sticheo
Jan 2018 – Jan 2018 | Singapore

Teamed up to provide a seamless multimedia experience for consumers through in-video ad placement, a challenging problem that requires addressing the drawbacks of current vision engines to deliver the best viewing experience.
The solution, if fully functional, will play a crucial role as one of the best marketing strategies for any content. #in_stealth

### Machine Vision Engineer @ Cell Propulsion
Jan 2017 – Jan 2018 | Bengaluru Area, India

Worked on bleeding-edge detection, segmentation, and cognition technology to let the machine understand a sequence of images and produce both deterministic and learning-based inference to drive the motor: detection based on a custom YOLO (You Only Look Once), segmentation through Fast R-CNN, behavioural cloning for enriching feature-behaviour pairs, and end-to-end AI to drive a car on a given track. Visualization and understanding are the backbone of developing ML models.

### Secretary @ Electronics and Communication Association
Jan 2016 – Jan 2017 | TKM College of Engineering

### Chief Executive Officer @ IEDC TKMCE
Jan 2016 – Jan 2017 | Karicode, Kollam

### Trainee @ Airports Authority of India
Jan 2016 – Jan 2016 | Thiruvananthapuram Area, India

Became acquainted with the instruments and technologies used in air traffic surveillance and control, landing, and instrumentation at Trivandrum International Airport, Kerala.

## Education

### Master's degree in Computer Science
University of Central Florida

### Bachelor of Technology (BTech) in Electrical, Electronics and Communications Engineering
TKM College of Engineering, Kollam

### Foundation degree in Machine Learning & Deep Learning: Hands-On Python In Data Science
Udemy Alumni

### Foundation degree in Computer Vision: Deep Learning (CNN) & Cutting Edge Technologies
Udemy Alumni

### Deep Learning For Coders, Parts 1 and 2
fast.ai

## Contact & Social

- LinkedIn: https://linkedin.com/in/abraham-jose
- Portfolio: https://abramjos.dev

---

Source: https://flows.cv/abrahamjose
JSON Resume: https://flows.cv/abrahamjose/resume.json
Last updated: 2026-03-29