San Jose, California, United States
Machine Learning for Medical Robotics:
• Implemented a Mask R-CNN instance segmentation model using ONNX Runtime in C++ to detect and identify fiducial markers in the camera feed of the ARTAS robot, validating model outputs against expected 3D spatial locations and marker IDs to ensure accurate registration.
• Implemented a Faster R-CNN object detection model using ONNX Runtime in C++ to perform real-time detection of existing scalp extraction sites from the camera feed of the ARTAS robot, dynamically blocking those regions to prevent repeat harvesting.
• Leveraged Azure ML Studio to train instance segmentation models for real-time detection and shape identification of the extraction needle.
• Trained convolutional neural networks (CNNs) using TensorFlow to classify post-extraction images and assess hair follicle extraction success.
• Created a custom image labeling tool to annotate and crop thousands of training images.
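The "blocking" logic in the second bullet amounts to rejecting any candidate extraction site that overlaps a previously detected one. A minimal sketch of that idea in Python (the production code is C++ per the bullet; function names, box format, and the 0.3 threshold are illustrative assumptions, not the shipped implementation):

```python
def box_area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

def is_blocked(candidate, prior_sites, threshold=0.3):
    """Reject a candidate site if it overlaps any previously harvested site."""
    return any(iou(candidate, site) >= threshold for site in prior_sites)
```

A candidate box heavily overlapping a prior site is rejected, while a distant one passes, which is the per-frame filter the detector's output would feed.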
User Interface Development:
• Modernized and streamlined the ARTAS UI using WPF and Caliburn.Micro, making workflows more intuitive and reducing opportunities for user error.
• Collaborated with a graphic designer to improve the UI’s visual design and overall user experience.
Graphics & Visualization:
• Upgraded the robot’s graphics pipeline to modern OpenGL (v3.3), enabling support for rendering 3D models, dynamic overlays, and visual treatment previews.
• Developed features such as real-time 3D rendering of hair follicles over live camera feeds to assist in treatment planning.
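Overlaying 3D follicle positions on a live camera feed comes down to projecting camera-frame 3D points onto the image plane before handing them to the OpenGL pipeline. A minimal pinhole-camera sketch in Python (the renderer itself is C++/OpenGL per the bullets; the function name and intrinsic values are illustrative assumptions):

```python
def project_point(pt3d, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto the image plane.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    Returns pixel coordinates, or None for points behind the camera.
    """
    x, y, z = pt3d
    if z <= 0:
        return None  # behind the camera plane; nothing to draw
    return (fx * x / z + cx, fy * y / z + cy)
```

Each projected point becomes a 2D overlay position drawn on top of the video frame, which is the basic mechanism behind rendering follicles over the live feed.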
Touchscreen Interaction & Connectivity:
• Designed and implemented a touch-based algorithm for users to draw treatment areas directly on the UI, improving workflow efficiency.
• Enabled Wi-Fi connectivity using Tik4Net, reducing dependency on wired connections for setup and support.
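A touch-drawn treatment area ultimately reduces to a polygon of touch points plus a hit test deciding which sites fall inside it. A minimal ray-casting point-in-polygon sketch in Python (the shipped algorithm runs in the C#/WPF UI per the bullets; this is a standard textbook technique, not the exact production logic):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: count edge crossings to the right of pt.

    polygon is a list of (x, y) vertices in drawing order; an odd
    crossing count means pt lies inside the drawn treatment area.
    """
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

With the user's touch stroke captured as the vertex list, each candidate site is included or excluded with one call per point.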