• Built and deployed real-time deep learning pipelines for event-based perception and signal analysis, implementing projects such as SpikeYOLOv8 Tracker and Spectrum Analyzer using CNN-, RNN-, and Transformer-inspired architectures for low-latency, on-device inference in resource-constrained environments.
• Developed and extended the TALON SDK to support end-to-end model workflows, including training integration, graph transformation, simulation, profiling, hardware-aware partitioning, and deployment, enabling faster experimentation and productionization of advanced ML models.
• Driving development of neuroscience-inspired AI models optimized for low-power, on-device inference (sub-1 W, sub-millisecond latency) to enable real-time adaptation and decision-making on edge devices such as low-Earth-orbit satellites.
• Implementing and refining high-speed object-tracking solutions using event-based cameras (e.g., Prophesee EVK4), converting asynchronous visual data into actionable input for neuromorphic processors.
• Designing sensor-fusion pipelines with embedded on-device learning loops so devices continuously adapt to new stimuli and environments without cloud dependence.
• Collaborating with hardware and systems teams to align ML architecture with brain-inspired processors, ensuring models meet edge constraints (power, latency, memory).
• Presenting findings and progress to stakeholders (engineering leadership, partners, investors) and helping guide the strategic AI and compute roadmap from model design to deployment.