Pioneered the development of the native floating-point feature of the HDL Coder toolbox from the ground up. This feature enables floating-point HDL generation directly from MATLAB/Simulink, removing the need for manual fixed-point conversion. Key challenges included matching the numeric behavior of MATLAB simulations (US10936769B2), providing vendor-independent arithmetic, trigonometric, and advanced math operations with tunable latency/accuracy trade-offs, and supporting mixed floating-point and fixed-point designs for high-accuracy, area-efficient hardware while preserving IEEE 754 tolerances.
https://www.mathworks.com/help/hdlcoder/ug/native-floating-point-support.html
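A minimal configuration sketch of the flow described above; the model and subsystem names ('mymodel', 'DUT') are illustrative, and the latency value is an arbitrary example of the tunable latency/accuracy knob:

```matlab
% Sketch: enabling native floating-point HDL generation in HDL Coder
% ('mymodel' and 'DUT' are hypothetical names)
nfpconfig = hdlcoder.createFloatingPointTargetConfig('NativeFloatingPoint');

% Tunable latency for a single-precision add/subtract operator
% (example value; trades pipeline depth against achievable clock rate)
nfpconfig.IPConfig.customize('ADDSUB', 'SINGLE', 'Latency', 6);

% Attach the floating-point target configuration and generate HDL;
% float operations map to vendor-independent native implementations
hdlset_param('mymodel', 'FloatingPointTargetConfiguration', nfpconfig);
makehdl('mymodel/DUT');
```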
Spearheaded the development of FPGA-ready HDL code generation from Simscape physical simulation models, covering solver configuration, state-space extraction, implementation-model generation, and RTL validation for real-time HIL deployment.
https://www.mathworks.com/help/physmod/simscape/ug/generate-hdl-code-using-the-simscape-hdl-workflow-advisor.html
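A sketch of the workflow steps listed above; the model name is hypothetical, and the generated implementation-model name follows the toolbox's typical `gmStateSpaceHDL_<model>` convention:

```matlab
% Sketch: Simscape-to-HDL flow ('ssc_model' is a hypothetical model
% configured with a fixed-step local solver, e.g. Backward Euler)
load_system('ssc_model');

% Launch the Simscape HDL Workflow Advisor, which walks through
% solver-compatibility checks, state-space extraction, and
% generation of an HDL-compatible implementation model
sschdladvisor('ssc_model');

% After the advisor tasks complete, generate RTL from the produced
% implementation model for real-time HIL deployment
makehdl('gmStateSpaceHDL_ssc_model/HDL Subsystem');
```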
Accelerated FPGA/SoC deployment of deep learning models using the MathWorks Deep Learning HDL Toolbox to generate custom, synthesizable HDL and optimized IP cores, enabling high-throughput inference, design-performance trade-off analysis, INT8 quantization, and hardware-aware profiling on Xilinx and Intel FPGA platforms.
https://www.mathworks.com/products/deep-learning-hdl.html
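A deployment sketch of the inference flow above; the network, board interface, bitstream name, and input image are illustrative assumptions:

```matlab
% Sketch: deploying a pretrained network with Deep Learning HDL Toolbox
% (network choice, bitstream name, and input are hypothetical examples)
net = resnet18;   % pretrained network; requires its support package

% Target a Xilinx board over Ethernet and pick a prebuilt bitstream
% for the deep learning processor IP core
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...
                    'Target', hTarget);

hW.compile;   % map network layers onto the deep learning processor IP
hW.deploy;    % program the FPGA with the bitstream and weights

% Run inference with hardware-aware profiling enabled
% (inputImg is a hypothetical preprocessed image)
[prediction, speed] = hW.predict(inputImg, 'Profile', 'on');
```

Swapping `'zcu102_single'` for an INT8 bitstream variant, after calibrating the network with the quantization tooling, is how the INT8 design-performance trade-off mentioned above is typically explored.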