•Conducted in-depth research on digital signal processing pipelines and recurrent-neural-network automatic speech-recognition models, building a strong foundation in machine learning for acoustic signal processing.
•Implemented and evaluated targeted and untargeted adversarial attacks—ranging from preprocessing exploits to gradient-based perturbations—to reveal critical vulnerabilities in commercial voice interfaces.
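The gradient-based attacks mentioned above can be illustrated with a minimal fast-gradient-sign (FGSM-style) sketch. The toy linear model, weights, and epsilon below are illustrative assumptions standing in for a real acoustic model, not values from the actual experiments:

```python
import numpy as np

def loss(w, x, y):
    # Squared-error loss of a linear score against the target value.
    return 0.5 * (w @ x - y) ** 2

def fgsm_perturb(w, x, y, eps):
    # Gradient of the loss w.r.t. the input, then a signed step of size eps
    # (the untargeted fast-gradient-sign method).
    grad_x = (w @ x - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # stand-in model weights
x = rng.normal(size=8)   # stand-in input feature frame
y = 1.0                  # target score

x_adv = fgsm_perturb(w, x, y, eps=0.1)
print(loss(w, x, y), loss(w, x_adv, y))  # adversarial loss is higher
```

Against a real ASR model the same signed-gradient step would be applied to spectrogram frames or raw audio samples rather than a dense feature vector.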
•Designed and validated a hybrid LIME/LEMNA explanation framework that surfaces the spectral features DeepSpeech2 relies on, enabling precise diagnosis of model weaknesses.
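A LIME-style local surrogate, the core of the hybrid framework above, can be sketched as follows: mask input "spectral bins", query a black-box score function, and fit a weighted linear model whose coefficients rank feature importance. The black-box function here is a hypothetical stand-in, not DeepSpeech2 itself:

```python
import numpy as np
from sklearn.linear_model import Ridge

def black_box(x):
    # Pretend model: responds strongly to bins 2 and 5 only.
    return 3.0 * x[2] - 2.0 * x[5]

def lime_explain(x, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x)
    masks = rng.integers(0, 2, size=(n_samples, d))   # random on/off masks
    scores = np.array([black_box(x * m) for m in masks])
    # Weight samples by proximity to the unmasked input.
    weights = np.exp(-(d - masks.sum(axis=1)) / d)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, scores, sample_weight=weights)
    return surrogate.coef_                            # per-bin importance

x = np.ones(8)
coefs = lime_explain(x)
top = np.argsort(-np.abs(coefs))[:2]                  # bins 2 and 5 dominate
```

LEMNA differs mainly in the surrogate family (a fused-lasso mixture model suited to sequential inputs); the perturb-then-fit loop is the same.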
•Automated large-scale ablation studies of DeepSpeech2 with a SciPy, Pandas, and scikit-learn toolchain, benchmarking robustness under varied noise, filtering, and perturbation scenarios.
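The ablation grid above can be sketched with the same toolchain: sweep noise levels and an optional filter over a signal, score each condition, and tabulate results with pandas. The score function, grid values, and toy sine signal are illustrative assumptions, not the actual benchmark:

```python
import numpy as np
import pandas as pd
from scipy.signal import medfilt

def score(clean, processed):
    # Toy robustness score: negative mean squared error (higher is better).
    return -float(np.mean((clean - processed) ** 2))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))

rows = []
for noise_std in (0.0, 0.1, 0.3):
    for use_filter in (False, True):
        noisy = clean + rng.normal(0, noise_std, size=clean.shape)
        processed = medfilt(noisy, kernel_size=5) if use_filter else noisy
        rows.append({"noise_std": noise_std,
                     "median_filter": use_filter,
                     "score": score(clean, processed)})

df = pd.DataFrame(rows)
print(df.sort_values("score", ascending=False))
```

Replacing the toy score with a word-error-rate call against the model under test turns this loop into a real ablation harness; the DataFrame then feeds directly into plotting or significance testing.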
•Presented findings through a peer-reviewed poster at the University of Richmond Undergraduate Symposium, translating complex results for academic and industry audiences.