# Shariq Mobin

> UC Berkeley PhD

Location: San Francisco Bay Area, United States
Profile: https://flows.cv/shariq

I love tackling challenging problems across neuroscience, physics, acoustics, and software, especially ones with big social impact. Tackling the low adherence rate of hearing aids will forever hold a special place in my heart, especially because hearing aids have such an impact in reducing cognitive decline and social isolation. I started AudioFocus for this reason, with funding from Y Combinator, Xoogler, and the National Institute on Aging.

## Work Experience

### Forward Deployed Engineer @ Modal

Jan 2025 – Present | San Francisco Bay Area

Working with prospective and existing customers on LLM inference engine optimizations, text-to-speech (TTS), image diffusion models, protein folding (+MSA), synthetic data generation, and more.

### Machine Learning Engineer @ Modal

Jan 2024 – Jan 2025 | San Francisco Bay Area

Developed ML examples for modal.com/examples, including:

- Training an LLM from scratch with PyTorch on Shakespeare text
- Protein folding with ESM3 and Boltz-1
- Browserman Agent

### CEO & Founder @ AudioFocus

Jan 2019 – Jan 2024 | Oakland, California, United States

- Solved the #1 complaint in the hearing industry: difficulty hearing in noisy social situations.
- Raised money from Y Combinator, Xoogler, and the National Institute on Aging (grant).
- Built a team of contractors across hardware, machine learning, acoustics, and audiology.
- Collaborated with Stanford, Johns Hopkins, and University of the Pacific on clinical benefits.
- Designed an acoustic ray-tracing engine that generated tens of gigabytes of highly accurate acoustic training data for our deep learning models, running on 16 AWS GPUs simultaneously.

### Deep Learning Research @ AudioFocus

Jan 2019 – Jan 2024 | Oakland, CA

Developed a novel, patented approach to the cocktail party problem: enhance voices near the patient and suppress those farther away.
- Method: build a synthetic acoustic dataset of voices at different distances and train a deep learning model on it.
- Developed an acoustic ray-tracing engine from scratch that generated binaural room impulse responses (BRIRs) 80% as accurate as real-world measurements, using 16 GPUs on AWS.
- https://www.youtube.com/playlist?list=PLTJbNpfeLPSgiFzJaD6fzFT20L6Qf_ZBc

### Clinical Research @ AudioFocus

Jan 2023 – Jan 2024 | Oakland, California, United States

Clinical research into the benefits of our technology, including increased interest in socializing, noise tolerance, and word recognition in noise.

- Set up an audiology booth for 2-channel testing of QuickSIN and Acceptable Noise Level (ANL) with two researchers.
- Researchers found 2–3x better noise tolerance with our technology.
- Applied for and was awarded an A2 Pilot Grant from the NIA and Johns Hopkins University for our research.
- Collaborated with Dr. Jiong Hu at University of the Pacific and Professor Fitzgerald at Stanford.

### Firmware Engineer @ AudioFocus

Jan 2022 – Jan 2023 | Oakland, California, United States

Deployed our deep learning model onto an embedded board for testing with patients.

- Hardware components: BatAndCat behind-the-ear hearing aid + Variscite embedded board (VAR-SOM-MX8).
- Wrote C++ code for real-time full-duplex audio streaming (mic -> speaker), the short-time Fourier transform (STFT), Wiener filters, resampling, etc.
- Built an audio lab at Circuit Launch for patient testing; refined the design through three iteration cycles.

### Summer 2019 Cohort @ Y Combinator

Jan 2019 – Jan 2019

### Neuroscience PhD @ University of California, Berkeley

Jan 2015 – Jan 2019 | Berkeley, CA

At the Redwood Center for Theoretical Neuroscience I created:

- Robust statistical models of audio signals,
- Neural networks that replicate human auditory attention, and
- Theories of how brain areas use feedback for computation.
### Research Engineer @ Google

Jan 2018 – Jan 2018 | Mountain View, CA

With the Machine Perception team at Google, experimented with running deep learning models in real time on Android phones using a new framework, TFLite.

### Research Engineer @ Google

Jan 2017 – Jan 2017 | Mountain View, CA

With the Magenta team at Google Brain, built Generative Adversarial Networks (GANs) to improve the quality of our RNN-based generative models of music.

### Scientific Researcher @ Redwood Center for Theoretical Neuroscience

Jan 2014 – Jan 2015 | UC Berkeley

- Autonomous learning, information-driven actions, and sensorimotor learning (NeurIPS 2014)
- Dynamic image models with sparse coding and linear dynamical systems

### Software Engineer @ Clustrix, Inc.

Jan 2011 – Jan 2013 | San Francisco Bay Area

Directed development of our distributed query optimizer engine and worked on data recovery.

- Implemented statistical estimates of different SQL operations to improve JOIN performance.
- Led the design and implementation of a project for ensuring data integrity under power failures.

### Intern Software Engineer @ BillFloat

Jan 2010 – Jan 2010

Built a Ruby on Rails support application for editing large amounts of data (SQL, Git, MongoDB, JavaScript, Linux).

## Education

### Doctor of Philosophy (PhD) in Computational Neuroscience

University of California, Berkeley

### Bachelor's degree in EECS

University of California, Berkeley

## Contact & Social

- LinkedIn: https://linkedin.com/in/shariqmobin
- Portfolio: https://www.shariqmobin.com
- GitHub: http://github.com/ShariqM

---

Source: https://flows.cv/shariq
JSON Resume: https://flows.cv/shariq/resume.json
Last updated: 2026-04-10