🎵 Conducting human factors and perception research to improve speech recognition and transcription accuracy across Apple's wearable audio, AR/VR, and iPhone ecosystem.
• Led in-person usability studies with 300+ participants using AirPods and iPhone prototypes, reducing field session disruptions by 20% through standardized data collection protocols.
• Expanded multilingual transcription accuracy evaluation coverage by 12% by designing and running research sessions across 20+ locales, translating participant feedback into actionable insights for AI/ML speech models.
• Increased reliability of training datasets across 10+ regions by bulk-reviewing visual assets and flagging, tracking, and quality-checking issues so that only high-quality data entered the pipeline.
• Partnered with AI/ML data engineers and project managers to troubleshoot study issues, incorporate cross-team feedback, and improve research outcomes.