# Hui Zheng

> Currently working on LLM safety at Google DeepMind. PhD in HCI and Accessibility, with a focus on applying AI/ML in assistive mobile and wearable software applications.

Location: Mountain View, California, United States
Profile: https://flows.cv/huizheng

Skills: LLM, AI Safety, HCI, Accessibility, Wearables, Mobile, IoT, Education, Health, Ubiquitous Computing, Sensors, Machine Learning, Computer Vision

## Work Experience

### Machine Learning Engineer @ Google DeepMind
Jan 2023 – Present | Mountain View, California, United States

Working on LLM safety for Bard/Gemini, especially Gemini safety/fairness and multimodal (Veo and Imagen) safety, spanning evaluation and post-training (SFT, RL).

### Software Engineer @ Google
Jan 2022 – Present

Machine learning engineer working on Bard and Gemini safety, especially LLM safety and RLHF.

### Software Engineer @ Click Therapeutics, Inc.
Jan 2021 – Jan 2022 | New York City Metropolitan Area

### Research Assistant @ George Mason University
Jan 2016 – Jan 2021 | Fairfax, VA, US

**Wearable Life Project for Young Adults with IIDs**
- Designed, developed, and evaluated a wearable and mobile application for young adults with intellectual and developmental disabilities (IIDs) to support coordination and communication between students and their in-class assistants in inclusive education, in a less obtrusive and stigmatizing way. The app delivered timely, class-context intervention notifications, sent automatically or manually by the assistant to the student, with intervention and self-assessment features.
- Followed a User-Centered Design process with the relevant stakeholders (students with IIDs, their assistants, special education experts), including 14 user studies (interviews, focus groups, surveys, usability tests, field studies).
- Collected sensor data (accelerometer, gyroscope, heart rate) from the smartwatch to recognize the student's in-class behavior (future work); applied deep learning (LSTM+CNN) for human activity recognition on the wearable sensor data (accelerometer and gyroscope).
- Developed in Android and SQL; deep learning in PyTorch and Python.
- Published at CHI '18, ASSETS '21, INTERACT '19, and ASSETS '17.

### Research Assistant @ George Mason University
Jan 2016 – Jan 2018 | Fairfax, VA

**Wearable on-task self-monitor for students with ASD** (Jun 2017 – Aug 2017)
- Designed and developed a wearable app that lets students with ASD self-monitor their daily on-task routine on their smartwatch (checking on-/off-task). There are four kinds of self-monitoring tasks (in-class, off-class, weekday, weekend), whose content and repeat pattern support staff or parents can program on the phone. In Android.

**Log parsing for a cognitive intelligence-analysis assistant** (Jun 2018 – Aug 2018)
- Designed and developed a statistical analysis program for the Cogent project (a cognitive assistant for intelligence analysts). Parsed Cogent's XML logs; analyzed hands-on usage of the various operations (total time, frequency, which operations each user used or never used per problem) and the usage and trend line of the Help operations. Funded by IARPA. In Java.

**Mental stress level recognition through wearables** (Apr 2017 – May 2017)
- Analyzed data collected from the Amulet watch: extracted features from the heart-rate sensor and accelerometer, then trained an SVM model to predict stress level. Funded by NSF. In MATLAB.

### Research Internship @ Microsoft
Jan 2019 – Jan 2019 | Redmond, WA

Summer research internship at Microsoft Research, Ability team. Built a website that augments emotion feedback during video calls, assisting people with neurodiversity in self- and other-appraisal of emotion during a video conversation. In React, TypeScript, JavaScript, Node.js, Azure, CSS, HTML, and WebRTC.
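As an illustrative aside, the human activity recognition approach mentioned in the George Mason work above (deep learning with a CNN+LSTM over accelerometer and gyroscope windows, in PyTorch) could be sketched roughly as follows. This is a hypothetical minimal sketch, not the original research code; the channel count, window length, class count, and layer sizes are all assumptions:

```python
# Hypothetical sketch of a CNN+LSTM human-activity-recognition model
# for wearable sensor windows (accelerometer + gyroscope = 6 channels).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=6, n_classes=5, hidden=64):
        super().__init__()
        # 1-D convolutions extract local motion features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models longer-range temporal structure across the window
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2))       # -> (batch, 64, time)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, time, hidden)
        return self.head(out[:, -1])           # classify from last time step

model = CNNLSTM()
window = torch.randn(8, 128, 6)  # 8 windows of 128 samples x 6 sensor axes
logits = model(window)
print(logits.shape)  # torch.Size([8, 5])
```

In practice such a model would be trained with cross-entropy loss on labeled activity windows; the convolution-before-recurrence ordering is a common pattern for sensor time series, since it shortens the sequence the LSTM must summarize.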
### Research Internship @ Intel Labs
Jan 2016 – Jan 2016 | Beijing, China

Parsed background structures (floor, structure, furniture, generic objects) in RGBD video. Integrated a contour map with large-plane boundaries to obtain superpixels, trained a unary classifier using boosting, associated data, and propagated labels using optical-flow cues. Programmed in MATLAB. Contributed to my IROS paper.

### Research Internship @ Institute of Automation, Chinese Academy of Sciences
Jan 2013 – Jan 2013 | Beijing, China

Adapted the demo of the paper "PWP3D" in C++ & CUDA. Used frames similar to the first frame as "anchor" frames to speed up and multithread the PWP3D algorithm for efficient pose tracking.

## Education

### Doctor of Philosophy (PhD) in Information Technology
George Mason University

### Master's degree in Computer Software Engineering
Sun Yat-sen University

### Bachelor's degree in Computer Software Engineering
Sun Yat-sen University

## Contact & Social

- LinkedIn: https://linkedin.com/in/hui-zheng-484841145
- Website: https://sites.google.com/view/hui-zheng/home
- Google Scholar: https://scholar.google.com/citations?user=5DQ7jkwAAAAJ&hl=en

---

Source: https://flows.cv/huizheng
JSON Resume: https://flows.cv/huizheng/resume.json
Last updated: 2026-04-01