# Brian Lu

> Full Stack Engineer | Built AWS-Powered NGO Platform | Next.js, PHP/MySQL, PostgreSQL, MongoDB | MS in Computer Science

Location: United States
Profile: https://flows.cv/brianlu

As a Full Stack Engineer with an MS in Computer Science from Stony Brook University, I build scalable, human-centered systems that solve real-world challenges with empathy and efficiency. My work spans startups, NGOs, and research centers, where I align technical solutions with business goals.

At ReliefAid NGO, I architected AWS infrastructure with Docker to streamline resource coordination for 100+ aid workers. At the Simons Center for Geometry and Physics, I modernized legacy PHP/MySQL systems and used Apache Spark to accelerate research reporting by 50%, backed by robust JUnit tests. A project close to my heart is a VR eye-gaze app for ALS patients, designed for accessible, joyful interaction through direct patient collaboration.

My stack includes React, Next.js, TypeScript, Node.js, Java Spring, PostgreSQL, MongoDB, Firebase, Docker, and AWS, with expertise in REST APIs, CI/CD pipelines, and Unix/Linux automation.

As an American-Taiwanese engineer, I’m passionate about empowering communities through technology. I thrive in collaborative teams, delivering impactful solutions with clear communication and rapid problem-solving. I’m excited to connect with teams building innovative, meaningful tech. Let’s create something extraordinary together!

## Work Experience

### Full Stack Engineer @ ReliefAid
Jan 2024 – Present

Led the development of an operations platform for ReliefAid, an NGO supporting disaster relief and refugee shelters. Designed and deployed a full-stack system using Next.js, MongoDB, and AWS, enabling organizers to coordinate shelter space, supply inventory, and aid distribution in real time. Implemented JWT for secure authentication and live updates, and integrated Google Maps APIs to visualize location-based needs across regions.
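The JWT issue-and-verify flow behind the authentication above can be sketched with nothing but the standard library; this is a minimal illustration using HS256 signing, and the secret, claim names, and TTL below are illustrative stand-ins, not the platform's actual schema.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWTs use URL-safe base64 with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(secret: bytes, user_id: str, ttl_seconds: int = 3600) -> str:
    """Create an HS256-signed JWT carrying a subject and expiry claim."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(
        json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds}).encode()
    )
    signing_input = f"{header}.{payload}".encode()
    signature = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"


def verify_token(secret: bytes, token: str):
    """Return the claims dict if signature and expiry check out, else None."""
    try:
        header, payload, signature = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims.get("exp", 0) < time.time():
        return None
    return claims


secret = b"demo-secret"  # illustrative only; real deployments load this from config
token = issue_token(secret, "coordinator-42")
assert verify_token(secret, token)["sub"] == "coordinator-42"
assert verify_token(b"wrong-secret", token) is None  # tampered or wrong key fails
```

In production this would typically be handled by a maintained library (e.g. `jsonwebtoken` in a Next.js API route) rather than hand-rolled, but the signing and constant-time comparison steps are the same.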
Built the platform to be responsive and accessible, focusing on ease of use for coordinators operating under high-stress conditions. Used containerized deployments and CI/CD pipelines to ensure fast, reliable updates. The platform helped reduce logistical overhead and improve transparency during humanitarian operations.

### Full Stack Developer @ Simons Center for Geometry and Physics - Stony Brook University
Jan 2022 – Present | Stony Brook, New York, United States

I began my journey in web development using PHP and MySQL, helping modernize a legacy academic platform by updating outdated PHP functions and setting up a new server environment. I wrote SQL queries, integrated Google Cloud APIs for user authentication and data retrieval, and automated email notifications to improve communication flows.

As my role evolved, I helped design new systems, including a real-time Q&A platform that allowed thousands of global participants to submit questions during live academic events. This required designing for scalability and reliability, using Redis for in-memory session tracking and MySQL for persistent logging. I implemented security measures such as session management and IP rate limiting to ensure safe usage at scale.

Later, I focused on data infrastructure, building reporting pipelines with Spark and Hadoop to process and transform large datasets for organizational insights. I gained experience in distributed and parallel systems, performance tuning, and efficient resource management.

### Virtual Reality Developer / Researcher @ Stony Brook University
Jan 2022 – Jan 2025 | Stony Brook, New York, United States

As a Test Technician and Developer for the EyeCanDo project, I've been working on an innovative eye-gaze detection system aimed at improving the quality of life and entertainment options for ALS patients with limited mobility. The project initially focused on iPad-based eye-gaze detection using Swift and Xcode.
However, due to limitations in mobile device camera capabilities, we transitioned to the Meta Quest Pro VR headset, which offered superior eye-tracking technology. In this evolving role, I've developed various VR functionalities using Unity, including web browsing, text-to-speech, passthrough, and environment changes. Leveraging the Oculus SDK and prebuilt systems, I streamlined the environment setup process.

Currently, our team is focused on addressing eye-tracking inaccuracies caused by wearing glasses, with a resolution expected in the coming weeks. This ongoing project has not only enhanced my technical skills across iOS and VR development platforms but also given me the opportunity to contribute to a meaningful cause with the potential to significantly improve the lives of ALS patients.

### Test Technician @ Stony Brook University
Jan 2022 – Jan 2023 | Stony Brook, New York, United States

EyeCanDo is a communication platform on Apple devices that enables people to interact with the device using eye gaze and brain-computer interfaces, which makes it especially useful for people with disabilities. It is built on artificial intelligence, machine learning, and augmented reality. The platform was developed as a graduate and Ph.D. project, and during my undergraduate studies I assisted with the development of helpful functions and contributed new ideas.

My job now involves testing the functions and adjusting the parameters for each one. For example, we use ARKit to detect facial expressions, and the reported values can be inaccurate; in such cases I test repeatedly and adjust, with discrepancies ranging from 0.1 to more than 1. The challenging part of this project was that I had never coded in Swift or Xcode before, and it took me a while to learn the syntax.
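The tuning loop described above can be sketched as a smoothing-and-threshold pass over noisy facial-expression readings; the actual project works with ARKit blend-shape values in Swift, so this Python version is only an illustration, and the smoothing factor and trigger threshold are hypothetical stand-ins for the hand-tuned parameters.

```python
def smooth(readings, alpha=0.3):
    """Exponential moving average over raw blend-shape-style values in [0.0, 1.0].

    A low alpha damps single-frame spikes, which is one common way to cope
    with sensor readings that jitter by 0.1 or more between frames.
    """
    smoothed = []
    current = readings[0]  # seed with the first reading
    for value in readings:
        current = alpha * value + (1 - alpha) * current
        smoothed.append(current)
    return smoothed


def triggered(readings, threshold=0.6, alpha=0.3):
    """True if the smoothed signal ever crosses the trigger threshold."""
    return any(v >= threshold for v in smooth(readings, alpha))


# A single noisy spike should not fire the gesture...
assert not triggered([0.1, 0.9, 0.1, 0.1])
# ...but a sustained expression should.
assert triggered([0.1, 0.8, 0.9, 0.9, 0.9, 0.9])
```

Raising `alpha` makes the trigger more responsive but more spike-prone; lowering it trades latency for stability, which is the same trade-off the repeated manual testing navigates.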
### Frontend Web Developer @ Own Interest
Jan 2023 – Jan 2023 | Stony Brook, New York, United States

My team and I developed a website that leverages Amazon S3, AWS, React, and JavaScript to create a system capable of generating videos from scripts. The system uses the ChatGPT API, FFmpeg, and DALL-E to produce content. Users input the theme and topic of their desired script; once generated, the script can be edited to their liking. The process is straightforward: users simply wait for their custom video to be generated.

Understanding that clarity is crucial on a webpage, we emphasized user guidance by incorporating tooltips throughout the site. The tooltips inform users about the rules and best practices for writing their scripts, ensuring a seamless and intuitive experience.

### VR Designer @ Own Interest
Jan 2017 – Jan 2019 | Taichung City, Taiwan

VRChat is a game where players interact in virtual reality (VR) through a headset. Players can customize the avatar they represent and choose the world they want to explore, effectively creating an alternative reality that serves as an outlet for self-expression. The limitless possibilities for personalizing this alternative reality inspired me to explore the programs used to enhance VRChat experiences.

Blender, a tool for modeling, rendering, and motion tracking, and Unity, a cross-platform game engine, became the subjects of my fascination as I studied how to design and modify VRChat avatars and worlds. Many user-uploaded avatars are clunky, so I started by fixing their limbs, faces, and overall physical structure. Character editing and creation require a trained eye for symmetry, and I had just that.
Later, I learned that designing and editing worlds requires a different kind of logic, since it mostly involves blueprinting the features of the world and planning the interactions between an avatar and the objects within it.

## Education

### Master's degree in Computer Science
Stony Brook University
Jan 2024 – Jan 2025

### Bachelor's degree in Computer Science
Stony Brook University
Jan 2021 – Jan 2024

## Contact & Social

- LinkedIn: https://linkedin.com/in/brianlulu

---

Source: https://flows.cv/brianlu
JSON Resume: https://flows.cv/brianlu/resume.json
Last updated: 2026-04-01