As a Full Stack Engineer with an MS in Computer Science from Stony Brook University, I build scalable, human-centered systems that solve real-world challenges with empathy and efficiency. My work spans startups, NGOs, and research centers, where I align technical solutions with business goals.
2024 — Now
Led the development of an operations platform for Relfiad, an NGO supporting disaster relief and refugee shelters. Designed and deployed a full-stack system using Next.js, MongoDB, and AWS, enabling organizers to coordinate shelter space, supply inventory, and aid distribution in real time.
Implemented secure, JWT-based authentication, added live data updates, and integrated the Google Maps API to visualize location-based needs across regions. Built the platform to be responsive and accessible, focusing on ease of use for coordinators operating under high-stress conditions.
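The JWT flow above can be sketched in a few lines. This is a minimal illustration using only Python's standard library, not the platform's actual Next.js middleware; the secret and claim names are placeholders.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; real keys come from environment config


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_token(payload: dict) -> str:
    """Mint a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify_token(token: str):
    """Recompute the signature; return the payload only if it matches."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The point of the sketch is the verify step: any tampering with the header or payload invalidates the signature, which is what lets a stateless API trust the claims inside the token.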
Used containerized deployments and CI/CD pipelines to ensure fast, reliable updates. The platform helped reduce logistical overhead and improve transparency during humanitarian operations.
2022 — Now
Stony Brook, New York, United States
I began my journey in web development using PHP and MySQL, helping modernize a legacy academic platform by updating outdated PHP functions and setting up a new server environment. I wrote SQL queries, integrated Google Cloud APIs for user authentication and data retrieval, and automated email notifications to improve communication flows.
As my role evolved, I helped design new systems, including a real-time Q&A platform that allowed thousands of global participants to submit questions during live academic events. This required designing for scalability and reliability, using Redis for in-memory session tracking and MySQL for persistent logging. I implemented security measures like session management and IP rate limiting to ensure safe usage at scale.
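The IP rate limiting mentioned above can be illustrated with a sliding-window limiter. This is an in-memory stand-in for the Redis-backed version described in the role (a production deployment would keep the timestamps in Redis so all app servers share state); the limit and window values are illustrative.

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject without recording a hit
        q.append(now)
        return True
```

With Redis, the same logic maps naturally onto a sorted set per IP (`ZADD` the timestamp, `ZREMRANGEBYSCORE` to expire old entries, `ZCARD` to count).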
Later, I focused on data infrastructure, building reporting pipelines with Spark and Hadoop to process and transform large datasets into organizational insights. I gained experience with distributed and parallel systems, performance tuning, and efficient resource management.
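The transform stage of such a reporting pipeline is essentially a group-by/aggregate. A pure-Python stand-in for the Spark job's logic (field names are illustrative; the real pipeline expressed the same aggregation over distributed data):

```python
from collections import defaultdict


def summarize(events):
    """Aggregate raw event rows into per-category counts and duration totals."""
    totals = defaultdict(lambda: {"count": 0, "duration": 0.0})
    for event in events:
        bucket = totals[event["category"]]
        bucket["count"] += 1
        bucket["duration"] += event["duration"]
    return dict(totals)
```

In Spark the same shape would be a `groupBy("category").agg(count(...), sum(...))`; the value of the framework is that this aggregation runs in parallel across partitions.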
Stony Brook, New York, United States
As a Test Technician and Developer for the EyeCanDo project, I've been working on an innovative eye gaze detection system aimed at improving the quality of life and entertainment options for ALS patients with limited mobility. The project initially focused on iPad-based eye gaze detection using Swift and Xcode. However, due to limitations in mobile device camera capabilities, we transitioned to the Meta Quest Pro VR headset, which offers superior eye-tracking technology.
In this evolving role, I've developed various VR functionalities using Unity, including web browsing, text-to-speech, passthrough, and environment changes. Leveraging the Oculus SDK and prebuilt systems, I streamlined the environment setup process. Currently, our team is focused on resolving eye-tracking inaccuracies for users who wear glasses, with a fix expected in the coming weeks. This ongoing project has strengthened my technical skills across iOS and VR development platforms, and it has given me the opportunity to contribute to a meaningful cause with the potential to significantly improve the lives of ALS patients.
Stony Brook, New York, United States
"EyeCanDo" is a communication platform for Apple devices that enables people to interact with the device using eye gaze and brain-computer interfaces; it is especially useful for people with disabilities. EyeCanDo is built on artificial intelligence, machine learning, and augmented reality. It was developed as a graduate and Ph.D. project, and during my undergraduate studies I assisted with development by building helper functions and contributing new ideas. My job now involves testing each function and tuning its parameters. For example, we use ARKit to detect facial expressions, and the reported values can be inaccurate; in those cases I retest repeatedly and adjust the parameters, since readings for the same expression can drift by anywhere from 0.1 to more than 1.0. The challenging part of this project was that I had never coded in Swift or used Xcode before, and it took me a while to learn the syntax.
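The parameter-tuning loop described above can be sketched as a simple calibration. This is a stand-in in Python (the actual code is Swift against ARKit's blend-shape coefficients): record repeated readings of the expression at rest, then set the trigger threshold a few standard deviations above the resting mean so sensor jitter does not fire the action. The margin value is illustrative.

```python
from statistics import mean, stdev


def calibrate_threshold(resting_samples, margin: float = 2.0) -> float:
    """Pick a trigger threshold for a noisy facial-expression coefficient.

    Threshold = resting mean + margin * standard deviation, so normal
    jitter around the resting value stays below the trigger point.
    """
    return mean(resting_samples) + margin * stdev(resting_samples)


def is_triggered(value: float, threshold: float) -> bool:
    """Fire the mapped action only when the reading clears the threshold."""
    return value >= threshold
```

The same idea generalizes per user: noisier readings (a larger spread at rest) automatically push the threshold higher.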
Stony Brook, New York, United States
My team and I developed a website that leverages Amazon S3 and other AWS services, React, and JavaScript to generate videos from scripts. The system uses the ChatGPT API, FFmpeg, and DALL-E to produce content. Users enter the theme and topic of their desired script; once generated, the script can be edited to their liking. From there, the process is straightforward: users simply wait for their custom video to be generated.
Understanding that clarity is crucial on a webpage, we emphasized user guidance. To achieve this, we incorporated tooltips throughout the site. These tooltips are designed to inform users about the rules and best practices for creating their scripts, ensuring a seamless and intuitive experience.
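The generation pipeline above follows a simple script-to-images-to-render flow. A sketch of that control flow, with each external service (ChatGPT for the script, DALL-E for images, FFmpeg for rendering) replaced by an injected callable so the orchestration can be shown without live API calls; stage names and the prompt wording are illustrative:

```python
def generate_video(theme, topic, write_script, make_image, render):
    """Orchestrate the script -> images -> video pipeline.

    `write_script`, `make_image`, and `render` stand in for the ChatGPT
    API, DALL-E, and FFmpeg calls used by the production system.
    """
    script = write_script(f"Write a short video script about {topic} ({theme}).")
    # In the real site, the user can edit the script here before rendering.
    images = [make_image(line) for line in script.splitlines() if line.strip()]
    return render(script, images)
```

Keeping the stages as plain functions also made the pipeline easy to test and to swap out (e.g., retrying a single failed image without regenerating the script).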
Education
2024 — 2025
Stony Brook University
Master's degree
2021 — 2024
Stony Brook University
Bachelor's degree