New York, United States
At Pinecone, I worked on a semantic Q&A system that lets users upload PDF files and ask context-aware questions, powered by LangChain and OpenAI GPT-3.5 Turbo. I built the frontend in React with Ant Design, creating components for file upload, chat interaction, and document preview. I implemented backend services with Node.js and Express to handle PDF loading, text chunking, embedding generation, and retrieval via a Pinecone vector database. To reduce response latency, I introduced an in-memory caching layer for document embeddings, which noticeably improved the user experience. I also designed a modular architecture for the pipeline components (split, embed, store, retrieve) to support future extensibility. The tool has applications in legal, academic, and corporate document analysis.
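The split and cache steps above can be sketched as follows; this is a minimal illustration, not the production code, and the function and class names (`chunkText`, `EmbeddingCache`) are my own for this example.

```typescript
/** Split raw text into overlapping chunks for embedding (split step). */
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

/** In-memory cache keyed by chunk, so repeated uploads skip re-embedding. */
class EmbeddingCache {
  private store = new Map<string, number[]>();

  async getOrCompute(
    key: string,
    compute: () => Promise<number[]>
  ): Promise<number[]> {
    const cached = this.store.get(key);
    if (cached) return cached;
    const embedding = await compute();
    this.store.set(key, embedding);
    return embedding;
  }
}
```

The overlap between chunks preserves context across chunk boundaries, which helps retrieval return coherent passages.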
2022 — 2024
New York, United States
Cogram is a full-stack social platform that integrates OpenAI’s DALL·E, letting users generate and share AI-created images from text prompts. I built the frontend with React.js and Tailwind CSS, designing a responsive feed, an image generator page, and a personal post-history dashboard. I implemented authentication with JWT, using React Router for secure login and role-based navigation. On the backend, I helped develop services in Go, using Elasticsearch for fast search indexing across posts and tags. I also implemented token-based requests to the OpenAI image generation API and displayed the results in styled image preview cards. The app was deployed on Google Cloud Platform, with storage offloaded to GCS buckets. This project deepened my experience with AI integration and with building scalable, interactive UIs.
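A token-based request to the image generation endpoint can be sketched as a small builder like the one below; the helper name and the default options are illustrative, though the endpoint path and bearer-token header follow OpenAI's documented image API of that period.

```typescript
interface ImageRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

/** Build an authenticated request payload for the image generation API. */
function buildImageRequest(
  prompt: string,
  apiToken: string,
  n = 1,
  size = "512x512"
): ImageRequest {
  return {
    url: "https://api.openai.com/v1/images/generations",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`, // token-based auth per request
    },
    body: JSON.stringify({ prompt, n, size }),
  };
}
```

Keeping the request construction in one pure function makes it easy to unit-test without hitting the real API.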
I led the frontend team of a 7-person group to build a full-stack platform for neighborhood communities, enabling residents, property managers, and third-party service providers to communicate and transact online. Using React, Ant Design, and React Router, I designed a dynamic role-based interface with secure private routing and custom navigation for different user types. Features included a discussion board with real-time pagination and modals, event scheduling integrated with Moment.js, and payment submission via REST APIs. I architected the global state management to handle user role and authentication, and improved UI responsiveness by optimizing layout rendering. This project provided a full simulation of SaaS-style web architecture and strengthened my leadership in front-end system design.
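The role-based routing rule behind the private routes can be reduced to a pure access check like the sketch below; the role names and route map are illustrative stand-ins for the platform's actual user types and pages.

```typescript
type Role = "resident" | "manager" | "provider";

// Illustrative route-to-role map; the real app wired this into
// React Router private routes.
const routeAccess: Record<string, Role[]> = {
  "/board": ["resident", "manager", "provider"],
  "/events": ["resident", "manager"],
  "/payments": ["resident"],
};

/** Return true if the given role may view the given route. */
function canAccess(role: Role, path: string): boolean {
  const allowed = routeAccess[path];
  return allowed !== undefined && allowed.includes(role);
}
```

Centralizing the rule in one function keeps the navigation menu and the route guards consistent: both consult the same map.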
2020 — 2021
At Zype, I developed a personalized streaming interface modeled after Twitch, supporting dynamic content recommendation and user interaction. Using React.js and Spring Boot, I built a full-stack platform that let users browse live streams, filter by category, and receive recommendations based on watch history. I created reusable UI components, including live carousels, trending banners, and viewer engagement cards. On the backend, I contributed to the API integration for retrieving recommendation results and stream metadata. I wrote unit and integration tests using Jest and React Testing Library, achieving over 90% test coverage for critical components. I also configured CORS policies and optimized cross-origin requests to ensure smooth communication between the frontend and backend. This project sharpened my full-stack development skills and taught me to balance user experience with system performance.
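Recommendation by watch history can be illustrated with a toy category-frequency ranker like the one below; the data shapes and function name are assumptions for this sketch, since the real recommendations came from a backend service.

```typescript
interface Stream {
  id: string;
  category: string;
}

/** Rank candidate streams by how often their category appears in history. */
function recommend(history: Stream[], candidates: Stream[], limit = 3): Stream[] {
  const counts = new Map<string, number>();
  for (const s of history) {
    counts.set(s.category, (counts.get(s.category) ?? 0) + 1);
  }
  return [...candidates]
    .sort(
      (a, b) => (counts.get(b.category) ?? 0) - (counts.get(a.category) ?? 0)
    )
    .slice(0, limit);
}
```

Even this simple ranking surfaces streams from the viewer's most-watched categories first, which is the behavior the UI carousels presented.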
2019 — 2020
Liaoning, China
At NiuTrans, a leading NLP platform in China, I contributed to the development of a web-based tool that facilitates annotation, parsing, and visualization of structured text data. The project aimed to streamline the preparation of high-quality datasets for machine translation, POS tagging, and syntactic analysis. I built the user interface using React.js and Ant Design, and designed components to support multi-format inputs (text, CSV, XML). I developed annotation logic in React with support for inline editing and validation, and used Redux for consistent state handling. I also wrote back-end integration scripts in Python to call NLP services via RESTful APIs and extract structured outputs. This tool enabled linguists and researchers to create curated datasets more efficiently, reducing the manual labeling workload and improving model training quality.
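The annotation validation logic can be sketched as a span checker like the one below; the interface and rule set (spans must be in bounds, non-empty, and non-overlapping) are an illustrative simplification of the tool's actual validation.

```typescript
interface Span {
  start: number; // inclusive character offset
  end: number;   // exclusive character offset
  label: string; // e.g. a POS tag
}

/** Validate that annotation spans are in bounds and do not overlap. */
function validateSpans(text: string, spans: Span[]): boolean {
  const sorted = [...spans].sort((a, b) => a.start - b.start);
  let prevEnd = 0;
  for (const s of sorted) {
    if (s.start < 0 || s.end > text.length || s.start >= s.end) return false;
    if (s.start < prevEnd) return false; // overlaps the previous span
    prevEnd = s.end;
  }
  return true;
}
```

Running a check like this on every inline edit lets the UI reject invalid annotations immediately, before they reach the dataset.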
Education
Columbia University
Master's degree
UC Irvine