Experience
2024 — Now
San Francisco, CA
Tech lead for LangSmith Deployment (formerly LangGraph Platform).
Responsible for control plane (API, orchestration), data plane (agent runtime), infrastructure (cloud, K8s), billing (Metronome), and operations (user support, on-call).
2023 — 2024
San Francisco, CA
Member of FF5. Explored sales, supply chain tech, and LLMs.
2021 — 2023
San Francisco, CA
First data engineer hire at CartaX.
Built CartaX's data platform from scratch. Configured AWS infrastructure (e.g. VPC, IAM) and services (e.g. MWAA, Glue, Kinesis) via Terraform. Set up CI/CD pipelines in GitLab. Implemented the first batch and streaming pipelines to move CartaX's core data into Redshift. Configured Looker for reporting and data visualization. Handled operational responsibilities (on-call and user support) for all data infrastructure.
2018 — 2021
Los Gatos, CA
Owner of Netflix's Keystone Data Pipeline. Core developer for Netflix's real-time data infrastructure ecosystem (Keystone, Data Mesh, Stream Processing-as-a-Service, Messaging-as-a-Service, Schema Registry).
Handled operational responsibilities (on-call and user support) for Apache Kafka and Apache Flink deployments (1000+ Kafka brokers, 10000+ Flink containers).
Owner of Data Mesh, Netflix's next-generation data pipeline product for change-data-capture use cases.
Designed and implemented the initial MVP of Data Mesh and delivered a beta release to production for initial use cases. Drove all aspects of Data Mesh, including product vision, project management, technical architecture, UI/UX design, infrastructure operations, and user support.
Developed key features in Keystone to facilitate the migration of Netflix's data warehouse to Apache Iceberg. Assisted with migration operations and completed the migration of 1500+ Apache Flink jobs.
2018 — 2018
Los Angeles, CA
First data engineer hire at Honey. Promoted to Senior Data Engineer after 1 year.
Designed Honey's core product catalog with batch and streaming ETL pipelines written in Scala/Java using Scio and Apache Beam frameworks.
Worked cross-functionally across engineering teams to build product catalog features for real-time price change notifications and product search/recommendations.
Spent 5 months answering questions about Apache Beam and Google Cloud Dataflow on Stack Overflow, becoming the top answerer for the apache-beam and google-cloud-dataflow tags.
Education
UCLA