San Francisco, California, United States
Directed integration efforts for major AI model hosts such as Gemini and Vertex Foundation Models by writing RPC services, protos, and complex Material components, filing bug tickets, and kicking off builds for SDLC AI agents
Optimized AI Safety report workflows through self-initiated features, including host-agnostic REST model response extraction and field autocomplete
Conducted several agentic app-compliance crawls and wrote backend services for classifying LLM responses
Shaped core functionality such as app context extraction and improved error handling through actionable feedback
Organized bug bashes, triaged issues, delivered bug fixes, flagged early issues, reduced technical debt, and authored comprehensive unit and a11y tests
Launched an improved code compliance landing page (Inbox Mini) with complex Angular routing and advanced sorting
2021 — 2023
San Francisco Bay Area
Engineered end-to-end test infrastructure and environment checks in Python on Jenkins for mission-critical tests for our biggest customers (e.g. Ultra 64-CKU tests) and for nightlies used in releases
Developed from inception metrics collection (Prometheus), data pipelines (BigQuery), and dashboards (Metabase/Tableau) used by multiple teams to track product performance over time across many configurations. Served as the point of contact, owning the data and adapting it for other platforms and use cases such as dynamic baseline tracking
Stress-tested product performance at high-scale configurations (API keys, user accounts, service accounts, environments, etc.) to probe unexplored limits, exposing bugs and providing insight into product limitations
Mentored engineers in software engineering best practices, command-line usage, and Confluent-specific tips, and documented common issues/bugs and how to resolve them
Supported operations and reduced technical debt through upkeep such as building infrastructure for M1 MacBook compatibility and migrating artifacts from JFrog to ECR
Won the company-wide Ship It award for drastically reducing Semaphore CI times by speeding up slow checks and parallelizing others; also participated in other hackathons
2020 — 2020
San Francisco Bay Area
Added infrastructure to the internal framework used for running benchmarks on Confluent’s flagship products in the cloud, allowing tests to run for the on-prem product (Confluent Platform) instead of only the cloud product (Confluent Cloud)
Improved ease of comparing performance differences between the on-prem and cloud products, since tests can be easily run for both products using the same configurations and hardware
Enabled use of cloud advantages over local runs, such as a standard environment, faster compute, automation, and less strain on personal hardware
Exported metrics for Prometheus monitoring, also enabling side-by-side comparisons between baseline and current benchmark results for on-prem releases via files exported to AWS S3 and within Jenkins
Improved a separate tool used to start Confluent on-prem releases in the cloud, adding more robust Terraform version checking and ensuring public DNS addresses were generated for custom security groups/subnets
Received and accepted a full-time SWE return offer
2020 — 2020
New York, New York, United States
Contributed to Tracez for the OpenTelemetry (OTel, an open-source tracing and metrics standard) C++ repository; zPages are in-process web pages useful for debugging that offer unique span-sampling advantages and require no external tracing system or database overhead
Designed and wrote a thread-safe distributed backend component used to collect/propagate span data by interfacing with the OTel API
Planned and implemented the API and frontend component that emits aggregations in JSON and serves files
Innovated how zPages/Tracez display aggregations, minimizing rendering by separating the data and UI
Contributed to the OTel blog by writing about zPages, creating diagrams, and organizing other interns to detail findings across all the zPages projects. Also spearheaded preliminary details for the zPages experimental specification
Created unit and benchmark tests using GTest fixtures, CPU timers, and friend classes
Received a full-time SWE return offer
Ann Arbor, Michigan
Full-stack development at University of Michigan's Center for Academic Innovation on Atlas, which provides course analytics and insights to promote transparency and accountability in class quality and to inform student decisions; used by ~40,000 unique users in 50+ countries
Refactored hundreds of lines of Django templating code into Vue for production use, including converting grade-distribution and student-standing plots to Plotly graphs that better utilize whitespace
Debugged long-standing issues with missing and inaccurately sorted data for edge cases
Education
University of Michigan
BS
East Kentwood High School