Software Engineer specializing in backend and data-driven development.
2026 — Present
New York, New York, United States
2024 — 2025
New York, New York, United States
Built TypeScript backends with GraphQL APIs for blockchain applications, including an NFT Farcaster client and today.fun's token distribution platform, integrating with EVM smart contracts and optimizing query performance.
Engineered real-time blockchain indexing pipelines using Kafka and Debezium CDC, processing on-chain events with sub-second latency.
Architected and built a real-time AI agent system with multi-provider inference handlers (OpenAI, Anthropic, LiteLLM), featuring streaming Server-Sent Events, tool orchestration, and automatic context compression for mobile-optimized content feeds.
Implemented end-to-end observability with OpenTelemetry tracing, Prometheus metrics, and Grafana dashboards; built custom LiteLLM callbacks for AI inference monitoring across multiple providers.
Built a FastAPI backend with async PostgreSQL (SQLModel/SQLAlchemy), Redis-backed conversation persistence, and Temporal for background refresh jobs and streams.
2022 — 2024
New York City Metropolitan Area
Developed a Snapshot API in Go that accepts any ticker (crypto, indices, stocks) as a parameter and retrieves data from Redis, populated by real-time Kafka streams.
Onboarded the entire indices market type, including all indices market and reference data; updated and added new APIs to support this market, and ingested new data sources (in Rust) serving index data.
Helped implement and optimize many parts of our crypto ingestion and backfill pipelines, significantly improving developer experience.
Worked with the Stripe API to create prototypes and metrics for new billing systems based on API usage levels.
Worked with Kubernetes for deployments and Prometheus, creating real-time analytics, metrics, alerting, and dashboards for our services.
2019 — 2022
San Francisco Bay Area
Worked as a Software Engineer on the Marcus Apple Card Data Team.
Worked with large data sets daily, creating new ETL pipelines and workflows using tools such as Airflow, PySpark, and AWS.
Worked directly with multiple partners to design, build, and deploy ETL solutions that improved both business efficiency and productivity.
Jersey City, New Jersey
Implemented a REST service for on-demand traceroute processing using Java and Spring.
Created multiple REST endpoints for submitting traceroute requests and saving data to Elasticsearch.
The application alerts the user via RabbitMQ when the response becomes available after a request is submitted.
It also checks Elasticsearch for a recent response before submitting a new request.
Created a Kafka consumer to continuously process traceroute responses from Kafka Streams.
The consumer pushes responses to Elasticsearch through the service above and sends metrics derived from the data to an internal tool.
Education
University of Washington