Energetic Staff Software Engineer with over ten years' experience designing and implementing scalable, maintainable software systems with low latency and high throughput.
Experience
2023 — Now
San Francisco Bay Area
Leads scaling and latency-reduction work on Zendesk's core APIs, which receive up to a billion requests per day, to improve customer experience and reduce cloud costs.
• Identifies system bottlenecks and develops solutions using Datadog Application Performance Monitoring (APM), improving API response times and customer experience while reducing AWS costs
• Reduced core APIs' p99 latency by 40% and average latency by 30% through extensive code refactoring, query optimization, and caching
• Designs and implements a stream-processing system for asynchronous ticket updates using Kafka Streams
• Improves query execution times by restructuring queries and introducing new indexes in Amazon Aurora; protected the primary database and achieved faster responses by moving data fetching to Elasticsearch
• Re-architected the data retrieval layer by transitioning from MySQL to Elasticsearch, optimizing indexing and query logic to support real-time analytics with lower latency and improved fault tolerance
• Ensures code releases are extensively tested and guarded by feature flags to prevent service disruption and performance regressions; quantifies the impact of changes through performance metrics analysis
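To illustrate the caching technique behind the latency reductions above, here is a minimal read-through cache with TTL expiry. This is a hypothetical sketch, not Zendesk's implementation; the class, names, and TTL are assumptions, and a production setup would typically sit in front of Memcached or Redis rather than a local dict:

```python
import time

class ReadThroughCache:
    """Minimal in-process read-through cache with TTL expiry.

    Illustrative only: stands in for a Memcached/Redis-backed cache
    placed in front of a slow backing store such as a database.
    """

    def __init__(self, loader, ttl_seconds=60):
        self._loader = loader          # fallback fetch, e.g. a DB query
        self._ttl = ttl_seconds
        self._store = {}               # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]            # cache hit: skip the slow path
        value = self._loader(key)      # cache miss: hit the backing store
        self._store[key] = (time.monotonic() + self._ttl, value)
        return value

# Usage: count how often the slow path actually runs.
calls = []
cache = ReadThroughCache(loader=lambda k: calls.append(k) or k.upper())
cache.get("ticket:1")   # miss -> loader runs
cache.get("ticket:1")   # hit  -> served from memory
```

Serving repeated reads from memory is one common way p99 latency drops sharply: the tail is dominated by backing-store round trips, and the cache removes most of them.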
2021 — 2023
San Francisco, California, United States
Improved scaling and reduced latency of Zendesk's core APIs, which receive up to a billion requests per day, improving customer experience and reducing cloud cost.
• Created Datadog dashboards for performance monitoring, system SLAs, and error-budget notifications
2020 — 2021
San Francisco Bay Area
Developed a system to analyze, monitor, and predict population-level knowledge, implemented in Python, orchestrated with Kubernetes and Docker, and hosted on AWS.
• Implemented a data pipeline using Python and SQS to ingest, transform, and store information in S3 and MongoDB; integrated machine learning models into the pipeline to generate predictions
• Designed scalable, low-latency, high-throughput APIs using efficient algorithms and architectural patterns, with Python, MongoDB, and Memcached as the data layer
• Created a single-page web app using React, Redux, Mapbox, and D3.js for information visualization and charting
• Managed Docker containers and Kubernetes container orchestration
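The ingest-transform-store pipeline above can be sketched in miniature. This is a toy illustration under stated assumptions: the queue and sink are stubbed in memory (in the real system these were SQS, S3, and MongoDB), and all function and field names are hypothetical:

```python
import json
from collections import deque

def transform(raw: str) -> dict:
    """Parse a raw message and normalize it for storage."""
    record = json.loads(raw)
    return {"id": record["id"], "score": float(record.get("score", 0.0))}

def run_pipeline(queue: deque, sink: list) -> int:
    """Drain the queue, transforming each message into the sink.

    Stands in for an SQS consumer loop writing to S3/MongoDB.
    Returns the number of records processed.
    """
    processed = 0
    while queue:
        raw = queue.popleft()
        sink.append(transform(raw))
        processed += 1
    return processed

# Usage with an in-memory deque standing in for SQS.
q = deque(['{"id": "a", "score": "1.5"}', '{"id": "b"}'])
out = []
n = run_pipeline(q, out)
```

Keeping `transform` a pure function is what makes it easy to slot a machine-learning prediction step into the same loop, as the bullet above describes.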
2019 — 2020
San Francisco Bay Area
Extended and improved the existing fulfillment system; led the implementation and release of a platform to launch and manage new product lines of customizable inventory.
• Extended and improved the fulfillment system by implementing data pipelines and jobs, using Python, AWS SQS, and MySQL, that scaled horizontally during peak load
• Led a team of 5 engineers to architect, implement, and release a platform for launching new product lines using Python, Flask, and MySQL
• Maintained and improved RESTful APIs for third-party order-fulfillment integrations
• Collaborated with product managers to deliver projects with measurable business results
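Jobs that scale horizontally over an at-least-once queue like SQS must tolerate duplicate deliveries. A minimal sketch of one common dedupe pattern (hypothetical names; the shared set below stands in for something durable such as a MySQL unique key, and is not the original implementation):

```python
def process_order(order_id: str, processed: set, ledger: list) -> bool:
    """Process an order exactly once even if the message is redelivered.

    SQS delivers at-least-once, so horizontally scaled workers see
    duplicates; a shared processed-set (a stand-in for a MySQL
    unique-key insert) makes the side effect idempotent.
    Returns True if the order was processed, False if skipped.
    """
    if order_id in processed:
        return False            # duplicate delivery: skip side effects
    processed.add(order_id)
    ledger.append(order_id)     # the actual fulfillment side effect
    return True

# Usage: a redelivered message is a no-op.
seen, ledger = set(), []
process_order("order-42", seen, ledger)
process_order("order-42", seen, ledger)   # duplicate delivery
```

Idempotency is what lets such workers be added freely during peak load: any worker can safely receive any message, including one another worker already handled.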
2017 — 2019
San Francisco
Built, from the ground up, a cloud platform for managing the workflow of incidental findings in radiology reports identified by natural language processing and machine learning algorithms.
• One of the first developers on the SaaS product, designing and implementing RESTful APIs and background processes using .NET Core and C# on Azure.
• Implemented messaging queues using Apache Kafka for high throughput, reliability, and fault tolerance.
• Used test-driven development (TDD) to ensure a high level of software quality.
• Worked closely with the product owner, business analysts, and team members using agile methodology.
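The Kafka queues mentioned above get their throughput and per-key ordering from key-based partitioning: every message with the same key lands on the same partition. A toy sketch of that routing decision (using a simple stable hash; Kafka's own partitioner hashes with murmur2, and these names are illustrative):

```python
import zlib

def choose_partition(key: str, num_partitions: int) -> int:
    """Map a message key to a partition deterministically.

    Deterministic hashing means all messages for one key go to the
    same partition, preserving per-key ordering while spreading
    unrelated keys across partitions for parallel consumption.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Usage: the same report always routes to the same partition.
p1 = choose_partition("report-123", 8)
p2 = choose_partition("report-123", 8)
```

This is why adding partitions raises throughput without breaking ordering guarantees for any individual key.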
Education
Utah State University