I am an aspiring computer scientist who enjoys developing my technical skills by working on research projects in various interdisciplinary computing fields. I am skilled in object-oriented programming, data visualization, machine learning, and mathematical analysis.
Automated onboarding of third-party (3P) applications and their generated services onto Google Web Service (GWS) logging via code generation of request and resource Protocol Buffer (proto) definitions.
Built a CLI tool in Bash for Google’s 3P integrators to run the split code-generation Java binaries.
Enabled generation of proto files for new services and APIs, as well as regeneration of proto files for existing services, by parsing Swagger and OpenAPI specification files with newly written proto-generator and visitor-configuration modules.
Streamlined conversion between wire and storage proto definitions by automating generation of helper process tickets from custom templates using internal APIs, each providing service-specific guidance on creating the mappings.
Reduced onboarding time for new services by over 85% and onboarded two production services, Salesforce and ServiceNow.
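The onboarding flow above — parsing an API specification and emitting proto definitions — can be sketched roughly as follows. This is a minimal illustration under assumed naming rules and a tiny subset of OpenAPI types, not the internal GWS tooling.

```python
def snake_to_pascal(name: str) -> str:
    """Convert a schema name like 'service_ticket' to 'ServiceTicket'."""
    return "".join(part.capitalize() for part in name.split("_"))

# Small illustrative subset of OpenAPI scalar types -> proto3 types.
OPENAPI_TO_PROTO = {
    "string": "string",
    "integer": "int64",
    "number": "double",
    "boolean": "bool",
}

def schema_to_proto(name: str, schema: dict) -> str:
    """Emit a proto3 message definition for one OpenAPI object schema."""
    lines = [f"message {snake_to_pascal(name)} {{"]
    for i, (field, spec) in enumerate(schema.get("properties", {}).items(), start=1):
        proto_type = OPENAPI_TO_PROTO.get(spec.get("type", "string"), "string")
        lines.append(f"  {proto_type} {field} = {i};")
    lines.append("}")
    return "\n".join(lines)
```

For example, a `ticket` schema with a `title` string and a `count` integer would produce a `Ticket` message with `string title = 1;` and `int64 count = 2;`.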
Spearheaded a new project to create a native Rockset data connector for Amazon Managed Streaming for Apache Kafka (MSK), making it simpler and faster to ingest Apache Kafka streaming data for real-time analytics.
Updated the Rockset API authentication process to connect to a customer’s MSK Kafka cluster using an AWS cross-account Identity and Access Management (IAM) role with the Amazon MSK Library for AWS IAM.
Built a load-testing framework that sends more than 33 MB/s of randomly generated data to a Kafka cluster, and monitored ingestion in Rockset by tracking maximum throughput and ingest latency on a Grafana dashboard via Prometheus metrics.
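The payload-generation side of such a load test can be sketched as below: build random JSON records of a chosen size and compute the per-second record rate needed to sustain the 33 MB/s target. The record shape and sizes are assumptions for illustration; the actual Kafka producer loop is omitted.

```python
import json
import random
import string

TARGET_BYTES_PER_SEC = 33 * 1024 * 1024  # ~33 MB/s target load

def random_record(payload_size: int = 1024) -> bytes:
    """Build one randomly generated JSON record of roughly `payload_size` bytes."""
    body = "".join(random.choices(string.ascii_letters, k=payload_size))
    return json.dumps({"id": random.getrandbits(63), "data": body}).encode()

def records_per_second(payload_size: int = 1024) -> int:
    """Records of this size needed per second to sustain the target byte rate."""
    return TARGET_BYTES_PER_SEC // len(random_record(payload_size))
```

A producer thread would then call `random_record()` at roughly `records_per_second()` per second and send each record to the Kafka topic under test.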
Redesigned the Rockset console in React, adding components for creating a Rockset MSK integration and collection that connect and authenticate with a customer’s Kafka cluster.
Enabled the use of dynamic executors on Fitbit internal Jenkins servers by updating the Google Compute Engine Jenkins plugin to allow users to optionally choose a custom SSH key pair to authenticate with Jenkins executors.
Published the updated plugin so its 1,000+ existing users and future GCP users can use the feature.
Automated building of Google Cloud Platform (GCP) and Jenkins resources with IaC tools, using Packer to build a disk image for a Jenkins executor and Terraform to provision GCP resources.
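An image-then-provision pipeline like this can be driven by a small wrapper that runs Packer before Terraform, as sketched below. The template file name is a placeholder; the CLI commands (`packer build`, `terraform init`, `terraform apply -auto-approve`) are standard.

```python
import subprocess

def iac_commands(image_template: str = "jenkins-executor.pkr.hcl") -> list:
    """Ordered IaC steps: bake the executor disk image, then provision GCP."""
    return [
        ["packer", "build", image_template],
        ["terraform", "init"],
        ["terraform", "apply", "-auto-approve"],
    ]

def run_pipeline() -> None:
    """Run each step in order, failing fast if any step errors."""
    for cmd in iac_commands():
        subprocess.run(cmd, check=True)
```

Ordering matters here: Terraform provisions instances from the image Packer just baked, so the apply step must come last.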
Automated the configuration of the GCP plugin on Fitbit Jenkins servers on every new image build using Groovy.
Developed Python scripts that can deploy any enterprise Sandia application to a client’s chosen environments: they kick off a CD pipeline in Sandia’s CI/CD pipeline wrapper, make a REST API call to fetch the latest build of the selected application, and trigger its deployment in a PaaS environment.
Automated this process as a DAG in Apache Airflow, using Git submodules, to schedule a daily automated deployment of any enterprise application, and covered all features with mock-based test automation.
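The "fetch latest build, then trigger deployment" step can be sketched as two small helpers. The response fields (`status`, `number`) and the URL layout are hypothetical stand-ins, not Sandia's actual API.

```python
def latest_build(builds: list) -> dict:
    """Pick the most recent successful build from a (hypothetical) API response."""
    ok = [b for b in builds if b.get("status") == "SUCCESS"]
    if not ok:
        raise ValueError("no successful builds available to deploy")
    return max(ok, key=lambda b: b["number"])

def deploy_url(base: str, app: str, build: dict, env: str) -> str:
    """Construct an illustrative deployment-trigger URL for the chosen environment."""
    return f"{base}/apps/{app}/builds/{build['number']}/deploy?env={env}"
```

In the real scripts, a POST to the resulting URL would kick off the deployment; filtering to successful builds first prevents redeploying a broken artifact.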
Built an automated data test as an Airflow DAG to check the number of archive invalidations accumulating in a Matomo development database; when the count breached a set threshold, the DAG sent an automated message to a Mattermost channel via webhook and emailed the DataX team directly from Apache Airflow.
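The threshold check at the heart of that DAG can be sketched as a pure function: return an alert message only when the invalidation count breaches the limit. The threshold value and message wording are illustrative; in the real DAG the result would be posted to Mattermost via webhook and emailed from Airflow.

```python
from typing import Optional

def invalidation_alert(count: int, threshold: int = 1000) -> Optional[str]:
    """Return an alert message if archive invalidations breach the threshold,
    or None when the count is within bounds (threshold value is illustrative)."""
    if count <= threshold:
        return None
    return (f"Matomo archive invalidations at {count} "
            f"(threshold {threshold}): investigate backlog.")
```

Keeping the check pure makes it easy to unit-test separately from the notification plumbing, which is handled by Airflow operators.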