I’m a Senior Software Engineer with 7+ years of real-world experience working across Cloud, DevOps, Data Engineering, Backend Development, and AI/GenAI platforms.
Experience
I design and automate large-scale AI/ML platforms across Azure and GCP, deploying training and inference workloads using AKS, GKE, Docker, Helm, and Terraform to support high-availability enterprise systems. I build end-to-end CI/CD pipelines using GitHub Actions, Jenkins, and Azure DevOps, integrating automated testing and DevSecOps validation to reduce deployment risks and accelerate release cycles. I also develop Python- and SQL-based data workflows integrated with Vertex AI, Azure ML, and cloud storage layers to streamline retraining, model deployment, and pipeline reliability.
To maintain production stability, I implement deep observability using Prometheus, Grafana, CloudWatch, and Stackdriver, building SLO-driven dashboards that cut MTTR by more than 50% and surface incidents before they escalate into outages. I work closely with data engineering, SRE, and application teams to automate infrastructure provisioning, enforce IAM/RBAC controls, and ensure compliance for mission-critical cloud and AI workloads operating at scale.
2023 — 2024
I built and managed cloud-native data and AI platforms on AWS, implementing pipelines and services that powered real-time personalization and credit-risk decisions. I designed scalable ML workflows using SageMaker, Lambda, Step Functions, DynamoDB, API Gateway, and automated infrastructure with Terraform to maintain consistent, secure environments. I also built ETL/ELT data pipelines using Python, Spark, and batch/streaming jobs, enabling event-driven retraining and improving prediction accuracy and latency across business-critical systems. Using EKS, Docker, and Helm, I optimized GPU-based inference workloads to handle millions of requests daily with predictable performance.
To strengthen reliability, I implemented full observability across logs, metrics, and traces using Grafana, CloudWatch, and OpenSearch (ELK), improving issue detection and reducing MTTR. I helped build CI/CD pipelines with GitHub Actions, Jenkins, and AWS CodePipeline, incorporating automated tests and deployment safeguards for high-volume production environments. I collaborated closely with platform, data, and ML teams to improve throughput, enhance security using IAM, Secrets Manager, and KMS, and ensure consistent availability and disaster recovery readiness through multi-region routing and automated failover strategies.
Chennai, Tamil Nadu, India
At TruWeather, I engineered cloud-native data and analytics pipelines on Google Cloud Platform, helping process large-scale geospatial and weather datasets used for aviation forecasting and real-time decision systems. I built streaming ingestion layers with Pub/Sub, automated data processing using Python, Spark, and Dataflow, and integrated outputs into BigQuery for high-speed analysis. I also deployed low-latency microservices and model-serving components on GKE using Docker, Helm, and Terraform, ensuring scalable and reliable operations during peak weather events.
To maintain stability and compliance, I automated IAM, secrets, and environment provisioning using Terraform, Secret Manager, and RBAC, reducing manual steps and securing access across teams. I strengthened observability using Prometheus, Stackdriver, and Dynatrace, improving incident visibility and reducing time to resolve production issues. I collaborated with data engineers, cloud engineers, and backend teams to tune ETL jobs, optimize compute costs, improve SLA adherence, and keep weather intelligence systems highly available and responsive.
Bengaluru, Karnataka, India
At Veeva, I contributed to building and enhancing cloud-backed application features using Java, Spring Boot, SQL, and REST APIs. I supported backend development for healthcare and compliance modules while integrating services with Azure-based infrastructure. I also built serverless automation and lightweight data workflows using Python, Azure Functions, and Blob Storage, enabling reliable ingestion and transformation of application data. My contributions helped streamline backend logic, improve API performance, and strengthen application reliability for enterprise customers.
I introduced DevOps practices into the team by helping automate CI/CD pipelines using Azure DevOps and Jenkins, integrating unit tests, static code checks, and deployment approvals. I also worked on Docker-based containerization and deployments to AKS, improving rollouts and reducing manual errors. For monitoring and troubleshooting, I utilized App Insights, Azure Monitor, and Grafana, assisting the team with incident response, RCA, and performance tuning during key releases. This role gave me strong hands-on experience across backend, cloud, and data workflows, fully aligned with my current cloud, DevOps, and data engineering profile.
2018 — 2020
Hyderabad, Telangana, India
At Keka HR, I supported backend development for core HRMS modules using Java, Spring Boot, and MySQL, helping build features for payroll, attendance, employee workflows, and reporting. I contributed to API development, data modeling, and performance tuning to improve overall stability and responsiveness of internal systems. I also worked with Python and SQL to create small ETL scripts and data extraction jobs that powered dashboards and analytics used by internal teams.
I assisted in automating deployments using Jenkins, Git, and Docker, helping the team reduce manual deployment steps and improve consistency across environments. I collaborated with senior engineers to troubleshoot production issues, optimize queries, and refine application logic, gaining foundational experience across backend engineering, basic DevOps practices, and data workflows. This role established the core technical skills that shaped my later growth in cloud, DevOps, data engineering, and AI-focused systems.
Education
University of Bridgeport
Master of Science - MS
DVR & Dr. Hima Sekhar MIC College of Technology