# Bhanu Murthy A.

> Senior Software AI Engineer – Cloud, Data & DevOps | Golang • Java • Python • SQL | Azure • AWS • GCP | Kubernetes • Terraform | CI/CD | SRE | Data Pipelines | AI/GenAI

Location: Brooklyn, New York, United States
Profile: https://flows.cv/bhanumurthya

I’m a Senior Software Engineer with 7+ years of real-world experience across Cloud, DevOps, Data Engineering, Backend Development, and AI/GenAI platforms. I design and build systems that stay reliable at scale, whether Java/Spring Boot services, Python-based data pipelines, or Kubernetes workloads powering AI applications. My work focuses on creating solutions that are fast, secure, cost-efficient, and always available, much like keeping a complex production line running without interruption.

I have strong hands-on experience across Azure, AWS, and GCP, where I’ve built everything from high-availability Kubernetes clusters (AKS, EKS, GKE) to end-to-end CI/CD pipelines using Jenkins, GitHub Actions, Azure DevOps, and GitLab. I automate infrastructure using Terraform, Ansible, CloudFormation, and Bicep, and follow GitOps practices to keep every environment consistent and drift-free.

On the data side, I build ETL/ELT pipelines, Spark/PySpark workflows, real-time streaming architectures using Kafka/PubSub, and cloud-native data platforms on BigQuery, Snowflake, Redshift, and SQL-based systems. I frequently integrate these pipelines with AI/ML platforms such as SageMaker, Vertex AI, Azure ML, and Databricks, supporting model training, feature engineering, and large-scale inference systems.

I care deeply about observability and reliability, building full SLI/SLO dashboards using Prometheus, Grafana, CloudWatch, ELK, Dynatrace, and App Insights, and tying them into PagerDuty/Slack for proactive incident response. These practices have consistently helped reduce MTTD/MTTR by 50% and keep production systems stable under heavy load.
Across roles, I’ve collaborated with developers, data scientists, and platform teams to streamline releases, automate operations, strengthen security (IAM/RBAC/Key Vault/KMS), and support mission-critical applications with strong SLA commitments.

I’m open to Corp-to-Corp (C2C) opportunities across the USA (remote, hybrid, or onsite), especially roles involving Cloud Engineering, DevOps, Data Engineering, AI/ML infrastructure, backend services, and distributed platforms.

## Work Experience

### Senior Software AI Engineer @ Kube IT

Jan 2025 – Present

I design and automate large-scale AI/ML platforms across Azure and GCP, deploying training and inference workloads using AKS, GKE, Docker, Helm, and Terraform to support high-availability enterprise systems. I build end-to-end CI/CD pipelines using GitHub Actions, Jenkins, and Azure DevOps, integrating automated testing and DevSecOps validation to reduce deployment risk and accelerate release cycles. I also develop Python- and SQL-based data workflows integrated with Vertex AI, Azure ML, and cloud storage layers to streamline retraining, model deployment, and pipeline reliability.

To maintain production stability, I implement deep observability using Prometheus, Grafana, CloudWatch, and Stackdriver, building SLO-driven dashboards that cut MTTR by more than 50% and catch issues before they become outages. I work closely with data engineering, SRE, and application teams to automate infrastructure provisioning, enforce IAM/RBAC controls, and ensure compliance for mission-critical cloud and AI workloads operating at scale.

### Senior Software Engineer @ Oportun

Jan 2023 – Jan 2024

I built and managed cloud-native data and AI platforms on AWS, implementing pipelines and services that powered real-time personalization and credit-risk decisions. I designed scalable ML workflows using SageMaker, Lambda, Step Functions, DynamoDB, and API Gateway, and automated infrastructure with Terraform to maintain consistent, secure environments.
I also built ETL/ELT data pipelines using Python, Spark, and batch/streaming jobs, enabling event-driven retraining and improving prediction accuracy and latency across business-critical systems. Using EKS, Docker, and Helm, I optimized GPU-based inference workloads to handle millions of requests daily with predictable performance.

To strengthen reliability, I implemented full observability across logs, metrics, and traces using Grafana, CloudWatch, and OpenSearch (ELK), improving issue detection and reducing MTTR. I helped build CI/CD pipelines with GitHub Actions, Jenkins, and AWS CodePipeline, incorporating automated tests and deployment safeguards for high-volume production environments. I collaborated closely with platform, data, and ML teams to improve throughput, enhance security using IAM, Secrets Manager, and KMS, and ensure consistent availability and disaster-recovery readiness through multi-region routing and automated failover strategies.

### Senior Software Engineer II @ TruWeather Solutions

Jan 2022 – Jan 2023 | Chennai, Tamil Nadu, India

At TruWeather, I engineered cloud-native data and analytics pipelines on Google Cloud Platform, helping process large-scale geospatial and weather datasets used for aviation forecasting and real-time decision systems. I built streaming ingestion layers with Pub/Sub, automated data processing using Python, Spark, and Dataflow, and integrated outputs into BigQuery for high-speed analysis. I also deployed low-latency microservices and model-serving components on GKE using Docker, Helm, and Terraform, ensuring scalable and reliable operations during peak weather events.

To maintain stability and compliance, I automated IAM, secrets, and environment provisioning using Terraform, Secret Manager, and RBAC, reducing manual steps and securing access across teams. I strengthened observability using Prometheus, Stackdriver, and Dynatrace, improving incident visibility and reducing time to resolve production issues.
I collaborated with data engineers, cloud engineers, and backend teams to tune ETL jobs, optimize compute costs, enhance SLAs, and ensure that weather intelligence systems stayed available and responsive.

### Associate Software Engineer @ Veeva Systems

Jan 2020 – Jan 2022 | Bengaluru, Karnataka, India

At Veeva, I contributed to building and enhancing cloud-backed application features using Java, Spring Boot, SQL, and REST APIs. I supported backend development for healthcare and compliance modules while integrating services with Azure-based infrastructure. I also built serverless automation and lightweight data workflows using Python, Azure Functions, and Blob Storage, enabling reliable ingestion and transformation of application data. My contributions helped streamline backend logic, improve API performance, and strengthen application reliability for enterprise customers.

I introduced DevOps practices to the team by helping automate CI/CD pipelines using Azure DevOps and Jenkins, integrating unit tests, static code checks, and deployment approvals. I also worked on Docker-based containerization and deployments to AKS, improving rollouts and reducing manual errors. For monitoring and troubleshooting, I used App Insights, Azure Monitor, and Grafana, assisting the team with incident response, RCA, and performance tuning during key releases. This role gave me strong hands-on experience across backend, cloud, and data workflows, fully aligned with my current cloud/DevOps/data engineering profile.

### Junior Software Engineer @ Keka HR

Jan 2018 – Jan 2020 | Hyderabad, Telangana, India

At Keka HR, I supported backend development for core HRMS modules using Java, Spring Boot, and MySQL, helping build features for payroll, attendance, employee workflows, and reporting. I contributed to API development, data modeling, and performance tuning to improve the overall stability and responsiveness of internal systems.
I also worked with Python and SQL to create small ETL scripts and data-extraction jobs that powered dashboards and analytics used by internal teams. I assisted in automating deployments using Jenkins, Git, and Docker, helping the team reduce manual deployment steps and improve consistency across environments.

I collaborated with senior engineers to troubleshoot production issues, optimize queries, and refine application logic, gaining foundational experience across backend engineering, basic DevOps practices, and data workflows. This role established the core technical skills that shaped my later growth in cloud, DevOps, data engineering, and AI-focused systems.

## Education

### Master of Science - MS in Computer Science

University of Bridgeport

### Bachelor of Technology - BTech in Electronics and Communications Engineering

DVR & Dr. Hima Sekhar MIC College of Technology

## Contact & Social

- LinkedIn: https://linkedin.com/in/bhanumurthyallada

---

Source: https://flows.cv/bhanumurthya
JSON Resume: https://flows.cv/bhanumurthya/resume.json
Last updated: 2026-04-01