With a career dedicated to building robust and scalable systems, I specialize in developing resilient distributed frameworks that empower critical services to achieve high availability and fault tolerance.
Experience
2023 — Now
Pleasanton, California, United States
With a focus on distributed systems and high availability, I spearhead the architecture of resilient distributed frameworks for the object management server that powers the Workday platform.
Key Contributions:
• Design and implement a fault-tolerance framework for core components (query, transaction processing, and job processing), significantly improving system reliability and minimizing downtime for critical business functions.
• Lead development of features that help the host provisioning service improve resource utilization, minimizing disruption to critical services.
• Architect solutions that optimize the management and operation of distributed systems and their resources through a robust lifecycle framework that bootstraps critical services; also built a comprehensive monitoring system to reduce MTTD (mean time to detect) and MTTR (mean time to recover) for failures.
2017 — 2023
Redwood City, California
With a strong foundation in cloud-native technologies, scalable distributed systems, and enterprise-grade data integration, I have architected and delivered high-impact solutions across multi-cloud environments. My work spans serverless platforms, Kubernetes-based orchestration, cost-efficient compute strategies, and secure data processing within customer-controlled infrastructure.
Key Contributions:
• Serverless Autoscaler: Built a dynamic autoscaling framework for Informatica’s Serverless platform by extending Kubernetes Cluster Autoscaler with custom APIs.
• Cost-Optimized Compute: Designed a Spot Instance orchestration system with fault-tolerant scheduling and preemption handling for significant cost savings.
• Secure Multi-Cloud Execution: Architected Elastic Secure Agent to run Spark-based workflows securely in customer VPCs across AWS, Azure, and GCP.
2016 — 2017
Bengaluru Area, India
I spearheaded the integration of critical security technologies within Informatica’s data processing ecosystem, focusing on enhancing data governance, access control, and secure operations across multiple execution engines and environments.
Key Contributions:
• Integrated Apache Sentry, Ranger, and Knox across Native, Grid, and Hive execution engines to enforce consistent security policies.
• Coordinated implementation and testing efforts with development and QA teams to ensure smooth and secure deployment.
The integration of these advanced information security technologies significantly enhanced Informatica’s authentication and authorization capabilities, providing robust, fine-grained access control across all execution environments. This modernization strengthened data protection and compliance, substantially elevating the platform’s overall security posture.
2014 — 2016
To modernize Informatica’s on-prem Data Integration Server and support evolving data complexity, I led the architecture and implementation of a Custom Data Type Framework designed for seamless integration of hierarchical and complex data types across the ETL pipeline. This included the development of an OSGi-based plugin system with extension-driven APIs that decoupled data type logic from core transformation modules, eliminating the need for deep code modifications and reducing the time required to add new data types from years to months.
Key contributions:
• Enhanced the expression engine to support complex data types (structs, arrays, maps) while ensuring backward compatibility.
• Led cross-team scrums and mentored engineers to drive adoption of the new data type framework across the platform.
2012 — 2014
Bengaluru Area, India
I contributed to the performance optimization and feature expansion of Informatica’s Data Integration Server by enabling parallel file I/O through N-way partitioning and enhancing lookup capabilities with dynamic cache updates. These improvements delivered more scalable and efficient data processing across diverse environments and workloads.
Key Contributions:
• Implemented N-way file partitioning support for parallel read/write operations across local file systems and distributed platforms like HDFS.
• Migrated and integrated dynamic lookup cache transformation, enabling real-time updates to cached data within ETL pipelines.
• Ensured compatibility across major Hadoop distributions and seamless integration with existing transformation frameworks.
Education
International Institute of Information Technology, Hyderabad
M.Tech
Nirma Institute of Technology, Gujarat University