# Karthikeyan S.

> Senior Staff Software Engineer at Confluent | Distributed Systems & Storage (File Systems) | ex-Rubrik

Location: Mountain View, California, United States
Profile: https://flows.cv/karthikeyans

Software Engineer with experience spanning filesystems, distributed systems, and cloud infrastructure. I have led and shipped large-scale infrastructure capabilities across on-prem and cloud environments, including secure connectivity, hybrid control planes, and high-performance data paths. I enjoy turning complex, multi-team problems into simple, reliable products, with a focus on correctness, performance, and operational excellence.

Core areas: Distributed systems, storage and filesystems, cloud infrastructure, systems programming, Kubernetes (operators/controllers), performance analysis and optimization.

Languages: Java, C/C++, Go, Python, and various scripting languages as needed.

## Work Experience

### Staff Software Engineer II @ Confluent

Jan 2021 – Present | Mountain View, California, United States

Member of the Stream Governance team, owning Schema Registry end to end across Confluent Platform and Confluent Cloud, including the cloud control plane as well as fleet deployment, lifecycle management, and operations of Schema Registry.

- Served as Technical Lead for adding Private Link support to Confluent Cloud Schema Registry (https://docs.confluent.io/cloud/current/sr/fundamentals/sr-private-link.html). Led end-to-end design and implementation of high-impact capabilities, including cross-region connectivity and a migration path to enable Private Link in existing environments. Delivered the feature set from Early Access through Limited Availability to General Availability.
- Served as Technical Lead for Schema Registry integration with Unified Stream Manager (https://www.confluent.io/blog/introducing-unified-stream-manager/), enabling a unified hybrid Schema Registry experience across Confluent Platform and Confluent Cloud.
  Designed and implemented a high-performance, secure request-forwarding module for read and write traffic between on-premises and cloud deployments, and delivered a pull-based schema-linking mechanism that enables hybrid deployments without requiring inbound ingress into customer environments.
- Collaborated with the Cloud Platform and Cloud Infrastructure teams to build Confluent for Kubernetes (CFK) (https://docs.confluent.io/operator/current/overview.html), a Kubernetes-based control plane for Confluent private-cloud deployments, with the goal of delivering an operational experience comparable to Confluent Cloud. Delivered Custom Resource Definitions (CRDs) for Cluster Linking (https://www.confluent.io/blog/hybrid-cloud-data-management-with-confluent-kubernetes/) and Schema Linking, along with platform-critical enhancements including certificate rotation and dynamic certificate reloading in Kafka.

### Staff Software Engineer / Tech Lead @ Rubrik, Inc.

Jan 2018 – Jan 2021

Worked on the Cloud Data Management (CDM) team, focused on host-based and NAS-based data management.

- Technical Lead and core developer for NAS Direct Archive, enabling archival of petabyte-scale datasets to the cloud while maintaining constant space usage on the Rubrik cluster. Two issued patents.
- Technical Lead for large-file sharding to improve scalability for multi-terabyte files. Originated the approach, built the initial prototype as a solo hackathon project (selected Top 3), and led the effort to productize and ship it. One issued patent.
- Technical Lead and developer for optimized hard-link support across filesystem and NAS data management, improving correctness and efficiency for link-heavy workloads. One issued patent.
- Technical Lead for a testpoint framework to inject faults into end-to-end tests in production-like environments. Originated the idea, implemented it as a solo hackathon project, and integrated it into the product, significantly improving coverage of real-world failure scenarios.
- Re-architected the filesystem and NAS stack onto a new foundation to support deduplication and improve performance and stability.
- Implemented performance and memory optimizations for remote backup agents using batching and write-back mechanisms, enabling reliable operation on filesystems with hundreds of millions of files.

### Staff Software Engineer @ Tintri

Jan 2013 – Jan 2018

Worked on the Tintri Filesystem team, focused on performance, security, and platform reliability.

- Developed a zero-block optimization framework for the Tintri filesystem, reducing storage consumption and write amplification for sparse workloads. One issued patent.
- Designed and implemented software-based encryption for the filesystem to eliminate the dependency on self-encrypting drives (SEDs) and enable FIPS compliance.
- Built a FIPS-compliant encryption library on top of OpenSSL and led Tintri’s FIPS certification efforts.
- Delivered write-path enhancements and performance optimizations to improve throughput and latency under heavy workloads.
- Mentored two summer interns, guiding feature prototyping and technical execution.
- Developed an NVRAM utility to evaluate performance and prototype optimizations.

### Senior Software Engineer @ Dell EMC

Jan 2011 – Jan 2013

Worked in the Data Domain Filesystems group, responsible for the core filesystem stack used across all EMC Data Domain appliances.

- Prototyped a distributed, multi-node Data Domain archiver to improve scalability and throughput.
- Implemented support for Unstable Streams (NVRAM bypass) in DDFS to enable protocol-specific write semantics, including CIFS/SMB workloads.
- Redesigned the deduplication domain key-value store to be resilient to enclosure failures, strengthening durability for security-critical metadata, including encryption keys.
- Delivered performance enhancements to File Manager, the VFS layer of the Data Domain filesystem stack.

### Software Development Engineer @ Yahoo!
Jan 2010 – Jan 2011

Worked on the Yahoo! Mail backend team, focused on the metadata storage and retrieval layer that powers core Yahoo! Mail experiences.

- Delivered enhancements to the metadata platform, including lease server improvements, compact indexing, and cheap copy support in the Yahoo! Mail Metadata API.
- Designed and implemented a Lucene index migration for the My Photos (Xoopit) application within Yahoo! Mail, improving indexing reliability and maintainability.

### Research Assistant - File Systems & Storage Lab @ Stony Brook University

Jan 2009 – Jan 2010

Conducted research on Linux-based filesystems under Prof. Erez Zadok, with a focus on storage correctness and data efficiency.

- Implemented write ordering guarantees using write barriers on SCSI via Tagged I/O, improving consistency across failure scenarios.
- Designed and implemented an efficient deduplication metadata store using LSM-tree–based data structures, optimized for high write throughput and large key spaces.

### Software Development Intern @ Riverbed Technology

Jan 2009 – Jan 2009

Intern on the CIFS/NFS optimization team, focused on improving WAN optimization performance and troubleshooting tooling.

- Evaluated read-ahead optimizations for NFS and CIFS traffic in the context of Compound Document Format (CDF) workloads; built a Python prototype to validate the performance impact.
- Developed internal performance-analysis tools to correlate system-call activity with Wireshark network traces, improving debugging speed and root-cause isolation.

### Senior Software Engineer @ United Online, Inc.

Jan 2005 – Jan 2008

Software Engineer on the Server and Web Applications team in Hyderabad, building web services and web applications for the Classmates.com portal. Worked primarily in Java/J2EE with Spring and Hibernate, backed by Oracle and MySQL, with additional Perl and shell scripting.
- Built an internal “Be-User” tool to create user profiles with specific attributes on demand, significantly improving QA efficiency and test coverage.
- Migrated key portal features (message boards, interest groups, and the core registration flow) from Perl and the ATG Dynamo framework to a Java-based web application using Spring, improving maintainability and enabling faster feature delivery.
- Designed and developed the Forums module using the Jive framework; led a large-scale data migration of roughly 30 million rows to a new schema. Used JProfiler and Silk Performer to profile, tune performance, and validate scalability.
- Contributed to the Rewards framework that incentivized user-generated content and engagement; optimized page load times by consolidating and aggregating data from multiple sources into a streamlined datastore for faster responses.

### Software Intern @ IBM

Jan 2004 – Jan 2004

Intern on the IBM Tivoli Software team.

- Analyzed key components of Tivoli Intelligent Orchestrator (TIO) and developed orchestration scripts to automate installation and setup of Tivoli Access Manager, significantly reducing deployment time and manual effort.
- Built a prototype implementation of the SACRED protocol (RFC 3760) for credential management, using Java with a Swing-based UI and a MySQL backend to store and manage PKCS#12 (.p12) certificates.

## Education

### MS in Computer Science

Stony Brook University

### B.E. (Hons) in Computer Science

Birla Institute of Technology and Science, Pilani

## Contact & Social

- LinkedIn: https://linkedin.com/in/karthikeyanas

---

Source: https://flows.cv/karthikeyans
JSON Resume: https://flows.cv/karthikeyans/resume.json
Last updated: 2026-04-12