# Rohith Reddy Kota

> Senior Software Engineer (Data)

Location: Boston, Massachusetts, United States
Profile: https://flows.cv/rohithreddykota

I love building data applications for data engineers and data scientists. As an experienced software engineer with years in the field, I've dedicated my career to mastering the design, architecture, development, debugging, and maintenance of distributed data systems and backend services at scale, continually learning from my experiences.

Currently, I am a core developer at Rill Data, where I contribute to Rill, an open-source operational BI tool that delivers the fastest dashboards. I love the open-source nature of Rill and its potential to revolutionize the way teams use dashboards for operational insights.

I'm proficient in languages such as Go, Scala, Java, Python, and Rust. My expertise spans a wide array of tools and technologies, including Apache Kafka, Apache Pinot, ClickHouse, Elasticsearch, MongoDB, Apache Druid, ZooKeeper, Apache Spark (with PySpark and Scala), Apache Flink, and Apache Airflow, and I am well-versed in OpenAPI, Swagger, and OpenTelemetry. I work with Kubernetes, AWS, and Google Cloud services every day.

My skills extend to crafting intricate SQL queries; managing transactional databases such as Postgres, MySQL, MongoDB, and DynamoDB; working with data warehousing tools like Snowflake and Redshift; and using data lakehouses like Delta Lake and Apache Iceberg. I excel at system integration through technologies like gRPC, GraphQL, and RESTful APIs, and at deploying microservices efficiently using Docker and Kubernetes. At Rill, everything is (and will be) on Kubernetes.

Moreover, I have a strong foundation in developing high-concurrency, performance-oriented systems, and I prioritize good software engineering practices such as testability, quality assurance, and code reviews. My ability to communicate complex technical concepts to diverse stakeholders fosters effective collaboration.
I've gained exposure to various business domains and adapted to different work environments, from early-stage startups to large organizations, making me a versatile and effective software engineer well equipped to tackle complex projects and drive them to successful outcomes.

## Work Experience

### Senior Software Engineer (Data) @ Rill Data

Jan 2021 – Present | United States

Rill is the fastest path to operational intelligence. As the world’s first truly elastic, fully managed cloud service for Apache Druid™, we enable data teams to deliver operational intelligence to their business stakeholders with zero DevOps overhead.

Key responsibilities at Rill Data:

• Design and implement complex stream and batch applications for programmatic advertising businesses using Apache Kafka, Apache Beam on Apache Flink, and Google Dataflow runners, processing terabytes of time-series data into Apache Druid and ClickHouse.
• Manage the state and observability of streaming applications, ensuring smooth updates and modifications that align with evolving business requirements.
• Build and manage batch pipelines using Rill Cloud, dbt, and Apache Airflow on Kubernetes (GKE).
• Optimize distributed applications and Druid ingestion specs for maximum performance and scalability.
• Develop database connectors to extract data from data warehouses (Snowflake, Redshift) and data lakehouses (Delta Lake, Apache Iceberg), enabling seamless ingestion into Apache Druid and ClickHouse.
• Quickly prototype and build POCs on DuckDB to validate data pipelines and accelerate solution design.
• Apply a deep understanding of database internals, using tuning techniques to maximize query performance and resource utilization.
• Implement tools and frameworks to automate data processing and ingestion while improving overall database performance.
• Implement and manage incremental ingestion features on Rill Cloud, enabling efficient data updates with reduced cost and latency.
• Drive all development and operations with a strong focus on cost control, ensuring high efficiency without compromising performance.

### Senior Data Engineer @ Saltside

Jan 2019 – Jan 2021 | India

Saltside Technologies provides an online marketplace for goods and services. Millions of users use our software every day, so millions of events flow into our data platform.

Key responsibilities at Saltside Technologies:

• Built and maintained streaming applications and data pipelines using Apache Storm that process and enrich application data.
• Developed analytical services and maintained integrations between data-level microservices and other backend services using Kafka, Apache Thrift, and gRPC.
• Developed and maintained fault-tolerant, complex ETL and ELT jobs using Apache Airflow.
• Designed and maintained the AWS Redshift data warehouse.
• Built Tableau reports on top of AWS Redshift for business owners and digital marketing teams, and assisted them with ad-hoc SQL queries that informed key business decisions.

### Software Development Engineer @ NanoPrecise Sci Corp

Jan 2018 – Jan 2019 | Bengaluru Area, India

At NanoPrecise, we worked on fundamental and critical problems faced by mechanical industries in predicting failure. I played a crucial role in processing and analyzing vibration data streaming from different sensors. My achievements and tasks at NanoPrecise data services:

• Developed cost-effective, data-intensive streaming analytical applications from scratch.
• Developed and maintained ~30-node Hadoop/Spark clusters on AWS EMR, capable of processing ~100 GB of sensor data per day.
• Built ad-hoc reports using Tableau and Hive queries to answer business questions.
• Worked with engineering teams to develop integrations with the data tech stack using RESTful APIs and gRPC.
• Built several engineering services using Go, Java, and Scala, with MongoDB and PostgreSQL as databases.
### Software Engineer @ GENPACT

Jan 2016 – Jan 2018 | Hyderabad Area, India

At Genpact, I worked with different teams to migrate data from various ERP systems to the SAP R/3 ERP system. While working on the migration project, I gained expertise in database design and data warehousing. We used the BackOffice Associates Data Stewardship Platform as an ETL tool and MS SQL Server as the database for staging, migrating, and batch processing of the data.

## Education

### Master of Science - MS in Data Architecture and Management
Northeastern University

### Executive MBA in Digital Marketing and Analytics
Indian School of Business

### Bachelor’s Degree in Electrical, Electronics and Communications Engineering
Amity University

## Contact & Social

- LinkedIn: https://linkedin.com/in/rohithreddykota

---

Source: https://flows.cv/rohithreddykota
JSON Resume: https://flows.cv/rohithreddykota/resume.json
Last updated: 2026-03-31