I have always been fascinated by the power of data to drive decisions and create meaningful applications. With over five years of experience as a Software Development Engineer focused on data and backend engineering, I have built scalable solutions that harness the potential of large datasets.
Experience
2025 — Now
United States
As a Software Engineer, my work focuses on building end-to-end data-driven systems that combine data engineering, machine learning, and backend development to deliver scalable and reliable solutions.
I engineered data pipelines using PySpark and Apache Airflow to process large-scale structured and unstructured data, making data efficiently available for both real-time and batch analytics. This improved access to insights and supported downstream machine learning and application workflows.
I developed NLP models using transformer architectures to extract meaningful insights from text data and integrated these models into applications through REST APIs. This helped bridge the gap between data science and production systems, making model outputs usable in real-world applications.
On the backend side, I built scalable services using C#/.NET and Python to serve data and ML outputs, implementing schema validation, versioning, and reliable API contracts. I also optimized SQL queries and data models to improve performance for high-volume analytical workloads.
Additionally, I designed event-driven architectures using Kafka to support real-time data streaming and built systems that handle high-throughput workloads efficiently. I also implemented monitoring and CI/CD pipelines across both data and application layers to ensure reliable deployments and consistent system performance.
Skills: Data Engineering, Machine Learning, Backend Development (.NET/Python), Real-Time Systems, APIs, Cloud Platforms
2024 — 2024
United States
During my internship as a Software Engineer Intern, I worked on building data-driven backend systems and processing large-scale telemetry and transactional datasets. Using Python and SQL, I implemented data validation and transformation logic that improved data quality and reliability for analytics and downstream applications.
I developed ETL pipelines to automate data ingestion, transformation, and aggregation, reducing manual effort and improving the consistency of data-processing workflows. This enabled faster availability of structured data for reporting and analytical use cases.
On the backend side, I built REST APIs and services to integrate external data sources with internal systems, ensuring smooth data flow and reliable communication between applications and databases. I also worked on optimizing SQL queries and data access patterns, which improved system performance and reduced response times under load.
Additionally, I contributed to building dashboards and visualizations to monitor system metrics and operational trends, helping stakeholders gain real-time insights and make informed decisions.
Skills: Data Processing, ETL Pipelines, Backend Development, API Integration, SQL Optimization, Data Visualization
2020 — 2023
Bangalore
As a Software Engineer, my work focused on building data-driven applications by combining data engineering and backend development to deliver reliable and scalable enterprise solutions.
I worked extensively with Python and SQL to transform, validate, and process large-scale enterprise data, ensuring data consistency and reliability for reporting and downstream applications. This enabled better visibility into operational metrics and improved decision-making across teams.
I developed ETL workflows to automate data ingestion, transformation, and loading into relational databases, reducing manual effort and improving the efficiency of reporting pipelines. These workflows supported consistent and structured data availability for analytics use cases.
On the backend side, I built application components using C#/.NET and REST APIs to enable seamless interaction between frontend systems and databases. I also optimized SQL queries, indexing strategies, and database performance to improve response times for high-volume workloads.
Additionally, I implemented asynchronous processing patterns and messaging systems to handle large data flows efficiently, improving system scalability and stability. I also contributed to reporting and dashboard solutions that provided stakeholders with insights into KPIs and system performance.
Skills: Data Engineering, Backend Development (.NET/Python), ETL Pipelines, SQL Optimization, APIs, Messaging Systems
Education
University of Houston