# Rahel Ahmed

> Azure Data Engineer | AWS | Tableau | Java | Python | Azure Data Lake | Hadoop | Snowflake | Spark | Airflow | Autosys | Hive | Data Modeling | SQL Server

Location: New York, New York, United States
Profile: https://flows.cv/rahel

## Work Experience

### Data Engineer @ Fidelity Investments
Jan 2025 – Present | United States

- Contributed to the AI-Driven Investment Intelligence Platform by integrating market, alternative, and ESG datasets with real-time streaming pipelines and scenario simulations, improving portfolio insights and increasing investment decision-making speed by 15% across U.S. markets.
- Supported development of low-latency, event-driven data pipelines on AWS (Kinesis, Lambda, SQS) processing 5M+ trades and client signals daily, reducing risk-detection delays by 9% and improving enterprise operational decision-making efficiency.
- Collaborated with data science teams on AI/ML models by enabling Python-based feature-engineering pipelines, curated datasets, and scalable data services, improving predictive-model reliability and training efficiency by 14%.
- Assisted in designing a layered data architecture (ingestion, staging, curated, analytics) and optimizing relational and columnar database schemas using SQL, improving query performance and reporting speed by 7% across enterprise analytics workloads.
- Participated in building Snowflake-based pipelines on AWS using Snowpipe, Streams, and Tasks, enabling near real-time ingestion of 2TB+ of financial data weekly and improving data freshness, governance, and regulatory-reporting readiness by 11%.

### Data Engineer @ Uber India Systems Private Limited
Jan 2020 – Jan 2023 | Hyderabad

- Architected a domain-oriented data mesh on Azure Data Lake and Databricks, onboarding domains and datasets, improving data discoverability by 14% and enabling self-service analytics for internal stakeholders.
- Engineered event-driven and batch ingestion pipelines using Kafka, NiFi, CDC, and Avro schema evolution, processing 3M+ daily events while reducing cross-team data dependencies by 12% and ensuring reliable, real-time availability of ride and telemetry data.
- Built scalable ETL pipelines on Azure Databricks with PySpark, implementing SCD Type 2, deduplication, and validation rules across datasets, improving cross-domain data consistency by 15% and supporting accurate analytics for multiple teams.
- Optimized complex Snowflake SQL transformations with joins, window functions, clustering, and materialized views, reducing query latency by 18% and accelerating city-level demand forecasting, dynamic pricing, and operational dashboards used by teams across Uber India.
- Implemented governance-as-code with automated lineage, schema enforcement, data-quality validation, and RBAC policies across pipelines, and built fault-tolerant workflows handling late-arriving events, improving pipeline reliability, freshness, and audit compliance by 11%.

### ETL Developer @ Zoho
Jan 2018 – Jan 2020 | Hyderabad

- Designed modular data pipelines for CRM, marketing, support, and subscription analytics using Azure Data Lake, Delta Lake, and Parquet, enabling reusable ETL workflows, improved data discoverability, and self-service pipelines across multiple product domains.
- Built event-driven ingestion and transformation frameworks with Kafka, Azure Event Hubs, NiFi, and PySpark, handling schema evolution, change data capture, and idempotent processing, reducing cross-team dependencies and supporting near real-time analytics for SaaS applications.
- Developed Snowflake and Synapse data marts with partitioning, clustering, and materialized views, applying governance-as-code, backfill orchestration, and observability dashboards to ensure reproducible analytics, audit readiness, and reliable BI reporting across CRM, marketing, and product domains.
### Big Data Engineer @ EPAM Systems
Jan 2016 – Jan 2018 | Hyderabad

- Engineered end-to-end trip, rider, driver, and logistics telemetry pipelines integrating GPS, payments, and operations data into raw, curated, and presentation layers, improving ride allocation, ETA accuracy, and operational decision-making by 45%.
- Built scalable batch and streaming ETL pipelines using Azure Data Factory, Apache Kafka, and Spark Structured Streaming, ingesting data from MySQL, PostgreSQL, GPS APIs, and driver devices, reducing latency by 65% and enabling real-time analytics.
- Developed ML-ready pipelines for demand forecasting, surge pricing, route optimization, and anomaly detection on Databricks Lakehouse, optimizing predictive surge, trip fulfillment, and driver engagement while strengthening data governance, observability, and audit readiness by 50%.

## Education

### Bachelor of Technology in Information Technology
Jawaharlal Nehru Technological University, Hyderabad, India

### Master of Science in Information Technology
Valparaiso University, Valparaiso, Indiana, USA

## Contact & Social

- LinkedIn: https://linkedin.com/in/rahel-ahmed-7619a4b1

---

Source: https://flows.cv/rahel
JSON Resume: https://flows.cv/rahel/resume.json
Last updated: 2026-04-17