Experience
2023 — Present
New Jersey, United States
1. Developed an Azure optimization recommendations application using Python, Azure App Service, Snowflake, Azure DevOps pipelines, generative AI with LangChain, and prompt engineering.
2. Implemented generative AI large language model workflows in LangChain to parse textual rules with a FAISS vector database and to query tabular data in natural language using pandas_gpt (a sketch follows this list).
3. Iteratively refined prompts to obtain the desired output from the LangChain implementation.
4. Developed Flask APIs in Python to fetch Azure resource properties via the Azure SDK, host the LangChain implementation, and load PostgreSQL tables; utilized libraries including pandas, numpy, json, the Azure SDK, snowflake-sqlalchemy, flask, logging, threading, csv, pandas_gpt, openpyxl, and pyodbc.
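A minimal sketch of the rule-retrieval pattern from item 2, assuming the langchain-community and langchain-openai packages and an OpenAI API key in the environment; the rule texts and query are hypothetical placeholders, not the production rules:

```python
# Sketch: index textual rules in FAISS, then retrieve by natural language.
# Assumes OPENAI_API_KEY is set; rules and query are illustrative only.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Index the textual optimization rules in a FAISS vector store.
rules = [
    "Resize underutilized VMs when average CPU stays below 20% for 14 days.",
    "Delete unattached managed disks older than 30 days.",
]
store = FAISS.from_texts(rules, OpenAIEmbeddings())

# Retrieve the rule most relevant to a natural-language question.
docs = store.similarity_search("Which VMs should be downsized?", k=1)
print(docs[0].page_content)
```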
2022 — 2023
New Jersey, United States
1. Developed a portfolio risk metrics ETL application in object-oriented Python, leveraging SQLAlchemy, pyodbc, pandas, and numpy.
2. Implemented Python logic for calculating financial metrics such as annualized returns, beta, volatility, Sharpe ratio, IRR, and max drawdown (a sketch follows this list).
3. Created common Python modules for audit logging, database access, and date operations.
4. Applied best practices in Python: exception handling, audit logging, and writing loosely coupled code.
5. Integrated Python projects with Azure Data Factory pipelines for seamless execution.
6. Optimized Brinson model financial processes by migrating SQL Server procedures to Azure Data Factory.
7. Developed SQL Server stored procedures and table-valued functions to fetch cash flow data, calculate fund returns, aggregate data, and invoke Python code for IRR calculations.
8. Resolved production issues in SQL Server stored procedures for enhanced performance.
9. Built Microsoft SSRS application from scratch to display portfolio risk metrics in a dynamic tabular format, supporting customizable portfolio strategies, names, modes, and dates.
10. Enhanced existing SSRS reports by adding columns and modifying underlying logic.
11. Created high-level and low-level designs for the portfolio risk metrics application.
12. Created a Jupyter notebook in the cash flows application to interact with SQL Server and perform data transformations and loading using native Python operations.
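A minimal sketch of the metric calculations from item 2, using numpy only; the sample return series and the 252-trading-day annualization convention are illustrative assumptions, not the production logic:

```python
import numpy as np

def annualized_return(daily_returns: np.ndarray, periods: int = 252) -> float:
    """Geometric annualization of a daily return series."""
    cumulative = np.prod(1 + daily_returns)
    return cumulative ** (periods / len(daily_returns)) - 1

def sharpe_ratio(daily_returns: np.ndarray, rf: float = 0.0, periods: int = 252) -> float:
    """Annualized excess return divided by annualized volatility."""
    excess = daily_returns - rf / periods
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def max_drawdown(daily_returns: np.ndarray) -> float:
    """Largest peak-to-trough decline of the cumulative return curve."""
    curve = np.cumprod(1 + daily_returns)
    peaks = np.maximum.accumulate(curve)
    return ((curve - peaks) / peaks).min()

# Illustrative series only.
returns = np.array([0.01, -0.02, 0.015, 0.003, -0.007])
print(annualized_return(returns), sharpe_ratio(returns), max_drawdown(returns))
```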
2021 — 2022
New Jersey, United States
1. Developed object-oriented Python ETL frameworks to fetch Variance data from a REST API, perform transformations, and load Credit market risk data into SQL Server (a sketch follows this list).
2. Utilized libraries such as requests, pandas, SQLAlchemy, and gRPC for efficient data processing.
3. Implemented a client-server architecture using gRPC, enabling server-side execution of Python logic.
4. Leveraged batch jobs and Jupyter notebooks as client applications for seamless integration with gRPC.
5. Generated dynamic RNIV Excel reports using Python, supporting pivots and dynamic formulas via the eval method.
6. Created common Python modules for audit logging, database access, and date operations.
7. Applied best practices in Python, including exception handling, audit logging, writing loosely coupled code, and Test-driven development (TDD).
8. Developed comprehensive high-level and low-level documentation for the Variance API ETL process.
9. Scheduled jobs in Autosys for automated execution and monitoring.
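A minimal sketch of the extract-transform-load flow from item 1; the endpoint URL, connection string, and table name are hypothetical placeholders:

```python
# Sketch: REST extract -> pandas transform -> SQL Server load.
import pandas as pd
import requests
from sqlalchemy import create_engine

# Extract: fetch variance records from the REST API (placeholder URL).
resp = requests.get("https://example.com/api/variance", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())

# Transform: normalize column names and drop incomplete rows.
df.columns = [c.lower() for c in df.columns]
df = df.dropna()

# Load: write the frame into SQL Server via a pyodbc-backed engine.
engine = create_engine("mssql+pyodbc://user:pass@dsn_name")
df.to_sql("credit_market_risk", engine, if_exists="append", index=False)
```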
2020 — 2021
Delhi, India
1. Led end-to-end software development lifecycle for Python applications, including requirements gathering, code development, unit testing, and peer review.
2. Translated Mule and NetSuite ETL processes into Python, enhancing efficiency and maintainability.
3. Developed Python frameworks for seamless extraction, transformation, and loading of data from Redshift and MySQL databases.
4. Enhanced reusable Python components to improve code quality and maintainability.
5. Implemented PySpark jobs to handle large-scale transactional data extraction and processing (a sketch follows this list).
6. Leveraged DBT to build and run incremental loads on Redshift tables.
7. Utilized DBT to construct and load type-2-dimension tables in AWS Redshift.
8. Achieved code reusability by leveraging macros in DBT.
9. Worked extensively with AWS services: S3 for file upload/download, Redshift for data extraction, loading, and analysis, Secrets Manager for secure password storage, and AWS Batch for running Docker images of Python applications.
10. Monitored Jenkins builds and image creation post-deployment to ensure continuous integration and delivery.
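A minimal sketch of the PySpark pattern from item 5; the S3 paths, column names, and aggregation are illustrative assumptions, not the actual job:

```python
# Sketch: batch extraction and aggregation of transactional data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-extract").getOrCreate()

# Extract: read partitioned transactional data from S3 (placeholder path).
txns = spark.read.parquet("s3://example-bucket/transactions/")

# Transform: aggregate daily totals per account.
daily = (
    txns.groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
        .agg(F.sum("amount").alias("daily_amount"))
)

# Load: write results back to S3 for downstream Redshift loads.
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")
```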
2018 — 2020
Gurugram, Haryana, India
1. Proficient in Spark SQL, Hive, Oozie, and GitHub.
2. Developed a Python-based peer review tool that automates recursive review of 500+ code files, significantly reducing review time and minimizing production defects (a sketch follows this list).
3. Created Git-based Unix Bash scripting tools for streamlined production deployments, enabling seamless delivery with reduced manual effort.
4. Built, executed, and tested Spark code using Scala and Spark SQL.
5. Developed intricate Spark SQL queries to deliver essential client models.
6. Conducted data validations in Hive for ensuring data integrity and accuracy.
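A minimal sketch of the recursive-review idea from item 2; the rules shown are hypothetical examples, not the tool's actual checks:

```python
# Sketch: recursively scan source files and flag lines matching review rules.
from pathlib import Path

# Hypothetical review rules: finding description -> substring to flag.
RULES = {
    "TODO left in code": "TODO",
    "print used instead of logging": "print(",
}

def review(root: str) -> None:
    """Walk all .py files under root and report rule violations by line."""
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for finding, needle in RULES.items():
                if needle in line:
                    print(f"{path}:{lineno}: {finding}")

review("src")
```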
Education
Amity University, Noida
Engineer’s Degree
Cambridge Foundation School