# Manish Kukreja

Python | Azure | Generative AI | DataStage | Spark | Asset Management

Location: Jersey City, New Jersey, United States
Profile: https://flows.cv/manishkukreja

## Work Experience

### Software Developer (AT&T) @ Ana-Data Consulting Inc
Jan 2023 – Present | New Jersey, United States

1. Developed an Azure optimization recommendations application using Python, Azure Web App Service, Snowflake, Azure DevOps pipelines, LangChain, and prompt engineering.
2. Implemented generative AI large language model workflows with LangChain, parsing textual rules against a FAISS vector database and querying tabular data in natural language via pandas_gpt.
3. Iteratively refined prompts to obtain the desired output from the LangChain implementation.
4. Developed Flask APIs in Python to fetch Azure resource properties via the Azure SDK, host the LangChain implementation, and load Postgres tables, using pandas, NumPy, json, the Azure SDK libraries, snowflake-sqlalchemy, Flask, logging, threading, csv, pandas_gpt, openpyxl, pyodbc, and others.

### Software Developer (CIFC Asset Management) @ Ana-Data Consulting Inc
Jan 2022 – Jan 2023 | New Jersey, United States

1. Developed a portfolio risk metrics ETL application in object-oriented Python, leveraging SQLAlchemy, pyodbc, pandas, and NumPy.
2. Implemented Python logic for calculating financial metrics such as annualized returns, beta, volatility, Sharpe ratio, IRR, and maximum drawdown.
3. Created common Python modules for audit logging, database access, and date operations.
4. Applied Python best practices: exception handling, audit logging, and loosely coupled code.
5. Integrated Python projects with Azure data pipelines for seamless execution.
6. Optimized Brinson-model financial processes by migrating SQL Server procedures to Azure Data Factory.
7. Developed SQL Server stored procedures and table-valued functions for fetching cashflow data, calculating fund returns, aggregating data, and invoking Python code for IRR calculations.
8. Resolved production issues in SQL Server stored procedures for enhanced performance.
9. Built a Microsoft SSRS application from scratch to display portfolio risk metrics in a dynamic tabular format, supporting customizable portfolio strategies, names, modes, and dates.
10. Enhanced existing SSRS reports by adding columns and modifying underlying logic.
11. Created high-level and low-level designs for the portfolio risk metrics application.
12. Created a Jupyter notebook in the cashflows application to interact with SQL Server and perform data transformations and loading using native Python operations.

### Software Developer (Barclays) @ Ana-Data Consulting Inc
Jan 2021 – Jan 2022 | New Jersey, United States

1. Developed object-oriented Python ETL frameworks to fetch variance data from a REST API, perform transformations, and load credit market risk data into SQL Server.
2. Utilized libraries such as requests, pandas, SQLAlchemy, and gRPC for efficient data processing.
3. Implemented a client-server architecture using gRPC, enabling server-side execution of Python logic.
4. Leveraged batch jobs and Jupyter notebooks as client applications for seamless integration with gRPC.
5. Generated dynamic RNIVs Excel reports using Python, supporting pivots and dynamic formulas via the eval method.
6. Created common Python modules for audit logging, database access, and date operations.
7. Applied Python best practices, including exception handling, audit logging, loosely coupled code, and test-driven development (TDD).
8. Developed comprehensive high-level and low-level documentation for the Variance API ETL process.
9. Scheduled jobs in Autosys for automated execution and monitoring.

### Senior Data Engineer @ Chegg Inc.
Jan 2020 – Jan 2021 | Delhi, India

1. Led the end-to-end software development lifecycle for Python applications, including requirements gathering, code development, unit testing, and peer review.
2. Translated Mule and NetSuite ETL processes into Python, enhancing efficiency and maintainability.
3. Developed Python frameworks for seamless extraction, transformation, and loading of data from Redshift and MySQL databases.
4. Enhanced reusable Python components to improve code quality and reusability.
5. Implemented PySpark jobs to handle large-scale transactional data extraction and processing.
6. Leveraged dbt for building and performing incremental loads in Redshift tables.
7. Utilized dbt to construct and load type-2 dimension tables in AWS Redshift.
8. Achieved code reusability by leveraging dbt macros.
9. Worked with various AWS services: S3 for file upload/download, Redshift for data extraction, loading, and analysis, Secrets Manager for secure password storage, and AWS Batch for running Docker images of Python applications.
10. Monitored Jenkins builds and image creation post-deployment to ensure continuous integration and delivery.

### Senior Consultant (HPE Services Pvt. Ltd.) @ Deloitte India (Offices of the US)
Jan 2018 – Jan 2020 | Gurugram, Haryana, India

1. Proficient in Spark SQL, Hive, Oozie, and GitHub.
2. Developed a Python-based peer review tool that automates recursive review of 500+ code files, significantly reducing review time and minimizing production defects.
3. Created Git-based Unix bash scripting tools for streamlined production deployments, enabling seamless delivery with reduced manual effort.
4. Built, executed, and tested Spark code using Scala and Spark SQL.
5. Developed intricate Spark SQL queries to deliver essential client models.
6. Conducted data validations in Hive to ensure data integrity and accuracy.

### Consultant (Telstra) @ Deloitte India (Offices of the US)
Jan 2015 – Jan 2018 | Gurugram, Haryana, India

1. Automated execution of Unix processes through shell script development.
2. Implemented a revenue generation process for the telecommunications client by building PL/SQL procedures from scratch.
3. Developed and enhanced DataStage jobs for improved data processing and transformation.
4. Conducted peer reviews of DataStage and PL/SQL code to ensure code quality and adherence to best practices.
5. Provided leadership to team members, overseeing and guiding their development activities.

### Senior Software Engineer (Amadeus) @ Accenture in India
Jan 2014 – Jan 2015

1. Developed, tested, and implemented ETL rules using IBM DataStage, ensuring efficient data processing.
2. Conducted technical walkthroughs to gain a thorough understanding of business requirements.
3. Performed comprehensive code reviews of DataStage jobs, resulting in reduced production incidents.
4. Created technical design documents to effectively translate business requirements into ETL rules.
5. Provided operational support by troubleshooting and addressing DataStage job failures in production.
6. Led a team of technical resources in identifying and resolving code defects raised in production.
7. Led the development of a complex customer relationship management module using IBM DataStage.
8. Debugged DataStage jobs to identify and resolve code issues, ensuring smooth data processing.
9. Resolved production defects by identifying and correcting errors and faults, minimizing impact on operations.
10. Adhered to naming conventions in the software development lifecycle (SDLC) process, ensuring compliance with process guidelines and maintaining high quality standards.

### Software Engineer (Amadeus) @ Accenture in India
Jan 2012 – Jan 2014

1. Conducted technical code reviews of DataStage jobs, reducing production incidents and improving code quality.
2. Managed change requests to address client requirements promptly and effectively.
3. Developed complex DataStage jobs and sequencers to meet specific client requirements.
4. Created reusable common jobs in DataStage for improved code usability across various technical modules.
5. Optimized DataStage jobs by strategically selecting code components to meet business requirements efficiently.
6. Resolved DataStage job failures promptly to minimize disruption in data processing.
7. Monitored job execution time and record volume to ensure efficient performance.
8. Enhanced a complex value proposition module to track the client's key performance indicators relative to other business providers.
9. Developed aggregated data marts in DataStage at monthly and yearly levels for comprehensive data analysis.
10. Implemented reprocess modules to handle historical data processing in case of job fixes or updates.
11. Led the migration of DataStage from version 7.5 to 8.7, including conversion of Oracle Enterprise stages to Oracle connectors, transitioning ABAP extract modes from file transfer protocol to remote function call, and performing thorough job output comparisons between versions.

### Associate Software Engineer (Amadeus) @ Accenture in India
Jan 2011 – Jan 2012 | Bengaluru, Karnataka, India

1. Tested DataStage jobs and documented test results for quality assurance.
2. Enhanced existing DataStage jobs to improve functionality and performance.
3. Developed DataStage jobs of simple to medium complexity.
4. Participated in technical walkthrough sessions and prepared detailed technical specifications.
5. Created reusable common jobs in DataStage to promote code reusability and maintainability.
6. Optimized DataStage jobs for improved efficiency and performance.
7. Created SQL queries for unit testing within DataStage.
8. Documented functional and unit test case scenarios for comprehensive test coverage.
9. Debugged code issues using DataStage, Oracle, and Unix, ensuring smooth execution.
10. Installed DataStage jobs in the production environment, ensuring proper deployment and configuration.
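The CIFC role above centers on portfolio risk metrics such as annualized return, volatility, Sharpe ratio, and maximum drawdown. As an illustration only (this is not the production code, and the function names here are hypothetical), those metrics can be sketched in plain Python from a series of periodic returns:

```python
# Illustrative sketch of the portfolio risk metrics named above,
# computed from a list of periodic (e.g. monthly) fractional returns.
import math

def annualized_return(returns, periods_per_year=12):
    """Geometric mean return, annualized from periodic returns."""
    growth = 1.0
    for r in returns:
        growth *= (1.0 + r)
    years = len(returns) / periods_per_year
    return growth ** (1.0 / years) - 1.0

def annualized_volatility(returns, periods_per_year=12):
    """Sample standard deviation of periodic returns, annualized."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=12):
    """Excess annualized return per unit of annualized volatility."""
    excess = annualized_return(returns, periods_per_year) - risk_free_rate
    return excess / annualized_volatility(returns, periods_per_year)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative growth curve."""
    peak, level, worst = 1.0, 1.0, 0.0
    for r in returns:
        level *= (1.0 + r)
        peak = max(peak, level)
        worst = min(worst, level / peak - 1.0)
    return worst

monthly = [0.02, -0.01, 0.03, 0.01, -0.04, 0.02]
print(round(max_drawdown(monthly), 4))  # prints -0.04
```

In the resume's setting these inputs would come from the SQL Server cashflow tables via pandas; plain lists are used here only to keep the sketch self-contained.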
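The CIFC stored procedures above hand IRR calculations off to Python. A minimal sketch of that kind of computation, assuming one cashflow per period and using simple bisection rather than any production library (the function names are hypothetical), is:

```python
# Hypothetical sketch: solve NPV(rate) = 0 for the internal rate of return.

def npv(rate, cashflows):
    """Net present value of per-period cashflows at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR via bisection; assumes NPV changes sign on [lo, hi]."""
    if npv(lo, cashflows) * npv(hi, cashflows) > 0:
        raise ValueError("NPV does not change sign on the bracket")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return (lo + hi) / 2.0

# A fund that invests 1000 and receives 600 in each of the next two periods:
print(round(irr([-1000.0, 600.0, 600.0]), 4))  # prints 0.1307
```

Bisection is chosen here only because it is robust for a single sign change; a production implementation might instead use Newton's method or a library routine.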
## Education

### Engineer’s Degree in Computer Science and Technology
Amity University, Noida

### High School
Cambridge Foundation School

## Contact & Social

- LinkedIn: https://linkedin.com/in/manish-kukreja-51603785

---

Source: https://flows.cv/manishkukreja
JSON Resume: https://flows.cv/manishkukreja/resume.json
Last updated: 2026-04-05