# Manu

Full Stack Developer | Python | Django | Flask | FastAPI | OpenAI | AWS, Azure, GCP | APIs | Git | Gen AI | SQL | Microservices | LLM | RAG | CI/CD | SDLC | PySpark | Docker | DevOps | Kubernetes | Pytest | Front-Office | Agile

Location: New York City Metropolitan Area, United States
Profile: https://flows.cv/manu1

## Profile

I’m a Full-Stack Software Engineer with a strong focus on Python, building scalable, secure, and high-performance web applications, from backend APIs to modern frontend interfaces.

### Backend Engineering

Python is my primary language, and I use it to design clean, maintainable backend systems and APIs with FastAPI, Django, and Flask. I have deep experience building RESTful services, handling authentication and authorization, and integrating backend logic with databases such as PostgreSQL, MySQL, MongoDB, DynamoDB, and Redis. I focus on performance, data integrity, and scalable architecture, ensuring backend systems are production-ready and easy to evolve.

### Frontend & Full-Stack Development

On the frontend, I build responsive and user-friendly interfaces using React, Next.js, JavaScript, TypeScript, HTML5, and CSS3. I enjoy working across the full stack: connecting frontend applications to backend APIs, managing state, and delivering smooth user experiences.

### Machine Learning & AI Integration

I’ve developed and deployed machine learning models using scikit-learn, TensorFlow, and PyTorch, integrating them into backend services with FastAPI and deploying them for real-world use cases.

### Data, Integrations & APIs

I have experience building data pipelines and backend integrations using Apache Kafka, Airflow, and cloud-native data services. I’ve worked extensively with ETL workflows and large datasets, supporting analytics and data-driven applications. Expert in building scalable, high-performance RESTful APIs using Django, Flask, and FastAPI.
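The REST-style routing behind such APIs can be illustrated with a minimal pure-Python sketch. This is a hypothetical stand-in, not actual FastAPI/Django code: the `get_user` handler, the in-memory `users` table, and the single route are invented for illustration.

```python
# Hypothetical sketch of REST-style routing: map (method, path) to a handler.
# Real projects would use FastAPI/Django/Flask; this only shows the pattern.
import json

def get_user(user_id):
    # Stub handler standing in for a database lookup.
    users = {"1": {"id": "1", "name": "Ada"}}
    user = users.get(user_id)
    if user is None:
        return 404, {"error": "not found"}
    return 200, user

def dispatch(method, path):
    # Match GET /users/<id> and call the handler; anything else is rejected.
    parts = path.strip("/").split("/")
    if method == "GET" and len(parts) == 2 and parts[0] == "users":
        status, body = get_user(parts[1])
        return status, json.dumps(body)
    return 405, json.dumps({"error": "unsupported"})
```

In a framework like FastAPI this dispatch table is what path-operation decorators generate for you, along with request parsing and validation.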
Proficient in backend data integration with MySQL, PostgreSQL, MongoDB, and DynamoDB for efficient data handling and persistence.

### Cloud Infrastructure

Extensive hands-on expertise across major cloud platforms: AWS (EC2, Lambda, S3, RDS, SQS, CloudFormation, ECS, EKS), Azure (Blob Storage, DevOps, AKS, SQL Database), and GCP (Compute Engine, GKE, BigQuery, Cloud Pub/Sub), with a strong focus on scalability, security, and automation.

### Production, Security & Automation

I’m experienced in taking applications to production using Docker and Kubernetes, implementing secure API design with OAuth2, JWT, and SSO, and setting up monitoring with tools like Prometheus, Grafana, Datadog, and the ELK stack. I also automate workflows and backend operations using Python and Bash.

Tech Stack: Python, FastAPI, Django, Flask, React, Next.js, JavaScript, TypeScript, HTML, CSS, PostgreSQL, MySQL, MongoDB, Redis, Kafka, Airflow, Docker, Kubernetes, Git

## Work Experience

### Full Stack Developer @ UBS

Jan 2024 – Present

- Played a key role in developing scalable web applications in Python, using the Django and Flask frameworks to build robust server-side systems. Integrated libraries such as NumPy and SciPy for advanced data processing and complex mathematical computations.
- Developed and maintained ETL pipelines to aggregate and transform data from multiple sources into centralized databases and data warehouses for analytical use.
- Designed and implemented dynamic, responsive user interfaces using JavaScript, HTML, CSS, and React.js. Created reusable React components and applied modern CSS techniques to ensure an intuitive, seamless user experience.
- Worked with NoSQL databases such as Cassandra and MongoDB to efficiently manage and retrieve unstructured data at scale.
- Utilized ETL tools like Apache NiFi and AWS Glue to extract data from MySQL and ingest it into data lakes on Amazon S3 and HDFS.
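The extract-transform-load pattern in the bullet above can be sketched in plain Python. This is a toy stand-in for a NiFi/AWS Glue job, assuming invented record fields and cleaning rules, with a CSV string in place of a MySQL source and a JSON-lines string in place of an S3 write.

```python
# Toy ETL pass, a hypothetical stand-in for a NiFi/AWS Glue job.
import csv
import io
import json

RAW = "id,amount,currency\n1, 10.50 ,usd\n2,3.25,USD\n"

def extract(text):
    # "Extract": parse CSV rows (a real job would query MySQL).
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # "Transform": normalize types, whitespace, and casing.
    return [{"id": int(r["id"]),
             "amount": float(r["amount"].strip()),
             "currency": r["currency"].strip().upper()} for r in rows]

def load(rows):
    # "Load": serialize as JSON lines, as one might write objects to S3.
    return "\n".join(json.dumps(r) for r in rows)

pipeline_output = load(transform(extract(RAW)))
```

Keeping the three stages as separate pure functions is what makes such pipelines easy to test and to port between orchestrators.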
- Designed and maintained PostgreSQL and MySQL databases to support the backend operations of web applications.
- Leveraged AWS cloud services, including Lambda for serverless compute, DynamoDB for NoSQL storage, and S3 for object storage, enabling high-performance, cost-efficient cloud-based solutions in Python.
- Enhanced web application functionality by integrating AI/ML capabilities using Python, improving automation and data-driven features.
- Monitored and analyzed logs from distributed systems such as Hadoop, and conducted performance analysis of SQL scripts. Built scalable ETL workflows using Python and AWS Glue, and processed large datasets with AWS EMR for big data analytics.
- Employed YAML and JSON for configuration management and data exchange, improving deployment efficiency and system maintainability through clean, adaptable formats.
- Contributed to a microservices architecture by decomposing monolithic applications into lightweight, containerized microservices using Flask and Docker, improving scalability, reliability, and resource utilization.

### Python AI/ML Engineer @ J.P. Morgan

Jan 2023 – Jan 2024

- Specialized in backend development for AI/ML applications in Python, leveraging Django and Flask to build scalable services for deploying machine learning models. Designed and implemented models for classification and natural language processing (NLP) using TensorFlow, PyTorch, and scikit-learn.
- Created interactive, ML-driven front-end components using React.js and D3.js for real-time data visualization.
- Developed RESTful APIs to deliver machine learning predictions and facilitate A/B testing in production environments. Containerized inference pipelines and ETL workflows using Docker and deployed them via Azure Container Instances and Azure Kubernetes Service (AKS), with CI/CD automation powered by Jenkins.
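A prediction-serving endpoint of the kind described above can be sketched as a plain handler: JSON features in, JSON prediction out. This is a hypothetical illustration; the linear "model" weights, feature names, and decision labels are invented stand-ins for a trained model behind a real API.

```python
# Sketch of a model-serving handler: JSON features in, JSON prediction out.
# WEIGHTS/BIAS are a hypothetical stand-in for a trained model's parameters.
import json

WEIGHTS = {"age": 0.02, "balance": 0.001}
BIAS = -0.5

def predict_handler(request_body):
    # Parse the request, score with the stub linear model, return JSON.
    features = json.loads(request_body)
    score = BIAS + sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items())
    label = "approve" if score > 0 else "review"
    return json.dumps({"score": round(score, 4), "label": label})
```

In production the handler body is what sits behind a framework route, with the model loaded once at startup rather than hard-coded.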
- Utilized Azure Database for PostgreSQL and MySQL for structured data management, and integrated vector databases such as FAISS and Pinecone to enable semantic search and retrieval-augmented generation (RAG).
- Managed end-to-end MLOps workflows with MLflow for experiment tracking, model versioning, and reproducibility.
- Automated infrastructure provisioning and model deployment using Terraform, integrated with Jenkins pipelines to ensure consistent, reliable CI/CD for AI applications.
- Implemented real-time feature engineering pipelines using Azure Event Hubs (Kafka-compatible), enabling low-latency data streaming for online inference. Integrated these pipelines with Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics for scalable data preprocessing.
- Deployed containerized ML workloads on AKS to support scalable, resilient AI infrastructure.
- Set up robust monitoring and alerting for AI workloads using Azure Monitor, Application Insights, Grafana, and custom Python-based metrics.
- Integrated large language models (LLMs) into applications for tasks such as summarization and question answering, and developed RAG pipelines combining vector-based search with LLM APIs.

### Python Data Engineer / Full-Stack Developer @ Cigna Healthcare

Jan 2020 – Jan 2022

- Gained experience with DevOps methodologies, with a focus on Agile practices. Contributed to an interactive project environment that supported frequent feedback and iterative development to improve software delivery.
- Began working with Amazon Web Services (AWS), laying the foundation for cloud platform integration and learning how cloud capabilities support resource management, scalability, and cost-effectiveness.
- Became familiar with Docker for containerization and Kubernetes for managing containerized applications. Assisted with deployment, scaling, and orchestration, contributing to system stability.
- Learned continuous integration and continuous delivery (CI/CD) principles using Jenkins, contributing to the automation of software deployment and picking up the basics of version control.
- Collaborated on using Ansible for configuration management and automation, helping ensure system consistency, reduce human error, and simplify deployment processes within the project.
- Participated in implementing Grafana for basic system monitoring and metric visualization, contributing to real-time insight into the project’s infrastructure and early issue identification.
- Started using GitLab for version control, becoming familiar with code management and team collaboration. Used Jira for issue tracking and Slack for day-to-day communication.
- Began contributing to project documentation in GitBook, ensuring essential project configurations, processes, and best practices were well documented.

## Contact & Social

- LinkedIn: https://linkedin.com/in/manu-dev

---

Source: https://flows.cv/manu1
JSON Resume: https://flows.cv/manu1/resume.json
Last updated: 2026-04-05