# Ajinkya Parkar

> Senior Software Engineer at SMBC

Location: Jersey City, New Jersey, United States
Profile: https://flows.cv/ajinkya
GitHub: https://github.com/ajinkyaaa

As a Python big data engineer with a passion for innovation, I specialize in leveraging data to drive business results. With a graduate degree in computer science and expertise in Python, Scala, and Apache Spark, I help companies extract valuable insights from their data to make more informed decisions and drive growth. I have taken courses in Deep Learning, Algorithms and Data Structures, Big Data Analytics, Parallel and Distributed Computing, and more. Through advanced predictive modeling and optimization techniques, I apply my analytical skills to extract valuable insights from large data sets and enable teams to make effective data-driven decisions. I graduated from Pace University, NY in 2018 and currently work as a Senior Software Engineer at SMBC Group.

## Work Experience

### Senior Software Engineer @ Sumitomo Mitsui Banking Corporation – SMBC Group
Jan 2023 – Present | New York, United States

• Built and maintained AWS pipelines for production deployment.
• Used AWS Airflow to schedule reports and maintained them by identifying issues in the logs.
• Resolved issues in compliance reports by updating the SQL, performing thorough unit testing, and deploying the changes to production.
• Processed compliance reports using AWS, Python, and Spark.

### Senior Big Data Engineer @ HiLabs
Jan 2023 – Jan 2023 | United States

• Used Databricks and AWS to load, process, and transform healthcare data.
• Used Apache Spark with Scala/Python and machine learning models to identify correct records and update the data accordingly.
• Helped the team set up Spark clusters with proper configurations to process data.
• Worked with clients to define use cases for the project and designed the architecture for the data flow.
### Software Engineer @ IBM
Jan 2018 – Jan 2023 | United States

• Used PySpark, Scala, and SQL with NoSQL databases (Solr and MongoDB) to identify subnets with utilization greater than 80% in a 24-hour window across different geolocations. Analyzed their patterns to classify them as unusual or normal and generated subnet utilization reports that helped IBM monitor network activity and performance.
• Implemented a real-time subnet health-check and reporting system using Python, Kafka, and MongoDB that sent Slack notifications and emails to product owners, significantly lowering downtime and improving performance.
• Used Python/Scala with Spark and machine learning libraries such as scikit-learn, NumPy, and pandas DataFrames to identify discrepancies in device data extracted from raw files by comparing it against another data repository. Identified unreachable devices and their counts; a high-level summary of that report helped the network team see which devices were down or inactive so they could better manage inventory.
• Created a multi-threaded Python report that scanned network alerts (ranging from HTTP throughput to DHCP success rate) to identify areas needing Wi-Fi support. Enhanced the data by removing noise and duplicates that could skew the results, and helped the client determine whether a sensor was down and generating inaccurate output.
• Generated a report for the Umbrella dashboard that identified top violators in content and security categories. It gave the team a high-level summary of how many requests were blocked or approved, and historical data analysis helped identify whether there was a security threat or attack on IBM systems.
• Used Angular to create a UI that gave clients high-level and drill-down summaries. Created a Node.js API for insert, update, and delete operations, which helped us store and retrieve frequently used information, lowering the number of API calls.

### Software Developer @ IBM
Jan 2017 – Jan 2018 | New York

• Worked on an Angular 5 project dealing with network exceptions, using big data analytics to find anomalies in the system.
• Worked with Scala Spark, PySpark, UI, APIs, MongoDB, SQL, HDFS, and Solr.

### Software Developer @ eClerx
Jan 2013 – Jan 2016 | Airoli

• Used C# ASP.NET MVC with Entity Framework and SQL to build a recruitment portal from scratch, which helped the company, specifically the HR team, initiate and manage the end-to-end hiring process.
• Used JavaScript with Knockout to build a flexible, interactive UI. This improved the performance of several web pages because data was loaded on the fly, resulting in a lighter system and faster response times.
• Used SQL to create stored procedures, temp views, schema replication, and data backups. The scripts gave the team easier deployments and a way to identify and resolve data issues, and were also used to send automated emails triggered under certain conditions.
• Applied OOP concepts to create the business and service layers of projects, which helped organize and better manage the code.
• Reduced page load time from 8 minutes to 10 seconds: for users with admin privileges, the UI pulled large amounts of data to build a hierarchical tree structure. Instead of loading everything at once, I created UI components that fetched data only when specific events were triggered.
• Optimized stored procedures by using execution plans to identify high-runtime queries and modifying them with indexing, pagination, etc.
• Handled deployment and maintenance of the production environment.
## Education

### Master's Degree in Computer Science
Pace University
Jan 2016 – Jan 2018

### Bachelor's Degree in Information Technology
Saraswati Education Society's Saraswati College of Engineering, Kharghar, Navi Mumbai

## Contact & Social

- LinkedIn: https://linkedin.com/in/ajinkya-parkar

---

Source: https://flows.cv/ajinkya
JSON Resume: https://flows.cv/ajinkya/resume.json
Last updated: 2026-03-22