# Ramesh Manchikanti

> Staff Engineer at CDK Global

Location: Sunnyvale, California, United States
Profile: https://flows.cv/rameshmanchikanti

Proven architect with 15+ years of engineering experience in the design and development of large-scale applications in data warehousing, big data, and BI environments.

• Working with the Chief Technology Officer to improve software processes and team culture, plan platform roadmap execution, and convert ideas and vision into executable tasks.
• Managing geographically distributed teams responsible for developing and maintaining world-class data platforms and systems.
• Deft at juggling many competing priorities and prioritizing for best results.
• Providing technical guidance, mentorship, and assistance as a go-to engineering point of contact.
• Highly adaptable to new technologies and enthusiastic about developing proofs of concept with them.
• Driving planning and review meetings as Design Architect for geo-located teams.
• Leading code and design reviews for new functionality and bug fixes.
• Participating in cross-functional planning and reviews with leads from the Product Management and Quality Assurance functions.
• Managing escalations, production issues, and new product deployments to customers.

## Work Experience

### Staff Software Engineer @ CDK Global
Jan 2021 – Present | San Jose, California, United States

Working in the Platform team, I am responsible for guiding various modernization teams on data store architecture, database design, and process reviews, ensuring business requirements are implemented correctly without scalability or portability issues. As a Staff Software Engineer at CDK Global, I lead data modernization and platform engineering initiatives across multiple high-impact projects. I have driven RDS scalability and HA/DR enhancements, implemented partition-reorganization and purge/archive frameworks, and optimized SQL and ETL workflows for modernization schemas.
I provide back-end data support for critical programs including Unify and Embedded Reporting, and have delivered tools such as SQL performance analyzers, cross-platform migration utilities, and automation scripts to improve reliability, observability, and cost efficiency. Partnering with cross-functional teams, I ensure data design best practices, migration strategies, and monitoring frameworks are in place, delivering scalable, performant, and future-ready solutions that enhance both team productivity and customer experience.

### Sr. Big Data Engineer/ETL @ LinkedIn
Jan 2020 – Jan 2021

Worked in Big Data Engineering, focusing on the migration of legacy AppWorx/Lassen-based datasets. This effort involved analyzing and retrofitting around 50 datasets for the LMS team, transforming them into new, efficient de-normalized tables and files. The LMS datasets are built on a homegrown UMP/UDP framework and use technologies such as Hive-SQL, Spark-SQL, Pig, and Scala, running on Apache Spark. As part of streamlining the BDE SOT tables, I developed custom UDFs using the organization's standard UDF framework to enhance processing efficiency and maintainability. In addition, I converted approximately 50 Pig data flows to Spark-SQL, significantly improving performance, scalability, and alignment with modern big-data practices.

### Sr. Big Data Engineer/ETL @ Prudential Financial
Jan 2019 – Jan 2020 | Sunnyvale, California, United States

Led the design and development of the new PruDB2DL process, introducing enhanced logging, tracking, and monitoring features for better system observability and performance. To close key process gaps, I built several automation tools, including a Log Purge and Archive utility, a Job Monitor for long-running jobs, a Basic Data Quality (DQ) process, and a Job Status Reporting solution.
As part of the SIA project, I automated 5 of 8 NFS feeds using the in-house Fast Data Exchange (FDE) framework built on Scala and Spark, while improving the existing code for reliability. I also delivered WSG Alert Feeds and developed a fully automated Reconciliation Feed for the RET team. Additionally, I created a scalable mass-loading framework to ingest ~3,500 tables into the Data Lake for departments such as GI, and mentored several team members to help the team achieve delivery excellence.

### Big Data Engineer @ Samsung Electronics America
Jan 2018 – Jan 2018 | San Francisco Bay Area

Worked in the Big Data Science team, responsible for delivering project Sparkle, which provides customer metrics to various departments such as marketing and business analysts.

• Designed and deployed 23–25 ETLs for the Data Science team.
• Provided necessary data for models, set up model ETL, and uploaded scores.
• Developed a tool to generate DSMF definitions from Excel.
• Developed 15–17 DSMF definitions for the monitoring framework.

Technologies: Hive, Bash scripting, Python, Oozie

### ETL Architect @ Fishbowl Inc.
Jan 2015 – Jan 2017 | Santa Clara

The company provides a highly scalable, closed-loop restaurant-marketing SaaS platform that ingests data from various sources, including email, SMS, social, online ordering, loyalty programs, reservations, and more. The analytics platform provides clients with actionable insights about guests, menus, pricing, media mix, and social media.

• Delivered a multithreaded Java-based ETL scheduler, along with other tools such as TDE (Transaction Data Extractor), which extracts internal systems' data into GA. Calculated the distance between stores using the Google API for longitude and latitude in order to determine neighborhood stores.
• Resolved scaling issues from earlier versions by implementing Pig- and Oozie-based frameworks for new ETLs.
• Architected and delivered the latest version of GA with a new data model and a highly scalable, complete ETL, including features such as monitoring, SLA processing, and re-scheduler capabilities.
• Delivered a highly customizable, easy-to-use ad hoc reporting module for the sales team's needs using VBA, which was extended to generate QA reports.
• Developed several utilities in Pig, such as a MurmurHash UDF, a UDF to handle Parquet int96 timestamps, and a PigLoadDB function to read data directly from a database.
• Mentored development teams to achieve project goals. Gathered business requirements and architected scalable solutions. Provided support for release activities for Guest Analytics. Performed cross-platform integration and gap analysis of Promotion Manager, Campaign Manager, and GA.

Technologies: Hadoop, Hive, HBase, Drill, MonetDB, MySQL, MS SQL Server, Kylin, Oozie, Pig, Druid, Talend, Spark (as backend engine), SVN, Git

### Sr. Software Engineer @ Yahoo!
Jan 2011 – Jan 2015 | Sunnyvale

Worked on Sonar, Lighthouse, and PYM Tools enhancements.

Project Sonar provides anomaly detection for YAM+. Developed Sonar using Hadoop, Oozie, and Pig workflows, with R scripts for anomaly detection, and provided a RESTful API on a Jetty server to integrate with the YAM+ dashboard. Models incorporate user feedback to reduce false positives.

Project Lighthouse, a component of ADW (Analytical Data Warehouse), generates YMon scoreboards with the status of Oozie workflows and alerts SEs to any failures. Monitored the health of the Druid system by developing Java Jersey REST-based web services for event-data capture. JMS monitoring gave users near real-time push instead of interval pull, which reduced waiting time for data availability.

For PYM Tools, worked on setting up the dev environment and the globalization of Pegasus, using a one-percent user-activity feed. Developed a robust ETL process on Oracle DB and MySQL for Hack/CEO challenges.
Provided data on short timelines for keyword targeting and the ad-metrics (bucket) system, and developed UNIX shell-based Pig script scheduling and monitoring, along with Pig script development.

Technologies: Hadoop, Oozie, Pig, Hive, Storm, Kafka, Java, Jenkins CI/CD, YMon, SVN, Git, Oracle, MS SQL Server, SSIS, SSAS

Environment: Solaris UNIX, Teradata V2R5, Informatica, MS Office products

### BI Solution Architect @ Saama
Jan 2010 – Jan 2011 | San Jose, CA

### ETL Lead Consultant @ WhyWhere
Jan 2006 – Jan 2010 | San Ramon

### Sr. Software Engineer/PL @ Amdocs
Jan 2001 – Jan 2007 | San Jose

### Sr. Software Engineer @ Fortuna Technologies
Jan 1996 – Jan 2000

## Education

### Master's degree in Computer Science
Jawaharlal Nehru Technological University

### Master's degree in Statistics
Sri Krishnadevaraya University

### Certificate in Machine Learning & Big Data
Stanford University

### Certificate in R Language
The Johns Hopkins University

## Contact & Social

- LinkedIn: https://linkedin.com/in/ramesh-manchikanti-2281641

---

Source: https://flows.cv/rameshmanchikanti
JSON Resume: https://flows.cv/rameshmanchikanti/resume.json
Last updated: 2026-04-12