# Kaushik Baruri

Staff Software Engineer - Big Data at Magnite
Location: San Francisco, California, United States
Profile: https://flows.cv/kaushikbaruri

Completed a Master's in Computer Engineering (Computer Systems) in Spring 2016, with a concentration in Computer Systems: computer software and its lower-level abstractions in operating systems, database systems, and computer architecture. In my data engineering roles I work on distributed systems, leveraging Spark and building concurrent, distributed, and resilient message-driven applications. Before my Master's I had software development experience building data warehouse and business intelligence solutions to support business decisions, working extensively with Java, C, SQL, and database systems.

## Technical Skills

● Programming Languages: Java, Scala, Python, C#, C++, Unix Shell Scripting
● Operating Systems: Linux and Windows
● Databases: Redshift, Cassandra, Oracle 11g, MS SQL, PostgreSQL, MongoDB, distributed database platforms
● Frameworks: Hadoop, Spark, MapReduce, .NET, Java EE, Android
● Web Technologies: WCF, ASP, XML, JSON, HTML, CSS, JavaScript, Web Services (SOAP, RESTful)
● Tools: Eclipse, Maven, MS Visual Studio 2013, GitHub, Toad, Oracle Business Intelligence, Microsoft Business Intelligence, Selenium, GENI, HP QC, JIRA, Sauce Labs
● Simulators: Matlab, Cadence

## Work Experience

### Staff Software Engineer - Big Data Operations @ Magnite
Jan 2019 – Present | San Francisco Bay Area

### Senior Data Engineer @ Thrive Market
Jan 2018 – Jan 2019 | Greater Los Angeles Area

### Data Engineer @ The Honest Company
Jan 2016 – Jan 2018 | Greater Los Angeles Area

Created a framework to ingest data from various sources and give it a meaningful form for making important decisions. Worked with distributed systems, leveraging Spark for computation and datastores such as Redshift, Cassandra, and S3, and built concurrent, distributed, and resilient message-driven applications using Akka.
Blended these systems for efficient, synchronized computation and storage using Scala, Java, Python, and SQL. Leveraged tools such as Databricks, GitHub, and AWS infrastructure, with a strong inclination toward open-source frameworks. Created a real-time system using Kafka, Maxwell, and Spark Structured Streaming, applying distributed-systems concepts, to replace legacy systems.

### Business Intelligence Developer (Student Worker) @ Arizona State University
Jan 2015 – Jan 2016 | Tempe

● Created and updated web-based reports, involving HTML, CSS, and JavaScript development and use of the Corda tool.
● Analyzed data in an Oracle DB and wrote complex SQL queries to fetch and optimize the results.
● Worked on MS SharePoint content database tables in MS SQL Server to generate BI reports and create data-driven subscriptions.

### Software Engineering Intern @ iCrossing
Jan 2015 – Jan 2015 | Scottsdale, Arizona

● Developed a fully automated cross-browser test framework to check the consistency of pixel calls across browsers.
● Worked on Hive queries to capture pixel calls from the Hadoop system and replayed them in Sauce Labs VMs using Selenium WebDriver, passing data through the Sauce Connect REST APIs.
● Implemented a BrowserMob Proxy server to capture header information and check consistency across browsers.
● Tools and technologies: Python, Sauce Labs, Selenium WebDriver, GitHub, Maven, Eclipse - PyDev, Unix Shell, Confluence

### IT Intern @ RDH
Jan 2015 – Jan 2015

● Gathered requirements for GCS, a comprehensive compliance tool, to automate the scheduling and reporting process.
● Formulated the test plan and wrote test cases for QA of web applications for compliance inspections currently used by the City of Tempe.

### I.T. Analyst @ Tata Consultancy Services and British Telecommunications Plc
Jan 2010 – Jan 2014

● Implemented systems and software to drive business decisions and improvements in BT's lines of business.
● Worked on technical requirement gathering, architecture design, and development of security features and metadata in the data warehouse hosting MIS applications.
● Developed and maintained Extraction, Transformation and Loading (ETL) processes to acquire and load data, involving UNIX scripts, Java and SQL coding, indexes, and database I/O performance tuning.
● Worked on the physical, business, and presentation layers of the Oracle Business Intelligence repository to develop highly formatted reports and interactive dashboards.
● Performed test-driven development for unit testing and wrote scripts to automate processes for continuous integration testing, complying with CMMI Maturity Level 5.
● Created test plans and test cases for system integration, user acceptance, and operational readiness testing of the telecom OSS stack.
● Resolved bugs and worked in all phases of the SDLC, using Agile methodologies.

### Project Trainee @ Indian Statistical Institute, Kolkata
Jan 2009 – Jan 2009

● Devised a power-saving, cluster-based routing algorithm for sensor networks.
● Evaluated performance against benchmarks such as the LEACH algorithm. Tools used: Matlab

## Education

### Master's Degree in Computer Engineering (Computer Systems)
Ira A. Fulton Schools of Engineering at Arizona State University

### Bachelor of Technology - BTech in Electrical and Electronics Engineering
National Institute of Technology Durgapur

## Contact & Social

- LinkedIn: https://linkedin.com/in/kaushik-baruri

---

Source: https://flows.cv/kaushikbaruri
JSON Resume: https://flows.cv/kaushikbaruri/resume.json
Last updated: 2026-04-12