# Patrick Salami

> Staff Software Engineer at DoorDash

Location: San Francisco, California, United States
Profile: https://flows.cv/patricksalami

Senior Software Engineer with strong hands-on coding and systems design skills. I can build scalable software systems and tackle your biggest technical challenges while guiding technical decisions and mentoring junior team members. I have broad technical experience with a focus on backend development and modern design patterns such as event-driven microservice architectures.

As a passionate technologist, I enjoy hands-on roles that challenge my technical problem-solving skills and leverage my strong communication and interpersonal skills. I enjoy working with cross-functional, fast-paced software development teams with a focus on engineering best practices and on driving outstanding business value with cutting-edge technology. I have recent experience both with rapidly growing businesses and in enterprise settings.

My background as a Full-Stack Software Engineer includes strong hands-on development experience in Java, Python, JavaScript (React, Node.js, etc.), iOS, Swift, Objective-C, financial services, payments, risk management, big data analytics, distributed systems, Hadoop, and MongoDB. I have developed a number of large-scale web and mobile applications and distributed data processing platforms across a broad range of verticals, such as personal finance, e-commerce, and blockchain.
Additional skills: JavaScript, Node.js, React, TypeScript, GraphQL, TypeORM, Java, Python, Swift, Objective-C, C++, REST, test-driven development, application architecture, improving code quality, improving application performance, web services, microservices, big data, Spark, Hadoop, machine learning, ML pipelines, healthtech, fintech, transportation, logistics, video streaming, scaling businesses, technical leadership, Agile, Scrum, story point estimation, design sessions, technical architecture, interfacing with product team, technical mentorship.

## Work Experience

### Staff Software Engineer @ DoorDash
Jan 2020 – Present | San Francisco, CA

Building out the future of logistics!

### Lead Software Engineer @ Cruise
Jan 2017 – Jan 2019 | San Francisco Bay Area

- Hands-on implementation of autonomous vehicle remote advisor software using React and Node.js
- Implement core features of Cruise's fleet management UI and back end in JavaScript and C++ to add live streaming video from autonomous vehicle interior cameras
- Hands-on development of REST APIs using Node.js to integrate the third-party Genesys telephony platform with Cruise self-driving car APIs for two-way communication between vehicle and call center
- Microservice development using gRPC and Protobuf over a WebSockets API, built in Node.js and TypeScript with a ZeroMQ queue
- Microservice development based on Docker and Google Kubernetes Engine
- Lead the development of iOS mobile applications
- Deliver high-quality, bug-free software on time and on budget to meet strategic technical and business objectives
- Conduct technical design sessions with other engineers to guide implementation and architecture of new features
- Conduct regular code reviews and pair program with other engineers to guide best practices, solve hard problems, and pay down technical debt
- Write and review technical design documents for new features and microservices
- Participate in and help guide sprint planning meetings, end-of-sprint demos, retros, and daily scrum meetings
- Work with the product team to develop user stories and prioritize the engineering backlog for upcoming sprints
- Provide technical mentorship to other engineers, and support career growth into senior technical roles
- Work with senior engineering leadership and Technical Project Managers to set a roadmap and allocate technical resources to sprint goals
- Help create an engaging and collaborative engineering culture that embodies our core values and attracts top talent to a high-performing team
- Implement engineering best practices (e.g. code reviews, agile development) and tooling (e.g. Git, Jira)

### Senior Staff Software Engineer @ Expa
Jan 2014 – Jan 2015 | San Francisco Bay Area

Expa is a consumer-oriented tech startup incubator in San Francisco. As part of the core engineering team, I work on Metabase, an analytics application to quickly visualize datasets and extract key performance indicators and other business performance metrics. For more information, visit http://www.metabase.com.
My duties include:

- Create Metabase, a scalable analytics platform based on Python and JavaScript
- Enable meaningful customer insights by filtering and analyzing the most relevant transactional and time-series data from customer databases
- Create both generalized and specialized data analytics applications and dashboards that are highly customizable by the end user and provide rapid access to highly relevant data
- Develop specialized ETL processes to import raw data from internal and external data repositories
- Leverage the Django and AngularJS development frameworks and Redis task queues
- Drive product and feature development by interfacing with customers and stakeholders
- Perform competitive market analysis
- Optimize a high-performance analytics platform backed by PostgreSQL via the SQLAlchemy ORM
- Build a single-page web application backed by Django REST Framework and AngularJS
- Create a rich data visualization user interface using Highcharts and Twitter Bootstrap
- Contribute to the open-source project Angular-Gridster

### Lead Software Engineer @ Opera Solutions
Jan 2012 – Jan 2014 | San Diego, CA

As Lead Software Engineer, I lead the development of Asset Commander, Opera Solutions' Big Data portfolio and investment management platform that enhances an investor's ability to make informed portfolio and allocation decisions. Asset Commander is a risk and asset management platform designed for funds of hedge funds.
My duties include:

- Lead a team of software developers in the development of backend analytics and data processing software using Apache Pig on the Hadoop stack
- Manage and synchronize work streams between multiple distributed development teams in different time zones
- Implement a scalable financial analytics backend using Hadoop MapReduce and other distributed technologies
- Port a large-scale financial data processing and analytics engine from SQL to Pig
- Architect and implement data flow logic from backend analytics output through web services to the UI by integrating Pig, MongoDB, and Spring Roo using JSON
- Integrate analytics and backend code using Oozie
- Prototype new data processing frameworks (e.g. YARN, Spark) in public and private cloud environments
- Develop JSON-based REST web services using Spring Roo to serve data produced by the analytics backend
- Mentor junior developers and provide guidance on best practices, testing, performance, integration, documentation, and reusability
- Work closely with product and design teams to understand product design and requirements and distill them into actionable tasks for development teams
- Provide detailed development roadmaps and timelines

### Senior Software Engineer @ SYSTRAN
Jan 2011 – Jan 2012 | San Diego, CA

- Responsible for the company's cloud computing strategy
- Process large numbers of machine translation requests in parallel using Apache Hadoop and Cloudera's Distribution for Hadoop
- Prototype a high-capacity, scalable logging system using Apache Chukwa
- Develop tools to deploy, monitor, and scale large numbers of machine translation compute resources using Spring and Hibernate
- Create a user interface for the deployment, monitoring, and scaling tools using Node.js and jQuery
- Implement a JSON-based REST API to facilitate communication between user interfaces and backend tools
- Leverage Apache ZooKeeper to implement scalable, real-time, fault-tolerant monitoring and provisioning of compute resources
- Conduct extensive performance testing and optimization of translation requests in a distributed computing environment, using Python
- Create an in-house implementation of automatic scaling of compute resources based on load in a hybrid cloud environment
- Integrate closely with Amazon Web Services to automatically deploy public cloud-based compute resources using EC2, monitor resources using CloudWatch, and send notifications when resources go offline using SNS
- Work closely with the cloud computing team at eBay to implement automatic scaling and monitoring of SYSTRAN translation resources on their servers

### Senior Software Engineer @ Temboo, Inc.
Jan 2010 – Jan 2011

- Architect and develop the web application front end for the Temboo cloud computing infrastructure, enabling enterprise clients to execute highly scalable workflows in the cloud using a visual programming interface
- In charge of implementing scalable system architecture in the cloud using Hadoop and the Apache stack
- Re-architect the Temboo reporting system using HDFS, HBase, Scribe, and Hive to handle millions of records per day
- Migrate the storage system from PostgreSQL to HBase and Hadoop clusters to support multi-terabyte loads
- Responsible for the high availability and performance of Hadoop Distributed File System (HDFS) and HBase clusters
- Integrate various distributed technologies to leverage the Amazon Web Services (AWS) cloud using RightScale
- Research new technologies and connect with the open source community to solve current business problems
- Work closely with enterprise clients and management to analyze complex business requirements and develop appropriate cloud-based solutions
- Design and develop enterprise-grade user interfaces to connect with a load-balanced Service-Oriented Architecture (SOA) backend

### Senior Software Architect, Hadoop @ Extrabux
Jan 2008 – Jan 2010

- Architect and develop the comparison shopping and rewards web site www.extrabux.com, with a catalog of millions of product records, updated daily
- Responsible for overall technology infrastructure, software development, data flow, web site functionality, metrics, and uptime
- Create large-scale, distributed system architecture within the Amazon Web Services cloud
- Use the Hadoop distributed computing framework with Java to process and group vast amounts of raw product data automatically every day in under 1 hour
- Develop complex backend and customer-facing secure transaction processing and analysis systems
- Perform extensive data analysis and reporting using Pig and Hive
- Build autonomous, redundant, fail-safe, and highly scalable front-end and back-end systems using virtualization technology based on Ubuntu Linux Server for EC2 and the AWS APIs
- Build a distributed search index using Solr to power the web site's product search engine
- Work with cloud-based distributed technologies, making heavy use of the Apache stack, to scale product data while keeping processing time constant
- Apply familiarity with the technical intricacies of the Amazon cloud architecture to develop secure e-commerce solutions
- Integrate system automation deeply with AWS products such as EC2, Elastic MapReduce (EMR), SimpleDB, SQS, S3, CloudFront, Relational Database Service (RDS), and others
- Ensure consistent site performance and availability during traffic spikes using Apache 2 servers behind an Elastic Load Balancer and replicated MySQL 4.1 servers to allow automatic scaling of server resources based on load

### Software Engineer @ Veoh Networks
Jan 2006 – Jan 2008

As a key member of the front-end team, I contribute to the presentation layer of one of today's most active online video communities. I work primarily in conjunction with other engineers to provide a streamlined and solid user experience for our visitors, while integrating the latest designs and features into our site. Our technology platform is based around Java servlets, JSP, MySQL, the Spring Framework, HTML, and JavaScript.
### Software Engineer @ San Diego State University
Jan 2003 – Jan 2004

## Education

### Computer Science
San Diego State University

### Blockchain Technology
Blockchain University

## Contact & Social

- LinkedIn: https://linkedin.com/in/psalami

---

Source: https://flows.cv/patricksalami
JSON Resume: https://flows.cv/patricksalami/resume.json
Last updated: 2026-04-12