Senior Software Engineer with strong hands-on coding and systems design skills. I build scalable software systems and tackle hard technical challenges while guiding technical decisions and mentoring junior team members.
Experience
2020 — Now
San Francisco, CA
Building out the future of logistics!
2017 — 2019
San Francisco Bay Area
• Implemented autonomous vehicle remote advisor software hands-on using React and Node.js
• Implemented core features of Cruise’s fleet management UI and back end in JavaScript and C++, adding live streaming video from autonomous vehicle interior cameras
• Developed REST APIs in Node.js to integrate the third-party Genesys telephony platform with Cruise self-driving car APIs for two-way communication between vehicle and call center
• Developed microservices using gRPC and Protobuf over WebSockets in Node.js and TypeScript, backed by a ZeroMQ queue
• Built and deployed microservices on Docker and Google Kubernetes Engine
• Led the development of iOS mobile applications
• Delivered high-quality, bug-free software on time and on budget to meet strategic technical and business objectives
• Conducted technical design sessions with other engineers to guide implementation and architecture of new features
• Conducted regular code reviews and pair-programmed with other engineers to promote best practices, solve hard problems, and pay down technical debt
• Wrote and reviewed technical design documents for new features and microservices
• Participated in and helped guide sprint planning meetings, end-of-sprint demos, retros, and daily scrum meetings
• Worked with the product team to develop user stories and prioritize the engineering backlog for upcoming sprints
• Provided technical mentorship to other engineers and supported their career growth into senior technical roles
• Worked with senior engineering leadership and Technical Project Managers to set a roadmap and allocate technical resources to sprint goals
• Helped create an engaging and collaborative engineering culture that embodied our core values and attracted top talent to a high-performing team
• Implemented engineering best practices (e.g., code reviews, agile development) and tooling (e.g., Git, Jira)
2014 — 2015
San Francisco Bay Area
Expa is a consumer-oriented tech startup incubator in San Francisco. As part of the core engineering team, I worked on Metabase, an analytics application for quickly visualizing datasets and extracting key performance indicators and other business performance metrics. For more information, visit http://www.metabase.com.
My duties included:
• Created Metabase, a scalable analytics platform based on Python and JavaScript
• Enabled meaningful customer insights by filtering and analyzing the most relevant transactional and time-series data from customer databases
• Created both generalized and specialized data analytics applications and dashboards, highly customizable by the end user, that provided rapid access to highly relevant data
• Developed specialized ETL processes to import raw data from internal and external data repositories
• Leveraged the Django and AngularJS development frameworks and Redis task queues
• Drove product and feature development by interfacing with customers and stakeholders
• Performed competitive market analysis
• Optimized a high-performance analytics platform backed by PostgreSQL via the SQLAlchemy ORM
• Built a single-page web application backed by Django REST Framework and AngularJS
• Created a rich data visualization user interface using Highcharts and Twitter Bootstrap
• Contributed to the open-source project Angular-Gridster
2012 — 2014
San Diego, CA
As Lead Software Engineer, I led the development of Asset Commander, Opera Solutions’ Big Data risk and asset management platform for funds of hedge funds, designed to enhance an investor’s ability to make informed portfolio and allocation decisions.
My duties included:
• Led a team of software developers building backend analytics and data processing software using Apache Pig on the Hadoop stack
• Managed and synchronized work streams between multiple distributed development teams in different time zones
• Implemented a scalable financial analytics backend using Hadoop MapReduce and other distributed technologies
• Ported a large-scale financial data processing and analytics engine from SQL to Pig
• Architected and implemented data flow logic from backend analytics output to web services to the UI by integrating Pig, MongoDB, and Spring Roo using JSON
• Integrated analytics and back-end code using Oozie
• Prototyped new data processing frameworks (e.g., YARN, Spark) in public and private cloud environments
• Developed JSON-based REST web services using Spring Roo to serve data produced by the analytics backend
• Mentored junior developers and provided guidance on best practices, testing, performance, integration, documentation, and reusability
• Worked closely with product and design teams to understand product design and requirements and distill them into actionable tasks for development teams
• Provided detailed development roadmaps and timelines
2011 — 2012
San Diego, CA
• Responsible for the company’s cloud computing strategy
• Processed large numbers of machine translation requests in parallel using Apache Hadoop and Cloudera’s Distribution for Hadoop
• Prototyped a high-capacity, scalable logging system using Apache Chukwa
• Developed tools to deploy, monitor, and scale large numbers of machine translation compute resources using Spring and Hibernate
• Created user interfaces for the deployment, monitoring, and scaling tools using Node.js and jQuery
• Implemented a JSON-based REST API to facilitate communication between user interfaces and back-end tools
• Leveraged Apache ZooKeeper to implement scalable, real-time, fault-tolerant monitoring and provisioning of compute resources
• Conducted extensive performance testing and optimization of translation requests in a distributed computing environment using Python
• Created an in-house implementation of automatic scaling of compute resources based on load in a hybrid cloud environment
• Integrated closely with Amazon Web Services to automatically deploy public cloud compute resources using EC2, monitor resources using CloudWatch, and send notifications when resources went offline using SNS
• Worked closely with the cloud computing team at eBay to implement automatic scaling and monitoring of SYSTRAN translation resources on their servers
Education
San Diego State University
Computer Science
Blockchain University