With 20 years of experience across games, data, infrastructure, and machine learning, I consider myself a generalist. Before joining Course Hero, I shipped three large video games, delivered a machine learning product, and led teams of various sizes across diverse technology stacks.
Experience
2022 — Now
Key architect of the company's data-driven culture, having built the foundational platform that the entire organization relies on. The Heroflow framework and the Medallion data warehouse are now the standard for hundreds of engineers and analysts. As a trusted advisor and mentor, I have helped 10+ teams solve their hardest problems, strengthening the company's culture of collaboration.
• Saved over $2M annually by leading company-wide cost optimization initiatives. Worked hands-on with multiple teams to optimize infrastructure and data pipelines, and to implement proactive monitoring. Led the zero-downtime migration of a 1B-vector embedding index to OpenSearch, saving $300K annually while maintaining performance parity.
• Stepped in to rapidly replace a critical legacy data ingestion system after its previous owners departed, preventing any disruption to data availability. Designed the new system with the team for simplicity and maintainability, cutting operational costs by 60%.
• Committed to team growth and mentoring: contributed to my team earning a "high engagement" rating in company surveys. Improved the engineering recruitment process to reduce bias and raise the technical bar. Frequently advised teams such as SEO, Machine Learning, and Analytics on infrastructure, data engineering, and cost management.
2020 — 2022
San Francisco Bay Area
• Built Heroflow, a declarative data processing framework in Python, organically adopted by 30+ engineers across the company. Received widespread positive feedback for its ease of use, flexibility, and ability to accelerate development.
• Designed and implemented the foundational Medallion data warehouse, which serves as the single source of truth for hundreds of internal customers and remains a cornerstone of the company’s data infrastructure.
• Led the use of Apache Airflow as the company's central data orchestrator. Enabled 400+ company-wide automated processes, and empowered teams to self-serve their data needs. Authored "CHOperators", a shared library of custom operators to standardize and simplify pipeline development. Created "Course Hero Academy" training materials and template DAGs to accelerate developer onboarding. Drove adoption by providing rapid and continuous support to numerous teams and individuals.
2018 — 2020
San Francisco Bay Area
• Founding backend member (1 of 2); co-grew the team from 2 to 15 engineers, establishing hiring standards, onboarding, and technical bar.
• Designed and deployed a multi-cloud and multi-region Kubernetes architecture that minimized latency to players while providing great availability, scalability, and cost advantages.
• Contributed to most parts of the codebase: game hosting service, web api, presence, build system, monitoring, deployments, etc.
• Instituted engineering fundamentals (coding guidelines, code review practices, CI/testing, observability) to accelerate safe delivery.
• The platform's technical readiness supported the company's growth from Series A to Series C; new hires frequently praised its stability, completeness, and clean, simple implementation as easy to work with.
2016 — 2018
San Francisco Bay Area
At Scientific Revenue we pioneered machine learning for price optimization in the digital economy. Some of my achievements:
• Developed the vision and a prototype for version 2 of the platform.
• Reduced our AWS bill by 50%.
• Re-architected our release process, moving to a weekly cadence of well-tested releases while reducing deployment downtime by 90%.
2015 — 2016
San Francisco Bay Area
As part of the platform team, some of my accomplishments:
• Rewrote a significant portion of the Java backend to make it scalable while simplifying the architecture. After 4 months of work, the new backend went into production seamlessly, with no downtime and no significant bugs.
• Delivered several performance optimizations, including two batch jobs that went from over 10 hours to minutes.
• Designed and implemented some business-critical features that encompassed the full system, from the SDKs to the Data Pipeline.
• Implemented an innovative debugging mechanism that improved system reliability by making it easier to detect bugs before deploying to production, preventing dozens of production bugs.
• Deployed and managed an Elasticsearch stack as our logging infrastructure.
Education
Universidad de Sevilla