# Ash Zahlen

> Software Engineer

Location: San Francisco, California, United States
Profile: https://flows.cv/ashzahlen

I've worked all across the stack, from writing web UIs in JavaScript to optimizing C++ code on the backend. I'm a self-taught learner, capable of picking up new languages and frameworks on the job. I also try to leave any codebase cleaner than I found it, both by writing high-quality code and by cleaning up old, crufty parts of the codebase.

I'm not looking for work at this time. (Note that some of my work was under a previous name; I changed my name in late 2018.)

## Work Experience

### Software Engineer @ Sigma

Jan 2024 – Present

### Staff Software Engineer @ Afresh

Jan 2021 – Jan 2022 | San Francisco, California, United States

### Software Engineer @ Taking a Break

Jan 2020 – Jan 2021 | San Francisco, California, United States

Took a break from work during the COVID pandemic and to deal with other life issues. I continued programming on personal side projects during this time to keep myself sharp, mostly in Rust and Python. I also switched to NixOS on my personal machine, which gave me insight into alternative approaches to operating-system package management and a challenge to learn something new.

### Software Engineer @ r2c

Jan 2019 – Jan 2020 | San Francisco, California, United States

At the time, r2c's main product revolved around letting code-analysis authors build Docker images that communicated with the analysis platform via the host's filesystem. I worked on this aspect of the product, optimizing image setup/teardown time for faster local debugging, and splitting up the monolithic 'run this analysis' function by applying dependency-injection principles: for example, instead of a flag that chose between saving analysis output locally or uploading it to our servers, I modified the function to take an object that would process the output of the analysis.
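A minimal sketch of that kind of dependency-injection refactor, in Python (all names here are illustrative, not r2c's actual code):

```python
from typing import Protocol


class OutputHandler(Protocol):
    """Receives finished analysis results; implementations decide what to do."""

    def handle(self, results: dict) -> None: ...


class LocalSaver:
    """Example handler: records results (a stand-in for writing to disk)."""

    def __init__(self) -> None:
        self.saved: list[dict] = []

    def handle(self, results: dict) -> None:
        self.saved.append(results)


def run_analysis(source: str, handler: OutputHandler) -> None:
    # Instead of branching on a save-locally/upload flag, the caller
    # injects whatever handler it wants: save locally, upload, or a
    # test double that simply records the results.
    results = {"input": source, "findings": []}  # placeholder analysis
    handler.handle(results)


handler = LocalSaver()
run_analysis("print('hi')", handler)
```

In a test, the injected handler can simply capture the results object for assertions, with no filesystem or network involved.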
This meant we could unit-test the individual components in isolation, rather than only exercising the monolithic function end to end. I also reorganized the codebase to follow standard Python packaging best practices, which involved diving deep into the arcana of, e.g., how Python resolves imports.

I also wrote stadt, a library for serializing and deserializing the TypeScript compiler's representation of types (which cannot be serialized directly, as they refer to the underlying compiler object). This let us split TypeScript analyzers into an "infer types" phase and an "analyze types" phase, which saved a large amount of time when iterating on an analyzer or running multiple analyzers at once, since we could cache the type-inference output. I also built a visualization tool where users could enter a piece of JavaScript or TypeScript; the tool would run the TypeScript compiler in the user's browser and output the inferred types along with their stadt representation, similar to a compiler explorer like https://gcc.godbolt.org/. Both projects required diving into the compiler's internals, as the external documentation was insufficient for my needs; I read the compiler's type-inference machinery to determine the semantics of various fields and methods.

### Software Engineer @ Google

Jan 2014 – Jan 2019 | Mountain View, California, United States

I worked on the team responsible for the OCR system that powers Google Books's text search and Google Drive's automatic PDF OCR, as well as various other internal clients. This required learning C++, a language I was previously unfamiliar with; I picked it up quickly and eventually obtained "C++ readability", an internal certification signifying that I could write good, idiomatic C++ code.
I focused on improving the team's infrastructure: decreasing build times for the binaries and libraries we depended on, breaking monolithic classes and functions into easier-to-test subunits, and building a tool to visualize the data structure produced by the document OCR process. I also helped liaise between my team and other teams interested in our work, explaining our current capabilities and what would be feasible to implement.

I also worked on the frontend for TensorBoard, the visualization tool for Google's popular TensorFlow machine learning framework. My work focused on the code that loads from disk the 'events' TensorFlow emits; since TensorFlow runs could stop and restart, or write events to multiple files, I had to account for many edge cases and write thorough tests to make sure this code wouldn't break in production.

### Software Engineer Intern @ Google

Jan 2012 – Jan 2012 | Cambridge, Massachusetts, United States

I worked with the Big Picture team at Google Cambridge on the design of the etymology visualization that Google returns for searches like "etymology of mortgage". I was also responsible for writing the code that generated the graph from the partner-provided etymology data. Since the data was provided as annotated English text, I analyzed the corpus, determined patterns in how the etymologies were phrased, and used this to build a hand-crafted natural-language parser that transformed the text into a structured intermediate representation, which my code then rendered to the final graph. I was also able to hit the ground running: Google's internal infrastructure is very different from the outside world (e.g., its own version control system instead of Git, the Google Closure Compiler instead of bower or similar tools), but I quickly got used to the differences and wrote clean, high-quality code.
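A toy illustration of that pattern-based parsing approach, assuming hypothetical phrasings (the patterns and example text here are invented, not the actual partner corpus):

```python
import re

# Match a common etymology phrasing like "from Old French morgage" and
# emit structured (word, origin_language, origin_word) edges that a
# renderer could turn into a graph.
PATTERN = re.compile(r"from (?P<lang>[A-Z][a-z]+(?: [A-Z][a-z]+)?) (?P<word>\w+)")


def parse_etymology(word: str, text: str) -> list[tuple[str, str, str]]:
    edges = []
    source = word
    for match in PATTERN.finditer(text):
        target = match.group("word")
        edges.append((source, match.group("lang"), target))
        source = target  # each step derives from the previous form
    return edges


edges = parse_etymology(
    "mortgage",
    "from Old French morgage, from Latin mortuus",
)
```

The intermediate representation (a list of edges) decouples the fragile text-matching step from the graph rendering, so either side can change independently.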
### Software Engineer @ MIT Media Lab

Jan 2011 – Jan 2011 | Cambridge, Massachusetts, United States

I helped build DoppelLab, an interactive 3D data visualization of sensor readings from the Media Lab, such as carbon dioxide concentration, that lets you move through a model of the Media Lab itself, with various quantities expressed as color, the presence of objects, and so on. My role focused on ingesting the data from the sensors and inserting it into a SQL database so that queries such as 'what was the average concentration of CO2 over the past day' could be answered quickly and efficiently. I also did some work on the visualization itself, which was written in Unity.

## Education

### Master of Engineering - MEng in Computer Science

Massachusetts Institute of Technology
Jan 2009 – Jan 2014

## Contact & Social

- LinkedIn: https://linkedin.com/in/ash-zahlen-a06539214

---

Source: https://flows.cv/ashzahlen
JSON Resume: https://flows.cv/ashzahlen/resume.json
Last updated: 2026-03-22