# Krishnan Chandra

> Principal Software Engineer at Ramp

Location: New York, New York, United States
Profile: https://flows.cv/krishnan

I build and improve systems that help products run smoothly and efficiently. I specialize in data infrastructure and natural language processing (NLP), and have a track record of delivering scalable, robust solutions for data ingestion, processing, and analysis.

Previously, I co-founded and served as CTO of SixAI, where we built a powerful AI analyst for finance. Our mission was to help investors generate ideas, monitor risks, and make superior investment decisions using cutting-edge NLP and machine learning techniques. I led the company's technical vision and strategy, and oversaw the development and deployment of our core product features.

I enjoy learning new technologies and exploring new domains, and I am always looking for opportunities to collaborate and innovate with other engineers and researchers. In my spare time, I like hiking, traveling, listening to music, and watching soccer.

## Work Experience

### Principal Software Engineer @ Ramp

Jan 2025 – Present | New York, New York, United States

### Advisor @ SixAI

Jan 2025 – Present

### Co-Founder and CTO @ SixAI

Jan 2023 – Jan 2024 | New York, New York, United States (same parent company as ResearchRabbit)

- Co-founded and built a powerful AI analyst for finance, empowering investors with superior idea generation to make better decisions.
- Led a distributed team of engineers developing two core products (ResearchRabbit and SixAI) with a React/Remix frontend and a Python/Django/FastAPI backend.
- Leveraged LLMs and graph technology to help hedge funds and other clients, including a large UK private investor and the U.S. Department of Justice, identify opportunities aligned with their qualitative theses.
- Actively contributed to the Pants open-source build system, enhancing its Python backend and integrations.
### Co-Founder and CTO (acquired) @ ResearchRabbit

Jan 2021 – Jan 2024 | New York, New York, United States (acquired late 2024/early 2025)

- Led the technical vision and development of an AI-powered academic discovery platform revolutionizing how researchers explore scholarly literature.
- Architected and scaled a sophisticated recommendation engine — which we called the "Spotify of Papers" — processing hundreds of millions of academic articles.
- Launched and scaled the platform from concept to millions of active researchers worldwide, while maintaining a commitment to free access for the academic community.

### Software Engineer @ Confluent

Jan 2019 – Jan 2021 | New York, New York, United States

- Led development of mission-critical features that drove significant revenue growth, including the successful launch of Usage-Based Billing (UBB) and Elastic Scaling capabilities.
- Architected and implemented scalable billing systems encompassing hourly usage tracking, automated invoice generation, and verification processes.
- Leveraged Go, PostgreSQL, and Apache Kafka to build robust microservices, with containerized deployments using Docker and Kubernetes.
- Recognized with the Ship It! Award at Confluent's May 2020 Hackday for exceptional technical innovation and execution.

### Staff Software Engineer @ Reddit, Inc.

Jan 2019 – Jan 2019 | New York, New York, United States

In March 2019, I moved from SF to NYC to help build the NYC engineering team, working on ad relevance. The team's focus was using machine learning methods to improve ad performance and user experience, primarily working with Go, Scala, and Spark.

During this time I also started giving technical talks:

- Counting is Hard (linked) — a talk I gave at QCon.ai 2018, featured in the conference's "Best Voted" category.
- Conducting a Large-Scale Infrastructure Migration Using Terraform (linked) — a talk I gave at HashiConf 2018.

### Senior Software Engineer II @ Reddit, Inc.
Jan 2017 – Jan 2019 | SF + NYC

### Senior Software Engineer @ Reddit, Inc.

Jan 2017 – Jan 2017 | San Francisco Bay Area

- Worked on the data engineering team, primarily building out a real-time data pipeline at scale using Scala, Kafka, and Python. For most of this time we were a team of two, building a framework for processing real-time data and running ETLs.
- Built user-facing data features such as content view counts and traffic pages (both linked).
- Migrated most of our ETLs and data warehousing off of Hive and onto Google BigQuery.
- Worked on a massive data migration, moving Reddit's data infrastructure across AWS regions.
- Made our infrastructure more robust and repeatable using Terraform and Puppet, and switched our container orchestration from Mesos to Kubernetes.
- Today, the pipeline processes tens of billions of events per day and powers all of Reddit's internal product analytics.

### Software Engineer III @ Reddit, Inc.

Jan 2016 – Jan 2017 | San Francisco Bay Area

### Software Engineer II @ Optimizely

Jan 2015 – Jan 2016 | San Francisco Bay Area

- Worked on the results team, scaling Optimizely's infrastructure to serve customers running millions of experiments.
- To serve real-time results, built a pipeline ingesting over 3 billion events per day using Flume, HBase, Avro, Kafka, and Dropwizard.
- For stability, worked on an operations stack of Chef/Ruby, OpenTSDB, and Nagios to support automatic scaling, provisioning, and monitoring of infrastructure.
- Reduced hosting costs across the backend stack by over $200k annually.
- Built the first version of the Event Data Export feature using MapReduce, Pig, and Python, enabling customers to retrieve raw events from their A/B experiments.
### Software Engineer @ LinkedIn

Jan 2014 – Jan 2015 | San Francisco Bay Area

I helped the Relationships team at LinkedIn scale its infrastructure to serve over 300 million members. Working primarily in Java and Python, I migrated user data to a new tech stack to aid scaling.

### Senior Fellow @ Margolis Market Information Lab

Jan 2012 – Jan 2014

I developed and taught workshops on MATLAB, Oracle Crystal Ball, and Bloomberg. As a senior fellow, I supervised two incoming fellows and helped integrate them into the flow of the lab. I also built several tools using Java, Python, and MySQL to help the lab run more efficiently.

### Undergraduate Teaching Assistant @ Margolis Market Information Lab

Jan 2012 – Jan 2014

I helped students with labs and programming assignments in MIPS and Verilog for a computer architecture class by holding weekly office hours. I also created web-based problem sets using Python and JavaScript to help students understand the material better.

### Software Engineering Intern @ LinkedIn

Jan 2013 – Jan 2013

I implemented graph size reductions in Java to reduce overall data size for the LinkedIn Contacts application. I also wrote scripts using Pig and Hadoop to migrate existing data to new storage models.

### Financial Software Developer Intern @ Bloomberg LP

Jan 2012 – Jan 2012

I improved Bloomberg terminal functions in C and C++ to facilitate equities trading on dark pools of liquidity, and created a C/C++/Perl application for capturing and replaying market data feeds from exchanges around the world.

### Software Engineering Intern @ ByAllAccounts

Jan 2011 – Jan 2011

I rebuilt a Java application to find and repair broken URLs in an internal database. I also developed an application using Java Servlets and HTML/CSS to track failure statistics for data retrieval across user accounts.
## Education

### Bachelor of Science (BS) in Mathematics and Computer Science

University of Illinois Urbana-Champaign

### Phillips Academy

## Contact & Social

- LinkedIn: https://linkedin.com/in/krishnanchandra
- Website: https://krishnanchandra.com

---

Source: https://flows.cv/krishnan
JSON Resume: https://flows.cv/krishnan/resume.json
Last updated: 2026-04-07