Feel free to contact me or view my website, stevebronder.com, if you have any questions about my current interests or research. If you would like to inquire about freelance R, C++, or Python tool development or analysis, send me a proposal at the email available in my contact info.
Experience
2023 — Now
New York, New York, United States
2020 — 2022
New York, New York, United States
• Led a team in designing a new memory pattern for matrix automatic differentiation that allows for better cache efficiency and the use of SIMD instructions. This new pattern is applied automatically by an optimization routine I wrote in Stan's OCaml-based compiler. Stan models that use these new matrices can see a 20%–60% reduction in runtime.
• Rewrote Stan's Markov chain Monte Carlo (MCMC) sampler API to allow multiple chains to run in parallel. Running chains in parallel lets the program share data across threads, heavily reducing its memory footprint. Models with large datasets can see runtimes decrease by 10%–30%.
• Worked with a small group to build simpler abstractions for creating new reverse-mode automatic differentiation functions, promoting a composition style over the previous inheritance pattern.
• Extended Stan's documentation to ease onboarding of new users and developers of Stan's automatic differentiation library.
• Created a "legacy C++14 requires" scheme that lets developers write specialized overloads in a style similar to C++20 `requires` clauses.
2018 — 2020
New York City
• Developed risk models in R and Python to assess loss given default for leveraged-lending collateral pools, covering both current pools and synthetic worst-case pools. An underwriter submits a draft contract's collateral covenants to a web-based app; assuming the borrower maxes out those covenants, we construct the riskiest loan pool the contract allows. The model then performs risk analysis on the structure and gives the underwriter summary information, loss-curve and default-rate plots, and the synthetic loan pool.
• Automated FIG's reporting, replacing a manual monthly PowerPoint process with a website that surfaces the most important content, updated daily. The site is hosted on AWS, modularized into separate applications, and uses OAuth authentication managed by nginx with the Lua module. Senior team leaders can now assess book quality and make data-informed decisions daily instead of monthly. One of the biggest wins was the data-generated headers, which give business leaders the most relevant information for each piece of the portfolio.
• Built machine learning models to predict Moody's risk ratings on unrated loans with 93% accuracy. We used a random forest tuned with model-based optimization on 10-fold cross-validation. The results allow us to impute loan ratings for collateral that is missing official scores, giving our group a much better understanding of risk across our contracts and overall book.
2017 — 2018
Greater New York City Area
• Built a collateral visualization and querying platform that gives underwriters and portfolio managers access to the underlying collateral for collateralized deals through a point-and-click interface. Ad-hoc data requests have nearly disappeared, as the business team can now access and visualize the data through the self-service portal.
• Wrote a Python package to automate the processing and validation of collateralized loan data. Leveraged-lending collateral data often arrives in an incredibly messy format and previously required hours of manual manipulation before it could be uploaded to a database. After conducting empathy interviews with several colleagues, I identified the worst choke points of the process and developed a package to automate the upload and validation steps. Turnaround times for uploading data went from potentially weeks to a day.
• Automated reports, reducing reporting time by 12 hours a month. Several quarterly and monthly reports that previously occupied a business analyst are now fully automatic.
2016 — 2016
Greater New York City Area
• Created new validation metrics to assess predictive performance on corporate insurance contracts
• Developed predictive models for corporate auto contracts that beat industry standards by 12%
• Used Hive and SQL for data summarization, querying, and analysis of large datasets
• Created web applications using D3.js and Shiny for managers to track timelines and costs of analytics projects
Education
Columbia University
Master’s Degree
Duquesne University