Software engineer on the Detection team at Argo AI.
Experience
2023 — Now
Palo Alto, California, United States
As part of the Scalable ML team, I am in charge of the data infrastructure and pipeline implementation used to train some of the larger ML models at Latitude.
2020 — 2023
Palo Alto, California
2013 — 2020
Mountain View, California
During my time at LinkedIn I had the privilege of working across many orgs and leading large efforts, from the frontend and API stack (Pemberly) in our product teams to machine learning (Pro-ML) in our Data org. These projects are used by hundreds of engineers building UIs, APIs and ML models.
https://engineering.linkedin.com/blog/2016/12/pemberly-at-linkedin
https://engineering.linkedin.com/blog/2019/01/scaling-machine-learning-productivity-at-linkedin
2010 — 2013
Social TV startup focused on connecting TV fans for real-time interaction during their favorite TV shows. Web site: http://www.yap.tv.
As the CTO I was in charge of selecting the entire technology stack. My time was divided roughly 80% development and 20% management; in terms of stack/technology, about 20% iOS, 60% Scala and 20% devops (Opscode).
• iOS apps: yap.TV Guide, USA Anywhere, Yap Music
I worked side by side with the other technical co-founder to develop the first versions of yap.TV Guide in Objective-C for iOS 3.
My specific responsibilities were: user login flows; downloading, storing and displaying tweets; the XMPP real-time layer for group chat and presence notifications; and a custom data store mapped to the format of the data (packets of tweets), with metadata stored in SQLite3 (initial implementation in Core Data).
Over the following three years the applications were featured multiple times on the App Store; most recently, the Yap Music app was featured in the week ending June 6th, 2013.
• API server: back-end side of the application
I wrote the initial versions of the API server in Ruby (Ruby on Rails and Sinatra). Due to code rot and performance issues, we re-implemented it in Scala (Spray.io, Akka 2).
The data store is a combination of MySQL, MongoDB and DynamoDB, with caching in Memcache.
The platform integrates an XMPP server (ejabberd).
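To give a flavor of what a small API endpoint on that server looked like, here is a minimal sketch in Scala. It is illustrative only: it uses the JDK's built-in `com.sun.net.httpserver` rather than the actual Spray.io routing DSL, and the `/status` path is a hypothetical example, not an endpoint from the real service.

```scala
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

// Minimal stand-in for one API endpoint. The production service used
// Spray.io routes running on Akka 2; this sketch only shows the shape
// of a JSON health-check endpoint. The /status path is hypothetical.
object MiniApiServer {
  def start(port: Int): HttpServer = {
    val server = HttpServer.create(new InetSocketAddress(port), 0)
    server.createContext("/status", new HttpHandler {
      def handle(exchange: HttpExchange): Unit = {
        val body = """{"status":"ok"}""".getBytes("UTF-8")
        exchange.getResponseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.length)
        exchange.getResponseBody.write(body)
        exchange.close()
      }
    })
    server.start()
    server
  }
}
```

In the real stack each such route would delegate to Akka actors backed by the MySQL/MongoDB/DynamoDB stores, with Memcache in front.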
• Social streaming: Twitter and Facebook import
The initial Ruby implementation peaked at 6,000 tweets/min, with the data in the application falling up to an hour behind real time. We reimplemented it in Scala; the new version handles up to 12,000 tweets/min with no real-time delay.
http://mashable.com/2011/05/25/american-idol-winner-2/
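The throughput gain came from moving ingestion to a concurrent Scala pipeline. A minimal sketch of the batching idea, using plain Futures from the standard library (the production pipeline ran on Akka actors); `Tweet`, `normalize` and `ingest` are hypothetical stand-in names:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Illustrative only: the production system used Akka actors, not raw
// Futures. Tweet and normalize are hypothetical stand-ins for the
// real import records and per-tweet processing.
final case class Tweet(id: Long, text: String)

def normalize(t: Tweet): Tweet = t.copy(text = t.text.trim.toLowerCase)

// Split the incoming stream into batches and process the batches
// concurrently, so ingestion keeps pace with the real-time firehose.
def ingest(tweets: Seq[Tweet], batchSize: Int): Seq[Tweet] = {
  val batches = tweets.grouped(batchSize).toSeq
  val work = Future.traverse(batches)(b => Future(b.map(normalize)))
  Await.result(work, 30.seconds).flatten
}
```

`Future.traverse` preserves batch order, so downstream consumers still see tweets in arrival order while the per-batch work runs in parallel.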
• Devops
I deployed with Opscode Chef on Amazon Web Services (ELB, EC2, EBS, SQS, S3). I used ZooKeeper, Nagios and Ganglia, in addition to AWS monitoring, to monitor the health of the platform.
2006 — 2010
A wide range of projects, from desktop apps to enterprise-level web sites, using Java, Ruby on Rails and other scripting languages (PHP, Python). Partners and (former) clients include: Pizza Hut Inc, Landfrog, Quimbik, Software Anywhere, Jacent Technologies.
My most recent project is a one-year, one-man project on which I did everything from cutting HTML and JavaScript to backend development, database design, implementation, monitoring and recovery. The system is deployed on Amazon Web Services using EC2 servers (between 2 and 14 instances), ELB and RDS. The technologies used are Ruby on Rails, jQuery, MySQL and AMQP.
Education
University POLITEHNICA of Bucharest
MS
Trinity College Dublin