# Nishanth Kotla

> SWE Intern @ Matter | Ex-AI @ AMD | RA @ NYU | MSCS @ NYU | Ex-ML @ Adobe | IIT Guwahati Alum

Location: Brooklyn, New York, United States
Profile: https://flows.cv/nishanthkotla

I build AI products and the systems behind them. I like the part where something feels messy at first and then slowly becomes clean, reliable, and fast. I spend a lot of time on the practical side of ML: tooling, pipelines, and performance. I enjoy digging into bottlenecks until the fix is obvious.

Right now I am balancing work and school at NYU, where I do research and also help teach algorithms. That mix keeps me sharp: I get to think deeply about fundamentals while still shipping real code and learning what breaks in the real world.

I care about work that has measurable impact: lower latency, higher throughput, fewer surprises in production, and a better experience for whoever uses the system next.

If you are building inference systems, agent tooling, or ML infrastructure, I would love to connect. You can reach me at kotlanishanth29@gmail.com.

## Work Experience

### Software Engineer @ Matter Intelligence
Jan 2026 – Present | Brooklyn, New York, United States

Building geospatial AI agents and internal tooling for Matter’s Geospatial Intelligence Platform (map automation plus agent tools and workflows).

### Course Assistant @ New York University
Jan 2026 – Present | New York, New York, United States

Course Assistant for CS-UY 2413 Design and Analysis of Algorithms, working with Professor Boris Aronov.

### Course Assistant @ New York University
Jan 2025 – Jan 2025 | New York, NY

Course Assistant for CS-UY 2413 Design and Analysis of Algorithms, working with Professor Lisa Hellerstein.

### Research Assistant @ NYU Langone Health
Jan 2025 – Jan 2026 | Manhattan, New York, United States

Assisting in Prof. Cem Deniz’s lab, focusing on multimodal deep learning for knee osteoarthritis prognosis using medical imaging and AI methods.
### AI Software Engineer @ AMD
Jan 2025 – Jan 2025 | San Jose, California, United States

Patent-pending model-optimization work from my AMD internship, where I boosted inference throughput by 45% across 15+ GPU/APU workloads by removing kernel bottlenecks and deploying at scale with Docker/Helm on Kubernetes. Implemented BF16/BFP16 quantization (converting Microsoft A16W8 models across 100+ ops with 97.1% YOLOv3 accuracy), integrated PyTorch Dynamo and torch.fx for PTQ and mixed-precision LLMs/CNNs/Transformers, and used Bayesian/RL search to auto-tune per-layer bit-widths and scaling for real-time inference.

### Teaching Assistant @ New York University
Jan 2025 – Jan 2025 | New York City Metropolitan Area

Mentored students in NYU’s Bridge to ASICs course on digital design and hardware synthesis, covering Verilog, FPGA/CAD flows, and AI-driven research in hardware design, formal verification, and optimization.

### Graduate Research Assistant @ New York University
Jan 2024 – Jan 2024 | New York, NY

Worked on domain-specific SAT solvers and Computer Algebra Systems for High-Level Synthesis, focusing on formal verification, synthesis optimization, and design-space exploration to improve the efficiency and reliability of complex hardware systems.

### Undergraduate Research Assistant @ Indian Institute of Technology, Guwahati
Jan 2023 – Jan 2024 | Guwahati, Assam, India

Led a computer-vision project on distracted driving, building a specialized video dataset (head pose, eye gaze, mouth state) and a real-time CNN+LSTM detection framework with up to 98% classification accuracy and 97.11% overall accuracy. This work, done in collaboration with Dr. Anirban Dasgupta, resulted in a research paper on AI-driven distracted-driving detection and road safety.
### Research Intern @ Adobe
Jan 2023 – Jan 2023 | Bengaluru, Karnataka, India

Earned an approved patent and co-authored a research paper on an LLM cost-optimization framework at Adobe that cut inference costs by 40–90% while improving model quality by 4–7%. Built ML-driven selection algorithms and ILP/MCKP-based optimization to speed document processing by 24% and shrink token lengths by 20% without quality loss.

## Education

### Master of Science - MS in Computer Science
New York University
Jan 2024 – Jan 2026

### Bachelor of Technology - BTech in Electrical, Electronics and Communications Engineering
Indian Institute of Technology, Guwahati
Jan 2020 – Jan 2024

## Contact & Social

- LinkedIn: https://linkedin.com/in/nishanth-kotla-484b46201

---

Source: https://flows.cv/nishanthkotla
JSON Resume: https://flows.cv/nishanthkotla/resume.json
Last updated: 2026-04-01