I conduct research in AI/ML, focusing on transformer language models and tokenization strategies.
• Published a paper on mechanistic interpretability, accepted to the ICLR 2026 Latent and Implicit Thinking Workshop, contributing to the understanding of internal circuits in language models.
• Participated in a competitive fellowship, collaborating with experts from leading universities and companies.
• Strengthened independent research and problem-solving skills in the AI/ML field.
• Used Python, Matplotlib, and Hugging Face Transformers to analyze and visualize model behavior across 1,000 randomly generated 4-digit prompts, applying techniques such as activation steering to probe internal representations.
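
The activation-steering technique mentioned above can be sketched as follows. This is a minimal illustration using a toy PyTorch MLP and a hand-picked steering direction, not the actual project code: the real work presumably hooked a Hugging Face transformer's internal activations, and the model, layer choice, and steering vector here are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer block; the hook mechanism is the same one
# used to steer activations in real Hugging Face models (assumption: the
# actual project targeted a transformer's residual stream or MLP output).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# A steering vector: in practice this is often derived from contrasting
# activations (e.g. mean activation on one prompt set minus another).
# Here it is a hypothetical direction chosen purely for illustration.
steering_vector = torch.zeros(16)
steering_vector[0] = 5.0

def steering_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # so the steering vector is added into the forward pass.
    return output + steering_vector

x = torch.randn(1, 8)
baseline = model(x)

# Attach the hook to the hidden layer, run the steered forward pass,
# then remove the hook to restore normal behavior.
handle = model[1].register_forward_hook(steering_hook)
steered = model(x)
handle.remove()
```

After `handle.remove()`, a fresh forward pass matches the baseline again, which makes this pattern convenient for running many steered-vs-unsteered prompt comparisons in a loop.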