● Developing simulation and training frameworks for multi-agent swarm systems, and building mechanistic interpretability tooling to analyze LLM behavior on technical and scientific text
● Developing a quantum attention mechanism for transformer-based LLMs, enabling complex token interactions within a single layer to improve efficiency and reduce computational complexity
● Implementing a simulated quantum model with noise injection to regularize training and prevent overfitting, using token entanglement to enhance attention scoring
● Developing and deploying ML software systems integrating LLMs, VLMs, and Mamba architectures for large-scale public health and clinical analytics, focusing on performance, reliability, and data pipeline automation
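The quantum-inspired attention described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the actual implementation: it models pairwise token "entanglement" as the squared overlap of normalized token state vectors and injects Gaussian noise into the scores as a regularizer during training.

```python
import numpy as np

def quantum_inspired_attention(X, noise_std=0.0, rng=None):
    """Toy attention scoring (illustrative sketch, not the real system).

    Each token embedding is normalized into a unit 'state vector';
    pairwise scores use the squared overlap |<psi_i|psi_j>|^2 as an
    entanglement-style similarity, with optional Gaussian noise
    injected into the scores as training-time regularization.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Normalize each token embedding to a unit 'state vector'.
    states = X / np.linalg.norm(X, axis=-1, keepdims=True)
    # Squared overlap as an entanglement-inspired attention score.
    scores = (states @ states.T) ** 2
    if noise_std > 0:
        # Noise injection: perturb scores to discourage overfitting.
        scores = scores + rng.normal(0.0, noise_std, scores.shape)
    # Standard softmax over keys, then weighted sum of values.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    return weights @ X
```

All pairwise interactions are scored in one matrix product, which is the single-layer interaction property the bullet refers to; function and parameter names here are hypothetical.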