# Alif Jakir

> Co-founder @ superintelligent.group | swarm multiplayer IDE for superintelligent human-machine teams

Location: Cambridge, Massachusetts, United States
Profile: https://flows.cv/alifjakir

I've got various interests, most of them having to do with computers and expanding the capabilities of Homo sapiens as a whole. I'm a strong communicator with experience coordinating teams, and a programmer proficient in C++, C, Java, Python, ROS, Unity3D, and more. Check out my website at Alifjakir.com for some of my writing and projects.

I was a policy intern for the winning campaign in New York State Senate District 5. I am currently a transdisciplinary researcher studying artificial intelligence, immersive virtual and augmented reality technologies, neuroscience, and how to bring all of that together.

Other things I work on or study: drones, robotic manipulators, and robotics in general; manufacturing processes; computational neuroscience; cognition; metacognitive frameworks; cybernetics; metamodernism; internet dynamics; sociology; linguistics; analytic philosophy; embodied cognition; digital embodied agents; astrobiology; planetary systems; forecasting; and more.

My favorite recent term is "intertwingularity": "a term coined by Ted Nelson to express the complexity of interrelations in human knowledge. Intertwingularity is related to Nelson's coined term hypertext, partially inspired by 'As We May Think' (1945) by Vannevar Bush."

(If you'd like to chat about any of the above topics, send me a DM!)

I am interested in democratizing access to superintelligent collaborative AI systems for all humans, and I am building my own transdisciplinary corporation, @Halcyox INC. I am studying for a dual degree in Computer Science and Data Analytics + Business Intelligence at Clarkson University, with minors in Robotics and Mathematics.
TODO: add more details to summary

## Work Experience

### Co-Founder & Chief Scientific Officer @ Stealth Startup
Jan 2025 – Present | Cambridge, Massachusetts, United States

Making tools for accessible quantitative analysis of contemporary businesses @ https://commonquant.com/

### CEO & Co-Founder @ Stealth
Jan 2025 – Present | Cambridge, MA

Can we combine the creativity of individuals with the power of AI software swarms? We at Superintelligent.group (https://www.superintelligent.group/) are building the collaborative software development swarm as a paradigm for software development in the 21st century. We are developing a new suite of tools for the intuitive development of complex software using compositional swarms of AI agents working in tandem with small teams of humans. Our architecture for agent composition builds on the latest research in generative agent simulacra and long-running agent civilizations. Our first product, slated for alpha testing in 2026, will let you create entire companies of AI agents that can be spawned to do various kinds of useful and complex work.

### Co-Founder @ Holosonic.Org
Jan 2023 – Present | Cambridge, Massachusetts, United States

Sylas Horowitz and I co-founded Holosonic. We are working on hardware and software prototypes for a 3D programmable ultrasonic acoustic cell that can trap and manipulate small particles inside a closed volume, in air or within a liquid medium. Essentially, novel advances in materials science allow us to control fine-grained living (or otherwise) particles in a spatiotemporally controlled ultrasound wave field. This enables autonomous biological factories, a groundbreaking approach that would automate much of wet-lab work. It would also be quite useful for "touch-less" assembly of microelectronic components.
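The core principle behind steering such an acoustic cell can be illustrated with a toy phased-array calculation. This is a textbook sketch, not Holosonic's actual control code, and the array geometry, frequency, and focal point below are invented for illustration: each transducer emits with a phase offset that compensates for its path length to a target point, so all wavefronts arrive there in phase.

```python
import numpy as np

def focus_phases(positions, focal_point, freq=40e3, c=343.0):
    """Phase offsets (radians) that make all transducer emissions
    arrive in phase at focal_point. positions: (N, 3) array of
    transducer coordinates in metres; c: speed of sound in air."""
    wavelength = c / freq
    dists = np.linalg.norm(positions - focal_point, axis=1)
    # Advance each element's phase by its path length, modulo one wavelength.
    return (2 * np.pi * dists / wavelength) % (2 * np.pi)

# Hypothetical example: a 4x4 grid of 40 kHz transducers at 1 cm pitch,
# focusing 5 cm above the array centre.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
phases = focus_phases(positions, np.array([0.015, 0.015, 0.05]))
```

A real particle trap uses more elaborate holographic phase patterns than a single focus, and must account for transducer directivity; this only shows the geometric core of the idea.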
We are also working on developing this into a product that could eventually be adopted by various R&D facilities. Stay tuned for more progress! Update: as of February 2025, we have been able to operate an array that is only inconsistently stable, so we are rewriting the software stack to control the acoustic arrays through FPGAs with better precision.

### Visiting Researcher @ MIT Center for Collective Intelligence
Jan 2023 – Present | Cambridge, Massachusetts, United States

I am creating AI tools to boost collective intelligence. This entails developing frameworks that let swarms of agents coordinate reasoning among each other by decomposing complex tasks, and frameworks for multi-scale coordination across enterprises so that AI integrates naturally into the modern enterprise development workflow. Part of this involves investigating how local models can work together to build complex multimodal integration of all of an enterprise's "complex knowledge". This is particularly valuable for more intelligent, systematic approaches to the "digital transformation" that large enterprises are slowly undergoing, which currently progresses in a bifurcated and disconnected manner across different industrial verticals.

I am interested in approaches along these lines that strengthen real-time networked communication (https://arxiv.org/abs/2311.00728), and that strengthen reasoning and collective deliberation within existing social networks, e.g. Reddit. I am also investigating ways to harness social networks' inherent diversity of thought through human-AI collaboration for the next generation of scientific reasoning. The paradigm of autonomous-agent-driven, closed-loop fundamental science offers scientists the opportunity to focus on eliminating friction in the lab-to-mass-production pipeline, making more impact with their research activities.
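The decompose-then-aggregate coordination pattern described above can be sketched minimally. This is a toy illustration under obvious simplifications, not the actual framework: the `worker` here just counts words, standing in for a call to an LLM agent, and the chunk size is arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(task):
    """Toy decomposition: split a text into fixed-size chunks,
    each small enough for one agent to handle."""
    words = task.split()
    return [" ".join(words[i:i + 3]) for i in range(0, len(words), 3)]

def worker(subtask):
    """Stand-in for an LLM agent; a real system would reason here."""
    return len(subtask.split())

def coordinate(task):
    """Fan subtasks out to concurrent workers, then aggregate."""
    subtasks = decompose(task)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(worker, subtasks))
    return sum(results)  # aggregation step
```

The interesting design questions live in `decompose` and the aggregation step: real task decomposition must respect dependencies between subtasks, and aggregation may itself be another round of agent deliberation rather than a simple sum.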
I am interested in the evolution of civilizations on a planetary scale, and am researching how to scale up human social simulations into 3D social simulacra, to understand the impacts of policies and to integrate a complex-systems perspective into the development of AGI.

"The MIT Center for Collective Intelligence explores how people and computers can be connected so that —collectively— they act more intelligently than any person, group, or computer has ever done before."

### Founder and CEO @ Halcyox
Jan 2021 – Present | New York, United States

Developing technology with novel human-computer interfaces that integrate AI more seamlessly into user experiences. I currently serve as CEO and technical software architect, coordinating a multinational team to scale awareness of collaborative AI technologies and foster a culture change around their use. We are developing software for automatic content generation for enterprise use cases, utilizing foundation models and synthetic media for large-scale dynamic education. We have amassed over 6 million organic impressions on the content we've produced to spread greater understanding of upcoming sociotechnical developments.

Ongoing projects include:
1. XRAgents
2. Superintelligent.group

### Research Affiliate @ MIT Media Lab
Jan 2022 – Present | Cambridge, MA

Project #1 (complete): Volumetric Hologram of Cortana Controlled with BCI

Ever wanted to actually "hold" a character in your "hand"? I worked on a system that lets you hold a character, rendered as a volumetric hologram, in a virtual hand from afar. It's pretty cool. I created the 3D character animation pipeline for a holographic touchless experience controlled by an EEG-based BCI on a Looking Glass holographic display, for the MIT Neurafutures exhibit, working under Nataliya Kosmyna in the Fluid Interfaces group. My responsibilities included building a server-client architecture hosted on AWS + Google Cloud to pass messages with real-time asynchronous events.
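The kind of asynchronous event passing involved can be sketched with an in-process stand-in using Python's asyncio. The real system ran across AWS and Google Cloud; the topic name and payload below are invented for illustration.

```python
import asyncio

class EventBus:
    """Minimal in-process stand-in for an asynchronous message relay
    between, say, a BCI client and a holographic display client."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic):
        # Each subscriber gets its own queue so slow consumers
        # don't steal messages from fast ones.
        q = asyncio.Queue()
        self.subscribers.setdefault(topic, []).append(q)
        return q

    async def publish(self, topic, message):
        # Fan the message out to every subscriber of this topic.
        for q in self.subscribers.get(topic, []):
            await q.put(message)

async def demo():
    bus = EventBus()
    inbox = bus.subscribe("eeg")
    await bus.publish("eeg", {"attention": 0.8})
    return await inbox.get()

result = asyncio.run(demo())
```

In a cloud deployment the queue would be replaced by a managed broker or a WebSocket connection, but the publish/subscribe shape stays the same.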
I also handled integration of the Unity scene with Ultraleap hand-tracking, the brain-computer interface, and the Looking Glass holographic display, and worked on the 3D animations and the real-time holographic character interaction system, programming in C#. It was quite fun to build something at the frontier of holograms, brain-computer interfaces, and human-computer interaction. The team includes Marinel Tinnirello (http://www.marineltinnirello.com/) with contributions from Ember Arlynx (https://ember.software/).

---

Project #2 (ongoing): Augmented Reality Brain-Computer Interface

Currently in development, this project combines the OpenBCI Ultracortex and Northstar Next hardware to use neural data to control holographic visuals, and integrates with 3D AI characters in real time so the characters can respond affectively.

### Cerebras Fellow @ Cerebras Systems
Jan 2024 – Jan 2025

Various experimental, open-ended wafer-scale supercomputing projects; received free compute.

### Founder in Residence @ Augmentation Lab
Jan 2025 – Jan 2025 | Cambridge, Massachusetts, United States

Building stuff for fun :) will add more deetz here l8r!

### Full Stack Software Engineer @ FabuBlox
Jan 2024 – Jan 2025 | Boston, Massachusetts, United States

FabuBlox, founded by MIT PhDs to solve system-level bottlenecks in the development of nanotechnology, is essentially GitHub for micro- and nanotechnology. I applied my full-stack software development and database design skills across the entire process design management workflow, upgrading the core features of the process editor and making the FabuBlox silicon process engineering design tool more useful. I also laid the groundwork for the FabRun editor, which enables manufacturing execution systems and unifies collaborative hardware manufacturing processes.
Eventually, we hope to have a fully integrated workflow that functions as a digital twin, with integrated metrology, analytics, and real-time collaboration.

I developed the core of the AI fab assistant PANDA (Process Assistant for Novel Device Applications) from scratch, with a vector-retrieval RAG system that lets large databases be explored conversationally, decreasing the time to understand the capabilities of the $27 million+ MOSIS 2.0 fab ecosystem (https://www.mosis2.com/fab-services). I also built the portable, web-embeddable process editor, and laid the groundwork for AI to augment the entire process design and verification flow. I integrated AI tools like Cursor and Windsurf into the software development workflow to automate code scaffolding, refactoring, documentation, and UI prototyping, accelerating the search for product-market fit.

- Migrated legacy code from Create React App to Vite, accelerating build times.
- Developed with Node.js and modular React/TypeScript components, and designed UX/UI flows.

### Co-Founder @ VisuaML
Jan 2024 – Jan 2025 | Cambridge, Massachusetts, United States

Our initial goal is to develop advanced tools for visual AI programming; the initiative aims to create tools for intuitive co-design of hardware, software, and dataflow systems, hence "Visual ML", or "VisuaML". There is not yet a complete theory of deep learning that unifies geometric understanding, neural mathematical principles, and the emergent in-context learning capabilities of modern models; a unified theory of learning that is mathematically sound and built on strong foundations has yet to be developed. Some inspiration on why this matters:

- Tensor Programs for neural architectures, by Greg Yang (https://arxiv.org/abs/1910.12478)
- Categorical Deep Learning, as elucidated by Bruno Gavranović et al. (https://arxiv.org/abs/2402.15332)
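The vector-retrieval pattern behind an assistant like PANDA, described above, can be sketched generically. This is not FabuBlox code: the toy `embed` below uses bag-of-words term frequencies where a production system would use a learned embedding model and a vector database, and the sample documents are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a term-frequency vector. A real RAG system
    would call a learned embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query; a RAG
    assistant stuffs these into the LLM prompt as grounding context."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "etch recipe for silicon nitride membranes",
    "deposition parameters for gallium nitride epitaxy",
    "lithography alignment procedure for wafer stepper",
]
top = retrieve("silicon etch process", docs)
```

The retrieval step is what lets the assistant answer from a large process database without fine-tuning: only the few most relevant documents ever reach the model's context window.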
VisuaML aims to let users construct, evaluate, and collaborate on AI architectures through an intuitive visual interface, without having to understand all the underlying low-level abstractions, similar to Google Docs' seamless collaboration features, but for AI development. Research suggests that visual programming interfaces can reduce development time and enhance collaboration by making the process accessible to non-programmers. Our long-term vision includes a visual programming language based on neural computation that compiles high-level semantic representations onto arbitrary physical hardware, exploiting the unique properties of each platform; ideally, compilation of data models should be universally applicable to silicon photonics, biological, and other substrates. We are currently looking for team members interested in developing the foundational theory and code synthesis, as well as those interested in creating proofs of concept across our roadmap.

### HUMIC AI Incubator Fellow @ Harvard Undergraduate Machine Intelligence Community
Jan 2023 – Jan 2024 | Cambridge, Massachusetts, United States

Led team-based development: set up infrastructure on AWS for hosting and training foundation and generative AI models, and collaborated on software architecture and UI/UX. We created applications for story synthesis and implemented advanced AI and machine learning concepts through hands-on projects.

### Residency @ Harvard St. Commons
Jan 2023 – Jan 2024 | Cambridge, Massachusetts, United States

Core ideas for many projects were incubated while I was a hacker resident @ https://www.harvardst.co/

### TARS Undergraduate Researcher @ Clarkson University
Jan 2019 – Jan 2023 | Potsdam, New York, United States

Researched various topics in 3D printing, human-computer interaction, computer vision, and virtual reality.
I coordinated an interdisciplinary team to develop drone technologies, teleoperation technologies, and various virtual reality interfaces, and spearheaded the genesis of a modular drone robotics platform built on open-source technologies. Foundation models are currently being integrated into the drone platform. Halcyox was founded from ideas formed in the TARS environment.

### Undergraduate Student Researcher @ DTU - Technical University of Denmark
Jan 2023 – Jan 2023 | Kongens Lyngby, Capital Region, Denmark

Computational Imaging & Spectroscopy: Bridging Optics & Image Processing
https://kurser.dtu.dk/course/34269

Key takeaways:
- Applied harmonic wavelet analysis and compressed sensing to imaging
- Designed computer vision and computational imaging systems
- Signal analysis from optical sensors
- Inverse problems and linear optimization for imaging
- Spectral data analysis and scene physics understanding
- Deep learning and machine learning for computational imaging
- Hyperspectral image processing with machine learning

Course highlights:
- Digital imaging basics and colorimetry
- Sparse representations and image restoration
- Scene analysis and spectral imaging
- Introduction to deep learning in imaging
- Computational spectroscopy and hyperspectral analysis

The most fascinating thing I learned here was about shearlets. Shearlets "are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena." (https://en.wikipedia.org/wiki/Shearlet)

I led a team to create a solution for computational low-light image enhancement using Generative Adversarial Networks (GANs). This involved a meticulous process of data pre-processing, network architecture design, and hyperparameter tuning.
One of the key challenges we tackled was synthesizing well-lit images from their poorly-lit counterparts, followed by a super-resolution pass using another GAN. The results were promising, indicating that even with a relatively simple GAN network, we could achieve high-quality reconstructions within our domain. This course bridged optics and image processing, and I'm ready to tackle challenging tasks in computer vision, computational imaging, and beyond. #ComputationalImaging #ComputerVision #MathematicsInImaging

### Undergraduate Student Researcher @ DTU - Technical University of Denmark
Jan 2023 – Jan 2023 | Kongens Lyngby, Capital Region, Denmark

Optical Planar Waveguide Fabrication at DTU, Denmark 🇩🇰 (design, fabrication, and characterization of optical planar waveguide components)
https://kurser.dtu.dk/course/34539
Duration: three weeks of intensive, hands-on cleanroom work.

Skills gained:
- Cleanroom protocols
- Material selection for integrated optics
- Designing device fabrication processes
- Precision nanoscale fiber-waveguide alignment
- Setup and calibration of optical measurement instruments
- Simulation and mask layout
- In-depth understanding of the working principles of waveguide components

Highlights:
- Acquired mastery of cleanroom procedures
- Used commercial software (L-Edit/CleWin/Ansys Lumerical)
- Gained experience with electron-beam lithography
- Developed skills in scanning electron microscope (SEM) operation
- Set up complex optical measurement circuits
- Conducted propagation loss measurements using the cutback method
- Analyzed micro-ring resonators for real-world applications

Photonics is the future of computing; that is what motivated me to explore this research domain, and it will provide dramatic new capabilities for the next generation of edge devices.
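The cutback method listed above reduces to a linear fit: total insertion loss grows linearly with waveguide length, so the slope of loss versus length gives the propagation loss per unit length, and the intercept gives the fixed, length-independent coupling loss. A minimal sketch with hypothetical measurements:

```python
import numpy as np

# Hypothetical cutback data: total insertion loss (dB) measured on
# nominally identical waveguides cleaved to different lengths (cm).
lengths = np.array([1.0, 2.0, 3.0, 4.0])
loss_db = np.array([4.1, 6.0, 8.1, 9.9])

# Fit loss = alpha * L + coupling: slope alpha is the propagation
# loss in dB/cm, intercept is the coupling loss at the facets.
alpha, coupling = np.polyfit(lengths, loss_db, 1)
```

Measuring several lengths rather than two is what makes the method robust: facet-quality variation shows up as scatter around the fit instead of silently biasing a two-point estimate.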
Photonics is also the next step for massively parallel processing; one day every data center may utilize photonic parallel processors. For AI and graphics workloads, highly parallel computation needs to become more energy efficient to be sustainable. This intensive three-week course equipped me with comprehensive skills in the fabrication and characterization of silicon optical planar waveguide components, and I am now prepared to apply this knowledge to real-world waveguide research and applications. #OpticalWaveguides #CleanroomSkills #HandsOnExperience

### Undergraduate Researcher @ Harvard John A. Paulson School of Engineering and Applied Sciences
Jan 2022 – Jan 2022 | Cambridge, Massachusetts, United States

In collaboration with the Harvard Cyborgs and Harvard undergraduates, I worked on constructing a real-time dynamic virtual character interaction schema utilizing NVIDIA Omniverse Audio2Face + Audio2Emotion, GPT-3, and a spoken naturalistic interface. When we can talk to our computers like they are people, what are the implications? Users can create personalities for characters and get a realistic talking mesh avatar that embodies any personality construct from a few lines of descriptive text, operating in near real time. Use cases include education, entertainment, and therapy, among others. Demo video coming soon.

### Junior Machine Learning Engineer Intern @ Vizcom
Jan 2022 – Jan 2022 | Mountain View, California, United States

Worked on the machine learning pipeline from 2D input to 3D mesh output for generative 3D AI: training and testing models, and understanding 3D reconstruction architectures from papers and implementing them in production using PyTorch. I tested model quality and researched datasets for 3D synthesis; the implementation served as the base for Vizcom's current 3D features. Vizcom makes AI-assisted tooling that accelerates the design and rendering process for designers and engineers.
### Mixed Reality AI Developer @ Texas Immersive Institute
Jan 2022 – Jan 2022 | Remote

Volunteered on a project building virtual agents in mixed reality environments, powered by language AI: conversational assistants specifically geared toward helping young adults communicate their asthma needs. I collaborated remotely and provided feedback on the AI and natural language techniques.

### GDSC Lead @ Clarkson University (Google Developer Student Club)
Jan 2021 – Jan 2022 | Potsdam, New York, United States

As the Google Developer Student Club lead at Clarkson University, I gave workshops and lectures on cutting-edge technologies and brought speakers from industry and research to the school. I ran workshops on AI-assisted code synthesis before it was commonplace and integrated into IDEs, on using Google, AWS, and Microsoft Azure cloud resources, and on NVIDIA's latest Omniverse improvements. I also had speakers and alumni from Google talk about how to prepare for a career in industry.

### NASA RASC-AL Team Lead @ Clarkson University
Jan 2021 – Jan 2022 | Potsdam, New York, United States

Led an internationally coordinated team in NASA's Revolutionary Aerospace Systems Concepts - Academic Linkage (RASC-AL) space competition. We worked on a hypothetical space mission scenario and specified all of the required subsystems. Specifically, we designed a smart storage system for the lunar south pole: it charges itself with solar and nuclear energy, and operates and communicates modularly with other storage systems and the Lunar Gateway. The design provides nodes for building out infrastructure on the lunar surface for future missions.
Managed teams across five time zones: Hong Kong (HKUST), Japan (Kyushu University), USA (Clarkson University), UAE (Khalifa University), and Australia (RMIT).

### L'SPACE Mission Concept Academy Student @ NASA
Jan 2021 – Jan 2021

Learned mission concept development, including preparation of a PDR (preliminary design review) and the technical development of a plan for Mars ice-water acquisition in service of future Martian settlement.

### Research Experience for Undergraduates (REU) @ Georgia State University
Jan 2021 – Jan 2021

https://www.clarkson.edu/news/clarkson-student-chosen-highly-selective-summer-immersive-computing-research-experience

I was one of 10 accepted out of 292 applicants for an immersive computing research experience. I spent eight weeks at Georgia State (virtually), working with faculty advisors, industry advisors, graduate student mentors, and other undergraduates to complete a research project in immersive computing. I worked specifically on two projects on image compression for high-quality streaming, which is extremely important in resource-bottlenecked IoT applications and for high-quality virtual reality applications.

### Marketing Photographer @ Clarkson University
Jan 2019 – Jan 2021 | Potsdam, New York, United States

Collected photographs of assigned projects in an organized and timely fashion.

### Political Intern @ US Federal Government
Jan 2018 – Jan 2018 | Greater New York City Area

Spoke with voters in the field and researched policy to execute an action plan; we won the election and flipped the New York State legislature, beating a 20-year incumbent.
## Education

### Bachelor's degree in Data Analytics + Business Intelligence
Clarkson University

### Bachelor's degree in Computer Science
Clarkson University

### High School Diploma
Half Hollow Hills High School West

## Contact & Social

- LinkedIn: https://linkedin.com/in/alif-jakir
- Portfolio: https://www.alifjakir.com/

---

Source: https://flows.cv/alifjakir
JSON Resume: https://flows.cv/alifjakir/resume.json
Last updated: 2026-03-28