MIT Scientists Build a 'Periodic Table' for Machine Learning

MIT researchers created a "periodic table" for machine learning, organizing more than 20 algorithms by their mathematical similarities.
MIT researchers have developed a "periodic table" for machine learning that organizes more than 20 classical AI algorithms into a unified framework based on their underlying mathematical structures. The work, led by graduate student Shaden Alshammari and senior author Mark Hamilton, stems from an accidental discovery: two seemingly disparate algorithms, clustering and contrastive learning, could be reframed using the same fundamental equation.
The framework, called information contrastive learning (I-Con), describes how algorithms find connections between real data points and approximate those connections internally. Each algorithm aims to minimize the deviation between its learned approximations and the actual relationships in the training data. Using I-Con, the researchers organized the algorithms into a periodic table that categorizes them by how points connect in real datasets and how each algorithm approximates those connections.
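The core idea above can be illustrated with a small sketch: compare a "target" distribution over each point's neighbors (derived from the data) against a "learned" distribution (derived from an algorithm's embeddings), and score the mismatch with a KL divergence. This is a simplified toy, not the paper's actual formulation; the function names `neighbor_dist` and `icon_loss` are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, shifted for numerical stability."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def neighbor_dist(X, temperature=1.0):
    """For each point, a probability distribution over its neighbors,
    built from pairwise negative squared distances (self-pairs excluded)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    logits = -sq / temperature
    np.fill_diagonal(logits, -np.inf)  # a point is never its own neighbor
    return softmax(logits)

def icon_loss(p, q, eps=1e-12):
    """Mean KL divergence KL(p || q) between the target neighbor
    distribution p (from the data) and the learned one q (from embeddings).
    Zero when the embeddings reproduce the data's neighborhood structure."""
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
    return float(np.mean(kl))

# Toy usage: embeddings that barely perturb the data give a near-zero loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))                   # original data points
Z = X + 0.01 * rng.normal(size=X.shape)       # "learned" embeddings
loss = icon_loss(neighbor_dist(X), neighbor_dist(Z))
```

In this framing, swapping out how `p` is defined (labels, clusters, augmentations) or how `q` is parameterized yields different classical algorithms, which is the sense in which one equation spans the table's entries.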
Just as Mendeleev's periodic table contained empty squares that predicted undiscovered elements, the machine learning periodic table features blank spaces pointing to algorithms that should exist but haven't yet been invented. The researchers filled one gap by borrowing ideas from contrastive learning and applying them to image clustering, creating a new algorithm that classifies unlabeled images 8 percent better than previous state-of-the-art approaches.
"It's not just a metaphor," Alshammari said. "We're starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through." The table includes algorithms spanning 100 years of research, from spam-detection classification to the deep learning powering large language models. Researchers also used I-Con to show how a data debiasing technique developed for contrastive learning could boost clustering algorithm accuracy.
Hamilton emphasized that having I-Con as a guide could help machine learning scientists think outside the box: "We've shown that just one very elegant equation, rooted in the science of information, gives you rich algorithms spanning 100 years of research in machine learning. This opens up many new avenues for discovery." The research, funded by the Air Force Artificial Intelligence Accelerator and the National Science Foundation, will be presented at the International Conference on Learning Representations.