Creating AI that matters | MIT News

When it comes to artificial intelligence, MIT and IBM were there at the beginning: laying the foundations, creating some of the first programs — AI predecessors — and theorizing how machine “intelligence” might come to be.

Today, collaborations like the MIT-IBM Watson AI Lab, which launched eight years ago, continue to deliver the expertise needed to realize the promise of tomorrow’s AI technology. This is critical for the industries and the labor force that stand to benefit, particularly in the short term: from $3 trillion to $4 trillion in forecast global economic benefits and 80 percent productivity gains for knowledge workers and creative tasks, to significant incorporation of generative AI into business processes (80 percent) and software applications (70 percent) within the next three years.

While industry has seen a boom in notable models, chiefly in the past year, academia continues to drive innovation, contributing most of the highly cited research. At the MIT-IBM Watson AI Lab, success takes the form of 54 patent disclosures, more than 128,000 citations with an h-index of 162, and more than 50 industry-driven use cases. Some of the lab’s many achievements include improved stent placement with AI imaging techniques, slashed computational overhead, smaller models that maintain performance, and modeling of interatomic potentials for silicate chemistry.

“The lab is uniquely positioned to identify the ‘right’ problems to solve, setting us apart from other entities,” says Aude Oliva, lab MIT director and director of strategic industry engagement in the MIT Schwarzman College of Computing. “Further, the experience our students gain from working on these challenges for enterprise AI translates to their competitiveness in the job market and the promotion of a competitive industry.”

“The MIT-IBM Watson AI Lab has had tremendous impact by bringing together a rich set of collaborations between IBM and MIT’s researchers and students,” says Provost Anantha Chandrakasan, who is the lab’s MIT co-chair and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “By supporting cross-cutting research at the intersection of AI and many other disciplines, the lab is advancing foundational work and accelerating the development of transformative solutions for our nation and the world.”

Long-horizon work

As AI continues to garner interest, many organizations struggle to channel the technology into meaningful outcomes. A 2024 Gartner study finds that “at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025,” demonstrating widespread ambition and hunger for AI, but a lack of knowledge of how to develop and apply it to create immediate value.

Here, the lab shines, bridging research and deployment. The majority of the lab’s current-year research portfolio is aligned with using and developing new features, capabilities, or products for IBM, the lab’s corporate members, or real-world applications. These span large language models, AI hardware, and foundation models, including multimodal, biomedical, and geospatial ones. Inquiry-driven students and interns are invaluable in this pursuit, offering enthusiasm and new perspectives while accumulating the domain knowledge to help derive and engineer advancements in the field, as well as opening up new frontiers for exploration with AI as a tool.

Findings from the AAAI 2025 Presidential panel on the Future of AI Research support the need for contributions from academia-industry collaborations like the lab in the AI arena: “Academics have a role to play in providing independent advice and interpretations of these results [from industry] and their consequences. The private sector focuses more on the short term, and universities and society more on a longer-term perspective.”

Bringing these strengths together, along with the push for open sourcing and open science, can spark innovation that neither could achieve alone. History shows that embracing these principles, and sharing code and making research accessible, has long-term benefits for both the sector and society. In line with IBM and MIT’s missions, the lab contributes technologies, findings, governance, and standards to the public sphere through this collaboration, thereby enhancing transparency, accelerating reproducibility, and ensuring trustworthy advances.

The lab was created to merge MIT’s deep research expertise with IBM’s industrial R&D capacity, aiming for breakthroughs in core AI methods and hardware, as well as new applications in areas like health care, chemistry, finance, cybersecurity, and robust planning and decision-making for business.

Bigger isn’t always better

Today, large foundation models are giving way to smaller, more task-specific models that yield better performance. Contributions from lab members like Song Han, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and IBM Research’s Chuang Gan help make this possible, through work such as once-for-all and AWQ. Innovations such as these improve efficiency with better architectures, algorithm shrinking, and activation-aware weight quantization, letting models, such as language models, run on edge devices at faster speeds and with reduced latency.
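
As an illustration of the activation-aware idea (not the AWQ implementation itself), here is a minimal NumPy sketch; the calibration statistic, the round-to-nearest quantizer, and all shapes and numbers are assumptions chosen only to show the mechanism of protecting salient weight channels.

```python
import numpy as np

def quantize_per_channel(w, n_bits=4):
    """Symmetric round-to-nearest quantization, scaled per output channel (row)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)
    return np.round(w / scale) * scale

def activation_aware_quantize(w, calib_acts, n_bits=4, alpha=0.5):
    """Sketch of the activation-aware idea: scale weight columns (input channels)
    by how large their incoming activations are, quantize, then fold the scaling
    back out, so salient channels land on a finer effective grid."""
    act_mag = np.abs(calib_acts).mean(axis=0) + 1e-8   # per-input-channel statistic
    s = act_mag ** alpha
    return quantize_per_channel(w * s, n_bits) / s

# toy layer where a few input channels carry much larger activations than the rest
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 32))                          # (out_features, in_features)
chan_mag = np.full(32, 0.1)
chan_mag[[5, 21]] = 8.0                                # salient input channels
x = rng.normal(size=(64, 32)) * chan_mag               # calibration activations
for name, wq in [("round-to-nearest", quantize_per_channel(w)),
                 ("activation-aware", activation_aware_quantize(w, x))]:
    err = np.linalg.norm(x @ w.T - x @ wq.T)
    print(name, "output error:", round(float(err), 3))
```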

Consequently, foundation, vision, multimodal, and large language models have seen benefits, allowing the lab research groups of Oliva, MIT EECS Associate Professor Yoon Kim, and IBM Research members Rameswar Panda, Yang Zhang, and Rogerio Feris to build on the work. This includes techniques to imbue models with external knowledge and the development of linear attention transformer methods for higher throughput, compared to other state-of-the-art systems.
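
The throughput argument for linear attention can be sketched with the generic kernel-feature-map formulation from the literature (an elu-plus-one feature map here); this is not the lab's specific method, only an illustration of how reassociating the matrix products removes the quadratic-in-length score matrix.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the (N, N) score matrix makes cost quadratic in length N."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    """Linear-attention sketch: map Q and K through a positive feature map (elu + 1)
    and reassociate the products as phi(Q) (phi(K)^T V), so cost is linear in N."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                        # (d, d_v) summary, size independent of N
    z = Qf @ Kf.sum(axis=0)              # per-position normalizer, shape (N,)
    return (Qf @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(0)
N, d = 256, 16
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)  # (256, 16) twice
```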

Understanding and reasoning in vision and multimodal systems have also received a boost. Works like “Task2Sim” and “AdaFuse” demonstrate how vision model performance improves when pre-training takes place on synthetic data, and how video action recognition can be boosted by fusing channels from past and current feature maps.
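
A toy sketch of the channel-fusion idea follows; AdaFuse learns the per-channel reuse decision, whereas the fixed boolean mask below is a stand-in used purely to illustrate reusing channels from the previous frame's feature map.

```python
import numpy as np

def fuse_channels(curr, prev, reuse_mask):
    """Toy channel fusion for video features: channels flagged in reuse_mask are taken
    from the previous frame's feature map, the rest from the current frame."""
    return np.where(reuse_mask[:, None, None], prev, curr)

rng = np.random.default_rng(0)
prev_feat = rng.normal(size=(8, 4, 4))   # (channels, H, W) from the previous frame
curr_feat = rng.normal(size=(8, 4, 4))   # same layer, current frame
reuse_mask = np.arange(8) % 2 == 0       # reuse every other channel
print(fuse_channels(curr_feat, prev_feat, reuse_mask).shape)  # (8, 4, 4)
```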

As part of a commitment to leaner AI, the lab teams of Gregory Wornell, the MIT EECS Sumitomo Electric Industries Professor in Engineering, IBM Research’s Chuang Gan, and David Cox, VP for foundational AI at IBM Research and the lab’s IBM director, have shown that model adaptability and data efficiency can go hand in hand. Two approaches, EvoScale and Chain-of-Action-Thought reasoning (COAT), enable language models to make the most of limited data and computation by improving on prior generation attempts through structured iteration, narrowing in on a better response. COAT uses a meta-action framework and reinforcement learning to tackle reasoning-intensive tasks via self-correction, while EvoScale brings a similar philosophy to code generation, evolving high-quality candidate solutions. These techniques help to enable resource-conscious, targeted, real-world deployment.
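
The common loop behind both approaches can be pictured with a toy generate-score-select sketch; the numeric "generator" below is a hypothetical stand-in for a language model call, and only the structure of the loop reflects the structured iteration described above.

```python
import random

def generate_candidates(rng, prior_best, n=4):
    """Hypothetical stand-in for a language-model call: perturb the best attempt so
    far. In the lab's setting this would be an LLM conditioned on its own earlier
    attempts; the numeric stand-in here is purely illustrative."""
    base = prior_best if prior_best is not None else rng.uniform(-10, 10)
    return [base + rng.gauss(0, 1.0) for _ in range(n)]

def score(candidate, target=3.14159):
    """Task-specific reward: closer to the (made-up) target is better."""
    return -abs(candidate - target)

def iterative_refinement(rounds=6, seed=0):
    """Structured iteration: each round generates candidates conditioned on the best
    attempt so far, scores them, and keeps the winner, so the loop narrows in on a
    better answer instead of relying on a single shot."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        for cand in generate_candidates(rng, best):
            if score(cand) > best_score:
                best, best_score = cand, score(cand)
    return best, best_score

print(iterative_refinement())
```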

“The impact of MIT-IBM research on our large language model development efforts cannot be overstated,” says Cox. “We’re seeing that smaller, more specialized models and tools are having an outsized impact, especially when they are combined. Innovations from the MIT-IBM Watson AI Lab help shape these technical directions and influence the strategy we are taking in the market through platforms like watsonx.”

For example, numerous lab projects have contributed features, capabilities, and uses to IBM’s Granite Vision, which delivers impressive computer vision performance for document understanding despite its compact size. This comes at a time of growing enterprise need for the extraction, interpretation, and trustworthy summarization of information and data contained in long-format documents.

Other achievements that extend beyond direct research on AI and across disciplines are not only beneficial, but necessary for advancing the technology and lifting up society, concludes the 2025 AAAI panel.

Work from the lab’s Caroline Uhler and Devavrat Shah, both Andrew (1956) and Erna Viterbi Professors in EECS and the Institute for Data, Systems, and Society (IDSS), along with IBM Research’s Kristjan Greenewald, transcends specializations. They are developing causal discovery methods to uncover how interventions affect outcomes and to identify which interventions achieve desired results. The studies include a framework that can elucidate how “treatments” may play out for different sub-populations, such as interventions on an e-commerce platform or mobility restrictions on morbidity outcomes. Findings from this body of work could influence fields from marketing and medicine to education and risk management.
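
As a minimal illustration of why effects are estimated per sub-population rather than in aggregate, here is a toy difference-in-means sketch on synthetic data; the lab's causal-discovery framework is considerably more sophisticated, and every name and number below is made up.

```python
import numpy as np

def subgroup_treatment_effects(treated, outcome, group):
    """Difference-in-means effect estimate within each sub-population."""
    effects = {}
    for g in np.unique(group):
        mask = group == g
        t, y = treated[mask], outcome[mask]
        effects[g] = float(y[t == 1].mean() - y[t == 0].mean())
    return effects

# synthetic data where the intervention helps sub-population "A" but not "B"
rng = np.random.default_rng(0)
n = 2000
group = rng.choice(np.array(["A", "B"]), size=n)
treated = rng.integers(0, 2, size=n)
outcome = np.where(group == "A", 2.0, 0.0) * treated + rng.normal(size=n)
print(subgroup_treatment_effects(treated, outcome, group))
# the aggregate estimate hides the heterogeneity visible in the per-group numbers
print(float(outcome[treated == 1].mean() - outcome[treated == 0].mean()))
```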

“Advances in AI and other areas of computing are influencing how people formulate and tackle challenges in nearly every discipline. At the MIT-IBM Watson AI Lab, researchers recognize this cross-cutting nature of their work and its impact, interrogating problems from multiple viewpoints and bringing real-world problems from industry, in order to develop novel solutions,” says Dan Huttenlocher, MIT lab co-chair, dean of the MIT Schwarzman College of Computing, and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

A significant piece of what makes this research ecosystem thrive is the steady influx of student talent and their contributions through MIT’s Undergraduate Research Opportunities Program (UROP), the MIT EECS 6-A Program, and the new MIT-IBM Watson AI Lab Internship Program. Altogether, more than 70 young researchers have not only accelerated their technical skill development but, through guidance and support from the lab’s mentors, gained knowledge in AI domains to become emerging practitioners themselves. This is why the lab continually seeks to identify promising students at all stages in their exploration of AI’s potential.

“In order to unlock the full economic and societal potential of AI, we need to foster ‘useful and efficient intelligence,’” says Sriram Raghavan, IBM Research VP for AI and IBM chair of the lab. “To translate AI promise into progress, it’s crucial that we continue to focus on innovations to develop efficient, optimized, and fit-for-purpose models that can easily be adapted to specific domains and use cases. Academic-industry collaborations, such as the MIT-IBM Watson AI Lab, help drive the breakthroughs that make this possible.”
