ellithligraw writes: Last month, a research paper coauthored by dozens of Stanford researchers, which terms some large artificial intelligence models “foundation models,” set off a debate over the future of AI. Stanford has proposed a new research facility to study these so-called foundation models. Critics warn that calling these models “foundations” will “mess up the discourse.”
The debate centers on what Wired calls “colossal neural networks and oceans of data.”
Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter. “I think the term ‘foundation’ is horribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion. Malik acknowledged that one type of model identified by the Stanford researchers — large language models that can answer questions or generate text from a prompt — has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world. “These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded, there is this fakeness, there is no real understanding….”
Subbarao Kambhampati, a professor at Arizona State University, [says] there is no clear path from these models to more general forms of AI…
Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry… “There are all of these other adjacent, really important fields that are just starved for funding,” she says. “Before we throw money into the cloud, I would like to see money going into other disciplines.”