At the start of the pandemic, remembers MIT Technology Review’s senior editor for AI, the AI community “rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines — in theory.”

“In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.”

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Laure Wynants, an epidemiologist at Maastricht University in the Netherlands who studies predictive tools, is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing. “It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use. “This pandemic was a big test for AI and medicine,” says Driggs, who is himself working on a machine-learning tool to help doctors during the pandemic. “It would have gone a long way to getting the public on our side,” he says. “But I don’t think we passed that test….”

If there’s an upside, it is that the pandemic has made it clear to many researchers that the way AI tools are built needs to change. “The pandemic has put problems in the spotlight that we’ve been dragging along for some time,” says Wynants.

The article suggests researchers collaborate on creating high-quality (and shared) data sets — possibly by adopting a common data standard — and also disclose their final models and training protocols for review and extension. “In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results.

“To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises.”

Read more of this story at Slashdot.