FDA's New Drug Approval AI Is Generating Fake Studies: Report


Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs “very, very quickly.” But a new report from CNN confirms all our worst fears. Elsa, the FDA’s AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA. Three of them said Elsa has been helpful for tasks like creating meeting notes and summaries. But the other three told CNN that Elsa makes up nonexistent studies, something commonly referred to in AI as “hallucinating.” The AI will also misrepresent research, according to these employees.

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” one unnamed FDA employee told CNN.

And that’s the big problem with all AI chatbots. They need to be double-checked for accuracy, often creating even more work for the human behind the computer if they care about the quality of their output at all. People who insist that AI actually saves them time are often fooling themselves, with one recent study of programmers showing that tasks took 20% longer with AI, even among people who were convinced they were more efficient.

Kennedy’s Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn’t even exist, with many more misrepresenting what was actually said in a given study. We still don’t know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was “cost-effective,” only costing $12,000 in its first week. Makary said that Elsa was “ahead of schedule and under budget” when he first announced the AI rollout. But it seems like you get what you pay for. If you don’t care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S.

CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there’s no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there’s something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report.

The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being “sorry” doesn’t really fix anything.

We still don’t know the workflow being deployed when Kennedy says AI will allow the FDA to approve new drugs, but he testified in June to a House subcommittee that it’s already being used to “increase the speed of drug approvals.” The secretary, whose extremist anti-vaccine beliefs didn’t keep him from becoming a public health leader, seems intent on injecting unproven technologies into mainstream science.

Kennedy also testified to Congress that he wants every American to be strapped with a wearable health device within the next four years. As it happens, President Trump’s pick for Surgeon General, Casey Means, owns a wearables company called Levels that monitors glucose levels in people who aren’t diabetic. There’s absolutely no reason that people without diabetes need to constantly monitor their glucose levels, according to experts. Means, a close ally of Kennedy, has not yet been confirmed by the Senate.

The FDA didn’t respond to questions emailed on Wednesday about what the agency is doing to address Elsa’s fake study problem. Makary acknowledged to CNN that Elsa could “potentially hallucinate,” but said that’s “no different” from other large language models and generative AI. And he’s not wrong on that. The problem is that AI is not fit for purpose when it’s consistently just making things up. But that won’t stop folks from continuing to believe that AI is somehow magic.
