

  • Posted March 24, 2026

AI Gets a 'D' When Judging Scientific, Medical Claims

Folks who rely on chatbots for their scientific and medical info, be forewarned — artificial intelligence (AI) gets a "D" when it’s asked to evaluate whether a claim is true or false, a new study says.

ChatGPT’s accuracy in assessing scientific claims was only about 60% once corrected for random guessing, a score that would earn it a low "D" in a classroom, researchers recently reported in Rutgers Business Review.

“We're not just talking about accuracy, we're talking about inconsistency, because if you ask the same question again and again, you come up with different answers,” said lead researcher Mesut Cicek, a professor of marketing and international business at Washington State University in Pullman, Washington.

“We used 10 prompts with the same exact question. Everything was identical,” Cicek said in a news release. “It would answer true. Next, it says it’s false. It’s true, it’s false, false, true. There were several cases where there were five true, five false.”

For the new study, researchers fed more than 700 claims into ChatGPT and asked it to judge whether each statement was true or false, based on all prior research.

The AI program answered correctly about 80% of the time, but its score dropped to 60% after researchers corrected for random guessing, since a wild guess on a true/false question has a 50-50 chance of being right.
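The article does not give the exact formula the researchers used, but a standard way to adjust for guessing is a linear chance correction, which rescales raw accuracy so that pure guessing counts as zero. A minimal sketch, assuming that simple linear correction:

```python
def chance_corrected(raw_accuracy: float, chance_rate: float = 0.5) -> float:
    """Rescale raw accuracy so guessing maps to 0.0 and perfection maps to 1.0.

    For a true/false task, chance_rate is 0.5 (a coin flip is right half the time).
    """
    return (raw_accuracy - chance_rate) / (1 - chance_rate)

# 80% raw accuracy on true/false questions works out to 60% above chance,
# matching the figures reported in the article.
print(round(chance_corrected(0.80), 2))  # 0.6
```

With this correction, a model that always guessed randomly would score 0%, which is why 80% raw accuracy looks far less impressive once the coin-flip baseline is subtracted out.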

The results reinforce the need to apply skepticism and caution when using AI, especially in tasks involving nuance or complicated reasoning, researchers said.

Chatbots’ facility with language masks AI’s lack of conceptual intelligence, the team concluded.

AI can produce fluent, convincing language, but its ability to reason through complex questions falls short because it can’t actually “think,” researchers said.

As a result, AI can deliver persuasive explanations for incorrect answers, potentially misleading the people who rely on it, researchers warned.

“Current AI tools don't understand the world the way we do — they don't have a 'brain,'” Cicek said. “They just memorize, and they can give you some insight, but they don't understand what they’re talking about.”

Cicek’s advice?

“Always be skeptical,” he said. “I'm not against AI. I’m using it. But you need to be very careful.”

More information

MIT has more on AI hallucinations and bias.

SOURCE: Washington State University, news release, March 16, 2026

Health News is provided as a service to New Hope Pharmacy site users by HealthDay. Neither New Hope Pharmacy nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2026 HealthDay All Rights Reserved.
