AI and providing the gold standard of authoritative medical information

Will AI prevent a combustible mixture of ignorance and power blowing up in our faces?

In his final interview in 1996, the late, great science communicator Carl Sagan said, “… we've arranged a society based on science and technology in which nobody understands anything about science and technology. And this combustible mixture of ignorance and power sooner or later is going to blow up in our faces.” He goes on to point out that this will leave society vulnerable, “to the next charlatan to come along”.

Can we rely on the experts?

It would seem that misunderstanding of science goes far beyond the general public; it also affects people who are supposed to know better. In a recent test at Harvard Medical School, four out of five doctors could not answer a simple question from introductory statistics. The question was:

Q. If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing else about the person’s symptoms or signs?

This wasn’t a particularly difficult question; even so, fewer than a fifth of participants gave the correct answer, and most thought that the hypothetical patient had a 95% chance of having the disease.

Diagnostic pathologist Dr Clare Craig kindly provides the correct answer as follows:

A. For those interested in the correct answer... Imagine testing 1,000 patients. 5% will be false positive = 50, and 1 will be a true positive. Therefore, the chance that any one patient with a positive test result has the disease is 1 in 50 or 2%.
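Dr Craig’s back-of-envelope reasoning can be checked with a short Bayes’ theorem calculation. This is an illustrative sketch only: the question gives no sensitivity figure, so a sensitivity of 100% is assumed here.

```python
# Positive predictive value (PPV) of a positive test result, via Bayes' theorem.
# Assumption (not stated in the question): sensitivity = 100%.
prevalence = 1 / 1000       # 1 in 1,000 people have the disease
sensitivity = 1.0           # assumed: the test catches every true case
false_positive_rate = 0.05  # 5% of disease-free people test positive anyway

# Probability of any positive result = true positives + false positives.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Of all positives, the fraction that are real cases.
ppv = (sensitivity * prevalence) / p_positive

print(f"Chance a positive result is a true case: {ppv:.1%}")  # ~2%
```

The exact figure is 1 in 50.95, which rounds to the “1 in 50, or 2%” in Dr Craig’s answer.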

Fear mongering rates of disease

So, going by the test alone, the vast majority of doctors would conclude there were an awful lot of diseased patients, when in fact each positive patient had a 98% chance of not being diseased. Once you see how these figures are calculated, and consider the medical implications of getting them wrong, the importance of understanding basic concepts around test accuracy, reproducibility, sensitivity and specificity when diagnosing illness becomes clear.

For example, if a test is quoted (as in this instance) as having a specificity of 95%, you might believe it is quite a good test. However, by definition, 5% of results from disease-free people will be false positives. When testing large numbers, or at population level, this becomes increasingly pertinent: roughly 5,000 people per 100,000 tested would receive false positive results. So, test outcomes alone would not confirm, or even indicate, the number of “cases”. (There are additional factors, such as whether a positive result reflects viable organisms or merely dead material, but that is a whole different subject area.)
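The population-level arithmetic above can be sketched the same way. The figures below assume a specificity of 95% and a prevalence of 1/1000, as in the text; the article’s round figure of 5,000 effectively treats the whole population as disease-free, while the exact count is slightly lower.

```python
# Expected false positives when screening a large population.
# Assumptions from the text: specificity = 95%, prevalence = 1/1000.
tested = 100_000
prevalence = 1 / 1000
specificity = 0.95

healthy = tested * (1 - prevalence)            # 99,900 disease-free people
false_positives = healthy * (1 - specificity)  # 5% of them still test positive
true_cases = tested * prevalence               # 100 people actually have the disease

print(f"False positives per {tested:,} tested: {false_positives:,.0f}")
print(f"Actual cases: {true_cases:.0f}")
```

With only 100 real cases against roughly 4,995 false alarms, a raw count of positive tests overstates “cases” by a factor of about fifty.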

Harmful doctors?

Getting the diagnosis right is fundamental, as all evidence-based medicine should be underpinned by accurate diagnostics. If the diagnosis is wrong, the subsequent treatment is not just likely to be inappropriate, but potentially harmful (iatrogenic). All doctors know this, yet that knowledge seemingly does not prevent high levels of mistakes and inappropriate care.

According to PubMed, “Iatrogenic disease is the result of diagnostic and therapeutic procedures undertaken on a patient. With the multitude of drugs prescribed to a single patient adverse drug reactions are bound to occur. The Physician should take suitable steps to detect and manage them”.

One of the basic principles in medical treatment stated by Hippocrates is “First do no harm”. Stories of medical remedies causing more harm than good have been recorded from time immemorial. An iatrogenic disorder occurs when the deleterious effects of the therapeutic or diagnostic regimen causes pathology (illness) independent of the condition for which the regimen is advised.

If you don’t think this is a big issue, published estimates indicate that up to 33.9% of patients coming into hospital are there for iatrogenic reasons. So, something is clearly not right.

When Dr Clare Craig was asked about the implications for AI of incorrect diagnostic interpretations, she answered with the highly succinct reply, “Dire”.

Revealing systemic issues in medical research

As if things couldn’t get much worse, computational epidemiologist John W. Ayers @JohnWAyersPhD goes on to point out even greater flaws in the system, especially around the accuracy of published articles and their peer review. He says the real question that matters is, "How intelligent is research at the highest level?"

“We reviewed a random sample of studies appearing in JAMA, NEJM, Lancet, AM J of Public Health.... Most don't bother to even interpret their statistical results but when they do 57% WRONGLY interpret the statistics (not the meaning of the study, the factual statistics). Taken together about 10% of studies in the top journals provide an accurate interpretation of their findings when using the most common statistic in medicine: Odds ratios. The @NIH should be addressing this as their number [one] priority.”

It cannot be stressed sufficiently that AI can only utilise existing human information, and so humans need to get this right.

These comments and observations highlight concerns about AI's reliance on medical research, given the claim that only around 10% of top medical journal articles accurately interpret their statistics. Flawed inputs mean potentially flawed AI outputs in healthcare: peer-review failures and statistical misinterpretations could undermine AI's ability to support accurate medical decisions. More broadly, AI's potential to revolutionise healthcare is tempered by the need for robust, accurate data to avoid repeating human errors and to ensure ethical application, especially around the principle of “First do no harm”.

How does this relate to informed consent and patient trust?

Informed consent in AI-driven healthcare requires patients to understand how AI tools work and the risks involved. However, if AI is trained on unreliable or misinterpreted data as previously highlighted, next generation systems and physicians may struggle to provide accurate information. Patients may feel misled or distrust AI tools, especially if outcomes are poor due to flawed foundations.

The gold standard of medical information

How do we achieve the gold standard of medical information? Medical authority starts with genuine expertise. True medical information should come directly from qualified healthcare professionals with extensive clinical experience and academic credentials. Information and articles must be sourced from senior consulting physicians who are highly experienced leaders in their respective fields, ensuring insights based on observation, facts and data, drawn from a wide range of practitioners at the forefront of medical practice.

Dangers of political pressure on facts and data

More recently, following the release of Meta’s internal communications, Mark Zuckerberg distanced himself from Reuters and other so-called ‘fact checkers’, citing bias and mistakes. In January this year he said, “The fact checkers have been too politically biased and destroyed more trust than they created”.

As widely reported in February 2025, Yale researchers found COVID spike protein in blood 709 days after vaccination. This finding posits that millions of so-called Long COVID patients may in fact be vaccine injured. Reporting on the study, investigative journalist Paul Thacker said he spoke to one of the study authors, gastroenterologist Dr Danice Hertz, to get her views on fake fact checks and why so many medical professionals have politicised the issue of vaccines. “Up until this moment, vaccine injury was not acknowledged,” Hertz told me. “And there’s been a concerted effort to cover it up.”

Thacker goes on to point out that a couple of weeks after Zuckerberg cut ties in January with biased fact checkers like Reuters, Zuckerberg sat for an interview with Joe Rogan and complained that Meta had been pressured to censor anything on vaccine side effects. “Who’s pressuring you take down things that talk about vaccine side effects?” Rogan asked. “It was people in the Biden administration,” Zuckerberg replied.

Can AI help return us to scientific rigour and trust?

Returning to Dr. Craig’s post: she suggests the implications of medical ignorance, compounded by poor peer review, are “dire” for AI’s role in healthcare, potentially leading to widespread scepticism. The point she and other like-minded medical professionals make is that unless there are fundamental changes to medical education (for both doctors and patients), publishing and the peer-review process, serious medical risks follow: informed consent will become a thing of the past, patients’ right to make informed choices will be increasingly challenged, and trust will continue to evaporate.

The internet has become a fantastic source of information; however, the quality of that information varies significantly. On a more positive note, AI may well prove better than humans at navigating bad information and misinformation, helping everyone find authoritative medical information in the digital age.

 
