
AI and the Health Care Paradigm Shift – Will We Need Doctors in Twenty or Thirty Years?

January 5, 2020 – Imagine the year is 2040, or maybe 2050. When you awaken, the bed that has monitored you through the night can instantly determine the state of your wellness. The clothes you wear keep tabs on your health throughout the day. And should something happen to you, whether a slip, a fall, a bang on the head, an accident, or an environmental insult, your personalized AI medical suite, integrated with your body, will be ready to come to the rescue. Your body will contain swarms of nanobots constantly checking on vitals: detecting and repairing blood chemistry, monitoring and adjusting your endocrine, circulatory, nervous, and lymphatic systems, and intervening to correct autoimmune flare-ups, the appearance of precancerous cells, and bacterial and viral invasions. Even signs of aging and cell death will be monitored and managed by your AI, which will have become an essential part of your person.

We are not there yet, but artificial intelligence (AI) is already helping doctors to help patients. It's only a question of time before the middleman is moved out of the equation.

Today AI in health care is being applied to improve operational efficiency and to decrease the case burden on physicians. This is where machine learning provides distinct advantages, whether in documenting, charting, billing, or, in the case of helping with diagnoses, recognizing patterns that indicate illness much faster and in greater volume than any human can. Pattern recognition means AI tools can be applied to all types of medical practice, from nursing to social work to community health. Pattern recognition technology can observe a patient's behaviours better than most health practitioners. It is always on, closing the care gaps that occur when patients are not being observed by humans. It can help patients stay in their homes rather than occupy hospital beds because it provides around-the-clock data to medical teams without a human presence being necessary. It can even, in the form of robot assistants, provide basic support services and companionship.

Let’s look at some of the most recent reported advances in AI applied to health care.

AI Detects Heart Failure from a Single Heartbeat

There’s a trick that most cardiologists wouldn’t claim to be able to do. A good cardiologist might detect heart failure with a quick look at a patient. Does the patient’s colour look right? Are extremities and fingernails looking bluish? Are there signs of puffiness and swelling in the feet and ankles? But before conclusively diagnosing congestive heart failure (CHF), the doctor would do an electrocardiogram (ECG), a chest x-ray, and have the patient wear a 24-hour Holter monitor to provide a record of heart function in five-minute segments. Imagine, however, if an AI could reach a diagnosis of CHF without all of the above, and do it from a single heartbeat. That could not only save time but also help with early intervention when CHF is detected, a condition that 5 million Americans and 26 million people around the world live with every day.

That’s what researchers at the University of Surrey have developed: a neural network that accurately detects CHF through the analysis of one ECG heartbeat. Dr. Sebastiano Massaro, Associate Professor of Organisational Neuroscience at Surrey, is one of the researchers responsible for creating the AI. In a university news release, he states, “We trained and tested the CNN [Convolutional Neural Network] model on large publicly available ECG datasets featuring subjects with CHF as well as healthy, non-arrhythmic hearts. Our model delivered 100% accuracy: by checking just one heartbeat we are able to detect whether or not a person has heart failure. Our model is also one of the first known to be able to identify the ECG’s morphological features specifically associated with the severity of the condition.”
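To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of the kind of model the Surrey team describes: a small one-dimensional convolutional network that takes a single fixed-length heartbeat segment and outputs a CHF probability. The layer sizes, heartbeat length, and random input data below are illustrative assumptions, not the researchers' actual architecture or dataset.

```python
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    """Binary classifier: one fixed-length ECG heartbeat in, CHF logit out."""
    def __init__(self, beat_length: int = 187):   # beat_length is an assumed segment size
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),   # learn local waveform shapes
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),   # combine them into larger motifs
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (beat_length // 4), 1),          # single logit: CHF vs. healthy
        )

    def forward(self, x):
        # x has shape (batch, 1, beat_length): one segmented heartbeat per sample
        return self.classifier(self.features(x))

model = HeartbeatCNN()
beats = torch.randn(8, 1, 187)            # synthetic stand-ins for real ECG heartbeat segments
probs = torch.sigmoid(model(beats))       # per-beat probability of congestive heart failure
print(probs.squeeze())
```

In practice such a network would first be trained on labelled heartbeats extracted from the public ECG datasets mentioned in the quote before its output probabilities meant anything clinically.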

The benefit of such an efficient diagnosis, one that doesn’t tie up lots of medical resources, cannot be overstated.

Google’s DeepMind AI Helps with Early Detection of Breast Cancer

Breast cancer is the most common cancer among women globally. It is also the second leading cause of cancer death in women. My wife goes through regular screenings for breast cancer, as do most women in North America and Europe. The tests include mammograms, ultrasound, and physical examination. Women are taught to self-examine for early detection. The problem, however, is that screening results are wrong about 20% of the time, and half of the women receiving annual mammograms experience at least one false positive cancer result within a 10-year period.

In the study reported in Nature, published online on January 1, 2020, Google’s DeepMind AI outperformed six radiologists in interpreting mammogram results, reducing false positives by 5.7% and false negatives by 9.4% on the U.S. dataset. At the same time, it reduced the workload of those assigned in the United Kingdom to do second readings of mammogram results, a common practice there, by 88%.

For the purpose of the study, the AI system provided a comprehensive report rather than individual results. But it could be automated to give immediate feedback during screening, alerting radiologists to detected cancers.

Stanford University’s AI Laboratory is Diagnosing Skin Cancer

A neural network developed at Stanford is being used to detect and diagnose skin cancers through image analysis. The researchers trained the convolutional neural network on a dataset of almost 130,000 clinical images representing over 2,000 skin diseases and tested its performance against 21 board-certified dermatologists. The neural network achieved performance on par with the dermatologists.
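The Stanford work relied on transfer learning: a network pretrained on everyday photographs was fine-tuned on the clinical skin images. The snippet below is a minimal sketch of that approach in PyTorch/torchvision, using a ResNet-18 backbone for brevity (the Stanford paper itself used a Google Inception v3 model); the three-class grouping and the preprocessing pipeline are illustrative assumptions rather than the published setup.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from an ImageNet-pretrained backbone and swap the final layer
# for a head sized to the lesion classes of interest.
num_classes = 3   # e.g. benign lesion, carcinoma, malignant melanoma (illustrative grouping)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
backbone.eval()

# Standard ImageNet preprocessing; a real smartphone photo would pass through this.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# image_tensor = preprocess(Image.open("lesion.jpg")).unsqueeze(0)  # with a real photo
dummy = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed lesion photo
probs = torch.softmax(backbone(dummy), dim=1)  # class probabilities (untrained head: meaningless until fine-tuned)
print(probs)
```

Fine-tuning the replaced head (and optionally the backbone) on labelled lesion images is what turns this generic image classifier into a skin-cancer detector.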

Stanford’s goal is to turn the AI detection tool into a smartphone application capable of taking an image of a suspicious skin lesion and competently detecting carcinomas and malignant melanomas.

An AI Detects Alzheimer’s Six Years Before Diagnosis

Early detection of Alzheimer’s is critical to treatment that can stem the progression of the disease. By the time clinical symptoms start manifesting themselves, too many of the brain’s neurons have often already died, making the damage irreversible.

But an AI machine-learning algorithm, trained to analyze a massive dataset of PET scans from patients eventually diagnosed with Alzheimer’s, mild cognitive impairment, or no disorder at all, successfully identified between 92 and 98% of the patients who went on to develop the disease. The diagnosis came more than six years before doctors confirmed the disease.
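The Radiology study fed whole FDG-PET brain images to a deep convolutional network. As a much simplified, hypothetical illustration of the underlying idea, learning to separate future Alzheimer’s patients from others using imaging-derived features, the scikit-learn sketch below classifies patients from synthetic regional uptake values; the data, feature set, and classifier choice are assumptions for demonstration only, not the study’s method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a PET dataset: each row is mean FDG uptake in a set of brain
# regions; the label is whether the patient was later diagnosed with Alzheimer's.
# (The real study used thousands of whole PET images and a deep network.)
rng = np.random.default_rng(0)
n_patients, n_regions = 400, 30
X = rng.normal(size=(n_patients, n_regions))
y = (X[:, :5].mean(axis=1) + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)   # learn uptake patterns that precede diagnosis
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

The point of the sketch is the workflow, not the numbers: the model is scored on held-out patients, mirroring how the published algorithm was validated on scans it had never seen.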

The study was first published in Radiology in 2018. Sixteen studies since then have looked at a number of AI deep learning and machine learning algorithms aimed at early diagnosis of Alzheimer’s with accuracies over 98%.

For Alzheimer’s research, AI represents a diagnostic breakthrough: early detection may lead to much better treatment and even remission of the disease.


AI will have a profound influence on medicine over the next few decades and is already showing its merits with neural networks, deep learning, and machine learning algorithms being used in the diagnosis of heart failure, breast and skin cancer, and Alzheimer’s.
Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
