December 16, 2019

Applications of Computer Vision in Healthcare

How AI is Shifting the Healthcare Paradigm

Computer vision and machine learning models are designed to recognize and understand images and data, and to execute actions that only humans were once thought capable of performing. As a result, tech companies such as Facebook and Amazon, as well as convenience stores and automotive companies, have invested billions of dollars in developing computer vision models. But they're not alone. Healthcare is also joining the party!

As AI continues to grow, it is expanding the frontiers of healthcare, augmenting diagnostic and treatment tools, and helping healthcare professionals predict disease outcomes more effectively. This will undoubtedly improve patient care outcomes and reduce avoidable delays in the patient care continuum. Following are some of the key applications of artificial intelligence as a tool for healthcare providers.


Cancer Screening

Computer vision and machine learning algorithms have shown promise in detecting precancerous lesions from the minutest details in tissue imagery, increasing the sensitivity and accuracy of cancer screening tests.

In screening for skin cancer, for instance, AI researchers have developed computer vision models that can analyze images and biopsy specimens of a person’s skin for cancerous changes far faster than a doctor can, and with comparable accuracy.

Diagnosing skin cancer can be challenging because of the subtle variability in the appearance of cancerous skin changes. Using deep convolutional neural networks (CNNs), scientists at the Stanford University Artificial Intelligence Lab trained a model on a dataset of more than 120,000 skin cancer images. The results revealed that this CNN detects and classifies skin cancer as accurately as board-certified dermatologists.
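
To make the approach concrete, here is a minimal sketch of the general technique: fine-tuning a CNN pretrained on everyday images so it can classify skin-lesion photos. This is not the Stanford team's code; the dataset layout, class labels, and hyperparameters below are placeholder assumptions.

```python
# Minimal sketch of transfer learning for skin-lesion classification.
# NOT the Stanford model; it only illustrates fine-tuning a pretrained CNN
# on labeled lesion images. Paths, class count, and settings are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # ImageNet-pretrained CNNs expect 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: skin_images/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("skin_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet, swap in a lesion-class head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a real study trains far longer, with validation
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, studies like this also hold out a validation set and compare the model's sensitivity and specificity directly against clinicians.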

Similarly, computer vision can be applied to breast cancer screening, with algorithms trained to recognize and classify cancerous changes from millions of mammogram images showing both healthy and diagnosed samples. Applied to a new image, these algorithms can detect the most subtle cancerous or precancerous patterns within seconds, offering a valuable supplementary resource for physicians.

At MIT’s Computer Science and Artificial Intelligence Laboratory, researchers working with Massachusetts General Hospital (MGH) have developed a deep learning model that can estimate a patient’s five-year risk of breast cancer from a mammogram image. The model was trained to recognize precancerous tissue patterns in the breast using more than 90,000 mammogram images of MGH patients. With this training, the model detects precancerous breast tissue patterns that are elusive to the human eye.
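
The risk-prediction side of that work follows a familiar pattern: a network emits a single score for "develops cancer within five years," which a sigmoid turns into a probability. The sketch below assumes such a single-output model; it is illustrative, not the MIT/MGH implementation.

```python
# Illustrative only: turning a trained network's output into a risk score.
# Assumes a model with one output logit for a binary five-year outcome label;
# this is the generic risk-prediction pattern, not the MIT/MGH model.
import torch

def five_year_risk(model: torch.nn.Module, mammogram: torch.Tensor) -> float:
    """Return P(cancer within 5 years) for one preprocessed mammogram tensor."""
    model.eval()
    with torch.no_grad():
        logit = model(mammogram.unsqueeze(0))  # add a batch dimension
        return torch.sigmoid(logit).item()     # squash the logit to a probability
```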

Detection of bone cancer has also been made easier with machine learning models. RSIP Vision developed an image-processing technology that can localize areas of primary and secondary cancerous lesions in bone imagery, helping doctors diagnose bone cancers much earlier and faster. This precise localization also helps doctors provide personalized treatment for patients.


Disease Diagnostics

Scientists are applying deep learning and natural language processing (NLP) systems to gather patient information, analyze patients’ responses, and narrow down the diagnosis in a pre-appointment interview. The system sends these findings to the doctor before the patient comes in for the visit.

Ellie, a computer vision program designed by scientists at the Institute for Creative Technologies at the University of Southern California, was built to do exactly this. Ellie asks the patient a series of questions and, using a built-in webcam and sensor, scans the patient’s face and assesses their facial and body movements to formulate probable diagnoses.

By comparing verbal cues as well as subtle facial and body movements against a dataset of thousands of control subjects, Ellie can detect signs of mental health problems, including depression and anxiety. While this program does not replace a human doctor, it provides subtle information that doctors may not easily elicit, improving the diagnosis of certain conditions.
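
As a rough illustration of the kind of visual signal such a system relies on, the sketch below uses OpenCV's stock face detector to track how much a patient's head moves between webcam frames. A real system like Ellie uses far richer gaze, expression, and posture models; this is only the simplest possible version of that pipeline.

```python
# Toy sketch of an Ellie-style visual signal: detect the face in each webcam
# frame and measure frame-to-frame head movement. The Haar cascade file ships
# with OpenCV; everything else here is a deliberately crude simplification.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
prev_center = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:  # track only the first detected face
        center = (x + w // 2, y + h // 2)
        if prev_center is not None:
            # displacement between frames: a crude proxy for head movement
            dx, dy = center[0] - prev_center[0], center[1] - prev_center[1]
            print(f"movement: {(dx**2 + dy**2) ** 0.5:.1f} px")
        prev_center = center
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break
cap.release()
```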

Babylon Health, a British tech startup, is also developing NLP and deep learning systems that use speech and language processing to extract the symptoms and physical findings that are integral to formulating a diagnosis. These data are forwarded to a doctor to review before evaluating the patient.
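
To show what "extracting symptoms" can mean at its simplest, here is a toy lexicon-matching sketch. Babylon's production system is proprietary and far more sophisticated; the terms and symptom codes below are invented purely for illustration.

```python
# Grossly simplified stand-in for symptom extraction: match a patient's
# free-text answer against a small symptom lexicon. The patterns and codes
# are hypothetical; a production system would use learned NLP models.
import re

SYMPTOM_LEXICON = {  # hypothetical phrasing -> canonical symptom code
    r"\b(short(ness)? of breath|can'?t breathe)\b": "dyspnea",
    r"\bchest (pain|tightness)\b": "chest_pain",
    r"\b(tired|fatigued?|exhausted)\b": "fatigue",
}

def extract_symptoms(text: str) -> set[str]:
    """Return the canonical symptoms mentioned in a patient's message."""
    found = set()
    for pattern, code in SYMPTOM_LEXICON.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.add(code)
    return found

print(extract_symptoms("I've felt exhausted all week and get short of breath on stairs"))
# -> {'fatigue', 'dyspnea'}
```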

These algorithms not only use patient data to suggest diagnoses, but they can also provide personalized health information and education in greater detail than a time-pressed doctor typically can.


Surgical Assistance Technology

Scientists are beginning to incorporate machine learning models into the operating room to improve surgical precision and accuracy. These algorithms have already helped surgeons make better decisions during complex surgical procedures.

For instance, RSIP Vision designed an image-processing model that can calibrate, orient, and navigate input images to improve visualization and guide surgical movements during orthopedic procedures. This improves the accuracy of the surgical technique, reduces procedure duration, and improves patient outcomes.

Furthermore, to evaluate intraoperative and postoperative hemorrhage, Gauss Surgical developed a deep learning program, Triton, that estimates blood loss in real time during and after surgery. Triton processes images of blood-stained sponges, suction canisters, surgical drapes, and other materials to estimate the volume of blood a patient has lost, then forwards that information to surgeons to help them make blood transfusion decisions.
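
Conceptually, this is a regression from image features to a measured volume. The back-of-envelope sketch below summarizes how "blood-like" a sponge photo looks and fits a line to hypothetical lab-measured volumes; Triton's actual pipeline is proprietary, and every file name and number here is made up.

```python
# Back-of-envelope illustration of image-based blood-loss estimation:
# summarize the redness of a sponge photo, then map that feature to
# milliliters with a regression fit on labeled examples. All data here
# is hypothetical; Triton's real pipeline is far more sophisticated.
import numpy as np
import cv2
from sklearn.linear_model import LinearRegression

def redness_feature(image_path: str) -> float:
    """Mean 'blood-likeness' of pixels: red channel minus the other channels."""
    img = cv2.imread(image_path).astype(np.float32)  # OpenCV loads as BGR
    b, g, r = cv2.split(img)
    return float(np.mean(np.clip(r - (g + b) / 2, 0, None)))

# Hypothetical labeled sponges: (feature, lab-measured blood volume in mL)
X = np.array([[redness_feature(p)] for p in ["sponge1.jpg", "sponge2.jpg", "sponge3.jpg"]])
y = np.array([12.0, 48.0, 85.0])  # measured mL, invented for this sketch

model = LinearRegression().fit(X, y)
print(f"estimated loss: {model.predict([[redness_feature('new_sponge.jpg')]])[0]:.0f} mL")
```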

A study at the Santa Clara Valley Medical Center evaluating the app’s accuracy in estimating blood loss during surgeries found that it was more accurate than obstetricians’ visual estimates.

Vision processing algorithms are also the basis for robotic surgery. Remotely operated robotic arms stand in for the surgeon’s hands, making extremely fine, precise movements that are impossible for a human hand. During the surgery, the surgeon directs the robot’s movements from an operating console, which may even be in another room.

The underlying computer vision systems process, correct, and calibrate images of the operating room, the patient’s body, and the surgical tools to create a magnified 3D view. By overlaying these images into a single view, the system lets the robot track its own position and the positions of the surgical tools so that it can make accurate movements.
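
A simplified version of that alignment step can be written with standard tools: detect matching feature points in two images, estimate the transform between them, and warp one onto the other. The sketch below does this in 2D with OpenCV; real surgical systems work with calibrated stereo cameras and full 3D registration, and the file names here are placeholders.

```python
# Sketch of the image-alignment step behind such overlays: match features
# between a reference image and a camera frame, estimate a homography, and
# warp the reference into the frame's coordinate system. A 2D toy only.
import cv2
import numpy as np

reference = cv2.imread("preop_scan.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(reference, None)
kp2, des2 = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects bad matches

# Warp the reference into the live view and blend for a simple overlay
aligned = cv2.warpPerspective(reference, H, (frame.shape[1], frame.shape[0]))
overlay = cv2.addWeighted(frame, 0.6, aligned, 0.4, 0)
cv2.imwrite("overlay.png", overlay)
```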


Conclusion

Artificial intelligence is revolutionizing everything: commerce, social interactions, and even healthcare. As more machine learning models are developed, AI will further transform healthcare, creating more efficient diagnostic and treatment systems that improve patient care outcomes.

Interested in learning how Clarifai's revolutionary computer vision AI tools can help you? Talk to an expert today.