Deep neural networks identify tumours

A team of artificial intelligence researchers has developed a new deep learning method to identify and segment tumours in medical images. The software makes it possible to automatically analyze several medical imaging modalities. Through a self-learning process inspired by the way the neurons of the brain work, it can automatically identify liver tumours, outline the prostate for radiation therapy and count cells at the microscopic level with a performance similar to that of an expert human eye.

This algorithm, presented in the February issue of the journal Medical Image Analysis by researchers from the University of Montreal Hospital Research Centre (CRCHUM) and Polytechnique Montréal – in collaboration with the Montreal Institute for Learning Algorithms (MILA) and Imagia – is proof of the power of deep learning in biomedical engineering. The project, entitled “Liver Cancer Detection using recent advances in deep learning,” began in 2015 with the support of a grant from MEDTEQ.

“We have developed software that could be incorporated into visualization tools to help doctors in the advanced analysis of different medical imaging modalities. The algorithm automates image preprocessing, detection and segmentation (delimitation) tasks that are not currently done because they are too time-consuming for human beings. Our model is very versatile and works for liver CT scans, magnetic resonance images (MRI) of the prostate and electron microscope images of cells,” explained CRCHUM researcher, Polytechnique Montréal professor and senior author of the study Samuel Kadoury.

Take the example of a patient with liver cancer. Currently, when the patient undergoes a CT scan, the image has to be standardized and normalized before a radiologist can read it. This preprocessing step is far from trivial. “Grey shades have to be adjusted, because the images are often too dark or too light to distinguish tumours. This adjustment, made with computer-aided diagnosis (CAD) techniques, is not perfect, and lesions are sometimes missed or falsely detected. That is what led to the idea of improving viewing with a computer. The new deep learning technique eliminates this preprocessing step,” explained CHUM radiologist, Université de Montréal professor and study co-author Dr An Tang.
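For readers curious what such an adjustment looks like, here is a minimal sketch of the traditional grey-level (“window/level”) normalization applied to CT images before they are read; the window centre and width below are illustrative values, not parameters from the study:

```python
import numpy as np

def window_ct(hu_image: np.ndarray, center: float = 60.0, width: float = 160.0) -> np.ndarray:
    """Clip a CT image (in Hounsfield units) to a window and rescale to [0, 1].

    A liver window around center 60 HU, width 160 HU is a common choice,
    but these values are illustrative, not taken from the study.
    """
    low, high = center - width / 2, center + width / 2
    clipped = np.clip(hu_image, low, high)
    return (clipped - low) / (high - low)

# Example: a synthetic 4x4 "scan" spanning air (-1000 HU) to bone (+400 HU)
scan = np.array([[-1000, -50,  40,  80],
                 [   20,  60, 120, 400],
                 [ -200,   0,  55,  90],
                 [   30,  70, 100, 150]], dtype=float)
print(window_ct(scan))
```

A window that is too wide or centred on the wrong tissue is exactly what makes images “too dark or too light”; the deep learning approach described here removes the need to hand-tune these values.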

How did artificial intelligence engineers design this intelligent imaging software to view abnormalities in the human body?

“Coming up with machine learning architectures specifically designed for clinical contexts is a real challenge. So Imagia took the initiative of creating a synergy between the medical world and the artificial intelligence research world and brought together teams with strong expertise in each area,” explained Lisa Di Jorio, head of academic collaboration at Imagia and a co-author of the study.

“We came up with the idea of combining two types of convolutional neural networks that complement one another to achieve an optimized image segmentation method. The first network takes raw biomedical data and learns an optimal normalization of it. The second network uses the output of the first to model segmentation maps,” explained Michal Drozdzal, the study’s first author, a former postdoctoral fellow at Polytechnique jointly supervised by Imagia, and now a research scientist at Facebook AI Research in Montreal.

A neural network is a complex series of computer operations that allows the computer to learn by itself when fed large quantities of examples. Convolutional neural networks (CNNs) work a bit like our visual cortex, piling up several layers of processing to produce a result – an image – as output. They can be pictured as a stack of building blocks. There are several types of neural networks, each with a different architecture.
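As a rough illustration of this stack-of-building-blocks idea (a toy network, not the architecture from the study), a convolutional network can be written in PyTorch as a pile of layers that takes an image in and gives an image out:

```python
import torch
import torch.nn as nn

# A toy convolutional "stack": each block is one layer of processing,
# and the blocks are piled on top of one another.
toy_cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # detect simple local patterns
    nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),   # combine them into richer ones
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),   # map back to a one-channel image
)

x = torch.randn(1, 1, 64, 64)   # one fake 64x64 greyscale image
y = toy_cnn(x)
print(y.shape)                  # torch.Size([1, 1, 64, 64]): an image comes out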

The researchers combined two neural networks: a fully convolutional network (FCN) and a fully convolutional residual network (FC-ResNet).
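The following is a heavily simplified sketch of this two-stage design, with toy layer sizes chosen for illustration rather than the published FCN and FC-ResNet architectures; the point is only that the first network’s output feeds directly into the second:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: the layers learn a correction added to the input."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))  # the skip connection

# First network: a small fully convolutional net that maps raw intensities
# to a learned, normalized representation (illustrative stand-in for the FCN).
normalizer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Second network: a fully convolutional residual net that turns the
# normalized image into a per-pixel segmentation map (stand-in for FC-ResNet).
segmenter = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    ResidualBlock(16),
    ResidualBlock(16),
    nn.Conv2d(16, 1, 3, padding=1),
    nn.Sigmoid(),  # per-pixel probability of "lesion"
)

raw = torch.randn(1, 1, 128, 128)   # a raw single-channel scan slice
mask = segmenter(normalizer(raw))   # output of network 1 feeds network 2
print(mask.shape)                   # torch.Size([1, 1, 128, 128])
```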

The new algorithm had to be trained to detect lesions on its own, explained Kadoury, who is also the Canada Research Chair in Medical Imaging and Assisted Interventions. “We feed the computer thousands of examples of lesions that were manually identified by human beings. That’s what we call the gold standard. Then, the neural networks correct their predictions through an iterative process, and the algorithm ends up learning how to recognize the image on its own.”
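In code, this kind of supervised training amounts to repeatedly comparing the network’s output to the expert-drawn masks and nudging the weights to reduce the error. The sketch below uses a toy model and random stand-in data, not the study’s images or training setup:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-pixel error against the expert mask

# Stand-in for thousands of (scan, expert-drawn mask) pairs: the "gold standard".
scans = torch.randn(16, 1, 64, 64)
masks = (torch.rand(16, 1, 64, 64) > 0.9).float()

for epoch in range(5):                    # repeat over the examples
    optimizer.zero_grad()
    predicted = model(scans)
    loss = loss_fn(predicted, masks)      # how far from the expert labels?
    loss.backward()                       # propagate the correction
    optimizer.step()                      # adjust the network's weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```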

The researchers compared the results obtained by their algorithm with those of other algorithms. “We see visually that our algorithm performs as well as, if not better than, other algorithms, and is very close to what a human being would do if they had hours to segment many images. Eventually, our algorithm could be used to standardize images from different hospital centres,” asserted Kadoury.
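Segmentation algorithms are commonly scored against expert outlines with overlap measures such as the Dice coefficient; whatever the exact protocol used in the study, the metric itself is simple to compute:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total else 1.0

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # expert's outline
pred  = np.zeros((8, 8), dtype=int); pred[3:7, 2:6]  = 1   # algorithm's outline
print(f"Dice: {dice_score(pred, truth):.2f}")              # Dice: 0.75
```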
