Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images.

dc.contributor.author: Luz, Eduardo José da Silva
dc.contributor.author: Silva, Pedro Henrique Lopes
dc.contributor.author: Silva, Rodrigo Pereira da
dc.contributor.author: Silva, Ludmila
dc.contributor.author: Ananias, João Víctor Gomes Guimarães
dc.contributor.author: Miozzo, Gustavo
dc.contributor.author: Moreira, Gladston Juliano Prates
dc.contributor.author: Gomes, David Menotti
dc.date.accessioned: 2022-02-15T16:44:49Z
dc.date.available: 2022-02-15T16:44:49Z
dc.date.issued: 2021
dc.description.abstract:
Purpose: Confronting the COVID-19 pandemic is currently one of the most prominent challenges facing humanity. A key factor in slowing down the propagation of the virus is the rapid diagnosis and isolation of infected patients. The standard method for COVID-19 identification, reverse transcription polymerase chain reaction (RT-PCR), is time-consuming and in short supply due to the pandemic. Researchers have therefore been looking for alternative screening methods, and deep learning applied to chest X-rays of patients has been showing promising results. Despite their success, the computational cost of these methods remains high, which limits their accessibility and availability. The main goal of this work is thus to propose a method for COVID-19 screening in chest X-rays that is accurate yet efficient in terms of memory and processing time.
Methods: To achieve this objective, we propose a new family of models based on the EfficientNet family of deep artificial neural networks, which are known for their high accuracy and low computational footprint. We also exploit the underlying taxonomy of the problem with a hierarchical classifier. A dataset of 13,569 X-ray images divided into healthy, non-COVID-19 pneumonia, and COVID-19 patients is used to train the proposed approaches and 5 other competing architectures. We also propose a cross-dataset evaluation with a second dataset to assess the method's generalization power.
Results: The results show that the proposed approach was able to produce a high-quality model, with an overall accuracy of 93.9%, COVID-19 sensitivity of 96.8%, and positive predictive value of 100%, while having 5 to 30 times fewer parameters than the other tested architectures. Larger and more heterogeneous databases are still needed for validation before claiming that deep learning can assist physicians in the task of detecting COVID-19 in X-ray images, since the cross-dataset evaluation shows that even state-of-the-art models suffer from a lack of generalization power.
Conclusions: We believe the reported figures represent state-of-the-art results, both in terms of efficiency and effectiveness, for the COVIDx database, a database of 13,800 X-ray images, 183 of which are from patients affected by COVID-19. The current proposal is a promising candidate for embedding in medical equipment or even physicians' mobile phones.
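The abstract describes an EfficientNet-based backbone combined with a hierarchical classifier that mirrors the healthy / non-COVID-19 pneumonia / COVID-19 taxonomy. As an illustration only, the minimal PyTorch sketch below shows one way such a setup could look; the torchvision EfficientNet-B0 backbone, the 224x224 input size, the class ordering, and the two-level head layout are assumptions for this sketch, not the authors' released code.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # healthy, non-COVID-19 pneumonia, COVID-19 (assumed ordering)


def build_flat_classifier() -> nn.Module:
    # EfficientNet-B0 pretrained on ImageNet, with its 1000-way head swapped
    # for a 3-way output covering the classes named in the abstract.
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    in_features = model.classifier[1].in_features  # 1280 for the B0 variant
    model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)
    return model


class HierarchicalClassifier(nn.Module):
    # Illustrative two-level scheme mirroring the taxonomy mentioned in the
    # abstract: one head separates healthy from pneumonia, a second head
    # separates COVID-19 from non-COVID-19 pneumonia. Sharing one backbone
    # between the heads is an assumption made here for brevity.
    def __init__(self) -> None:
        super().__init__()
        backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        in_features = backbone.classifier[1].in_features
        backbone.classifier = nn.Identity()  # keep the pooled 1280-d feature vector
        self.backbone = backbone
        self.head_healthy_vs_pneumonia = nn.Linear(in_features, 2)
        self.head_covid_vs_other_pneumonia = nn.Linear(in_features, 2)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.head_healthy_vs_pneumonia(feats), self.head_covid_vs_other_pneumonia(feats)


if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)  # one fake RGB chest X-ray
    print(build_flat_classifier()(dummy).shape)  # torch.Size([1, 3])

Under this hedged layout, an image would be flagged as COVID-19 only when the first head predicts pneumonia and the second head then predicts COVID-19.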
dc.identifier.citation: LUZ, E. J. da S. et al. Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Research on Biomedical Engineering, 2021. Available at: <https://link.springer.com/article/10.1007%2Fs42600-021-00151-6>. Accessed: 25 Aug. 2021.
dc.identifier.doi: https://doi.org/10.1007/s42600-021-00151-6
dc.identifier.issn: 2446-4740
dc.identifier.uri: http://www.repositorio.ufop.br/jspui/handle/123456789/14496
dc.identifier.uri2: https://link.springer.com/article/10.1007%2Fs42600-021-00151-6
dc.language.iso: en_US
dc.rights: restricted
dc.subject: Pneumonia
dc.subject: Chest (X-ray) radiography
dc.title: Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images.
dc.type: Article published in a journal
Files
Original bundle (1 of 1)
Name: ARTIGO_TowardsEffectiveEfficient.pdf
Size: 1.62 MB
Format: Adobe Portable Document Format
License bundle (1 of 1)
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission