Researchers at Dartmouth-Hitchcock Medical Center have created a deep learning model that classifies lung cancer slides on par with three pathologists, according to a small study published in Scientific Reports. The model automatically classifies histological patterns in lung cancer samples to help physicians quickly determine the most appropriate treatment for a patient. The researchers have generated early validation data on the system and now plan to test the technology in clinical settings at other medical centers.
Histological pattern classification of lung cancer is a critical but tricky step in the therapeutic pathway: prognosis, survival, and treatment all depend on classification. However, qualitative assessment criteria and the presence of multiple histological patterns in individual patients make samples difficult to classify, which can lead to considerable disagreement among pathologists.
One study found moderate to good agreement among pulmonary pathologists as measured by the kappa score, a scale that runs from 0 (indicating no agreement) to 1 (indicating absolute agreement). In that study, the kappa score reached 0.72, but other assessments of difficult cases yielded results as low as 0.24, indicating little agreement among the professional pulmonary pathologists who reviewed the slides.
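The kappa statistic corrects raw agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two readers, using hypothetical slide labels (the pattern names and values here are illustrative, not the study's data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical slide-level pattern calls from two readers
a = ["lepidic", "acinar", "solid", "acinar", "lepidic", "solid"]
b = ["lepidic", "acinar", "acinar", "acinar", "lepidic", "papillary"]
print(round(cohen_kappa(a, b), 3))  # 0.538
```

This is why a kappa of 0.24 signals poor agreement even though the raters may still match on many slides: much of that raw agreement is what chance alone would produce.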
The Dartmouth team recently evaluated whether deep learning models could help advance the field. Their model learns to identify regions of cancer cells and aggregates these classifications to infer the histological patterns present on a slide. After training and development on 279 whole-slide images, the team tested the model on 143 slides from the same medical center. The model achieved a kappa score of 0.525 and 66.6% agreement with the three pathologists in classifying major patterns. Inter-rater agreement among the three pathologists themselves was slightly lower, at 0.485 and 62.7%, respectively.
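The aggregation step described above can be sketched as follows. This is a simplified illustration, not the published model: the pattern labels, the 10% coverage threshold, and the rule of reporting the most frequent pattern as "major" are all assumptions for the example.

```python
from collections import Counter

def aggregate_patches(patch_labels, min_fraction=0.1):
    """Roll patch-level predictions up to a slide-level call.

    Any pattern covering at least `min_fraction` of patches is
    reported as present; the most frequent pattern is the major one.
    The threshold and labels are illustrative, not the study's values.
    """
    counts = Counter(patch_labels)
    total = sum(counts.values())
    present = [p for p, c in counts.items() if c / total >= min_fraction]
    major = counts.most_common(1)[0][0]
    return major, sorted(present)

# One slide's hypothetical patch predictions
patches = ["acinar"] * 60 + ["lepidic"] * 25 + ["solid"] * 10 + ["benign"] * 5
major, present = aggregate_patches(patches)
print(major, present)  # acinar ['acinar', 'lepidic', 'solid']
```

The design choice to report minor patterns alongside the major one matters clinically, since lung adenocarcinomas often contain several patterns on the same slide.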
The findings led the researchers to conclude that the model was “statistically comparable to pathologists on all measures of assessment.” Because the model produces results quickly, the researchers believe it could be integrated into laboratory information management systems to suggest pattern diagnoses, or to automatically trigger genetic testing requests based on its analysis.
To fully deliver on this promise, the researchers will need to demonstrate that the model works outside of a test environment. Notably, training and testing used images from a single medical center, and previous research has shown that such models may be less effective when applied to images captured at other facilities. By publicly releasing their code, the Dartmouth researchers have positioned outside teams to find out how the model performs on external datasets.