Browsing by Subject "Deep learning"
Now showing 1 - 10 of 36
A deep learning framework for automated classification of histopathological kidney whole-slide images (Elsevier, 2022-04-18)
Abdeltawab, Hisham A.; Khalifa, Fahmi A.; Ghazal, Mohammed A.; Cheng, Liang; El-Baz, Ayman S.; Gondim, Dibson D.; Pathology and Laboratory Medicine, School of Medicine
Background: Renal cell carcinoma is the most common type of malignant kidney tumor and is responsible for 14,830 deaths per year in the United States. Among the four most common subtypes of renal cell carcinoma, clear cell renal cell carcinoma has the worst prognosis, while clear cell papillary renal cell carcinoma appears to have no malignant potential. Distinguishing these two subtypes can be difficult due to morphologic overlap in histopathological preparations stained with hematoxylin and eosin. Ancillary techniques, such as immunohistochemistry, can be helpful, but they are not universally available. We propose and evaluate a new deep learning framework for tumor classification tasks that distinguishes clear cell renal cell carcinoma from clear cell papillary renal cell carcinoma. Methods: Our deep learning framework is composed of three convolutional neural networks. We divided whole-slide kidney images into patches of three different sizes, where each network processes a specific patch size. Our framework provides both patchwise and pixelwise classification. The histopathological kidney dataset comprises 64 image slides belonging to 4 categories: fat, parenchyma, clear cell renal cell carcinoma, and clear cell papillary renal cell carcinoma. The final output of our framework is an image map in which each pixel is classified into one class. To maintain consistency, we processed the map with Gauss-Markov random field smoothing. Results: Our framework succeeded in classifying the four classes and showed superior performance compared to well-established state-of-the-art methods (pixel accuracy: 0.89 ResNet18, 0.92 proposed).
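The multi-scale decomposition this framework describes (one network per patch size) can be sketched as plain grid tiling. The patch sizes below are illustrative assumptions, not values taken from the paper:

```python
def tile_patches(height, width, patch_size):
    """Top-left (y, x) coordinates of non-overlapping patches that fit the image."""
    return [(y, x)
            for y in range(0, height - patch_size + 1, patch_size)
            for x in range(0, width - patch_size + 1, patch_size)]

# Hypothetical sizes; the abstract only says "three different sizes".
PATCH_SIZES = (256, 512, 1024)

def multiscale_patches(height, width, sizes=PATCH_SIZES):
    """One patch grid per scale; each grid would feed its own network."""
    return {s: tile_patches(height, width, s) for s in sizes}
```

Each scale's grid covers the same slide at a different granularity, which is what lets the framework combine coarse context with fine detail.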
Conclusions: Deep learning techniques have significant potential for cancer diagnosis.

A Patch-Wise Deep Learning Approach for Myocardial Blood Flow Quantification with Robustness to Noise and Nonrigid Motion (IEEE, 2021)
Youssef, Khalid; Heydari, Bobby; Rivero, Luis Zamudio; Beaulieu, Taylor; Cheema, Karandeep; Dharmakumar, Rohan; Sharif, Behzad; Medicine, School of Medicine
Quantitative analysis of dynamic contrast-enhanced cardiovascular MRI (cMRI) datasets enables the assessment of myocardial blood flow (MBF) for objective evaluation of ischemic heart disease in patients with suspected coronary artery disease. State-of-the-art MBF quantification techniques use constrained deconvolution and are highly sensitive to noise and motion-induced errors, which can lead to unreliable outcomes in the setting of high-resolution MBF mapping. To overcome these limitations, recent iterative approaches incorporate spatial-smoothness constraints to tackle pixel-wise MBF mapping. However, such iterative methods require a computational time of up to 30 minutes per acquired myocardial slice, which is a major practical limitation. Furthermore, they cannot enforce robustness to the residual nonrigid motion that can occur in clinical stress/rest studies of patients with arrhythmia. We present a non-iterative patch-wise deep learning approach for pixel-wise MBF quantification wherein local spatio-temporal features are learned from a large dataset of myocardial patches acquired in clinical stress/rest cMRI studies. Our approach is scanner-independent, computationally efficient, robust to noise, and has the unique feature of robustness to motion-induced errors.
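The patch-wise idea here is that each pixel's input is a small spatio-temporal neighborhood gathered across all time frames. A minimal sketch of that data layout, under assumed toy dimensions (not the paper's implementation):

```python
def spatiotemporal_patch(series, y, x, half=1):
    """Flatten the (2*half+1)^2 spatial neighborhood of pixel (y, x)
    across every frame in `series` (a list of 2D frames stored as lists).

    The result is one feature vector per pixel -- the kind of local input
    a patch-wise regressor could map directly to a blood-flow value.
    """
    patch = []
    for frame in series:
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                patch.append(frame[y + dy][x + dx])
    return patch
```

Because each prediction depends only on a local neighborhood, inference over a slice is a single non-iterative pass, which is consistent with the speedup the abstract reports.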
Numerical and experimental results obtained using real patient data demonstrate the effectiveness of our approach. Clinical relevance: the proposed patch-wise deep learning approach significantly improves the reliability of high-resolution myocardial blood flow quantification in cMRI by improving its robustness to noise and nonrigid myocardial motion, and is up to 300-fold faster than state-of-the-art iterative approaches.

Advanced natural language processing and temporal mining for clinical discovery (2015-08-17)
Mehrabi, Saeed; Jones, Josette F.; Palakal, Mathew J.; Chien, Stanley Yung-Ping; Liu, Xiaowen; Schmidt, C. Max
There has been a vast and growing amount of healthcare data, especially with the rapid adoption of electronic health records (EHRs) following the HITECH Act of 2009. It is estimated that around 80% of the clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered opportunities to extract information from unstructured clinical texts needed for various clinical applications. A popular method for enabling secondary uses of EHRs is information or concept extraction, a subtask of NLP that seeks to locate and classify elements within text based on context. Extracting clinical concepts without considering context has many complications, including inaccurate diagnosis of patients and contamination of study cohorts. Identifying negation status, and whether a clinical concept belongs to the patient or to a family member, are two of the challenges faced in context detection. A negation algorithm called Dependency Parser Negation (DEEPEN) was developed in this research study by taking into account the dependency relationship between negation words and concepts within a sentence, using the Stanford Dependency Parser.
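The dependency-based intuition behind such an algorithm can be illustrated with a toy check over parsed edges: a concept counts as negated when a `neg` relation attaches somewhere along its chain of syntactic heads. This is a simplified illustration, not the DEEPEN algorithm itself:

```python
def is_negated(concept_idx, edges):
    """Toy negation check over dependency edges (head_index, relation, dep_index).

    Walks from the concept token up its chain of heads and reports True
    if any node on that chain governs a 'neg' dependent.
    """
    heads = {dep: head for head, rel, dep in edges}
    negated_heads = {head for head, rel, dep in edges if rel == "neg"}
    node, seen = concept_idx, set()
    while node is not None and node not in seen:
        if node in negated_heads:
            return True
        seen.add(node)
        node = heads.get(node)
    return False
```

For "No evidence of pneumonia" (tokens indexed 0-3), the `neg` edge attaches to "evidence", which heads "pneumonia", so the concept is flagged as negated even though the negation word is not adjacent to it; that distance-independence is what a purely lexical window-based rule misses.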
The study results demonstrate that DEEPEN can reduce the number of incorrect negation assignments for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. Additionally, an NLP system consisting of section segmentation and relation discovery was developed to identify patients' family history. To assess the generalizability of the negation and family history algorithms, data from a different clinical institution was used in both algorithm evaluations.

AI in Medical Imaging Informatics: Current Challenges and Future Directions (IEEE, 2020-07)
Panayides, Andreas S.; Amini, Amir; Filipovic, Nenad D.; Sharma, Ashish; Tsaftaris, Sotirios A.; Young, Alistair; Foran, David; Do, Nhan; Golemati, Spyretta; Kurc, Tahsin; Huang, Kun; Nikita, Konstantina S.; Veasey, Ben P.; Zervakis, Michalis; Saltz, Joel H.; Pattichis, Constantinos S.; Biostatistics & Health Data Science, School of Medicine
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented.
In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

Assessment of Deep Learning Methods for Differentiating Autoimmune Disorders in Ultrasound Images (Medical University Publishing House Craiova, 2021)
Vasile, Corina Maria; Udriştoiu, Anca Loredana; Ghenea, Alice Elena; Padureanu, Vlad; Udriştoiu, Ştefan; Gruionu, Lucian Gheorghe; Gruionu, Gabriel; Iacob, Andreea Valentina; Popescu, Mihaela; Medicine, School of Medicine
At present, deep learning is becoming an important tool in medical image analysis, with good performance in diagnosis, pattern detection, and segmentation. Ultrasound imaging offers an easy and rapid method to detect and diagnose thyroid disorders. With the help of a computer-aided diagnosis (CAD) system based on deep learning, thyroid ultrasound images can be diagnosed in real time and non-invasively. This paper presents a study based on deep learning with transfer learning for differentiating thyroid ultrasound images, using image pixels and diagnosis labels as inputs. We trained, assessed, and compared two pre-trained models (VGG-19 and Inception v3) using a dataset of two types of thyroid ultrasound images: autoimmune and normal. The training dataset consisted of 615 thyroid ultrasound images, of which 415 were diagnosed as autoimmune and 200 as normal. The models were assessed using a test dataset of 120 images, of which 80 were diagnosed as autoimmune and 40 as normal.
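Overall accuracy, sensitivity, and specificity of the kind reported for such binary classifiers all derive from the four confusion-matrix counts. A minimal sketch, using made-up counts for a 120-image test set (not the paper's actual confusion matrix):

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Illustrative counts only: 80 autoimmune (positive), 40 normal (negative).
m = binary_metrics(tp=78, fn=2, tn=39, fp=1)
```

Reporting sensitivity and specificity separately matters here because the test set is imbalanced (80 vs. 40 images), so overall accuracy alone could mask weak performance on the smaller class.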
The two deep learning models obtained very good results: the pre-trained VGG-19 model achieved an overall test accuracy of 98.60%, with an overall specificity of 98.94% and an overall sensitivity of 97.97%, while the Inception v3 model achieved an overall test accuracy of 96.4%, with an overall specificity of 95.58% and an overall sensitivity of 95.58%.

BrcaSeg: A Deep Learning Approach for Tissue Quantification and Genomic Correlations of Histopathological Images (Elsevier, 2021)
Lu, Zixiao; Zhan, Xiaohui; Wu, Yi; Cheng, Jun; Shao, Wei; Ni, Dong; Han, Zhi; Zhang, Jie; Feng, Qianjin; Huang, Kun; Medicine, School of Medicine
Epithelial and stromal tissues are components of the tumor microenvironment and play a major role in tumor initiation and progression. Distinguishing stroma from epithelial tissue is critically important for spatial characterization of the tumor microenvironment. Here, we propose BrcaSeg, an image analysis pipeline based on a convolutional neural network (CNN) model to classify epithelial and stromal regions in whole-slide hematoxylin and eosin (H&E) stained histopathological images. The CNN model is trained using well-annotated breast cancer tissue microarrays and validated with images from The Cancer Genome Atlas (TCGA) Program. BrcaSeg achieves a classification accuracy of 91.02%, which outperforms other state-of-the-art methods. Using this model, we generate pixel-level epithelial/stromal tissue maps for 1000 TCGA breast cancer slide images that are paired with gene expression data. We subsequently estimate the epithelial and stromal ratios and perform correlation analysis to model the relationship between gene expression and tissue ratios.
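The downstream analysis described here reduces each pixel-level tissue map to two scalar ratios, then correlates those ratios with per-gene expression across slides. A minimal sketch of both steps, with hypothetical labels and toy data:

```python
def tissue_ratios(label_map):
    """Epithelial and stromal pixel fractions of a 2D label map
    (labels 'epi'/'str' are illustrative placeholders)."""
    flat = [v for row in label_map for v in row]
    n = len(flat)
    return flat.count("epi") / n, flat.count("str") / n

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

In the pipeline's terms, `xs` would be one gene's expression across slides and `ys` the epithelial (or stromal) ratio of the corresponding slides; genes with high correlation are the ones fed into the GO enrichment step.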
Gene Ontology (GO) enrichment analyses of genes that are highly correlated with tissue ratios suggest that the same tissue is associated with similar biological processes across breast cancer subtypes, whereas each subtype also has its own idiosyncratic biological processes governing the development of these tissues. Taken together, our approach can yield new insights into the relationships between image-based phenotypes and their underlying genomic events and biological processes for all types of solid tumors. BrcaSeg can be accessed at https://github.com/Serian1992/ImgBio.

Classifying the unknown: Insect identification with deep hierarchical Bayesian learning (Wiley, 2023)
Badirli, Sarkhan; Picard, Christine Johanna; Mohler, George; Richert, Frannie; Akata, Zeynep; Dundar, Murat
1. Classifying insect species involves a tedious process of identifying distinctive morphological insect characters by taxonomic experts. Machine learning can harness the power of computers to potentially create an accurate and efficient method for performing this task at scale, given that its analytical processing can be more sensitive to subtle physical differences in insects, which experts may not perceive. However, existing machine learning methods are designed to only classify insect samples into described species, thus failing to identify samples from undescribed species. 2. We propose a novel deep hierarchical Bayesian model for insect classification, given the taxonomic hierarchy inherent in insects. This model can classify samples of both described and undescribed species; described samples are assigned a species, while undescribed samples are assigned a genus, which is a pivotal advancement over merely identifying them as outliers. We demonstrated this proof of concept on a new database containing paired insect image and DNA barcode data from four insect orders, covering 1040 species, which far exceeds the number of species used in existing work.
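The species-or-genus decision the model makes can be sketched as a back-off rule over posterior probabilities: commit to a species when it is confident, otherwise aggregate posterior mass by genus. The threshold and the aggregation rule below are illustrative assumptions, not the paper's actual Bayesian inference:

```python
def classify(species_post, species_to_genus, threshold=0.9):
    """Back-off classification sketch: return ('species', s) when one species
    posterior clears `threshold`, else ('genus', g) for the genus with the
    largest total posterior mass (a proxy for an undescribed-species call)."""
    best = max(species_post, key=species_post.get)
    if species_post[best] >= threshold:
        return ("species", best)
    genus_mass = {}
    for sp, p in species_post.items():
        g = species_to_genus[sp]
        genus_mass[g] = genus_mass.get(g, 0.0) + p
    return ("genus", max(genus_mass, key=genus_mass.get))
```

The point of the hierarchy is visible even in this toy: posterior mass spread across several species of one genus still yields a confident genus-level answer.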
A quarter of the species were excluded from the training set to simulate undescribed species. 3. With the proposed classification framework using combined image and DNA data, species classification accuracy for described species was 96.66% and genus classification accuracy for undescribed species was 81.39%. Including both data sources resulted in significant improvement over including image data only (39.11% species accuracy for described species and 35.88% genus accuracy for undescribed species), and modest improvement over including DNA data only (73.39% genus accuracy for undescribed species). 4. Unlike current machine learning methods, the proposed deep hierarchical Bayesian learning approach can simultaneously classify samples of both described and undescribed species, a functionality that could become instrumental in biodiversity monitoring across the globe. This framework can be customized for any taxonomic classification problem for which image and DNA data can be obtained, making it relevant for use across all biological kingdoms.

Combining transfer learning with retinal lesion features for accurate detection of diabetic retinopathy (Frontiers Media, 2022-11-08)
Hassan, Doaa; Gill, Hunter Mathias; Happe, Michael; Bhatwadekar, Ashay D.; Hajrasouliha, Amir R.; Janga, Sarath Chandra; BioHealth Informatics, School of Informatics and Computing
Diabetic retinopathy (DR) is a late microvascular complication of diabetes mellitus (DM) that can lead to permanent blindness if not detected early. Although adequate management of DM via regular eye examination can preserve vision in 98% of DR cases, DR screening and diagnosis based on clinical lesion features devised by expert clinicians are costly, time-consuming, and not sufficiently accurate. This raises the need for artificial intelligence (AI) systems that can accurately detect DR automatically, catching it before it affects vision.
Hence, such systems can assist clinical experts in certain cases and aid ophthalmologists in rapid diagnosis. To address these requirements, several approaches in the literature use machine learning (ML) and deep learning (DL) techniques to develop such systems. However, these approaches ignore the highly valuable clinical lesion features that could contribute significantly to accurate DR detection. Therefore, in this study we introduce a framework called DR-detector that employs an Extreme Gradient Boosting (XGBoost) ML model trained on a combination of features extracted by pretrained convolutional neural networks, commonly known as transfer learning (TL) models, and clinical retinal lesion features. The retinal lesion features are extracted via image segmentation using the UNET DL model and capture exudates (EXs), microaneurysms (MAs), and hemorrhages (HEMs), which are lesions relevant to DR detection. The feature combination approach implemented in DR-detector has been applied to two TL models common in the literature, namely VGG-16 and ResNet-50. We trained the DR-detector model using a training dataset comprising 1,840 color fundus images collected from the e-ophtha, retinal lesions, and APTOS 2019 Kaggle datasets, of which 920 images are healthy. To validate the DR-detector model, we tested it on an external dataset consisting of 81 healthy images collected from the High-Resolution Fundus (HRF) and MESSIDOR-2 datasets and 81 images with DR signs collected from the Indian Diabetic Retinopathy Image Dataset (IDRiD), annotated for DR by an expert. The experimental results show that the DR-detector model achieves a testing accuracy of 100% in detecting DR after training with the combination of ResNet-50 and lesion features, and 99.38% accuracy after training with the combination of VGG-16 and lesion features.
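The feature combination step the abstract describes amounts to concatenating each image's deep embedding with its segmentation-derived lesion measurements before handing the vector to the gradient-boosting classifier. A minimal sketch with made-up values and dimensions (the real embeddings and lesion encodings are much larger):

```python
def combine_features(tl_features, lesion_features):
    """Concatenate transfer-learning (deep) features with clinical lesion
    features into one per-image vector for a tree-based classifier."""
    return list(tl_features) + list(lesion_features)

# Hypothetical values: a tiny pooled CNN embedding plus three lesion
# measurements (e.g., exudate, microaneurysm, hemorrhage pixel fractions).
x = combine_features([0.12, 0.80, 0.05], [0.01, 0.002, 0.0])
```

A tree ensemble such as XGBoost handles such mixed-scale concatenated features without normalization, and its per-feature importance scores are one way to obtain the lesion-feature contribution analysis the abstract reports.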
More importantly, the results also show a higher contribution of specific lesion features to the performance of the DR-detector model. For instance, using only the hemorrhages feature to train the model, our model achieves an accuracy of 99.38% in detecting DR, which is higher than the accuracy when training with the combination of all lesion features (89%) and equal to the accuracy when training with the combination of all lesion and VGG-16 features together. This highlights the possibility of using only clinical features, such as lesions that are clinically interpretable, to build the next generation of robust artificial intelligence (AI) systems with strong clinical interpretability for DR detection. The code of the DR-detector framework is available on GitHub at https://github.com/Janga-Lab/DR-detector and can be readily employed for detecting DR from retinal image datasets.

Compressed convolutional neural network for autonomous systems (2018-12)
Pathak, Durvesh; El-Sharkawy, Mohamed; Rizkalla, Maher; King, Brian
The word "perception" seems intuitive and perhaps the most straightforward problem for the human brain, because as children we are trained to classify images and detect objects; for computers, it can be a daunting task. Giving intuition and reasoning to a computer that has mere capabilities to accept and process commands is a big challenge. However, recent leaps in hardware development, sophisticated software frameworks, and mathematical techniques have made it a little less daunting, if not easy. Various applications are built around the concept of perception, and they require substantial computational resources, expensive hardware, and sophisticated software frameworks. Building a perception application for an embedded system is an entirely different ballgame.
An embedded system is a culmination of hardware, software, and peripherals developed for specific tasks, with imposed constraints on memory and power. Applications developed for such systems should therefore respect these memory and power constraints. Before 2012, perception problems such as classification and object detection were solved using algorithms with manually engineered features. In recent years, instead of being manually engineered, these features are learned through learning algorithms. The game-changing convolutional neural network architecture proposed in 2012 by Alex Krizhevsky provided tremendous momentum in the direction of pushing neural networks for perception. This thesis is an attempt to develop a convolutional neural network architecture for embedded systems, i.e., an architecture with a small model size and competitive accuracy, by recreating state-of-the-art architectures using the fire module concept to reduce model size. The proposed compact models are feasible for deployment on embedded devices such as the Bluebox 2.0. Furthermore, attempts are made to integrate the compact convolutional neural network with object detection pipelines.

Deep Brain Dynamics and Images Mining for Tumor Detection and Precision Medicine (2023-08)
Ramesh, Lakshmi; Zhang, Qingxue; King, Brian; Chen, Yaobin
Automatic brain tumor segmentation in magnetic resonance imaging (MRI) scans is essential for the diagnosis, treatment, and surgery of cancerous tumors. However, identifying hardly detectable tumors, which are usually of different sizes, irregular shapes, and vague invasion areas, poses a considerable challenge. Current advancements have not yet fully leveraged the dynamics in the multiple modalities of MRI, since they usually treat multi-modality as multi-channel, and early channel merging may not fully reveal inter-modal couplings and complementary patterns.
In this thesis, we propose a novel deep cross-attention learning algorithm that maximizes the mining of subtle dynamics from each input modality and then boosts feature fusion capability. More specifically, we have designed a Multimodal Cross-Attention Module (MM-CAM), equipped with a 3D Multimodal Feature Rectification and Feature Fusion Module. Extensive experiments have shown that the proposed deep learning architecture, empowered by the MM-CAM, produces higher-quality segmentation masks of the tumor subregions. Further, we have enhanced the algorithm with image-matting refinement techniques: we integrate a Progressive Refinement Module (PRM) and perform Cross-Subregion Refinement (CSR) for precise identification of tumor boundaries. A multiscale Dice loss was also employed to enforce additional supervision on the auxiliary segmentation outputs. This enhancement facilitates matting-based refinement for medical image segmentation applications. Overall, this thesis, with deep learning, transformer-empowered pattern mining, and sophisticated architecture designs, will greatly advance deep brain dynamics and image mining for tumor detection and precision medicine.
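The multiscale Dice supervision mentioned in the last abstract can be sketched as a soft Dice loss averaged over auxiliary outputs at several scales. This is a generic formulation of the standard Dice loss, not the thesis's exact implementation; the equal weighting is an assumed default:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one binary mask, given as flat sequences of
    probabilities (pred) and {0, 1} labels (target)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def multiscale_dice(preds_per_scale, targets_per_scale, weights=None):
    """Weighted average of Dice losses over auxiliary outputs at several
    scales; equal weights by default."""
    weights = weights or [1.0 / len(preds_per_scale)] * len(preds_per_scale)
    return sum(w * dice_loss(p, t)
               for w, p, t in zip(weights, preds_per_scale, targets_per_scale))
```

Supervising every auxiliary scale this way gives the intermediate decoder outputs their own gradient signal, which is the usual rationale for deep supervision in segmentation networks.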