Browsing by Author "Urhan O."
Now showing 1 - 5 of 5
Scopus  A lightweight deep model for brain tumor segmentation (2021-06-09)
Oksuz C.; Urhan O.; Gullu M.K.
Brain tumors are one of the major causes of increasing deaths worldwide. Correct identification of cancerous tissues by experts is important for proper treatment planning and for increasing patient survival rates. However, manual tracking and segmentation of cancerous tissues across the many sections of volumetric MR data is an error-prone and time-consuming process. Recent developments in deep learning allow tasks performed by humans to be carried out with higher accuracy and speed by automatic systems. In this study, a lightweight deep learning-based model with 6.78M parameters is proposed for the classification of cancerous tissues in the brain. Cross-validation on a public data set yields 84.61% Boundary F1, 82.54% mean IoU, and 87.15% mean accuracy, demonstrating the robustness of the proposed model.

Scopus  An integrated convolutional neural network with attention guidance for improved performance of medical image classification (2024-02-01)
Öksüz C.; Urhan O.; Güllü M.K.

Scopus  Brain tumor classification using the fused features extracted from expanded tumor region (2022-02-01)
Öksüz C.; Urhan O.; Güllü M.K.
In this study, a brain tumor classification method using the fusion of deep and shallow features is proposed to distinguish between meningioma, glioma, and pituitary tumor types and to predict the 1p/19q co-deletion status of LGG tumors. Brain tumors can be located in different regions of the brain, and the texture of the surrounding tissues may also vary; including the surrounding tissues in the tumor region (ROI expansion) can therefore make the features more distinctive. In this work, pre-trained AlexNet, ResNet-18, GoogLeNet, and ShuffleNet networks are used to extract deep features from the tumor regions together with their surrounding tissues. Although deep features are extremely important for classification, some low-level information about tumors may be lost as the network deepens. Accordingly, a shallow network is designed to learn this low-level information, and the deep and shallow features are fused to compensate for the loss. SVM and k-NN classifiers are trained on the fused feature sets. Experimental results on two publicly available data sets demonstrate that using feature fusion and ROI expansion together improves the average sensitivity by about 11.72% (ROI expansion: 8.97%, feature fusion: 2.75%). These results confirm that the tissues surrounding the tumor region carry distinctive information and that the missing low-level information can be recovered through feature fusion. Moreover, competitive results are achieved against state-of-the-art studies when ResNet-18 is used as the deep feature extractor of the classification framework.

Scopus  COVID-19 detection with severity level analysis using the deep features, and wrapper-based selection of ranked features (2022-09-10)
Öksüz C.; Urhan O.; Güllü M.K.
The SARS-COV-2 virus, which causes COVID-19 disease, continues to threaten the whole world with its mutations. Many methods developed for COVID-19 detection are validated on data sets that generally include severe forms of the disease; since the severe form produces prominent signatures on X-ray images, the reported performance is high. To slow the spread of the disease, effective computer-assisted screening tools are needed that can also detect the mild and moderate forms, which lack prominent signatures. In this work, various pre-trained networks, namely GoogLeNet, ResNet18, SqueezeNet, ShuffleNet, EfficientNetB0, and Xception, are used as feature extractors for COVID-19 detection with severity level analysis. The best feature extraction layer of each pre-trained network is determined to optimize performance. The features obtained from that layer are then selected with a wrapper-based strategy applied to features ranked by their Laplacian scores. Experimental results on two publicly available data sets covering all forms of COVID-19 disease show that the method generalizes well to unseen data. Moreover, 66.67%, 90.32%, and 100% sensitivity are obtained in the detection of mild, moderate, and severe cases, respectively.
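For readers skimming the abstracts above, the sketch below illustrates the two ideas central to the brain tumor classification entry: expanding the tumor ROI to include surrounding tissue, and extracting deep features from the expanded crop with a pre-trained backbone before fusing them with shallow features. This is a minimal illustration under assumptions, not the authors' code; the margin, the 224x224 input size, the ResNet-18 backbone, and the feature dimensions are placeholders.

```python
# Minimal sketch (not the authors' implementation) of ROI expansion and
# deep feature extraction before fusion with shallow features.
import numpy as np
import torch
import torchvision

def expand_roi(x0, y0, x1, y1, margin, height, width):
    """Grow a tumor bounding box by `margin` pixels, clipped to the image."""
    return (max(x0 - margin, 0), max(y0 - margin, 0),
            min(x1 + margin, width), min(y1 + margin, height))

# ResNet-18 without its classification head -> 512-d global feature vector
backbone = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def deep_features(crop):
    """crop: float tensor (3, H, W) holding the expanded ROI."""
    crop = torch.nn.functional.interpolate(crop.unsqueeze(0), size=(224, 224))
    return feature_extractor(crop).flatten(1).squeeze(0).numpy()  # (512,)

def fuse(deep_vec, shallow_vec):
    """Concatenate deep and shallow feature vectors before SVM / k-NN training."""
    return np.concatenate([deep_vec, shallow_vec])
```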
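The wrapper-based selection over Laplacian-ranked features described in the COVID-19 abstract can likewise be pictured with a short sketch. Again, this is an assumed reconstruction rather than the published configuration: the k-NN graph size, the k-NN classifier used as the wrapper, and the greedy forward pass over the ranked features are illustrative choices.

```python
# Minimal sketch of wrapper-based selection over Laplacian-score-ranked features.
import numpy as np
from sklearn.neighbors import kneighbors_graph, KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def laplacian_scores(X, n_neighbors=5):
    """Score features by Laplacian score (lower = better locality preservation)."""
    S = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    S = (0.5 * (S + S.T)).toarray()          # symmetrized k-NN similarity graph
    D = np.diag(S.sum(axis=1))
    L = D - S                                 # graph Laplacian
    d = np.diag(D)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r] - (X[:, r] @ d) / d.sum() # remove the degree-weighted mean
        denom = f @ D @ f
        scores[r] = (f @ L @ f) / denom if denom > 0 else np.inf
    return scores

def wrapper_select(X, y, n_neighbors=5, cv=5):
    """Greedy forward selection over features visited in Laplacian-score order."""
    order = np.argsort(laplacian_scores(X, n_neighbors))  # best-ranked first
    selected, best_acc = [], 0.0
    for r in order:
        trial = selected + [r]
        acc = cross_val_score(KNeighborsClassifier(), X[:, trial], y, cv=cv).mean()
        if acc > best_acc:                    # keep the feature only if it helps
            selected, best_acc = trial, acc
    return selected, best_acc
```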
Scopus  Ensemble-LungMaskNet: Automated lung segmentation using ensembled deep encoders (2021-08-25)
Oksuz C.; Urhan O.; Gullu M.K.
Automated lung segmentation is important because it provides experts with clues about several diseases, and it is the step that precedes further detailed analyses of the lungs. However, lung segmentation is challenging: opacities and consolidations caused by various lung diseases can blur the lung borders, and the presence of medical equipment such as cables in the image makes the task even more difficult. Methods that can handle such situations are therefore needed. Deep learning methods can learn the most useful patterns related to various diseases, and unlike conventional methods, this learning improves the generalization ability of the models on unseen data. For this purpose, a deep segmentation framework built from ensembles of pre-trained lightweight networks is proposed for lung region segmentation in this work. Experimental results on two publicly available data sets demonstrate the effectiveness of the proposed framework.
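The ensembling idea behind Ensemble-LungMaskNet can be summarized with a brief sketch that averages the per-pixel probability maps of several segmentation models into a single lung mask. The specific models, the sigmoid-output assumption, and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's code) of combining several lightweight
# segmentation networks by averaging their per-pixel probability maps.
import torch

@torch.no_grad()
def ensemble_lung_mask(models, image, threshold=0.5):
    """Average sigmoid outputs of several segmentation models into one mask.

    models : iterable of nn.Module, each mapping (1, C, H, W) -> (1, 1, H, W) logits
    image  : tensor of shape (1, C, H, W)
    """
    probs = []
    for model in models:
        model.eval()
        probs.append(torch.sigmoid(model(image)))   # per-model probability map
    mean_prob = torch.stack(probs, dim=0).mean(dim=0)
    return (mean_prob > threshold).float()          # binary lung mask
```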