Browsing by Author "Öksüz C."
Now showing 1 - 4 of 4
Scopus · Adaptive local thresholding based number plate detection (2015-06-19)
Öksüz C.; Güllü M.K.
In this paper, an automatic number plate recognition approach with low computational load is proposed that detects the licence plate area using character features. In the preprocessing step, unlike the classical Sauvola method, the output is weighted according to pixel luminance values, so dark regions are eliminated from detection. After preprocessing, regions that do not exhibit character properties are discarded by connected component analysis, and the character regions are then located using horizontal projection. Experimental results show that the proposed method runs faster and gives better detection performance under complex backgrounds and varying illumination, distance, and inclination conditions.

Scopus · An integrated convolutional neural network with attention guidance for improved performance of medical image classification (2024-02-01)
Öksüz C.; Urhan O.; Güllü M.K.

Scopus · Brain tumor classification using the fused features extracted from expanded tumor region (2022-02-01)
Öksüz C.; Urhan O.; Güllü M.K.
In this study, a brain tumor classification method using the fusion of deep and shallow features is proposed to distinguish between meningioma, glioma, and pituitary tumor types and to predict the 1p/19q co-deletion status of LGG tumors. Brain tumors can be located in different regions of the brain, and the texture of the surrounding tissues may also vary. Therefore, including the surrounding tissues in the tumor region (ROI expansion) can make the features more distinctive. In this work, pre-trained AlexNet, ResNet-18, GoogLeNet, and ShuffleNet networks are used to extract deep features from the tumor regions, including their surrounding tissues. Although deep features are very important for classification, some low-level information about the tumor may be lost as the network deepens.
Accordingly, a shallow network is designed to learn low-level information. Then, to compensate for this information loss, the deep and shallow features are fused. SVM and k-NN classifiers are trained on the fused feature sets. Experimental results on two publicly available data sets demonstrate that applying feature fusion and ROI expansion together improves the average sensitivity by about 11.72% (ROI expansion: 8.97%, feature fusion: 2.75%). These results confirm the assumption that the tissues surrounding the tumor region carry distinctive information, and that the missing low-level information can be compensated for through feature fusion. Moreover, competitive results are achieved against state-of-the-art studies when ResNet-18 is used as the deep feature extractor of the classification framework.

Scopus · COVID-19 detection with severity level analysis using the deep features, and wrapper-based selection of ranked features (2022-09-10)
Öksüz C.; Urhan O.; Güllü M.K.
The SARS-CoV-2 virus, which causes COVID-19 disease, continues to threaten the whole world with its mutations. Many methods developed for COVID-19 detection are validated on data sets that mostly contain severe forms of the disease. Since severe forms of the disease have prominent signatures on X-ray images, the achievable performance is high. To slow the spread of the disease, effective computer-assisted screening tools are needed that can detect the mild and moderate forms of the disease, which lack prominent signatures. In this work, various pre-trained networks, namely GoogLeNet, ResNet18, SqueezeNet, ShuffleNet, EfficientNetB0, and Xception, are used as feature extractors for COVID-19 detection with severity level analysis. The best feature extraction layer of each pre-trained network is determined to optimize performance.
After that, the features obtained from the best layer are selected by a wrapper-based strategy applied to features ranked by their Laplacian scores. Experimental results on two publicly available data sets covering all forms of COVID-19 disease reveal that the method generalizes well to unseen data. Moreover, sensitivities of 66.67%, 90.32%, and 100% are obtained in the detection of mild, moderate, and severe cases, respectively.
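The ranking-plus-wrapper pipeline described in the last abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the graph parameters (`k` neighbours, heat-kernel width `t`), the greedy forward search, and the 1-NN evaluator are all illustrative assumptions.

```python
import numpy as np

def laplacian_scores(X, k=5, t=1.0):
    """Score each feature by its Laplacian score (smaller = better at
    preserving local structure). X has shape (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # k nearest neighbours
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)            # heat-kernel weights
    S = np.maximum(S, S.T)                               # symmetrise the graph
    D = np.diag(S.sum(1))
    L = D - S                                            # graph Laplacian
    ones = np.ones(n)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f_t = f - (f @ D @ ones) / (ones @ D @ ones)     # remove weighted mean
        denom = f_t @ D @ f_t
        scores[r] = (f_t @ L @ f_t) / denom if denom > 1e-12 else np.inf
    return scores

def accuracy_1nn(Xtr, ytr, Xte, yte):
    """1-NN accuracy of a test set against a training set."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return (ytr[d2.argmin(1)] == yte).mean()

def wrapper_select(Xtr, ytr, Xval, yval, k=5):
    """Greedy forward wrapper over Laplacian-ranked features: try features
    in rank order and keep one only if validation accuracy improves."""
    order = np.argsort(laplacian_scores(Xtr, k=k))       # smallest score first
    chosen, best = [], 0.0
    for r in order:
        trial = chosen + [int(r)]
        acc = accuracy_1nn(Xtr[:, trial], ytr, Xval[:, trial], yval)
        if acc > best:
            chosen, best = trial, acc
    return chosen, best
```

On a toy data set where only one feature separates two classes, the ranking places that feature first and the wrapper retains it while skipping the noise features; in the paper, the same filter-then-wrapper idea is applied to deep features from the pre-trained networks.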