
Browsing by Author "Ozkan, Akin"

Now showing 1 - 3 of 3
  • Article
    Citation - WoS: 9
    Citation - Scopus: 9
    Benchmarking Classification Models for Cell Viability on Novel Cancer Image Datasets
    (Bentham Science Publ Ltd, 2019) Ozkan, Akin; Isgor, Sultan Belgin; Sengul, Gokhan; Isgor, Yasemin Gulgun
    Background: Dye exclusion-based cell viability analysis has been widely used in cell biology, including anticancer drug discovery studies. Viability analysis refers to the whole decision-making process of distinguishing dead cells from live ones. Briefly, cell culture samples are stained with trypan blue, which selectively colors dead cells dark. This distinction provides critical information about the influence of a studied drug on the cell culture under investigation, including cancer cells. The examiner's experience and fatigue substantially affect the consistency of manual cell viability observation, and inconsistent viability results can in turn bias the experimental outcome. Therefore, a machine learning-based automated decision-making procedure is needed to improve the consistency of cell viability analysis. Objective: In this study, we investigate various combinations of classifiers and feature extractors (i.e., classification models) to maximize the performance of computer vision-based viability analysis. Method: The classification models are tested on novel hemocytometer image datasets containing two types of cancer cell images: Caucasian promyelocytic leukemia (HL60) and chronic myelogenous leukemia (K562). Results: In the experiments, k-Nearest Neighbor (KNN) and Random Forest (RF) combined with Local Phase Quantization (LPQ) achieve the lowest misclassification rates, 0.031 and 0.082, respectively. Conclusion: The experimental results show that KNN and RF with LPQ can be powerful alternatives to conventional manual cell viability analysis. The collected datasets are publicly released at "biochem.atilim.edu.tr/datasets/" for academic studies.
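The live/dead decision described above can be illustrated with a minimal k-Nearest Neighbor sketch. This is a hypothetical toy, not the paper's implementation: the real pipeline extracts LPQ texture descriptors from hemocytometer images, whereas the 2-D feature vectors and labels below are invented stand-ins for such precomputed features.

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: (feature_vector, viability_label).
# Real LPQ descriptors are higher-dimensional; these are illustrative only.
train = [
    ((0.10, 0.20), "live"), ((0.20, 0.10), "live"), ((0.15, 0.25), "live"),
    ((0.90, 0.80), "dead"), ((0.80, 0.90), "dead"), ((0.85, 0.75), "dead"),
]
print(knn_predict(train, (0.87, 0.82)))  # -> dead (dark-stained-like sample)
```

The same interface would apply unchanged if the toy vectors were swapped for genuine LPQ features.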
  • Conference Object
    Citation - WoS: 16
    KINSHIPGAN: SYNTHESIZING OF KINSHIP FACES FROM FAMILY PHOTOS BY REGULARIZING A DEEP FACE NETWORK
    (IEEE, 2018) Ozkan, Savas; Ozkan, Akin
    In this paper, we propose a kinship generator network that can synthesize a possible child face by analyzing his/her parent's photo. Throughout the paper, we focus on handling the scarcity of kinship datasets by proposing novel solutions. To extract robust features, we integrate a pre-trained face model into the kinship face generator. Moreover, the generator network is regularized with an additional face dataset and an adversarial loss to reduce overfitting to the limited samples. Lastly, we adopt a cycle-domain transformation to attain more stable results. Experiments are conducted on the Families in the Wild (FIW) dataset. The experimental results show that the contributions presented in the paper provide significant performance improvements over the baseline architecture, and our proposed method yields promising perceptual results.
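The abstract combines several training objectives (reconstruction against the regularizing face dataset, an adversarial term, and a cycle-consistency term). As a rough sketch of how such terms are typically composed into one objective, assuming hypothetical weights not taken from the paper:

```python
def total_loss(recon, adv, cyc, w_adv=0.2, w_cyc=1.0):
    """Weighted sum of reconstruction, adversarial and cycle-consistency
    losses. The weights here are illustrative placeholders, not the
    published hyperparameters."""
    return recon + w_adv * adv + w_cyc * cyc

# Example with dummy per-batch loss values:
print(total_loss(recon=1.0, adv=5.0, cyc=0.5))
```

In practice each argument would come from a network forward pass; the composition itself is the standard multi-term GAN objective pattern.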
  • Conference Object
    Citation - WoS: 1
    Method Proposal for Distinction of Microscope Objectives on Hemocytometer Images
    (IEEE, 2016) Ozkan, Akin; Isgor, S. Belgin; Sengul, Gokhan
    A hemocytometer is a special glass plate apparatus used for cell counting, engraved with a grid of lines (the counting chamber) of known dimensions. Using this special slide together with a microscope, the cell concentration of a cell suspension can be estimated. Automating the processing of hemocytometer images will help several research disciplines improve the consistency of results and reduce human labor. A cell sample can be analyzed on the microscope under different objective magnifications, and these differences affect the level of detail in the image: as the objective value increases, the image scale and captured detail increase, yet the visible area becomes narrower. Because of this variation, different automated cell counting approaches should be developed for images taken at different objective values. In this paper, using hemocytometer images gathered from a microscope, a novel approach is introduced that can automatically estimate the objective value of a microscope with machine learning methods. For this purpose, a frequency-based visual feature is proposed that captures the hemocytometer structure well. In the conducted tests, 100% distinction accuracy is achieved with the proposed method.
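The frequency-based idea can be sketched in miniature: the counting-chamber grid is periodic, and its period in pixels changes with the objective, so the dominant DFT bin of an intensity profile can serve as a magnification cue. This is a hypothetical illustration, assuming synthetic 1-D profiles and invented grid periods rather than the paper's actual feature.

```python
import cmath
import math

def dominant_freq(profile):
    """Return the DFT bin (1..N//2) with the largest magnitude."""
    n = len(profile)
    mags = {k: abs(sum(profile[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n)))
            for k in range(1, n // 2 + 1)}
    return max(mags, key=mags.get)

def synthetic_profile(period, n=100):
    """1-D intensity profile of evenly spaced grid lines (toy model)."""
    return [math.cos(2 * math.pi * t / period) for t in range(n)]

# Wider apparent grid spacing (a different objective) shifts the peak bin:
print(dominant_freq(synthetic_profile(period=10)))  # -> 10
print(dominant_freq(synthetic_profile(period=20)))  # -> 5
```

A classifier could then map such spectral peaks to objective labels, which is the general shape of the approach the abstract describes.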