Search Results

Now showing 1 - 6 of 6
  • Article
    Citation - WoS: 8
    Citation - Scopus: 13
    White Blood Cells Classifications by SURF Image Matching, PCA and Dendrogram
    (Allied Acad, 2015) Nazlibilek, Sedat; Karacor, Deniz; Erturk, Korhan Levent; Sengul, Gokhan; Ercan, Tuncay; Aliew, Fuad; Department of Mechatronics Engineering; Information Systems Engineering; Computer Engineering
    Determination and classification of white blood cells are very important for diagnosing many diseases. The number of white blood cells and their morphological changes or blasts provide valuable information for the diagnosis of diseases such as Acute Lymphocytic Leukemia (ALL). Recognition and classification of white cells as basophils, lymphocytes, neutrophils, monocytes and eosinophils also give additional information for the diagnosis of many diseases. We are developing an automatic process for counting, size determination and classification of white blood cells. In this paper, we give the results of the classification process, for which we conducted a study with hundreds of images of white blood cells. This process will help to diagnose ALL in particular in a fast and automatic way. Three methods are used for the classification of five types of white blood cells. The first is a new classification algorithm utilizing image matching based on the Speeded-Up Robust Features (SURF) detector. The second is PCA, which gives the advantage of dimension reduction. The third is the classification tree, called a dendrogram, applied after the PCA. Satisfactory results are obtained with two of the techniques.
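    The PCA dimension-reduction step followed by classification, as described above, can be sketched in miniature. This is an illustrative toy on synthetic feature vectors (the paper's actual cell-image features and dendrogram construction are not reproduced here), using PCA via SVD followed by a simple nearest-centroid assignment in the reduced space:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in "cell feature" vectors for two hypothetical classes
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])
labels = np.array([0] * 20 + [1] * 20)

# PCA via SVD: center the data, then project onto the top-k principal axes
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                     # (40, 2) reduced feature matrix

# nearest-centroid classification in the reduced space
centroids = np.array([Z[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None, :] - centroids[None], axis=2), axis=1)
accuracy = (pred == labels).mean()
```

    The dimension reduction from 50 raw features to 2 principal components is the advantage the abstract attributes to PCA; a hierarchical clustering (dendrogram) could then operate on `Z` in place of the nearest-centroid step.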
  • Article
    Citation - WoS: 6
    Citation - Scopus: 7
    Teaching Software Verification and Validation Course: a Case Study
    (Tempus Publications, 2014) Mishra, Deepti; Hacaloglu, Tuna; Mishra, Alok; Computer Engineering; Software Engineering; Information Systems Engineering
    Software verification and validation (V & V) is one of the significant areas of software engineering for developing high-quality software. It is also becoming part of the curriculum of universities' software and computer engineering departments. This paper reports the experience of teaching undergraduate software engineering students and discusses the main problems encountered during the course, along with suggestions to overcome these problems. This study covers all the different topics generally covered in a software verification and validation course, including static verification and validation. It is found that prior knowledge of software quality concepts and good programming skills can help students to achieve success in this course. Further, teamwork can be chosen as a strategy, since it facilitates students' understanding and motivates them to study. It is observed that students were more successful in white box testing than in black box testing.
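    The distinction the study draws between black box and white box testing can be illustrated with a minimal, hypothetical function under test (not taken from the course material): black-box cases are derived from the specification alone, while white-box cases are derived from the code's branch structure.

```python
# hypothetical function under test: clamp a value into the range [lo, hi]
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# black-box tests: chosen from the specification only (boundary values)
assert clamp(5, 0, 10) == 5      # value inside the range
assert clamp(0, 0, 10) == 0      # value on the lower boundary
assert clamp(10, 0, 10) == 10    # value on the upper boundary

# white-box tests: one case per branch in the implementation
assert clamp(-100, 0, 10) == 0   # first branch (x < lo) taken
assert clamp(100, 0, 10) == 10   # second branch (x > hi) taken
assert clamp(3, 0, 10) == 3      # fall-through path
```

    For this trivial function the two suites overlap heavily; the pedagogical point is that the white-box suite is justified by coverage of the code, the black-box suite by coverage of the specification.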
  • Article
    Citation - WoS: 11
    Citation - Scopus: 12
    A Comprehensive Assessment Plan for Accreditation in Engineering Education: A Case Study in Turkey
    (Tempus Publications, 2015) Turhan, Cigdem; Sengul, Gokhan; Koyuncu, Murat; Information Systems Engineering; Software Engineering; Computer Engineering
    This paper describes the procedure followed by the Computer Engineering and Software Engineering programs at Atilim University, Ankara, Turkey, which led to the granting of five years of accreditation by MUDEK, the local accreditation body authorized by the European Network for Accreditation of Engineering Education (ENAEE) to award the EUR-ACE label, and a full member signatory of the Washington Accord of the International Engineering Alliance (IEA). It explains the organizational structure established for the preparation, determination and measurement of the educational objectives, program outcomes and course outcomes, and the continuous improvement cycle carried out during the preparation period. The aim of the paper is to share methods and experiences that may be beneficial to other programs intending to apply for accreditation.
  • Article
    Determination of Measurement Noise, Conductivity Errors and Electrode Mislocalization Effects To Somatosensory Dipole Localization.
    (Allied Acad, 2012) Sengul, G.; Baysal, U.; Computer Engineering
    Calculating the spatial locations, directions and magnitudes of the electrically active sources of the human brain from measured scalp potentials is known as source localization. An accurate source localization method requires not only EEG data but also the 3-D positions and number of measurement electrodes, a numerical head model of the patient/subject and the conductivities of the layers used in the head model. In this study we computationally determined the effects of noise, conductivity errors and electrode mislocalizations for electrical sources located in the somatosensory cortex. We first randomly selected 1000 electric sources in the somatosensory cortex, and for these sources we simulated the surface potentials using average conductivities given in the literature and the 3-D positions of the electrodes. We then added random noise to the measurements and, using the noisy data, estimated the positions of the dipoles under different electrode positions or different conductivity values. By comparing the estimated electrical sources with the original ones, the effects of measurement noise, electrode mislocalizations and conductivity errors on somatosensory dipole localization are investigated. We conclude that an accurate somatosensory source localization method needs noiseless measurements, accurate conductivity values for the scalp and skull layers and accurate knowledge of the 3-D positions of the measurement sensors.
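    The simulate, add noise, invert methodology described in the abstract can be sketched in miniature. The lead-field matrix below is a random stand-in (the study uses potentials derived from a numerical head model, which is not reproduced here), and a plain least-squares inverse stands in for the study's localization method:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear forward model: a lead-field matrix L maps a dipole's moment
# components to the potentials seen at the measurement electrodes
# (random stand-in for a head-model-derived lead field)
n_electrodes = 32
L = rng.normal(size=(n_electrodes, 3))
q_true = np.array([1.0, -0.5, 0.25])      # "true" dipole moment components

v_clean = L @ q_true                                      # simulated potentials
v_noisy = v_clean + rng.normal(0.0, 0.01, n_electrodes)   # measurement noise

# least-squares inverse: recover the moment from the noisy potentials
q_hat, *_ = np.linalg.lstsq(L, v_noisy, rcond=None)
moment_error = np.linalg.norm(q_hat - q_true)
```

    Repeating this with perturbed copies of `L` (standing in for electrode mislocalization or conductivity error) and comparing `moment_error` across runs mirrors the comparison the study performs over its 1000 simulated sources.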
  • Article
    The Effect of Statistically Constrained Minimum Mean Square Estimation Algorithm Which Is Used for Human Head Tissue Conductivity Estimation To Source Localization
    (Journal Neurological Sciences, 2012) Sengul, Gokhan; Baysal, Ugur; Computer Engineering
    Determining the electrically active regions of the human brain using EEG and/or MEG data is known as the "EEG/MEG bioelectromagnetic inverse problem" or "source localization". A typical source localization system takes as input not only EEG/MEG data but also geometry information for the subject/patient, a priori information about the electrically active sources, the number and 3-D positions of measurement electrodes and the conductivities/resistivities of the tissues in the head model. In this study we investigated, through simulation studies, the conductivity estimation performance of the previously proposed Statistically Constrained Minimum Mean Square Error Estimation (MiMSEE) algorithm, and we also investigated the effect of the estimation on source localization. In the simulation studies we used a three-layered realistic head model (composed of scalp, skull and brain regions) to estimate 100 different conductivity distributions in vivo. As a result we found that the proposed algorithm estimates the conductivity of the scalp with an average error of 23%, the conductivity of the skull with an average error of 40% and the conductivity of the brain with an average error of 17%. In the second part of the study we compared the source localization errors for two cases: first, when the average tissue conductivities given in the literature are used, and second, when subject-specific conductivity estimation is performed with the MiMSEE algorithm. The results showed that a 10.1 mm localization error is obtained when the average conductivities given in the literature are used, and a 2.7 mm localization error is obtained when subject-specific conductivity estimation is performed with the MiMSEE algorithm, a reduction of 73.07%. We conclude that using the conductivities obtained from the MiMSEE algorithm reduces the source localization error, and we recommend performing subject-specific conductivity estimation for source localization applications.
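    The MiMSEE algorithm itself is not reproduced here; as a generic illustration of the underlying idea, a standard linear MMSE estimator for a Gaussian linear model (a textbook construction, with all matrices below chosen arbitrarily for the toy) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

# linear observation model y = A x + n with a Gaussian prior on x
# (toy stand-in; the study's A would come from the head-model physics)
n_obs, n_par = 50, 3
A = rng.normal(size=(n_obs, n_par))
Cx = np.eye(n_par)                 # prior covariance of the parameters
Cn = 0.01 * np.eye(n_obs)          # measurement-noise covariance

x_true = rng.multivariate_normal(np.zeros(n_par), Cx)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_obs), Cn)

# linear MMSE estimator: x_hat = Cx A^T (A Cx A^T + Cn)^{-1} y
K = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_hat = K @ y
err = np.linalg.norm(x_hat - x_true)
```

    The statistical constraint is the prior covariance `Cx`: it regularizes the inverse so the estimate stays within physiologically plausible bounds, which is the role the abstract attributes to the statistically constrained estimator.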
  • Article
    Citation - Scopus: 21
    Comparative Analysis of Programming Languages Utilized in Artificial Intelligence Applications: Features, Performance, and Suitability
    (Prof.Dr. İskender AKKURT, 2024) Sezen, Arda; Türkmen, Güzin; Şengül, Gökhan
    This study presents a detailed comparative analysis of the foremost programming languages employed in Artificial Intelligence (AI) applications: Python, R, Java, and Julia. These languages are analysed for their performance, features, ease of use, scalability, library support, and their applicability to various AI tasks such as machine learning, data analysis, and scientific computing. Each language is evaluated based on syntax and readability, execution speed, library ecosystem, and integration with external tools. The analysis incorporates a use case of writing code for a linear regression task. The aim of this research is to guide AI practitioners, researchers, and developers in choosing the most appropriate programming language for their specific needs, optimizing both the development process and the performance of AI applications. The findings also highlight the ongoing evolution and community support for these languages, influencing long-term sustainability and adaptability in the rapidly advancing field of AI. This comparative assessment contributes to a deeper understanding of how programming languages can enhance or constrain the development and implementation of AI technologies.
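    A linear regression use case of the kind the study employs can be sketched in Python; the synthetic data and the slope/intercept values below are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic data: y = 2x + 1 plus Gaussian noise
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)

# ordinary least squares via the normal equations: beta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept column
beta = np.linalg.solve(X.T @ X, X.T @ y)
intercept, slope = beta
```

    Equivalent one-liners exist in each compared language (e.g. `lm(y ~ x)` in R), which is precisely the kind of ecosystem difference the study's comparison examines.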