Şengül, Gökhan

Name Variants
Gokhan, Sengul
Sengul, Gokhan
Sengul,G.
Gökhan, Şengül
Engul G.
Şengül G.
Şengül, Gökhan
G.,Sengul
Sengul, G.
S.,Gokhan
Sengul G.
Ş., Gökhan
G.,Şengül
G., Sengul
Şengül,G.
G., Şengül
S., Gokhan
Ş.,Gökhan
Job Title
Professor
Email Address
gokhan.sengul@atilim.edu.tr
Main Affiliation
Computer Engineering
Scholarly Output
77
Articles
47
Citation Count
128
Supervised Theses
10

Scholarly Output Search Results

  • Conference Object
    Citation - Scopus: 3
    Gender Prediction by Using Local Binary Pattern and K Nearest Neighbor and Discriminant Analysis Classifications
    (Institute of Electrical and Electronics Engineers Inc., 2016) Camalan,S.; Sengul,G.; Information Systems Engineering; Computer Engineering
    In this study, gender prediction from face images is investigated. Local Binary Pattern (LBP) features are extracted from the images using different LBP parameters. To classify the images as male or female, K-Nearest Neighbors (KNN) and Discriminant Analysis (DA) methods are used, and their performance is compared across the LBP parameters. The classifiers' own parameters are also varied and the comparison results are reported. The methods are applied to the FERET database with 530 female and 731 male images. For better performance, the face region of each image is cropped first, and feature extraction and classification are then applied to the cropped face region. © 2016 IEEE. (An illustrative LBP + KNN sketch, using assumed libraries and synthetic data, appears after this results list.)
  • Conference Object
    Citation - Scopus: 0
    The Effect of Split Attention in Surgical Education
    (Springer Verlag, 2014) Özçelik,E.; Ercil Cagiltay,N.; Sengul,G.; Tuner,E.; Unal,B.; Department of Modern Languages; Computer Engineering
    Surgical education through simulation is an important way to improve the level of education and to decrease the risks, ethical concerns and cost of educational environments. In the literature there are several studies conducted to better understand the effect of these simulation environments on learning; however, among those studies the human-computer interaction point of view is very limited. Surgeons need to look at radiological images such as magnetic resonance images (MRI) to be sure about the location of the patient's tumor during a surgical operation. Thus, they go back and forth between physically separated places (e.g. the operating table and the light-screen display for MRI volume sets). This study investigates the effect of presenting different information sources in close proximity on human performance in surgical education. For this purpose, we developed a surgical education simulation scenario controlled by a haptic interface. To better understand the effect of split attention in surgical education, an experimental study was conducted with 27 subjects. The descriptive results show that even though the integrated group performed the tasks with a higher accuracy level (traveling less distance, entering fewer wrong directions and hitting fewer walls), the results are not statistically significant. Accordingly, even though there is some evidence of a split-attention effect in surgical simulation environments, the results of this study need to be validated by controlling for students' skill in handling the haptic devices and their 2D/3D spatial perception. The results may guide system developers in designing better HCI interfaces, especially in the area of surgical simulation. © 2014 Springer International Publishing.
  • Conference Object
    Citation - Scopus: 0
    A Comparison of Pattern Recognition Approaches for Recognizing Handwriting in Arabic Letters
    (Institute of Electrical and Electronics Engineers Inc., 2021) Douma,A.; Ahmed,A.A.; Sengul,G.; Santhosh,J.; Jomah,O.S.M.; Ibrahim Salem,F.G.; Computer Engineering
    For Arabic letter recognition, we apply three pattern recognition approaches, namely the gray level co-occurrence matrix (GLCM), local binary patterns (LBP) and an artificial neural network (ANN), and compare them to determine which gives the best performance. Two of these methods, the gray level co-occurrence matrix and local binary patterns, are used for feature extraction, whereas for the artificial neural network (ANN) the pixel intensity values are used directly as the network's input. Two classifiers are used: the K-Nearest Neighbor (KNN) classifier for the LBP and GLCM features, and a neural network classifier for the ANN. The results are evaluated using a leave-one-person-out approach, fold classification and leave-one-out cross validation. © 2021 IEEE. (A GLCM feature-extraction sketch with a KNN classifier appears after the results list.)
  • Conference Object
    Citation - Scopus: 0
    Applying the Histogram of Oriented Gradients To Recognize Arabic Letters
    (Institute of Electrical and Electronics Engineers Inc., 2021) Douma,A.; Sengul,G.; Ibrahim Salem,F.G.; Ali Ahmed,A.; Computer Engineering
    The aim of this paper is to recognize Arabic handwritten letters using the histogram of oriented gradients (HOG). We collected 2240 letters from 8 people; each person wrote the 28 alphabet letters 10 times. First, all 2240 handwritten Arabic letter images are resized (pre-processing); their features are then extracted using the histogram of oriented gradients (HOG). For classification, the K-Nearest Neighbor (KNN) classifier is used. Results are reported using 1120 images in one case and 2240 images in a second case and evaluated with a confusion matrix. In other cases we used leave-one-out (LOO), 2-fold classification and leave-one-out cross validation. The best overall performance of HOG was obtained with the leave-one-out technique, owing to the ability of the HOG algorithm to capture the shape of the letter in the image through its edges (gradients). © 2021 IEEE. (A HOG + KNN sketch with leave-one-out evaluation appears after the results list.)
  • Article
    Citation - Scopus: 2
    Trends in E-Governments: From E-Govt To M-Govt
    (2013) Ertürk,K.L.; Sengul,G.; Rehan,M.; Information Systems Engineering; Computer Engineering
    New technological advancements and the availability of mobile devices, technologies, applications and networks have made it possible for ordinary citizens to access information and transact services while on the move. This gives governments an opportunity to provide such services to citizens at minimum cost. E-government practices and routines in public sectors are being supplemented and are moving towards m-government (mobile government). M-government can be defined as the large-scale use of mobile devices and their applications to develop a quick connection and response between citizens and public sector authorities. M-government supports improving the quality, timeliness and usability of e-government applications around the clock and from any location. The existing technological foundations, applications and services support the idea that m-government will be a significant part of e-government efforts. Policy makers and IT professionals need to get ready to embrace these developments and participate in ways to enhance e-government activities through m-government. This transformational process is going on around the world. This article investigates how governmental organizations are becoming mobile government organizations in order to reach their citizens quickly and increase communication with them beyond existing limits. © IDOSI Publications, 2013.
  • Conference Object
    Citation - Scopus: 0
    Deep Learning and Current Trends in Machine Learning
    (Institute of Electrical and Electronics Engineers Inc., 2018) Bostan,A.; Ekin,C.; Sengul,G.; Karakaya,M.; Tirkes,G.; Computer Engineering
    Academic interest and commercial attention can be used to gauge how much potential a novel technology may have, since its prospective advantages may help solve problems that are not yet solved or improve the performance of readily available solutions. In this study, we investigated the Web of Science (WOS) indexing service database for publications on Deep Learning (DL), Machine Learning (ML), Convolutional Neural Networks (CNN), and Image Processing to reveal the current trend. The figures indicate the strong potential of the DL approach, especially in the image processing domain. © 2018 IEEE.
  • Conference Object
    Citation - Scopus: 2
    Haptic User Interface Integration for 3D Game Engines
    (Springer Verlag, 2014) Sengul,G.; Çağiltay,N.E.; Özçelik,E.; Tuner,E.; Erol,B.; Computer Engineering; English Translation and Interpretation
    The human senses of touch and feel provide important information about the environment. When those senses are integrated with eyesight, we can obtain all the necessary information about the environment. In human-computer interaction, visual information is provided by visual displays, while the senses of touch and feel are provided by special devices called "haptic" devices. Haptic devices are used in many fields such as computer-aided design, remote surgery, medical simulation environments, and training simulators for both military and medical applications. Besides touch sensations, haptic devices also provide force feedback, which allows realistic environments to be designed in virtual reality applications. Haptic devices can be categorized into three classes: tactile devices, kinesthetic devices and hybrid devices. Tactile devices stimulate the skin to create contact sensations; kinesthetic devices apply forces to guide or inhibit body movement; and hybrid devices attempt to combine tactile and kinesthetic feedback. Among these, kinesthetic devices exert controlled forces on the human body and are the most suitable type for applications such as surgical simulation. In educational environments that require skill-based improvement, the senses of touch and feel are very important. In some cases, providing such an educational environment is very expensive, risky and may also raise ethical issues. Surgical education is one of these fields: traditional education takes place in the operating room on real patients, which is very expensive, requires long periods of time, does not allow trial-and-error experiences, is stressful for both educators and learners, and raises several ethical considerations. Simulation environments supported by haptic user interfaces provide a safer educational alternative. Several studies show some evidence of the educational benefits of this type of education (Tsuda et al 2009; Sutherland et al 2006). Similarly, this technology can also be successfully integrated into the physical rehabilitation process for some conditions requiring motor skill improvement (Kampiopiotis & Theodorakou, 2003). Hence, simulation environments today provide many opportunities for creating low-cost and more effective training and educational environments. Combining three-dimensional (3D) simulation environments with haptic interfaces is an important step for advancing current human-computer interaction. On the other hand, haptic devices do not provide a full simulation environment on their own, and it is necessary to complement them with software. Game engines provide high flexibility for creating 3D simulation environments, and Unity3D is one of the tools providing a game engine and physics engine for this purpose. In the literature there are many studies combining these two technologies to create educational and training environments; however, there is not much research showing how the two technologies can be integrated to build a simulation environment that also provides haptic interfaces. Several issues need to be handled for such an integration. First, the haptic device's control libraries need to be integrated into the game engine. Second, the game engine's simulation representations and real-time interaction features need to be represented in coordination with the haptic device's degrees of freedom and force-feedback rate and features. In this study, the integration architecture of the Unity 3D game engine and the PHANToM haptic device for creating a surgical education simulation environment is presented. The methods used for building this integration and handling the synchronization problems are described, and the algorithms developed for better synchronization and user feedback, such as providing a smooth feel and force feedback for the haptic interaction, are also provided. We believe that this study will be helpful for people creating simulation environments using Unity3D technology and PHANToM haptic interfaces. © 2014 Springer International Publishing. (A generic sketch of decoupling a high-rate haptic loop from a render loop appears after the results list.)
  • Conference Object
    Citation - Scopus: 1
    Method Proposal for Distinction of Microscope Objectives on Hemocytometer Images
    (Institute of Electrical and Electronics Engineers Inc., 2016) Ozkan,A.; Isgor,S.B.; Sengul,G.; Department of Electrical & Electronics Engineering; Chemical Engineering; Computer Engineering
    A hemocytometer is a special glass plate apparatus used for cell counting, with engraved lines (a counting chamber) of a defined size. Using this special slide and a microscope, the cell concentration of an available cell suspension can be estimated. Automating the processing of hemocytometer images will help several research disciplines improve the consistency of results and reduce human labor. Different microscope objectives can be used to analyze a cell sample, and these differences affect the level of detail in the image content: as the objective value increases, the image scale and level of detail increase, but the visible area becomes narrower. Due to this variation, different automated cell counting approaches should be developed for images taken with different objective values. In this paper, using hemocytometer images gathered from a microscope, a novel approach is introduced that can automatically estimate the objective value of a microscope with machine learning methods. For this purpose, a frequency-based visual feature is proposed which captures the hemocytometer grid structure well. In the conducted tests, 100% distinction accuracy is achieved with the proposed method. © 2016 IEEE. (A frequency-based feature sketch in this spirit appears after the results list.)
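Illustrative Code Sketches

The entries above describe several feature-extraction and classification pipelines. The sketches below are editorial illustrations, not the authors' code: the library choices (scikit-image, scikit-learn), parameter values, and all data are assumptions made for demonstration. This first sketch mirrors the gender-prediction pipeline: uniform LBP histograms as features, compared with KNN and discriminant analysis classifiers; the face images and labels are random stand-ins for the FERET data.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(image, points=8, radius=1):
    """Histogram of uniform LBP codes for one grayscale face crop."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus the single non-uniform bin
    hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Random stand-ins for the FERET face crops and gender labels (0 = female, 1 = male).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

features = np.array([lbp_histogram(img) for img in images])
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("Discriminant Analysis", LinearDiscriminantAnalysis())]:
    scores = cross_val_score(clf, features, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")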
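The next sketch corresponds to the GLCM branch of the Arabic-letter comparison: gray level co-occurrence matrix texture properties fed to a KNN classifier. The 28-class letter data are synthetic placeholders, and the scikit-image >= 0.19 function names (graycomatrix, graycoprops) are assumed.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(image, distances=(1,), angles=(0.0, np.pi / 2)):
    """Contrast, homogeneity, energy and correlation from a co-occurrence matrix."""
    glcm = graycomatrix(image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic placeholders: 28 letter classes, three samples each.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(84, 32, 32), dtype=np.uint8)
labels = np.repeat(np.arange(28), 3)

features = np.array([glcm_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=1)
print("3-fold accuracy:", cross_val_score(knn, features, labels, cv=3).mean())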
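This sketch follows the HOG pipeline described above: resize each letter image, extract a histogram-of-oriented-gradients descriptor, classify with KNN, and evaluate with leave-one-out cross validation. The dataset, image sizes, and HOG parameters are assumptions, not the paper's settings.

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def hog_features(image, size=(64, 64)):
    """Resize a letter image and describe it by its histogram of oriented gradients."""
    image = resize(image, size, anti_aliasing=True)
    return hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Synthetic placeholders for the handwritten letters: 28 classes, two samples each.
rng = np.random.default_rng(2)
images = rng.random(size=(56, 50, 40))
labels = np.repeat(np.arange(28), 2)

features = np.array([hog_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=1)
accuracy = cross_val_score(knn, features, labels, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2f}")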
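For the haptic integration entry, the sketch below is a generic illustration (written in Python with a fake device) of one synchronization issue the abstract raises: the haptic loop must run at a much higher rate than the game engine's render/update loop, so the two exchange state through a shared, locked snapshot. The PHANToM and Unity3D APIs are not modeled; every name here is hypothetical.

import math
import threading
import time

# Shared snapshot exchanged between the ~1 kHz haptic loop and the ~60 Hz render loop.
state_lock = threading.Lock()
shared_state = {"tool_position": 0.0, "feedback_force": 0.0}
running = True

def haptic_loop(read_device_position, stiffness=0.5, rate_hz=1000):
    """Poll the (fake) device at a high rate and compute a spring-like penalty force."""
    period = 1.0 / rate_hz
    while running:
        position = read_device_position()
        force = -stiffness * position if position > 0 else 0.0  # push back above the surface
        with state_lock:
            shared_state["tool_position"] = position
            shared_state["feedback_force"] = force
        time.sleep(period)

def render_loop(rate_hz=60, frames=60):
    """Read the latest tool pose at render rate, as a game engine update step would."""
    period = 1.0 / rate_hz
    for _ in range(frames):
        with state_lock:
            position = shared_state["tool_position"]
            force = shared_state["feedback_force"]
        print(f"render: tool at {position:+.3f}, force {force:+.3f}")
        time.sleep(period)

if __name__ == "__main__":
    start = time.time()
    fake_device = lambda: math.sin(time.time() - start)  # stand-in for the device driver
    threading.Thread(target=haptic_loop, args=(fake_device,), daemon=True).start()
    render_loop()
    running = False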
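Finally, for the hemocytometer entry, this sketch illustrates a frequency-based feature in the spirit of the one described (but not the paper's actual feature): the counting-chamber grid shows up as peaks in the image's 2-D spectrum, and the dominant grid frequency shifts with the objective, so a simple KNN can separate objective values. The grid images and the objective-to-period mapping are synthetic assumptions.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def grid_frequency_feature(image):
    """Dominant horizontal and vertical grid frequencies from the 2-D FFT magnitude."""
    spectrum = np.abs(np.fft.rfft2(image - image.mean()))
    half = spectrum.shape[0] // 2
    row_profile = spectrum[:half].sum(axis=1)  # keep only non-negative row frequencies
    col_profile = spectrum.sum(axis=0)         # rfft2 keeps only non-negative column ones
    return np.array([row_profile[1:].argmax() + 1,
                     col_profile[1:].argmax() + 1], dtype=float)

def synthetic_grid(period, size=256):
    """Toy hemocytometer image: a square grid whose line spacing is `period` pixels."""
    idx = np.arange(size)
    lines = ((idx % period) < 2).astype(float)
    return np.maximum.outer(lines, lines)

# Hypothetical mapping: higher objective -> wider grid spacing in the captured image.
periods = {10: 16, 20: 32, 40: 64}
rng = np.random.default_rng(3)
images, labels = [], []
for objective, period in periods.items():
    for _ in range(5):
        images.append(synthetic_grid(period) + rng.normal(0.0, 0.05, (256, 256)))
        labels.append(objective)

features = np.array([grid_frequency_feature(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=1)
print("Cross-validated accuracy:", cross_val_score(knn, features, np.array(labels), cv=5).mean())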