Gökçay, Erhan

Name Variants
Gokcay, E
E.,Gökçay
Gökçay,E.
E., Gökçay
G.,Erhan
Gokcay E.
Goekcay, Erhan
Gokcay, Erhan
Erhan, Gokcay
Gökçay, Erhan
E., Gokcay
GOKCAY, E
Erhan, Gökçay
Gokcay,E.
Gökçay E.
G., Erhan
E.,Gokcay
Job Title
Doktor Öğretim Üyesi (Assistant Professor)
Email Address
erhan.gokcay@atilim.edu.tr
Scholarly Output: 14
Articles: 6
Citation Count: 15
Supervised Theses: 3

Scholarly Output Search Results

Now showing 1 - 10 of 14
  • Article
    Citation Count: 1
    An information-theoretic instance-based classifier
    (Elsevier Science Inc., 2020) Gökçay, Erhan; Software Engineering
    Classification algorithms are used in many areas to determine new class labels given a training set. Many classification algorithms, linear or not, require a training phase that determines model parameters through iterative optimization of the cost function for that particular model or algorithm. The training phase can adjust and fine-tune the boundary line between classes. However, the process may get stuck in a local optimum, which may or may not be close to the desired solution. Another disadvantage of training is that upon arrival of a new sample, the model must be retrained. This work presents a new information-theoretic approach to instance-based supervised classification. The boundary line between classes is calculated from the data points alone, without any external parameters or weights, and it is given in closed form. The separation between classes is nonlinear and smooth, which reduces memorization problems. Since the method does not require a training phase, classified samples can be incorporated into the training set directly, simplifying streaming classification. The boundary line can be replaced with an approximation or regression model for parametric calculations. The features and performance of the proposed method are discussed and compared with similar algorithms. (C) 2020 Elsevier Inc. All rights reserved.
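    The closed-form boundary itself is not given in the abstract, so the sketch below is only a rough illustration of the training-free, instance-based idea: a Parzen-window style classifier that labels a sample by its highest average Gaussian kernel affinity. The function names and the kernel width are hypothetical choices, not the paper's method.

```python
import numpy as np

def class_affinity(x, class_samples, sigma=1.0):
    """Average Gaussian kernel affinity (an information-potential style
    quantity) between a sample x and one class's stored instances."""
    d2 = np.sum((class_samples - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))

def classify(x, classes, sigma=1.0):
    """Label x with the class of highest affinity. There is no training
    phase, so a newly classified sample can be appended to its class
    immediately, which suits streaming operation."""
    labels = list(classes)
    scores = [class_affinity(x, classes[c], sigma) for c in labels]
    return labels[int(np.argmax(scores))]

# Toy usage with two 2-D classes; a new point is labeled instantly.
rng = np.random.default_rng(0)
classes = {"A": rng.normal(0, 1, (50, 2)), "B": rng.normal(4, 1, (50, 2))}
print(classify(np.array([3.5, 3.8]), classes))  # expected: B
```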
  • Article
    Citation Count: 7
    A generalized Arnold's Cat Map transformation for image scrambling
    (Springer, 2022) Turan, Mehmet; Gökçay, Erhan; Buker, Mohamed; Tora, Hakan; Mathematics; Software Engineering; Airframe and Powerplant Maintenance
    This study presents a new approach to generating the transformation matrix for Arnold's Cat Map (ACM). The matrices of standard and modified ACM are well known, and since the structure of the possible matrices is known, one can easily select one and use it to recover the image within several trials. The proposed method, however, generates a much larger set of transform matrices, making it difficult to estimate the transform matrix used for scrambling. Our matrix has no fixed structure, unlike standard or modified ACM, so the transform matrix is much harder to discover. Different types, orders, and numbers of operations can be used to generate the transform matrix. The quality of the shuffling process and the strength of the proposed method against brute-force attacks are tested on several benchmark images.
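    The paper's matrix-generation procedure is not reproduced in the abstract; the minimal sketch below shows only the classical ACM scrambling step that the method generalizes, with the standard Arnold matrix as a placeholder.

```python
import numpy as np

def acm_scramble(img, steps=1, A=((1, 1), (1, 2))):
    """Scramble a square image by iterating (y, x) -> A (y, x)^T mod n.

    A defaults to the classical Arnold matrix; the paper's larger family
    of transform matrices is not reproduced here. The map is periodic,
    so iterating long enough returns the original image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "ACM requires a square image"
    (a, b), (c, d) = A
    out = img.copy()
    ys, xs = np.indices((n, n))
    for _ in range(steps):
        nxt = np.empty_like(out)
        nxt[(a * ys + b * xs) % n, (c * ys + d * xs) % n] = out
        out = nxt
    return out
```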
  • Doctoral Thesis
    Noise removal from medical images using image fusion (Görüntü füzyonu kullanarak tıbbi görüntülerden gürültü arındırma)
    (2020) Gökçay, Erhan; Software Engineering
    Image fusion is a technique for obtaining a single high-quality image from several available images. The most prominent method is high-pass filtering; later methods are built on the Dual-Tree Complex DWT (DTCWT), uniform rational filter banks, and pyramid techniques. This thesis addresses image fusion through Gaussian and Poisson denoising of cephalometric X-ray images. Digital imaging applications suffer errors during image transmission and acquisition for reasons such as untargeted communication and inadequate equipment. Images damaged by unprotected transmission are captured by different sensors. After denoising, the resulting images are fused with one another to obtain a single image of high quality and resolution; the process of combining two or more images into a single final image is called image fusion. In this thesis, different image fusion algorithms and (Gaussian and Poisson) noise filters were used. The methodology and results in Chapter 4 cover twenty-one methods. The first thirteen are image enhancement methods relevant to this thesis and are used in the proposed denoising process: the first eight use thresholding and shrinkage for denoising, the next two are image filtering methods, and the final three are fusion methods. The last eight methods consist of multiple stages, each carried out using the best previously tested results. The thesis applies these methods to 400 cephalometric X-ray images together with multi-sensor, transform-based fusion technology, namely the Dual-Tree Complex Discrete Wavelet Transform. The signal is decomposed into different frequency subbands using the DTCWT; bilateral filtering removes noise from the low-frequency subbands, while bivariate shrinkage wavelet thresholding is applied to the high-frequency subbands. The denoised subbands are then combined according to a wavelet-transform fusion rule. Test results show that these fusion algorithms produce a high-quality image.
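    The full thesis pipeline (DTCWT decomposition, bilateral filtering, bivariate shrinkage) is not reproduced here. As a minimal sketch of the fusion rule alone, assuming PyWavelets and a plain single-level DWT in place of the dual-tree complex transform:

```python
import numpy as np
import pywt  # PyWavelets; the thesis uses the dual-tree complex DWT,
             # which pywt does not provide out of the box

def wavelet_fuse(img1, img2, wavelet="db2"):
    """Fuse two denoised images of the same scene: average the low-pass
    (approximation) band and keep the stronger response in each
    high-pass (detail) band."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    fused = ((cA1 + cA2) / 2.0,
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```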
  • Conference Object
    Citation Count: 0
    A Stream Clustering Algorithm using Information Theoretic Clustering Evaluation Function
    (Scitepress, 2018) Gökçay, Erhan; Software Engineering
    There are many stream clustering algorithms, which can be divided roughly into density-based algorithms and hyperspherical distance-based algorithms. Only density-based algorithms can detect nonlinear clusters, and all algorithms assume that the data stream is an ordered sequence of points. Many algorithms need to receive data in buckets and process them with online and offline iterations over several passes. In this paper, we propose a streaming clustering algorithm that uses a distance function able to separate highly nonlinear clusters in one pass. The distance function is based on information-theoretic measures and is called the Clustering Evaluation Function. The algorithm handles data one point at a time and finds the correct number of clusters even when the clusters are highly nonlinear. Data points can arrive in any random order, and the number of clusters does not need to be specified. Each point is compared against already discovered clusters, and clusters are joined or divided using an iteratively updated threshold.
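    As a hedged illustration of the one-pass scheme, the sketch below attaches each arriving point to its most similar cluster or opens a new one. The affinity is a simplified Gaussian cross information potential standing in for the paper's Clustering Evaluation Function, and the fixed threshold replaces the paper's iteratively updated one; the join/divide step is omitted.

```python
import numpy as np

def affinity(point, cluster, sigma=1.0):
    """Mean Gaussian cross information potential between a point and a
    cluster: a simplified stand-in for the paper's Clustering
    Evaluation Function, whose exact form is not reproduced here."""
    d2 = np.sum((np.asarray(cluster) - point) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))

def stream_cluster(stream, threshold=0.1, sigma=1.0):
    """One pass over the stream: attach each point to the most similar
    cluster, or open a new cluster; no cluster count is given up front."""
    clusters = []
    for x in stream:
        x = np.asarray(x, dtype=float)
        scores = [affinity(x, c, sigma) for c in clusters]
        if scores and max(scores) >= threshold:
            clusters[int(np.argmax(scores))].append(x)
        else:
            clusters.append([x])
    return clusters
```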
  • Conference Object
    Citation Count: 0
    A decentralized on demand cloud CPU design with instruction level virtualization
    (Springer Verlag, 2018) Gökçay, Erhan; Software Engineering
    Cloud technology provides many advantages and services over traditional computational models. Although the virtual services provided increase resource sharing and the cost-effectiveness of the system, each node in the system is still centralized. Different CPU and OS versions bring interoperability problems in data exchange between nodes, and in most cases less powerful units are left outside the service area; such units can only be considered consumers of the cloud system. A new service called Cloud CPU, described elsewhere, has the cloud provide the computational background for the components of a virtual CPU, with the computation distributed over the internet. The design uses all units connected to the internet and achieves massively parallel operation. In this paper, the Cloud CPU design is extended and the services needed by the new architecture are discussed. One of the new services is a multi-language compiler in which neither the source nor the target language is fixed. The compiler's job is not to use the cloud for execution but to distribute the computation according to the instruction sets published by each node. The computation makes sense only when all units work together, so the nodes included in a particular computation must be connected and synchronized; this need disappears when the computation is finished. Therefore, an on-demand Cloud-OS service is needed for bookkeeping and synchronization. The need for the Cloud-OS is temporary, and the on-demand Cloud-OS is terminated when the computation ends. © Springer International Publishing AG, part of Springer Nature 2018.
  • Conference Object
    Citation Count: 1
    An IoT application for locating victims aftermath of an earthquake
    (Institute of Electrical and Electronics Engineers Inc., 2017) Şengül, Gökhan; Gökçay, Erhan; Karakaya, Kasım Murat; Software Engineering; Computer Engineering
    This paper presents an Internet of Things (IoT) framework designed to assist search and rescue operations targeting collapsed buildings in the aftermath of an earthquake. In general, an IoT network is used to collect and process data from different sources called things; according to the collected data, an IoT system can actuate different mechanisms to react to the environment. In the problem at hand, we exploit IoT capabilities to collect data about the victims before the building collapses; when it falls, the collected data is processed to generate reports that direct the search and rescue efforts. The proposed framework is tested by a pilot implementation with some simplifications. The initial results and experiences are promising. During the pilot implementation, we observed some issues, which the proposed IoT framework addresses. © 2017 IEEE.
  • Conference Object
    Citation Count: 1
    Effect of secret image transformation on the steganography process
    (Institute of Electrical and Electronics Engineers Inc., 2017) Tora, Hakan; Gökçay, Erhan; Airframe and Powerplant Maintenance; Software Engineering
    Steganography is the art of hiding information in something else. It is favored over encryption because encryption hides only the meaning of the information, whereas steganography hides its very existence. The presence of a hidden image decreases the Peak Signal to Noise Ratio (PSNR) and increases the Mean Square Error (MSE) of the stego image. We propose an approach to improve the PSNR and MSE values of stego images: a transformation is applied to the secret image before it is embedded into the cover image. The effect of the transformation is tested with the Least Significant Bit (LSB) insertion and Discrete Cosine Transformation (DCT) techniques, and MSE and PSNR are calculated for both techniques with and without the transformation. Results show better MSE and PSNR values when the transformation is applied with the LSB technique, but no significant difference with the DCT technique. © 2017 IEEE.
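    A minimal sketch of the embedding side, assuming a keyed XOR mask as a hypothetical stand-in for the paper's secret-image transformation (the actual transformation is not specified in the abstract):

```python
import numpy as np

def embed_lsb(cover, secret_bits):
    """Replace the least significant bit of each cover pixel with one
    secret bit (both arrays uint8, same shape, bits in {0, 1})."""
    return (cover & 0xFE) | secret_bits

def extract_lsb(stego):
    return stego & 1

def keyed_mask(shape, key=42):
    """Hypothetical invertible transform: a keyed pseudo-random bit mask
    XORed with the secret bit-plane (undone by re-XORing)."""
    return np.random.default_rng(key).integers(0, 2, shape, dtype=np.uint8)

# Round trip: transform the secret, embed, extract, invert the transform.
cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
secret = np.random.default_rng(2).integers(0, 2, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, secret ^ keyed_mask(secret.shape))
recovered = extract_lsb(stego) ^ keyed_mask(secret.shape)
assert np.array_equal(recovered, secret)
```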
  • Master Thesis
    An adaptive network selection technique in cascaded convolutional neural networks (Kademeli evrişimli sinir ağlarında uyarlanabilir ağ seçimi tekniği)
    (2023) Gökçay, Erhan; Software Engineering
    Dynamic neural networks are an important research area in deep learning. This thesis focuses on cascaded neural networks, which use a router to connect two or more neural networks of increasing depth in order to improve the efficiency and adaptability of static models. We propose a parameter-free technique for network selection in cascaded deep neural networks. The technique aims to reduce the computation time required for training and inference by exploiting the fact that even shallow networks can classify many samples correctly. Following a brief description of cascaded neural networks, the softmax margin, and the classical LeNet model, a new cascaded neural network algorithm is introduced. The proposed model is compared with LeNet in terms of effectiveness and performance on the MNIST, EMNIST, and Fashion-MNIST datasets. Numerical results show that the proposed technique greatly improves the efficiency of the reference model without sacrificing accuracy.
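    As a rough illustration of the routing idea (not the thesis's parameter-free rule, which the abstract does not spell out), the sketch below accepts the shallow network's answer when its softmax margin clears a fixed, hypothetical threshold:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cascade_predict(x, shallow_net, deep_net, margin_threshold=0.5):
    """Two-stage cascade: accept the shallow network's answer when its
    softmax margin (top-1 minus top-2 probability) is large enough,
    otherwise fall through to the deeper network. shallow_net and
    deep_net are any callables returning class logits; the fixed
    threshold here is a placeholder, not the thesis's selection rule."""
    p = softmax(shallow_net(x))
    top2 = np.sort(p)[-2:]
    if top2[1] - top2[0] >= margin_threshold:
        return int(np.argmax(p))
    return int(np.argmax(deep_net(x)))
```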
  • Article
    Citation Count: 0
    An unrestricted Arnold's cat map transformation
    (Springer, 2024) Turan, Mehmet; Gökçay, Erhan; Tora, Hakan; Software Engineering; Mathematics; Airframe and Powerplant Maintenance
    The Arnold's Cat Map (ACM) is one of the chaotic transformations utilized by numerous scrambling and encryption algorithms in information security. Traditionally, the ACM is used in image scrambling: by repeated application of the ACM matrix, any image can be scrambled. The transformation obtained by the ACM matrix is periodic; therefore, the original image can be reconstructed from the scrambled image whenever the elements of the matrix, hence the key, are known. The transformation matrices in all chaotic maps employing the ACM have limitations on the choice of the free parameters, generally requiring the area-preserving property of the transformation matrix, that is, a determinant equal to ±1. This reduces the number of possible keys, making it easier to discover the ACM matrix in encryption algorithms by brute force. Additionally, the period obtained is small, which also speeds up recovery of the original image by repeated application of the matrix. These two parameters are important in a brute-force attack aiming to recover the original image from a scrambled one. The objective of the present study is to increase the key space of the ACM matrix, hence increase the security of the scrambling process and make a brute-force attack more difficult. It is proved mathematically that the area-preserving property of the traditional matrix is not required for a matrix used in the scrambling process. Removing the restriction enlarges the maximum possible key space and, in many cases, increases the period as well. Additionally, it is shown experimentally that, in scrambling images, the new ACM matrix is equivalent or superior to the traditional one, with longer periods. Consequently, encryption techniques using the ACM become more robust than the traditional ones. The new ACM matrix is compatible with all algorithms that utilize the original matrix. In this contribution, we prove that the traditional requirement that the determinant of the ACM matrix be one is redundant and can be removed.
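    The key mathematical point is directly checkable: the map v -> Av (mod n) permutes the pixel grid exactly when gcd(det A, n) = 1, so det A = ±1 is sufficient but not necessary. The sketch below verifies invertibility and computes the scrambling period, including a matrix with determinant 2.

```python
import numpy as np
from math import gcd

def acm_period(A, n):
    """Smallest k with A^k = I (mod n), i.e. the scrambling period.

    The map v -> A v (mod n) permutes the n x n grid exactly when
    gcd(det A, n) = 1; det A = +/-1 is sufficient but not necessary,
    which is the restriction the paper removes."""
    A = np.asarray(A, dtype=np.int64) % n
    det = int(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % n
    if gcd(det, n) != 1:
        raise ValueError("matrix is not invertible mod n")
    I = np.eye(2, dtype=np.int64)
    M, k = A.copy(), 1
    while not np.array_equal(M, I):
        M = (M @ A) % n
        k += 1
    return k

print(acm_period([[1, 1], [1, 2]], 256))  # classical ACM, det = 1
print(acm_period([[2, 1], [0, 1]], 255))  # det = 2, valid: gcd(2, 255) = 1
```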
  • Article
    Citation Count: 0
    Entropy based streaming big-data reduction with adjustable compression ratio
    (Springer, 2023) Gökçay, Erhan; Software Engineering
    The Internet of Things is a novel concept in which numerous physical devices are linked to the internet to collect, generate, and distribute data for processing. Data storage and processing become more challenging as the number of devices increases. One solution to the problem is to reduce the amount of stored data in such a way that processing accuracy does not suffer significantly. The reduction can be lossy or lossless, depending on the type of data. The article presents a novel lossy algorithm for reducing the amount of data stored in the system. The reduction process aims to reduce the volume of data while maintaining classification accuracy and properly adjusting the reduction ratio. A nonlinear cluster distance measure is used to create subgroups so that samples can be assigned to the correct clusters even though the cluster shape is nonlinear. Each sample is assumed to arrive one at a time during the reduction. As a result of this approach, the algorithm is suitable for streaming data. The user can adjust the degree of reduction, and the reduction algorithm strives to minimize classification error. The algorithm is not dependent on any particular classification technique. Subclusters are formed and readjusted after each sample during the calculation. To summarize the data from the subclusters, representative points are calculated. The data summary that is created can be saved and used for future processing. The accuracy difference between regular and reduced datasets is used to measure the effectiveness of the proposed method. Different classifiers are used to measure the accuracy difference. The results show that the nonlinear information-theoretic cluster distance measure improves the reduction rates with higher accuracy values compared to existing studies. At the same time, the reduction rate can be adjusted as desired, which is a lacking feature in the current methods. The characteristics are discussed, and the results are compared to previously published algorithms.