
Serkan Kiranyaz, PhD

Qatar University/Tampere University

Serkan Kiranyaz was born in Turkey in 1972. He received his BS and MS degrees in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey, in 1994 and 1996, respectively. He received his PhD degree in 2005 and his Docency in 2007 from the Institute of Signal Processing, Tampere University of Technology. He worked as a Professor in the Signal Processing Department of the same university from 2009 to 2015. He is currently a Professor at Qatar University, Doha, Qatar. Prof. Kiranyaz has noteworthy expertise and background in various signal processing domains. He has published two books, 7 book chapters, 7 patents, more than 90 journal articles in several IEEE Transactions and other high-impact journals, and more than 110 papers in international conferences. He has served as PI and LPI in several national and international projects. His principal research fields are machine learning and signal processing. He is rigorously aiming to reinvent novel signal processing paradigms, to enrich them with new approaches, especially in machine intelligence, and to revolutionize the means of “learn-to-process” signals. He has made significant contributions to bio-signal analysis, particularly EEG and ECG analysis and processing, classification and segmentation; computer vision with applications to recognition, classification, and multimedia retrieval; evolving systems and evolutionary machine learning; and swarm intelligence and evolutionary optimization.


New-Generation Neural Networks and Applications

Multi-Layer Perceptrons (MLPs) and their derivatives, Convolutional Neural Networks (CNNs), share a common drawback: they employ a homogeneous network structure with an identical “linear” neuron model. This naturally makes them only a crude model of biological neurons or mammalian neural systems, which are heterogeneous and composed of highly diverse neuron types with distinct biochemical and electrophysiological properties. With such crude models, conventional homogeneous networks can learn problems with a monotonous, relatively simple, and linearly separable solution space sufficiently well, but they fail to do so whenever the solution space is highly nonlinear and complex. To address this drawback, a heterogeneous and dense network model, Generalized Operational Perceptrons (GOPs), has recently been proposed. GOPs aim to model biological neurons with distinct synaptic connections. GOPs have demonstrated a superior diversity, as encountered in biological neural networks, which has yielded an elegant performance level on numerous challenging problems where conventional MLPs entirely failed. Following in GOPs’ footsteps, a heterogeneous and nonlinear network model, the Operational Neural Network (ONN), has recently been proposed as a superset of CNNs. ONNs, like their predecessor GOPs, boost the diversity needed to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, ONNs also exhibit certain drawbacks, such as strict dependence on the operators in the operator set library, the mandatory search for the best operator set for each layer/neuron, and the need to set (fix) the operator sets of the output layer neuron(s) in advance. Self-organized ONNs (Self-ONNs) with generative neurons can address all these drawbacks without any prior search or training and with elegant computational complexity.
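To make the contrast concrete, the following is a minimal, hedged NumPy sketch of the idea behind a generative neuron: whereas a conventional (convolutional) neuron applies a single fixed linear correlation, a generative neuron approximates a learnable nonlinear nodal operator with a truncated power (Maclaurin-style) series, one kernel per power term. The function names, toy signal, and kernel values are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def linear_neuron(x, w):
    """Classical (convolutional) neuron: a single linear correlation."""
    return np.correlate(x, w, mode="valid")

def generative_neuron(x, W):
    """Sketch of a generative neuron: the nodal operator is approximated
    by a truncated power series, y = sum_q correlate(x**q, w_q).
    W has shape (Q, K): one learnable kernel per power term q = 1..Q."""
    Q = W.shape[0]
    return sum(np.correlate(x ** (q + 1), W[q], mode="valid")
               for q in range(Q))

# Toy 1-D input and kernels (illustrative values only)
x = np.array([0.1, -0.2, 0.3, 0.5, -0.1])
w = np.array([0.5, -0.5, 0.25])           # linear kernel
W = np.stack([w, 0.1 * w, 0.01 * w])      # Q = 3 power-term kernels

y_lin = linear_neuron(x, w)
y_gen = generative_neuron(x, W)

# With Q = 1 the generative neuron reduces to the ordinary linear neuron:
assert np.allclose(generative_neuron(x, W[:1]), y_lin)
```

The point of the sketch is that the series coefficients (the kernels in `W`) are learned per neuron during training, so each neuron can realize its own nonlinear transformation without any prior operator search.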
However, generative neurons still perform “localized” kernel operations, and hence the kernel size of a neuron at a particular layer solely determines the capacity of the receptive fields and the amount of information gathered from the previous layer. In order to improve the receptive field size and even to find the best possible location for each kernel, non-localized kernel operations for Self-ONNs are embedded in a novel neuron model superior to the generative neurons, hence called the “super (generative) neurons”. This talk will cover the natural evolution of artificial neuron and network models, starting from the ancient (linear) neuron model of the 1940s and progressing to the super neurons and new-generation Self-ONNs. Particular focus will be placed on numerous image processing applications, such as image restoration, denoising, and regression, where Self-ONNs, especially with super neurons, have achieved state-of-the-art performance levels by a significant margin.
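The non-localized kernel idea above can be sketched in a few lines: instead of the kernel always being applied at a fixed, centered location, each connection carries a spatial shift that moves its receptive field. A minimal illustration, assuming integer shifts and a circular (wrap-around) shift for simplicity; in the actual model the shifts are per-connection parameters set randomly or learned, and this code is only a conceptual sketch.

```python
import numpy as np

def shifted_kernel_term(x, w, shift):
    """One connection of a super-neuron sketch: the kernel is applied to a
    shifted copy of the input, so its receptive field need not be centered
    on the output position. `shift` is an integer here (learnable or
    randomized in the actual model); np.roll is a simplification that
    wraps around the signal boundary."""
    x_shifted = np.roll(x, shift)
    return np.correlate(x_shifted, w, mode="valid")

x = np.arange(8, dtype=float)
w = np.array([1.0, 0.0, -1.0])

y0 = shifted_kernel_term(x, w, 0)   # localized: an ordinary kernel operation
y3 = shifted_kernel_term(x, w, 3)   # non-localized: receptive field moved by 3
```

With zero shift the operation reduces to the ordinary localized kernel, while a nonzero shift gathers information from a different region of the input, which is what lets super neurons enlarge and relocate the receptive field without enlarging the kernel itself.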