A Self-Organising Fuzzy Neural Network with Locally Recurrent Self-Adaptive Synapses

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper describes a modification to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve learning ability. Previously, the SOFNN's computational efficiency was improved using a new method of checking the network structure after it has been modified. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes and to ensure performance degradation does not occur, resulting in significantly reduced training times. To exploit the temporal information contained in the record of saved firing strengths, a new architecture of the SOFNN is proposed in this paper where recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn the temporal information from the data and, in contrast to pure feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of the time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed where the recurrent feedback weight is updated online (not iteratively), in proportion to the aggregate firing activity of each fuzzy neuron. It is shown that this modification, which conforms to the requirements for autonomy and has no additional hyper-parameters, can significantly improve the SOFNN's prediction capacity under certain constraints.
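As a rough illustration of the update rule the abstract describes, the following Python sketch keeps a running record of each neuron's firing strengths and sets each neuron's self-feedback weight in proportion to its aggregate firing activity. The class and function names (`RecurrentFuzzyLayer`, `gaussian_firing`) and the normalisation by total activity are assumptions for illustration only; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def gaussian_firing(x, centers, widths):
    """Gaussian membership firing strength of each fuzzy neuron for input x.

    centers, widths: arrays of shape (n_neurons, n_inputs); x: shape (n_inputs,).
    """
    return np.exp(-np.sum((x - centers) ** 2 / (2 * widths ** 2), axis=1))

class RecurrentFuzzyLayer:
    """Hypothetical sketch of layer-3 neurons with local self-feedback.

    Each neuron's feedback weight is updated online, in proportion to its
    aggregate (running-sum) firing activity over the data seen so far.
    """
    def __init__(self, n_neurons):
        self.aggregate = np.zeros(n_neurons)  # record of past firing strengths
        self.prev_out = np.zeros(n_neurons)   # previous layer-3 output

    def step(self, firing):
        # Update the stored record of firing activity (no iterative training).
        self.aggregate += firing
        # Feedback weight: each neuron's share of the total aggregate activity
        # (an assumed normalisation, keeping the update hyper-parameter free).
        w_rec = self.aggregate / self.aggregate.sum()
        # Locally recurrent activation: current firing plus weighted past output.
        out = firing + w_rec * self.prev_out
        self.prev_out = out
        return out
```

Because the weight is derived directly from the stored firing record rather than learned iteratively, the recurrence adds no tunable hyper-parameters, which matches the autonomy requirement stated in the abstract.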
Language: English
Title of host publication: Unknown Host Publication
Pages: 1-8
Number of pages: 8
Publication status: Published - 2011
Event: IEEE Symposium Series on Computational Intelligence (SSCI 2011), Paris, France
Duration: 1 Jan 2011 → …

Conference

Conference: IEEE Symposium Series on Computational Intelligence (SSCI 2011), Paris, France
Period: 1/01/11 → …

Fingerprint

Fuzzy neural networks
Neurons
Feedback
Computational efficiency
Learning algorithms
Time series
Dynamical systems
Degradation
Testing

Cite this

@inproceedings{d06478675d2a49d699c4590b69633de7,
title = "A Self-Organising Fuzzy Neural Network with Locally Recurrent Self-Adaptive Synapses",
abstract = "This paper describes a modification to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve learning ability. Previously, the SOFNN's computational efficiency was improved using a new method of checking the network structure after it has been modified. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes and to ensure performance degradation does not occur, resulting in significantly reduced training times. To exploit the temporal information contained in the record of saved firing strengths, a new architecture of the SOFNN is proposed in this paper where recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn the temporal information from the data and, in contrast to pure feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of the time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed where the recurrent feedback weight is updated online (not iteratively), in proportion to the aggregate firing activity of each fuzzy neuron. It is shown that this modification, which conforms to the requirements for autonomy and has no additional hyper-parameters, can significantly improve the SOFNN's prediction capacity under certain constraints.",
author = "DH Coyle and G Prasad and TM McGinnity",
year = "2011",
language = "English",
pages = "1--8",
booktitle = "Unknown Host Publication",

}

