Faster Self-Organizing Fuzzy Neural Network Training and Improved Autonomy with Time-Delayed Synapses for Locally Recurrent Learning

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN's computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network-structure changes while ensuring performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed in which recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from data: in contrast to purely feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of a time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively), proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN's prediction capacity under certain constraints.
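The online weight-update idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm, only one plausible reading of it: a layer-three fuzzy neuron carries a time-delayed self-feedback connection, and its feedback weight is set online (no iterative optimization) in proportion to the neuron's aggregate firing activity. All names and the gain parameter `eta` are hypothetical.

```python
class RecurrentFuzzyNeuron:
    """Sketch of a layer-3 fuzzy neuron with a time-delayed
    self-feedback connection (illustrative, not the published method)."""

    def __init__(self, eta=0.01):
        self.w_fb = 0.0      # recurrent feedback weight, learned online
        self.prev_phi = 0.0  # delayed (t-1) firing strength
        self.agg_phi = 0.0   # running sum of firing activity
        self.n = 0           # number of samples seen
        self.eta = eta       # update gain (hypothetical parameter)

    def fire(self, phi_t):
        # Output combines the current firing strength with the
        # weighted, time-delayed firing strength from the previous step.
        out = phi_t + self.w_fb * self.prev_phi
        # Online, non-iterative update: the feedback weight is made
        # proportional to the neuron's mean aggregate firing activity.
        self.n += 1
        self.agg_phi += phi_t
        self.w_fb = self.eta * (self.agg_phi / self.n)
        self.prev_phi = phi_t
        return out
```

Because the update uses only running sums, it adds constant per-sample cost, which is consistent with the chapter's emphasis on reduced training times.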
Language: English
Title of host publication: System and Circuit Design for Biologically-Inspired Intelligent Learning
Pages: 156-183
Publication status: Published - Dec 2010

Fingerprint

Fuzzy neural networks
Neurons
Computational efficiency
Feedback
Fuzzy systems
Learning algorithms
Time series
Dynamical systems
Degradation
Testing

Cite this

@inbook{0e89b6db015a492cb13d9d90dbaaf973,
title = "Faster Self-Organizing Fuzzy Neural Network Training and Improved Autonomy with Time-Delayed Synapses for Locally Recurrent Learning",
abstract = "This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN's computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network-structure changes while ensuring performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed in which recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from data: in contrast to purely feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of a time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively), proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN's prediction capacity under certain constraints.",
author = "DH Coyle and G Prasad and TM McGinnity",
year = "2010",
month = "12",
language = "English",
isbn = "978-1-60960-018-1 (hardcover)",
pages = "156--183",
booktitle = "System and Circuit Design for Biologically-Inspired Intelligent Learning",

}

Faster Self-Organizing Fuzzy Neural Network Training and Improved Autonomy with Time-Delayed Synapses for Locally Recurrent Learning. / Coyle, DH; Prasad, G; McGinnity, TM.

System and Circuit Design for Biologically-Inspired Intelligent Learning. 2010. p. 156-183.

Research output: Chapter in Book/Report/Conference proceeding › Chapter

TY - CHAP

T1 - Faster Self-Organizing Fuzzy Neural Network Training and Improved Autonomy with Time-Delayed Synapses for Locally Recurrent Learning

AU - Coyle, DH

AU - Prasad, G

AU - McGinnity, TM

PY - 2010/12

Y1 - 2010/12

N2 - This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN's computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network-structure changes while ensuring performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed in which recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from data: in contrast to purely feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of a time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively), proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN's prediction capacity under certain constraints.

AB - This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN's computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network-structure changes while ensuring performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed in which recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from data: in contrast to purely feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of a time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively), proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN's prediction capacity under certain constraints.

M3 - Chapter

SN - 978-1-60960-018-1 (hardcover)

SP - 156

EP - 183

BT - System and Circuit Design for Biologically-Inspired Intelligent Learning

ER -