Node Activation Function

Complex Processing Element

The BP algorithm is probably the most widely used supervised learning algorithm in neural network (NN) applications. The activation function is what distinguishes the BP algorithm from the conventional LMS algorithm. Any dynamic function can serve as the activation function, provided it is continuously differentiable. Usually, the choice of activation function depends on how we choose to represent the output signal. For example, if we want the output units to be binary {0, 1}, the sigmoid function is used as the node activation function, and a binary threshold can then be applied to the final output; when the signal value is bipolar {-1, 1}, tanh(ax/2) is used, since this function is output-limiting and quasi-bistable while remaining differentiable (Freeman & Skapura, 1991). Of course, a linear activation function can also be used. However, because the relationship we want to map is likely to be a complicated one, a nonlinear activation function is used far more often.
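As a sketch of the two representations above, the binary case uses the logistic sigmoid while the bipolar case uses tanh(ax/2); the identity tanh(ax/2) = 2·sigmoid(ax) − 1 links them. The gain parameter `a` and the function names here are illustrative, not taken from the source:

```python
import math

def sigmoid(x, a=1.0):
    """Logistic sigmoid 1/(1 + exp(-ax)): output in (0, 1), suited to binary {0, 1} targets."""
    return 1.0 / (1.0 + math.exp(-a * x))

def bipolar(x, a=1.0):
    """tanh(ax/2): output in (-1, 1), suited to bipolar {-1, 1} targets."""
    return math.tanh(a * x / 2.0)

def sigmoid_deriv(x, a=1.0):
    """Derivative a*s*(1-s) of the sigmoid; BP needs this continuous derivative."""
    s = sigmoid(x, a)
    return a * s * (1.0 - s)
```

Both functions are continuously differentiable everywhere, which is the property BP requires; a hard threshold, by contrast, would have no usable gradient.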

For the complex BP algorithm, the activation function must be used in its complex version. In (Leung & Haykin, 1991), the sigmoid function 1/(1 + exp(-ax)) was taken as the nonlinear node activation function. The bipolar version of this sigmoid is tanh(ax/2) = (1 - exp(-ax))/(1 + exp(-ax)).
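A minimal sketch of the complex extension, assuming the Leung–Haykin form is obtained simply by letting the sigmoid's argument be a complex number z (the gain `a` is an assumed parameter). On the real axis this reduces to the ordinary real sigmoid:

```python
import cmath

def complex_sigmoid(z, a=1.0):
    """Complex sigmoid 1/(1 + exp(-az)) with a complex-valued argument z.

    Sketch of the complex activation discussed in (Leung & Haykin, 1991);
    note that, unlike the real case, this function has singularities in the
    complex plane where exp(-az) = -1.
    """
    return 1.0 / (1.0 + cmath.exp(-a * z))
```

For purely real input the imaginary part of the output is zero, so the complex version is backward-compatible with the real-valued BP algorithm.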
