### THE LEARNING ALGORITHM OF THE COMPLEX-VALUED NEURAL NETWORKS

#### Node Activation Function

##### Complex Processing Element

##### Properties

The BP algorithm is probably the most widely used supervised learning algorithm in neural network (NN) applications. The activation function is what differentiates the BP algorithm from the conventional LMS algorithm: various functions can serve as the activation function, provided they are continuously differentiable. Usually, the choice of activation function depends on how we choose to represent the output signal. For example, if the output units are to be binary {0, 1}, the sigmoid function is used as the node activation function, and a binary threshold function can then be applied to the final output; when the signal value is bipolar {-1, 1}, tanh(ax/2) is used, since this function is output-limiting and quasi-bistable while remaining differentiable (Freeman & Skapura, 1991). A linear activation function can of course be used as well. However, because the relationship we want to map is likely to be nonlinear, a nonlinear function is very often chosen as the activation function.
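As a minimal sketch of the two real-valued activation functions named above, the following code (an illustration, not from the source; the function names and the slope parameter `a` are our own choices) defines the sigmoid for binary {0, 1} outputs, its bipolar counterpart tanh(ax/2) for {-1, 1} outputs, and the sigmoid derivative that BP needs:

```python
import numpy as np

def sigmoid(x, a=1.0):
    # Logistic sigmoid 1/(1 + exp(-a*x)): output in (0, 1),
    # suited to binary {0, 1} target representations.
    return 1.0 / (1.0 + np.exp(-a * x))

def bipolar_sigmoid(x, a=1.0):
    # tanh(a*x/2): output in (-1, 1), output-limiting and
    # quasi-bistable, suited to bipolar {-1, 1} targets.
    return np.tanh(a * x / 2.0)

def sigmoid_deriv(x, a=1.0):
    # Derivative a*f(x)*(1 - f(x)); the continuous
    # differentiability used by the BP weight updates.
    s = sigmoid(x, a)
    return a * s * (1.0 - s)
```

Note that the two forms are related by the identity tanh(ax/2) = 2/(1 + exp(-ax)) - 1, which is why tanh(ax/2) is called the bipolar version of the sigmoid.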

For the complex BP algorithm, the activation function must be extended to its complex-valued version. In (Leung & Haykin, 1991), the sigmoid function 1/{1 + exp(-ax)} was taken as the nonlinear node activation function. The bipolar version of this sigmoid is