This paper analyzes the behavior of a feedforward neural network near its extremal points and identifies why the basic BP algorithm converges slowly when applied to classification problems. We modify the learning rate using a power of the gradient norm; simulation results show that, when used to train networks for classification problems, this method converges noticeably faster than the basic BP algorithm.
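The abstract does not give the exact update rule, but the idea of rescaling the learning rate by a power of the gradient norm can be illustrated with a minimal sketch. The specific form eta_k = eta0 / (||g||^p + eps), the parameter names, and the toy data below are assumptions for illustration only, not the paper's method.

```python
# A minimal sketch (assumed rule, not the paper's exact formula): gradient
# descent in which the step size is rescaled by a power of the gradient norm,
# so steps grow in the flat regions near extrema where plain BP stalls.
import numpy as np

def train(X, y, p=0.5, eta0=0.1, eps=1e-8, epochs=500):
    """Single sigmoid unit trained with a gradient-norm-scaled learning rate."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid output
        err = out - y
        grad = X.T @ (err * out * (1.0 - out))   # dE/dw for squared error
        gnorm = np.linalg.norm(grad)
        eta = eta0 / (gnorm**p + eps)            # assumed rescaling by ||grad||^p
        w -= eta * grad
    return w

if __name__ == "__main__":
    # Toy two-class problem: label is determined by the sign of the first feature.
    X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]])
    y = np.array([1.0, 1.0, 0.0, 0.0])
    print("learned weights:", train(X, y))
```

The intuition behind such a rescaling is that near the sigmoid's saturated regions the gradient norm is small, so dividing by a power of it enlarges the effective step exactly where the basic BP update would otherwise crawl.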