Paper Title
Distributed Global Optimization (DGO)
Paper Authors
Paper Abstract
A new technique for global optimization, and its application to neural networks in particular, is presented. The algorithm is compared with other global optimization methods such as gradient descent (GD), Monte Carlo (MC) search, the genetic algorithm (GA), and other commercial packages. Its accuracy of convergence, speed of convergence, and ease of use show this new optimization technique to be worthy of further study. Some of its advantages are listed below:
1. The function being optimized does not have to be continuous or differentiable.
2. No random mechanism is used, so the algorithm does not inherit the slow convergence of random searches.
3. No fine-tuning parameters (such as the step size of gradient descent or the temperature of simulated annealing (SA)) are needed.
4. The algorithm can be implemented on parallel computers, so computation time grows only slightly (rather than linearly) as the number of dimensions increases; a time complexity of O(n) is achieved.
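The abstract does not describe the mechanism of DGO itself, so the following is only a hedged illustration of the kind of search that matches the claimed properties: a minimal, deterministic, derivative-free coordinate-grid refinement whose per-dimension scans are mutually independent and can therefore run in parallel. The names (minimize_sketch, best_along_axis), the grid size k, the shrink factor, and the sweep count are all illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch -- NOT the DGO algorithm itself (the abstract does
# not specify its mechanism). It only illustrates a search that:
#   (a) needs no gradients or continuity,
#   (b) uses no randomness, and
#   (c) parallelizes one scan per dimension.
from concurrent.futures import ThreadPoolExecutor

def best_along_axis(f, x, i, lo, hi, k):
    """Scan k evenly spaced candidates for coordinate i, others held fixed."""
    step = (hi - lo) / (k - 1)
    scored = []
    for j in range(k):
        y = list(x)
        y[i] = lo + j * step
        scored.append((f(y), y[i]))
    return min(scored)  # (best value, best coordinate)

def minimize_sketch(f, bounds, k=11, sweeps=30, shrink=0.5):
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    x = [(l + h) / 2 for l, h in zip(lo, hi)]  # deterministic start
    n = len(bounds)
    with ThreadPoolExecutor() as pool:
        for _ in range(sweeps):
            # The n axis scans are independent given the current point x,
            # so they run in parallel: wall-clock time barely grows with n
            # even though total work is O(n) evaluations per sweep.
            results = list(pool.map(
                lambda i: best_along_axis(f, x, i, lo[i], hi[i], k),
                range(n)))
            for i, (_, c) in enumerate(results):
                x[i] = c
                width = (hi[i] - lo[i]) * shrink  # zoom in around the best
                lo[i] = max(bounds[i][0], c - width / 2)
                hi[i] = min(bounds[i][1], c + width / 2)
    return x, f(x)

# Example: minimize a non-differentiable objective (sum of absolute
# deviations, minimum at (1, 1, 1)) in 3 dimensions.
x_best, f_best = minimize_sketch(
    lambda x: sum(abs(v - 1.0) for v in x),
    bounds=[(-5.0, 5.0)] * 3)
print(x_best, f_best)
```

Each sweep performs O(n) scans in total, but because the scans are independent given the current point, distributing them across processors keeps wall-clock time nearly flat as the dimension grows, which mirrors the parallel-scaling claim in the abstract.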