Paper Title

On the Performance of Metaheuristics: A Different Perspective

Authors

Boveiri, Hamid Reza, Khayami, Raouf

Abstract

Nowadays, we are immersed in tens of newly-proposed evolutionary and swarm-intelligence metaheuristics, which makes it very difficult to choose a proper one to apply to a specific optimization problem at hand. On the other hand, most of these metaheuristics are nothing but slightly modified variants of the basic metaheuristics. For example, Differential Evolution (DE) and Shuffled Frog Leaping (SFL) are just Genetic Algorithms (GA) with a specialized operator or an extra local search, respectively. Therefore, what comes to mind is whether the behavior of such newly-proposed metaheuristics can be investigated by studying the specifications and characteristics of their ancestors. In this paper, a comprehensive evaluation study on some basic metaheuristics, i.e., Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO), and Cuckoo Optimization Algorithm (COA), is conducted, which gives us deeper insight into their performance, so that we can better estimate the performance and applicability of all other variants originating from them. A large number of experiments have been conducted on 20 different combinatorial optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions along with the following performance ranking among these metaheuristics: {ABC, PSO, TLBO, GA, COA}; i.e., ABC and COA are the best and the worst methods, respectively. In addition, from the convergence perspective, PSO and ABC show significantly better convergence for unimodal and multimodal functions, respectively, while GA and COA exhibit premature convergence to local optima in many cases, needing alternative mutation mechanisms to enhance diversification and global search.
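The paper itself does not include code; as a rough illustration of the kind of evaluation it describes (not the authors' implementation, and with illustrative parameter choices), the following is a minimal global-best PSO sketch minimizing the unimodal Sphere benchmark, one of the standard test functions used in such comparison studies:

```python
import random

def sphere(x):
    # Unimodal benchmark: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO.

    w, c1, c2 are the usual inertia and cognitive/social coefficients;
    the values here are common textbook defaults, not the paper's settings.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the search bounds.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A benchmark study like the one in the abstract would run each metaheuristic many times over a suite of such functions (unimodal and multimodal) and compare best values and convergence curves; on the Sphere function, this sketch converges quickly toward zero, consistent with the abstract's observation about PSO on unimodal functions.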
