This Week's Article Updates in Evolutionary Computation

Digest   Technology   2024-08-10 13:58   Shaanxi

Article updates in the field of evolutionary computation cover the following six major directions:

Foundations and Theory (including algorithm design for genetic algorithms, evolution strategies, genetic programming, and swarm intelligence, as well as theoretical studies, benchmarking, evolutionary ideas, algorithm software, surveys, etc.)

Evolutionary Optimization (including black-box optimization, multi-objective optimization, constrained optimization, noisy optimization, multi-task optimization, multimodal optimization, transfer optimization, large-scale optimization, expensive optimization, learning to optimize, etc.)

Combinatorial Optimization (including evolutionary neural combinatorial optimization, evolutionary robotics, route planning, placement and routing, industrial control, scheduling, etc.)

Neuroevolution (including evolving the parameters, hyperparameters, architectures, and rules of neural networks)

Evolutionary Learning (including evolutionary feature selection, reinforcement learning, multi-objective learning, fairness-aware learning, federated learning, evolutionary computer vision, evolutionary natural language processing, evolutionary data mining, etc.)

Applied Research (industry, networking, security, physics, biology, chemistry, etc.)

Article sources mainly include:

1. IEEE CIS: CIM, TEVC, TNNLS, TFS, TAI, TETCI, CEC

2. IEEE CS/SMC: TPAMI, TKDE, TPDS, TCYB, TSMC, Proc. IEEE

3. ACM: TELO, GECCO, FOGA, ICML

4. MIT: ECJ, ARTL, JMLR, NIPS

5. Elsevier/Springer: AIJ, SWEVO, SCIS, PPSN

6. AAAI/MK/OR: AAAI, IJCAI, ICLR

7. Others: NMI, NC, PNAS, Nature, Science, ArXiv

Foundations and Theory

  • Theoretical Advantage of Multiobjective Evolutionary Algorithms for Problems with Different Degrees of Conflict

https://arxiv.org/abs/2408.04207

The field of multiobjective evolutionary algorithms (MOEAs) often emphasizes its popularity for optimization problems with conflicting objectives. However, it is still theoretically unknown how MOEAs perform for different degrees of conflict, even for no conflicts, compared with typical approaches outside this field. As a first step toward tackling this question, we propose the OneMaxMin_k benchmark class with degree of conflict k∈[0..n], a generalized variant of COCZ and OneMinMax. Two typical non-MOEA approaches, scalarization (the weighted-sum approach) and the ϵ-constraint approach, are considered. We prove that for any set of weights, the set of optima found by the scalarization approach cannot cover the full Pareto front. Although the set of optima of the constrained problems constructed via the ϵ-constraint approach can cover the full Pareto front, the commonly used ways to solve such constrained problems (via exterior or nonparameter penalty functions) encounter difficulties. The nonparameter penalty function way cannot construct the set of optima whose function values form the Pareto front, and the exterior way helps (with an expected runtime of O(n ln n) for the randomized local search algorithm to reach any Pareto front point) but requires careful settings of ϵ and r (r > 1/(ϵ+1−⌈ϵ⌉)). In contrast, the commonly analyzed MOEAs can efficiently solve OneMaxMin_k without the above careful designs. We prove that (G)SEMO, MOEA/D, NSGA-II, and SMS-EMOA can cover the full Pareto front in O(max{k,1} n ln n) expected function evaluations, which is the same asymptotic runtime as the exterior way in the ϵ-constraint approach with careful settings. As a side result, our analysis also covers the performance of solving a constrained problem via a multiobjective approach.
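
As a rough illustration of the setting (using the classic OneMinMax objectives rather than the paper's OneMaxMin_k definition, which is not reproduced here), the sketch below evaluates a bit string under two objectives and under a weighted-sum scalarization; the function names and the example weights are illustrative assumptions.

```python
import random

def one_min_max(x):
    """Classic OneMinMax: maximize the number of ones and the number of zeros.
    Every bit string is Pareto-optimal, so the Pareto front has n+1 points."""
    ones = sum(x)
    return ones, len(x) - ones

def weighted_sum(x, w1=0.5, w2=0.5):
    """Weighted-sum scalarization of the two objectives (illustrative weights).
    For any fixed weights, the optima concentrate on all-ones, all-zeros, or
    (when w1 == w2) every bit string, so a single weight vector cannot recover
    the whole Pareto front, which is the kind of limitation the paper studies."""
    f1, f2 = one_min_max(x)
    return w1 * f1 + w2 * f2

if __name__ == "__main__":
    n = 10
    x = [random.randint(0, 1) for _ in range(n)]
    print("objectives:", one_min_max(x), "scalarized:", weighted_sum(x, 0.7, 0.3))
```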

Evolutionary Optimization

  • A Hierarchical and Ensemble Surrogate-Assisted Evolutionary Algorithm With Model Reduction for Expensive Many-Objective Optimization, IEEE TEVC

https://ieeexplore.ieee.org/document/10630664

The Kriging model has been widely used in regression-based surrogate-assisted evolutionary algorithms (SAEAs) for expensive multiobjective optimization by using one model to approximate one objective, and the fusion of all the models forms the fitness surrogate. However, when tackling expensive many-objective optimization problems, too many models are required to construct such a fitness surrogate, which incurs cumulative prediction uncertainty and higher computational cost. Considering that the fitness surrogate works to predict different objective values to help select promising solutions with good convergence and diversity, this article proposes a novel model reduction idea to change the many-models-based fitness surrogate to a two-models-based indicator surrogate (TIS) that directly approximates convergence and diversity indicators. Based on TIS, a hierarchical and ensemble surrogate-assisted evolutionary algorithm (HES-EA) is proposed with three stages. Firstly, the HES-EA transforms the many objectives of the real-evaluated solutions into two indicators (i.e., the convergence and diversity indicators) and divides these solutions into different clusters. Secondly, an HES consisting of a cluster surrogate and different TISs is trained through these clustered solutions and their indicators. Thirdly, during the optimization process, the HES can predict the candidate solutions’ cluster information via the cluster surrogate and indicator information via the TISs. Promising solutions can thus be selected based on the predicted information via a clustering-based sequential selection strategy without real fitness evaluation consumption. Compared with state-of-the-art SAEAs on three widely used benchmark suites with up to 184 instances and one real-world application, HES-EA shows its superiority in both optimization performance and computational cost.
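
A minimal sketch of the model-reduction idea, assuming a Gaussian-process regressor from scikit-learn and simple stand-in indicators (sum of normalized objectives for convergence, nearest-neighbour distance for diversity); the indicator definitions, model choice, and selection rule here are illustrative assumptions, not the exact components of HES-EA.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def indicators(F):
    """Map an objective matrix F (n_solutions x n_objectives, minimization) to two
    scalars per solution: a convergence indicator (sum of normalized objectives,
    smaller is better) and a diversity indicator (distance to the nearest neighbour
    in normalized objective space, larger is better). Both are illustrative choices."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    G = (F - lo) / np.maximum(hi - lo, 1e-12)
    conv = G.sum(axis=1)
    dist = np.linalg.norm(G[:, None, :] - G[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    return conv, dist.min(axis=1)

def train_tis(X, F):
    """Train the two-model indicator surrogate: one regressor per indicator."""
    conv, div = indicators(F)
    return (GaussianProcessRegressor().fit(X, conv),
            GaussianProcessRegressor().fit(X, div))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((30, 5))        # 30 expensively evaluated solutions, 5 variables
    F = rng.random((30, 8))        # 8 objectives (a many-objective setting)
    conv_model, div_model = train_tis(X, F)
    candidates = rng.random((100, 5))
    # rank candidates by predicted convergence minus predicted diversity (toy rule)
    score = conv_model.predict(candidates) - div_model.predict(candidates)
    promising = candidates[np.argsort(score)[:5]]
    print(promising.shape)
```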

  • Exact Calculation and Properties of the R2 Multiobjective Quality Indicator, IEEE TEVC

https://ieeexplore.ieee.org/document/10630708

Quality indicators play an essential role in evolutionary multiobjective optimization (EMO). The most frequently used quality indicator in EMO is most likely the hypervolume, due to its strict monotonicity with respect to the dominance relation. However, hypervolume is not without weak points. For example, a number of recent papers have pointed out its high sensitivity to the specification of the reference point. Furthermore, hypervolume is based on purely geometric reasoning, which may lead to some undesired results. Thus, it is worth considering other quality indicators as well. In this paper we prove that the well-known R2 quality indicator is also strictly monotonic with respect to the dominance relation when it is calculated exactly and the reference point strongly dominates every solution in the evaluated set. Furthermore, we adapt the Improved Quick Hypervolume algorithm to the exact calculation of the R2 indicator. To our knowledge, this is the first exact algorithm for R2 calculation with a publicly available implementation. In addition, through both theoretical analysis and computational experiments, we show that R2 performs consistently for Pareto fronts with different shapes. We also discuss differences between the Pareto front representations generated by indicator-based EMO with hypervolume and with R2, where the latter tends to generate solutions that have a high chance of being preferred by the decision maker (DM), not necessarily uniformly distributed in a geometric sense. All of these results make R2 a sound alternative or complement to hypervolume in EMO.
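
For reference, the widely used sampled form of the R2 indicator with the weighted Tchebycheff utility can be computed in a few lines; the paper's contribution is an exact algorithm, whereas the sketch below relies on a finite weight set, which is the usual approximation.

```python
import numpy as np

def r2_indicator(A, weights, z_star):
    """Sampled unary R2 indicator with the weighted Tchebycheff utility.
    A       : (n_points, m) objective vectors of the evaluated set (minimization)
    weights : (n_weights, m) weight vectors, typically spread over the simplex
    z_star  : (m,) reference point that should strongly dominate every point in A
    Smaller values are better."""
    A = np.asarray(A, dtype=float)
    diff = np.abs(A[None, :, :] - np.asarray(z_star, dtype=float)[None, None, :])
    tcheb = (np.asarray(weights, dtype=float)[:, None, :] * diff).max(axis=2)
    return float(tcheb.min(axis=1).mean())

if __name__ == "__main__":
    A = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
    w = np.linspace(0.0, 1.0, 21)
    W = np.stack([w, 1.0 - w], axis=1)
    print(r2_indicator(A, W, z_star=np.zeros(2)))
```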

  • Performance Metrics for Multi-Objective Optimisation Under Noise, IEEE TEVC

https://ieeexplore.ieee.org/document/10623415

This letter discusses the challenge of evaluating multi-objective optimisation algorithms under noise. It argues that it is important to take into account possible selection errors by a decision maker due to inaccurate estimates of a solution’s true objective values. It demonstrates that commonly used performance metrics do not properly account for such errors, and proposes two alternative performance metrics that do, by adapting the popular R2 and IGD+ metrics.
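
As background, the deterministic IGD+ metric that the letter adapts can be computed as follows (standard definition for minimization); the noise-aware modification proposed in the letter is not reproduced here.

```python
import numpy as np

def igd_plus(reference_front, A):
    """Standard IGD+ for minimization: for every reference point z, take the smallest
    dominance-aware distance sqrt(sum(max(a - z, 0)^2)) to any solution a in A, then average."""
    Z = np.asarray(reference_front, dtype=float)               # (r, m)
    A = np.asarray(A, dtype=float)                             # (n, m)
    d_plus = np.maximum(A[None, :, :] - Z[:, None, :], 0.0)    # (r, n, m)
    dist = np.sqrt((d_plus ** 2).sum(axis=2))                  # (r, n)
    return float(dist.min(axis=1).mean())

if __name__ == "__main__":
    reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
    approximation = np.array([[0.1, 0.95], [0.6, 0.6]])
    print(igd_plus(reference, approximation))
```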

  • Multi-Region Trend Prediction Strategy With Online Sequential Extreme Learning Machine for Dynamic Multi-Objective Optimization, IEEE TETCI

https://ieeexplore.ieee.org/document/10629160

Dynamic multi-objective optimization problems (DMOPs) involve multiple conflicting and time-varying objectives, requiring dynamic multi-objective algorithms (DMOAs) to track changing Pareto-optimal fronts. Over the past decade, prediction-based DMOAs have shown promise in handling DMOPs. However, existing prediction-based DMOAs generally use only a few specific solutions from a small number of prior environments, making it difficult for them to capture Pareto-optimal set (POS) changes accurately. In addition, gaps may exist in some objective subspaces due to uneven population distribution, making these subspaces difficult to search. To address these difficulties, this article proposes a multi-region trend prediction strategy-based dynamic multi-objective evolutionary algorithm (MTPS-DMOEA) to handle DMOPs. MTPS-DMOEA divides the objective space into multiple subspaces and predicts POS moving trends using POS center points from multiple objective subspaces, which contributes to accurately capturing POS changes. In MTPS-DMOEA, the parameters of the prediction model are continuously updated via an online sequential extreme learning machine, facilitating the full utilization of useful information from historical environments and hence enhancing the generalization performance of the prediction. To fill the gaps in some objective subspaces, MTPS-DMOEA introduces diverse solutions generated from the previous POS in adjacent subspaces. We compare the proposed MTPS-DMOEA with six state-of-the-art DMOAs on fourteen benchmark test problems, and the experimental results demonstrate its excellent performance in handling DMOPs.
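
A minimal sketch of the online sequential extreme learning machine (OS-ELM) update underlying the prediction model, assuming a single hidden layer with random input weights and a tanh activation; the hidden-layer size and the way POS center points are encoded as inputs and targets are illustrative assumptions.

```python
import numpy as np

class OSELM:
    """Online sequential extreme learning machine (single hidden layer)."""

    def __init__(self, n_input, n_hidden, n_output, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_input, n_hidden))   # random, fixed input weights
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_output))
        self.P = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X, T, reg=1e-3):
        """Batch least-squares fit on the initial data chunk."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def update(self, X, T):
        """Recursive least-squares update when a new chunk of data arrives."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = OSELM(n_input=4, n_hidden=20, n_output=4)
    X0, T0 = rng.random((50, 4)), rng.random((50, 4))          # e.g. past and next POS center points
    model.fit_initial(X0, T0)
    model.update(rng.random((10, 4)), rng.random((10, 4)))     # a new environment arrives
    print(model.predict(rng.random((3, 4))).shape)
```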

  • ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics

https://arxiv.org/abs/2408.04539

Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by a preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.

  • A Landscape-Aware Differential Evolution for Multimodal Optimization Problems

https://arxiv.org/abs/2408.02340

How to simultaneously locate multiple global peaks and achieve a certain accuracy on the found peaks are two key challenges in solving multimodal optimization problems (MMOPs). In this paper, a landscape-aware differential evolution (LADE) algorithm is proposed for MMOPs, which utilizes landscape knowledge to maintain sufficient diversity and provide efficient search guidance. In detail, the landscape knowledge is efficiently utilized in the following three aspects. First, a landscape-aware peak exploration helps each individual evolve adaptively to locate a peak and simulates the regions of the found peaks according to search history to avoid an individual locating a found peak. Second, a landscape-aware peak distinction distinguishes whether an individual locates a new global peak, a new local peak, or a found peak. Accuracy refinement can thus only be conducted on the global peaks to enhance the search efficiency. Third, a landscape-aware reinitialization specifies the initial position of an individual adaptively according to the distribution of the found peaks, which helps explore more peaks. The experiments are conducted on 20 widely used benchmark MMOPs. Experimental results show that LADE obtains generally better or competitive performance compared with seven recently proposed, well-performing algorithms and four winner algorithms from the IEEE CEC competitions on multimodal optimization.
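
For readers unfamiliar with the backbone, a canonical DE/rand/1/bin generation is sketched below; LADE's landscape-aware peak exploration, peak distinction, and reinitialization are built on top of such a loop and are not reproduced here. The test function and control parameters are illustrative.

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f, lower, upper, F=0.5, CR=0.9, rng=None):
    """One generation of canonical DE/rand/1/bin (maximizing f)."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lower, upper)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True          # guarantee at least one mutated dimension
        trial = np.where(cross, mutant, pop[i])
        trial_fit = f(trial)
        if trial_fit >= fitness[i]:            # greedy one-to-one survivor selection
            new_pop[i], new_fit[i] = trial, trial_fit
    return new_pop, new_fit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum(np.sin(5 * np.pi * x) ** 6))   # simple multimodal test function
    pop = rng.random((20, 2))
    fit = np.array([f(x) for x in pop])
    for _ in range(50):
        pop, fit = de_rand_1_bin(pop, fit, f, 0.0, 1.0, rng=rng)
    print("best fitness:", fit.max())
```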

Neuroevolution

  • Fully forward mode training for optical neural networks, Nature

https://www.nature.com/articles/s41586-024-07687-4

Optical computing promises to improve the speed and energy efficiency of machine learning applications. However, current approaches to efficiently train these models are limited by in silico emulation on digital computers. Here we develop a method called fully forward mode (FFM) learning, which implements the compute-intensive training process on the physical system. The majority of the machine learning operations are thus efficiently conducted in parallel on site, alleviating numerical modelling constraints. In free-space and integrated photonics, we experimentally demonstrate optical systems with state-of-the-art performances for a given network size. FFM learning shows that training the deepest optical neural networks, with millions of parameters, achieves accuracy equivalent to the ideal model. It supports all-optical focusing through scattering media with a resolution of the diffraction limit; it can also image in parallel the objects hidden outside the direct line of sight at over a kilohertz frame rate and can conduct all-optical processing with light intensity as weak as subphoton per pixel (5.40 × 10^18 operations-per-second-per-watt energy efficiency) at room temperature. Furthermore, we prove that FFM learning can automatically search non-Hermitian exceptional points without an analytical model. FFM learning not only facilitates orders-of-magnitude-faster learning processes, but can also advance applied and theoretical fields such as deep neural networks, ultrasensitive perception and topological photonics.

  • CCSRP: Robust Pruning of Spiking Neural Networks through Cooperative Coevolution

https://arxiv.org/abs/2408.00794

Spiking neural networks (SNNs) have shown promise in various dynamic visual tasks, yet those ready for practical deployment often lack the compactness and robustness essential in resource-limited and safety-critical settings. Prior research has predominantly concentrated on enhancing the compactness or robustness of artificial neural networks through strategies like network pruning and adversarial training, with little exploration into similar methodologies for SNNs. Robust pruning of SNNs aims to reduce computational overhead while preserving both accuracy and robustness. Current robust pruning approaches generally necessitate expert knowledge and iterative experimentation to establish suitable pruning criteria or auxiliary modules, thus constraining their broader application. Concurrently, evolutionary algorithms (EAs) have been employed to automate the pruning of artificial neural networks, delivering remarkable outcomes yet overlooking the aspect of robustness. In this work, we propose CCSRP, an innovative robust pruning method for SNNs, underpinned by cooperative co-evolution. Robust pruning is articulated as a tri-objective optimization challenge, striving to balance accuracy, robustness, and compactness concurrently, resolved through a cooperative co-evolutionary pruning framework that independently prunes filters across layers using EAs. Our experiments on CIFAR-10 and SVHN demonstrate that CCSRP can match or exceed the performance of the latest methodologies.
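
A schematic of the cooperative co-evolutionary decomposition (one subpopulation of filter masks per layer, collaborating to form a complete pruned network) is sketched below; the layer sizes, the toy tri-objective evaluation, and the scalarized comparison are placeholder assumptions rather than CCSRP's actual encoding, objectives, or selection scheme.

```python
import random

LAYER_SIZES = [16, 32, 64]   # number of filters per layer (illustrative)

def evaluate(masks):
    """Placeholder tri-objective evaluation of a complete pruning solution.
    In CCSRP this would measure accuracy, adversarial robustness, and compactness
    of the pruned SNN; here toy functions of the keep ratio are used instead."""
    kept = sum(sum(m) for m in masks)
    keep_ratio = kept / sum(LAYER_SIZES)
    accuracy = keep_ratio ** 0.5        # toy: keeping more filters helps accuracy
    robustness = keep_ratio ** 0.25     # toy: keeping more filters helps robustness
    compactness = 1.0 - keep_ratio      # fewer filters means a more compact network
    return accuracy, robustness, compactness

def random_mask(size):
    return [random.randint(0, 1) for _ in range(size)]

def cc_prune(generations=10, pop_size=8):
    # one subpopulation of layer-wise filter masks per layer
    subpops = [[random_mask(s) for _ in range(pop_size)] for s in LAYER_SIZES]
    best = [sp[0] for sp in subpops]    # current collaborators, one mask per layer
    for _ in range(generations):
        for li, subpop in enumerate(subpops):
            for mask in subpop:
                candidate = [mask if i == li else m for i, m in enumerate(best)]
                # toy scalarized comparison; CCSRP keeps the three objectives separate
                if sum(evaluate(candidate)) > sum(evaluate(best)):
                    best[li] = mask
            # bit-flip mutation around the best mask forms the next subpopulation
            subpops[li] = [[b ^ (random.random() < 0.1) for b in best[li]]
                           for _ in range(pop_size)]
    return best, evaluate(best)

if __name__ == "__main__":
    masks, objs = cc_prune()
    print("accuracy, robustness, compactness:", objs)
```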

Evolutionary Learning

  • Protein Design by Directed Evolution Guided by Large Language Models, IEEE TEVC

https://ieeexplore.ieee.org/document/10628050

Directed evolution, a strategy for protein engineering, optimizes protein properties (i.e., fitness) by a rigorous and resource-intensive process of screening or selecting among a vast range of mutations. By conducting an in silico screening of sequence properties, machine learning-guided directed evolution (MLDE) can expedite the optimization process and alleviate the experimental workload. In this work, we propose a general MLDE framework in which we apply recent advances in Deep Learning for protein representation learning and protein property prediction to accelerate the searching and optimization processes. In particular, we introduce an optimization pipeline that utilizes Large Language Models (LLMs) to pinpoint the mutation hotspots in the sequence and then suggest replacements to improve the overall fitness. Our experiments have shown the superior efficiency and efficacy of our proposed framework in conditional protein generation, in comparison with other state-of-the-art baseline algorithms. We expect this work will shed new light not only on protein engineering but also on solving combinatorial problems using data-driven methods.
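
The overall loop (have a language model propose substitutions at likely hotspots, score the candidates in silico, and keep the best) can be summarized as below; propose_mutations and predict_fitness are hypothetical placeholders standing in for the LLM and the learned property predictor described in the paper.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose_mutations(sequence, n_candidates=8):
    """Hypothetical stand-in for the LLM: pick hotspot positions and suggest
    substitutions. A real implementation would query a protein language model."""
    candidates = []
    for _ in range(n_candidates):
        pos = random.randrange(len(sequence))
        aa = random.choice(AMINO_ACIDS)
        candidates.append(sequence[:pos] + aa + sequence[pos + 1:])
    return candidates

def predict_fitness(sequence):
    """Hypothetical stand-in for the learned property predictor (toy score)."""
    return sum(1 for a in sequence if a in "AILV") / len(sequence)

def directed_evolution(seed_sequence, rounds=5):
    best, best_fit = seed_sequence, predict_fitness(seed_sequence)
    for _ in range(rounds):
        for candidate in propose_mutations(best):
            fit = predict_fitness(candidate)
            if fit > best_fit:
                best, best_fit = candidate, fit
    return best, best_fit

if __name__ == "__main__":
    print(directed_evolution("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```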

  • An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms

https://arxiv.org/abs/2408.02451

Hyperparameter optimization is a crucial problem in Evolutionary Computation. In fact, the values of the hyperparameters directly impact the trajectory taken by the optimization process, and their choice requires extensive reasoning by human operators. Although a variety of self-adaptive Evolutionary Algorithms have been proposed in the literature, no definitive solution has been found. In this work, we perform a preliminary investigation to automate the reasoning process that leads to the choice of hyperparameter values. We employ two open-source Large Language Models (LLMs), namely Llama2-70b and Mixtral, to analyze the optimization logs online and provide novel real-time hyperparameter recommendations. We study our approach in the context of step-size adaptation for (1+1)-ES. The results suggest that LLMs can be an effective method for optimizing hyperparameters in Evolution Strategies, encouraging further research in this direction.
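
For context, the baseline setting is step-size adaptation in a (1+1)-ES; a standard self-adaptive scheme is the one-fifth success rule sketched below, which the LLM-generated recommendations would replace or augment. The test function and the adaptation constants are illustrative.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iterations=200, seed=0):
    """(1+1)-ES minimizing f, with one-fifth success rule step-size adaptation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iterations):
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy <= fx:                        # success: accept and enlarge the step size
            x, fx = y, fy
            sigma *= np.exp(1.0 / 3.0)
        else:                               # failure: shrink the step size
            sigma *= np.exp(-1.0 / 12.0)    # balances out at roughly a 1/5 success rate
    return x, fx, sigma

if __name__ == "__main__":
    sphere = lambda v: float(np.sum(v ** 2))
    x, fx, sigma = one_plus_one_es(sphere, x0=5.0 * np.ones(10))
    print("final value:", fx, "final step size:", sigma)
```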

Applied Research

  • Improving Air Mobility for Pre-Disaster Planning with Neural Network Accelerated Genetic Algorithm

https://arxiv.org/abs/2408.00790

Weather-disaster-related emergency operations pose a great challenge to air mobility in both aircraft and airport operations, especially when the impact is gradually approaching. We propose an optimized framework for adjusting airport operational schedules in such pre-disaster scenarios. We first aggregate operational data from multiple airports and then determine the optimal count of evacuation flights to maximize the impacted airport's outgoing capacity without impeding regular air traffic. We then propose a novel Neural Network (NN) accelerated Genetic Algorithm (GA) for evacuation planning. Our experiments show that this integration yields comparable results with smaller computational overhead. We find that the utilization of an NN enhances the efficiency of a GA, facilitating more rapid convergence even when operating with a reduced population size. This effectiveness persists even when the model is trained on data from airports different from those under test.
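
A minimal illustration of the acceleration idea, in which a small neural regressor pre-screens offspring so that only the most promising candidates receive the expensive evaluation; the real-valued encoding, the surrogate model, and the toy objective are illustrative assumptions, not the paper's evacuation-planning formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_fitness(x):
    """Stand-in for the costly simulation-based objective (to be maximized)."""
    return -float(np.sum((x - 0.3) ** 2))

def nn_accelerated_ga(dim=8, pop_size=30, generations=20, screen_factor=4, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    fit = np.array([expensive_fitness(x) for x in pop])
    archive_X, archive_y = list(pop), list(fit)
    for _ in range(generations):
        # retrain the surrogate on every solution evaluated so far
        surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
        surrogate.fit(np.array(archive_X), np.array(archive_y))
        # produce more offspring than needed via uniform crossover and Gaussian mutation
        parents = pop[np.argsort(fit)[-pop_size // 2:]]
        offspring = []
        for _ in range(pop_size * screen_factor):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(dim) < 0.5, p1, p2)
            offspring.append(np.clip(child + rng.normal(0.0, 0.05, dim), 0.0, 1.0))
        offspring = np.array(offspring)
        # the surrogate pre-screens; only the top offspring get the real evaluation
        keep = offspring[np.argsort(surrogate.predict(offspring))[-pop_size:]]
        keep_fit = np.array([expensive_fitness(x) for x in keep])
        archive_X.extend(keep)
        archive_y.extend(keep_fit)
        pop, fit = keep, keep_fit
    return pop[np.argmax(fit)], float(fit.max())

if __name__ == "__main__":
    best_x, best_fit = nn_accelerated_ga()
    print("best fitness:", best_fit)
```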




EvoIGroup
Evolutionary Intelligence (EvoI) Group. Introduces research progress on evolutionary intelligence in network science, machine learning, optimization, and practical (industrial) applications. Submissions and posts are welcome. Contact: evoIgroup@163.com.