This Week's Article Updates in Evolutionary Computation

Digest | Tech | 2024-08-31 09:55 | Shaanxi

Article updates in evolutionary computation fall into the following six major directions:

Fundamental Theory (algorithm design, theoretical studies, benchmarking, evolutionary ideas, algorithm software, and surveys for genetic algorithms, evolution strategies, genetic programming, swarm intelligence, etc.)

Evolutionary Optimization (black-box optimization, multi-objective optimization, constrained optimization, noisy optimization, multitask optimization, multimodal optimization, transfer optimization, large-scale optimization, expensive optimization, learning to optimize, etc.)

Combinatorial Optimization (evolutionary neural combinatorial optimization, evolutionary robotics, route planning, placement and routing, industrial control, scheduling, etc.)

Neuroevolution (evolving the parameters, hyperparameters, architectures, rules, etc. of neural networks)

Evolutionary Learning (evolutionary feature selection, reinforcement learning, multi-objective learning, fairness-aware learning, federated learning, evolutionary computer vision, evolutionary natural language processing, evolutionary data mining, etc.)

Applied Research (industry, networks, security, physics, biology, chemistry, etc.)

Article sources mainly include:

1. IEEE CIS: CIM, TEVC, TNNLS, TFS, TAI, TETCI, CEC

2. IEEE CS/SMC: TPAMI, TKDE, TPDS, TCYB, TSMC, Proc. IEEE

3. ACM: TELO, GECCO, FOGA, ICML

4. MIT: ECJ, ARTL, JMLR, NIPS

5. Elsevier/Springer: AIJ, SWEVO, SCIS, PPSN

6. AAAI/MK/OR: AAAI, IJCAI, ICLR

7. Others: NMI, NC, PNAS, Nature, Science, arXiv

Evolutionary Optimization

  • A Cooperative Multistep Mutation Strategy for Multiobjective Optimization Problems With Deceptive Constraints, IEEE TSMC

https://ieeexplore.ieee.org/document/10654248

Constrained multiobjective optimization problems with deceptive constraints (DCMOPs) are a class of complex optimization problems that have received growing attention. In a DCMOP, the closer a solution is to the feasible region, the larger its constraint violation value. Moreover, different local infeasible regions have different minimal constraint violation values depending on their distances to the feasible regions. As a result, most existing algorithms easily become trapped in local infeasible regions and may fail to find any feasible solution. To address DCMOPs, this article proposes a new evolutionary multitasking algorithm with a cooperative multistep mutation strategy. The DCMOP is transformed into a multitasking optimization problem, in which the main task is the original DCMOP and a created auxiliary task provides effective help for solving the main task. Specifically, the designed cooperative multistep mutation strategy makes two contributions to handling deceptive constraints. First, a multistep mechanism is proposed, in which each individual uses multiple step sizes to generate several offspring along one direction, expanding the search range to find feasible regions. Second, a cooperative mechanism between the two tasks is proposed to provide effective and stable search directions: an opposite-solution generation method produces the opposite of the auxiliary population in the search space, from which a direction from the auxiliary population to the main population is formed. Combining these two mechanisms, the proposed cooperative multistep mutation strategy effectively improves population diversity along promising and stable search directions. In the experiments, the proposed algorithm is tested on two DCMOP benchmarks, which contain objective-space constraints and decision-space constraints respectively. The results show the effectiveness and superiority of the proposed algorithm.
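As a rough illustration of the two mechanisms, the sketch below is one plausible reading of the strategy rather than the authors' code: opposite solutions are reflections across the search-space centre, the cooperative direction runs centroid-to-centroid, and the step sizes are arbitrary illustrative values.

```python
import numpy as np

def opposite_solution(x, lb, ub):
    """Opposition-based generation: reflect x across the centre of [lb, ub]."""
    return lb + ub - x

def multistep_mutation(parent, direction, steps=(0.25, 0.5, 1.0, 2.0),
                       lb=0.0, ub=1.0):
    """Emit one offspring per step size along a single direction, widening
    the search range so the population can cross deceptive infeasible regions."""
    return np.array([np.clip(parent + s * direction, lb, ub) for s in steps])

rng = np.random.default_rng(0)
lb, ub, dim = 0.0, 1.0, 10
main_pop = rng.uniform(lb, ub, (20, dim))   # population on the original DCMOP
aux_pop = rng.uniform(lb, ub, (20, dim))    # population on the auxiliary task
# Cooperative direction: from the auxiliary population's opposite solutions
# toward the main population (centroid to centroid, an illustrative choice).
direction = main_pop.mean(0) - opposite_solution(aux_pop, lb, ub).mean(0)
offspring = multistep_mutation(main_pop[0], direction, lb=lb, ub=ub)
print(offspring.shape)   # (4, 10)
```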

Combinatorial Optimization

  • A Streaming Feature Selection Method Based on Dynamic Feature Clustering and Particle Swarm Optimization, IEEE TEVC

https://ieeexplore.ieee.org/document/10659137

Feature selection is an effective data preprocessing technique. In some practical applications, features arrive continuously, one by one or in groups, and the total number of features cannot be known before learning. Streaming feature selection aims to remove redundant and irrelevant features from the continuously arriving stream. This paper proposes a three-stage Streaming Feature Selection method based on Dynamic feature clustering and Particle Swarm Optimization (SFS-DPSO). In the first stage, an online relevance analysis quickly removes irrelevant features, reducing the size of newly arrived feature groups. In the second stage, a dynamic feature clustering technique divides redundant features into different groups, thereby reducing the search space for the subsequent evolutionary algorithm. In the third stage, a historical-information-driven integer particle swarm optimization algorithm searches for the optimal feature subset in the clustered feature space. The proposed algorithm is applied to 12 typical datasets of varying difficulty and a real-world case; experimental results show that it achieves better classification results in reasonable time and outperforms most existing algorithms.
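The three-stage pipeline can be sketched compactly. The following is a hedged skeleton under simplifying assumptions, not the paper's implementation: Pearson correlation stands in for the online relevance analysis, a greedy correlation grouping stands in for the dynamic clustering, and the integer PSO (without the paper's historical-information mechanism) picks one feature index per cluster; all thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def relevance_filter(X, y, thr=0.1):
    """Stage 1: keep features whose |Pearson correlation| with the label >= thr."""
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.where(r >= thr)[0]

def cluster_redundant(X, idx, thr=0.8):
    """Stage 2: greedily group features whose |correlation| with a cluster's
    first member >= thr, so each cluster holds mutually redundant features."""
    clusters = []
    for j in idx:
        for c in clusters:
            if abs(np.corrcoef(X[:, j], X[:, c[0]])[0, 1]) >= thr:
                c.append(j)
                break
        else:
            clusters.append([j])
    return clusters

def integer_pso(clusters, fitness, n_particles=20, iters=50):
    """Stage 3: integer PSO; a particle holds one feature index per cluster."""
    sizes = np.array([len(c) for c in clusters])
    pos = rng.integers(0, sizes, (n_particles, len(clusters)))
    vel = np.zeros(pos.shape)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(np.round(pos + vel), 0, sizes - 1).astype(int)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return [c[i] for c, i in zip(clusters, gbest)]

# Toy usage: 200 samples, 30 "streaming" features, binary label.
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clusters = cluster_redundant(X, relevance_filter(X, y))

def accuracy_proxy(p):  # stand-in for a classifier's accuracy on the subset
    chosen = [c[i] for c, i in zip(clusters, p)]
    return abs(np.corrcoef(X[:, chosen].sum(axis=1), y)[0, 1])

print(integer_pso(clusters, accuracy_proxy))
```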

  • CMA-ES for Discrete and Mixed-Variable Optimization on Sets of Points

https://arxiv.org/abs/2408.13046

Discrete and mixed-variable optimization problems appear in many real-world applications. Most research on mixed-variable optimization considers a mixture of integer and continuous variables, and several integer-handling techniques have been developed to carry the performance of continuous optimization methods over to mixed-integer optimization. In some applications, however, acceptable solutions are obtained by selecting possible points from disjoint subspaces. This paper focuses on optimization on sets of points and proposes an optimization method extending the covariance matrix adaptation evolution strategy (CMA-ES), termed the CMA-ES on sets of points (CMA-ES-SoP). The CMA-ES-SoP incorporates margin correction, an effective integer-handling technique for CMA-ES that maintains the generation probability of neighboring points to prevent premature convergence to a specific non-optimal point. In addition, because margin correction with a fixed margin value tends to increase the marginal probabilities of some neighboring points more than necessary, the CMA-ES-SoP updates the target margin value adaptively so that the average marginal probability stays close to a predefined target probability. Numerical simulations demonstrate that the CMA-ES-SoP successfully optimizes problems on sets of points, whereas the naive CMA-ES fails due to premature convergence.
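The margin idea is easiest to see in one dimension. The toy sketch below is an assumption-laden illustration, not the paper's algorithm: each point owns the probability mass of its cell (bounded by midpoints between sorted points), sigma is inflated whenever a neighbouring cell of the most likely point falls below the margin alpha, and alpha is nudged so the average marginal probability tracks a target; the 1.1 inflation factor and the exponential update are arbitrary choices.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cell_probabilities(mean, sigma, points):
    """Probability mass a 1-D Gaussian assigns to each (sorted) point's cell,
    with cell boundaries at the midpoints between adjacent points."""
    bounds = [-math.inf] + [(a + b) / 2 for a, b in zip(points, points[1:])] + [math.inf]
    return [phi((hi - mean) / sigma) - phi((lo - mean) / sigma)
            for lo, hi in zip(bounds, bounds[1:])]

def margin_correction(mean, sigma, points, alpha):
    """Inflate sigma until each neighbouring cell of the most likely point
    keeps at least alpha probability (a coarse stand-in for the correction)."""
    for _ in range(100):
        p = cell_probabilities(mean, sigma, points)
        k = max(range(len(p)), key=p.__getitem__)
        neigh = [p[i] for i in (k - 1, k + 1) if 0 <= i < len(p)]
        if min(neigh) >= alpha:
            break
        sigma *= 1.1
    return sigma

def adapt_margin(alpha, avg_marginal, target, lr=0.1):
    """Adaptive target margin: nudge alpha so the average marginal
    probability of neighbouring points tracks a predefined target."""
    return max(1e-6, alpha * math.exp(lr * (target - avg_marginal)))

points = [0.0, 1.0, 2.0, 5.0]
sigma = margin_correction(mean=1.0, sigma=0.05, points=points, alpha=0.05)
print(round(sigma, 3))  # sigma inflated so neighbours keep >= 5% mass
```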

Neuroevolution

  • An Algorithmic Framework for the Optimization of Deep Neural Networks Architectures and Hyperparameters, JMLR

https://www.jmlr.org/papers/v25/23-0166.html

In this paper, we propose DRAGON (for DiRected Acyclic Graph OptimizatioN), an algorithmic framework that automatically generates efficient deep neural network architectures and optimizes their associated hyperparameters. The framework is based on evolving Directed Acyclic Graphs (DAGs), defining a more flexible search space than existing ones in the literature. It allows mixtures of classical operations, such as convolutions, recurrences, and dense layers, as well as more recent operations such as self-attention. Based on this search space, we propose neighbourhood and evolution search operators to optimize both the architectures and hyperparameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an asynchronous evolutionary algorithm on a time-series forecasting benchmark. The results demonstrate that DRAGON outperforms state-of-the-art handcrafted models and AutoML techniques for time-series forecasting on numerous datasets. DRAGON has been implemented as an open-source Python package.
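As a rough sketch of what a DAG-encoded search space and a neighbourhood operator can look like (the operation list, widths, and move probabilities here are placeholders, not DRAGON's actual space):

```python
import random

rng = random.Random(0)
OPS = ["conv", "recurrence", "dense", "self_attention"]  # candidate operations

def random_dag(n_nodes, p_edge=0.4):
    """Sample an architecture: each node carries an operation plus a width;
    edges only run from lower to higher indices, so the graph is acyclic."""
    nodes = [{"op": rng.choice(OPS), "units": rng.choice([32, 64, 128])}
             for _ in range(n_nodes)]
    edges = {(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
             if rng.random() < p_edge}
    return nodes, edges

def neighbour(nodes, edges):
    """Neighbourhood move: toggle one random feed-forward edge, or re-draw
    one node's operation and width (two kinds of local search step)."""
    nodes = [dict(n) for n in nodes]
    edges = set(edges)
    if rng.random() < 0.5 and len(nodes) > 1:
        i, j = sorted(rng.sample(range(len(nodes)), 2))
        edges ^= {(i, j)}  # add the edge if absent, drop it if present
    else:
        k = rng.randrange(len(nodes))
        nodes[k] = {"op": rng.choice(OPS), "units": rng.choice([32, 64, 128])}
    return nodes, edges

arch = random_dag(5)
child = neighbour(*arch)
```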

  • Layer-wise Learning Rate Optimization for Task-Dependent Fine-Tuning of Pre-trained Models: An Evolutionary Approach, ACM TELO

https://dl.acm.org/doi/10.1145/3689827

The superior performance of large-scale pre-trained models, such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT), has received increasing attention in both academic and industrial research. A pre-trained model is trained on large-scale unlabeled data to learn general language representations or features for fine-tuning or transfer learning in subsequent tasks. After pre-training, a small amount of labeled data can be used to fine-tune the model for a specific task or domain. This two-stage "pre-training + fine-tuning" paradigm has achieved state-of-the-art results in natural language processing (NLP) tasks. Despite its widespread adoption, a fixed fine-tuning scheme that adapts well to one NLP task may perform inconsistently on others, because different tasks have different latent semantic structures. In this paper, we explore automatic search of fine-tuning patterns for layer-wise learning rates from an evolutionary optimization perspective. Our goal is to use evolutionary algorithms to find better task-dependent fine-tuning patterns for specific NLP tasks than typical fixed schemes. Experimental results on two real-world language benchmarks and three advanced pre-trained language models show the effectiveness and generality of the proposed framework.
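A minimal sketch of the evolutionary idea, assuming a per-layer vector of log10 learning rates and a synthetic fitness standing in for "fine-tune, then return validation score" (the layer count, population settings, and target pattern are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS = 12   # e.g. the 12 transformer layers of BERT-base

def fitness(log_lrs):
    """Stand-in for: fine-tune with these layer-wise learning rates, then
    return validation score. Here a synthetic peak at a decaying pattern."""
    target = np.linspace(np.log10(5e-5), np.log10(1e-5), N_LAYERS)
    return -np.sum((log_lrs - target) ** 2)

def evolve(pop_size=20, gens=50, sigma=0.2):
    """(mu + lambda)-style search over log10 learning rates per layer."""
    pop = rng.uniform(-6.0, -3.0, (pop_size, N_LAYERS))  # lrs in [1e-6, 1e-3]
    for _ in range(gens):
        children = pop + rng.normal(0.0, sigma, pop.shape)  # Gaussian mutation
        both = np.vstack([pop, children])
        scores = np.array([fitness(x) for x in both])
        pop = both[np.argsort(scores)[-pop_size:]]          # keep the best
    return 10.0 ** pop[-1]                                  # best individual

print(evolve().round(7))
```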

Evolutionary Learning

  • Deep Learning to Predict Late-Onset Breast Cancer Metastasis: the Single Hyperparameter Grid Search (SHGS) Strategy for Meta Tuning Concerning Deep Feed-forward Neural Network

https://arxiv.org/abs/2408.15498

While machine learning has advanced in medicine, its widespread use in clinical applications, especially in predicting breast cancer metastasis, is still limited. We have been working on a deep feed-forward neural network (DFNN) model to predict breast cancer metastasis n years in advance. The challenge lies in efficiently identifying optimal hyperparameter values through grid search given limited time and resources; the effectively infinite choices for continuous hyperparameters such as L1 and L2, together with the time-consuming and costly search process, further complicate the task. To address these challenges, we developed the Single Hyperparameter Grid Search (SHGS) strategy, which serves as a preselection method before a full grid search. In our experiments, SHGS is applied to DFNN models for breast cancer metastasis prediction, analyzing eight target hyperparameters: epochs, batch size, dropout, L1, L2, learning rate, decay, and momentum. Three figures depict the experimental results obtained from three LSM-I-10-Plus-year datasets, illustrating the relationship between model performance and target hyperparameter values. For each hyperparameter, we analyzed whether changing it affects model performance, whether specific patterns emerge, and how to choose its values. Our findings reveal that the optimal value of a hyperparameter depends not only on the dataset but also significantly on the settings of the other hyperparameters. Our experiments also suggest reduced value ranges for each target hyperparameter, which can be helpful for a low-budget grid search. This approach provides prior experience and a foundation for the subsequent use of grid search to enhance model performance.
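The preselection logic can be sketched in a few lines. This is a hedged reconstruction, not the authors' code: one hyperparameter is swept at a time with the rest pinned at a base configuration, and values within a tolerance of that sweep's best score survive into a reduced grid; the evaluator, tolerance, and grids are illustrative.

```python
import numpy as np

def shgs(train_eval, base_config, grids, tol=0.01):
    """Single Hyperparameter Grid Search: sweep each hyperparameter alone,
    others fixed at base_config; keep values within tol of the sweep's best.
    The surviving values define a reduced grid for a later full grid search."""
    reduced = {}
    for name, values in grids.items():
        scores = [train_eval(dict(base_config, **{name: v})) for v in values]
        best = max(scores)
        reduced[name] = [v for v, s in zip(values, scores) if s >= best - tol]
    return reduced

# Toy usage with a synthetic evaluator (a real run would train the DFNN).
def fake_eval(cfg):
    return -abs(np.log10(cfg["lr"]) + 3) - 0.1 * abs(cfg["dropout"] - 0.3)

base = {"lr": 1e-3, "dropout": 0.3, "batch_size": 32}
grids = {"lr": [1e-4, 1e-3, 1e-2], "dropout": [0.0, 0.3, 0.5]}
print(shgs(fake_eval, base, grids))
```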

  • Generational Computation Reduction in Informal Counterexample-Driven Genetic Programming

https://arxiv.org/abs/2408.12604

Counterexample-driven genetic programming (CDGP) uses specifications provided as formal constraints to generate the training cases used to evaluate evolving programs. It has also been extended to combine formal constraints with user-provided training data to solve symbolic regression problems. Here we show how the ideas underlying CDGP can be applied using only user-provided training data, without formal specifications. We demonstrate the application of this method, called "informal CDGP," to software synthesis problems. Our results show that informal CDGP finds solutions faster (i.e., with fewer program executions) than standard GP. Additionally, we propose two new variants of informal CDGP and find that one produces significantly more successful runs on about half of the tested problems. Finally, we study whether adding counterexample training cases to the training set is useful by comparing informal CDGP to a static subsample of the training set, and find that adding counterexamples significantly improves performance.
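One generation of the counterexample loop might look like the sketch below, assuming programs are callables and `error` scores a program on a single case (the names and the toy task are hypothetical, not the paper's benchmark suite):

```python
def informal_cdgp_step(population, active_cases, all_cases, error):
    """One generation of the counterexample loop: pick the program with the
    lowest total error on the active cases, then add the full-training-set
    case it gets most wrong (its counterexample) to the active set."""
    best = min(population, key=lambda prog: sum(error(prog, c) for c in active_cases))
    worst_case = max(all_cases, key=lambda c: error(best, c))
    if worst_case not in active_cases:
        active_cases.append(worst_case)
    return best, active_cases

# Toy usage: candidate programs compute a*x; the target behaviour is 2*x.
population = [lambda x, a=a: a * x for a in range(4)]
all_cases = [(x, 2 * x) for x in range(-5, 6)]
error = lambda prog, case: abs(prog(case[0]) - case[1])
best, active = informal_cdgp_step(population, [all_cases[0]], all_cases, error)
print(best(10), len(active))  # 20, 1 -- a perfect program leaves no counterexample
```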

Applied Research

  • Evolutionary Multitasking With Adaptive Cross-Dataset Knowledge Transfer for Band Selection of Hyperspectral Images, IEEE TETCI

https://ieeexplore.ieee.org/document/10649895

Band selection (BS) is a critical technique for hyperspectral image (HSI) classification, reducing the computational burden while providing good class separability. However, most BS methods treat each HSI dataset independently and therefore cannot exploit useful knowledge across multiple datasets. Only one existing BS method attempts to transfer knowledge across datasets, but it ignores the negative transfer caused by inherent differences among them. To alleviate this issue, this paper proposes an evolutionary multitasking BS method with adaptive cross-dataset knowledge transfer. Specifically, a new knowledge transfer strategy based on a linear mapping filters out irrelevant knowledge among different datasets. An adaptive transfer control strategy then adjusts the frequency and intensity of knowledge transfer, further enhancing transfer performance. As validated by our experimental studies, the method properly mitigates the negative transfer caused by dataset differences during cross-dataset knowledge transfer. Compared with several state-of-the-art BS methods on three common HSI datasets, our method finds superior band subsets of higher quality.
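A minimal sketch of the two strategies, under the assumption that source and target populations are paired row-by-row (e.g., by fitness rank) so a least-squares linear map can be fitted; the pairing rule and the multiplicative rate update are illustrative choices, not the paper's exact design:

```python
import numpy as np

def linear_transfer(source_pop, target_pop, candidates):
    """Fit a least-squares linear map M with source_pop @ M ~ target_pop,
    then map source candidates into the target task's search space; the
    mapping filters out knowledge that does not align across datasets."""
    M, *_ = np.linalg.lstsq(source_pop, target_pop, rcond=None)
    return candidates @ M

def adapt_transfer_rate(rate, successes, attempts, lo=0.05, hi=0.5):
    """Raise the transfer frequency when transferred solutions survive
    selection, lower it otherwise (a simple multiplicative control rule)."""
    if attempts == 0:
        return rate
    return float(np.clip(rate * (0.5 + successes / attempts), lo, hi))

rng = np.random.default_rng(3)
S = rng.normal(size=(30, 8))           # source-dataset population (rank-paired)
T = S @ rng.normal(size=(8, 5))        # target-dataset population
print(linear_transfer(S, T, S[:5]).shape)      # (5, 5)
print(adapt_transfer_rate(0.2, successes=6, attempts=10))
```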

  • Quantum search with prior knowledge, SCIS

https://link.springer.com/article/10.1007/s11432-023-3972-y

The combination of contextual side information and search is a powerful paradigm in artificial intelligence. Prior knowledge enables the identification of likely solutions but may be imperfect; contextual information arises naturally, for example in game AI, where prior knowledge biases move decisions. In this work we investigate how to obtain a quantum advantage from contextual information, in particular for search with prior knowledge. We propose a generalization of Grover's search algorithm that achieves the optimal expected success probability of finding the solution when the number of queries is fixed. Experiments on small-scale quantum circuits verify the advantage of our algorithm. Since contextual information is ubiquitous, the method has broad applications; we take game tree search as an example.
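The setting can be illustrated with a small classical simulation of amplitude amplification seeded by a prior: the start state has amplitudes proportional to sqrt(prior), and each iteration applies the oracle followed by a reflection about the start state. This is the standard generalized Grover iteration, shown for intuition only; it is not the paper's provably optimal algorithm.

```python
import numpy as np

def grover_with_prior(prior, marked, iters):
    """Amplitude amplification started from a prior-weighted superposition:
    the oracle phase-flips the marked item, then the state is reflected
    about the start state (the generalized Grover diffusion)."""
    psi0 = np.sqrt(np.asarray(prior, dtype=float))
    psi0 /= np.linalg.norm(psi0)
    psi = psi0.copy()
    for _ in range(iters):
        psi[marked] *= -1.0                     # oracle query
        psi = 2.0 * psi0 * (psi0 @ psi) - psi   # reflect about psi0
    return float(psi[marked] ** 2)              # success probability

uniform = np.full(16, 1 / 16)                   # no prior knowledge
biased = np.full(16, 0.7 / 15)                  # prior puts 30% on item 3
biased[3] = 0.3
print(grover_with_prior(uniform, 3, 3))         # ~0.96 after 3 queries
print(grover_with_prior(biased, 3, 1))          # ~0.97 after a single query
```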

  • A Distance Similarity-based Genetic Optimization Algorithm for Satellite Ground Network Planning Considering Feeding Mode

https://arxiv.org/abs/2408.16300

With the rapid development of the satellite industry, information transmission networks based on communication satellites are becoming a major part of the future satellite-ground integrated network. However, the low transmission efficiency of satellite data relay-back missions constrains system construction and urgently needs to be addressed. Effectively planning satellite-ground networking tasks by reasonably scheduling resources is crucial for efficient transmission of task data. This paper provides a task execution scheme that maximizes the profit of networking tasks for satellite ground network planning considering feeding mode (SGNPFM). To solve the SGNPFM problem, a mixed-integer programming model is constructed with the objective of maximizing the gain of link-building tasks, subject to various satellite constraints in the feed-switching mode. Based on the problem characteristics, we propose a distance similarity-based genetic optimization algorithm (DSGA), which considers the state characteristics of tasks and introduces a weighted Euclidean distance to measure task similarity. To obtain more high-quality solutions, different similarity evaluation methods assist the algorithm in intelligently screening individuals. The DSGA also uses an adaptive crossover strategy based on this similarity mechanism, guiding the algorithm toward efficient population search. In addition, a task scheduling algorithm considering the feed-switching mode is designed for decoding, generating high-quality schemes. Simulation results show that the DSGA effectively solves the SGNPFM problem.
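The similarity machinery can be sketched briefly; the mapping from distance to similarity and the linear rate schedule below are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def weighted_euclidean_similarity(a, b, w):
    """Map a weighted Euclidean distance between two task state vectors
    to a similarity in (0, 1]; closer tasks score higher."""
    d = np.sqrt(np.sum(w * (a - b) ** 2))
    return 1.0 / (1.0 + d)

def adaptive_crossover_rate(sim, p_min=0.4, p_max=0.9):
    """Similarity-guided crossover: recombine individuals from similar
    tasks more often, interpolating linearly between two bounds."""
    return p_min + (p_max - p_min) * sim

w = np.array([0.5, 0.3, 0.2])          # per-feature weights (illustrative)
t1 = np.array([1.0, 0.2, 3.0])         # task state vectors
t2 = np.array([1.1, 0.4, 2.5])
sim = weighted_euclidean_similarity(t1, t2, w)
print(round(sim, 3), round(adaptive_crossover_rate(sim), 3))
```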

  • Compact Pixelated Microstrip Forward Broadside Coupler Using Binary Particle Swarm Optimization

https://arxiv.org/abs/2408.15082

In this paper, a compact microstrip forward broadside coupler (MFBC) with a high coupling level is proposed for the 3.5-3.8 GHz frequency band. The coupler is composed of two parallel pixelated transmission lines. To validate the design strategy, the proposed MFBC is fabricated and measured. The measurements demonstrate a forward coupler with 3 dB coupling and a compact size of 0.12 λg × 0.10 λg. The Binary Particle Swarm Optimization (BPSO) design methodology and the flexibility of pixelation make it possible to optimize the proposed MFBC for a desired coupling level and operating frequency within fixed dimensions. Moreover, low sensitivity to misalignment between the two coupled transmission lines makes the proposed coupler a good candidate for near-field Wireless Power Transfer (WPT) applications and sensors.
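A minimal sigmoid-transfer binary PSO of the kind named in the title might look as follows; the swarm constants are textbook defaults, and the objective is a stand-in, since the real fitness would come from an electromagnetic simulation of the pixelated coupler:

```python
import numpy as np

rng = np.random.default_rng(7)

def bpso(fitness, n_bits, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Sigmoid-transfer binary PSO: velocities stay continuous, and each bit
    is resampled with probability sigmoid(velocity) every iteration."""
    X = rng.integers(0, 2, (n_particles, n_bits))
    V = np.zeros((n_particles, n_bits))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

# Each bit switches one metal pixel of the coupler on or off. The objective
# below is a toy stand-in; a real run would score the simulated coupling
# level over 3.5-3.8 GHz for the pixel pattern.
best, score = bpso(lambda x: -abs(x.mean() - 0.5), n_bits=120)
print(score)
```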




EvoIGroup
The Evolutionary Intelligence (EvoI) Group covers research advances in evolutionary intelligence for network science, machine learning, optimization, and real-world (industrial) applications. Contributions are welcome. Contact: evoIgroup@163.com.