Updates in the field of evolutionary computation mainly cover the following six directions:
Foundations (including genetic algorithms, evolution strategies, genetic programming, swarm intelligence and other algorithm design, theoretical research, benchmarking, evolutionary ideas, algorithm software, surveys, etc.)
Evolutionary optimization (including black-box optimization, multi-objective optimization, constrained optimization, noisy optimization, multitask optimization, multimodal optimization, transfer optimization, large-scale optimization, expensive optimization, learning to optimize, etc.)
Combinatorial optimization (including evolutionary neural combinatorial optimization, evolutionary robotics, route planning, placement and routing, industrial control, scheduling, etc.)
Neuroevolution (including evolving the parameters, hyperparameters, architectures, and rules of neural networks, etc.)
Evolutionary learning (including evolutionary feature selection, reinforcement learning, multi-objective learning, fairness-aware learning, federated learning, evolutionary computer vision, evolutionary natural language processing, evolutionary data mining, etc.)
Applications (industry, networking, security, physics, biology, chemistry, etc.)
Article sources mainly include:
1. IEEE CIS: CIM, TEVC, TNNLS, TFS, TAI, TETCI, CEC
2. IEEE CS/SMC: TPAMI, TKDE, TPDS, TCYB, TSMC, Proc. IEEE
3. ACM: TELO, GECCO, FOGA, ICML
4. MIT: ECJ, ARTL, JMLR, NIPS
5. Elsevier/Springer: AIJ, SWEVO, SCIS, PPSN
6. AAAI/MK/OR: AAAI, IJCAI, ICLR
7. Others: NMI, NC, PNAS, Nature, Science, arXiv
Foundations
Nearest-Better Network for Fitness Landscape Analysis of Continuous Optimization Problems, IEEE TEVC
https://ieeexplore.ieee.org/document/10722873
Fitness landscape analysis (FLA) is quite important in evolutionary computation. In this paper, we propose a novel FLA method, the nearest-better network (NBN), which uses the nearest-better relationship to simplify the original fitness landscape of continuous optimization problems. We introduce an efficient algorithm to calculate NBN for continuous problems. We also propose four numerical measurements and a 3D visualization method based on NBN. Experiments show that compared to the other main FLA methods, the four numerical measurements proposed here can effectively measure the four intended features: neutrality, ruggedness, modality, and basin of attraction, respectively, and common features of the fitness landscape can be maintained in 3D NBN visualization, regardless of the scale of the problem. NBN also provides a view of how algorithms search in high-dimensional problems with the help of the 3D NBN visualization.
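As a quick illustration of the core nearest-better relationship (a minimal sketch only, not the authors' implementation of the four measurements or the 3D visualization), the following snippet links every sampled solution to its nearest strictly better neighbour:

```python
import numpy as np

def nearest_better_network(X, fitness):
    """Minimal nearest-better network: link each sampled solution to the closest
    solution with strictly better (here: lower) fitness. Returns (i, j) edges
    meaning "j is the nearest better solution of i"; the best sample has no edge."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    edges = []
    for i in range(n):
        better = np.where(fitness < fitness[i])[0]        # strictly better solutions
        if better.size == 0:
            continue                                      # i is the best sample
        j = better[np.argmin(dist[i, better])]            # nearest among the better ones
        edges.append((i, j))
    return edges

# Toy usage on a 2-D sphere function
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(200, 2))
f = np.sum(X**2, axis=1)
nbn_edges = nearest_better_network(X, f)
edge_lengths = [np.linalg.norm(X[i] - X[j]) for i, j in nbn_edges]
print(len(nbn_edges), float(np.mean(edge_lengths)))       # edge statistics feed FLA measures
```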
The Impact of Network Structure on Ant Colony Optimization
https://arxiv.org/abs/2410.09059
Ant Colony Optimization (ACO) is a swarm intelligence methodology utilized for solving optimization problems through information transmission mediated by pheromones. As ants sequentially secrete pheromones that subsequently evaporate, the information conveyed predominantly comprises pheromones secreted by recent ants. This paper introduces a network structure into the information transmission process and examines its impact on optimization performance. The network structure is characterized by an asymmetric BA model with parameters for in-degree r and asymmetry ω. At ω=1, the model describes a scale-free network; at ω=0, a random network; and at ω=−1, an extended lattice. We aim to solve the ground state search of the mean-field Ising model, employing a linear decision function for the ants with their response to pheromones quantified by the parameter α. For ω>−1, the pheromone rates for options converge to stable fixed points of the stochastic system. Below the critical threshold αc, there is one stable fixed point, while above αc, there are two. Notably, as ω→−1, both the driving force toward stable fixed points and the strength of the noise reach their maximum, significantly enhancing the probability of finding the ground state of the Ising model.
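The toy model below is a heavily simplified reading of this setup (the BA network structure and the Ising objective are omitted, and the exact form of the linear decision function is an assumption): each ant responds linearly to the current pheromone fraction, and evaporation makes recent ants dominate the conveyed information.

```python
import numpy as np

def aco_binary_choice(n_ants=2000, alpha=0.5, rho=0.05, seed=0):
    """Toy two-option pheromone model. Each ant picks option 0 with probability
    clip(0.5 + alpha * (q - 0.5)), where q is the pheromone fraction of option 0,
    deposits one unit on its choice, and pheromone evaporates at rate rho."""
    rng = np.random.default_rng(seed)
    tau = np.array([1.0, 1.0])                  # pheromone on the two options
    history = []
    for _ in range(n_ants):
        q = tau[0] / tau.sum()                  # pheromone fraction of option 0
        p0 = np.clip(0.5 + alpha * (q - 0.5), 0.0, 1.0)   # linear decision function
        choice = 0 if rng.random() < p0 else 1
        tau *= (1.0 - rho)                      # evaporation: older ants count less
        tau[choice] += 1.0
        history.append(q)
    return np.array(history)

# With a weak response the fraction fluctuates around 0.5; with a strong response
# (alpha above the critical value of this simplified model) it typically locks onto one option.
for a in (0.5, 1.5):
    print(a, aco_binary_choice(alpha=a)[-1])
```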
Evolutionary Optimization
Surrogate-Assisted Search With Competitive Knowledge Transfer for Expensive Optimization, IEEE TEVC
https://ieeexplore.ieee.org/document/10714441
Expensive optimization problems (EOPs) have attracted increasing research attention over the decades due to their ubiquity in a variety of practical applications. Despite many sophisticated surrogate-assisted evolutionary algorithms (SAEAs) that have been developed for solving such problems, most of them lack the ability to transfer knowledge from previously-solved tasks and always start their search from scratch, making them troubled by the notorious cold-start issue. A few preliminary studies that integrate transfer learning into SAEAs still face some issues, such as defective similarity quantification that is prone to underestimate promising knowledge, surrogate-dependency that makes the transfer methods not coherent with the state-of-the-art in SAEAs, etc. In light of the above, a plug and play competitive knowledge transfer method is proposed to boost various SAEAs in this paper. Specifically, both the optimized solutions from the source tasks and the promising solutions acquired by the target surrogate are treated as task-solving knowledge, enabling them to compete with each other to elect the winner for expensive evaluation, thus boosting the search speed on the target task. Moreover, the lower bound of the convergence gain brought by the knowledge competition is mathematically analyzed, which is expected to strengthen the theoretical foundation of sequential transfer optimization. Experimental studies conducted on a series of benchmark problems and a practical application from the petroleum industry verify the efficacy of the proposed method. The source code of the competitive knowledge transfer is available at https://github.com/XmingHsueh/SAS-CKT.
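A minimal sketch of the competition idea is shown below; the election criterion (the surrogate's predicted value) and the Gaussian-process surrogate are illustrative assumptions, and the authors' actual implementation is in the linked SAS-CKT repository.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ckt_step(surrogate_X, surrogate_y, source_solutions, candidate_from_surrogate,
             expensive_eval):
    """One competitive-knowledge-transfer step (sketch): the best source solutions
    and the surrogate's own candidate compete on the surrogate's predicted value;
    only the winner receives the expensive evaluation."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(surrogate_X, surrogate_y)
    pool = np.vstack([source_solutions, candidate_from_surrogate[None, :]])
    pred = gp.predict(pool)               # lower predicted value = more promising (minimization)
    winner = pool[np.argmin(pred)]
    return winner, expensive_eval(winner)

# Toy usage: a 2-D sphere target task, with "source knowledge" from previously solved tasks
rng = np.random.default_rng(1)
target = lambda x: float(np.sum((x - 0.3) ** 2))
X = rng.uniform(-1, 1, size=(20, 2)); y = np.array([target(x) for x in X])
source_best = np.array([[0.25, 0.35], [0.9, -0.9]])       # optima of earlier source tasks
surrogate_candidate = X[np.argmin(y)] + 0.05 * rng.standard_normal(2)
winner, fval = ckt_step(X, y, source_best, surrogate_candidate, target)
print(winner, fval)
```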
Symbolic Regression-Assisted Offline Data-Driven Evolutionary Computation, IEEE TEVC
https://ieeexplore.ieee.org/document/10720865
When solving optimization problems with expensive or implicit objective functions, evolutionary algorithms commonly utilize surrogate models as cost-effective substitutes for evaluation. This category of algorithms is referred to as data-driven evolutionary algorithms (DDEAs). However, when constructing surrogate models, existing studies rely on the hand-crafted model structure, requiring prior knowledge while leading to the suboptimal fitting ability of the model. To address the issue, this paper proposes a novel symbolic regression-assisted evolutionary algorithm, namely SR-DDEA. SR-DDEA employs symbolic regression to automatically construct the model structure without prior knowledge and obtain accurate surrogates. Specifically, we develop an efficient gene expression programming algorithm to enhance the expressive ability of surrogates, assisted by a queue-based decoding strategy to improve the efficiency of model calculations. We also employ a clustering-based selective ensemble method to maximize data utilization and obtain diverse models. Experimental findings on commonly employed benchmarks demonstrate that our algorithm surpasses other cutting-edge offline DDEAs on test problems of different scales and a practical aerodynamic airfoil design challenge.
Activation Function-Assisted Objective Space Mapping to Enhance Evolutionary Algorithms for Large-Scale Many-Objective Optimization, IEEE TSMC
https://ieeexplore.ieee.org/document/10721214
Large-scale many-objective optimization problems (LSMaOPs) pose great difficulties for traditional evolutionary algorithms: the search for Pareto-optimal solutions in a huge decision space is slow, and balancing diversity and convergence among numerous locally optimal solutions is hard. An objective space linear inverse mapping method has achieved great savings in execution time when solving LSMaOPs. Linear mapping is fast and straightforward, but fails to characterize complex functional relationships. If we can enhance the expressive capacity of the mapping model and thereby obtain a more general function approximator, can evolutionary search based on objective space mapping become more efficient? To answer this question, this work employs the nonlinear activation functions widely used in neural networks to enhance objective space inverse mapping, thus efficiently generating excellent offspring populations. A new evolutionary optimization framework based on decision variable analysis is proposed to solve LSMaOPs. To demonstrate its performance, this work carries out empirical experiments involving massive numbers of decision variables and many objectives. Experimental results demonstrate its superiority over representative state-of-the-art algorithms.
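A very rough sketch of objective-space inverse mapping with a nonlinear activation is given below; the least-squares map and the placement of the tanh activation are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def inverse_map_offspring(X, F, shrink=0.9):
    """Fit a least-squares map from objective vectors to decision vectors on the
    current population, shrink the objective vectors toward the ideal point, and
    map them back through a tanh activation (assumed here) to generate candidates
    bounded to the [-1, 1]^n decision space."""
    ideal = F.min(axis=0)
    M, *_ = np.linalg.lstsq(F, X, rcond=None)        # linear map: objectives -> decisions
    F_target = ideal + shrink * (F - ideal)          # aim slightly beyond the current front
    return np.tanh(F_target @ M)                     # nonlinear activation keeps offspring in bounds

# Toy usage on a bi-objective problem with decision variables in [-1, 1]^10
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 10))
F = np.stack([np.sum((X - 0.3) ** 2, axis=1), np.sum((X + 0.3) ** 2, axis=1)], axis=1)
offspring = inverse_map_offspring(X, F)
print(offspring.shape)
```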
Neural Network-Based Knowledge Transfer for Multitask Optimization, IEEE TCYB
https://ieeexplore.ieee.org/document/10711878
Knowledge transfer (KT) is crucial for optimizing tasks in evolutionary multitask optimization (EMTO). However, most existing KT methods can only achieve superficial KT but lack the ability to deeply mine the similarities or relationships among different tasks. This limitation may result in negative transfer, thereby degrading the KT performance. As the KT efficiency strongly depends on the similarities of tasks, this article proposes a neural network (NN)-based KT (NNKT) method to analyze the similarities of tasks and obtain the transfer models for information prediction between different tasks for high-quality KT. First, NNKT collects and pairs the solutions of multiple tasks and trains the NNs to obtain the transfer models between tasks. Second, the obtained NNs transfer knowledge by predicting new promising solutions. Meanwhile, a simple adaptive strategy is developed to find the suitable population size to satisfy various search requirements during the evolution process. Comparison of the experimental results between the proposed NN-based multitask optimization (NNMTO) algorithm and some state-of-the-art multitask algorithms on the IEEE Congress on Evolutionary Computation (IEEE CEC) 2017 and IEEE CEC2022 benchmarks demonstrate the efficiency and effectiveness of the NNMTO. Moreover, NNKT can be seamlessly applied to other EMTO algorithms to further enhance their performances. Finally, the NNMTO is applied to a real-world multitask rover navigation application problem to further demonstrate its applicability.
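The sketch below illustrates the NN-based transfer idea with a small scikit-learn MLP; pairing the two populations by fitness rank is an assumption made for illustration, not necessarily the authors' pairing scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def learn_transfer_model(pop_src, fit_src, pop_tgt, fit_tgt, seed=0):
    """Pair the two populations by fitness rank, train a small MLP that maps source
    solutions to their paired target solutions, and return it as a transfer model."""
    order_s = np.argsort(fit_src)
    order_t = np.argsort(fit_tgt)
    n = min(len(order_s), len(order_t))
    X = pop_src[order_s[:n]]               # rank-paired training inputs
    Y = pop_tgt[order_t[:n]]               # rank-paired training targets
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    model.fit(X, Y)                        # multi-output regression is supported
    return model

# Toy usage: two related sphere tasks with shifted optima
rng = np.random.default_rng(0)
pop_a = rng.uniform(-1, 1, size=(60, 5)); fit_a = np.sum((pop_a - 0.2) ** 2, axis=1)
pop_b = rng.uniform(-1, 1, size=(60, 5)); fit_b = np.sum((pop_b + 0.4) ** 2, axis=1)
model = learn_transfer_model(pop_a, fit_a, pop_b, fit_b)
transferred = model.predict(pop_a[np.argsort(fit_a)[:5]])   # candidate solutions for task B
print(transferred.shape)
```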
A Lattice-based Method for Optimization in Continuous Spaces with Genetic Algorithms
https://arxiv.org/abs/2410.12188
This work presents a novel lattice-based methodology for incorporating multidimensional constraints into continuous decision variables within a genetic algorithm (GA) framework. The proposed approach consolidates established transcription techniques for crossover of continuous decision variables, aiming to leverage domain knowledge and guide the search process towards feasible regions of the design space. This work offers a robust and general purpose lattice-based GA that is applicable to a broad range of optimization problems. Monte Carlo analysis demonstrates that lattice-based methods find solutions two orders of magnitude closer to optima in fewer generations. The effectiveness of the lattice-based approach is showcased through two illustrative multi-objective design problems: (1) optimal telescope placement for astrophotography and (2) optimal design of a satellite constellation for maximizing ground station access. The optimal telescope placement example shows that lattice-based methods converge to the Pareto front in 15% fewer generations than traditional methods. The orbit design example shows that lattice-based methods discover an order of magnitude more Pareto-optimal solutions than traditional methods in a highly constrained design space. Overall, the results show that the lattice-based method exhibits enhanced exploration capabilities, traversing the solution space more comprehensively and achieving faster convergence compared to conventional GAs.
Offline Model-Based Optimization by Learning to Rank
https://arxiv.org/abs/2410.11502
Offline model-based optimization (MBO) aims to identify a design that maximizes a black-box function using only a fixed, pre-collected dataset of designs and their corresponding scores. A common approach in offline MBO is to train a regression-based surrogate model by minimizing mean squared error (MSE) and then find the best design within this surrogate model using different optimizers (e.g., gradient ascent). However, a critical challenge is the risk of out-of-distribution errors, i.e., the surrogate model may overestimate the scores and mislead the optimizers into suboptimal regions. Prior works have attempted to address this issue in various ways, such as using regularization techniques and ensemble learning to enhance the robustness of the model, but the issue still remains. In this paper, we argue that regression models trained with MSE are not well-aligned with the primary goal of offline MBO, which is to select promising designs rather than to predict their scores precisely. Notably, if a surrogate model can maintain the order of candidate designs based on their relative score relationships, it can produce the best designs even without precise predictions. To validate this, we conduct experiments comparing the relationship between the quality of the final designs and MSE, finding that the correlation is very weak. In contrast, a metric that measures order-maintaining quality shows a significantly stronger correlation. Based on this observation, we propose learning a ranking-based model that leverages learning-to-rank techniques to prioritize promising designs based on their relative scores. We show that the generalization error on the ranking loss can be well bounded. Empirical results across diverse tasks demonstrate the superior performance of our proposed ranking-based models compared with twenty existing methods.
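A minimal sketch of a ranking-based surrogate is given below: a linear scorer trained with a pairwise hinge loss, which is only one simple instance of learning to rank and not the paper's model.

```python
import numpy as np

def fit_ranking_surrogate(X, y, lr=0.1, epochs=5000, margin=0.1, seed=0):
    """Train a linear scorer with a pairwise hinge loss so that better designs get
    higher scores; only the ordering of scores matters, not their absolute values."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        i, j = rng.integers(0, n, size=2)
        if y[i] == y[j]:
            continue
        hi, lo = (i, j) if y[i] > y[j] else (j, i)       # hi should outrank lo
        diff = X[hi] - X[lo]
        if w @ diff < margin:                            # hinge: update only violated pairs
            w += lr * diff
    return w

# Toy usage: the true score is a hidden linear function plus noise
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8)); w_true = rng.standard_normal(8)
y = X @ w_true + 0.1 * rng.standard_normal(200)
w = fit_ranking_surrogate(X, y)
best = X[np.argmax(X @ w)]                               # design the ranking surrogate prefers
print(float(best @ w_true), float(y.max()))
```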
Combinatorial Optimization
A Morphological Transfer-Based Multi-Fidelity Evolutionary Algorithm for Soft Robot Design, IEEE CIM
https://ieeexplore.ieee.org/document/10709670
The intelligent soft robot has received wide attention from both academia and industry due to its remarkable adaptability. It can learn intelligent behaviors and evolve its morphology under unpredictable environmental conditions. However, designing a soft robot with a well-adapted morphology involves searching through a large number of possible structures. Furthermore, to learn control tasks in diverse environments, the robot must perform computationally intensive numerical simulations, which makes evaluating its performance time-consuming. To address both issues, a multi-fidelity evolutionary algorithm is proposed, which consists of three main components. Firstly, a niching-based fidelity adjustment strategy is introduced to significantly reduce the evaluation cost by training the controller of each robot for only a small number of simulation steps. In particular, considering the estimation errors of the low-fidelity evaluation, the population is divided into multiple subpopulations with different fidelity levels for parallel optimization. Secondly, an effective morphology transfer strategy is proposed to improve the quality of offspring by transferring the local structure of robots in different subpopulations. Finally, a fast local search is developed to enhance the search efficiency of the algorithm without performing additional control simulations. The experimental results on 31 test tasks demonstrate that the proposed algorithm outperforms the SOTA design algorithms on 25 test tasks, especially when the computational budget is limited. Compared to the baseline algorithms, our algorithm reduces the computational cost by 60% while achieving similar performance.
Evaluating Meta-Heuristic Algorithms for Dynamic Capacitated Arc Routing Problems Based on a Novel Lower Bound Method, IEEE CIM
https://ieeexplore.ieee.org/document/10709780
Meta-heuristic algorithms, especially evolutionary algorithms, have been frequently used to find near optimal solutions to combinatorial optimization problems. The evaluation of such algorithms is often conducted through comparisons with other algorithms on a set of benchmark problems. However, even if one algorithm is the best among all those compared, it still has difficulties in determining the true quality of the solutions found because the true optima are unknown, especially in dynamic environments. It would be desirable to evaluate algorithms not only relatively through comparisons with others, but also in absolute terms by estimating their quality compared to the true global optima. Unfortunately, true global optima are normally unknown or hard to find since the problems being addressed are NP-hard. In this paper, instead of using true global optima, lower bounds are derived to carry out an objective evaluation of the solution quality. In particular, the first approach capable of deriving a lower bound for dynamic capacitated arc routing problem (DCARP) instances is proposed, and two optimization algorithms for DCARP are evaluated based on such a lower bound approach. An effective graph pruning strategy is introduced to reduce the time complexity of our proposed approach. Our experiments demonstrate that our approach provides tight lower bounds for small DCARP instances. Two optimization algorithms are evaluated on a set of DCARP instances through the derived lower bounds in our experimental studies, and the results reveal that the algorithms still have room for improvement for large complex instances.
Personalized Learning Path Problem Variations: Computational Complexity and AI Approaches, IEEE TAI
https://ieeexplore.ieee.org/document/10722910
E-learning courses often suffer from high dropout rates and low student satisfaction. One way to address this issue is to use Personalized Learning Paths (PLPs), which are sequences of learning materials that meet the individual needs of students. However, creating PLPs is difficult and often involves combining knowledge graphs, student profiles, and learning materials. Researchers typically assume that the problem of creating PLPs belongs to the NP-hard class of computational problems. However, previous research in this field has neither defined the different variations of the PLP problem nor formally established their computational complexity. Without clear definitions of the PLP variations, researchers risk making invalid comparisons and conclusions when they use different metaheuristics for different PLP problems. In order to unify this conversation, this paper formally proves the NP-completeness of two common PLP variations and their generalizations and uses them to categorize recent research in the PLP field. It then presents an instance of the PLP problem using real-world data and shows how this instance can be cast into two different NP-complete variations. This paper then presents three AI strategies, solving one of the PLP variations with backtracking and branch-and-bound heuristics and also converting the PLP variation instance to XCSP3, an intermediate constraint satisfaction language, to be solved with a general constraint optimization solver. This paper solves the other PLP variation instance using a greedy search heuristic. The paper finishes by comparing the results of the two different PLP variations.
Rethinking Supervised Learning based Neural Combinatorial Optimization for Routing Problem, ACM TELO
https://dl.acm.org/doi/10.1145/3694690
Neural combinatorial optimization (NCO) is a promising learning-based approach to solving complex combinatorial optimization problems such as the traveling salesman problem (TSP), the vehicle routing problem (VRP), and the orienteering problem (OP). However, how to efficiently train a powerful NCO solver for routing problems remains a crucial challenge. The widely used reinforcement learning method suffers from sparse rewards and low data efficiency, while the supervised learning approach requires a large number of high-quality solutions (i.e., labels) that could be costly to obtain. In this work, we find that simple data augmentation operations can drastically reduce the number of required high-quality solutions for supervised learning. Moreover, simple boosting strategies that leverage the property of multiple optima can significantly improve training efficiency. With only a small set of 50,000 labeled instances, supervised learning can achieve a competitive in-distribution performance with the widely-used reinforcement learning counterpart. Furthermore, we also investigate the generalization ability for larger out-of-distribution problems. We believe the findings from this work may lead to a rethinking of the value of data-efficient supervised learning for NCO solver training.
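One simple family of such augmentation operations for Euclidean routing instances is the symmetry group of the unit square, sketched below; whether this is exactly the set used in the paper is an assumption, but the invariance argument is standard.

```python
import numpy as np

def augment_tsp_instance(coords):
    """Apply the 8 symmetries of the unit square to a Euclidean TSP instance with
    coordinates in [0, 1]^2. All symmetries preserve pairwise distances, so the
    labelled optimal tour (the same city order) stays optimal for every variant."""
    x, y = coords[:, 0], coords[:, 1]
    variants = [
        np.stack(v, axis=1)
        for v in [(x, y), (1 - x, y), (x, 1 - y), (1 - x, 1 - y),
                  (y, x), (1 - y, x), (y, 1 - x), (1 - y, 1 - x)]
    ]
    return variants

coords = np.random.default_rng(0).uniform(size=(20, 2))
aug = augment_tsp_instance(coords)
print(len(aug), aug[0].shape)   # 8 augmented instances per labelled instance
```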
Selection of Filters for Photonic Crystal Spectrometer Using Domain-Aware Evolutionary Algorithms
https://arxiv.org/abs/2410.13657
This work addresses the critical challenge of optimal filter selection for a novel trace gas measurement device. This device uses photonic crystal filters to retrieve trace gas concentrations prone to photon and read noise. The filter selection directly influences the accuracy and precision of the gas retrieval and is therefore a crucial performance driver. We formulate the problem as a stochastic combinatorial optimization problem and develop a simulator mimicking gas retrieval with noise. The objective function for selecting filters that reduce the retrieval error is minimized by the employed metaheuristics, which represent various families of optimizers. We then improve the top-performing algorithms using our novel distance-driven extensions, which employ metrics on the space of filter selections. This leads to a novel adaptation of the UMDA algorithm, called UMDA-U-PLS-Dist, which, equipped with one of the proposed distance metrics, is the most efficient and robust solver among those considered. Analysis of the filter sets produced by this method reveals that filters with relatively smooth transmission profiles but high contrast improve device performance. Moreover, the top-performing solution shows significant improvement compared to the baseline.
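For reference, a plain UMDA for the underlying filter-subset selection is sketched below; the distance-driven extension that defines UMDA-U-PLS-Dist and the gas-retrieval simulator are not reproduced here.

```python
import numpy as np

def umda_select(n_filters, k, objective, pop_size=100, elite_frac=0.3,
                generations=50, seed=0):
    """Plain UMDA for selecting roughly k out of n_filters filters: sample binary
    selections from per-filter marginal probabilities, keep the elite samples, and
    re-estimate the marginals from them."""
    rng = np.random.default_rng(seed)
    p = np.full(n_filters, k / n_filters)                 # marginal probability per filter
    best, best_val = None, np.inf
    for _ in range(generations):
        pop = rng.random((pop_size, n_filters)) < p       # sample binary selections
        vals = np.array([objective(ind) for ind in pop])
        elite = pop[np.argsort(vals)[: int(elite_frac * pop_size)]]
        p = 0.5 * p + 0.5 * elite.mean(axis=0)            # smoothed marginal update
        p = np.clip(p, 0.02, 0.98)                        # keep some exploration
        if vals.min() < best_val:
            best_val, best = vals.min(), pop[np.argmin(vals)]
    return best, best_val

# Toy objective: prefer exactly the first k "informative" filters
n, k = 40, 8
obj = lambda sel: abs(sel.sum() - k) + (k - sel[:k].sum())
best, val = umda_select(n, k, obj)
print(val, np.flatnonzero(best))
```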
Neuroevolution
Efficient Evaluation Methods for Neural Architecture Search: A Survey, IEEE TAI
https://ieeexplore.ieee.org/document/10713213
Neural Architecture Search (NAS) has received increasing attention because of its exceptional merits in automating the design of Deep Neural Network (DNN) architectures. However, the performance evaluation process, as a key part of NAS, often requires training a large number of DNNs. This inevitably makes NAS computationally expensive. In past years, many Efficient Evaluation Methods (EEMs) have been proposed to address this critical issue. In this paper, we comprehensively survey the EEMs published to date, and provide a detailed analysis to motivate the further development of this research direction. Specifically, we divide the existing EEMs into four categories based on the number of DNNs trained to construct them. This categorization reflects the degree of efficiency in principle, which in turn helps readers quickly grasp the methodological features. In surveying each category, we further discuss the design principles and analyze the strengths and weaknesses to clarify the landscape of existing EEMs, making the research trends of EEMs easier to understand. Furthermore, we also discuss the current challenges and issues to identify future research directions in this emerging topic. In summary, this survey provides a convenient overview of EEMs for interested users, who can easily select the proper EEM for the task at hand, while researchers in the NAS field can continue exploring the future directions suggested in the paper.
Transformer Guided Coevolution: Improved Team Formation in Multiagent Adversarial Games
https://arxiv.org/abs/2410.13769
We consider the problem of team formation within multiagent adversarial games. We propose BERTeam, a novel algorithm that uses a transformer-based deep neural network with Masked Language Model training to select the best team of players from a trained population. We integrate this with coevolutionary deep reinforcement learning, which trains a diverse set of individual players to choose teams from. We test our algorithm in the multiagent adversarial game Marine Capture-The-Flag, and we find that BERTeam learns non-trivial team compositions that perform well against unseen opponents. For this game, we find that BERTeam outperforms MCAA, an algorithm that similarly optimizes team formation.
ActNAS: Generating Efficient YOLO Models using Activation NAS
https://arxiv.org/abs/2410.10887
Activation functions introduce non-linearity into neural networks, enabling them to learn complex patterns. Different activation functions vary in speed and accuracy, ranging from faster but less accurate options like ReLU to slower but more accurate functions like SiLU or SELU. Typically, the same activation function is used throughout an entire model architecture. In this paper, we conduct a comprehensive study on the effects of using mixed activation functions in YOLO-based models, evaluating their impact on latency, memory usage, and accuracy across CPU, NPU, and GPU edge devices. We also propose a novel approach that leverages Neural Architecture Search (NAS) to design YOLO models with optimized mixed activation functions. The best model generated through this method demonstrates a slight improvement in mean Average Precision (mAP) compared to the baseline model (SiLU), while being 22.28% faster and consuming 64.15% less memory on the reference NPU device.
Evolutionary Learning
A Multi-Tree Genetic Programming-Based Ensemble Approach to Image Classification With Limited Training Data [Research Frontier], IEEE CIM
https://ieeexplore.ieee.org/document/10709804
Large variations across images make image classification a challenging task; limited training data further increases its difficulty. Genetic programming (GP) has been widely applied to image classification. However, most GP methods tend to directly evolve a single classifier or depend on a predefined classification algorithm, which typically does not lead to ideal generalization performance when only a few training instances are available. Applying ensemble learning to classification often outperforms employing a single classifier. However, single-tree representation (each individual contains a single tree) is widely employed in GP, and training multiple diverse and accurate base learners/classifiers based on single-tree GP is challenging. Therefore, this article proposes a new ensemble construction method based on multi-tree GP (each individual contains multiple trees) for image classification. A single individual forms an ensemble, and its multiple trees constitute the base learners. To find the best individual in which the multiple trees are diverse and cooperate effectively, i.e., the nth tree can correct the errors of the previous n-1 trees, the new method assigns different weights to the trees using the idea of AdaBoost and performs classification via weighted majority voting. Furthermore, a new tree representation is developed to evolve diverse and accurate base learners that extract useful features and conduct classification simultaneously. The new approach achieves significantly better performance than almost all benchmark methods on eight datasets. Additional analyses highlight the effectiveness of the new ensembles and tree representation, demonstrating the potential for providing valuable interpretability in ensemble trees.
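The weighting idea can be illustrated with a generic AdaBoost-style computation over the predictions of the trees in one individual; this is a sketch of the principle, not the paper's GP-specific formulation.

```python
import numpy as np

def adaboost_weights(tree_preds, y):
    """Weight the trees sequentially: each tree's error is measured on instances
    re-weighted toward the mistakes of the previous trees (labels in {-1, +1}),
    so later trees that correct earlier errors receive meaningful weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    alphas = []
    for pred in tree_preds:                       # one row of predictions per tree
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified instances
        w /= w.sum()
    return np.array(alphas)

def weighted_vote(tree_preds, alphas):
    """Weighted majority vote of the trees' predictions."""
    return np.sign(alphas @ np.asarray(tree_preds))

# Toy usage with three hand-made "trees" on six instances
y = np.array([1, 1, 1, -1, -1, -1])
preds = np.array([[1, 1, -1, -1, -1, 1],
                  [1, -1, 1, -1, 1, -1],
                  [1, 1, 1, 1, -1, -1]])
alphas = adaboost_weights(preds, y)
print(alphas, weighted_vote(preds, alphas))
```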
Surrogate Modeling to Address the Absence of Protected Membership Attributes in Fairness Evaluation, ACM TELO
https://dl.acm.org/doi/10.1145/3700145
It is imperative to ensure that artificial intelligence models perform well for all groups including those from underprivileged populations. By comparing the performance of models for the protected group with respect to the rest of the population, we can uncover and prevent unwanted bias. However, a significant drawback of such binary fairness evaluation is its dependency on protected group membership attributes. In various real-world scenarios, protected status for individuals is sparse, unavailable, or even illegal to collect. This paper extends the previous work on binary fairness metrics to relax the requirement on deterministic membership to its surrogate counterpart under a probabilistic setting. We show how to conduct binary fairness evaluation when exact protected attributes are not available, but their surrogates as likelihoods are accessible. In theory, we prove that inferred metrics calculated from surrogates are valid under standard statistical assumptions. In practice, we demonstrate the effectiveness of our approach using publicly available data from the Home Mortgage Disclosure Act and simulated benchmarks that mimic real-world conditions under different levels of model disparity. We extend the results from previous work to include comparisons with alternative model-based methods and we develop further practical guidance based on our extensive simulation. Finally, we embody our method in open-source software that is readily available for use in other applications.
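The core idea of replacing hard group membership with likelihood weights can be sketched as follows; the metric (a demographic-parity-style gap) and the estimator below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def surrogate_parity_gap(y_pred, p_protected):
    """Probability-weighted positive-prediction-rate gap: the rate for the protected
    group is estimated with membership likelihoods as weights and compared with the
    rate for the rest of the population."""
    y_pred = np.asarray(y_pred, dtype=float)       # 1 = favourable decision
    p = np.asarray(p_protected, dtype=float)       # P(individual is in the protected group)
    rate_protected = np.sum(p * y_pred) / np.sum(p)
    rate_rest = np.sum((1 - p) * y_pred) / np.sum(1 - p)
    return rate_protected - rate_rest

# Toy usage: a model that is slightly less favourable to likely-protected individuals
rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
y_hat = (rng.uniform(size=5000) < (0.6 - 0.2 * p)).astype(int)
print(surrogate_parity_gap(y_hat, p))   # a negative gap indicates disadvantage
```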
Evolutionary Retrofitting
https://arxiv.org/abs/2410.11330
AfterLearnER (After Learning Evolutionary Retrofitting) consists in applying non-differentiable optimization, including evolutionary methods, to refine fully-trained machine learning models by optimizing a set of carefully chosen parameters or hyperparameters of the model with respect to some actual, exact, and hence possibly non-differentiable error signal, computed on a subset of the standard validation set. The efficiency of AfterLearnER is demonstrated by tackling non-differentiable signals such as threshold-based criteria in depth sensing, the word error rate in speech re-synthesis, image quality in 3D generative adversarial networks (GANs), image generation via Latent Diffusion Models (LDM), the number of kills per life at Doom, computational accuracy or BLEU in code translation, and human appreciation in image synthesis. In some cases, this retrofitting is performed dynamically at inference time by taking into account user inputs. The advantages of AfterLearnER are its versatility (no gradient is needed), the possibility to use non-differentiable feedback including human evaluations, the limited overfitting (supported by a theoretical study), and its anytime behavior. Last but not least, AfterLearnER requires only a minimal amount of feedback, i.e., a few dozen to a few hundred scalars, rather than the tens of thousands needed in most related published works. Compared to fine-tuning (typically using the same loss and gradient-based optimization on a smaller but still big dataset at a fine grain), AfterLearnER uses a minimal amount of data on the real objective function without requiring differentiability.
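A minimal sketch in the same spirit is given below: a (1+1)-ES tunes a handful of post-hoc parameters of a frozen model against an exact, non-differentiable validation score. AfterLearnER itself relies on stronger black-box optimizers; the toy "frozen model" and the bias parameters here are assumptions for illustration.

```python
import numpy as np

def retrofit_one_plus_one(score_fn, dim, iters=300, sigma=0.1, seed=0):
    """(1+1)-ES over a small parameter vector, maximizing an exact (possibly
    non-differentiable) validation score of a frozen model."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    best = score_fn(theta)
    for _ in range(iters):
        cand = theta + sigma * rng.standard_normal(dim)
        val = score_fn(cand)
        if val >= best:                     # accept if the exact signal does not degrade
            theta, best = cand, val
    return theta, best

# Toy "frozen model": fixed logits on a validation set; we retrofit a per-class bias
# (3 scalars) to maximize exact validation accuracy.
rng = np.random.default_rng(1)
logits = rng.standard_normal((500, 3))                    # frozen model outputs
true_bias = np.array([0.8, -0.3, 0.0])                    # hidden miscalibration
labels = np.argmax(logits + true_bias, axis=1)
acc = lambda bias: float(np.mean(np.argmax(logits + bias, axis=1) == labels))
bias, best_acc = retrofit_one_plus_one(acc, dim=3)
print(acc(np.zeros(3)), best_acc)                         # the retrofit can only match or improve the baseline
```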
pyhgf: A neural network library for predictive coding
https://arxiv.org/abs/2410.09206
Bayesian models of cognition have gained considerable traction in computational neuroscience and psychiatry. Their scope is now expected to expand rapidly to artificial intelligence, providing general inference frameworks to support embodied, adaptable, and energy-efficient autonomous agents. A central theory in this domain is predictive coding, which posits that learning and behaviour are driven by hierarchical probabilistic inferences about the causes of sensory inputs. Biological realism constrains these networks to rely on simple local computations in the form of precision-weighted predictions and prediction errors. This can make the framework highly efficient, but its implementation comes with unique challenges on the software development side. Embedding such models in standard neural network libraries often becomes limiting, as these libraries' compilation and differentiation backends can force a conceptual separation between optimization algorithms and the systems being optimized. This critically departs from other biological principles such as self-monitoring, self-organisation, cellular growth and functional plasticity. In this paper, we introduce pyhgf: a Python package backed by JAX and Rust for creating, manipulating and sampling dynamic networks for predictive coding. We improve over other frameworks by enclosing the network components as transparent, modular and malleable variables in the message-passing steps. The resulting graphs can implement arbitrary computational complexity as belief propagation. But the transparency of the core variables can also translate into inference processes that leverage self-organisation principles, and express structure learning, meta-learning or causal discovery as the consequence of network structural adaptation to surprising inputs. The code, tutorials and documentation are hosted at: this https URL.
Applications
Biologically Inspired Swarm Dynamic Target Tracking and Obstacle Avoidance
https://arxiv.org/abs/2410.11237
This study proposes a novel artificial intelligence (AI) driven flight computer, integrating an online retraining-free prediction model, swarm control, and an obstacle avoidance strategy, to track dynamic targets using a distributed drone swarm for military applications. To enable dynamic target tracking, the swarm requires a trajectory prediction capability to achieve intercept, allowing it to track rapid maneuvers and movements while maintaining efficient path planning. Traditional predictive methods such as curve fitting or Long Short-Term Memory (LSTM) have low robustness and struggle with dynamic target tracking in the short term due to the slow convergence of single-agent trajectory prediction, and they often require extensive offline training or tuning to be effective. Consequently, this paper introduces a novel robust adaptive bidirectional fuzzy brain emotional learning prediction (BFBEL-P) methodology to address these challenges. The controller integrates a fuzzy interface, a neural network enabling rapid adaptation, predictive capability, and multi-agent solving that aggregates multiple solutions to achieve rapid convergence and high accuracy in both the short and long term. This was verified through numerical simulations in which complex trajectories were predicted and tracked by a swarm of drones. These simulations show improved adaptability and accuracy over state-of-the-art methods in the short term and strong results over long time horizons, enabling accurate swarm target tracking and predictive capability.