These days, AI work is practically inseparable from the attention mechanism.
In papers, a model without some modified Attention hardly counts as a contribution.
In algorithm-role interviews, the easy version asks you to walk through a few kinds of attention; the harder version asks you to implement an attention function, MQA, or the like from scratch.
So everyone, students especially, should build a solid foundation in Attention. To that end, here are some resources: 112 innovative studies on 11 mainstream attention mechanisms, including scaled dot-product attention, multi-head attention, cross attention, spatial attention, and channel attention, updated through September 2024.
These are arguably the most cutting-edge attention resources in academia today. Every paper also comes with code, so you can conveniently reimplement and reproduce the results yourself.
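As a warm-up for the "implement it from scratch" interview question mentioned above, here is a minimal NumPy sketch of the basic attention function. The shapes and variable names are illustrative assumptions, not taken from any paper in this list.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q: (..., Lq, d_k), K: (..., Lk, d_k), V: (..., Lk, d_v)."""
    d_k = Q.shape[-1]
    # Dot-product scores, scaled by sqrt(d_k) so logits stay well-behaved.
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed positions
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ V, weights

# Toy usage: batch of 2, 4 queries attending over 6 keys.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4, 8))
K = rng.normal(size=(2, 6, 8))
V = rng.normal(size=(2, 6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4, 8)
```

The 1/sqrt(d_k) scaling is what puts the "scaled" in scaled dot-product attention: it keeps the logits from growing with d_k, so the softmax does not saturate.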
Scaled Dot-Product Attention
5.Sep.2024—LMLT: Low-to-high Multi-Level Vision Transformer for Image Super-Resolution
4.Sep.2024—MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation
4.Sep.2024—More is More: Addition Bias in Large Language Models
4.Sep.2024—LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
Multi-Head Attention
4.Sep.2024—Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening
30.Aug.2024—From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs
25.Jun.2024—Temporal-Channel Modeling in Multi-head Self-Attention for Synthetic Speech Detection
14.May.2024—Improving Transformers with Dynamically Composable Multi-Head Attention
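The papers above all build on the same basic construction: run several attention heads in parallel on lower-dimensional projections, then concatenate. Below is a minimal NumPy sketch, with an `n_kv_heads` knob that also covers the MQA variant from the interview questions mentioned earlier (`n_kv_heads=1` means all query heads share a single K/V head). The function and parameter names are my own assumptions, not any library's API.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, n_kv_heads=None):
    """x: (L, d_model). With n_kv_heads=1 this becomes multi-query
    attention (MQA): every query head shares a single K/V head."""
    if n_kv_heads is None:
        n_kv_heads = n_heads            # plain multi-head attention
    L, d_model = x.shape
    d_head = d_model // n_heads
    # Project and split into heads: (heads, L, d_head).
    q = (x @ Wq).reshape(L, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(L, n_kv_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(L, n_kv_heads, d_head).transpose(1, 0, 2)
    # Broadcast shared K/V heads across groups of query heads (MQA/GQA).
    group = n_heads // n_kv_heads
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ v                     # (n_heads, L, d_head)
    concat = heads.transpose(1, 0, 2).reshape(L, d_model)
    return concat @ Wo

# Toy usage in MQA mode: 4 query heads, 1 shared K/V head.
rng = np.random.default_rng(0)
d_model, n_heads = 16, 4
x = rng.normal(size=(5, d_model))
Wq = rng.normal(size=(d_model, d_model)) * 0.1
Wk = rng.normal(size=(d_model, d_model // n_heads)) * 0.1
Wv = rng.normal(size=(d_model, d_model // n_heads)) * 0.1
Wo = rng.normal(size=(d_model, d_model)) * 0.1
y = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, n_kv_heads=1)
print(y.shape)  # (5, 16)
```

MQA's appeal is the smaller K/V projection: at inference, the KV cache shrinks by a factor of `n_heads`, which is why it comes up in interview questions about serving LLMs.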
Strided Attention
25.Aug.2024—Vision-Language and Large Language Model Performance in Gastroenterology: GPT, Claude, Llama, Phi, Mistral, Gemma, and Quantized Models
21.Aug.2024—Unlocking Adversarial Suffix Optimization Without Affirmative Phrases: Efficient Black-box Jailbreaking via LLM as Optimizer
16.Aug.2024—Fine-tuning LLMs for Autonomous Spacecraft Control: A Case Study Using Kerbal Space Program
15.Aug.2024—FuseChat: Knowledge Fusion of Chat Models
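The papers in this section vary in design, so for completeness here is only the classic strided pattern from the sparse-attention literature: each query attends to earlier positions a fixed stride apart. A minimal NumPy sketch under that assumption (names and shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def strided_attention(Q, K, V, stride):
    """Causal strided pattern: query i attends to positions j <= i
    with (i - j) % stride == 0. Q, K, V: (L, d)."""
    L, d = Q.shape
    i = np.arange(L)[:, None]
    j = np.arange(L)[None, :]
    mask = (j <= i) & ((i - j) % stride == 0)   # diagonal always allowed
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)    # -inf -> weight 0 after softmax
    return softmax(scores) @ V

# Toy usage: 8 positions, stride 2.
rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 4))
K = rng.normal(size=(8, 4))
V = rng.normal(size=(8, 4))
out = strided_attention(Q, K, V, stride=2)
print(out.shape)  # (8, 4)
```

The point of the stride is cost: each query touches only about L/stride keys instead of all L, which is how sparse patterns like this scale attention to long sequences.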