Uses a large language model to explain decision-making behavior (GPT-Driver: Learning to Drive with GPT). Leverages LLMs to improve the interpretability of autonomous-driving decisions (Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving). LLM agents assess lane occupancy and evaluate the safety of candidate actions (Receive, Reason, and React: Drive as You Say with Large Language Models in Autonomous Vehicles).
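A minimal sketch of the general idea shared by these works: serialize the scene to text, ask an LLM for an action plus a natural-language justification. The prompt wording, the JSON schema, and the `call_llm` stub below are illustrative assumptions, not the prompt format of any of the cited papers.

```python
# Hypothetical sketch: object-level scene -> prompt -> action + explanation.
import json

def build_decision_prompt(scene: dict) -> str:
    """Serialize a toy scene description and ask for an interpretable decision."""
    return (
        "You are the decision module of an autonomous vehicle.\n"
        f"Scene (object-level, JSON): {json.dumps(scene)}\n"
        "Reply with JSON: {\"action\": <keep_lane|change_left|change_right|brake>, "
        "\"explanation\": <one short sentence>}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stub; replace with a real chat-completion call.
    return '{"action": "brake", "explanation": "Lead vehicle is braking 12 m ahead."}'

scene = {
    "ego": {"speed_mps": 13.9, "lane": 1},
    "objects": [{"type": "car", "lane": 1, "distance_m": 12.0, "rel_speed_mps": -4.0}],
}
decision = json.loads(call_llm(build_decision_prompt(scene)))
print(decision["action"], "-", decision["explanation"])
```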
Converts the scene representation into text prompts, encodes them with a BERT model, and fuses the resulting text encoding with the image encoding to decode trajectory predictions (Can you text what is happening? Integrating pre-trained language encoders into trajectory prediction models for autonomous driving).
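A minimal PyTorch sketch of this fusion pattern. The model names, feature sizes, image backbone, and decoder head are illustrative assumptions, not the architecture from the paper; only the overall flow (BERT text encoding + image encoding, fused and decoded into waypoints) is what the summary above describes.

```python
# Sketch: BERT-encoded scene text fused with an image encoding to decode a trajectory.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TextImageTrajectoryDecoder(nn.Module):
    def __init__(self, horizon: int = 12):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Toy image encoder; a real system would use a stronger backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(768 + 32, 256), nn.ReLU(),
            nn.Linear(256, horizon * 2),   # (x, y) per future step
        )
        self.horizon = horizon

    def forward(self, input_ids, attention_mask, image):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                       # [B, 768]
        img_feat = self.image_encoder(image)  # [B, 32]
        fused = torch.cat([text_feat, img_feat], dim=-1)
        return self.decoder(fused).view(-1, self.horizon, 2)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
prompt = "ego vehicle at 10 m/s, pedestrian crossing 15 m ahead on the right"
tokens = tokenizer(prompt, return_tensors="pt")
model = TextImageTrajectoryDecoder()
trajectory = model(tokens["input_ids"], tokens["attention_mask"], torch.rand(1, 3, 128, 128))
print(trajectory.shape)  # torch.Size([1, 12, 2])
```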
Adjusts the planner by conditioning it on personalized commands (Exploring the LLM-based planner conditioning on personalized commands). Accepts or rejects user commands according to predefined traffic rules and system requirements (Human-Centric Autonomous Systems With LLMs for User Command Reasoning).
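A toy sketch of the accept/reject step. The rule list and the keyword-based `validate_command` logic are placeholder assumptions standing in for the LLM-based reasoning used in the cited work.

```python
# Hypothetical sketch: validate a user command against predefined traffic rules.
RULES = [
    ("no_speeding", "Never exceed the posted speed limit."),
    ("no_red_light", "Never cross an intersection on a red light."),
    ("no_illegal_stop", "Do not stop on a highway except in emergencies."),
]

def validate_command(command: str) -> tuple[bool, str]:
    """Toy keyword check standing in for LLM-based command reasoning."""
    lowered = command.lower()
    if "run the red light" in lowered or "ignore the speed limit" in lowered:
        return False, "Command conflicts with predefined traffic rules."
    if "stop here" in lowered and "highway" in lowered:
        return False, RULES[2][1]
    return True, "Command accepted and forwarded to the planner."

for cmd in ["Take the next exit", "Ignore the speed limit, I'm late"]:
    ok, reason = validate_command(cmd)
    print(f"{cmd!r}: {'ACCEPT' if ok else 'REJECT'} ({reason})")
```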
LLMs can extract information from National Highway Traffic Safety Administration (NHTSA) crash reports and generate diverse scenarios for simulation and testing (Adept: A testing platform for simulated autonomous driving). Uses GPT in a question-answering (QA) style to generate data scenarios for further simulation and testing (TARGET: Automated Scenario Generation from Traffic Rules for Testing Autonomous Vehicles). Uses GPT to translate traffic rules from natural language into a domain-specific language and generate test scenarios (Language Conditioned Traffic Generation).
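An illustrative sketch of the report-to-scenario step. The prompt wording, the scenario schema, and the `call_llm` stub are assumptions, not the actual pipelines of the cited papers; the point is only that a QA-style prompt can turn a crash-report excerpt into a structured scenario a simulator could consume.

```python
# Hypothetical sketch: crash-report text -> structured test scenario via an LLM.
import json

REPORT = (
    "Vehicle 1 was traveling northbound at approximately 45 mph when Vehicle 2 "
    "entered the intersection from the east against a red signal."
)

PROMPT = (
    "Extract a test scenario from the crash report below.\n"
    f"Report: {REPORT}\n"
    "Answer as JSON with keys: road_type, ego_speed_mph, adversary_behavior, trigger."
)

def call_llm(prompt: str) -> str:
    # Hypothetical stub; replace with an actual GPT call.
    return json.dumps({
        "road_type": "signalized_intersection",
        "ego_speed_mph": 45,
        "adversary_behavior": "red_light_violation_from_east",
        "trigger": "adversary_enters_intersection",
    })

scenario = json.loads(call_llm(PROMPT))
print(scenario)
```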
Covers traffic rules through LLMs to improve the interpretability of driving decisions (Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving). A common-sense module that stores the rules of human driving, with a library and APIs for interacting with the perception, prediction, and mapping systems (A Language Agent for Autonomous Driving). A top-down decision-making system that uses LLMs to identify important agents and make decisions (LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving). Uses a memory module that stores textual descriptions of driving scenarios for few-shot learning (DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models).
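A minimal sketch of such a scenario memory used for few-shot prompting. The hash-based `embed` function is a placeholder assumption (a real system of this kind would use a proper text-embedding model); the retrieval-by-similarity structure is the part being illustrated.

```python
# Sketch: store past scenario descriptions as vectors, retrieve the nearest ones
# as in-context (few-shot) examples for the LLM.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a pseudo-random unit vector per text (stable within one process)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class ScenarioMemory:
    def __init__(self):
        self.texts, self.vectors = [], []

    def add(self, description: str):
        self.texts.append(description)
        self.vectors.append(embed(description))

    def retrieve(self, query: str, k: int = 2):
        sims = np.array(self.vectors) @ embed(query)
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

memory = ScenarioMemory()
memory.add("Highway merge with a truck in the target lane; yielded and merged behind.")
memory.add("Pedestrian at a crosswalk in rain; stopped 5 m before the line.")
memory.add("Roundabout entry with dense traffic; waited for a 3-second gap.")
few_shot_examples = memory.retrieve("Merging onto a busy highway")
print(few_shot_examples)
```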
Compares fine-tuning with in-context learning and concludes that few-shot (in-context) learning is slightly more effective (GPT-Driver: Learning to Drive with GPT). Compares training from scratch with fine-tuning and finds that LoRA-based fine-tuning outperforms training from scratch (Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving).
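A minimal sketch of a LoRA fine-tuning setup using Hugging Face PEFT. The base model ("gpt2"), rank, and target modules are illustrative choices, not the configuration of the cited paper; the point is that LoRA injects small low-rank adapters so only a tiny fraction of parameters is trained.

```python
# Sketch: attach LoRA adapters to a small causal LM before fine-tuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # trainable params are a small fraction of the total
# From here, train as usual (e.g. with transformers.Trainer) on driving QA pairs.
```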
Proposes a reflection module that helps the LLM improve its driving reasoning through human feedback (DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models). Develops a "coach agent" module from driver-interview data to guide the LLM toward a human-like driving style (SurrealDriver: Designing Generative Driver Agent Simulation Framework in Urban Contexts based on Large Language Model). Builds a taxonomy of natural-language instructions for deep reinforcement learning from the voice instructions of human coaches (Incorporating Voice Instructions in Model-Based Reinforcement Learning for Self-Driving Cars).
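A hypothetical sketch of one reflection step. The prompt wording and the `call_llm` stub are assumptions; the structure shown is simply "bad decision + human feedback -> one-sentence lesson appended to memory for future prompts".

```python
# Sketch: turn human feedback on a poor decision into a stored "lesson".
def call_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion call.
    return "Lesson: re-check the blind spot before changing lanes when a motorcycle was recently seen nearby."

def reflect(scenario: str, decision: str, feedback: str, memory: list[str]) -> str:
    prompt = (
        "You made a driving decision that a human rated as unsafe.\n"
        f"Scenario: {scenario}\nDecision: {decision}\nHuman feedback: {feedback}\n"
        "State, in one sentence, the lesson to remember next time."
    )
    lesson = call_llm(prompt)
    memory.append(lesson)  # reused as context in future prompts
    return lesson

lessons: list[str] = []
print(reflect(
    "Dense traffic, motorcycle approaching fast in the left lane",
    "change_left",
    "Too risky: the motorcycle had to brake hard.",
    lessons,
))
```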
Proposes a method for reducing hallucinations: when there is not enough information to make a decision, the model answers "I don't know" (Receive, Reason, and React: Drive as You Say with Large Language Models in Autonomous Vehicles). A comprehensive survey of reinforcement learning from human feedback, with a focus on improving safety in autonomous driving (A Survey of Reinforcement Learning from Human Feedback).
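A small illustrative sketch of the abstention idea. The prompt text and the `call_llm` stub are assumptions; the key point is that the caller falls back to a conservative default instead of acting on a hallucinated answer.

```python
# Sketch: allow the model to abstain, and handle the abstention explicitly.
SYSTEM_PROMPT = (
    "You are a driving assistant. If the scene description does not contain "
    "enough information to decide safely, answer exactly: I don't know."
)

def call_llm(system: str, user: str) -> str:
    # Stub standing in for a real chat-completion call.
    return "I don't know"

answer = call_llm(SYSTEM_PROMPT, "Occluded intersection, no visibility of cross traffic. Proceed?")
if answer.strip().lower() == "i don't know":
    action = "slow_down_and_wait"   # conservative fallback instead of acting on a guess
else:
    action = answer
print(action)
```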
A new attention mechanism design for fast inference in generative models (PagedAttention: Fast Attention Mechanism for Efficient Inference in Generative Models). GPTQ quantization, which speeds up LLMs by compressing model weights (GPTQ: Quantization for Large Language Models). AWQ quantization, which accelerates large language models (AWQ: An Efficient Quantization Method for Accelerating Large Language Models). SqueezeLLM, which achieves speedups of 2.1x to 2.3x (SqueezeLLM: Accelerating Large Language Models with Quantization). The SLidR distillation technique, which improves the efficiency of LiDAR processing for detection and segmentation tasks (SLidR: Distillation Techniques for LiDAR Input in Autonomous Driving). A spatio-temporal distillation method that improves the efficiency of LiDAR input processing (ST-SLiDR: Spatio-Temporal Distillation for LiDAR-based Autonomous Driving Systems).
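A toy numpy sketch of weight-only quantization, just to illustrate why methods in this family shrink and speed up LLMs. This is plain per-channel symmetric 4-bit rounding, not the actual GPTQ, AWQ, or SqueezeLLM algorithms, which choose quantization grids far more carefully (for example, using calibration activations).

```python
# Toy illustration: per-channel symmetric quantization of a weight matrix.
import numpy as np

def quantize_per_channel(w: np.ndarray, bits: int = 4):
    """Quantize each output channel (row) of a weight matrix to signed ints."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for 4-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(8, 16).astype(np.float32)         # toy weight matrix
q, scale = quantize_per_channel(w)
w_hat = dequantize(q, scale)
print("mean abs error:", np.abs(w - w_hat).mean())    # small reconstruction error
print("storage: 4-bit ints + one fp scale per row instead of fp32 weights")
```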
How a knowledge-driven large language model copes with erroneous perception data (DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models).