LLMs are not perfect at calling tools. A model may try to call a tool that does not exist, or return arguments that fail to match the requested schema. Strategies such as keeping schemas simple, reducing the number of tools passed at once, and using good names and descriptions can reduce this risk, but they are not foolproof. This guide covers some ways to build error handling into your graph to mitigate these failure modes.
from langchain_core.tools import tool
@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location == "shanghai":
        raise ValueError("Input queries must be proper nouns")
    elif location == "Shanghai":
        return "It's 60 degrees and foggy."
    else:
        raise ValueError("Invalid input.")
from typing import Literal
from langchain_ollama import ChatOllama
import base_conf
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
tool_node = ToolNode([get_weather])
model_with_tools = (ChatOllama(base_url=base_conf.base_url,
                               model=base_conf.model_name,
                               temperature=base_conf.temperature)
                    .bind_tools([get_weather]))
def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")
app = workflow.compile()
When we try to call the tool, we find that the model calls it with bad input, causing the tool to raise an error. The prebuilt ToolNode that executes the tool has some built-in error handling: it catches the error and passes it back to the model so that it can try again:

from langchain_core.messages import HumanMessage

response = app.invoke(
    {"messages": [HumanMessage("what is the weather in shanghai?")]},
)
for message in response["messages"]:
    string_representation = f"{message.type.upper()}: {message.content}\n"
    print(string_representation)
HUMAN: what is the weather in shanghai?
AI:
TOOL: Error: ValueError('Input queries must be proper nouns')
Please fix your mistakes.
AI:
TOOL: It's 60 degrees and foggy.
AI: Currently in Shanghai, it is 60 degrees with foggy conditions. Please make sure to carry an umbrella if you plan on going outside!
As you can see, when a tool wrapped in ToolNode raises, ToolNode handles the exception in context on its own and tells the LLM to reason again. Very convenient!
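Incidentally, you can often tune this behavior without giving up ToolNode. Recent langgraph versions expose a handle_tool_errors parameter on ToolNode: True keeps the default behavior shown above, a string replaces the default error text, and a callable formats the caught exception itself (langgraph infers which exception types to catch from the callable's annotations). A minimal sketch, assuming your installed version has this parameter; format_weather_error is a hypothetical helper:

def format_weather_error(e: ValueError) -> str:
    # Hypothetical formatter: give the model a concrete hint for retrying.
    # The ValueError annotation tells ToolNode to catch only ValueError.
    return f"Tool call failed with {e!r}. Retry with a capitalized city name."

tool_node_with_hint = ToolNode(
    [get_weather],
    handle_tool_errors=format_weather_error,
)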
What if you want to go further and define a fully custom strategy? Is that possible? It is: in that case you drop ToolNode and mimic its approach yourself. So what is its approach?

import json

from langchain_core.messages import ToolMessage

def call_tool(state: MessagesState):
    tools_by_name = {get_weather.name: get_weather}
    messages = state["messages"]
    last_message = messages[-1]
    output_messages = []
    for tool_call in last_message.tool_calls:
        try:
            tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            output_messages.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        except Exception as e:
            # Return the error if the tool call fails
            output_messages.append(
                ToolMessage(
                    content="",
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                    additional_kwargs={"error": e},
                )
            )
    return {"messages": output_messages}
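Note the design choice in the except branch: the exception goes into additional_kwargs rather than into the message content, so downstream routing logic can detect the failure programmatically, and you stay in full control of what error text, if any, the model eventually sees.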
In other words, we configure our own tool-call node in the graph; when a call raises, we catch the exception ourselves, construct a ToolMessage, and add it to the state. Then we add a routing function to the graph, something like:

def should_fallback(
    state: MessagesState,
) -> Literal["agent", "remove_failed_tool_call_attempt"]:
    messages = state["messages"]
    failed_tool_messages = [
        msg
        for msg in messages
        if isinstance(msg, ToolMessage)
        and msg.additional_kwargs.get("error") is not None
    ]
    if failed_tool_messages:
        return "remove_failed_tool_call_attempt"
    return "agent"
When we detect an error in the state, we route to a designated next node. With this combination of manually wrapped ToolMessages and error-aware routing, we can implement fully flexible error handling of our own.
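To round the pattern off, here is a minimal sketch of what the remove_failed_tool_call_attempt node and the graph wiring could look like. This is one possible implementation, not the only one: it uses RemoveMessage, which the reducer behind MessagesState understands, to delete the failed attempt from the state before looping back to the agent for a clean retry:

from langchain_core.messages import AIMessage, RemoveMessage

def remove_failed_tool_call_attempt(state: MessagesState):
    messages = state["messages"]
    # Find the last AIMessage (the one that issued the failed tool call)
    # and drop it along with everything after it, so the model retries
    # without seeing its own failed attempt.
    last_ai_message_index = next(
        i
        for i, msg in reversed(list(enumerate(messages)))
        if isinstance(msg, AIMessage)
    )
    return {"messages": [RemoveMessage(id=m.id) for m in messages[last_ai_message_index:]]}

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tool)
workflow.add_node("remove_failed_tool_call_attempt", remove_failed_tool_call_attempt)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_conditional_edges("tools", should_fallback)
workflow.add_edge("remove_failed_tool_call_attempt", "agent")
app = workflow.compile()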