# 25 - Framework Version Update Notes (March 2026)
⚠️ Timeliness note: this document was re-verified on 2026-03-26. Version numbers follow the official PyPI pages, GitHub Releases, and official docs: LangChain | LangGraph | LlamaIndex | Dify | CrewAI
## 📊 Verified Version Information (2026-03-26)
| Framework | Current stable | Verified on | Official source |
|---|---|---|---|
| LangChain | 1.2.13 | 2026-03-26 | PyPI |
| LangChain Core | 1.2.22 | 2026-03-26 | PyPI |
| LangGraph | 1.1.3 | 2026-03-26 | PyPI |
| LlamaIndex | 0.14.18 | 2026-03-26 | PyPI |
| CrewAI | 1.11.1 | 2026-03-26 | PyPI |
| Dify | stable v1.13.2; pre-release 1.14.0-rc1 | 2026-03-26 | GitHub Releases |
## 🔄 Key API Changes
### 1. LangChain 1.2.x Key Changes
#### 1.1 Deprecated APIs
```python
# ❌ Deprecated - do not use
from langchain.chains import LLMChain, SimpleSequentialChain, SequentialChain
from langchain.agents import initialize_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory

# ✅ New API - use LCEL
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# The new way to build a chain
chain = prompt | llm | StrOutputParser()

# The new way to build an agent - use LangGraph
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools)
```
#### 1.2 LCEL (LangChain Expression Language) Best Practices
```python
"""
LCEL best practices - March 2026
"""
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnablePassthrough, RunnableParallel

# 1. Basic chain
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
llm = ChatOpenAI(model="gpt-5-mini")
chain = prompt | llm | StrOutputParser()

# 2. RAG chain
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# documents: a list of Document objects loaded beforehand
vectorstore = Chroma.from_documents(documents, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# The RAG prompt must expose the variables fed into it below
rag_prompt = ChatPromptTemplate.from_template(
    "Answer using the context.\nContext: {context}\nQuestion: {question}"
)
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
    | StrOutputParser()
)

# 3. Parallel execution
keyword_prompt = ChatPromptTemplate.from_template("List keywords for {topic}")
chain = RunnableParallel(
    summary=prompt | llm | StrOutputParser(),
    keywords=keyword_prompt | llm | StrOutputParser()
)

# 4. Chain with memory - the new memory API
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

history = ChatMessageHistory()
chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history=lambda session_id: history,
    input_messages_key="input",
    history_messages_key="chat_history"
)
```
### 2. LangGraph 1.1.x Key Changes
#### 2.1 Core StateGraph Usage
```python
"""
LangGraph 1.1.x best practices
"""
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI

# 1. Define the state
class AgentState(TypedDict):
    # add_messages appends new messages instead of overwriting the list
    messages: Annotated[list, add_messages]
    next_action: str

# 2. Create the StateGraph
graph = StateGraph(AgentState)

# 3. Add nodes
def agent_node(state: AgentState):
    llm = ChatOpenAI(model="gpt-5-mini")
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def tool_node(state: AgentState):
    # Placeholder: run the requested tool and append its result
    return {"messages": []}

graph.add_node("agent", agent_node)
graph.add_node("tool", tool_node)

# 4. Add edges
def should_continue(state: AgentState) -> str:
    # Must return a key of the mapping below, not a node object
    return "continue" if state.get("next_action") == "tool" else "end"

graph.add_edge(START, "agent")
graph.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tool", "end": END}
)
graph.add_edge("tool", "agent")

# 5. Compile
app = graph.compile()

# 6. Run
result = app.invoke({"messages": [("user", "Hello!")]})
```
#### 2.2 Prebuilt Agent
```python
"""
Using the prebuilt agent - recommended
"""
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import ast
import operator

# Define tools
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Search results for: {query}"

@tool
def calculator(expression: str) -> float:
    """Evaluate a math expression (parsed safely via the AST)."""
    operators = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def eval_expr(node):
        # ast.Constant replaces the deprecated ast.Num
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        elif isinstance(node, ast.BinOp) and type(node.op) in operators:
            return operators[type(node.op)](eval_expr(node.left), eval_expr(node.right))
        else:
            raise ValueError("Unsupported operation")

    tree = ast.parse(expression, mode="eval")
    return eval_expr(tree.body)

# Create the agent
llm = ChatOpenAI(model="gpt-5-mini")
tools = [search, calculator]
agent = create_react_agent(llm, tools)

# Run
result = agent.invoke({
    "messages": [("user", "What is 2 + 2?")]
})
```
### 3. LlamaIndex 0.14.x Key Changes
#### 3.1 Core API Updates
```python
"""
LlamaIndex 0.14.x best practices
"""
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# 1. Global configuration
Settings.llm = OpenAI(model="gpt-5-mini")
Settings.embed_model = OpenAIEmbedding()

# 2. Load documents
documents = SimpleDirectoryReader("./data").load_data()

# 3. Build the index
index = VectorStoreIndex.from_documents(documents)

# 4. Build a query engine
query_engine = index.as_query_engine(
    similarity_top_k=5,
    response_mode="compact"
)

# 5. Query
response = query_engine.query("What is the document about?")
```
#### 3.2 Advanced RAG
```python
"""
Advanced RAG with LlamaIndex - 2026
"""
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.postprocessor import SimilarityPostprocessor

# index: the VectorStoreIndex built in section 3.1

# 1. Custom retriever
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=10
)

# 2. Postprocessor
postprocessor = SimilarityPostprocessor(similarity_cutoff=0.7)

# 3. Assemble the query engine (streaming must be enabled for response_gen)
query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    node_postprocessors=[postprocessor],
    streaming=True
)

# 4. Streaming query
streaming_response = query_engine.query("Tell me more")
for text in streaming_response.response_gen:
    print(text, end="")
```
### 4. CrewAI 1.11.x Key Changes
```python
"""
CrewAI 1.11.x best practices
"""
from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI

# 1. Configure the LLM
llm = ChatOpenAI(model="gpt-5-mini")

# 2. Create agents
researcher = Agent(
    role="Researcher",
    goal="Research AI topics",
    backstory="Expert AI researcher",
    llm=llm,
    verbose=True
)
writer = Agent(
    role="Writer",
    goal="Write engaging content",
    backstory="Professional writer",
    llm=llm,
    verbose=True
)

# 3. Create tasks
research_task = Task(
    description="Research the latest AI trends",
    expected_output="A summary of AI trends",
    agent=researcher
)
write_task = Task(
    description="Write a blog post about AI trends",
    expected_output="A blog post",
    agent=writer
)

# 4. Create the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    verbose=True
)

# 5. Run
result = crew.kickoff()
```
### 5. Dify 2026 Key Changes
⚠️ Important update: re-verified on 2026-03-26.
The GitHub Releases page for langgenius/dify shows the latest stable release as v1.13.2 (released 2026-03-26) and the latest pre-release as 1.14.0-rc1 (released 2026-03-26). The official MCP docs currently describe "Add MCP Server (HTTP)", so the examples here use the HTTP integration rather than the local command/args form.
#### 5.1 API Call Updates
```python
"""
Dify API calls - 2026
"""
import requests

class DifyClient:
    """Minimal Dify API client."""

    def __init__(self, api_key: str, base_url: str = "https://api.dify.ai/v1"):
        self.api_key = api_key
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def chat(self, query: str, user: str = "default",
             conversation_id: str | None = None, inputs: dict | None = None):
        """Chat endpoint."""
        payload = {
            "query": query,
            "user": user,
            "response_mode": "blocking",
            "inputs": inputs or {}
        }
        if conversation_id:
            payload["conversation_id"] = conversation_id
        response = requests.post(
            f"{self.base_url}/chat-messages",
            headers=self.headers,
            json=payload
        )
        return response.json()

    def workflow_run(self, inputs: dict, user: str = "default"):
        """Workflow endpoint."""
        response = requests.post(
            f"{self.base_url}/workflows/run",
            headers=self.headers,
            json={
                "inputs": inputs,
                "user": user,
                "response_mode": "blocking"
            }
        )
        return response.json()
```
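The client above uses `response_mode: "blocking"`; Dify's other documented mode is `"streaming"`, which returns Server-Sent Events (`data: {...}` lines). A stdlib-only sketch of assembling such a stream into a full answer; the `event`/`answer` field names mirror Dify's chunked chat responses, but treat the exact schema as an assumption to verify against the API docs:

```python
import json

def parse_sse_lines(lines):
    """Yield the JSON payload of each 'data: {...}' line in an SSE stream."""
    for line in lines:
        line = line.strip()
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

def collect_answer(lines):
    """Concatenate the incremental 'answer' fields of message events."""
    parts = []
    for event in parse_sse_lines(lines):
        if event.get("event") == "message":
            parts.append(event.get("answer", ""))
    return "".join(parts)

# Simulated stream (in practice: requests.post(..., stream=True).iter_lines())
sample = [
    'data: {"event": "message", "answer": "Hel"}',
    '',
    'data: {"event": "message", "answer": "lo"}',
    'data: {"event": "message_end", "metadata": {}}',
]
print(collect_answer(sample))  # Hello
```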
#### 5.2 MCP Integration
```python
"""
Dify MCP integration - 2026
"""
# The official docs currently describe "Add MCP Server (HTTP)".
# Notes:
# 1. "Using MCP tools in Dify" and "publishing a Dify app as an MCP Server"
#    are two separate capabilities
# 2. For consuming MCP tools, the currently documented example uses HTTP transport
# 3. If the server requires OAuth, the auth flow is completed in the Dify web UI

mcp_server = {
    "name": "notion",
    "transport": "http",
    "server_url": "https://api.notion.com/mcp",
    "server_id": "notion_mcp"
}

# Done in the Dify web UI:
# 1. Tools -> Add MCP Server (HTTP)
# 2. Fill in server_url / name / server_id
# 3. Complete OAuth (if required)
# 4. Select the synced MCP tools in an Agent / Workflow
```
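Under the hood, the HTTP transport carries MCP's JSON-RPC 2.0 messages. Purely as an illustration of the wire format, this sketch builds the `initialize` request a client would POST to the `server_url`; the `protocolVersion` string and `clientInfo` values are assumptions to check against the current MCP specification:

```python
import json

def make_initialize_request(request_id=1):
    """Build an MCP 'initialize' JSON-RPC 2.0 request body (illustrative)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumption: check the MCP spec
            "capabilities": {},
            "clientInfo": {"name": "dify", "version": "1.13.2"},
        },
    }

# The dict serializes to the JSON body of the first HTTP POST
body = json.dumps(make_initialize_request())
```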
## 📋 Migration Checklist
### Migrating from legacy LangChain
- Replace `LLMChain` with LCEL chains (`prompt | llm | parser`)
- Replace `initialize_agent` with `create_react_agent`
- Replace `ConversationBufferMemory` with `RunnableWithMessageHistory`
- Replace `RetrievalQA` with an LCEL RAG chain
- Update all imports to `langchain_core` and `langchain_community`
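The `LLMChain` → LCEL item above amounts to switching from a config object to function composition. As a rough, library-free sketch of what LCEL's `|` operator does (the names here are illustrative, not LangChain's internals):

```python
class Runnable:
    """Minimal stand-in for an LCEL runnable: wraps a function and
    composes with | so that (a | b).invoke(x) == b.invoke(a.invoke(x))."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Toy "prompt | llm | parser" pipeline
prompt = Runnable(lambda d: f"Tell me about {d['topic']}")
llm = Runnable(lambda text: f"ANSWER({text})")
parser = Runnable(lambda msg: msg.lower())

chain = prompt | llm | parser
print(chain.invoke({"topic": "RAG"}))  # answer(tell me about rag)
```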
### Migrating from legacy LangGraph
- Make sure you use `StateGraph` rather than the legacy `Graph`
- Use the `START` and `END` constants instead of strings
- Update the return format of conditional edges
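The "return format of conditional edges" item refers to the router function: it inspects the state and returns a string key, which the mapping passed to `add_conditional_edges` resolves to a node (or `END`). A dependency-free sketch of such a router; the tool-call convention and `FakeMessage` are illustrative stand-ins:

```python
def should_continue(state: dict) -> str:
    """Router for add_conditional_edges: return a key of the mapping
    {"continue": "tool", "end": END}, not a node object."""
    last = state["messages"][-1]
    # Illustrative convention: keep looping while the last message
    # carries pending tool calls, otherwise finish.
    if getattr(last, "tool_calls", None):
        return "continue"
    return "end"

class FakeMessage:
    """Stand-in for a chat message that may request tool calls."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls

print(should_continue({"messages": [FakeMessage(tool_calls=[{"name": "search"}])]}))  # continue
print(should_continue({"messages": [FakeMessage()]}))  # end
```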
### Migrating from legacy LlamaIndex
- Use `Settings` for global configuration
- Replace `ServiceContext` with `Settings`
- Use the new query engine API
## 🔗 Related Resources
Last updated: 2026-03-26 · Next planned update: June 2026