The core of Retrieval-Augmented Generation (RAG) is connecting the data you care about to a large language model (LLM), enabling data-driven generative AI. By combining generative capabilities with your own data sources, RAG allows an LLM to deliver accurate, context-aware answers. The potential of RAG systems extends well beyond chatbot-style applications: they can play a key role in improving business decisions, predictive analytics, and other emerging AI use cases. RAG's usefulness is already widely recognized, and as the technology evolves we can expect transformative applications that change how we learn and interact with information.

Much of the world's important semi-structured data is stored in complex, hard-to-process file types, most notoriously PDF files. Critical documents often arrive in PDF form: earnings call transcripts, investor reports, news articles, 10-K/10-Q filings, and research papers on arXiv, among others. To bring this data into a RAG pipeline, we need a way to cleanly, accurately, and efficiently extract the embedded content: text, tables, images, figures, and other elements.

LlamaParse is a generative-AI-powered document parsing technology designed for complex documents that contain embedded objects such as tables and figures. Its core capability is enabling retrieval systems over such documents (e.g., PDFs): it extracts their structured and semi-structured data and converts it into an easily ingestible format such as markdown or plain text, which can then be embedded and loaded into a RAG pipeline.

The following code implements a RAG pipeline that uses LlamaParse to extract the contents of a PDF file. First, install the dependencies:

!pip install llama-index
!pip install llama-index-core
!pip install llama-index-embeddings-openai
!pip install llama-parse
!pip install llama-index-vector-stores-kdbai
!pip install pandas
!pip install llama-index-postprocessor-cohere-rerank
!pip install kdbai_client
from llama_parse import LlamaParse
from llama_index.core import Settings
from llama_index.core import StorageContext
from llama_index.core import VectorStoreIndex
from llama_index.core.node_parser import MarkdownElementNodeParser
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.kdbai import KDBAIVectorStore
from llama_index.postprocessor.cohere_rerank import CohereRerank
from getpass import getpass
import os
import kdbai_client as kdbai
Set up the API keys for LlamaCloud and OpenAI:

# llama-parse is async-first, running the async code in a notebook requires the use of nest_asyncio
import nest_asyncio
nest_asyncio.apply()
# API access to llama-cloud
os.environ["LLAMA_CLOUD_API_KEY"] = (
os.environ["LLAMA_CLOUD_API_KEY"]
if "LLAMA_CLOUD_API_KEY" in os.environ
else getpass("LLAMA CLOUD API key: ")
)
# Using OpenAI API for embeddings/llms
os.environ["OPENAI_API_KEY"] = (
os.environ["OPENAI_API_KEY"]
if "OPENAI_API_KEY" in os.environ
else getpass("OpenAI API Key: ")
)
# Set up KDB.AI endpoint and API key
KDBAI_ENDPOINT = (
    os.environ["KDBAI_ENDPOINT"]
    if "KDBAI_ENDPOINT" in os.environ
    else input("KDB.AI endpoint: ")
)
KDBAI_API_KEY = (
    os.environ["KDBAI_API_KEY"]
    if "KDBAI_API_KEY" in os.environ
    else getpass("KDB.AI API key: ")
)
session = kdbai.Session(api_key=KDBAI_API_KEY, endpoint=KDBAI_ENDPOINT)
Connect to the "default" database, define a schema and an index for the KDB.AI table, then create the table. The flat index performs a brute-force L2 search over the embeddings column, and dims is 1536 to match the output size of the text-embedding-3-small model:

schema = [
dict(name="document_id", type="str"),
dict(name="text", type="str"),
dict(name="embeddings", type="float32s"),
]
indexFlat = {
"name": "flat",
"type": "flat",
"column": "embeddings",
"params": {'dims': 1536, 'metric': 'L2'},
}
db = session.database("default")
KDBAI_TABLE_NAME = "LlamaParse_Table"
# First ensure the table does not already exist
try:
    db.table(KDBAI_TABLE_NAME).drop()
except kdbai.KDBAIException:
    pass
# Create the table
table = db.create_table(KDBAI_TABLE_NAME, schema, indexes=[indexFlat])
Download the sample PDF (the arXiv paper "LLM In-Context Recall is Prompt Dependent") and configure the LLM and embedding models:

!wget 'https://arxiv.org/pdf/2404.08865' -O './LLM_recall.pdf'
EMBEDDING_MODEL = "text-embedding-3-small"
GENERATION_MODEL = "gpt-4o"
llm = OpenAI(model=GENERATION_MODEL)
embed_model = OpenAIEmbedding(model=EMBEDDING_MODEL)
Settings.llm = llm
Settings.embed_model = embed_model
Parse the document with LlamaParse, passing parsing instructions that give the parser context about the document's content:

pdf_file_name = './LLM_recall.pdf'
parsing_instructions = '''The document titled "LLM In-Context Recall is Prompt Dependent" is an academic preprint from April 2024, authored by Daniel Machlab and Rick Battle from the VMware NLP Lab. It explores the in-context recall capabilities of Large Language Models (LLMs) using a method called "needle-in-a-haystack," where a specific factoid is embedded in a block of unrelated text. The study investigates how the recall performance of various LLMs is influenced by the content of prompts and the biases in their training data. The research involves testing multiple LLMs with varying context window sizes to assess their ability to recall information accurately when prompted differently. The paper includes detailed methodologies, results from numerous tests, discussions on the impact of prompt variations and training data, and conclusions on improving LLM utility in practical applications. It contains many tables. Answer questions using the information in this article and be precise.'''
documents = LlamaParse(
    result_type="markdown",
    parsing_instructions=parsing_instructions,
).load_data(pdf_file_name)
print(documents[0].text[:1000])
Extract base nodes (text) and object nodes (tables) from the markdown output:
node_parser = MarkdownElementNodeParser(llm=llm, num_workers=8)
# Retrieve nodes (text) and objects (table)
nodes = node_parser.get_nodes_from_documents(documents)
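At this point the nodes exist only in memory; they still need to be embedded and written into the KDB.AI table before they can be searched. The imports above (KDBAIVectorStore, StorageContext, VectorStoreIndex) point to the standard LlamaIndex ingestion pattern; the following is a minimal sketch of that step, assuming the table created earlier:

# Separate the parsed output into text nodes and table objects
base_nodes, objects = node_parser.get_nodes_and_objects(nodes)
# Wrap the KDB.AI table in a LlamaIndex vector store; building the index
# embeds the nodes and inserts them into the table
vector_store = KDBAIVectorStore(table)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(
    nodes=base_nodes + objects,
    storage_context=storage_context,
    show_progress=True,
)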
Next, define helper functions that embed a query and retrieve the most relevant chunks directly from the KDB.AI table:

from openai import OpenAI
client = OpenAI()
def embed_query(query):
    # Embed the query with the same model used for the document embeddings
    query_embedding = client.embeddings.create(
        input=query,
        model="text-embedding-3-small"
    )
    return query_embedding.data[0].embedding

def retrieve_data(query):
    query_embedding = embed_query(query)
    # Search the 'flat' index for the 5 nearest neighbors; the filter
    # excludes results from one specific document_id
    results = table.search(
        vectors={'flat': [query_embedding]},
        n=5,
        filter=[('<>', 'document_id', '4a9551df-5dec-4410-90bb-43d17d722918')],
    )
    retrieved_data_for_RAG = []
    for index, row in results[0].iterrows():
        retrieved_data_for_RAG.append(row['text'])
    return retrieved_data_for_RAG
Finally, define a RAG function that retrieves relevant context for a query and sends both to GPT-4o:

def RAG(query):
    # Instruct the model (via the system prompt) to answer from the provided context
    question = "You will answer this question based on the provided reference material: " + query
    messages = "Here is the provided context: " + "\n"
    # Retrieve the top matching chunks and append them to the context
    results = retrieve_data(query)
    if results:
        for data in results:
            messages += data + "\n"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": question},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": messages},
                ],
            },
        ],
        max_tokens=300,
    )
    content = response.choices[0].message.content
    return content
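To exercise the pipeline end to end, call the function with a question about the ingested paper. The query below is only an illustrative example:

# Hypothetical example query against the ingested paper
print(RAG("Describe the needle-in-a-haystack method used in this paper."))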
In this article, we walked through building a RAG pipeline over a complex PDF document. Using LlamaParse, we converted the PDF to markdown, extracted its text and tables, and ingested the data into KDB.AI to support retrieval. For RAG systems to be production-ready, they must be able to handle and ingest the knowledge stored in complex document types, ensuring that key information can be extracted and used efficiently across applications; LlamaParse is a key tool for processing and converting such documents.

Reference code: https://github.com/KxSystems/kdbai-samples/blob/main/LlamaParse_pdf_RAG/llamaParse_demo.ipynb