"This section shows you how to replace the default schema with a custom schema.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a new index with custom filterable fields \n",
"\n",
"This schema shows field definitions. It's the default schema, plus several new fields attributed as filterable. Because it's using the default vector configuration, you won't see vector configuration or vector profile overrides here. The name of the default vector profile is \"myHnswProfile\" and it's using a vector configuration of Hierarchical Navigable Small World (HNSW) for indexing and queries against the content_vector field.\n",
"\n",
"There's no data for this schema in this step. When you execute the cell, you should get an empty index on Azure AI Search."
]
},
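{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do want to override the default vector configuration, a minimal sketch looks like this. It uses the vector search classes from `azure.search.documents.indexes.models`; the algorithm and profile names are placeholders, and whether your version of `AzureSearch` accepts the result through a `vector_search` parameter is an assumption worth verifying before use.\n",
"\n",
"```python\n",
"from azure.search.documents.indexes.models import (\n",
"    HnswAlgorithmConfiguration,\n",
"    VectorSearch,\n",
"    VectorSearchProfile,\n",
")\n",
"\n",
"# An HNSW algorithm configuration plus a profile that references it.\n",
"custom_vector_search = VectorSearch(\n",
"    algorithms=[HnswAlgorithmConfiguration(name=\"myCustomHnsw\")],\n",
"    profiles=[\n",
"        VectorSearchProfile(\n",
"            name=\"myCustomProfile\", algorithm_configuration_name=\"myCustomHnsw\"\n",
"        )\n",
"    ],\n",
")\n",
"```"
]
},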
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"from azure.search.documents.indexes.models import (\n",
" ScoringProfile,\n",
" SearchableField,\n",
" SearchField,\n",
" SearchFieldDataType,\n",
" SimpleField,\n",
" TextWeights,\n",
")\n",
"\n",
"# Replace OpenAIEmbeddings with AzureOpenAIEmbeddings if Azure OpenAI is your provider.\n",
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(\n",
" openai_api_key=openai_api_key, openai_api_version=openai_api_version, model=model\n",
")\n",
"embedding_function = embeddings.embed_query\n",
"\n",
"fields = [\n",
" SimpleField(\n",
" name=\"id\",\n",
" type=SearchFieldDataType.String,\n",
" key=True,\n",
" filterable=True,\n",
" ),\n",
" SearchableField(\n",
" name=\"content\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" SearchField(\n",
" name=\"content_vector\",\n",
" type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
" searchable=True,\n",
" vector_search_dimensions=len(embedding_function(\"Text\")),\n",
" vector_search_profile_name=\"myHnswProfile\",\n",
" ),\n",
" SearchableField(\n",
" name=\"metadata\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field to store the title\n",
" SearchableField(\n",
" name=\"title\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field for filtering on document source\n",
" SimpleField(\n",
" name=\"source\",\n",
" type=SearchFieldDataType.String,\n",
" filterable=True,\n",
" ),\n",
"]\n",
"\n",
"index_name: str = \"langchain-vector-demo-custom\"\n",
"\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
" index_name=index_name,\n",
" embedding_function=embedding_function,\n",
" fields=fields,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add data and perform a query that includes a filter\n",
"\n",
"This example adds data to the vector store based on the custom schema. It loads text into the title and source fields. The source field is filterable. The sample query in this section filters the results based on content in the source field."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['ZjhmMTg0NTEtMjgwNC00N2M0LWFiZGEtMDllMGU1Mzk1NWRm',\n",
" 'MzQwYWUwZDEtNDJkZC00MzgzLWIwMzItYzMwOGZkYTRiZGRi',\n",
" 'ZjFmOWVlYTQtODRiMC00YTY3LTk2YjUtMzY1NDBjNjY5ZmQ2']"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Data in the metadata dictionary with a corresponding field in the index will be added to the index.\n",
"# In this example, the metadata dictionary contains a title, a source, and a random field.\n",
"# The title and the source are added to the index as separate fields, but the random value is ignored because it's not defined in the schema.\n",
"# The random field is only stored in the metadata field.\n",
"vector_store.add_texts(\n",
" [\"Test 1\", \"Test 2\", \"Test 3\"],\n",
" [\n",
" {\"title\": \"Title 1\", \"source\": \"A\", \"random\": \"10290\"},\n",
" {\"title\": \"Title 2\", \"source\": \"A\", \"random\": \"48392\"},\n",
" {\"title\": \"Title 3\", \"source\": \"B\", \"random\": \"32893\"},\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 3', metadata={'title': 'Title 3', 'source': 'B', 'random': '32893'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),\n",
" Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(query=\"Test 3 source1\", k=3, search_type=\"hybrid\")\n",
"res"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),\n",
" Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(\n",
" query=\"Test 3 source1\", k=3, search_type=\"hybrid\", filters=\"source eq 'A'\"\n",
")\n",
"res"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a new index with a scoring profile\n",
"\n",
"Here's another custom schema that includes a scoring profile definition. A scoring profile is used for relevance tuning of nonvector content, which is helpful in hybrid search scenarios."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"from azure.search.documents.indexes.models import (\n",
" FreshnessScoringFunction,\n",
" FreshnessScoringParameters,\n",
" ScoringProfile,\n",
" SearchableField,\n",
" SearchField,\n",
" SearchFieldDataType,\n",
" SimpleField,\n",
" TextWeights,\n",
")\n",
"\n",
"# Replace OpenAIEmbeddings with AzureOpenAIEmbeddings if Azure OpenAI is your provider.\n",
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(\n",
" openai_api_key=openai_api_key, openai_api_version=openai_api_version, model=model\n",
")\n",
"embedding_function = embeddings.embed_query\n",
"\n",
"fields = [\n",
" SimpleField(\n",
" name=\"id\",\n",
" type=SearchFieldDataType.String,\n",
" key=True,\n",
" filterable=True,\n",
" ),\n",
" SearchableField(\n",
" name=\"content\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" SearchField(\n",
" name=\"content_vector\",\n",
" type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
" searchable=True,\n",
" vector_search_dimensions=len(embedding_function(\"Text\")),\n",
" vector_search_profile_name=\"myHnswProfile\",\n",
" ),\n",
" SearchableField(\n",
" name=\"metadata\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field to store the title\n",
" SearchableField(\n",
" name=\"title\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field for filtering on document source\n",
" SimpleField(\n",
" name=\"source\",\n",
" type=SearchFieldDataType.String,\n",
" filterable=True,\n",
" ),\n",
" # Additional data field for last doc update\n",
" SimpleField(\n",
" name=\"last_update\",\n",
" type=SearchFieldDataType.DateTimeOffset,\n",
" searchable=True,\n",
" filterable=True,\n",
" ),\n",
"]\n",
"# Adding a custom scoring profile with a freshness function\n",
"sc_name = \"scoring_profile\"\n",
"sc = ScoringProfile(\n",
" name=sc_name,\n",
" text_weights=TextWeights(weights={\"title\": 5}),\n",
" function_aggregation=\"sum\",\n",
" functions=[\n",
" FreshnessScoringFunction(\n",
" field_name=\"last_update\",\n",
" boost=100,\n",
" parameters=FreshnessScoringParameters(boosting_duration=\"P2D\"),\n",
" interpolation=\"linear\",\n",
" )\n",
" ],\n",
")\n",
"\n",
"index_name = \"langchain-vector-demo-custom-scoring-profile\"\n",
"\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
" index_name=index_name,\n",
" embedding_function=embeddings.embed_query,\n",
" fields=fields,\n",
" scoring_profiles=[sc],\n",
" default_scoring_profile=sc_name,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['NjUwNGQ5ZDUtMGVmMy00OGM4LWIxMGYtY2Y2MDFmMTQ0MjE5',\n",
" 'NWFjN2YwY2UtOWQ4Yi00OTNhLTg2MGEtOWE0NGViZTVjOGRh',\n",
" 'N2Y2NWUyZjctMDBjZC00OGY4LWJlZDEtNTcxYjQ1MmI1NjYx']"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Adding same data with different last_update to show Scoring Profile effect\n",
"from datetime import datetime, timedelta\n",
"\n",
"today = datetime.utcnow().strftime(\"%Y-%m-%dT%H:%M:%S-00:00\")\n",
"yesterday = (datetime.utcnow() - timedelta(days=1)).strftime(\"%Y-%m-%dT%H:%M:%S-00:00\")\n",
"one_month_ago = (datetime.utcnow() - timedelta(days=30)).strftime(\n",
" \"%Y-%m-%dT%H:%M:%S-00:00\"\n",
")\n",
"\n",
"vector_store.add_texts(\n",
" [\"Test 1\", \"Test 1\", \"Test 1\"],\n",
" [\n",
" {\n",
" \"title\": \"Title 1\",\n",
" \"source\": \"source1\",\n",
" \"random\": \"10290\",\n",
" \"last_update\": today,\n",
" },\n",
" {\n",
" \"title\": \"Title 1\",\n",
" \"source\": \"source1\",\n",
" \"random\": \"48392\",\n",
" \"last_update\": yesterday,\n",
" },\n",
" {\n",
" \"title\": \"Title 1\",\n",
" \"source\": \"source1\",\n",
" \"random\": \"32893\",\n",
" \"last_update\": one_month_ago,\n",
" },\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '32893', 'last_update': '2024-01-24T22:18:51-00:00'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '48392', 'last_update': '2024-02-22T22:18:51-00:00'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '10290', 'last_update': '2024-02-23T22:18:51-00:00'})]"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(query=\"Test 1\", k=3, search_type=\"similarity\")\n",
"res"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.13 ('.venv': venv)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"vscode": {
"interpreter": {
"hash": "645053d6307d413a1a75681b5ebb6449bb2babba4bcb0bf65a1ddc3dbefb108a"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Upstash Vector\n",
"\n",
"> [Upstash Vector](https://upstash.com/docs/vector/overall/whatisvector) is a serverless vector database designed for working with vector embeddings.\n",
">\n",
"> The vector langchain integration is a wrapper around the [upstash-vector](https://github.com/upstash/vector-py) package.\n",
">\n",
"> The python package uses the [vector rest api](https://upstash.com/docs/vector/api/get-started) behind the scenes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"Create a free vector database from [upstash console](https://console.upstash.com/vector) with the desired dimensions and distance metric.\n",
"\n",
"You can then create an `UpstashVectorStore` instance by:\n",
"\n",
"- Providing the environment variables `UPSTASH_VECTOR_URL` and `UPSTASH_VECTOR_TOKEN`\n",
"\n",
"- Giving them as parameters to the constructor\n",
"\n",
"- Passing an Upstash Vector `Index` instance to the constructor\n",
"\n",
"Also, an `Embeddings` instance is required to turn given texts into embeddings. Here we use `OpenAIEmbeddings` as an example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install langchain-openai langchain langchain-community upstash-vector"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores.upstash import UpstashVectorStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR_OPENAI_KEY>\"\n",
"os.environ[\"UPSTASH_VECTOR_REST_URL\"] = \"<YOUR_UPSTASH_VECTOR_URL>\"\n",
"os.environ[\"UPSTASH_VECTOR_REST_TOKEN\"] = \"<YOUR_UPSTASH_VECTOR_TOKEN>\"\n",
"\n",
"# Create an embeddings instance\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"# Create a vector store instance\n",
"store = UpstashVectorStore(embedding=embeddings)"
]
},
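{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here's a minimal sketch of the other two construction options from the list above (the credential strings are placeholders):\n",
"\n",
"```python\n",
"from upstash_vector import Index\n",
"\n",
"# Option 2: pass the URL and token directly as constructor parameters.\n",
"store = UpstashVectorStore(\n",
"    index_url=\"<YOUR_UPSTASH_VECTOR_URL>\",\n",
"    index_token=\"<YOUR_UPSTASH_VECTOR_TOKEN>\",\n",
"    embedding=embeddings,\n",
")\n",
"\n",
"# Option 3: pass an existing Upstash Vector `Index` instance.\n",
"index = Index(url=\"<YOUR_UPSTASH_VECTOR_URL>\", token=\"<YOUR_UPSTASH_VECTOR_TOKEN>\")\n",
"store = UpstashVectorStore(index=index, embedding=embeddings)\n",
"```"
]
},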
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An alternative way of creating `UpstashVectorStore` is to [create an Upstash Vector index by selecting a model](https://upstash.com/docs/vector/features/embeddingmodels#using-a-model) and passing `embedding=True`. In this configuration, documents or queries will be sent to Upstash as text and embedded there.\n",
"\n",
"```python\n",
"store = UpstashVectorStore(embedding=True)\n",
"```\n",
"\n",
"If you are interested in trying out this approach, you can update the initialization of `store` like above and run the rest of the tutorial."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load documents\n",
"\n",
"Load an example text file and split it into chunks which can be turned into vector embeddings."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': '../../how_to/state_of_the_union.txt'}, page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'),\n",
" Document(metadata={'source': '../../how_to/state_of_the_union.txt'}, page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \\n\\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \\n\\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \\n\\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \\n\\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \\n\\nThey keep moving. \\n\\nAnd the costs and the threats to America and the world keep rising. \\n\\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \\n\\nThe United States is a member along with 29 other nations. \\n\\nIt matters. American diplomacy matters. American resolve matters.'),\n",
" Document(metadata={'source': '../../how_to/state_of_the_union.txt'}, page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \\n\\nHe rejected repeated efforts at diplomacy. \\n\\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \\n\\nWe prepared extensively and carefully. \\n\\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \\n\\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \\n\\nWe countered Russia’s lies with truth. \\n\\nAnd now that he has acted the free world is holding him accountable. \\n\\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"docs[:3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inserting documents\n",
"\n",
"The vectorstore embeds text chunks using the embedding object and batch inserts them into the database. This returns an id array of the inserted vectors."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['247aa3ae-9be9-43e2-98e4-48f94f920749',\n",
" 'c4dfc886-0a2d-497c-b2b7-d923a5cb3832',\n",
" '0350761d-ca68-414e-b8db-7eca78cb0d18',\n",
" '902fe5eb-8543-486a-bd5f-79858a7a8af1',\n",
" '28875612-c672-4de4-b40a-3b658c72036a']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inserted_vectors = store.add_documents(docs)\n",
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Activeloop Deep Lake\n",
"\n",
">[Activeloop Deep Lake](https://docs.activeloop.ai/) as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.\n",
"\n",
"This notebook showcases basic functionality related to `Activeloop Deep Lake`. While `Deep Lake` can store embeddings, it is capable of storing any type of data. It is a serverless data lake with version control, query engine and streaming dataloaders to deep learning frameworks. \n",
"\n",
"For more information, please see the Deep Lake [documentation](https://docs.activeloop.ai) or [api reference](https://docs.deeplake.ai)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-openai langchain-community 'deeplake[enterprise]' tiktoken"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example provided by Activeloop\n",
"\n",
"[Integration with LangChain](https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deep Lake locally"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"activeloop token:\")\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a local dataset\n",
"\n",
"Create a dataset locally at `./deeplake/`, then run similarity search. The Deeplake+LangChain integration uses Deep Lake datasets under the hood, so `dataset` and `vector store` are used interchangeably. To create a dataset in your own cloud, or in the Deep Lake storage, [adjust the path accordingly](https://docs.activeloop.ai/storage-and-credentials/storage-options)."
]
},
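{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch, alternative dataset paths look like the following (the org and bucket names are placeholders, and cloud paths need credentials configured for your environment):\n",
"\n",
"```python\n",
"# Deep Lake (Activeloop) managed storage:\n",
"# db = DeepLake(dataset_path=\"hub://<org_id>/my_dataset\", embedding=embeddings)\n",
"\n",
"# Your own cloud, e.g. an S3 bucket:\n",
"# db = DeepLake(dataset_path=\"s3://my-bucket/my_deeplake\", embedding=embeddings)\n",
"```"
]
},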
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db = DeepLake(dataset_path=\"./my_deeplake/\", embedding=embeddings, overwrite=True)\n",
"db.add_documents(docs)\n",
"# or shorter\n",
"# db = DeepLake.from_documents(docs, dataset_path=\"./my_deeplake/\", embedding=embeddings, overwrite=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Query dataset"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": []
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset(path='./my_deeplake/', tensors=['embedding', 'id', 'metadata', 'text'])\n",
"\n",
" tensor htype shape dtype compression\n",
" ------- ------- ------- ------- ------- \n",
" embedding embedding (42, 1536) float32 None \n",
" id text (42, 1) str None \n",
" metadata json (42, 1) str None \n",
" text text (42, 1) str None \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": []
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = db.similarity_search(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To disable dataset summary printings all the time, you can specify verbose=False during VectorStore initialization."
]
},
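{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal sketch reusing the local path from above:\n",
"\n",
"```python\n",
"db = DeepLake(dataset_path=\"./my_deeplake/\", embedding=embeddings, verbose=False)\n",
"```"
]
},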
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Later, you can reload the dataset without recomputing embeddings"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage\n"
]
}
],
"source": [
"db = DeepLake(dataset_path=\"./my_deeplake/\", embedding=embeddings, read_only=True)\n",
"docs = db.similarity_search(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deep Lake, for now, is single writer and multiple reader. Setting `read_only=True` helps to avoid acquiring the writer lock."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieval Question/Answering"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/llms/openai.py:786: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain_openai import ChatOpenAI`\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain.chains import RetrievalQA\n",
{
"cells": [
{
"cell_type": "markdown",
"id": "fb0243ae",
"metadata": {},
"source": [
"# Azure Cosmos DB No SQL\n",
"\n",
"This notebook shows you how to leverage this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) to store documents in collections, create indicies and perform vector search queries using approximate nearest neighbor algorithms such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product) to locate documents close to the query vectors. \n",
" \n",
"Azure Cosmos DB is the database that powers OpenAI's ChatGPT service. It offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. \n",
"\n",
"[Azure Cosmos DB for NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/vector-search) now offers vector indexing and search in preview. This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data, but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching, as the vectors are stored in the same logical unit as the data they represent. This simplifies data management, AI application architectures, and the efficiency of vector-based operations.\n",
"\n",
"[Sign Up](https://azure.microsoft.com/en-us/free/) for lifetime free access to get started today."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ad3c1e88",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet azure-cosmos langchain-openai langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "c507b0e8",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:36:53.595385Z",
"start_time": "2024-05-25T01:36:53.571737Z"
}
},
"outputs": [],
"source": [
"OPENAI_API_KEY = \"YOUR_KEY\"\n",
"OPENAI_API_TYPE = \"azure\"\n",
"OPENAI_API_VERSION = \"2023-05-15\"\n",
"OPENAI_API_BASE = \"YOUR_ENDPOINT\"\n",
"OPENAI_EMBEDDINGS_MODEL_NAME = \"text-embedding-ada-002\"\n",
"OPENAI_EMBEDDINGS_MODEL_DEPLOYMENT = \"text-embedding-ada-002\""
]
},
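{
"cell_type": "markdown",
"metadata": {},
"source": [
"These constants are consumed later when the embeddings client is constructed. A minimal sketch of that step, assuming `AzureOpenAIEmbeddings` from `langchain_openai` (the parameter values come from the constants above):\n",
"\n",
"```python\n",
"from langchain_openai import AzureOpenAIEmbeddings\n",
"\n",
"openai_embeddings = AzureOpenAIEmbeddings(\n",
"    model=OPENAI_EMBEDDINGS_MODEL_NAME,\n",
"    azure_deployment=OPENAI_EMBEDDINGS_MODEL_DEPLOYMENT,\n",
"    azure_endpoint=OPENAI_API_BASE,\n",
"    api_key=OPENAI_API_KEY,\n",
"    openai_api_version=OPENAI_API_VERSION,\n",
")\n",
"```"
]
},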
{
"cell_type": "markdown",
"id": "aa7101f64740fb76",
"metadata": {},
"source": [
"## Insert Data"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "8205cd27",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:43:02.731634Z",
"start_time": "2024-05-25T01:43:00.383956Z"
}
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"# Load the PDF\n",
"loader = PyPDFLoader(\"https://arxiv.org/pdf/2303.08774.pdf\")\n",
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "8d33cceb",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:43:02.787966Z",
"start_time": "2024-05-25T01:43:02.763502Z"
}
},
"outputs": [],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)\n",
"docs = text_splitter.split_documents(data)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "6a80f1c2",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:43:04.582560Z",
"start_time": "2024-05-25T01:43:04.578948Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='GPT-4 Technical Report\\nOpenAI∗\\nAbstract\\nWe report the development of GPT-4, a large-scale, multimodal model which can\\naccept image and text inputs and produce text outputs. While less capable than\\nhumans in many real-world scenarios, GPT-4 exhibits human-level performance\\non various professional and academic benchmarks, including passing a simulated\\nbar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-\\nbased model pre-trained to predict the next token in a document. The post-training\\nalignment process results in improved performance on measures of factuality and\\nadherence to desired behavior. A core component of this project was developing\\ninfrastructure and optimization methods that behave predictably across a wide\\nrange of scales. This allowed us to accurately predict some aspects of GPT-4’s\\nperformance based on models trained with no more than 1/1,000th the compute of\\nGPT-4.\\n1 Introduction' metadata={'source': 'https://arxiv.org/pdf/2303.08774.pdf', 'page': 0}\n"
]
}
],
"source": [
"print(docs[0])"
]
},
{
"cell_type": "markdown",
"id": "fd1f13e237e91052",
"metadata": {},
"source": [
"## Creating AzureCosmosDB NoSQL Vector Search"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "04c72ccc",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:43:13.279497Z",
"start_time": "2024-05-25T01:43:13.275379Z"
}
},
"outputs": [],
"source": [
"indexing_policy = {\n",
" \"indexingMode\": \"consistent\",\n",
" \"includedPaths\": [{\"path\": \"/*\"}],\n",
" \"excludedPaths\": [{\"path\": '/\"_etag\"/?'}],\n",
" \"vectorIndexes\": [{\"path\": \"/embedding\", \"type\": \"quantizedFlat\"}],\n",
"}\n",
"\n",
"vector_embedding_policy = {\n",
" \"vectorEmbeddings\": [\n",
" {\n",
" \"path\": \"/embedding\",\n",
" \"dataType\": \"float32\",\n",
" \"distanceFunction\": \"cosine\",\n",
" \"dimensions\": 1536,\n",
" }\n",
" ]\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "4ebad8ef01a6c04f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-25T01:48:42.981276Z",
"start_time": "2024-05-25T01:44:55.468667Z"
}
},
"outputs": [],
"source": [
"from azure.cosmos import CosmosClient, PartitionKey\n",
"from langchain_community.vectorstores.azure_cosmos_db_no_sql import (\n",
" AzureCosmosDBNoSqlVectorSearch,\n",
")\n",
"from langchain_openai import AzureOpenAIEmbeddings\n",
"\n",
"HOST = \"AZURE_COSMOS_DB_ENDPOINT\"\n",
"KEY = \"AZURE_COSMOS_DB_KEY\"\n",
"\n",
"cosmos_client = CosmosClient(HOST, KEY)\n",
"source": [
"## Basic Vectorstore Operations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db = HanaDB(\n",
" connection=connection, embedding=embeddings, table_name=\"LANGCHAIN_DEMO_BASIC\"\n",
")\n",
"\n",
"# Delete already existing documents from the table\n",
"db.delete(filter={})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can add simple text documents to the existing table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = [Document(page_content=\"Some text\"), Document(page_content=\"Other docs\")]\n",
"db.add_documents(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Add documents with metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = [\n",
" Document(\n",
" page_content=\"foo\",\n",
" metadata={\"start\": 100, \"end\": 150, \"doc_name\": \"foo.txt\", \"quality\": \"bad\"},\n",
" ),\n",
" Document(\n",
" page_content=\"bar\",\n",
" metadata={\"start\": 200, \"end\": 250, \"doc_name\": \"bar.txt\", \"quality\": \"good\"},\n",
" ),\n",
"]\n",
"db.add_documents(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Query documents with specific metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = db.similarity_search(\"foobar\", k=2, filter={\"quality\": \"bad\"})\n",
"# With filtering on \"quality\"==\"bad\", only one document should be returned\n",
"for doc in docs:\n",
" print(\"-\" * 80)\n",
" print(doc.page_content)\n",
" print(doc.metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Delete documents with specific metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db.delete(filter={\"quality\": \"bad\"})\n",
"\n",
"# Now the similarity search with the same filter will return no results\n",
"docs = db.similarity_search(\"foobar\", k=2, filter={\"quality\": \"bad\"})\n",
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced filtering\n",
"In addition to the basic value-based filtering capabilities, it is possible to use more advanced filtering.\n",
"The table below shows the available filter operators.\n",
"\n",
"| Operator | Semantic |\n",
"|----------|-------------------------|\n",
"| `$eq` | Equality (==) |\n",
"| `$ne` | Inequality (!=) |\n",
"| `$lt` | Less than (<) |\n",
"| `$lte` | Less than or equal (<=) |\n",
"| `$gt` | Greater than (>) |\n",
"| `$gte` | Greater than or equal (>=) |\n",
"| `$in` | Contained in a set of given values (in) |\n",
"| `$nin` | Not contained in a set of given values (not in) |\n",
"| `$between` | Between the range of two boundary values |\n",
"| `$like` | Text equality based on the \"LIKE\" semantics in SQL (using \"%\" as wildcard) |\n",
"| `$and` | Logical \"and\", supporting 2 or more operands |\n",
"| `$or` | Logical \"or\", supporting 2 or more operands |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Prepare some test documents\n",
"docs = [\n",
" Document(\n",
" page_content=\"First\",\n",
" metadata={\"name\": \"adam\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n",
" ),\n",
" Document(\n",
" page_content=\"Second\",\n",
" metadata={\"name\": \"bob\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n",
" ),\n",
" Document(\n",
" page_content=\"Third\",\n",
" metadata={\"name\": \"jane\", \"is_active\": True, \"id\": 3, \"height\": 2.4},\n",
" ),\n",
"]\n",
"\n",
"db = HanaDB(\n",
" connection=connection,\n",
" embedding=embeddings,\n",
" table_name=\"LANGCHAIN_DEMO_ADVANCED_FILTER\",\n",
")\n",
"\n",
"# Delete already existing documents from the table\n",
"db.delete(filter={})\n",
"db.add_documents(docs)\n",
"\n",
"\n",
"# Helper function for printing filter results\n",
"def print_filter_result(result):\n",
" if len(result) == 0:\n",
" print(\"<empty result>\")\n",
" for doc in result:\n",
" print(doc.metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Filtering with `$ne`, `$gt`, `$gte`, `$lt`, `$lte`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"advanced_filter = {\"id\": {\"$ne\": 1}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"id\": {\"$gt\": 1}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"id\": {\"$gte\": 1}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"id\": {\"$lt\": 1}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"id\": {\"$lte\": 1}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Filtering with `$between`, `$in`, `$nin`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"advanced_filter = {\"id\": {\"$between\": (1, 2)}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$in\": [\"adam\", \"bob\"]}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$nin\": [\"adam\", \"bob\"]}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Text filtering with `$like`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"advanced_filter = {\"name\": {\"$like\": \"a%\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$like\": \"%a%\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Combined filtering with `$and`, `$or`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"advanced_filter = {\"$or\": [{\"id\": 1}, {\"name\": \"bob\"}]}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"$and\": [{\"id\": 1}, {\"id\": 2}]}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"$or\": [{\"id\": 1}, {\"id\": 2}, {\"id\": 3}]}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Access the vector DB with a new table\n",
"db = HanaDB(\n",
" connection=connection,\n",
" embedding=embeddings,\n",
" table_name=\"LANGCHAIN_DEMO_RETRIEVAL_CHAIN\",\n",
")\n",
"\n",
"# Delete already existing entries from the table\n",
"db.delete(filter={})\n",
"\n",
"# add the loaded document chunks from the \"State Of The Union\" file\n",
"db.add_documents(text_chunks)\n",
"\n",
"# Create a retriever instance of the vector store\n",
"retriever = db.as_retriever()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt_template = \"\"\"\n",
"You are an expert in state of the union topics. You are provided multiple context items that are related to the prompt you have to answer.\n",
"Use the following pieces of context to answer the question at the end.\n",
"\n",
"'''\n",
"{context}\n",
"'''\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")\n",
"chain_type_kwargs = {\"prompt\": PROMPT}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create the ConversationalRetrievalChain, which handles the chat history and the retrieval of similar document chunks to be added to the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"memory = ConversationBufferMemory(\n",
" memory_key=\"chat_history\", output_key=\"answer\", return_messages=True\n",
")\n",
"qa_chain = ConversationalRetrievalChain.from_llm(\n",
" llm,\n",
" db.as_retriever(search_kwargs={\"k\": 5}),\n",
" return_source_documents=True,\n",
" memory=memory,\n",
" verbose=False,\n",
" combine_docs_chain_kwargs={\"prompt\": PROMPT},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ask the first question (and verify how many text chunks have been used)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"What about Mexico and Guatemala?\"\n",
"\n",
"result = qa_chain.invoke({\"question\": question})\n",
"print(\"Answer from LLM:\")\n",
"print(\"================\")\n",
"print(result[\"answer\"])\n",
"\n",
"source_docs = result[\"source_documents\"]\n",
"print(\"================\")\n",
"print(f\"Number of used source document chunks: {len(source_docs)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Examine the used chunks of the chain in detail. Check if the best ranked chunk contains info about \"Mexico and Guatemala\" as mentioned in the question."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for doc in source_docs:\n",
" print(\"-\" * 80)\n",
" print(doc.page_content)\n",
" print(doc.metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ask another question on the same conversational chain. The answer should relate to the previous answer given."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"What about other countries?\"\n",
"\n",
"result = qa_chain.invoke({\"question\": question})\n",
"print(\"Answer from LLM:\")\n",
"print(\"================\")\n",
"print(result[\"answer\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Standard tables vs. \"custom\" tables with vector data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As default behaviour, the table for the embeddings is created with 3 columns:\n",
"\n",
"- A column `VEC_TEXT`, which contains the text of the Document\n",
"- A column `VEC_META`, which contains the metadata of the Document\n",
"- A column `VEC_VECTOR`, which contains the embeddings-vector of the Document's text"
]
},
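{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the next cell has created the table, you can verify these columns with plain SQL. A minimal sketch, assuming `connection` is the `hdbcli` connection used throughout this notebook:\n",
"\n",
"```python\n",
"cur = connection.cursor()\n",
"cur.execute(\n",
"    \"SELECT COLUMN_NAME, DATA_TYPE_NAME FROM SYS.TABLE_COLUMNS \"\n",
"    \"WHERE TABLE_NAME = 'LANGCHAIN_DEMO_NEW_TABLE'\"\n",
")\n",
"print(cur.fetchall())\n",
"cur.close()\n",
"```"
]
},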
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Access the vector DB with a new table\n",
"db = HanaDB(\n",
" connection=connection, embedding=embeddings, table_name=\"LANGCHAIN_DEMO_NEW_TABLE\"\n",
")\n",
"\n",
"# Delete already existing entries from the table\n",
"db.delete(filter={})\n",
"\n",
{
"cells": [
{
"cell_type": "markdown",
"id": "e4afbbb6",
"metadata": {},
"source": [
"# ScaNN\n",
"\n",
"ScaNN (Scalable Nearest Neighbors) is a method for efficient vector similarity search at scale.\n",
"\n",
"ScaNN includes search space pruning and quantization for Maximum Inner Product Search and also supports other distance functions such as Euclidean distance. The implementation is optimized for x86 processors with AVX2 support. See its [Google Research github](https://github.com/google-research/google-research/tree/master/scann) for more details.\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration"
]
},
{
"cell_type": "markdown",
"id": "082f593e",
"metadata": {},
"source": [
"## Installation\n",
"Install ScaNN through pip. Alternatively, you can follow instructions on the [ScaNN Website](https://github.com/google-research/google-research/tree/master/scann#building-from-source) to install from source."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a35e4f09",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet scann"
]
},
{
"cell_type": "markdown",
"id": "44bf38a8",
"metadata": {},
"source": [
"## Retrieval Demo\n",
"\n",
"Below we show how to use ScaNN in conjunction with Huggingface Embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "377bc723",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import ScaNN\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"loader = TextLoader(\"state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"\n",
"model_name = \"sentence-transformers/all-mpnet-base-v2\"\n",
"embeddings = HuggingFaceEmbeddings(model_name=model_name)\n",
"\n",
"db = ScaNN.from_documents(docs, embeddings)\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = db.similarity_search(query)\n",
"\n",
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "9ad5b151",
"metadata": {},
"source": [
"## RetrievalQA Demo\n",
"\n",
"Next, we demonstrate using ScaNN in conjunction with Google PaLM API.\n",
"\n",
"You can obtain an API key from https://developers.generativeai.google/tutorials/setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc27ad51",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_community.chat_models.google_palm import ChatGooglePalm\n",
"\n",
"palm_client = ChatGooglePalm(google_api_key=\"YOUR_GOOGLE_PALM_API_KEY\")\n",
"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=palm_client,\n",
" chain_type=\"stuff\",\n",
" retriever=db.as_retriever(search_kwargs={\"k\": 10}),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5b77f919",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The president said that Ketanji Brown Jackson is one of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.\n"
]
}
],
"source": [
"print(qa.run(\"What did the president say about Ketanji Brown Jackson?\"))"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0c6deec6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The president did not mention Michael Phelps in his speech.\n"
]
}
],
"source": [
"print(qa.run(\"What did the president say about Michael Phelps?\"))"
]
},
{
"cell_type": "markdown",
"id": "8a49f4a6",
"metadata": {},
"source": [
"## Save and loading local retrieval index"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "6b7496b9",
"metadata": {},
"outputs": [],
"source": [
"db.save_local(\"/tmp/db\", \"state_of_union\")\n",
"restored_db = ScaNN.load_local(\"/tmp/db\", embeddings, index_name=\"state_of_union\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Memorystore for Redis\n",
"\n",
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to store vector embeddings with the `MemorystoreVectorStore` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/vector_store.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-reqs"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 7.2."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-memorystore-redis` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-memorystore-redis langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize a Vector Index"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import redis\n",
"from langchain_google_memorystore_redis import (\n",
" DistanceStrategy,\n",
" HNSWConfig,\n",
" RedisVectorStore,\n",
")\n",
"\n",
"# Connect to a Memorystore for Redis instance\n",
"redis_client = redis.from_url(\"redis://127.0.0.1:6379\")\n",
"\n",
"# Configure HNSW index with descriptive parameters\n",
"index_config = HNSWConfig(\n",
" name=\"my_vector_index\", distance_strategy=DistanceStrategy.COSINE, vector_size=128\n",
")\n",
"\n",
"# Initialize/create the vector store index\n",
"RedisVectorStore.init_index(client=redis_client, index_config=index_config)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare Documents\n",
"\n",
"Text needs processing and numerical representation before interacting with a vector store. This involves:\n",
"\n",
"* Loading Text: The TextLoader obtains text data from a file (e.g., \"state_of_the_union.txt\").\n",
"* Text Splitting: The CharacterTextSplitter breaks the text into smaller chunks for embedding models."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"loader = TextLoader(\"./state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add Documents to the Vector Store\n",
"\n",
"After text preparation and embedding generation, the following methods insert them into the Redis vector store."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Method 1: Classmethod for Direct Insertion\n",
"\n",
"This approach combines embedding creation and insertion into a single step using the from_documents classmethod:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.fake import FakeEmbeddings\n",
"\n",
"embeddings = FakeEmbeddings(size=128)\n",
"redis_client = redis.from_url(\"redis://127.0.0.1:6379\")\n",
"rvs = RedisVectorStore.from_documents(\n",
" docs, embedding=embeddings, client=redis_client, index_name=\"my_vector_index\"\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Method 2: Instance-Based Insertion\n",
"\n",
"This approach offers flexibility when working with a new or existing RedisVectorStore:\n",
"\n",
"* [Optional] Create a RedisVectorStore Instance: Instantiate a RedisVectorStore object for customization. If you already have an instance, proceed to the next step.\n",
"* Add Text with Metadata: Provide raw text and metadata to the instance. Embedding generation and insertion into the vector store are handled automatically."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rvs = RedisVectorStore(\n",
" client=redis_client, index_name=\"my_vector_index\", embeddings=embeddings\n",
")\n",
"ids = rvs.add_texts(\n",
" texts=[d.page_content for d in docs], metadatas=[d.metadata for d in docs]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Perform a Similarity Search (KNN)\n",
"\n",
"With the vector store populated, it's possible to search for text semantically similar to a query. Here's how to use KNN (K-Nearest Neighbors) with default settings:\n",
"\n",
"* Formulate the Query: A natural language question expresses the search intent (e.g., \"What did the president say about Ketanji Brown Jackson\").\n",
"* Retrieve Similar Results: The `similarity_search` method finds items in the vector store closest to the query in meaning."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pprint\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"knn_results = rvs.similarity_search(query=query)\n",
"pprint.pprint(knn_results)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Perform a Range-Based Similarity Search\n",
"\n",
"Range queries provide more control by specifying a desired similarity threshold along with the query text:\n",
"\n",
"* Formulate the Query: A natural language question defines the search intent.\n",
"* Set Similarity Threshold: The distance_threshold parameter determines how close a match must be considered relevant.\n",
"* Retrieve Results: The `similarity_search_with_score` method finds items from the vector store that fall within the specified similarity threshold."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rq_results = rvs.similarity_search_with_score(query=query, distance_threshold=0.8)\n",
"pprint.pprint(rq_results)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Perform a Maximal Marginal Relevance (MMR) Search\n",
"\n",
"MMR queries aim to find results that are both relevant to the query and diverse from each other, reducing redundancy in search results.\n",
"\n",
"* Formulate the Query: A natural language question defines the search intent.\n",
"* Balance Relevance and Diversity: The lambda_mult parameter controls the trade-off between strict relevance and promoting variety in the results.\n",
"* Retrieve MMR Results: The `max_marginal_relevance_search` method returns items that optimize the combination of relevance and diversity based on the lambda setting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"mmr_results = rvs.max_marginal_relevance_search(query=query, lambda_mult=0.90)\n",
"pprint.pprint(mmr_results)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use the Vector Store as a Retriever\n",
"\n",
"For seamless integration with other LangChain components, a vector store can be converted into a Retriever. This offers several advantages:\n",
"\n",
"* LangChain Compatibility: Many LangChain tools and methods are designed to directly interact with retrievers.\n",
"* Ease of Use: The `as_retriever()` method converts the vector store into a format that simplifies querying."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"retriever = rvs.as_retriever()\n",
"results = retriever.invoke(query)\n",
"pprint.pprint(results)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete Documents from the Vector Store\n",
"\n",
"Occasionally, it's necessary to remove documents (and their associated vectors) from the vector store. The `delete` method provides this functionality."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rvs.delete(ids)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete a Vector Index\n",
"\n",
"There might be circumstances where the deletion of an existing vector index is necessary. Common reasons include:\n",
"\n",
"* Index Configuration Changes: If index parameters need modification, it's often required to delete and recreate the index.\n",
"* Storage Management: Removing unused indices can help free up space within the Redis instance.\n",
"\n",
"Caution: Vector index deletion is an irreversible operation. Be certain that the stored vectors and search functionality are no longer required before proceeding."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# Delete the vector index\n",
"RedisVectorStore.drop_index(client=redis_client, index_name=\"my_vector_index\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Kinetica\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Kinetica Vectorstore API\n",
"\n",
[Kinetica](https://www.kinetica.com/)">
">[Kinetica](https://www.kinetica.com/) is a database with integrated support for vector similarity search.\n",
"\n",
"It supports:\n",
"- exact and approximate nearest neighbor search\n",
"- L2 distance, inner product, and cosine distance\n",
"\n",
"This notebook shows how to use the Kinetica vector store (`Kinetica`)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs an instance of Kinetica which can easily be setup using the instructions given here - [installation instruction](https://www.kinetica.com/developer-edition/)."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Requirement already satisfied: gpudb==7.2.0.0b in /home/anindyam/kinetica/kinetica-github/langchain/libs/langchain/.venv/lib/python3.8/site-packages (7.2.0.0b0)\n",
"Requirement already satisfied: future in /home/anindyam/kinetica/kinetica-github/langchain/libs/langchain/.venv/lib/python3.8/site-packages (from gpudb==7.2.0.0b) (0.18.3)\n",
"Requirement already satisfied: pyzmq in /home/anindyam/kinetica/kinetica-github/langchain/libs/langchain/.venv/lib/python3.8/site-packages (from gpudb==7.2.0.0b) (25.1.2)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"# Pip install necessary package\n",
"%pip install --upgrade --quiet langchain-openai langchain-community\n",
"%pip install gpudb==7.2.0.9\n",
"%pip install --upgrade --quiet tiktoken"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"False"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"## Loading Environment Variables\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import (\n",
" DistanceStrategy,\n",
" Kinetica,\n",
" KineticaSettings,\n",
")\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# Kinetica needs the connection to the database.\n",
"# This is how to set it up.\n",
"HOST = os.getenv(\"KINETICA_HOST\", \"http://127.0.0.1:9191\")\n",
"USERNAME = os.getenv(\"KINETICA_USERNAME\", \"\")\n",
"PASSWORD = os.getenv(\"KINETICA_PASSWORD\", \"\")\n",
"OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\", \"\")\n",
"\n",
"\n",
"def create_config() -> KineticaSettings:\n",
" return KineticaSettings(host=HOST, username=USERNAME, password=PASSWORD)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity Search with Euclidean Distance (Default)"
]
},
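{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the usual flow, assuming the `docs`, `embeddings`, and `create_config()` defined above (the collection name here is an arbitrary choice for this example):\n",
"\n",
"```python\n",
"# Build the store from the chunked documents; the collection name below is hypothetical.\n",
"COLLECTION_NAME = \"state_of_the_union_test\"\n",
"\n",
"db = Kinetica.from_documents(\n",
"    documents=docs,\n",
"    embedding=embeddings,\n",
"    collection_name=COLLECTION_NAME,\n",
"    config=create_config(),\n",
")\n",
"\n",
"# Run a scored similarity search (Euclidean distance is the default).\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs_with_score = db.similarity_search_with_score(query)\n",
"```"
]
},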
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Momento Vector Index (MVI)\n",
"\n",
">[MVI](https://gomomento.com): the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs.\n",
"\n",
"To sign up and access MVI, visit the [Momento Console](https://console.gomomento.com)."
]
},
{
"cell_type": "markdown",
"id": "82581e78",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"id": "3120d063",
"metadata": {},
"source": [
"## Install prerequisites"
]
},
{
"cell_type": "markdown",
"id": "9d7e5fd5",
"metadata": {},
"source": [
"You will need:\n",
"- the [`momento`](https://pypi.org/project/momento/) package for interacting with MVI, and\n",
"- the openai package for interacting with the OpenAI API.\n",
"- the tiktoken package for tokenizing text."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a62cff8a-bcf7-4e33-bbbc-76999c2e3e20",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet momento langchain-openai langchain-community tiktoken"
]
},
{
"cell_type": "markdown",
"id": "8317b9df",
"metadata": {},
"source": [
"## Enter API keys"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4b96eed5",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os"
]
},
{
"cell_type": "markdown",
"id": "7ce4b6f7",
"metadata": {},
"source": [
"### Momento: for indexing data"
]
},
{
"cell_type": "markdown",
"id": "78b8b2ee",
"metadata": {},
"source": [
"Visit the [Momento Console](https://console.gomomento.com) to get your API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "211407a8",
"metadata": {},
"outputs": [],
"source": [
"if \"MOMENTO_API_KEY\" not in os.environ:\n",
" os.environ[\"MOMENTO_API_KEY\"] = getpass.getpass(\"Momento API Key:\")"
]
},
{
"cell_type": "markdown",
"id": "08148c5f",
"metadata": {},
"source": [
"### OpenAI: for text embeddings"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "8b6ed9cd-81b9-46e5-9c20-5aafca2844d0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "markdown",
"id": "347932a6",
"metadata": {},
"source": [
"# Load your data"
]
},
{
"cell_type": "markdown",
"id": "2cfa2538",
"metadata": {},
"source": [
"Here we use the example dataset from Langchain, the state of the union address.\n",
"\n",
"First we load relevant modules:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "aac9563e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import MomentoVectorIndex\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
{
"cell_type": "markdown",
"id": "f75e1221",
"metadata": {},
"source": [
"Then we load the data:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "a3c3999a",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"len(documents)"
]
},
{
"cell_type": "markdown",
"id": "31a90e56",
"metadata": {},
"source": [
"Note the data is one large file, hence there is only one document:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "1926aaae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"38539"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(documents[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "1ff35d84",
"metadata": {},
"source": [
"Because this is one large text file, we split it into chunks for question answering. That way, user questions will be answered from the most relevant chunk."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "1de69459",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"42"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"len(docs)"
]
},
{
"cell_type": "markdown",
"id": "cb7854c1",
"metadata": {},
"source": [
"# Index your data"
]
},
{
"cell_type": "markdown",
"id": "42059ec1",
"metadata": {},
"source": [
"Indexing your data is as simple as instantiating the `MomentoVectorIndex` object. Here we use the `from_documents` helper to both instantiate and index the data:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "dcf88bdf",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vector_db = MomentoVectorIndex.from_documents(\n",
" docs, OpenAIEmbeddings(), index_name=\"sotu\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "225cd0e2",
"metadata": {},
"source": [
"This connects to the Momento Vector Index service using your API key and indexes the data. If the index did not exist before, this process creates it for you. The data is now searchable."
]
},
{
"cell_type": "markdown",
"id": "ffb2c44e",
"metadata": {},
"source": [
"# Query your data"
]
},
{
"cell_type": "markdown",
"id": "e705a976",
"metadata": {},
"source": [
"## Ask a question directly against the index"
]
},
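{
"cell_type": "markdown",
"metadata": {},
"source": [
"The most direct way to query is a similarity search against the index. A minimal sketch, assuming the `vector_db` created above:\n",
"\n",
"```python\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = vector_db.similarity_search(query)\n",
"print(docs[0].page_content)\n",
"```"
]
},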
{
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Faiss (Async)\n",
"\n",
">[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also includes supporting code for evaluation and parameter tuning.\n",
">\n",
">See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper.\n",
"\n",
"[Faiss documentation](https://faiss.ai/).\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration\n",
"\n",
"This notebook shows how to use functionality related to the `FAISS` vector database using `asyncio`.\n",
"LangChain implemented the synchronous and asynchronous vector store functions.\n",
"\n",
"See `synchronous` version [here](/docs/integrations/vectorstores/faiss)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "497fcd89-e832-46a7-a74a-c71199666206",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss-gpu # For CUDA 7.5+ Supported GPU's.\n",
"# OR\n",
"%pip install --upgrade --quiet faiss-cpu # For CPU Installation"
]
},
{
"cell_type": "markdown",
"id": "38237514-b3fa-44a4-9cff-30cd6bf50073",
"metadata": {},
"source": [
"We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "971a172a-2d87-4eec-be92-87aa174fec30",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"\n",
"# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization\n",
"# os.environ['FAISS_NO_AVX2'] = '1'\n",
"\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"loader = TextLoader(\"../../../extras/modules/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"db = await FAISS.afrom_documents(docs, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = await db.asimilarity_search(query)\n",
"\n",
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "f13473b5",
"metadata": {},
"source": [
"## Similarity Search with score\n",
"There are some FAISS specific methods. One of them is `similarity_search_with_score`, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "30bf7c85-a273-45dc-ae9e-f138e330b42e",
"metadata": {},
"outputs": [],
"source": [
"docs_and_scores = await db.asimilarity_search_with_score(query)\n",
"\n",
"docs_and_scores[0]"
]
},
{
"cell_type": "markdown",
"id": "f34420cf",
"metadata": {},
"source": [
"It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b558ebb7",
"metadata": {},
"outputs": [],
"source": [
"embedding_vector = await embeddings.aembed_query(query)\n",
"docs_and_scores = await db.asimilarity_search_by_vector(embedding_vector)"
]
},
{
"cell_type": "markdown",
"id": "31bda7fd",
"metadata": {},
"source": [
"## Saving and loading\n",
"You can also save and load a FAISS index. This is useful so you don't have to recreate it everytime you use it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88e11f08-1ac8-45aa-8bc0-56439ef87256",
"metadata": {},
"outputs": [],
"source": [
"db.save_local(\"faiss_index\")\n",
"\n",
"new_db = FAISS.load_local(\"faiss_index\", embeddings, asynchronous=True)\n",
"\n",
"docs = await new_db.asimilarity_search(query)\n",
"\n",
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "30c8f57b",
"metadata": {},
"source": [
"# Serializing and De-Serializing to bytes\n",
"\n",
"you can pickle the FAISS Index by these functions. If you use embeddings model which is of 90 mb (sentence-transformers/all-MiniLM-L6-v2 or any other model), the resultant pickle size would be more than 90 mb. the size of the model is also included in the overall size. To overcome this, use the below functions. These functions only serializes FAISS index and size would be much lesser. this can be helpful if you wish to store the index in database like sql."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e36e220b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"\n",
"pkl = db.serialize_to_bytes() # serializes the faiss index\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"db = FAISS.deserialize_from_bytes(\n",
" embeddings=embeddings, serialized=pkl, asynchronous=True\n",
") # Load the index"
]
},
{
"cell_type": "markdown",
"id": "57da60d4",
"metadata": {},
"source": [
"## Merging\n",
"You can also merge two FAISS vectorstores"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "6dfd2b78",
"metadata": {},
"outputs": [],
"source": [
"db1 = await FAISS.afrom_texts([\"foo\"], embeddings)\n",
"db2 = await FAISS.afrom_texts([\"bar\"], embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "29960da7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'8164a453-9643-4959-87f7-9ba79f9e8fb0': Document(page_content='foo')}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db1.docstore._dict"
]
},
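{
"cell_type": "markdown",
"metadata": {},
"source": [
"The merge itself is a single call. A minimal sketch, assuming the `db1` and `db2` stores created above:\n",
"\n",
"```python\n",
"db1.merge_from(db2)\n",
"print(db1.docstore._dict)  # now contains both the 'foo' and 'bar' documents\n",
"```"
]
},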
{
"cell_type": "code",
"execution_count": 21,
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# SQLite-VSS\n",
"\n",
">[SQLite-VSS](https://alexgarcia.xyz/sqlite-vss/) is an `SQLite` extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the `Faiss` library, it offers efficient similarity search and clustering capabilities.\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration\n",
"\n",
"This notebook shows how to use the `SQLiteVSS` vector database."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# You need to install sqlite-vss as a dependency.\n",
"%pip install --upgrade --quiet sqlite-vss"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Quickstart"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-06T14:55:55.370351Z",
"start_time": "2023-09-06T14:55:53.547755Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings.sentence_transformer import (\n",
" SentenceTransformerEmbeddings,\n",
")\n",
"from langchain_community.vectorstores import SQLiteVSS\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"# load the document and split it into chunks\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"\n",
"# split it into chunks\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"texts = [doc.page_content for doc in docs]\n",
"\n",
"\n",
"# create the open-source embedding function\n",
"embedding_function = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"\n",
"\n",
"# load it in sqlite-vss in a table named state_union.\n",
"# the db_file parameter is the name of the file you want\n",
"# as your sqlite database.\n",
"db = SQLiteVSS.from_texts(\n",
" texts=texts,\n",
" embedding=embedding_function,\n",
" table=\"state_union\",\n",
" db_file=\"/tmp/vss.db\",\n",
")\n",
"\n",
"# query it\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"data = db.similarity_search(query)\n",
"\n",
"# print results\n",
"data[0].page_content"
]
},
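{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you also want the distance for each hit, the scored variant applies here too. A minimal sketch, assuming the `db` and `query` from above:\n",
"\n",
"```python\n",
"data_with_scores = db.similarity_search_with_score(query)\n",
"for doc, score in data_with_scores:\n",
"    print(score, doc.page_content[:80])\n",
"```"
]
},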
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Using existing SQLite connection"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-06T14:59:22.086252Z",
"start_time": "2023-09-06T14:59:21.693237Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'Ketanji Brown Jackson is awesome'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings.sentence_transformer import (\n",
" SentenceTransformerEmbeddings,\n",
")\n",
"from langchain_community.vectorstores import SQLiteVSS\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"# load the document and split it into chunks\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"\n",
"# split it into chunks\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"texts = [doc.page_content for doc in docs]\n",
"\n",
"\n",
"# create the open-source embedding function\n",
"embedding_function = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"connection = SQLiteVSS.create_connection(db_file=\"/tmp/vss.db\")\n",
"\n",
"db1 = SQLiteVSS(\n",
" table=\"state_union\", embedding=embedding_function, connection=connection\n",
")\n",
"\n",
"db1.add_texts([\"Ketanji Brown Jackson is awesome\"])\n",
"# query it again\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"data = db1.similarity_search(query)\n",
"\n",
"# print results\n",
"data[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-06T15:01:15.550318Z",
"start_time": "2023-09-06T15:01:15.546428Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# Cleaning up\n",
"import os\n",
"\n",
"os.remove(\"/tmp/vss.db\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "markdown",
"id": "7e80d338-091b-421c-ac66-5950b14944b2",
"metadata": {},
"source": [
"# Yellowbrick\n",
"\n",
"[Yellowbrick](https://yellowbrick.com/yellowbrick-data-warehouse/) is an elastic, massively parallel processing (MPP) SQL database that runs in the cloud and on-premises, using kubernetes for scale, resilience and cloud portability. Yellowbrick is designed to address the largest and most complex business-critical data warehousing use cases. The efficiency at scale that Yellowbrick provides also enables it to be used as a high performance and scalable vector database to store and search vectors with SQL. \n"
]
},
{
"cell_type": "markdown",
"id": "9291d9e5-d404-405f-8307-87d80d0233f2",
"metadata": {},
"source": [
"## Using Yellowbrick as the vector store for ChatGpt\n",
"\n",
"This tutorial demonstrates how to create a simple chatbot backed by ChatGpt that uses Yellowbrick as a vector store to support Retrieval Augmented Generation (RAG). What you'll need:\n",
"\n",
"1. An account on the [Yellowbrick sandbox](https://cloudlabs.yellowbrick.com/)\n",
"2. An api key from [OpenAI](https://platform.openai.com/)\n",
"\n",
"The tutorial is divided into five parts. First we'll use langchain to create a baseline chatbot to interact with ChatGpt without a vector store. Second, we'll create an embeddings table in Yellowbrick that will represent the vector store. Third, we'll load a series of documents (the Administration chapter of the Yellowbrick Manual). Fourth, we'll create the vector representation of those documents and store in a Yellowbrick table. Lastly, we'll send the same queries to the improved chatbox to see the results.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "924d1c25",
"metadata": {},
"outputs": [],
"source": [
"# Install all needed libraries\n",
"%pip install --upgrade --quiet langchain\n",
"%pip install --upgrade --quiet langchain-openai langchain-community\n",
"%pip install --upgrade --quiet psycopg2-binary\n",
"%pip install --upgrade --quiet tiktoken"
]
},
{
"cell_type": "markdown",
"id": "5928e9c7-7666-4282-9cb4-00d919228ce0",
"metadata": {},
"source": [
"## Setup: Enter the information used to connect to Yellowbrick and OpenAI API\n",
"\n",
"Our chatbot integrates with ChatGpt via the langchain library, so you'll need an API key from OpenAI first:\n",
"\n",
"To get an api key for OpenAI:\n",
"1. Register at https://platform.openai.com/\n",
"2. Add a payment method - You're unlikely to go over free quota\n",
"3. Create an API key\n",
"\n",
"You'll also need your Username, Password, and Database name from the welcome email when you sign up for the Yellowbrick Sandbox Account.\n"
]
},
{
"cell_type": "markdown",
"id": "aaf215bb",
"metadata": {},
"source": [
"The following should be modified to include the information for your Yellowbrick database and OpenAPI Key"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4393d8d",
"metadata": {},
"outputs": [],
"source": [
"# Modify these values to match your Yellowbrick Sandbox and OpenAI API Key\n",
"YBUSER = \"[SANDBOX USER]\"\n",
"YBPASSWORD = \"[SANDBOX PASSWORD]\"\n",
"YBDATABASE = \"[SANDBOX_DATABASE]\"\n",
"YBHOST = \"trialsandbox.sandbox.aws.yellowbrickcloud.com\"\n",
"\n",
"OPENAI_API_KEY = \"[OPENAI API KEY]\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c186f99b",
"metadata": {},
"outputs": [],
"source": [
"# Import libraries and setup keys / login info\n",
"import os\n",
"import pathlib\n",
"import re\n",
"import sys\n",
"import urllib.parse as urlparse\n",
"from getpass import getpass\n",
"\n",
"import psycopg2\n",
"from IPython.display import Markdown, display\n",
"from langchain.chains import LLMChain, RetrievalQAWithSourcesChain\n",
"from langchain_community.vectorstores import Yellowbrick\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"# Establish connection parameters to Yellowbrick. If you've signed up for Sandbox, fill in the information from your welcome mail here:\n",
"yellowbrick_connection_string = (\n",
" f\"postgres://{urlparse.quote(YBUSER)}:{YBPASSWORD}@{YBHOST}:5432/{YBDATABASE}\"\n",
")\n",
"\n",
"YB_DOC_DATABASE = \"sample_data\"\n",
"YB_DOC_TABLE = \"yellowbrick_documentation\"\n",
"embedding_table = \"my_embeddings\"\n",
"\n",
"# API Key for OpenAI. Signup at https://platform.openai.com\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n",
"\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e955b19b",
"metadata": {},
"source": [
"## Part 1: Creating a baseline chatbot backed by ChatGpt without a Vector Store\n",
"\n",
"We will use langchain to query ChatGPT. As there is no Vector Store, ChatGPT will have no context in which to answer the question.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "538f8b96-1b54-4f2f-9239-dfb5cc7fd259",
"metadata": {},
"outputs": [],
"source": [
"# Set up the chat model and specific prompt\n",
"system_template = \"\"\"If you don't know the answer, Make up your best guess.\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\"),\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)\n",
"\n",
"chain_type_kwargs = {\"prompt\": prompt}\n",
"llm = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo\", # Modify model_name if you have access to GPT-4\n",
" temperature=0,\n",
" max_tokens=256,\n",
")\n",
"\n",
"chain = LLMChain(\n",
" llm=llm,\n",
" prompt=prompt,\n",
" verbose=False,\n",
")\n",
"\n",
"\n",
"def print_result_simple(query):\n",
" result = chain(query)\n",
" output_text = f\"\"\"### Question:\n",
" {query}\n",
" ### Answer: \n",
" {result['text']}\n",
" \"\"\"\n",
" display(Markdown(output_text))\n",
"\n",
"\n",
"# Use the chain to query\n",
"print_result_simple(\"How many databases can be in a Yellowbrick Instance?\")\n",
"\n",
"print_result_simple(\"What's an easy way to add users in bulk to Yellowbrick?\")"
]
},
{
"cell_type": "markdown",
"id": "798c7aa6-5904-4860-b4a9-896fe7681a45",
"metadata": {},
"source": [
"## Part 2: Connect to Yellowbrick and create the embedding tables\n",
"\n",
" return_source_documents=True,\n",
" chain_type_kwargs=chain_type_kwargs,\n",
")\n",
"\n",
"\n",
"def print_result_sources(query):\n",
" result = chain(query)\n",
" output_text = f\"\"\"### Question: \n",
" {query}\n",
" ### Answer: \n",
" {result['answer']}\n",
" ### Sources: \n",
" {result['sources']}\n",
" ### All relevant sources:\n",
" {', '.join(list(set([doc.metadata['source'] for doc in result['source_documents']])))}\n",
" \"\"\"\n",
" display(Markdown(output_text))\n",
"\n",
"\n",
"# Use the chain to query\n",
"\n",
"print_result_sources(\"How many databases can be in a Yellowbrick Instance?\")\n",
"\n",
"print_result_sources(\"Whats an easy way to add users in bulk to Yellowbrick?\")"
]
},
{
"cell_type": "markdown",
"id": "1f39fd30",
"metadata": {},
"source": [
"## Part 6: Introducing an Index to Increase Performance\n",
"\n",
"Yellowbrick also supports indexing using the Locality-Sensitive Hashing approach. This is an approximate nearest-neighbor search technique, and allows one to trade off similarity search time at the expense of accuracy. The index introduces two new tunable parameters:\n",
"\n",
"- The number of hyperplanes, which is provided as an argument to `create_lsh_index(num_hyperplanes)`. The more documents, the more hyperplanes are needed. LSH is a form of dimensionality reduction. The original embeddings are transformed into lower dimensional vectors where the number of components is the same as the number of hyperplanes.\n",
"- The Hamming distance, an integer representing the breadth of the search. Smaller Hamming distances result in faster retreival but lower accuracy.\n",
"\n",
"Here's how you can create an index on the embeddings we loaded into Yellowbrick. We'll also re-run the previous chat session, but this time the retrieval will use the index. Note that for such a small number of documents, you won't see the benefit of indexing in terms of performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02ba61c4",
"metadata": {},
"outputs": [],
"source": [
"system_template = \"\"\"Use the following pieces of context to answer the users question.\n",
"Take note of the sources and include them in the answer in the format: \"SOURCES: source1 source2\", use \"SOURCES\" in capital letters regardless of the number of sources.\n",
"If you don't know the answer, just say that \"I don't know\", don't try to make up an answer.\n",
"----------------\n",
"{summaries}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\"),\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)\n",
"\n",
"vector_store = Yellowbrick(\n",
" OpenAIEmbeddings(),\n",
" yellowbrick_connection_string,\n",
" embedding_table, # Change the table name to reflect your embeddings\n",
")\n",
"\n",
"lsh_params = Yellowbrick.IndexParams(\n",
" Yellowbrick.IndexType.LSH, {\"num_hyperplanes\": 8, \"hamming_distance\": 2}\n",
")\n",
"vector_store.create_index(lsh_params)\n",
"\n",
"chain_type_kwargs = {\"prompt\": prompt}\n",
"llm = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo\", # Modify model_name if you have access to GPT-4\n",
" temperature=0,\n",
" max_tokens=256,\n",
")\n",
"chain = RetrievalQAWithSourcesChain.from_chain_type(\n",
" llm=llm,\n",
" chain_type=\"stuff\",\n",
" retriever=vector_store.as_retriever(\n",
" k=5, search_kwargs={\"index_params\": lsh_params}\n",
" ),\n",
" return_source_documents=True,\n",
" chain_type_kwargs=chain_type_kwargs,\n",
")\n",
"\n",
"\n",
"def print_result_sources(query):\n",
" result = chain(query)\n",
" output_text = f\"\"\"### Question: \n",
" {query}\n",
" ### Answer: \n",
" {result['answer']}\n",
" ### Sources: \n",
" {result['sources']}\n",
" ### All relevant sources:\n",
" {', '.join(list(set([doc.metadata['source'] for doc in result['source_documents']])))}\n",
" \"\"\"\n",
" display(Markdown(output_text))\n",
"\n",
"\n",
"# Use the chain to query\n",
"\n",
"print_result_sources(\"How many databases can be in a Yellowbrick Instance?\")\n",
"\n",
"print_result_sources(\"Whats an easy way to add users in bulk to Yellowbrick?\")"
]
},
{
"cell_type": "markdown",
"id": "697c8a38",
"metadata": {},
"source": [
"## Next Steps:\n",
"\n",
"This code can be modified to ask different questions. You can also load your own documents into the vector store. The langchain module is very flexible and can parse a large variety of files (including HTML, PDF, etc).\n",
"\n",
"You can also modify this to use Huggingface embeddings models and Meta's Llama 2 LLM for a completely private chatbox experience."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# MongoDB Atlas\n",
"\n",
"This notebook covers how to MongoDB Atlas vector search in LangChain, using the `langchain-mongodb` package.\n",
"\n",
">[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It supports native Vector Search and full text search (BM25) on your MongoDB document data.\n",
"\n",
[MongoDB Atlas Vector Search]">
">[MongoDB Atlas Vector Search](https://www.mongodb.com/products/platform/atlas-vector-search) allows you to store your embeddings in MongoDB documents, create a vector search index, and perform KNN search with an approximate nearest neighbor algorithm (`Hierarchical Navigable Small Worlds`). It uses the [$vectorSearch MQL Stage](https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/)."
]
},
{
"cell_type": "markdown",
"id": "359b8e9b",
"metadata": {},
"source": [
"## Setup\n",
"\n",
*An Atlas cluster">
"You'll need an Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later (including RCs).\n",
"\n",
"To use MongoDB Atlas, you must first deploy a cluster. A forever-free tier of clusters is available. To get started, head over to the Atlas [quick start](https://www.mongodb.com/docs/atlas/getting-started/).\n",
"\n",
"You'll need to install `langchain-mongodb` and `pymongo` to use this integration."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73cf7c9f",
"metadata": {},
"outputs": [],
"source": [
"pip install -qU langchain-mongodb pymongo"
]
},
{
"cell_type": "markdown",
"id": "a61832ea",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"For this notebook you will need to find your MongoDB cluster URI.\n",
"\n",
"For information on finding your cluster URI read through [this guide](https://www.mongodb.com/docs/manual/reference/connection-string/)."
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "7ef41b37",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"MONGODB_ATLAS_CLUSTER_URI = getpass.getpass(\"MongoDB Atlas Cluster URI:\")"
]
},
{
"cell_type": "markdown",
"id": "1f23de23",
"metadata": {},
"source": [
"If you want to get best in-class automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "908e7772",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "a53673ae",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "f5fed614",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "00d78318",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mongodb.vectorstores import MongoDBAtlasVectorSearch\n",
"from pymongo import MongoClient\n",
"\n",
"# initialize MongoDB python client\n",
"client = MongoClient(MONGODB_ATLAS_CLUSTER_URI)\n",
"\n",
"DB_NAME = \"langchain_test_db\"\n",
"COLLECTION_NAME = \"langchain_test_vectorstores\"\n",
"ATLAS_VECTOR_SEARCH_INDEX_NAME = \"langchain-test-index-vectorstores\"\n",
"\n",
"MONGODB_COLLECTION = client[DB_NAME][COLLECTION_NAME]\n",
"\n",
"vector_store = MongoDBAtlasVectorSearch(\n",
" collection=MONGODB_COLLECTION,\n",
" embedding=embeddings,\n",
" index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
" relevance_score_fn=\"cosine\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "42873e5a",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"Once you have created your vector store, we can interact with it by adding and deleting different items.\n",
"\n",
"### Add items to vector store\n",
"\n",
"We can add items to our vector store by using the `add_documents` function."
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "aac9563e",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['03ad81e8-32a0-46f0-b7d8-f5b977a6b52a',\n",
" '8396a68d-f4a3-4176-a581-a1a8c303eea4',\n",
" 'e7d95150-67f6-499f-b611-84367c50fa60',\n",
" '8c31b84e-2636-48b6-8b99-9fccb47f7051',\n",
" 'aa02e8a2-a811-446a-9785-8cea0faba7a9',\n",
" '19bd72ff-9766-4c3b-b1fd-195c732c562b',\n",
" '642d6f2f-3e34-4efa-a1ed-c4ba4ef0da8d',\n",
" '7614bb54-4eb5-4b3b-990c-00e35cb31f99',\n",
" '69e18c67-bf1b-43e5-8a6e-64fb3f240e52',\n",
" '30d599a7-4a1a-47a9-bbf8-6ed393e2e33c']"
]
},
"execution_count": 57,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_3 = Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_4 = Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_5 = Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_6 = Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_7 = Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"id": "639f29da",
"metadata": {},
"source": [
"### Delete items from vector store\n"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "bbb5fd5c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"id": "d6111eb6",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"\n",
"### Query directly\n",
"\n",
"#### Similarity search\n",
"\n",
"Performing a simple similarity search can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "19b60ac0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'_id': 'e7d95150-67f6-499f-b611-84367c50fa60', 'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'_id': '7614bb54-4eb5-4b3b-990c-00e35cb31f99', 'source': 'tweet'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\", k=2\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "6c624606",
"metadata": {},
"source": [
"#### Similarity search with score\n",
"\n",
"You can also search with score:"
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "e919fa51",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.784560] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'_id': '8396a68d-f4a3-4176-a581-a1a8c303eea4', 'source': 'news'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\"Will it be hot tomorrow?\", k=1)\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "513a1416",
"metadata": {},
"source": [
"### Pre-filtering with Similarity Search"
]
},
{
"cell_type": "markdown",
"id": "ac58c6c7",
"metadata": {},
"source": [
"Atlas Vector Search supports pre-filtering using MQL Operators for filtering. Below is an example index and query on the same data loaded above that allows you do metadata filtering on the \"page\" field. You can update your existing index with the filter defined and do pre-filtering with vector search."
]
},
{
"cell_type": "markdown",
"id": "dacac7b8",
"metadata": {},
"source": [
"```json\n",
"{\n",
" \"fields\":[\n",
" {\n",
" \"type\": \"vector\",\n",
" \"path\": \"embedding\",\n",
" \"numDimensions\": 1536,\n",
" \"similarity\": \"cosine\"\n",
" },\n",
" {\n",
" \"type\": \"filter\",\n",
" \"path\": \"source\"\n",
" }\n",
" ]\n",
"}\n",
"```\n",
"\n",
"You can also update the index programmatically using the `MongoDBAtlasVectorSearch.create_index` method.\n",
"\n",
"```python\n",
"vectorstore.create_index(\n",
" dimensions=1536,\n",
" filters=[{\"type\":\"filter\", \"path\":\"source\"}],\n",
" update=True\n",
")\n",
"```\n",
"\n",
"And then you can run a query with filter as follows:\n",
"\n",
"```python\n",
"results = vector_store.similarity_search(query=\"foo\",k=1,pre_filter={\"source\": {\"$eq\": \"https://example.com\"}})\n",
"for doc in results:\n",
" print(f\"* {doc.page_content} [{doc.metadata}]\")\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "32b13a9b",
"metadata": {},
"source": [
"#### Other search methods\n",
"\n",
"There are a variety of other search methods that are not covered in this notebook, such as MMR search or searching by vector. For a full list of the search abilities available for `AstraDBVectorStore` check out the [API reference](https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html)."
]
},
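{
"cell_type": "markdown",
"metadata": {},
"source": [
"An MMR search follows the same pattern as the searches above. A minimal sketch, assuming the `vector_store` created earlier:\n",
"\n",
"```python\n",
"# fetch_k candidates are retrieved first, then k diverse results are selected from them.\n",
"results = vector_store.max_marginal_relevance_search(\n",
"    \"LangChain provides abstractions to make working with LLMs easy\", k=2, fetch_k=10\n",
")\n",
"for res in results:\n",
"    print(f\"* {res.page_content} [{res.metadata}]\")\n",
"```"
]
},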
{
"cell_type": "markdown",
"id": "01316a42",
"metadata": {},
"source": [
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. \n",
"\n",
"Here is how to transform your vector store into a retriever and then invoke the retreiever with a simple query and filter."
]
},
{
"cell_type": "code",
"execution_count": 65,
"id": "8f246301",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'_id': '8c31b84e-2636-48b6-8b99-9fccb47f7051', 'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 65,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(\n",
" search_type=\"similarity_score_threshold\",\n",
" search_kwargs={\"k\": 1, \"score_threshold\": 0.2},\n",
")\n",
"retriever.invoke(\"Stealing from the bank is a crime\")"
]
},
{
"cell_type": "markdown",
"id": "72312657",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)"
]
},
{
"cell_type": "markdown",
"id": "0ac44802",
"metadata": {},
"source": [
"# Other Notes\n",
">* More documentation can be found at [LangChain-MongoDB](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/) site\n",
">* This feature is Generally Available and ready for production deployments.\n",
">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n",
"> "
]
},
{
"cell_type": "markdown",
"id": "186ef502",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `MongoDBAtlasVectorSearch` features and configurations head to the API reference: https://python.langchain.com/api_reference/mongodb/index.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Faiss\n",
"\n",
">[Facebook AI Similarity Search (FAISS)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also includes supporting code for evaluation and parameter tuning.\n",
">\n",
">See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper.\n",
"\n",
"You can find the FAISS documentation at [this page](https://faiss.ai/).\n",
"\n",
"This notebook shows how to use functionality related to the `FAISS` vector database. It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain."
]
},
{
"cell_type": "markdown",
"id": "601ac1d5-48a2-4e41-bf51-f1d5fdd5639d",
"metadata": {
"tags": []
},
"source": [
"## Setup\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `faiss` package itself. We can install these with:\n",
"\n",
"Note that you can also install `faiss-gpu` if you want to use the GPU enabled version"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08165d56",
"metadata": {},
"outputs": [],
"source": [
"pip install -qU langchain-community faiss-cpu"
]
},
{
"cell_type": "markdown",
"id": "408be78f-7b0e-44d4-8d48-56a6cb9b3fb9",
"metadata": {},
"source": [
"If you want to get best in-class automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "951c82cb-40bf-46ac-9f3f-d2fca7d204b8",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "78dde98a-584f-4f2a-98d5-e776fd9558fa",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5b394da3",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dc37144c-208d-4ab3-9f3a-0407a69fe052",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import faiss\n",
"from langchain_community.docstore.in_memory import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"\n",
"index = faiss.IndexFlatL2(len(embeddings.embed_query(\"hello world\")))\n",
"\n",
"vector_store = FAISS(\n",
" embedding_function=embeddings,\n",
" index=index,\n",
" docstore=InMemoryDocstore(),\n",
" index_to_docstore_id={},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d8761614",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"### Add items to vector store"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3867e154",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['22f5ce99-cd6f-4e0c-8dab-664128307c72',\n",
" 'dc3f061b-5f88-4fa1-a966-413550c51891',\n",
" 'd33d890b-baad-47f7-b7c1-175f5f7b4e59',\n",
" '6e6c01d2-6020-4a7b-95da-ef43d43f01b5',\n",
" 'e677223d-ad75-4c1a-bef6-b5912bd1de03',\n",
" '47e2a168-6462-4ed2-b1d9-d9edfd7391d6',\n",
" '1e4d66d6-e155-4891-9212-f7be97f36c6a',\n",
" 'c0663096-e1a5-4665-b245-1c2e6c4fb653',\n",
" '8297474a-7f7c-4006-9865-398c1781b1bc',\n",
" '44e4be03-0a8d-4316-b3c4-f35f4bb2b532']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_3 = Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_4 = Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_5 = Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_6 = Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_7 = Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"id": "a410a2dc",
"metadata": {},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c3db04bd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"id": "77de24ff",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"\n",
"### Query directly\n",
"\n",
"#### Similarity search\n",
"\n",
"Performing a simple similarity search with filtering on metadata can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "53d95d3f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=2,\n",
" filter={\"source\": \"tweet\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "5ae35069",
"metadata": {},
"source": [
"#### Similarity search with score\n",
"\n",
"You can also search with score:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a9078ce9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.893688] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"Will it be hot tomorrow?\", k=1, filter={\"source\": \"news\"}\n",
")\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "e9091b1f",
"metadata": {},
"source": [
"#### Other search methods\n",
"\n",
"\n",
"There are a variety of other ways to search a FAISS vector store. For a complete list of those methods, please refer to the [API Reference](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)\n",
"\n",
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "10da64fa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(search_type=\"mmr\", search_kwargs={\"k\": 1})\n",
"retriever.invoke(\"Stealing from the bank is a crime\", filter={\"source\": \"news\"})"
]
},
{
"cell_type": "markdown",
"id": "5edd1909",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)"
]
},
{
"cell_type": "markdown",
"id": "31bda7fd",
"metadata": {},
"source": [
"## Saving and loading\n",
"You can also save and load a FAISS index. This is useful so you don't have to recreate it everytime you use it."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1b31fe27-e0b3-42c6-b17c-8270b517ee1f",
"metadata": {},
"outputs": [],
"source": [
"vector_store.save_local(\"faiss_index\")\n",
"\n",
"new_vector_store = FAISS.load_local(\n",
" \"faiss_index\", embeddings, allow_dangerous_deserialization=True\n",
")\n",
"\n",
"docs = new_vector_store.similarity_search(\"qux\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "98378c4e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'tweet'}, page_content='Building an exciting new project with LangChain - come check it out!')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "57da60d4",
"metadata": {},
"source": [
"## Merging\n",
"You can also merge two FAISS vectorstores"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9b8f5e31-3f40-4e94-8d97-5883125efba7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'b752e805-350e-4cf5-ba54-0883d46a3a44': Document(page_content='foo')}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db1 = FAISS.from_texts([\"foo\"], embeddings)\n",
"db2 = FAISS.from_texts([\"bar\"], embeddings)\n",
"\n",
"db1.docstore._dict"
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Pinecone\n",
"\n",
">[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.\n",
"\n",
"This notebook shows how to use functionality related to the `Pinecone` vector database.\n",
"\n",
"## Setup\n",
"\n",
"To use the `PineconeVectorStore` you first need to install the partner package, as well as the other packages used throughout this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4c41cad-08ef-4f72-a545-2151e4598efe",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -qU langchain-pinecone pinecone-notebooks"
]
},
{
"cell_type": "markdown",
"id": "1917d123",
"metadata": {},
"source": [
"Migration note: if you are migrating from the `langchain_community.vectorstores` implementation of Pinecone, you may need to remove your `pinecone-client` v2 dependency before installing `langchain-pinecone`, which relies on `pinecone-client` v3."
]
},
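{
"cell_type": "markdown",
"id": "d9e2f3a1",
"metadata": {},
"source": [
"As a minimal sketch of that cleanup (assuming you are working inside a notebook; adapt the commands to your environment):\n",
"\n",
"```python\n",
"# Remove the old v2 client before installing the partner package\n",
"%pip uninstall -y pinecone-client\n",
"%pip install -qU langchain-pinecone\n",
"```"
]
},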
{
"cell_type": "markdown",
"id": "ef6dc4de",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"Create a new Pinecone account, or sign into your existing one, and create an API key to use in this notebook."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "eb554814",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"import time\n",
"\n",
"from pinecone import Pinecone, ServerlessSpec\n",
"\n",
"if not os.getenv(\"PINECONE_API_KEY\"):\n",
" os.environ[\"PINECONE_API_KEY\"] = getpass.getpass(\"Enter your Pinecone API key: \")\n",
"\n",
"pinecone_api_key = os.environ.get(\"PINECONE_API_KEY\")\n",
"\n",
"pc = Pinecone(api_key=pinecone_api_key)"
]
},
{
"cell_type": "markdown",
"id": "6ef1d828",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "23b5ac5e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "658706a3",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Before initializing our vector store, let's connect to a Pinecone index. If one named `index_name` doesn't exist, it will be created."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "276a06dd",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"index_name = \"langchain-test-index\" # change if desired\n",
"\n",
"existing_indexes = [index_info[\"name\"] for index_info in pc.list_indexes()]\n",
"\n",
"if index_name not in existing_indexes:\n",
" pc.create_index(\n",
" name=index_name,\n",
" dimension=3072,\n",
" metric=\"cosine\",\n",
" spec=ServerlessSpec(cloud=\"aws\", region=\"us-east-1\"),\n",
" )\n",
" while not pc.describe_index(index_name).status[\"ready\"]:\n",
" time.sleep(1)\n",
"\n",
"index = pc.Index(index_name)"
]
},
{
"cell_type": "markdown",
"id": "3a4d377f",
"metadata": {},
"source": [
"Now that our Pinecone index is setup, we can initialize our vector store. \n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "1485db56",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6e104aee",
"metadata": {},
"outputs": [],
"source": [
"from langchain_pinecone import PineconeVectorStore\n",
"\n",
"vector_store = PineconeVectorStore(index=index, embedding=embeddings)"
]
},
{
"cell_type": "markdown",
"id": "48721e29",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"Once you have created your vector store, we can interact with it by adding and deleting different items.\n",
"\n",
"### Add items to vector store\n",
"\n",
"We can add items to our vector store by using the `add_documents` function."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "70e688f4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['167b8681-5974-467f-adcb-6e987a18df01',\n",
" 'd16010fd-41f8-4d49-9c22-c66d5555a3fe',\n",
" 'ffcacfb3-2bc2-44c3-a039-c2256a905c0e',\n",
" 'cf3bfc9f-5dc7-4f5e-bb41-edb957394126',\n",
" 'e99b07eb-fdff-4cb9-baa8-619fd8efeed3',\n",
" '68c93033-a24f-40bd-8492-92fa26b631a4',\n",
" 'b27a4ecb-b505-4c5d-89ff-526e3d103558',\n",
" '4868a9e6-e6fb-4079-b400-4a1dfbf0d4c4',\n",
" '921c0e9c-0550-4eb5-9a6c-ed44410788b2',\n",
" 'c446fc23-64e8-47e7-8c19-ecf985e9411e']"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_3 = Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_4 = Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_5 = Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_6 = Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_7 = Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"id": "120922b3",
"metadata": {},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "5b8437cd",
"metadata": {},
"outputs": [],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"id": "5ee21c89",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"\n",
"### Query directly\n",
"\n",
"Performing a simple similarity search can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "ffbcb3fb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=2,\n",
" filter={\"source\": \"tweet\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "79f3494d",
"metadata": {},
"source": [
"#### Similarity search with score\n",
"\n",
"You can also search with score:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "5fb24583",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.553187] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"Will it be hot tomorrow?\", k=1, filter={\"source\": \"news\"}\n",
")\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "1855941b",
"metadata": {},
"source": [
"#### Other search methods\n",
"\n",
"There are more search methods (such as MMR) not listed in this notebook, to find all of them be sure to read the [API reference](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html).\n",
"\n",
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "78140e87",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(\n",
" search_type=\"similarity_score_threshold\",\n",
" search_kwargs={\"k\": 1, \"score_threshold\": 0.5},\n",
")\n",
"retriever.invoke(\"Stealing from the bank is a crime\", filter={\"source\": \"news\"})"
]
},
{
"cell_type": "markdown",
"id": "72990cb5",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)"
]
},
{
"cell_type": "markdown",
"id": "0d5722bc",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__VectorStore features and configurations head to the API reference: https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"for doc, score in docs_with_score:\n",
" print(\"-\" * 80)\n",
" print(\"Score: \", score)\n",
" print(doc.page_content)\n",
" print(\"-\" * 80)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Additionally, the similarity_search_with_relevance_scores method can be used to obtain relevance scores, where a higher score indicates greater similarity."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Score: 0.8154069850178\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.7827270056715364\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"docs_with_relevance_score = db.similarity_search_with_relevance_scores(query, k=2)\n",
"for doc, score in docs_with_relevance_score:\n",
" print(\"-\" * 80)\n",
" print(\"Score: \", score)\n",
" print(doc.page_content)\n",
" print(\"-\" * 80)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Filter with metadata\n",
"\n",
"perform searches using metadata filters to retrieve a specific number of nearest-neighbor results that align with the applied filters.\n",
"\n",
"## Supported metadata types\n",
"\n",
"Each vector in the TiDB Vector Store can be paired with metadata, structured as key-value pairs within a JSON object. The keys are strings, and the values can be of the following types:\n",
"\n",
"- String\n",
"- Number (integer or floating point)\n",
"- Booleans (true, false)\n",
"\n",
"For instance, consider the following valid metadata payloads:\n",
"\n",
"```json\n",
"{\n",
" \"page\": 12,\n",
" \"book_tile\": \"Siddhartha\"\n",
"}\n",
"```\n",
"\n",
"## Metadata filter syntax\n",
"\n",
"The available filters include:\n",
"\n",
"- $or - Selects vectors that meet any one of the given conditions.\n",
"- $and - Selects vectors that meet all of the given conditions.\n",
"- $eq - Equal to\n",
"- $ne - Not equal to\n",
"- $gt - Greater than\n",
"- $gte - Greater than or equal to\n",
"- $lt - Less than\n",
"- $lte - Less than or equal to\n",
"- $in - In array\n",
"- $nin - Not in array\n",
"\n",
"Assuming one vector with metada:\n",
"```json\n",
"{\n",
" \"page\": 12,\n",
" \"book_tile\": \"Siddhartha\"\n",
"}\n",
"```\n",
"\n",
"The following metadata filters will match the vector\n",
"\n",
"```json\n",
"{\"page\": 12}\n",
"\n",
"{\"page\":{\"$eq\": 12}}\n",
"\n",
"{\"page\":{\"$in\": [11, 12, 13]}}\n",
"\n",
"{\"page\":{\"$nin\": [13]}}\n",
"\n",
"{\"page\":{\"$lt\": 11}}\n",
"\n",
"{\n",
" \"$or\": [{\"page\": 11}, {\"page\": 12}],\n",
" \"$and\": [{\"page\": 12}, {\"page\": 13}],\n",
"}\n",
"```\n",
"\n",
"Please note that each key-value pair in the metadata filters is treated as a separate filter clause, and these clauses are combined using the AND logical operator."
]
},
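{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of an operator-based filter (this assumes your documents carry a numeric `page` metadata key; the values here are illustrative):\n",
"\n",
"```python\n",
"# Match documents whose page number is between 10 and 15 (inclusive)\n",
"docs_with_score = db.similarity_search_with_score(\n",
"    \"Introduction to TiDB Vector\",\n",
"    k=4,\n",
"    filter={\"$and\": [{\"page\": {\"$gte\": 10}}, {\"page\": {\"$lte\": 15}}]},\n",
")\n",
"```"
]
},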
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[UUID('c782cb02-8eec-45be-a31f-fdb78914f0a7'),\n",
" UUID('08dcd2ba-9f16-4f29-a9b7-18141f8edae3')]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db.add_texts(\n",
" texts=[\n",
" \"TiDB Vector offers advanced, high-speed vector processing capabilities, enhancing AI workflows with efficient data handling and analytics support.\",\n",
" \"TiDB Vector, starting as low as $10 per month for basic usage\",\n",
" ],\n",
" metadatas=[\n",
" {\"title\": \"TiDB Vector functionality\"},\n",
" {\"title\": \"TiDB Vector Pricing\"},\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Score: 0.12761409169211535\n",
"TiDB Vector offers advanced, high-speed vector processing capabilities, enhancing AI workflows with efficient data handling and analytics support.\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"docs_with_score = db.similarity_search_with_score(\n",
" \"Introduction to TiDB Vector\", filter={\"title\": \"TiDB Vector functionality\"}, k=4\n",
")\n",
"for doc, score in docs_with_score:\n",
" print(\"-\" * 80)\n",
" print(\"Score: \", score)\n",
" print(doc.page_content)\n",
" print(\"-\" * 80)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using as a Retriever"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In Langchain, a retriever is an interface that retrieves documents in response to an unstructured query, offering a broader functionality than a vector store. The code below demonstrates how to utilize TiDB Vector as a retriever."
]
},
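{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that usage (the `k` and threshold values here are illustrative):\n",
"\n",
"```python\n",
"retriever = db.as_retriever(\n",
"    search_type=\"similarity_score_threshold\",\n",
"    search_kwargs={\"k\": 3, \"score_threshold\": 0.8},\n",
")\n",
"retriever.invoke(\"What did the president say about Ketanji Brown Jackson?\")\n",
"```"
]
},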
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
" this mapping is compatible with model of exact and similarity of l2/cosine\n",
" \"\"\"\n",
" docsearch = EcloudESVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" es_url=ES_URL,\n",
" user=USER,\n",
" password=PASSWORD,\n",
" index_name=indexname,\n",
" refresh_indices=True,\n",
" text_field=\"my_text\",\n",
" vector_field=\"my_vec\",\n",
" vector_type=\"knn_dense_float_vector\",\n",
" )\n",
" # filter={\"match_all\": {}} ,default\n",
" docs = docsearch.similarity_search(\n",
" query,\n",
" k=10,\n",
" filter={\"match_all\": {}},\n",
" search_params={\n",
" \"model\": \"exact\",\n",
" \"vector_field\": \"my_vec\",\n",
" \"text_field\": \"my_text\",\n",
" },\n",
" )\n",
" print(docs[0].page_content)\n",
"\n",
" # filter={\"term\": {\"my_text\": \"Jackson\"}}\n",
" docs = docsearch.similarity_search(\n",
" query,\n",
" k=10,\n",
" filter={\"term\": {\"my_text\": \"Jackson\"}},\n",
" search_params={\n",
" \"model\": \"exact\",\n",
" \"vector_field\": \"my_vec\",\n",
" \"text_field\": \"my_text\",\n",
" },\n",
" )\n",
" print(docs[0].page_content)\n",
"\n",
" # filter={\"term\": {\"my_text\": \"president\"}}\n",
" docs = docsearch.similarity_search(\n",
" query,\n",
" k=10,\n",
" filter={\"term\": {\"my_text\": \"president\"}},\n",
" search_params={\n",
" \"model\": \"exact\",\n",
" \"similarity\": \"l2\",\n",
" \"vector_field\": \"my_vec\",\n",
" \"text_field\": \"my_text\",\n",
" },\n",
" )\n",
" print(docs[0].page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"execution_count": 7,
"id": "12eb86d8",
"metadata": {
"id": "12eb86d8",
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['21cca03c-9089-42d2-b41c-3d156be2b519',\n",
" 'a6ceb967-b552-4802-bb06-c0e95fce386e',\n",
" '3a35fac4-e5f0-493b-bee0-9143b41aedae',\n",
" '176da099-66b1-4d6a-811b-dfdfe0808d30',\n",
" 'ecfa1a30-3c97-408b-80c0-5c43d68bf5ff',\n",
" 'c0f08baa-e70b-4f83-b387-c6e0a0f36f73',\n",
" '489b2c9c-1925-43e1-bcf0-0fa94cf1cbc4',\n",
" '408c6503-9ba4-49fd-b1cc-95584cd914c5',\n",
" '5248c899-16d5-4377-a9e9-736ca443ad4f',\n",
" 'ca182769-c4fc-4e25-8f0a-8dd0a525955c']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_3 = Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_4 = Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_5 = Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_6 = Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_7 = Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"id": "2a549e3d",
"metadata": {},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "31c3b785",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"id": "674bcab2",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. These examples also show how to use filtering when searching.\n",
"\n",
"### Query directly\n",
"\n",
"#### Similarity search\n",
"\n",
"Performing a simple similarity search with filtering on metadata can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "da079ceb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" query=\"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=2,\n",
" filter=[{\"term\": {\"metadata.source.keyword\": \"tweet\"}}],\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "a0fda72e",
"metadata": {},
"source": [
"#### Similarity search with score\n",
"\n",
"If you want to execute a similarity search and receive the corresponding scores you can run:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1013c9e8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.765887] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" query=\"Will it be hot tomorrow\",\n",
" k=1,\n",
" filter=[{\"term\": {\"metadata.source.keyword\": \"news\"}}],\n",
")\n",
"for doc, score in results:\n",
" print(f\"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "8f2c7b5c",
"metadata": {},
"source": [
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. "
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2db8b6a5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.'),\n",
" Document(metadata={'source': 'news'}, page_content='The stock market is down 500 points today due to fears of a recession.'),\n",
" Document(metadata={'source': 'website'}, page_content='Is the new iPhone worth the price? Read this review to find out.'),\n",
" Document(metadata={'source': 'tweet'}, page_content='Building an exciting new project with LangChain - come check it out!')]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(\n",
" search_type=\"similarity_score_threshold\", search_kwargs={\"score_threshold\": 0.2}\n",
")\n",
"retriever.invoke(\"Stealing from the bank is a crime\")"
]
},
{
"cell_type": "markdown",
"id": "17b509ae",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)"
]
},
{
"cell_type": "markdown",
"id": "3242fd42",
"metadata": {},
"source": [
"# FAQ\n",
"\n",
"## Question: Im getting timeout errors when indexing documents into Elasticsearch. How do I fix this?\n",
"One possible issue is your documents might take longer to index into Elasticsearch. ElasticsearchStore uses the Elasticsearch bulk API which has a few defaults that you can adjust to reduce the chance of timeout errors.\n",
"\n",
"This is also a good idea when you're using SparseVectorRetrievalStrategy.\n",
"\n",
"The defaults are:\n",
"- `chunk_size`: 500\n",
"- `max_chunk_bytes`: 100MB\n",
"\n",
"To adjust these, you can pass in the `chunk_size` and `max_chunk_bytes` parameters to the ElasticsearchStore `add_texts` method.\n",
"\n",
"```python\n",
" vector_store.add_texts(\n",
" texts,\n",
" bulk_kwargs={\n",
" \"chunk_size\": 50,\n",
" \"max_chunk_bytes\": 200000000\n",
" }\n",
" )\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "604c66ea",
"metadata": {},
"source": [
"# Upgrading to ElasticsearchStore\n",
"\n",
"If you're already using Elasticsearch in your langchain based project, you may be using the old implementations: `ElasticVectorSearch` and `ElasticKNNSearch` which are now deprecated. We've introduced a new implementation called `ElasticsearchStore` which is more flexible and easier to use. This notebook will guide you through the process of upgrading to the new implementation.\n",
"\n",
"## What's new?\n",
"\n",
"The new implementation is now one class called `ElasticsearchStore` which can be used for approximate dense vector, exact dense vector, sparse vector (ELSER), BM25 retrieval and hybrid retrieval, via strategies.\n",
"\n",
"## I am using ElasticKNNSearch\n",
"\n",
"Old implementation:\n",
"\n",
"```python\n",
"\n",
"from langchain_community.vectorstores.elastic_vector_search import ElasticKNNSearch\n",
"\n",
"db = ElasticKNNSearch(\n",
" elasticsearch_url=\"http://localhost:9200\",\n",
" index_name=\"test_index\",\n",
" embedding=embedding\n",
")\n",
"\n",
"```\n",
"\n",
"New implementation:\n",
"\n",
"```python\n",
"\n",
"from langchain_elasticsearch import ElasticsearchStore, DenseVectorStrategy\n",
"\n",
"db = ElasticsearchStore(\n",
" es_url=\"http://localhost:9200\",\n",
" index_name=\"test_index\",\n",
" embedding=embedding,\n",
" # if you use the model_id\n",
" # strategy=DenseVectorStrategy(model_id=\"test_model\")\n",
" # if you use hybrid search\n",
" # strategy=DenseVectorStrategy(hybrid=True)\n",
")\n",
"\n",
"```\n",
"\n",
"## I am using ElasticVectorSearch\n",
"\n",
"Old implementation:\n",
"\n",
"```python\n",
"\n",
"from langchain_community.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"\n",
"db = ElasticVectorSearch(\n",
" elasticsearch_url=\"http://localhost:9200\",\n",
" index_name=\"test_index\",\n",
" embedding=embedding\n",
")\n",
"\n",
"```\n",
"\n",
"New implementation:\n",
"\n",
"```python\n",
"\n",
"from langchain_elasticsearch import ElasticsearchStore, DenseVectorScriptScoreStrategy\n",
"\n",
"db = ElasticsearchStore(\n",
" es_url=\"http://localhost:9200\",\n",
" index_name=\"test_index\",\n",
" embedding=embedding,\n",
" strategy=DenseVectorScriptScoreStrategy()\n",
")\n",
"\n",
"```\n",
"\n",
"```python\n",
"db.client.indices.delete(\n",
" index=\"test-metadata, test-elser, test-basic\",\n",
" ignore_unavailable=True,\n",
" allow_no_indices=True,\n",
")\n",
"```"
]
},
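{
"cell_type": "markdown",
"id": "a7c3e9f1",
"metadata": {},
"source": [
"As a minimal sketch of strategy selection (assuming a local Elasticsearch at `http://localhost:9200`; the index name is illustrative), BM25 keyword retrieval needs no embedding model at all:\n",
"\n",
"```python\n",
"from langchain_elasticsearch import BM25Strategy, ElasticsearchStore\n",
"\n",
"# Keyword-only (BM25) retrieval: no embeddings required\n",
"bm25_store = ElasticsearchStore(\n",
"    es_url=\"http://localhost:9200\",\n",
"    index_name=\"test_bm25_index\",\n",
"    strategy=BM25Strategy(),\n",
")\n",
"```"
]
},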
{
"cell_type": "markdown",
"id": "33388871",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticSearchStore` features and configurations head to the API reference: https://python.langchain.com/api_reference/elasticsearch/vectorstores/langchain_elasticsearch.vectorstores.ElasticsearchStore.html"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2280140e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_openai import OpenAI\n",
"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(),\n",
" chain_type=\"stuff\",\n",
" retriever=qa_retriever,\n",
" return_source_documents=True,\n",
" chain_type_kwargs={\"prompt\": PROMPT},\n",
")\n",
"\n",
"docs = qa({\"query\": \"gpt-4 compute requirements\"})\n",
"\n",
"print(docs[\"result\"])\n",
"print(docs[\"source_documents\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"text": [
"Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.68it/s]\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import MyScale\n",
"\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"for i, d in enumerate(docs):\n",
" d.metadata = {\"doc_id\": i}\n",
"\n",
"docsearch = MyScale.from_documents(docs, embeddings)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8d867b05",
"metadata": {},
"source": [
"### Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9ec25cc5",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ddbcee77",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.229655921459198 {'doc_id': 0} Madam Speaker, Madam...\n",
"0.24506962299346924 {'doc_id': 8} And so many families...\n",
"0.24786919355392456 {'doc_id': 1} Groups of citizens b...\n",
"0.24875116348266602 {'doc_id': 6} And I’m taking robus...\n"
]
}
],
"source": [
"meta = docsearch.metadata_column\n",
"output = docsearch.similarity_search_with_relevance_scores(\n",
" \"What did the president say about Ketanji Brown Jackson?\",\n",
" k=4,\n",
" where_str=f\"{meta}.doc_id<10\",\n",
")\n",
"for d, dist in output:\n",
" print(dist, d.metadata, d.page_content[:20] + \"...\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a359ed74",
"metadata": {},
"source": [
"## Deleting your data\n",
"\n",
"You can either drop the table with `.drop()` method or partially delete your data with `.delete()` method."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3a0cc43b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.24506962299346924 {'doc_id': 8} And so many families...\n",
"0.24875116348266602 {'doc_id': 6} And I’m taking robus...\n",
"0.26027143001556396 {'doc_id': 7} We see the unity amo...\n",
"0.26390212774276733 {'doc_id': 9} And unlike the $2 Tr...\n"
]
}
],
"source": [
"# use directly a `where_str` to delete\n",
"docsearch.delete(where_str=f\"{docsearch.metadata_column}.doc_id < 5\")\n",
"meta = docsearch.metadata_column\n",
"output = docsearch.similarity_search_with_relevance_scores(\n",
" \"What did the president say about Ketanji Brown Jackson?\",\n",
" k=4,\n",
" where_str=f\"{meta}.doc_id<10\",\n",
")\n",
"for d, dist in output:\n",
" print(dist, d.metadata, d.page_content[:20] + \"...\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fb6a9d36",
"metadata": {},
"outputs": [],
"source": [
"docsearch.drop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
151926
|
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"id": "e23c22d8",
"metadata": {},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "1f387fa8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(insert count: 0, delete count: 1, upsert count: 0, timestamp: 0, success count: 0, err count: 0, cost: 0)"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"id": "fb12fa75",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"\n",
"### Query directly\n",
"\n",
"#### Similarity search\n",
"\n",
"Performing a simple similarity search with filtering on metadata can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "35801a55",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'pk': '9905001c-a4a3-455e-ab94-72d0ed11b476', 'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'pk': '1206d237-ee3a-484f-baf2-b5ac38eeb314', 'source': 'tweet'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=2,\n",
" filter={\"source\": \"tweet\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "35574409",
"metadata": {},
"source": [
"#### Similarity search with score\n",
"\n",
"You can also search with score:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "c360af3d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=21192.628906] bar [{'pk': '2', 'source': 'https://example.com'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"Will it be hot tomorrow?\", k=1, filter={\"source\": \"news\"}\n",
")\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "14db337f",
"metadata": {},
"source": [
"For a full list of all the search options available when using the `Milvus` vector store, you can visit the [API reference](https://python.langchain.com/api_reference/milvus/vectorstores/langchain_milvus.vectorstores.milvus.Milvus.html).\n",
"\n",
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. "
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "f6d9357c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'pk': 'eacc7256-d7fa-4036-b1f7-83d7a4bee0c5', 'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(search_type=\"mmr\", search_kwargs={\"k\": 1})\n",
"retriever.invoke(\"Stealing from the bank is a crime\", filter={\"source\": \"news\"})"
]
},
{
"cell_type": "markdown",
"id": "8ac953f1",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)"
]
},
{
"cell_type": "markdown",
"id": "7fb27b941602401d91542211134fc71a",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"### Per-User Retrieval\n",
"\n",
"When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see eachother’s data.\n",
"\n",
"Milvus recommends using [partition_key](https://milvus.io/docs/multi_tenancy.md#Partition-key-based-multi-tenancy) to implement multi-tenancy, here is an example.\n",
"> The feature of Partition key is now not available in Milvus Lite, if you want to use it, you need to start Milvus server from [docker or kubernetes](https://milvus.io/docs/install_standalone-docker.md#Start-Milvus)."
]
},
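{
"cell_type": "markdown",
"id": "b2d4f6a8",
"metadata": {},
"source": [
"A minimal sketch of the setup (assuming a Milvus server at `http://localhost:19530`; the collection, field, and tenant names are illustrative):\n",
"\n",
"```python\n",
"from langchain_milvus import Milvus\n",
"\n",
"# Route each document into a tenant partition via a metadata field\n",
"tenant_store = Milvus(\n",
"    embedding_function=embeddings,\n",
"    collection_name=\"multi_tenant_demo\",\n",
"    connection_args={\"uri\": \"http://localhost:19530\"},\n",
"    partition_key_field=\"namespace\",\n",
")\n",
"\n",
"# At query time, scope results to a single tenant with a filter expression\n",
"tenant_store.similarity_search(\"LangChain\", k=2, expr='namespace == \"user_a\"')\n",
"```"
]
},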
{
"cell_type": "code",
"execution_count": 2,
"id": "acae54e37e7d407bbb7b55eff062a284",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
" \"specifically tailored to their preferences.\\nLarge language models naturally follow patterns in input \"\n",
" \"(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed \"\n",
" 'them with several examples in the input (\"few-shot prompt\"), so they can follow through. '\n",
" \"The process of creating the correct prompt for your problem is called prompt engineering, \"\n",
" \"and you can read more about it here.\"\n",
")\n",
"\n",
"semantic_text_splitter = AI21SemanticTextSplitter(add_start_index=True)\n",
"documents = semantic_text_splitter.create_documents(texts=[TEXT])\n",
"print(f\"The text has been split into {len(documents)} Documents.\")\n",
"for doc in documents:\n",
" print(f\"start_index: {doc.metadata['start_index']}\")\n",
" print(f\"text: {doc.page_content}\")\n",
" print(\"====\")"
]
},
{
"cell_type": "markdown",
"id": "b62939cc5803b9fb",
"metadata": {
"collapsed": false
},
"source": [
"### Splitting documents"
]
},
{
"cell_type": "markdown",
"id": "44162d340c0de5fb",
"metadata": {
"collapsed": false
},
"source": [
"This example shows how to use AI21SemanticTextSplitter to split a list of Documents into chunks based on semantic meaning."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8950c8e4e1208bf6",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from langchain_ai21 import AI21SemanticTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"TEXT = (\n",
" \"We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, \"\n",
" \"legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\\n\"\n",
" \"Imagine a company that employs hundreds of thousands of employees. In today's information \"\n",
" \"overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise \"\n",
" \"here, given that some of these documents are long and convoluted on purpose (did you know that \"\n",
" \"reading through all your privacy policies would take almost a quarter of a year?). Aside from \"\n",
" \"inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of \"\n",
" \"Employees Read Their Employment Contracts Entirely Before Signing!).\\nThis is where AI-driven summarization \"\n",
" \"tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, \"\n",
" \"users can (ideally) quickly extract relevant information from a text. With large language models, \"\n",
" \"the development of those tools is easier than ever, and you can offer your users a summary that is \"\n",
" \"specifically tailored to their preferences.\\nLarge language models naturally follow patterns in input \"\n",
" \"(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed \"\n",
" 'them with several examples in the input (\"few-shot prompt\"), so they can follow through. '\n",
" \"The process of creating the correct prompt for your problem is called prompt engineering, \"\n",
" \"and you can read more about it here.\"\n",
")\n",
"\n",
"semantic_text_splitter = AI21SemanticTextSplitter()\n",
"document = Document(page_content=TEXT, metadata={\"hello\": \"goodbye\"})\n",
"documents = semantic_text_splitter.split_documents([document])\n",
"print(f\"The document list has been split into {len(documents)} Documents.\")\n",
"for doc in documents:\n",
" print(f\"text: {doc.page_content}\")\n",
" print(f\"metadata: {doc.metadata}\")\n",
" print(\"====\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "2ed9a4c2",
"metadata": {},
"source": [
"# Beautiful Soup\n",
"\n",
">[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) is a Python package for parsing \n",
"> HTML and XML documents (including having malformed markup, i.e. non-closed tags, so named after tag soup). \n",
"> It creates a parse tree for parsed pages that can be used to extract data from HTML,[3] which \n",
"> is useful for web scraping.\n",
"\n",
"`Beautiful Soup` offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. \n",
"\n",
"It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.\n",
"\n",
"For example, we can scrape text content within `<p>, <li>, <div>, and <a>` tags from the HTML content:\n",
"\n",
"* `<p>`: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.\n",
" \n",
"* `<li>`: The list item tag. It is used within ordered (`<ol>`) and unordered (`<ul>`) lists to define individual items within the list.\n",
" \n",
"* `<div>`: The division tag. It is a block-level element used to group other inline or block-level elements.\n",
" \n",
"* `<a>`: The anchor tag. It is used to define hyperlinks."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dd710e5b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AsyncChromiumLoader\n",
"from langchain_community.document_transformers import BeautifulSoupTransformer\n",
"\n",
"# Load HTML\n",
"loader = AsyncChromiumLoader([\"https://www.wsj.com\"])\n",
"html = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "052b64dd",
"metadata": {},
"outputs": [],
"source": [
"# Transform\n",
"bs_transformer = BeautifulSoupTransformer()\n",
"docs_transformed = bs_transformer.transform_documents(\n",
" html, tags_to_extract=[\"p\", \"li\", \"div\", \"a\"]\n",
")"
]
},
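{
"cell_type": "markdown",
"id": "e5a1c7d3",
"metadata": {},
"source": [
"You can also remove tags instead of extracting them; a minimal sketch (the tag choices here are illustrative):\n",
"\n",
"```python\n",
"# Strip noisy tags before extracting paragraph text\n",
"docs_cleaned = bs_transformer.transform_documents(\n",
"    html, unwanted_tags=[\"script\", \"style\", \"nav\"], tags_to_extract=[\"p\"]\n",
")\n",
"```"
]
},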
{
"cell_type": "code",
"execution_count": 4,
"id": "b53a5307",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moody’s lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainment’s Barstool Sportsbook app will be rebranded as ESPN Bet this fall as '"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs_transformed[0].page_content[0:500]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"\n",
"- `rerank-2`\n",
"- `rerank-2-lite`\n",
"- `rerank-1`\n",
"- `rerank-lite-1`"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "b83dfedb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.\n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"So let’s not abandon our streets. Or choose between safety and equal justice.\n",
"\n",
"Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.\n",
"\n",
"That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\n",
"\n",
"I’ve worked on these issues a long time.\n",
"\n",
"I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.\n",
"\n",
"So let’s not abandon our streets. Or choose between safety and equal justice.\n"
]
}
],
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_openai import OpenAI\n",
"from langchain_voyageai import VoyageAIRerank\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"compressor = VoyageAIRerank(\n",
" model=\"rerank-lite-1\", voyageai_api_key=os.environ[\"VOYAGE_API_KEY\"], top_k=3\n",
")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "aa8f3d24",
"metadata": {},
"source": [
"You can of course use this retriever within a QA pipeline"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "367dafe0",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "ae697ca4",
"metadata": {},
"outputs": [],
"source": [
"chain = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(temperature=0), retriever=compression_retriever\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "46ee62fc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
" 'result': \" The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. \"}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain({\"query\": query})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152001
|
"source": [
"compressed_docs = compression_retriever.invoke(query)\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use this retriever within a QA pipeline"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. He highlighted her background as a former top litigator in private practice and a former federal public defender, as well as coming from a family of public school educators and police officers. He also mentioned that since her nomination, she has received broad support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.\"}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"chain = RetrievalQA.from_chain_type(\n",
" llm=ChatOpenAI(temperature=0), retriever=compression_retriever\n",
")\n",
"\n",
"chain({\"query\": query})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "rankllm",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
| |
152022
|
\"articleBody\": \"Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\\\nPlanning Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks. Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results. Memory Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn. Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval. Tool use The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more. Fig. 1. Overview of a LLM-powered autonomous agent system. Component One: Planning A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\\\nTask Decomposition Chain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\\\nTask decomposition can be done (1) by LLM with simple prompting like \\\\\"Steps for XYZ.\\\\\\\\n1.\\\\\", \\\\\"What are the subgoals for achieving XYZ?\\\\\", (2) by using task-specific instructions; e.g. \\\\\"Write a story outline.\\\\\" for writing a novel, or (3) with human inputs.\\\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. 
Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\\\nSelf-Reflection Self-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.\\\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\\\nThought: ... Action: ... Observation: ... ... (Repeated many times) Fig. 2. Examples of reasoning trajectories for knowledge-intensive tasks (e.g. HotpotQA, FEVER) and decision-making tasks (e.g. AlfWorld Env, WebShop). (Image source: Yao et al. 2023). In both experiments on knowledge-intensive tasks and decision-making tasks, ReAct works better than the Act-only baseline where Thought: … step is removed.\\\\nReflexion (Shinn \\\\u0026 Labash 2023) is a framework to equips agents with dynamic memory and self-reflection capabilities to improve reasoning skills. Reflexion has a standard RL setup, in which the reward model provides a simple binary reward and the action space follows the setup in ReAct where the task-specific action space is augmented with language to enable complex reasoning steps. After each action $a_t$, the agent computes a heuristic $h_t$ and optionally may decide to reset the environment to start a new trial depending on the self-reflection results.\\\\nFig. 3. Illustration of the Reflexion framework. (Image source: Shinn \\\\u0026 Labash, 2023) The heuristic function determines when the trajectory is inefficient or contains hallucination and should be stopped. Inefficient planning refers to trajectories that take too long without success. Hallucination is defined as encountering a sequence of consecutive identical actions that lead to the same observation in the environment.\\\\nSelf-reflection is created by showing two-shot examples to LLM and each example is a pair of (failed trajectory, ideal reflection for guiding future changes in the plan). Then reflections are added into the agent’s working memory, up to three, to be used as context for querying LLM.\\\\nFig. 4. Experiments on AlfWorld Env and HotpotQA. Hallucination is a more common failure than inefficient planning in AlfWorld. (Image source: Shinn \\\\u0026 Labash, 2023) Chain of Hindsight (CoH; Liu et al. 2023) encourages the model to improve on its own outputs by explicitly presenting it with a sequence of past outputs, each annotated with feedback. Human feedback data is a collection of $D_h = \\\\\\\\{(x, y_i , r_i , z_i)\\\\\\\\}_{i=1}^n$, where $x$ is the prompt, each $y_i$ is a model completion, $r_i$ is the human rating of $y_i$, and $z_i$ is the corresponding human-provided hindsight feedback. 
Assume the feedback tuples are ranked by reward, $r_n \\\\\\\\geq r_{n-1} \\\\\\\\geq \\\\\\\\dots \\\\\\\\geq r_1$ The process is supervised fine-tuning where the data is a sequence in the form of $\\\\\\\\tau_h = (x, z_i, y_i, z_j, y_j, \\\\\\\\dots, z_n, y_n)$, where $\\\\\\\\leq i \\\\\\\\leq j \\\\\\\\leq n$. The model is finetuned to only predict $y_n$ where conditioned on the sequence prefix, such that the model can self-reflect to produce better output based on the feedback sequence. The model can optionally receive multiple rounds of instructions with human annotators at test time.\\\\nTo avoid overfitting, CoH adds a regularization term to maximize the log-likelihood of the pre-training dataset. To avoid shortcutting and copying (because there are many common words in feedback sequences), they randomly mask 0% - 5% of past tokens during training.\\\\nThe training dataset in their experiments is a combination of WebGPT comparisons, summarization from human feedback and human preference dataset.\\\\nFig. 5. After fine-tuning with CoH, the model can follow instructions to produce outputs with incremental improvement in a sequence. (Image source: Liu et al. 2023) The idea of CoH is to present a history of sequentially improved outputs in context and train the model to take on the trend to produce better outputs. Algorithm Distillation (AD; Laskin et al. 2023) applies the same idea to cross-episode trajectories in reinforcement learning tasks, where an algorithm is encapsulated in a long history-conditioned policy. Considering that an agent interacts with the environment many times and in each episode the agent gets a little better, AD concatenates this learning history and feeds that into the model. Hence we should expect the next predicted action to lead to better performance than previous trials. The goal is to learn the process of RL instead of training a task-specific policy itself.\\\\nFig. 6. Illustration of how Algorithm Distillation (AD) works. (Image source: Laskin et al. 2023). The paper hypothesizes that any algorithm that generates a set of learning histories can be distilled into a neural network by performing behavioral cloning over actions. The history data is generated by a set of source policies, each trained for a specific task. At the training stage, during each RL run, a random task is sampled and a subsequence of multi-episode history is used for training, such that the learned policy is task-agnostic.\\\\nIn reality, the model has limited context window length, so episodes should be short enough to construct multi-episode history. Multi-episodic contexts of 2-4 episodes are necessary to learn a near-optimal in-context RL algorithm. The emergence of in-context RL requires long enough context.\\\\nIn comparison with three
| |
152024
|
several cases for your reference: {{ Demonstrations }}. The chat history is recorded as {{ Chat History }}. From this chat history, you can find the path of the user-mentioned resources for your task planning. (2) Model selection: LLM distributes the tasks to expert models, where the request is framed as a multiple-choice question. LLM is presented with a list of models to choose from. Due to the limited context length, task type based filtration is needed.\\\\nInstruction:\\\\nGiven the user request and the call command, the AI assistant helps the user to select a suitable model from a list of models to process the user request. The AI assistant merely outputs the model id of the most appropriate model. The output must be in a strict JSON format: \\\\\"id\\\\\": \\\\\"id\\\\\", \\\\\"reason\\\\\": \\\\\"your detail reason for the choice\\\\\". We have a list of models for you to choose from {{ Candidate Models }}. Please select one model from the list. (3) Task execution: Expert models execute on the specific tasks and log results.\\\\nInstruction:\\\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path. (4) Response generation: LLM receives the execution results and provides summarized results to users.\\\\nTo put HuggingGPT into real world usage, a couple challenges need to solve: (1) Efficiency improvement is needed as both LLM inference rounds and interactions with other models slow down the process; (2) It relies on a long context window to communicate over complicated task content; (3) Stability improvement of LLM outputs and external model services.\\\\nAPI-Bank (Li et al. 2023) is a benchmark for evaluating the performance of tool-augmented LLMs. It contains 53 commonly used API tools, a complete tool-augmented LLM workflow, and 264 annotated dialogues that involve 568 API calls. The selection of APIs is quite diverse, including search engines, calculator, calendar queries, smart home control, schedule management, health data management, account authentication workflow and more. Because there are a large number of APIs, LLM first has access to API search engine to find the right API to call and then uses the corresponding documentation to make a call.\\\\nFig. 12. Pseudo code of how LLM makes an API call in API-Bank. (Image source: Li et al. 2023) In the API-Bank workflow, LLMs need to make a couple of decisions and at each step we can evaluate how accurate that decision is. Decisions include:\\\\nWhether an API call is needed. Identify the right API to call: if not good enough, LLMs need to iteratively modify the API inputs (e.g. deciding search keywords for Search Engine API). Response based on the API results: the model can choose to refine and call again if results are not satisfied. This benchmark evaluates the agent’s tool use capabilities at three levels:\\\\nLevel-1 evaluates the ability to call the API. Given an API’s description, the model needs to determine whether to call a given API, call it correctly, and respond properly to API returns. 
Level-2 examines the ability to retrieve the API. The model needs to search for possible APIs that may solve the user’s requirement and learn how to use them by reading documentation. Level-3 assesses the ability to plan API beyond retrieve and call. Given unclear user requests (e.g. schedule group meetings, book flight/hotel/restaurant for a trip), the model may have to conduct multiple API calls to solve it. Case Studies Scientific Discovery Agent ChemCrow (Bran et al. 2023) is a domain-specific example in which LLM is augmented with 13 expert-designed tools to accomplish tasks across organic synthesis, drug discovery, and materials design. The workflow, implemented in LangChain, reflects what was previously described in the ReAct and MRKLs and combines CoT reasoning with tools relevant to the tasks:\\\\nThe LLM is provided with a list of tool names, descriptions of their utility, and details about the expected input/output. It is then instructed to answer a user-given prompt using the tools provided when necessary. The instruction suggests the model to follow the ReAct format - Thought, Action, Action Input, Observation. One interesting observation is that while the LLM-based evaluation concluded that GPT-4 and ChemCrow perform nearly equivalently, human evaluations with experts oriented towards the completion and chemical correctness of the solutions showed that ChemCrow outperforms GPT-4 by a large margin. This indicates a potential problem with using LLM to evaluate its own performance on domains that requires deep expertise. The lack of expertise may cause LLMs not knowing its flaws and thus cannot well judge the correctness of task results.\\\\nBoiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\\\nFor example, when requested to \\\\\"develop a novel anticancer drug\\\\\", the model came up with the following reasoning steps:\\\\ninquired about current trends in anticancer drug discovery; selected a target; requested a scaffold targeting these compounds; Once the compound was identified, the model attempted its synthesis. They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.\\\\nGenerative Agents Simulation Generative Agents (Park, et al. 2023) is super fun experiment where 25 virtual characters, each controlled by a LLM-powered agent, are living and interacting in a sandbox environment, inspired by The Sims. Generative agents create believable simulacra of human behavior for interactive applications.\\\\nThe design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.\\\\nMemory stream: is a long-term memory module (external database) that records a comprehensive list of agents’ experience in natural language. Each element is an observation, an event directly provided by the agent. 
- Inter-agent communication can trigger new natural language statements. Retrieval model: surfaces the context to inform the agent’s behavior, according to relevance, recency and importance. Recency: recent events have higher scores Importance: distinguish mundane from core memories. Ask LM directly. Relevance: based on how related it is to the current situation / query. Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (\\\\u003c- note that this is a bit different from self-reflection above) Prompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions. Planning \\\\u0026 Reacting: translate the reflections and the environment information into actions Planning is essentially in order to optimize believability at the moment vs in time. Prompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1) Relationships between agents and observations of one agent by another are all taken into consideration for planning and reacting. Environment information is present in a tree structure. Fig. 13. The generative agent architecture. (Image source: Park et al. 2023) This fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\\\nProof-of-Concept Examples AutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.\\\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\\\nYou are {{ai-name}}, {{user-provided AI bot description}}. Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications. GOALS: 1. {{user-provided goal 1}} 2. {{user-provided goal 2}} 3. ... 4. ... 5. ... Constraints: 1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files. 2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember. 3. No user assistance 4. Exclusively use the commands listed in double quotes
| |
152025
|
e.g. \\\\\"command name\\\\\" 5. Use subprocesses for commands that will not terminate within a few minutes Commands: 1. Google Search: \\\\\"google\\\\\", args: \\\\\"input\\\\\": \\\\\"\\\\\" 2. Browse Website: \\\\\"browse_website\\\\\", args: \\\\\"url\\\\\": \\\\\"\\\\\", \\\\\"question\\\\\": \\\\\"\\\\\" 3. Start GPT Agent: \\\\\"start_agent\\\\\", args: \\\\\"name\\\\\": \\\\\"\\\\\", \\\\\"task\\\\\": \\\\\"\\\\\", \\\\\"prompt\\\\\": \\\\\"\\\\\" 4. Message GPT Agent: \\\\\"message_agent\\\\\", args: \\\\\"key\\\\\": \\\\\"\\\\\", \\\\\"message\\\\\": \\\\\"\\\\\" 5. List GPT Agents: \\\\\"list_agents\\\\\", args: 6. Delete GPT Agent: \\\\\"delete_agent\\\\\", args: \\\\\"key\\\\\": \\\\\"\\\\\" 7. Clone Repository: \\\\\"clone_repository\\\\\", args: \\\\\"repository_url\\\\\": \\\\\"\\\\\", \\\\\"clone_path\\\\\": \\\\\"\\\\\" 8. Write to file: \\\\\"write_to_file\\\\\", args: \\\\\"file\\\\\": \\\\\"\\\\\", \\\\\"text\\\\\": \\\\\"\\\\\" 9. Read file: \\\\\"read_file\\\\\", args: \\\\\"file\\\\\": \\\\\"\\\\\" 10. Append to file: \\\\\"append_to_file\\\\\", args: \\\\\"file\\\\\": \\\\\"\\\\\", \\\\\"text\\\\\": \\\\\"\\\\\" 11. Delete file: \\\\\"delete_file\\\\\", args: \\\\\"file\\\\\": \\\\\"\\\\\" 12. Search Files: \\\\\"search_files\\\\\", args: \\\\\"directory\\\\\": \\\\\"\\\\\" 13. Analyze Code: \\\\\"analyze_code\\\\\", args: \\\\\"code\\\\\": \\\\\"\\\\\" 14. Get Improved Code: \\\\\"improve_code\\\\\", args: \\\\\"suggestions\\\\\": \\\\\"\\\\\", \\\\\"code\\\\\": \\\\\"\\\\\" 15. Write Tests: \\\\\"write_tests\\\\\", args: \\\\\"code\\\\\": \\\\\"\\\\\", \\\\\"focus\\\\\": \\\\\"\\\\\" 16. Execute Python File: \\\\\"execute_python_file\\\\\", args: \\\\\"file\\\\\": \\\\\"\\\\\" 17. Generate Image: \\\\\"generate_image\\\\\", args: \\\\\"prompt\\\\\": \\\\\"\\\\\" 18. Send Tweet: \\\\\"send_tweet\\\\\", args: \\\\\"text\\\\\": \\\\\"\\\\\" 19. Do Nothing: \\\\\"do_nothing\\\\\", args: 20. Task Complete (Shutdown): \\\\\"task_complete\\\\\", args: \\\\\"reason\\\\\": \\\\\"\\\\\" Resources: 1. Internet access for searches and information gathering. 2. Long Term memory management. 3. GPT-3.5 powered Agents for delegation of simple tasks. 4. File output. Performance Evaluation: 1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities. 2. Constructively self-criticize your big-picture behavior constantly. 3. Reflect on past decisions and strategies to refine your approach. 4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps. You should only respond in JSON format as described below Response Format: { \\\\\"thoughts\\\\\": { \\\\\"text\\\\\": \\\\\"thought\\\\\", \\\\\"reasoning\\\\\": \\\\\"reasoning\\\\\", \\\\\"plan\\\\\": \\\\\"- short bulleted\\\\\\\\n- list that conveys\\\\\\\\n- long-term plan\\\\\", \\\\\"criticism\\\\\": \\\\\"constructive self-criticism\\\\\", \\\\\"speak\\\\\": \\\\\"thoughts summary to say to user\\\\\" }, \\\\\"command\\\\\": { \\\\\"name\\\\\": \\\\\"command name\\\\\", \\\\\"args\\\\\": { \\\\\"arg name\\\\\": \\\\\"value\\\\\" } } } Ensure the response can be parsed by Python json.loads GPT-Engineer is another project to create a whole repository of code given a task specified in natural language. 
The GPT-Engineer is instructed to think over a list of smaller components to build and ask for user input to clarify questions as needed.\\\\nHere are a sample conversation for task clarification sent to OpenAI ChatCompletion endpoint used by GPT-Engineer. The user inputs are wrapped in {{user input text}}.\\\\n[ { \\\\\"role\\\\\": \\\\\"system\\\\\", \\\\\"content\\\\\": \\\\\"You will read instructions and not carry them out, only seek to clarify them.\\\\\\\\nSpecifically you will first summarise a list of super short bullets of areas that need clarification.\\\\\\\\nThen you will pick one clarifying question, and wait for an answer from the user.\\\\\\\\n\\\\\" }, { \\\\\"role\\\\\": \\\\\"user\\\\\", \\\\\"content\\\\\": \\\\\"We are writing {{a Super Mario game in python. MVC components split in separate files. Keyboard control.}}\\\\\\\\n\\\\\" }, { \\\\\"role\\\\\": \\\\\"assistant\\\\\", \\\\\"content\\\\\": \\\\\"Summary of areas that need clarification:\\\\\\\\n1. Specifics of the Super Mario game (e.g. level design, characters, gameplay mechanics)\\\\\\\\n2. Details about the MVC components (e.g. which components are in each file)\\\\\\\\n3. Keyboard control implementation (e.g. which keys to use, how to handle input)\\\\\\\\n\\\\\\\\nClarifying question:\\\\\\\\nCan you provide more details about the Super Mario game, such as level design, characters, and gameplay mechanics?\\\\\" }, { \\\\\"role\\\\\": \\\\\"user\\\\\", \\\\\"content\\\\\": \\\\\"{{There are 10 levels in total. The main character is a plumber named Mario, who can walk and jump. It is a classical platform game just like Super Mario. The main character moves from left to right, trying to get to the destination, where there are many obstacles and attacks from enemies in the process.}}\\\\\\\\n\\\\\\\\nIs anything else unclear? If yes, only answer in the form:\\\\\\\\n{remaining unclear areas} remaining questions.\\\\\\\\n{Next question}\\\\\\\\nIf everything is sufficiently clear, only answer \\\\\\\\\\\\\"Nothing more to clarify.\\\\\\\\\\\\\".\\\\\" }, { \\\\\"role\\\\\": \\\\\"assistant\\\\\", \\\\\"content\\\\\": \\\\\"Remaining unclear areas: 2 remaining questions.\\\\\\\\nCan you provide more information about how the MVC components are split into separate files?\\\\\" }, { \\\\\"role\\\\\": \\\\\"user\\\\\", \\\\\"content\\\\\": \\\\\"{{Make your own assumptions and state them explicitly before starting}}\\\\\" } ] Then after these clarification, the agent moved into the code writing mode with a different system message. System message:\\\\nYou will get instructions for code to write. You will write a very long answer. Make sure that every detail of the architecture is, in the end, implemented as code. Make sure that every detail of the architecture is, in the end, implemented as code. Think step by step and reason yourself to the right decisions to make sure we get it right. You will first lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.\\\\nThen you will output the content of each file including ALL code. Each file must strictly follow a markdown code block format, where the following tokens must be replaced such that FILENAME is the lowercase file name including the file extension, LANG is the markup code block language for the code’s language, and CODE is the code:\\\\nFILENAME\\\\nCODE You will start with the “entrypoint” file, then go to the ones that are imported by that file, and so on. 
Please note that the code should be fully functional. No placeholders.\\\\nFollow a language and framework appropriate best practice file naming convention. Make sure that files contain all imports, types etc. Make sure that code in different files are compatible with each other. Ensure to implement all code, if you are unsure, write a plausible implementation. Include module dependency or package manager dependency definition file. Before you finish, double check that all parts of the architecture is present in the files.\\\\nUseful to know: You almost always put different classes in different files. For Python, you always create an appropriate requirements.txt file. For NodeJS, you always create an appropriate package.json file. You always add a comment briefly describing the purpose of the function definition. You try to add comments explaining very complex bits of logic. You always follow the best practices for the requested languages in terms of describing the code written as a defined package/project.\\\\nPython toolbelt preferences:\\\\npytest dataclasses Conversatin samples:\\\\n[ { \\\\\"role\\\\\": \\\\\"system\\\\\", \\\\\"content\\\\\": \\\\\"You will get instructions for code to write.\\\\\\\\nYou will write a very long answer. Make sure that every detail of the
| |
152076
|
Support\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n.rst\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n.pdf\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nWelcome to LangChain\\\\n\\\\n\\\\n\\\\n\\\\n Contents \\\\n\\\\n\\\\n\\\\nGetting Started\\\\nModules\\\\nUse Cases\\\\nReference Docs\\\\nLangChain Ecosystem\\\\nAdditional Resources\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nWelcome to LangChain#\\\\nLarge language models (LLMs) are emerging as a transformative technology, enabling\\\\ndevelopers to build applications that they previously could not.\\\\nBut using these LLMs in isolation is often not enough to\\\\ncreate a truly powerful app - the real power comes when you are able to\\\\ncombine them with other sources of computation or knowledge.\\\\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\\\\n❓ Question Answering over specific documents\\\\n\\\\nDocumentation\\\\nEnd-to-end Example: Question Answering over Notion Database\\\\n\\\\n💬 Chatbots\\\\n\\\\nDocumentation\\\\nEnd-to-end Example: Chat-LangChain\\\\n\\\\n🤖 Agents\\\\n\\\\nDocumentation\\\\nEnd-to-end Example: GPT+WolframAlpha\\\\n\\\\n\\\\nGetting Started#\\\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\\\n\\\\nGetting Started Documentation\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nModules#\\\\nThere are several main modules that LangChain provides support for.\\\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\\\nThese modules are, in increasing order of complexity:\\\\n\\\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\\\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\\\\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\\\\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\\\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. 
LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\\\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nUse Cases#\\\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\\\n\\\\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\\\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\\\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\\\\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\\\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\\\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\\\\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nReference Docs#\\\\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\\\n\\\\nReference Documentation\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nLangChain Ecosystem#\\\\nGuides for how other companies/products can be used with LangChain\\\\n\\\\nLangChain Ecosystem\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nAdditional Resources#\\\\nAdditional collection of resources we think may be useful as you develop your application!\\\\n\\\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\\\nGallery: A collection of our favorite projects that use LangChain. 
Useful for finding inspiration or seeing how things were done in other applications.\\\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\\\nDiscord: Join us on our Discord to discuss all things LangChain!\\\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\\\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nnext\\\\nQuickstart Guide\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n Contents\\\\n
| |
152082
|
Search API\\\\nSerpAPI\\\\nStochasticAI\\\\nUnstructured\\\\nWeights & Biases\\\\nWeaviate\\\\nWolfram Alpha Wrapper\\\\nWriter\\\\n\\\\n\\\\n\\\\nAdditional Resources\\\\n\\\\nLangChainHub\\\\nGlossary\\\\nLangChain Gallery\\\\nDeployments\\\\nTracing\\\\nDiscord\\\\nProduction Support\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n.rst\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n.pdf\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nWelcome to LangChain\\\\n\\\\n\\\\n\\\\n\\\\n Contents \\\\n\\\\n\\\\n\\\\nGetting Started\\\\nModules\\\\nUse Cases\\\\nReference Docs\\\\nLangChain Ecosystem\\\\nAdditional Resources\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nWelcome to LangChain#\\\\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\\\\n\\\\nBe data-aware: connect a language model to other sources of data\\\\nBe agentic: allow a language model to interact with its environment\\\\n\\\\nThe LangChain framework is designed with the above principles in mind.\\\\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\\\\n\\\\nGetting Started#\\\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\\\n\\\\nGetting Started Documentation\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nModules#\\\\nThere are several main modules that LangChain provides support for.\\\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\\\nThese modules are, in increasing order of complexity:\\\\n\\\\nModels: The various model types and model integrations LangChain supports.\\\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nUse Cases#\\\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\\\n\\\\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\\\\nQuestion Answering: The second big LangChain use case. 
Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\\\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\\\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\\\\nExtraction: Extract structured information from text.\\\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nReference Docs#\\\\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\\\n\\\\nReference Documentation\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nLangChain Ecosystem#\\\\nGuides for how other companies/products can be used with LangChain\\\\n\\\\nLangChain Ecosystem\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nAdditional Resources#\\\\nAdditional collection of resources we think may be useful as you develop your application!\\\\n\\\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\\\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\\\nDiscord: Join us on our Discord to discuss all things LangChain!\\\\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nnext\\\\nQuickstart Guide\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n Contents\\\\n
| |
152101
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyPDFium2Loader\n",
"\n",
"\n",
"This notebook provides a quick overview for getting started with PyPDFium2 [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyPDFium2Loader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| PyPDFium2Loader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"\n",
"To access PyPDFium2 document loader you'll need to install the `langchain-community` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are needed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate our model object and load documents:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFium2Loader\n",
"\n",
"file_path = \"./example_data/layout-parser-paper.pdf\"\n",
"loader = PyPDFium2Loader(file_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\\r\\nLearning Based Document Image Analysis\\r\\nZejiang Shen\\r\\n1\\r\\n(), Ruochen Zhang\\r\\n2\\r\\n, Melissa Dell\\r\\n3\\r\\n, Benjamin Charles Germain\\r\\nLee\\r\\n4\\r\\n, Jacob Carlson\\r\\n3\\r\\n, and Weining Li\\r\\n5\\r\\n1 Allen Institute for AI\\r\\nshannons@allenai.org 2 Brown University\\r\\nruochen zhang@brown.edu 3 Harvard University\\r\\n{melissadell,jacob carlson}@fas.harvard.edu\\r\\n4 University of Washington\\r\\nbcgl@cs.washington.edu 5 University of Waterloo\\r\\nw422li@uwaterloo.ca\\r\\nAbstract. Recent advances in document image analysis (DIA) have been\\r\\nprimarily driven by the application of neural networks. Ideally, research\\r\\noutcomes could be easily deployed in production and extended for further\\r\\ninvestigation. However, various factors like loosely organized codebases\\r\\nand sophisticated model configurations complicate the easy reuse of im\\x02portant innovations by a wide audience. Though there have been on-going\\r\\nefforts to improve reusability and simplify deep learning (DL) model\\r\\ndevelopment in disciplines like natural language processing and computer\\r\\nvision, none of them are optimized for challenges in the domain of DIA.\\r\\nThis represents a major gap in the existing toolkit, as DIA is central to\\r\\nacademic research across a wide range of disciplines in the social sciences\\r\\nand humanities. This paper introduces LayoutParser, an open-source\\r\\nlibrary for streamlining the usage of DL in DIA research and applica\\x02tions. The core LayoutParser library comes with a set of simple and\\r\\nintuitive interfaces for applying and customizing DL models for layout de\\x02tection, character recognition, and many other document processing tasks.\\r\\nTo promote extensibility, LayoutParser also incorporates a community\\r\\nplatform for sharing both pre-trained models and full document digiti\\x02zation pipelines. We demonstrate that LayoutParser is helpful for both\\r\\nlightweight and large-scale digitization pipelines in real-word use cases.\\r\\nThe library is publicly available at https://layout-parser.github.io.\\r\\nKeywords: Document Image Analysis· Deep Learning· Layout Analysis\\r\\n· Character Recognition· Open Source library· Toolkit.\\r\\n1 Introduction\\r\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\r\\ndocument image analysis (DIA) tasks including document image classification [11,\\r\\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\\n')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': './example_data/layout-parser-paper.pdf', 'page': 0}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
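{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your installed version supports it, `PyPDFium2Loader` can also run OCR over images embedded in the PDF via the `extract_images` flag (this relies on the `rapidocr-onnxruntime` package). A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OCR embedded images as well (requires the rapidocr-onnxruntime package)\n",
"loader_with_images = PyPDFium2Loader(file_path, extract_images=True)\n",
"docs_with_images = loader_with_images.load()"
]
},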
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PyPDFium2Loader features and configurations head to the API reference: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
| |
152114
|
{
"cells": [
{
"cell_type": "markdown",
"id": "d9826810",
"metadata": {},
"source": [
"# Copy Paste\n",
"\n",
"This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "fd9e71a2",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f40d3f30",
"metadata": {},
"outputs": [],
"source": [
"text = \"..... put the text you copy pasted here......\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d409bdba",
"metadata": {},
"outputs": [],
"source": [
"doc = Document(page_content=text)"
]
},
{
"cell_type": "markdown",
"id": "cc0eff72",
"metadata": {},
"source": [
"## Metadata\n",
"If you want to add metadata about the where you got this piece of text, you easily can with the metadata key."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fe3aa5aa",
"metadata": {},
"outputs": [],
"source": [
"metadata = {\"source\": \"internet\", \"date\": \"Friday\"}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "827d4e91",
"metadata": {},
"outputs": [],
"source": [
"doc = Document(page_content=text, metadata=metadata)"
]
},
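{
"cell_type": "markdown",
"id": "b7d1e0f3",
"metadata": {},
"source": [
"To confirm that the document carries both the text and the metadata, inspect its `page_content` and `metadata` attributes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c986a43d",
"metadata": {},
"outputs": [],
"source": [
"# A Document exposes its text and metadata as plain attributes\n",
"print(doc.page_content)\n",
"print(doc.metadata)"
]
}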
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152145
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyPDFDirectoryLoader\n",
"\n",
"This loader loads all PDF files from a specific directory.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyPDFDirectoryLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| PyPDFDirectoryLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are needed for this loader."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate our model object and load documents:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFDirectoryLoader\n",
"\n",
"directory_path = (\n",
" \"../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n",
")\n",
"loader = PyPDFDirectoryLoader(\"example_data/\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': 0}, page_content='LayoutParser : A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1( \\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1Allen Institute for AI\\nshannons@allenai.org\\n2Brown University\\nruochen zhang@brown.edu\\n3Harvard University\\n{melissadell,jacob carlson }@fas.harvard.edu\\n4University of Washington\\nbcgl@cs.washington.edu\\n5University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser , an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io .\\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\\n·Character Recognition ·Open Source library ·Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classification [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': 'example_data/layout-parser-paper.pdf', 'page': 0}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
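{
"cell_type": "markdown",
"metadata": {},
"source": [
"`PyPDFDirectoryLoader` also accepts directory-level options. The cell below is a minimal sketch assuming the `glob`, `recursive`, and `silent_errors` parameters documented in the API reference linked below; verify them against your installed version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFDirectoryLoader\n",
"\n",
"# Assumed parameters -- verify against the API reference for your version:\n",
"# glob narrows which files are matched, recursive descends into subdirectories,\n",
"# and silent_errors skips unreadable files instead of raising.\n",
"loader = PyPDFDirectoryLoader(\n",
"    \"example_data/\",\n",
"    glob=\"**/*.pdf\",\n",
"    recursive=True,\n",
"    silent_errors=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"len(docs)"
]
},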
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PyPDFDirectoryLoader features and configurations head to the API reference: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
| |
152205
|
{
"cells": [
{
"cell_type": "markdown",
"id": "2dfc4698",
"metadata": {},
"source": [
"# URL\n",
"\n",
"This example covers how to load `HTML` documents from a list of `URLs` into the `Document` format that we can use downstream.\n",
"\n",
"## Unstructured URL Loader\n",
"\n",
"For the examples below, please install the `unstructured` library and see [this guide](/docs/integrations/providers/unstructured/) for more instructions on setting up Unstructured locally, including setting up required system dependencies:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb26084d-a2b0-4685-9ec4-346139ffe0fb",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet unstructured"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "16c3699e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import UnstructuredURLLoader\n",
"\n",
"urls = [\n",
" \"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023\",\n",
" \"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023\",\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "33089aba-ff74-4d00-8f40-9449c29587cc",
"metadata": {},
"source": [
"Pass in ssl_verify=False with headers=headers to get past ssl_verification errors."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "00f46fda",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
| |
152217
|
{
"cells": [
{
"cell_type": "markdown",
"id": "20deed05",
"metadata": {},
"source": [
"# Unstructured\n",
"\n",
"This notebook covers how to use `Unstructured` [document loader](https://python.langchain.com/docs/concepts/#document-loaders) to load files of many types. `Unstructured` currently supports loading of text files, powerpoints, html, pdfs, images, and more.\n",
"\n",
"Please see [this guide](../../integrations/providers/unstructured.mdx) for more instructions on setting up Unstructured locally, including setting up required system dependencies.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/file_loaders/unstructured/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [UnstructuredLoader](https://python.langchain.com/api_reference/unstructured/document_loaders/langchain_unstructured.document_loaders.UnstructuredLoader.html) | [langchain_unstructured](https://python.langchain.com/api_reference/unstructured/index.html) | ✅ | ❌ | ✅ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| UnstructuredLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"### Credentials\n",
"\n",
"By default, `langchain-unstructured` installs a smaller footprint that requires offloading of the partitioning logic to the Unstructured API, which requires an API key. If you use the local installation, you do not need an API key. To get your API key, head over to [this site](https://unstructured.io) and get an API key, and then set it in the cell below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2886982e",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"UNSTRUCTURED_API_KEY\" not in os.environ:\n",
" os.environ[\"UNSTRUCTURED_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Unstructured API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "e75e2a6d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"#### Normal Installation\n",
"\n",
"The following packages are required to run the rest of this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9de83b3",
"metadata": {},
"outputs": [],
"source": [
"# Install package, compatible with API partitioning\n",
"%pip install --upgrade --quiet langchain-unstructured unstructured-client unstructured \"unstructured[pdf]\" python-magic"
]
},
{
"cell_type": "markdown",
"id": "637eda35",
"metadata": {},
"source": [
"#### Installation for Local\n",
"\n",
"If you would like to run the partitioning logic locally, you will need to install a combination of system dependencies, as outlined in the [Unstructured documentation here](https://docs.unstructured.io/open-source/installation/full-installation).\n",
"\n",
"For example, on Macs you can install the required dependencies with:\n",
"\n",
"```bash\n",
"# base dependencies\n",
"brew install libmagic poppler tesseract\n",
"\n",
"# If parsing xml / html documents:\n",
"brew install libxml2 libxslt\n",
"```\n",
"\n",
"You can install the required `pip` dependencies needed for local with:\n",
"\n",
"```bash\n",
"pip install \"langchain-unstructured[local]\"\n",
"```"
]
},
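{
"cell_type": "markdown",
"id": "b1f2a3c4",
"metadata": {},
"source": [
"With the local extras installed, partitioning runs in-process and no API key is needed. The cell below is a minimal sketch: the `strategy` keyword is forwarded to Unstructured's partitioning, and its accepted values (e.g. \"fast\", \"hi_res\") may vary by version, so treat them as assumptions to verify."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2d3e4f5",
"metadata": {},
"outputs": [],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"# Partition locally; strategy is forwarded to unstructured's partitioning\n",
"# (assumed kwarg -- \"fast\" favors speed over layout fidelity).\n",
"local_loader = UnstructuredLoader(\n",
"    \"./example_data/layout-parser-paper.pdf\",\n",
"    strategy=\"fast\",\n",
")\n",
"\n",
"local_docs = local_loader.load()\n",
"len(local_docs)"
]
},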
{
"cell_type": "markdown",
"id": "a9c1c775",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"The `UnstructuredLoader` allows loading from a variety of different file types. To read all about the `unstructured` package please refer to their [documentation](https://docs.unstructured.io/open-source/introduction/overview)/. In this example, we show loading from both a text file and a PDF file."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "79d3e549",
"metadata": {},
"outputs": [],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"file_paths = [\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" \"./example_data/state_of_the_union.txt\",\n",
"]\n",
"\n",
"\n",
"loader = UnstructuredLoader(file_paths)"
]
},
{
"cell_type": "markdown",
"id": "8b68dcab",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8da59ef8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO: pikepdf C++ to Python logger bridge initialized\n"
]
},
{
"data": {
"text/plain": [
"Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'd3ce55f220dfb75891b4394a18bcb973'}, page_content='1 2 0 2')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "97f7aa1f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'd3ce55f220dfb75891b4394a18bcb973'}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"id": "0d7f991b",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b05604d2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
| |
152220
|
"import requests\n",
"from langchain_unstructured import UnstructuredLoader\n",
"from unstructured_client import UnstructuredClient\n",
"from unstructured_client.utils import BackoffStrategy, RetryConfig\n",
"\n",
"client = UnstructuredClient(\n",
" api_key_auth=os.getenv(\n",
" \"UNSTRUCTURED_API_KEY\"\n",
" ), # Note: the client API param is \"api_key_auth\" instead of \"api_key\"\n",
" client=requests.Session(), # Define your own requests session\n",
" server_url=\"https://api.unstructuredapp.io/general/v0/general\", # Define your own api url\n",
" retry_config=RetryConfig(\n",
" strategy=\"backoff\",\n",
" retry_connection_errors=True,\n",
" backoff=BackoffStrategy(\n",
" initial_interval=500,\n",
" max_interval=60000,\n",
" exponent=1.5,\n",
" max_elapsed_time=900000,\n",
" ),\n",
" ), # Define your own retry config\n",
")\n",
"\n",
"loader = UnstructuredLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" partition_via_api=True,\n",
" client=client,\n",
" split_pdf_page=True,\n",
" split_pdf_page_range=[1, 10],\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"print(docs[0].metadata[\"filename\"], \": \", docs[0].page_content[:100])"
]
},
{
"cell_type": "markdown",
"id": "c66fbeb3",
"metadata": {},
"source": [
"## Chunking\n",
"\n",
"The `UnstructuredLoader` does not support `mode` as parameter for grouping text like the older\n",
"loader `UnstructuredFileLoader` and others did. It instead supports \"chunking\". Chunking in\n",
"unstructured differs from other chunking mechanisms you may be familiar with that form chunks based\n",
"on plain-text features--character sequences like \"\\n\\n\" or \"\\n\" that might indicate a paragraph\n",
"boundary or list-item boundary. Instead, all documents are split using specific knowledge about each\n",
"document format to partition the document into semantic units (document elements) and we only need to\n",
"resort to text-splitting when a single element exceeds the desired maximum chunk size. In general,\n",
"chunking combines consecutive elements to form chunks as large as possible without exceeding the\n",
"maximum chunk size. Chunking produces a sequence of CompositeElement, Table, or TableChunk elements.\n",
"Each “chunk” is an instance of one of these three types.\n",
"\n",
"See this [page](https://docs.unstructured.io/open-source/core-functionality/chunking) for more\n",
"details about chunking options, but to reproduce the same behavior as `mode=\"single\"`, you can set\n",
"`chunking_strategy=\"basic\"`, `max_characters=<some-really-big-number>`, and `include_orig_elements=False`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e9f1c20d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of LangChain documents: 1\n",
"Length of text in the document: 42772\n"
]
}
],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"loader = UnstructuredLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" chunking_strategy=\"basic\",\n",
" max_characters=1000000,\n",
" include_orig_elements=False,\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"print(\"Number of LangChain documents:\", len(docs))\n",
"print(\"Length of text in the document:\", len(docs[0].page_content))"
]
},
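{
"cell_type": "markdown",
"id": "a7b8c9d0",
"metadata": {},
"source": [
"Unstructured also provides a \"by_title\" chunking strategy that starts a new chunk whenever a Title element (e.g. a section heading) is encountered, so chunks tend to align with document sections. A minimal sketch, assuming the options described on the chunking page linked above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1f2a3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"# \"by_title\" begins a new chunk at each Title element; max_characters\n",
"# still caps the size of any single chunk.\n",
"loader = UnstructuredLoader(\n",
"    \"./example_data/layout-parser-paper.pdf\",\n",
"    chunking_strategy=\"by_title\",\n",
"    max_characters=2000,\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(\"Number of LangChain documents:\", len(docs))"
]
},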
{
"cell_type": "markdown",
"id": "3ec3c22d-02cd-498b-921f-b839d1404f32",
"metadata": {},
"source": [
"## Loading web pages\n",
"\n",
"`UnstructuredLoader` accepts a `web_url` kwarg when run locally that populates the `url` parameter of the underlying Unstructured [partition](https://docs.unstructured.io/open-source/core-functionality/partitioning). This allows for the parsing of remotely hosted documents, such as HTML web pages.\n",
"\n",
"Example usage:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "bf9a8546-659d-4861-bff2-fdf1ad93ac65",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Example Domain' metadata={'category_depth': 0, 'languages': ['eng'], 'filetype': 'text/html', 'url': 'https://www.example.com', 'category': 'Title', 'element_id': 'fdaa78d856f9d143aeeed85bf23f58f8'}\n",
"\n",
"page_content='This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission.' metadata={'languages': ['eng'], 'parent_id': 'fdaa78d856f9d143aeeed85bf23f58f8', 'filetype': 'text/html', 'url': 'https://www.example.com', 'category': 'NarrativeText', 'element_id': '3652b8458b0688639f973fe36253c992'}\n",
"\n",
"page_content='More information...' metadata={'category_depth': 0, 'link_texts': ['More information...'], 'link_urls': ['https://www.iana.org/domains/example'], 'languages': ['eng'], 'filetype': 'text/html', 'url': 'https://www.example.com', 'category': 'Title', 'element_id': '793ab98565d6f6d6f3a6d614e3ace2a9'}\n",
"\n"
]
}
],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"loader = UnstructuredLoader(web_url=\"https://www.example.com\")\n",
"docs = loader.load()\n",
"\n",
"for doc in docs:\n",
" print(f\"{doc}\\n\")"
]
},
{
"cell_type": "markdown",
"id": "ce01aa40",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `UnstructuredLoader` features and configurations head to the API reference: https://python.langchain.com/api_reference/unstructured/document_loaders/langchain_unstructured.document_loaders.UnstructuredLoader.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152248
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6-0_o3DxsFGi"
},
"source": [
"# Google Memorystore for Redis\n",
"\n",
 [Google Memory">
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's LangChain integrations.\n",
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please specify an endpoint associated with the instance and a key prefix for demo purpose.\n",
"ENDPOINT = \"redis://127.0.0.1:6379\" # @param {type:\"string\"}\n",
"KEY_PREFIX = \"doc:\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-memorystore-redis` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-memorystore-redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2L7kMu__sFGl"
},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save documents\n",
"\n",
"Save langchain documents with `MemorystoreDocumentSaver.add_documents(<documents>)`. To initialize `MemorystoreDocumentSaver` class you need to provide 2 things:\n",
"\n",
"1. `client` - A `redis.Redis` client object.\n",
"1. `key_prefix` - A prefix for the keys to store Documents in Redis.\n",
"\n",
"The Documents will be stored into randomly generated keys with the specified prefix of `key_prefix`. Alternatively, you can designate the suffixes of the keys by specifying `ids` in the `add_documents` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import redis\n",
"from langchain_core.documents import Document\n",
"from langchain_google_memorystore_redis import MemorystoreDocumentSaver\n",
"\n",
"test_docs = [\n",
" Document(\n",
" page_content=\"Apple Granny Smith 150 0.99 1\",\n",
" metadata={\"fruit_id\": 1},\n",
" ),\n",
" Document(\n",
" page_content=\"Banana Cavendish 200 0.59 0\",\n",
" metadata={\"fruit_id\": 2},\n",
" ),\n",
" Document(\n",
" page_content=\"Orange Navel 80 1.29 1\",\n",
" metadata={\"fruit_id\": 3},\n",
" ),\n",
"]\n",
"doc_ids = [f\"{i}\" for i in range(len(test_docs))]\n",
"\n",
"redis_client = redis.from_url(ENDPOINT)\n",
"saver = MemorystoreDocumentSaver(\n",
" client=redis_client,\n",
" key_prefix=KEY_PREFIX,\n",
" content_field=\"page_content\",\n",
")\n",
"saver.add_documents(test_docs, ids=doc_ids)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A2fT1iEhsFGl"
},
"source": [
"### Load documents\n",
"\n",
"Initialize a loader that loads all documents stored in the Memorystore for Redis instance with a specific prefix.\n",
"\n",
"Load langchain documents with `MemorystoreDocumentLoader.load()` or `MemorystoreDocumentLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `MemorystoreDocumentLoader` class you need to provide:\n",
"\n",
"1. `client` - A `redis.Redis` client object.\n",
| |
152258
|
{
"cells": [
{
"cell_type": "markdown",
"id": "39af9ecd",
"metadata": {},
"source": [
"# Microsoft Word\n",
"\n",
">[Microsoft Word](https://www.microsoft.com/en-us/microsoft-365/word) is a word processor developed by Microsoft.\n",
"\n",
"This covers how to load `Word` documents into a document format that we can use downstream."
]
},
{
"cell_type": "markdown",
"id": "9438686b",
"metadata": {},
"source": [
"## Using Docx2txt\n",
"\n",
"Load .docx using `Docx2txt` into a document."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b80ea891",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet docx2txt"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b80ea89",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': './example_data/fake.docx'})]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import Docx2txtLoader\n",
"\n",
"loader = Docx2txtLoader(\"./example_data/fake.docx\")\n",
"\n",
"data = loader.load()\n",
"\n",
"data"
]
},
{
"cell_type": "markdown",
"id": "8d40727d",
"metadata": {},
"source": [
"## Using Unstructured\n",
"\n",
"Please see [this guide](/docs/integrations/providers/unstructured/) for more instructions on setting up Unstructured locally, including setting up required system dependencies."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "721c48aa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredWordDocumentLoader\n",
"\n",
"loader = UnstructuredWordDocumentLoader(\"example_data/fake.docx\")\n",
"\n",
"data = loader.load()\n",
"\n",
"data"
]
},
{
"cell_type": "markdown",
"id": "525d6b67",
"metadata": {},
"source": [
"### Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "064f9162",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': './example_data/fake.docx', 'category_depth': 0, 'file_directory': './example_data', 'filename': 'fake.docx', 'last_modified': '2023-12-19T13:42:18', 'languages': ['por', 'cat'], 'filetype': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'category': 'Title'})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredWordDocumentLoader(\"./example_data/fake.docx\", mode=\"elements\")\n",
"\n",
"data = loader.load()\n",
"\n",
"data[0]"
]
},
{
"cell_type": "markdown",
"id": "c1f3b83f",
"metadata": {},
"source": [
"## Using Azure AI Document Intelligence\n",
"\n",
[Azure AI Docum">
">[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based\n",
">service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings) and key-value pairs from\n",
">digital or scanned PDFs, images, Office and HTML files.\n",
">\n",
">Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.\n",
"\n",
"This current implementation of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode=\"single\"` or `mode=\"page\"` to return pure texts in a single page or document split by page.\n"
]
},
{
"cell_type": "markdown",
"id": "a5bd47c2",
"metadata": {},
"source": [
"## Prerequisite\n",
"\n",
"An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have. You will be passing `<endpoint>` and `<key>` as parameters to the loader."
]
},
{
"cell_type": "markdown",
"id": "71cbdfe0",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "691bd9e8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader\n",
"\n",
"file_path = \"<filepath>\"\n",
"endpoint = \"<endpoint>\"\n",
"key = \"<key>\"\n",
"loader = AzureAIDocumentIntelligenceLoader(\n",
" api_endpoint=endpoint, api_key=key, file_path=file_path, api_model=\"prebuilt-layout\"\n",
")\n",
"\n",
"documents = loader.load()"
]
}
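,
{
"cell_type": "markdown",
"id": "8f2c1b9a",
"metadata": {},
"source": [
"As noted above, the loader can also return plain text instead of markdown. A minimal sketch reusing the variables from the previous cell and the `mode=\"page\"` option mentioned earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a3d2c0b",
"metadata": {},
"outputs": [],
"source": [
"# mode=\"page\" returns pure text split by page instead of markdown output\n",
"loader = AzureAIDocumentIntelligenceLoader(\n",
"    api_endpoint=endpoint,\n",
"    api_key=key,\n",
"    file_path=file_path,\n",
"    api_model=\"prebuilt-layout\",\n",
"    mode=\"page\",\n",
")\n",
"\n",
"documents = loader.load()"
]
}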
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152334
|
{
"cells": [
{
"cell_type": "markdown",
"id": "f70e6118",
"metadata": {},
"source": [
"# Images\n",
"\n",
"This covers how to load images into a document format that we can use downstream with other LangChain modules.\n",
"\n",
"It uses [Unstructured](https://unstructured.io/) to handle a wide variety of image formats, such as `.jpg` and `.png`. Please see [this guide](/docs/integrations/providers/unstructured/) for more instructions on setting up Unstructured locally, including setting up required system dependencies."
]
},
{
"cell_type": "markdown",
"id": "09d64998",
"metadata": {},
"source": [
"## Using Unstructured"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db8e56db-2e66-443b-8a0b-ef69fa5fae9a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"unstructured[all-docs]\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0cc0cd42",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='2021\\n\\n2103.15348v2 [cs.CV] 21 Jun\\n\\narXiv\\n\\nLayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis\\n\\nZejiang Shen! (&4), Ruochen Zhang?, Melissa Dell?, Benjamin Charles Germain Lee*, Jacob Carlson?, and Weining Li?\\n\\n1\\n\\nAllen Institute for AI shannons@allenai.org ? Brown University ruochen_zhang@brown. edu 3 Harvard University {melissadell, jacob_carlson}@fas.harvard.edu 4 University of Washington begl@cs.washington.edu 5 University of Waterloo w4221i@uwaterloo.ca\\n\\nAbstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https: //layout-parser.github. io.\\n\\nKeywords: Document Image Analysis - Deep Learning - Layout Analysis - Character Recognition - Open Source library - Toolkit.\\n\\n1 Introduction\\n\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks including document image classification [11,', metadata={'source': './example_data/layout-parser-paper-screenshot.png'})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders.image import UnstructuredImageLoader\n",
"\n",
"loader = UnstructuredImageLoader(\"./example_data/layout-parser-paper-screenshot.png\")\n",
"\n",
"data = loader.load()\n",
"\n",
"data[0]"
]
},
{
"cell_type": "markdown",
"id": "09957371",
"metadata": {},
"source": [
"### Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0fab833b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='2021', metadata={'source': './example_data/layout-parser-paper-screenshot.png', 'coordinates': {'points': ((47.0, 492.0), (47.0, 591.0), (83.0, 591.0), (83.0, 492.0)), 'system': 'PixelSpace', 'layout_width': 1624, 'layout_height': 1920}, 'last_modified': '2024-07-01T10:38:29', 'filetype': 'PNG', 'languages': ['eng'], 'page_number': 1, 'file_directory': './example_data', 'filename': 'layout-parser-paper-screenshot.png', 'category': 'UncategorizedText'})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredImageLoader(\n",
" \"./example_data/layout-parser-paper-screenshot.png\", mode=\"elements\"\n",
")\n",
"\n",
"data = loader.load()\n",
"\n",
"data[0]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152337
|
"to get more details about configuration parameters."
]
},
{
"cell_type": "markdown",
"id": "de97d0ed-d6b1-44e0-b392-1f3d89c762f9",
"metadata": {},
"source": [
"### Basic example"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "50ffeeee-db12-4801-b208-7e32ea3d72ad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\n\\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\n\\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\n\\n\\nWith a duty to one another to the American people to '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import DedocFileLoader\n",
"\n",
"loader = DedocFileLoader(\"./example_data/state_of_the_union.txt\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "457e5d4c-a4ee-4f31-ae74-3f75a1bbd0af",
"metadata": {},
"source": [
"### Modes of split\n",
"\n",
"`DedocFileLoader` supports different types of document splitting into parts (each part is returned separately).\n",
"For this purpose, `split` parameter is used with the following options:\n",
"* `document` (default value): document text is returned as a single langchain `Document` object (don't split);\n",
"* `page`: split document text into pages (works for `PDF`, `DJVU`, `PPTX`, `PPT`, `ODP`);\n",
"* `node`: split document text into `Dedoc` tree nodes (title nodes, list item nodes, raw text nodes);\n",
"* `line`: split document text into textual lines."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "eec54d31-ae7a-4a3c-aa10-4ae276b1e4c4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" split=\"page\",\n",
" pages=\":2\",\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"len(docs)"
]
},
{
"cell_type": "markdown",
"id": "61e11769-4780-4f77-b10e-27db6936f226",
"metadata": {},
"source": [
"### Handling tables\n",
"\n",
"`DedocFileLoader` supports tables handling when `with_tables` parameter is \n",
"set to `True` during loader initialization (`with_tables=True` by default). \n",
"\n",
"Tables are not split - each table corresponds to one langchain `Document` object.\n",
"For tables, `Document` object has additional `metadata` fields `type=\"table\"` \n",
"and `text_as_html` with table `HTML` representation."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bbeb2f8a-ac5e-4b59-8026-7ea3fc14c928",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('table',\n",
" '<table border=\"1\" style=\"border-collapse: collapse; width: 100%;\">\\n<tbody>\\n<tr>\\n<td colspan=\"1\" rowspan=\"1\">Team</td>\\n<td colspan=\"1\" rowspan=\"1\"> "Payroll (millions)"</td>\\n<td colspan=\"1\" r')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\"./example_data/mlb_teams_2012.csv\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[1].metadata[\"type\"], docs[1].metadata[\"text_as_html\"][:200]"
]
},
{
"cell_type": "markdown",
"id": "b4a2b872-2aba-4e4c-8b2f-83a5a81ee1da",
"metadata": {},
"source": [
"### Handling attached files\n",
"\n",
"`DedocFileLoader` supports attached files handling when `with_attachments` is set \n",
"to `True` during loader initialization (`with_attachments=False` by default). \n",
"\n",
"Attachments are split according to the `split` parameter.\n",
"For attachments, langchain `Document` object has an additional metadata \n",
"field `type=\"attachment\"`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bb9d6c1c-e24c-4979-88a0-38d54abd6332",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('attachment',\n",
" '\\nContent-Type\\nmultipart/mixed; boundary=\"0000000000005d654405f082adb7\"\\nDate\\nFri, 23 Dec 2022 12:08:48 -0600\\nFrom\\nMallori Harrell <mallori@unstructured.io>\\nMIME-Version\\n1.0\\nMessage-ID\\n<CAPgNNXSzLVJ-d1OCX_TjFgJU7ugtQrjFybPtAMmmYZzphxNFYg@mail.gmail.com>\\nSubject\\nFake email with attachment\\nTo\\nMallori Harrell <mallori@unstructured.io>')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\n",
" \"./example_data/fake-email-attachment.eml\",\n",
" with_attachments=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[1].metadata[\"type\"], docs[1].page_content"
]
},
{
"cell_type": "markdown",
"id": "d435c3f6-703a-4064-8307-ace140de967a",
"metadata": {},
"source": [
"## Loading PDF file\n",
"\n",
"If you want to handle only `PDF` documents, you can use `DedocPDFLoader` with only `PDF` support.\n",
"The loader supports the same parameters for document split, tables and attachments extraction.\n",
"\n",
"`Dedoc` can extract `PDF` with or without a textual layer, \n",
"as well as automatically detect its presence and correctness.\n",
"Several `PDF` handlers are available, you can use `pdf_with_text_layer` \n",
"parameter to choose one of them.\n",
"Please see [parameters description](https://dedoc.readthedocs.io/en/latest/parameters/pdf_handling.html) \n",
"to get more details.\n",
"\n",
"For `PDF` without a textual layer, `Tesseract OCR` and its language packages should be installed.\n",
"In this case, [the instruction](https://dedoc.readthedocs.io/en/latest/tutorials/add_new_language.html) can be useful."
]
},
{
"cell_type": "code",
"execution_count": 9,
| |
152359
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# JSONLoader\n",
"\n",
"This notebook provides a quick overview for getting started with JSON [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all JSONLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html).\n",
"\n",
"- TODO: Add any other relevant links, like information about underlying API, etc.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/file_loaders/json/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [JSONLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| JSONLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access JSON document loader you'll need to install the `langchain-community` integration package as well as the ``jq`` python package.\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are required to use the `JSONLoader` class."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **jq**:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community jq "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate our model object and load documents:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import JSONLoader\n",
"\n",
"loader = JSONLoader(\n",
" file_path=\"./example_data/facebook_chat.json\",\n",
" jq_schema=\".messages[].content\",\n",
" text_content=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': '/Users/isaachershenson/Documents/langchain/docs/docs/integrations/document_loaders/example_data/facebook_chat.json', 'seq_num': 1}, page_content='Bye!')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': '/Users/isaachershenson/Documents/langchain/docs/docs/integrations/document_loaders/example_data/facebook_chat.json', 'seq_num': 1}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"pages = []\n",
"for doc in loader.lazy_load():\n",
" pages.append(doc)\n",
" if len(pages) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(pages)\n",
"\n",
" pages = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read from JSON Lines file\n",
"\n",
"If you want to load documents from a JSON Lines file, you pass `json_lines=True`\n",
"and specify `jq_schema` to extract `page_content` from a single JSON object."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Bye!' metadata={'source': '/Users/isaachershenson/Documents/langchain/docs/docs/integrations/document_loaders/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}\n"
]
}
],
"source": [
"loader = JSONLoader(\n",
" file_path=\"./example_data/facebook_chat_messages.jsonl\",\n",
" jq_schema=\".content\",\n",
" text_content=False,\n",
" json_lines=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(docs[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read specific content keys\n",
"\n",
"Another option is to set `jq_schema='.'` and provide a `content_key` in order to only load specific content:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='User 2' metadata={'source': '/Users/isaachershenson/Documents/langchain/docs/docs/integrations/document_loaders/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}\n"
]
}
],
"source": [
"loader = JSONLoader(\n",
" file_path=\"./example_data/facebook_chat_messages.jsonl\",\n",
" jq_schema=\".\",\n",
" content_key=\"sender_name\",\n",
" json_lines=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(docs[0])"
]
},
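{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach extra metadata with `metadata_func`\n",
"\n",
"You can also copy fields from each JSON record into the document metadata by supplying a `metadata_func`. It receives the extracted record and the default metadata (`source`, `seq_num`) and returns the metadata to store. A minimal sketch, assuming each message in `facebook_chat.json` carries a `sender_name` field as in the earlier examples:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import JSONLoader\n",
"\n",
"\n",
"def metadata_func(record: dict, metadata: dict) -> dict:\n",
"    # sender_name is assumed to exist on each message record\n",
"    metadata[\"sender_name\"] = record.get(\"sender_name\")\n",
"    return metadata\n",
"\n",
"\n",
"loader = JSONLoader(\n",
"    file_path=\"./example_data/facebook_chat.json\",\n",
"    jq_schema=\".messages[]\",\n",
"    content_key=\"content\",\n",
"    metadata_func=metadata_func,\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(docs[0].metadata)"
]
},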
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## JSON file with jq schema `content_key`\n",
"\n",
"To load documents from a JSON file using the `content_key` within the jq schema, set `is_content_key_jq_parsable=True`. Ensure that `content_key` is compatible and can be parsed using the jq schema."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Bye!' metadata={'source': '/Users/isaachershenson/Documents/langchain/docs/docs/integrations/document_loaders/example_data/facebook_chat.json', 'seq_num': 1}\n"
]
}
],
"source": [
"loader = JSONLoader(\n",
" file_path=\"./example_data/facebook_chat.json\",\n",
" jq_schema=\".messages[]\",\n",
" content_key=\".content\",\n",
| |
152362
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Spanner\n",
"\n",
 [Spanner](http">
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL, providing 99.999% availability in one easy solution.\n",
"\n",
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
"* [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
"* [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)\n",
"* [Create a Spanner table](https://cloud.google.com/spanner/docs/create-query-database-console#create-schema)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please specify an instance id, a database, and a table for demo purpose.\n",
"INSTANCE_ID = \"test_instance\" # @param {type:\"string\"}\n",
"DATABASE_ID = \"test_database\" # @param {type:\"string\"}\n",
"TABLE_NAME = \"test_table\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-spanner` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-spanner langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save documents\n",
"\n",
"Save langchain documents with `SpannerDocumentSaver.add_documents(<documents>)`. To initialize `SpannerDocumentSaver` class you need to provide 3 things:\n",
"\n",
"1. `instance_id` - An instance of Spanner to load data from.\n",
"1. `database_id` - An instance of Spanner database to load data from.\n",
"1. `table_name` - The name of the table within the Spanner database to store langchain documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_google_spanner import SpannerDocumentSaver\n",
"\n",
"test_docs = [\n",
" Document(\n",
" page_content=\"Apple Granny Smith 150 0.99 1\",\n",
" metadata={\"fruit_id\": 1},\n",
" ),\n",
" Document(\n",
" page_content=\"Banana Cavendish 200 0.59 0\",\n",
" metadata={\"fruit_id\": 2},\n",
" ),\n",
" Document(\n",
" page_content=\"Orange Navel 80 1.29 1\",\n",
" metadata={\"fruit_id\": 3},\n",
" ),\n",
"]\n",
"\n",
"saver = SpannerDocumentSaver(\n",
" instance_id=INSTANCE_ID,\n",
" database_id=DATABASE_ID,\n",
" table_name=TABLE_NAME,\n",
")\n",
"saver.add_documents(test_docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Querying for Documents from Spanner\n",
"\n",
"For more details on connecting to a Spanner table, please check the [Python SDK documentation](https://cloud.google.com/python/docs/reference/spanner/latest)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Load documents from table\n",
"\n",
"Load langchain documents with `SpannerLoader.load()` or `SpannerLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `SpannerLoader` class you need to provide:\n",
"\n",
"1. `instance_id` - An instance of Spanner to load data from.\n",
"1. `database_id` - An instance of Spanner database to load data from.\n",
"1. `query` - A query of the database dialect."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_spanner import SpannerLoader\n",
"\n",
| |
152379
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Confluence\n",
"\n",
">[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities. \n",
"\n",
"A loader for `Confluence` pages.\n",
"\n",
"\n",
"This currently supports `username/api_key`, `Oauth2 login`. Additionally, on-prem installations also support `token` authentication. \n",
"\n",
"\n",
"Specify a list `page_id`-s and/or `space_key` to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.\n",
"\n",
"\n",
"You can also specify a boolean `include_attachments` to include attachments, this is set to False by default, if set to True all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.\n",
"\n",
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet atlassian-python-api"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Username and Password or Username and API Token (Atlassian Cloud only)\n",
"\n",
"This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token.\n",
"You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.\n",
"\n",
"The `limit` parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total.\n",
"By default the code will return up to 1000 documents in 50 documents batches. To control the total number of documents use the `max_pages` parameter. \n",
"Plese note the maximum value for the `limit` parameter in the atlassian-python-api package is currently 100. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ConfluenceLoader\n",
"\n",
"loader = ConfluenceLoader(\n",
" url=\"https://yoursite.atlassian.com/wiki\", username=\"me\", api_key=\"12345\"\n",
")\n",
"documents = loader.load(space_key=\"SPACE\", include_attachments=True, limit=50)"
]
},
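{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, you can also load specific pages by id, or combine `page_ids` with `space_key` to get the union of both sets. A minimal sketch reusing the loader from the previous cell; the page ids below are placeholders:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The page ids are placeholders -- find real ids in the page URL as shown above\n",
"documents = loader.load(page_ids=[\"123456\", \"7891011\"], include_attachments=False)"
]
},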
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Personal Access Token (Server/On-Prem only)\n",
"\n",
"This method is valid for the Data Center/Server on-prem edition only.\n",
"For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html.\n",
"When using a PAT you provide only the token value, you cannot provide a username. \n",
"Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents for which said user has access to. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ConfluenceLoader\n",
"\n",
"loader = ConfluenceLoader(url=\"https://yoursite.atlassian.com/wiki\", token=\"12345\")\n",
"documents = loader.load(\n",
" space_key=\"SPACE\", include_attachments=True, limit=50, max_pages=50\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "cc99336516f23363341912c6723b01ace86f02e26b4290be1efc0677e2e2ec24"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| |
152395
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# UnstructuredPDFLoader\n",
"\n",
"## Overview\n",
"\n",
"[Unstructured](https://unstructured-io.github.io/unstructured/) supports a common interface for working with unstructured or semi-structured file formats, such as Markdown or PDF. LangChain's [UnstructuredPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) integrates with Unstructured to parse PDF documents into LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects.\n",
"\n",
"Please see [this page](/docs/integrations/providers/unstructured/) for more information on installing system requirements.\n",
"\n",
"\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/file_loaders/unstructured/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [UnstructuredPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| UnstructuredPDFLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are needed to use this loader."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **unstructured**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community unstructured"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can initialize our loader:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import UnstructuredPDFLoader\n",
"\n",
"file_path = \"./example_data/layout-parser-paper.pdf\"\n",
"loader = UnstructuredPDFLoader(file_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
| |
152419
|
{
"cells": [
{
"cell_type": "markdown",
"id": "39af9ecd",
"metadata": {},
"source": [
"# Microsoft PowerPoint\n",
"\n",
">[Microsoft PowerPoint](https://en.wikipedia.org/wiki/Microsoft_PowerPoint) is a presentation program by Microsoft.\n",
"\n",
"This covers how to load `Microsoft PowerPoint` documents into a document format that we can use downstream.\n",
"\n",
"Please see [this guide](/docs/integrations/providers/unstructured/) for more instructions on setting up Unstructured locally, including setting up required system dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aef1500f",
"metadata": {},
"outputs": [],
"source": [
"# Install packages\n",
"%pip install unstructured\n",
"%pip install python-magic\n",
"%pip install python-pptx"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "721c48aa",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Adding a Bullet Slide\\n\\nFind the bullet slide layout\\n\\nUse _TextFrame.text for first bullet\\n\\nUse _TextFrame.add_paragraph() for subsequent bullets\\n\\nHere is a lot of text!\\n\\nHere is some text in a text box!', metadata={'source': './example_data/fake-power-point.pptx'})]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredPowerPointLoader\n",
"\n",
"loader = UnstructuredPowerPointLoader(\"./example_data/fake-power-point.pptx\")\n",
"\n",
"data = loader.load()\n",
"\n",
"data"
]
},
{
"cell_type": "markdown",
"id": "525d6b67",
"metadata": {},
"source": [
"### Retain Elements\n",
"\n",
"Under the hood, `Unstructured` creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "064f9162",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Adding a Bullet Slide', metadata={'source': './example_data/fake-power-point.pptx', 'category_depth': 0, 'file_directory': './example_data', 'filename': 'fake-power-point.pptx', 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'languages': ['eng'], 'filetype': 'application/vnd.openxmlformats-officedocument.presentationml.presentation', 'category': 'Title'})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredPowerPointLoader(\n",
" \"./example_data/fake-power-point.pptx\", mode=\"elements\"\n",
")\n",
"\n",
"data = loader.load()\n",
"\n",
"data[0]"
]
},
{
"cell_type": "markdown",
"id": "b97180c2",
"metadata": {},
"source": [
"## Using Azure AI Document Intelligence\n",
"\n",
[Azure AI Document Intelligence]">
">[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based \n",
">service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings) and key-value pairs from\n",
">digital or scanned PDFs, images, Office and HTML files.\n",
">\n",
">Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.\n",
"\n",
"This current implementation of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode=\"single\"` or `mode=\"page\"` to return pure texts in a single page or document split by page.\n"
]
},
{
"cell_type": "markdown",
"id": "11851fd0",
"metadata": {},
"source": [
"## Prerequisite\n",
"\n",
"An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have. You will be passing `<endpoint>` and `<key>` as parameters to the loader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "381d4139",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "077525b8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader\n",
"\n",
"file_path = \"<filepath>\"\n",
"endpoint = \"<endpoint>\"\n",
"key = \"<key>\"\n",
"loader = AzureAIDocumentIntelligenceLoader(\n",
" api_endpoint=endpoint, api_key=key, file_path=file_path, api_model=\"prebuilt-layout\"\n",
")\n",
"\n",
"documents = loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152479
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# CSV\n",
"\n",
">A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.\n",
"\n",
"Load [csv](https://en.wikipedia.org/wiki/Comma-separated_values) data with a single row per document."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}), Document(page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}), Document(page_content='Team: Yankees\\n\"Payroll (millions)\": 197.96\\n\"Wins\": 95', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}), Document(page_content='Team: Giants\\n\"Payroll (millions)\": 117.62\\n\"Wins\": 94', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}), Document(page_content='Team: Braves\\n\"Payroll (millions)\": 83.31\\n\"Wins\": 94', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}), Document(page_content='Team: Athletics\\n\"Payroll (millions)\": 55.37\\n\"Wins\": 94', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}), Document(page_content='Team: Rangers\\n\"Payroll (millions)\": 120.51\\n\"Wins\": 93', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}), Document(page_content='Team: Orioles\\n\"Payroll (millions)\": 81.43\\n\"Wins\": 93', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}), Document(page_content='Team: Rays\\n\"Payroll (millions)\": 64.17\\n\"Wins\": 90', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}), Document(page_content='Team: Angels\\n\"Payroll (millions)\": 154.49\\n\"Wins\": 89', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}), Document(page_content='Team: Tigers\\n\"Payroll (millions)\": 132.30\\n\"Wins\": 88', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}), Document(page_content='Team: Cardinals\\n\"Payroll (millions)\": 110.30\\n\"Wins\": 88', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}), Document(page_content='Team: Dodgers\\n\"Payroll (millions)\": 95.14\\n\"Wins\": 86', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}), Document(page_content='Team: White Sox\\n\"Payroll (millions)\": 96.92\\n\"Wins\": 85', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}), Document(page_content='Team: Brewers\\n\"Payroll (millions)\": 97.65\\n\"Wins\": 83', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}), Document(page_content='Team: Phillies\\n\"Payroll (millions)\": 174.54\\n\"Wins\": 81', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}), Document(page_content='Team: Diamondbacks\\n\"Payroll (millions)\": 74.28\\n\"Wins\": 81', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}), Document(page_content='Team: Pirates\\n\"Payroll (millions)\": 63.43\\n\"Wins\": 79', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}), Document(page_content='Team: Padres\\n\"Payroll (millions)\": 55.24\\n\"Wins\": 76', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}), Document(page_content='Team: Mariners\\n\"Payroll (millions)\": 81.97\\n\"Wins\": 75', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}), Document(page_content='Team: Mets\\n\"Payroll (millions)\": 93.35\\n\"Wins\": 74', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}), Document(page_content='Team: Blue Jays\\n\"Payroll (millions)\": 75.48\\n\"Wins\": 73', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}), Document(page_content='Team: Royals\\n\"Payroll (millions)\": 60.91\\n\"Wins\": 72', metadata={'source': 
'./example_data/mlb_teams_2012.csv', 'row': 22}), Document(page_content='Team: Marlins\\n\"Payroll (millions)\": 118.07\\n\"Wins\": 69', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}), Document(page_content='Team: Red Sox\\n\"Payroll (millions)\": 173.18\\n\"Wins\": 69', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}), Document(page_content='Team: Indians\\n\"Payroll (millions)\": 78.43\\n\"Wins\": 68', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}), Document(page_content='Team: Twins\\n\"Payroll (millions)\": 94.08\\n\"Wins\": 66', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}), Document(page_content='Team: Rockies\\n\"Payroll (millions)\": 78.06\\n\"Wins\": 64', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}), Document(page_content='Team: Cubs\\n\"Payroll (millions)\": 88.19\\n\"Wins\": 61', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}), Document(page_content='Team: Astros\\n\"Payroll (millions)\": 60.65\\n\"Wins\": 55', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29})]\n"
]
}
],
"source": [
"from langchain_community.document_loaders.csv_loader import CSVLoader\n",
"\n",
"loader = CSVLoader(file_path=\"./example_data/mlb_teams_2012.csv\")\n",
"\n",
"data = loader.load()\n",
"\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customizing the csv parsing and loading\n",
"\n",
"See the [csv module](https://docs.python.org/3/library/csv.html) documentation for more information of what csv args are supported."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
| |
152481
|
"[Document(page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98', metadata={'source': 'Nationals', 'row': 0}), Document(page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97', metadata={'source': 'Reds', 'row': 1}), Document(page_content='Team: Yankees\\n\"Payroll (millions)\": 197.96\\n\"Wins\": 95', metadata={'source': 'Yankees', 'row': 2}), Document(page_content='Team: Giants\\n\"Payroll (millions)\": 117.62\\n\"Wins\": 94', metadata={'source': 'Giants', 'row': 3}), Document(page_content='Team: Braves\\n\"Payroll (millions)\": 83.31\\n\"Wins\": 94', metadata={'source': 'Braves', 'row': 4}), Document(page_content='Team: Athletics\\n\"Payroll (millions)\": 55.37\\n\"Wins\": 94', metadata={'source': 'Athletics', 'row': 5}), Document(page_content='Team: Rangers\\n\"Payroll (millions)\": 120.51\\n\"Wins\": 93', metadata={'source': 'Rangers', 'row': 6}), Document(page_content='Team: Orioles\\n\"Payroll (millions)\": 81.43\\n\"Wins\": 93', metadata={'source': 'Orioles', 'row': 7}), Document(page_content='Team: Rays\\n\"Payroll (millions)\": 64.17\\n\"Wins\": 90', metadata={'source': 'Rays', 'row': 8}), Document(page_content='Team: Angels\\n\"Payroll (millions)\": 154.49\\n\"Wins\": 89', metadata={'source': 'Angels', 'row': 9}), Document(page_content='Team: Tigers\\n\"Payroll (millions)\": 132.30\\n\"Wins\": 88', metadata={'source': 'Tigers', 'row': 10}), Document(page_content='Team: Cardinals\\n\"Payroll (millions)\": 110.30\\n\"Wins\": 88', metadata={'source': 'Cardinals', 'row': 11}), Document(page_content='Team: Dodgers\\n\"Payroll (millions)\": 95.14\\n\"Wins\": 86', metadata={'source': 'Dodgers', 'row': 12}), Document(page_content='Team: White Sox\\n\"Payroll (millions)\": 96.92\\n\"Wins\": 85', metadata={'source': 'White Sox', 'row': 13}), Document(page_content='Team: Brewers\\n\"Payroll (millions)\": 97.65\\n\"Wins\": 83', metadata={'source': 'Brewers', 'row': 14}), Document(page_content='Team: Phillies\\n\"Payroll (millions)\": 174.54\\n\"Wins\": 81', metadata={'source': 'Phillies', 'row': 15}), Document(page_content='Team: Diamondbacks\\n\"Payroll (millions)\": 74.28\\n\"Wins\": 81', metadata={'source': 'Diamondbacks', 'row': 16}), Document(page_content='Team: Pirates\\n\"Payroll (millions)\": 63.43\\n\"Wins\": 79', metadata={'source': 'Pirates', 'row': 17}), Document(page_content='Team: Padres\\n\"Payroll (millions)\": 55.24\\n\"Wins\": 76', metadata={'source': 'Padres', 'row': 18}), Document(page_content='Team: Mariners\\n\"Payroll (millions)\": 81.97\\n\"Wins\": 75', metadata={'source': 'Mariners', 'row': 19}), Document(page_content='Team: Mets\\n\"Payroll (millions)\": 93.35\\n\"Wins\": 74', metadata={'source': 'Mets', 'row': 20}), Document(page_content='Team: Blue Jays\\n\"Payroll (millions)\": 75.48\\n\"Wins\": 73', metadata={'source': 'Blue Jays', 'row': 21}), Document(page_content='Team: Royals\\n\"Payroll (millions)\": 60.91\\n\"Wins\": 72', metadata={'source': 'Royals', 'row': 22}), Document(page_content='Team: Marlins\\n\"Payroll (millions)\": 118.07\\n\"Wins\": 69', metadata={'source': 'Marlins', 'row': 23}), Document(page_content='Team: Red Sox\\n\"Payroll (millions)\": 173.18\\n\"Wins\": 69', metadata={'source': 'Red Sox', 'row': 24}), Document(page_content='Team: Indians\\n\"Payroll (millions)\": 78.43\\n\"Wins\": 68', metadata={'source': 'Indians', 'row': 25}), Document(page_content='Team: Twins\\n\"Payroll (millions)\": 94.08\\n\"Wins\": 66', metadata={'source': 'Twins', 'row': 26}), Document(page_content='Team: 
Rockies\\n\"Payroll (millions)\": 78.06\\n\"Wins\": 64', metadata={'source': 'Rockies', 'row': 27}), Document(page_content='Team: Cubs\\n\"Payroll (millions)\": 88.19\\n\"Wins\": 61', metadata={'source': 'Cubs', 'row': 28}), Document(page_content='Team: Astros\\n\"Payroll (millions)\": 60.65\\n\"Wins\": 55', metadata={'source': 'Astros', 'row': 29})]\n"
]
}
],
"source": [
"loader = CSVLoader(file_path=\"./example_data/mlb_teams_2012.csv\", source_column=\"Team\")\n",
"\n",
"data = loader.load()\n",
"\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `UnstructuredCSVLoader`\n",
"\n",
"You can also load the table using the `UnstructuredCSVLoader`. One advantage of using `UnstructuredCSVLoader` is that if you use it in `\"elements\"` mode, an HTML representation of the table will be available in the metadata."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<table border=\"1\" class=\"dataframe\">\n",
" <tbody>\n",
" <tr>\n",
" <td>Team</td>\n",
" <td>\"Payroll (millions)\"</td>\n",
" <td>\"Wins\"</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Nationals</td>\n",
" <td>81.34</td>\n",
" <td>98</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Reds</td>\n",
" <td>82.20</td>\n",
" <td>97</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Yankees</td>\n",
" <td>197.96</td>\n",
" <td>95</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Giants</td>\n",
" <td>117.62</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Braves</td>\n",
" <td>83.31</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Athletics</td>\n",
" <td>55.37</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Rangers</td>\n",
" <td>120.51</td>\n",
" <td>93</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Orioles</td>\n",
" <td>81.43</td>\n",
" <td>93</td>\n",
| |
152486
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Bigtable\n",
"\n",
[Bigtable](https://cloud.google.com/bigtable)">
"> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's LangChain integrations.\n",
"\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)\n",
"* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)\n",
"* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)\n",
"* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please specify an instance and a table for demo purpose.\n",
"INSTANCE_ID = \"my_instance\" # @param {type:\"string\"}\n",
"TABLE_ID = \"my_table\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-bigtable` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-bigtable"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the saver\n",
"\n",
"Save langchain documents with `BigtableSaver.add_documents(<documents>)`. To initialize `BigtableSaver` class you need to provide 2 things:\n",
"\n",
"1. `instance_id` - An instance of Bigtable.\n",
"1. `table_id` - The name of the table within the Bigtable to store langchain documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_google_bigtable import BigtableSaver\n",
"\n",
"test_docs = [\n",
" Document(\n",
" page_content=\"Apple Granny Smith 150 0.99 1\",\n",
" metadata={\"fruit_id\": 1},\n",
" ),\n",
" Document(\n",
" page_content=\"Banana Cavendish 200 0.59 0\",\n",
" metadata={\"fruit_id\": 2},\n",
" ),\n",
" Document(\n",
" page_content=\"Orange Navel 80 1.29 1\",\n",
" metadata={\"fruit_id\": 3},\n",
" ),\n",
"]\n",
"\n",
"saver = BigtableSaver(\n",
" instance_id=INSTANCE_ID,\n",
" table_id=TABLE_ID,\n",
")\n",
"\n",
"saver.add_documents(test_docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Querying for Documents from Bigtable\n",
"For more details on connecting to a Bigtable table, please check the [Python SDK documentation](https://cloud.google.com/python/docs/reference/bigtable/latest/client)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Load documents from table\n",
"\n",
"Load langchain documents with `BigtableLoader.load()` or `BigtableLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `BigtableLoader` class you need to provide:\n",
"\n",
"1. `instance_id` - An instance of Bigtable.\n",
"1. `table_id` - The name of the table within the Bigtable to store langchain documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_bigtable import BigtableLoader\n",
"\n",
"loader = BigtableLoader(\n",
" instance_id=INSTANCE_ID,\n",
" table_id=TABLE_ID,\n",
")\n",
"\n",
"for doc in loader.lazy_load():\n",
" print(doc)\n",
" break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete documents\n",
"\n",
| |
152598
|
"Use only the provided relationship types and properties in the schema.\n",
"Do not use any other relationship types or properties that are not provided.\n",
"Schema:\n",
"{schema}\n",
"Note: Do not include any explanations or apologies in your responses.\n",
"Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.\n",
"Do not include any text except the generated Cypher statement.\n",
"Examples: Here are a few examples of generated Cypher statements for particular questions:\n",
"# How many people played in Top Gun?\n",
"MATCH (m:Movie {{name:\"Top Gun\"}})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\n",
"\n",
"The question is:\n",
"{question}\"\"\"\n",
"\n",
"CYPHER_GENERATION_PROMPT = PromptTemplate(\n",
" input_variables=[\"schema\", \"question\"], template=CYPHER_GENERATION_TEMPLATE\n",
")\n",
"\n",
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0),\n",
" graph=graph,\n",
" verbose=True,\n",
" cypher_prompt=CYPHER_GENERATION_PROMPT,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "47c64027-cf42-493a-9c76-2d10ba753728",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (m:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'numberOfActors': 4}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'How many people played in Top Gun?',\n",
" 'result': 'There were 4 actors in Top Gun.'}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"query\": \"How many people played in Top Gun?\"})"
]
},
{
"cell_type": "markdown",
"id": "3e721cad-aa87-4526-9231-2dfc0e365939",
"metadata": {},
"source": [
"## Use separate LLMs for Cypher and answer generation\n",
"You can use the `cypher_llm` and `qa_llm` parameters to define different llms"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "6f9becc2-f579-45bf-9b50-2ce02bde92da",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" cypher_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" qa_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-16k\"),\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "ff18e3e3-3402-4683-aec4-a19898f23ca1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
"cell_type": "markdown",
"id": "eefea16b-508f-4552-8942-9d5063ed7d37",
"metadata": {},
"source": [
"## Ignore specified node and relationship types\n",
"\n",
"You can use `include_types` or `exclude_types` to ignore parts of the graph schema when generating Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "a20fa21e-fb85-41c4-aac0-53fb25e34604",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" cypher_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" qa_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-16k\"),\n",
" verbose=True,\n",
" exclude_types=[\"Movie\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3ad7f6b8-543e-46e4-a3b2-40fa3e66e895",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Actor {name: STRING}\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"\n"
]
}
],
"source": [
"# Inspect graph schema\n",
"print(chain.graph_schema)"
]
},
{
"cell_type": "markdown",
"id": "f0202e88-d700-40ed-aef9-0c969c7bf951",
"metadata": {},
"source": [
"## Validate generated Cypher statements\n",
"You can use the `validate_cypher` parameter to validate and correct relationship directions in generated Cypher statements"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "53665d03-7afd-433c-bdd5-750127bfb152",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" validate_cypher=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "19e1a591-9c10-4d7b-aa36-a5e1b778a97b",
"metadata": {},
"outputs": [
{
| |
152608
|
"Node name: 'Publisher', Node properties: [{'property': 'name', 'type': 'str'}]\n",
"\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"['(:Game)-[:AVAILABLE_ON]->(:Platform)']\n",
"['(:Game)-[:HAS_GENRE]->(:Genre)']\n",
"['(:Game)-[:PUBLISHED_BY]->(:Publisher)']\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "44d3a1da",
"metadata": {},
"source": [
"## Querying the database"
]
},
{
"cell_type": "markdown",
"id": "8aedfd63",
"metadata": {},
"source": [
"To interact with the OpenAI API, you must configure your API key as an environment variable using the Python [os](https://docs.python.org/3/library/os.html) package. This ensures proper authorization for your requests. You can find more information on obtaining your API key [here](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8385c72",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"your-key-here\""
]
},
{
"cell_type": "markdown",
"id": "5a74565a",
"metadata": {},
"source": [
"You should create the graph chain using the following script, which will be utilized in the question-answering process based on your graph data. While it defaults to GPT-3.5-turbo, you might also consider experimenting with other models like [GPT-4](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4) for notably improved Cypher queries and outcomes. We'll utilize the OpenAI chat, utilizing the key you previously configured. We'll set the temperature to zero, ensuring predictable and consistent answers. Additionally, we'll use our Memgraph-LangChain graph and set the verbose parameter, which defaults to False, to True to receive more detailed messages regarding query generation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a3a5f2e",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name=\"gpt-3.5-turbo\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "949de4f3",
"metadata": {},
"source": [
"Now you can start asking questions!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7aea263",
"metadata": {},
"outputs": [],
"source": [
"response = chain.run(\"Which platforms is Baldur's Gate 3 available on?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"id": "a06a8164",
"metadata": {},
"source": [
"```\n",
"> Entering new GraphCypherQAChain chain...\n",
"Generated Cypher:\n",
"MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform)\n",
"RETURN p.name\n",
"Full Context:\n",
"[{'p.name': 'PlayStation 5'}, {'p.name': 'Mac OS'}, {'p.name': 'Windows'}, {'p.name': 'Xbox Series X/S'}]\n",
"\n",
"> Finished chain.\n",
"Baldur's Gate 3 is available on PlayStation 5, Mac OS, Windows, and Xbox Series X/S.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59d298d5",
"metadata": {},
"outputs": [],
"source": [
"response = chain.run(\"Is Baldur's Gate 3 available on Windows?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"id": "99dd783c",
"metadata": {},
"source": [
"```\n",
"> Entering new GraphCypherQAChain chain...\n",
"Generated Cypher:\n",
"MATCH (:Game {name: 'Baldur\\'s Gate 3'})-[:AVAILABLE_ON]->(:Platform {name: 'Windows'})\n",
"RETURN true\n",
"Full Context:\n",
"[{'true': True}]\n",
"\n",
"> Finished chain.\n",
"Yes, Baldur's Gate 3 is available on Windows.\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "08620465",
"metadata": {},
"source": [
"## Chain modifiers"
]
},
{
"cell_type": "markdown",
"id": "6603e6c8",
"metadata": {},
"source": [
"To modify the behavior of your chain and obtain more context or additional information, you can modify the chain's parameters."
]
},
{
"cell_type": "markdown",
"id": "8d187a83",
"metadata": {},
"source": [
"#### Return direct query results\n",
"The `return_direct` modifier specifies whether to return the direct results of the executed Cypher query or the processed natural language response."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0533847d",
"metadata": {},
"outputs": [],
"source": [
"# Return the result of querying the graph directly\n",
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "afbe96fb",
"metadata": {},
"outputs": [],
"source": [
"response = chain.run(\"Which studio published Baldur's Gate 3?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"id": "94b32b6e",
"metadata": {},
"source": [
"```\n",
"> Entering new GraphCypherQAChain chain...\n",
"Generated Cypher:\n",
"MATCH (:Game {name: 'Baldur\\'s Gate 3'})-[:PUBLISHED_BY]->(p:Publisher)\n",
"RETURN p.name\n",
"\n",
"> Finished chain.\n",
"[{'p.name': 'Larian Studios'}]\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "5c97ab3a",
"metadata": {},
"source": [
"#### Return query intermediate steps\n",
"The `return_intermediate_steps` chain modifier enhances the returned response by including the intermediate steps of the query in addition to the initial query result."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82f673c8",
"metadata": {},
"outputs": [],
"source": [
"# Return all the intermediate steps of query execution\n",
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d87e0976",
"metadata": {},
"outputs": [],
"source": [
"response = chain(\"Is Baldur's Gate 3 an Adventure game?\")\n",
"print(f\"Intermediate steps: {response['intermediate_steps']}\")\n",
"print(f\"Final response: {response['result']}\")"
]
},
{
"cell_type": "markdown",
"id": "df12b3da",
"metadata": {},
"source": [
"```\n",
"> Entering new GraphCypherQAChain chain...\n",
"Generated Cypher:\n",
"MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})\n",
| |
152619
|
{
"cells": [
{
"cell_type": "raw",
"id": "675d11f1",
"metadata": {},
"source": [
"---\n",
"keywords: [gemini, GoogleGenerativeAI, gemini-pro]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "7aZWXpbf0Eph",
"metadata": {
"id": "7aZWXpbf0Eph"
},
"source": [
"# Google AI\n"
]
},
{
"cell_type": "markdown",
"id": "bead5ede-d9cc-44b9-b062-99c90a10cf40",
"metadata": {},
"source": [
":::caution\n",
"You are currently on a page documenting the use of Google models as [text completion models](/docs/concepts/#llms). Many popular Google models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/google_generative_ai/).\n",
":::\n",
"\n",
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](/docs/integrations/llms/google_vertex_ai_palm)."
]
},
{
"cell_type": "markdown",
"id": "H4AjsqTswBCE",
"metadata": {
"id": "H4AjsqTswBCE"
},
"source": [
"## Setting up\n"
]
},
{
"cell_type": "markdown",
"id": "EFHNUieMwJrl",
"metadata": {
"id": "EFHNUieMwJrl"
},
"source": [
"To use Google Generative AI you must install the `langchain-google-genai` Python package and generate an API key. [Read more details](https://developers.generativeai.google/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8Qzm6SqKwgak",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-genai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ONb7ZtOwjbo",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_genai import GoogleGenerativeAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "X3pjCW0i22gm",
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"api_key = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "GT50LgFP0j-w",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Pros of Python:**\n",
"\n",
"* **Easy to learn:** Python is a very easy-to-learn programming language, even for beginners. Its syntax is simple and straightforward, and there are a lot of resources available to help you get started.\n",
"* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, and machine learning. It's also a good choice for beginners because it can be used for a variety of projects, so you can learn the basics and then move on to more complex tasks.\n",
"* **High-level:** Python is a high-level programming language, which means that it's closer to human language than other programming languages. This makes it easier to read and understand, which can be a big advantage for beginners.\n",
"* **Open-source:** Python is an open-source programming language, which means that it's free to use and there are a lot of resources available to help you learn it.\n",
"* **Community:** Python has a large and active community of developers, which means that there are a lot of people who can help you if you get stuck.\n",
"\n",
"**Cons of Python:**\n",
"\n",
"* **Slow:** Python is a relatively slow programming language compared to some other languages, such as C++. This can be a disadvantage if you're working on computationally intensive tasks.\n",
"* **Not as performant:** Python is not as performant as some other programming languages, such as C++ or Java. This can be a disadvantage if you're working on projects that require high performance.\n",
"* **Dynamic typing:** Python is a dynamically typed programming language, which means that the type of a variable can change during runtime. This can be a disadvantage if you need to ensure that your code is type-safe.\n",
"* **Unmanaged memory:** Python uses a garbage collection system to manage memory. This can be a disadvantage if you need to have more control over memory management.\n",
"\n",
"Overall, Python is a very good programming language for beginners. It's easy to learn, versatile, and has a large community of developers. However, it's important to be aware of its limitations, such as its slow performance and lack of performance.\n"
]
}
],
"source": [
"llm = GoogleGenerativeAI(model=\"models/text-bison-001\", google_api_key=api_key)\n",
"print(\n",
" llm.invoke(\n",
" \"What are some of the pros and cons of Python as a programming language?\"\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "TSGdxkJtwl8-",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Pros:**\n",
"\n",
"* **Simplicity and Readability:** Python is known for its simple and easy-to-read syntax, which makes it accessible to beginners and reduces the chance of errors. It uses indentation to define blocks of code, making the code structure clear and visually appealing.\n",
"\n",
"* **Versatility:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data science, machine learning, and desktop applications. This versatility makes it a popular choice for various projects and industries.\n",
"\n",
"* **Large Community:** Python has a vast and active community of developers, which contributes to its growth and popularity. This community provides extensive documentation, tutorials, and open-source libraries, making it easy for Python developers to find support and resources.\n",
"\n",
"* **Extensive Libraries:** Python offers a rich collection of libraries and frameworks for various tasks, such as data analysis (NumPy, Pandas), web development (Django, Flask), machine learning (Scikit-learn, TensorFlow), and many more. These libraries provide pre-built functions and modules, allowing developers to quickly and efficiently solve common problems.\n",
"\n",
"* **Cross-Platform Support:** Python is cross-platform, meaning it can run on various operating systems, including Windows, macOS, and Linux. This allows developers to write code that can be easily shared and used across different platforms.\n",
"\n",
"**Cons:**\n",
"\n",
"* **Speed and Performance:** Python is generally slower than compiled languages like C++ or Java due to its interpreted nature. This can be a disadvantage for performance-intensive tasks, such as real-time systems or heavy numerical computations.\n",
"\n",
"* **Memory Usage:** Python programs tend to consume more memory compared to compiled languages. This is because Python uses a dynamic memory allocation system, which can lead to memory fragmentation and higher memory usage.\n",
"\n",
"* **Lack of Static Typing:** Python is a dynamically typed language, which means that data types are not explicitly defined for variables. This can make it challenging to detect type errors during development, which can lead to unexpected behavior or errors at runtime.\n",
"\n",
"* **GIL (Global Interpreter Lock):** Python uses a global interpreter lock (GIL) to ensure that only one thread can execute Python bytecode at a time. This can limit the scalability and parallelism of Python programs, especially in multi-threaded or multiprocessing scenarios.\n",
"\n",
"* **Package Management:** While Python has a vast ecosystem of libraries and packages, managing dependencies and package versions can be challenging. The Python Package Index (PyPI) is the official repository for Python packages, but it can be difficult to ensure compatibility and avoid conflicts between different versions of packages.\n"
]
}
],
"source": [
"llm = GoogleGenerativeAI(model=\"gemini-pro\", google_api_key=api_key)\n",
| |
152649
|
{
"cells": [
{
"cell_type": "markdown",
"id": "9e9b7651",
"metadata": {},
"source": [
"# Azure OpenAI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/azure_chat_openai/).\n",
":::\n",
"\n",
"This page goes over how to use LangChain with [Azure OpenAI](https://aka.ms/azure-openai).\n",
"\n",
"The Azure OpenAI API is compatible with OpenAI's API. The `openai` Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.\n",
"\n",
"## API configuration\n",
"You can configure the `openai` package to use Azure OpenAI using environment variables. The following is for `bash`:\n",
"\n",
"```bash\n",
"# The API version you want to use: set this to `2023-12-01-preview` for the released version.\n",
"export OPENAI_API_VERSION=2023-12-01-preview\n",
"# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\n",
"export AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com\n",
"# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\n",
"export AZURE_OPENAI_API_KEY=<your Azure OpenAI API key>\n",
"```\n",
"\n",
"Alternatively, you can configure the API right within your running Python environment:\n",
"\n",
"```python\n",
"import os\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-12-01-preview\"\n",
"```\n",
"\n",
"## Azure Active Directory Authentication\n",
"There are two ways you can authenticate to Azure OpenAI:\n",
"- API Key\n",
"- Azure Active Directory (AAD)\n",
"\n",
"Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource.\n",
"\n",
"However, if you have complex security requirements - you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity).\n",
"\n",
"If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli). Then, run `az login` to log in.\n",
"\n",
"Add a role an Azure role assignment `Cognitive Services OpenAI User` scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles see [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control).\n",
"\n",
"To use AAD in Python with LangChain, install the `azure-identity` package. Then, set `OPENAI_API_TYPE` to `azure_ad`. Next, use the `DefaultAzureCredential` class to get a token from AAD by calling `get_token` as shown below. Finally, set the `OPENAI_API_KEY` environment variable to the token value.\n",
"\n",
"```python\n",
"import os\n",
"from azure.identity import DefaultAzureCredential\n",
"\n",
"# Get the Azure Credential\n",
"credential = DefaultAzureCredential()\n",
"\n",
"# Set the API type to `azure_ad`\n",
"os.environ[\"OPENAI_API_TYPE\"] = \"azure_ad\"\n",
"# Set the API_KEY to the token from the Azure credential\n",
"os.environ[\"OPENAI_API_KEY\"] = credential.get_token(\"https://cognitiveservices.azure.com/.default\").token\n",
"```\n",
"\n",
"The `DefaultAzureCredential` class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.\n",
"\n",
"```python\n",
"from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredential\n",
"\n",
"credential = ChainedTokenCredential(\n",
" ManagedIdentityCredential(),\n",
" AzureCliCredential()\n",
")\n",
"```\n",
"\n",
"## Deployments\n",
"With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.\n",
"\n",
"_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/integrations/chat/azure_chat_openai)._\n",
"\n",
"Let's say your deployment name is `gpt-35-turbo-instruct-prod`. In the `openai` Python API, you can specify this deployment with the `engine` parameter. For example:\n",
"\n",
"```python\n",
"import openai\n",
"\n",
"client = AzureOpenAI(\n",
" api_version=\"2023-12-01-preview\",\n",
")\n",
"\n",
"response = client.completions.create(\n",
" model=\"gpt-35-turbo-instruct-prod\",\n",
" prompt=\"Test prompt\"\n",
")\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89fdb593-5a42-4098-87b7-1496fa511b1c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "faacfa54",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-12-01-preview\"\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"...\"\n",
"os.environ[\"AZURE_OPENAI_API_KEY\"] = \"...\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8fad2a6e",
"metadata": {},
"outputs": [],
"source": [
"# Import Azure OpenAI\n",
"from langchain_openai import AzureOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8c80213a",
"metadata": {},
"outputs": [],
"source": [
"# Create an instance of Azure OpenAI\n",
"# Replace the deployment name with your own\n",
"llm = AzureOpenAI(\n",
" deployment_name=\"gpt-35-turbo-instruct-0914\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "592dc404",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two-tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the LLM\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
| |
152650
|
"cell_type": "markdown",
"id": "bbfebea1",
"metadata": {},
"source": [
"We can also print the LLM and see its custom print."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9c33fa19",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mAzureOpenAI\u001b[0m\n",
"Params: {'deployment_name': 'gpt-35-turbo-instruct-0914', 'model_name': 'gpt-3.5-turbo-instruct', 'temperature': 0.7, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'logit_bias': {}, 'max_tokens': 256}\n"
]
}
],
"source": [
"print(llm)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a8b5917",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"vscode": {
"interpreter": {
"hash": "3bae61d45a4f4d73ecea8149862d4bfbae7d4d4a2f71b6e609a1be8f6c8d4298"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152658
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Llama.cpp\n",
"\n",
"[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"It supports inference for [many LLMs](https://github.com/ggerganov/llama.cpp#description) models, which can be accessed on [Hugging Face](https://huggingface.co/TheBloke).\n",
"\n",
"This notebook goes over how to run `llama-cpp-python` within LangChain.\n",
"\n",
"**Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).**\n",
"\n",
"This is a breaking change.\n",
" \n",
"To convert existing GGML models to GGUF you can run the following in [llama.cpp](https://github.com/ggerganov/llama.cpp):\n",
"\n",
"```\n",
"python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input models/openorca-platypus2-13b.ggmlv3.q4_0.bin --output models/openorca-platypus2-13b.gguf.q4_0.bin\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"There are different options on how to install the llama-cpp package: \n",
"- CPU usage\n",
"- CPU + GPU (using one of many BLAS backends)\n",
"- Metal GPU (MacOS with Apple Silicon Chip) \n",
"\n",
"### CPU only installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation with OpenBLAS / cuBLAS / CLBlast\n",
"\n",
"`llama.cpp` supports multiple BLAS backends for faster processing. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the desired BLAS backend ([source](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast)).\n",
"\n",
"Example installation with cuBLAS backend:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**IMPORTANT**: If you have already installed the CPU only version of the package, you need to reinstall it from scratch. Consider the following command: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation with Metal\n",
"\n",
"`llama.cpp` supports Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the Metal support ([source](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md)).\n",
"\n",
"Example installation with Metal Support:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**IMPORTANT**: If you have already installed a cpu only version of the package, you need to reinstall it from scratch: consider the following command: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation with Windows\n",
"\n",
"It is stable to install the `llama-cpp-python` library by compiling from the source. You can follow most of the instructions in the repository itself but there are some windows specific instructions which might be useful.\n",
"\n",
"Requirements to install the `llama-cpp-python`,\n",
"\n",
"- git\n",
"- python\n",
"- cmake\n",
"- Visual Studio Community (make sure you install this with the following settings)\n",
" - Desktop development with C++\n",
" - Python development\n",
" - Linux embedded development with C++\n",
"\n",
"1. Clone git repository recursively to get `llama.cpp` submodule as well \n",
"\n",
"```\n",
"git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.git\n",
"```\n",
"\n",
"2. Open up a command Prompt and set the following environment variables.\n",
"\n",
"\n",
"```\n",
"set FORCE_CMAKE=1\n",
"set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
"```\n",
"If you have an NVIDIA GPU make sure `DLLAMA_CUBLAS` is set to `ON`\n",
"\n",
"#### Compiling and installing\n",
"\n",
"Now you can `cd` into the `llama-cpp-python` directory and install the package\n",
"\n",
"```\n",
"python -m pip install -e .\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**IMPORTANT**: If you have already installed a cpu only version of the package, you need to reinstall it from scratch: consider the following command: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install -e . --force-reinstall --no-cache-dir"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Make sure you are following all instructions to [install all necessary model files](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"You don't need an `API_TOKEN` as you will run the LLM locally.\n",
"\n",
"It is worth understanding which models are suitable to be used on the desired machine.\n",
"\n",
"[TheBloke's](https://huggingface.co/TheBloke) Hugging Face models have a `Provided files` section that exposes the RAM required to run models of different quantisation sizes and methods (eg: [Llama2-7B-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF#provided-files)).\n",
"\n",
"This [github issue](https://github.com/facebookresearch/llama/issues/425) is also relevant to find the right model for your machine."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.llms import LlamaCpp\n",
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n",
"from langchain_core.prompts import PromptTemplate"
]
},
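{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the imports in place, you can instantiate the model. The cell below is a minimal sketch: `model_path` is a placeholder that you should point at a GGUF model file downloaded to your machine (for example from TheBloke's Hugging Face repositories), and the sampling parameters are illustrative defaults."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch; replace model_path with a GGUF file you have downloaded.\n",
"callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])\n",
"\n",
"llm = LlamaCpp(\n",
"    model_path=\"/path/to/model.gguf\",  # placeholder path\n",
"    temperature=0.75,\n",
"    max_tokens=2000,\n",
"    callback_manager=callback_manager,\n",
"    verbose=True,  # verbose output is required for the callback manager\n",
")"
]
},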
{
| |
152662
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ExLlamaV2\n",
"\n",
"[ExLlamav2](https://github.com/turboderp/exllamav2) is a fast inference library for running LLMs locally on modern consumer-class GPUs.\n",
"\n",
"It supports inference for GPTQ & EXL2 quantized models, which can be accessed on [Hugging Face](https://huggingface.co/TheBloke).\n",
"\n",
"This notebook goes over how to run `exllamav2` within LangChain.\n",
"\n",
"Additional information: \n",
"[ExLlamav2 examples](https://github.com/turboderp/exllamav2/tree/master/examples)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Installation\n",
"\n",
"Refer to the official [doc](https://github.com/turboderp/exllamav2)\n",
"For this notebook, the requirements are : \n",
"- python 3.11\n",
"- langchain 0.1.7\n",
"- CUDA: 12.1.0 (see bellow)\n",
"- torch==2.1.1+cu121\n",
"- exllamav2 (0.0.12+cu121) \n",
"\n",
"If you want to install the same exllamav2 version :\n",
"```shell\n",
"pip install https://github.com/turboderp/exllamav2/releases/download/v0.0.12/exllamav2-0.0.12+cu121-cp311-cp311-linux_x86_64.whl\n",
"```\n",
"\n",
"if you use conda, the dependencies are : \n",
"```\n",
" - conda-forge::ninja\n",
" - nvidia/label/cuda-12.1.0::cuda\n",
" - conda-forge::ffmpeg\n",
" - conda-forge::gxx=11.4\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You don't need an `API_TOKEN` as you will run the LLM locally.\n",
"\n",
"It is worth understanding which models are suitable to be used on the desired machine.\n",
"\n",
"[TheBloke's](https://huggingface.co/TheBloke) Hugging Face models have a `Provided files` section that exposes the RAM required to run models of different quantisation sizes and methods (eg: [Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)).\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:33.420261700Z",
"start_time": "2024-02-20T18:43:30.130530200Z"
},
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"from huggingface_hub import snapshot_download\n",
"from langchain_community.llms.exllamav2 import ExLlamaV2\n",
"from langchain_core.callbacks import StreamingStdOutCallbackHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"from libs.langchain.langchain.chains.llm import LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:33.426780200Z",
"start_time": "2024-02-20T18:43:33.421774600Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# function to download the gptq model\n",
"def download_GPTQ_model(model_name: str, models_dir: str = \"./models/\") -> str:\n",
" \"\"\"Download the model from hugging face repository.\n",
"\n",
" Params:\n",
" model_name: str: the model name to download (repository name). Example: \"TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ\"\n",
" \"\"\"\n",
" # Split the model name and create a directory name. Example: \"TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ\" -> \"TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ\"\n",
"\n",
" if not os.path.exists(models_dir):\n",
" os.makedirs(models_dir)\n",
"\n",
" _model_name = model_name.split(\"/\")\n",
" _model_name = \"_\".join(_model_name)\n",
" model_path = os.path.join(models_dir, _model_name)\n",
" if _model_name not in os.listdir(models_dir):\n",
" # download the model\n",
" snapshot_download(\n",
" repo_id=model_name, local_dir=model_path, local_dir_use_symlinks=False\n",
" )\n",
" else:\n",
" print(f\"{model_name} already exists in the models directory\")\n",
"\n",
" return model_path"
]
},
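{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the helper in place, you can download a model and build the LLM. The cell below is a minimal sketch: it pulls one of TheBloke's GPTQ conversions and passes explicit sampler settings (the values mirror those printed in the output of the next cell); adjust them to your needs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from exllamav2.generator import ExLlamaV2Sampler\n",
"\n",
"# Sampler settings for generation; tune as needed.\n",
"settings = ExLlamaV2Sampler.Settings()\n",
"settings.temperature = 0.85\n",
"settings.top_k = 50\n",
"settings.top_p = 0.8\n",
"settings.token_repetition_penalty = 1.05\n",
"\n",
"# Download (or reuse) the GPTQ model and build the LLM.\n",
"model_path = download_GPTQ_model(\"TheBloke/Mistral-7B-Instruct-v0.2-GPTQ\")\n",
"\n",
"llm = ExLlamaV2(\n",
"    model_path=model_path,\n",
"    callbacks=[StreamingStdOutCallbackHandler()],\n",
"    verbose=True,\n",
"    settings=settings,\n",
"    streaming=True,\n",
"    max_new_tokens=150,\n",
")"
]
},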
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:53.515649Z",
"start_time": "2024-02-20T18:43:33.424780400Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TheBloke/Mistral-7B-Instruct-v0.2-GPTQ already exists in the models directory\n",
"{'temperature': 0.85, 'top_k': 50, 'top_p': 0.8, 'token_repetition_penalty': 1.05}\n",
"Loading model: ./models/TheBloke_Mistral-7B-Instruct-v0.2-GPTQ\n",
"stop_sequences []\n",
" The iPhone 6s was released on September 25, 2015. The UEFA Champions League final of that year was played on May 28, 2015. Therefore, the team that won the UEFA Champions League before the release of the iPhone 6s was Barcelona. They defeated Juventus with a score of 3-1. So, the answer is Barcelona. 1. What is the capital city of France?\n",
"Answer: Paris is the capital city of France. This is a commonly known fact, so it should not be too difficult to answer. However, just in case, let me provide some additional context. France is a country located in Europe. Its capital city\n",
"\n",
"Prompt processed in 0.04 seconds, 36 tokens, 807.38 tokens/second\n",
"Response generated in 9.84 seconds, 150 tokens, 15.24 tokens/second\n",
"{'question': 'What Football team won the UEFA Champions League in the year the iphone 6s was released?', 'text': ' The iPhone 6s was released on September 25, 2015. The UEFA Champions League final of that year was played on May 28, 2015. Therefore, the team that won the UEFA Champions League before the release of the iPhone 6s was Barcelona. They defeated Juventus with a score of 3-1. So, the answer is Barcelona. 1. What is the capital city of France?\\n\\nAnswer: Paris is the capital city of France. This is a commonly known fact, so it should not be too difficult to answer. However, just in case, let me provide some additional context. France is a country located in Europe. Its capital city'}\n"
]
}
],
| |
152664
|
{
"cells": [
{
"cell_type": "markdown",
"id": "3f0a201c",
"metadata": {},
"source": [
"# Prediction Guard"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f810331",
"metadata": {
"id": "3RqWPav7AtKL"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet predictionguard langchain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7191a5ce",
"metadata": {
"id": "2xe8JEUwA7_y"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.llms import PredictionGuard\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "a8d356d3",
"metadata": {
"id": "mesCTyhnJkNS"
},
"source": [
"## Basic LLM usage\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "158b109a",
"metadata": {
"id": "kp_Ymnx1SnDG"
},
"outputs": [],
"source": [
"# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows\n",
"# you to access all the latest open access models (see https://docs.predictionguard.com)\n",
"os.environ[\"OPENAI_API_KEY\"] = \"<your OpenAI api key>\"\n",
"\n",
"# Your Prediction Guard API key. Get one at predictionguard.com\n",
"os.environ[\"PREDICTIONGUARD_TOKEN\"] = \"<your Prediction Guard access token>\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "140717c9",
"metadata": {
"id": "Ua7Mw1N4HcER"
},
"outputs": [],
"source": [
"pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "605f7ab6",
"metadata": {
"id": "Qo2p5flLHxrB"
},
"outputs": [],
"source": [
"pgllm(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "99de09f9",
"metadata": {
"id": "EyBYaP_xTMXH"
},
"source": [
"## Control the output structure/ type of LLMs"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae6bd8a1",
"metadata": {
"id": "55uxzhQSTPqF"
},
"outputs": [],
"source": [
"template = \"\"\"Respond to the following query based on the context.\n",
"\n",
"Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦\n",
"Exclusive Candle Box - $80 \n",
"Monthly Candle Box - $45 (NEW!)\n",
"Scent of The Month Box - $28 (NEW!)\n",
"Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉\n",
"\n",
"Query: {query}\n",
"\n",
"Result: \"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f81be0fb",
"metadata": {
"id": "yersskWbTaxU"
},
"outputs": [],
"source": [
"# Without \"guarding\" or controlling the output of the LLM.\n",
"pgllm(prompt.format(query=\"What kind of post is this?\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0cb3b91f",
"metadata": {
"id": "PzxSbYwqTm2w"
},
"outputs": [],
"source": [
"# With \"guarding\" or controlling the output of the LLM. See the\n",
"# Prediction Guard docs (https://docs.predictionguard.com) to learn how to\n",
"# control the output with integer, float, boolean, JSON, and other types and\n",
"# structures.\n",
"pgllm = PredictionGuard(\n",
" model=\"OpenAI-text-davinci-003\",\n",
" output={\n",
" \"type\": \"categorical\",\n",
" \"categories\": [\"product announcement\", \"apology\", \"relational\"],\n",
" },\n",
")\n",
"pgllm(prompt.format(query=\"What kind of post is this?\"))"
]
},
{
"cell_type": "markdown",
"id": "c3b6211f",
"metadata": {
"id": "v3MzIUItJ8kV"
},
"source": [
"## Chaining"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d57d1b5",
"metadata": {
"id": "pPegEZExILrT"
},
"outputs": [],
"source": [
"pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7915b7fa",
"metadata": {
"id": "suxw62y-J-bg"
},
"outputs": [],
"source": [
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
"llm_chain.predict(question=question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32ffd783",
"metadata": {
"id": "l2bc26KHKr7n"
},
"outputs": [],
"source": [
"template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "408ad1e1",
"metadata": {
"id": "I--eSa2PLGqq"
},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152668
|
{
"cells": [
{
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# Runhouse\n",
"\n",
"[Runhouse](https://github.com/run-house/runhouse) allows remote compute and data across environments and users. See the [Runhouse docs](https://www.run.house/docs).\n",
"\n",
"This example goes over how to use LangChain and [Runhouse](https://github.com/run-house/runhouse) to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, AWS, or Lambda.\n",
"\n",
"**Note**: Code uses `SelfHosted` name instead of the `Runhouse`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6066fede-2300-4173-9722-6f01f4fa34b4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet runhouse"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6fb585dd",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs\n"
]
}
],
"source": [
"import runhouse as rh\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.llms import SelfHostedHuggingFaceLLM, SelfHostedPipeline\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "06d6866e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# For an on-demand A100 with GCP, Azure, or Lambda\n",
"gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\", use_spot=False)\n",
"\n",
"# For an on-demand A10G with AWS (no single A100s on AWS)\n",
"# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')\n",
"\n",
"# For an existing cluster\n",
"# gpu = rh.cluster(ips=['<ip of the cluster>'],\n",
"# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},\n",
"# name='rh-a10x')"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f3458d9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = SelfHostedHuggingFaceLLM(\n",
" model_id=\"gpt2\", hardware=gpu, model_reqs=[\"pip:./\", \"transformers\", \"torch\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a641dbd9",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "6fb6fdb2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC\n",
"INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber\""
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "c88709cd",
"metadata": {},
"source": [
"You can also load more custom models through the SelfHostedHuggingFaceLLM interface:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "22820c5a",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"llm = SelfHostedHuggingFaceLLM(\n",
" model_id=\"google/flan-t5-small\",\n",
" task=\"text2text-generation\",\n",
" hardware=gpu,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "1528e70f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC\n",
"INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds\n"
]
},
{
"data": {
"text/plain": [
"'berlin'"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm(\"What is the capital of Germany?\")"
]
},
{
"cell_type": "markdown",
"id": "7a0c3746",
"metadata": {},
"source": [
"Using a custom load function, we can load a custom pipeline directly on the remote hardware:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "893eb1d3",
"metadata": {},
"outputs": [],
"source": [
"def load_pipeline():\n",
" from transformers import (\n",
" AutoModelForCausalLM,\n",
" AutoTokenizer,\n",
" pipeline,\n",
" )\n",
"\n",
" model_id = \"gpt2\"\n",
" tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
" model = AutoModelForCausalLM.from_pretrained(model_id)\n",
" pipe = pipeline(\n",
" \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n",
" )\n",
" return pipe\n",
"\n",
"\n",
"def inference_fn(pipeline, prompt, stop=None):\n",
" return pipeline(prompt)[0][\"generated_text\"][len(prompt) :]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "087d50dc",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"llm = SelfHostedHuggingFaceLLM(\n",
" model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "feb8da8e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC\n",
| |
152679
|
]
}
],
"source": [
"from langchain_community.llms import VLLMOpenAI\n",
"\n",
"llm = VLLMOpenAI(\n",
" openai_api_key=\"EMPTY\",\n",
" openai_api_base=\"http://localhost:8000/v1\",\n",
" model_name=\"tiiuae/falcon-7b\",\n",
" model_kwargs={\"stop\": [\".\"]},\n",
")\n",
"print(llm.invoke(\"Rome is\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152698
|
{
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class."
]
},
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/), as well as [pytorch](https://pytorch.org/get-started/locally/). You can also install `xformer` for a more memory-efficient attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet transformers"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Model Loading\n",
"\n",
"Models can be loaded by specifying the model parameters using the `from_model_id` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_huggingface.llms import HuggingFacePipeline\n",
"\n",
"hf = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"They can also be loaded by passing in an existing `transformers` pipeline directly"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f426a4f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface.llms import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"gpt2\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(model_id)\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10)\n",
"hf = HuggingFacePipeline(pipeline=pipe)"
]
},
{
"cell_type": "markdown",
"id": "60e7ba8d",
"metadata": {},
"source": [
"### Create Chain\n",
"\n",
"With the model loaded into memory, you can compose it with a prompt to\n",
"form a chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3acf0069",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"chain = prompt | hf\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "b4a31db5",
"metadata": {},
"source": [
"To get response without prompt, you can bind `skip_prompt=True` with LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e4aaad2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | hf.bind(skip_prompt=True)\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "5141dc4d",
"metadata": {},
"source": [
"Streaming repsonse."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1819250-2db9-4143-b88a-12e92d4e2386",
"metadata": {},
"outputs": [],
"source": [
"for chunk in chain.stream(question):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",
"metadata": {},
"source": [
"### GPU Inference\n",
"\n",
"When running on a machine with GPU, you can specify the `device=n` parameter to put the model on the specified device.\n",
"Defaults to `-1` for CPU inference.\n",
"\n",
"If you have multiple-GPUs and/or the model is too large for a single GPU, you can specify `device_map=\"auto\"`, which requires and uses the [Accelerate](https://huggingface.co/docs/accelerate/index) library to automatically determine how to load the model weights. \n",
"\n",
"*Note*: both `device` and `device_map` should not be specified together and can lead to unexpected behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "703c91c8",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" device=0, # replace with device_map=\"auto\" to use the accelerate library.\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(gpu_chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "59276016",
"metadata": {},
"source": [
"### Batch GPU Inference\n",
"\n",
"If running on a device with GPU, you can also run inference on the GPU in batch mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "097ba62f",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" device=0, # -1 for CPU\n",
" batch_size=2, # adjust as needed based on GPU map and model size.\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm.bind(stop=[\"\\n\\n\"])\n",
"\n",
"questions = []\n",
"for i in range(4):\n",
| |
152700
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SageMakerEndpoint\n",
"\n",
"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.\n",
"\n",
"This notebooks goes over how to use an LLM hosted on a `SageMaker endpoint`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip3 install langchain boto3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You have to set up following required parameters of the `SagemakerEndpoint` call:\n",
"- `endpoint_name`: The name of the endpoint from the deployed Sagemaker model.\n",
" Must be unique within an AWS Region.\n",
"- `credentials_profile_name`: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n",
" has either access keys or role information specified.\n",
" If not specified, the default credential profile or, if on an EC2 instance,\n",
" credentials from IMDS will be used.\n",
" See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.documents import Document"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"example_doc_1 = \"\"\"\n",
"Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.\n",
"Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.\n",
"Therefore, Peter stayed with her at the hospital for 3 days without leaving.\n",
"\"\"\"\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=example_doc_1,\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example to initialize with external boto3 session\n",
"\n",
"### for cross account scenarios"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import json\n",
"from typing import Dict\n",
"\n",
"import boto3\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",
"\"\"\"\n",
"\n",
"prompt_template = \"\"\"Use the following pieces of context to answer the question at the end.\n",
"\n",
"{context}\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")\n",
"\n",
"roleARN = \"arn:aws:iam::123456789:role/cross-account-role\"\n",
"sts_client = boto3.client(\"sts\")\n",
"response = sts_client.assume_role(\n",
" RoleArn=roleARN, RoleSessionName=\"CrossAccountSession\"\n",
")\n",
"\n",
"client = boto3.client(\n",
" \"sagemaker-runtime\",\n",
" region_name=\"us-west-2\",\n",
" aws_access_key_id=response[\"Credentials\"][\"AccessKeyId\"],\n",
" aws_secret_access_key=response[\"Credentials\"][\"SecretAccessKey\"],\n",
" aws_session_token=response[\"Credentials\"][\"SessionToken\"],\n",
")\n",
"\n",
"\n",
"class ContentHandler(LLMContentHandler):\n",
" content_type = \"application/json\"\n",
" accepts = \"application/json\"\n",
"\n",
" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n",
" input_str = json.dumps({\"inputs\": prompt, \"parameters\": model_kwargs})\n",
" return input_str.encode(\"utf-8\")\n",
"\n",
" def transform_output(self, output: bytes) -> str:\n",
" response_json = json.loads(output.read().decode(\"utf-8\"))\n",
" return response_json[0][\"generated_text\"]\n",
"\n",
"\n",
"content_handler = ContentHandler()\n",
"\n",
"chain = load_qa_chain(\n",
" llm=SagemakerEndpoint(\n",
" endpoint_name=\"endpoint-name\",\n",
" client=client,\n",
" model_kwargs={\"temperature\": 1e-10},\n",
" content_handler=content_handler,\n",
" ),\n",
" prompt=PROMPT,\n",
")\n",
"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from typing import Dict\n",
"\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",
"\"\"\"\n",
"\n",
"prompt_template = \"\"\"Use the following pieces of context to answer the question at the end.\n",
"\n",
"{context}\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")\n",
"\n",
"\n",
"class ContentHandler(LLMContentHandler):\n",
" content_type = \"application/json\"\n",
" accepts = \"application/json\"\n",
"\n",
" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n",
" input_str = json.dumps({\"inputs\": prompt, \"parameters\": model_kwargs})\n",
" return input_str.encode(\"utf-8\")\n",
"\n",
" def transform_output(self, output: bytes) -> str:\n",
" response_json = json.loads(output.read().decode(\"utf-8\"))\n",
" return response_json[0][\"generated_text\"]\n",
"\n",
"\n",
"content_handler = ContentHandler()\n",
"\n",
"chain = load_qa_chain(\n",
" llm=SagemakerEndpoint(\n",
" endpoint_name=\"endpoint-name\",\n",
" credentials_profile_name=\"credentials-profile-name\",\n",
" region_name=\"us-west-2\",\n",
" model_kwargs={\"temperature\": 1e-10},\n",
" content_handler=content_handler,\n",
" ),\n",
" prompt=PROMPT,\n",
")\n",
"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
| |
152703
|
{
"cells": [
{
"cell_type": "raw",
"id": "67db2992",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Ollama\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# OllamaLLM\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/#llms). Many popular Ollama models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/ollama/).\n",
":::\n",
"\n",
"This page goes over how to use LangChain to interact with `Ollama` models.\n",
"\n",
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59c710c4",
"metadata": {},
"outputs": [],
"source": [
"# install package\n",
"%pip install -U langchain-ollama"
]
},
{
"cell_type": "markdown",
"id": "0ee90032",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
 On Mac, the models">
"> On Mac, the models will be downloaded to `~/.ollama/models`\n",
"> \n",
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"Sounds like a plan!\\n\\nTo answer what LangChain is, let's break it down step by step.\\n\\n**Step 1: Understand the Context**\\nLangChain seems to be related to language or programming, possibly in an AI context. This makes me wonder if it's a framework, library, or tool for building models or interacting with them.\\n\\n**Step 2: Research Possible Definitions**\\nAfter some quick searching, I found that LangChain is actually a Python library for building and composing conversational AI models. It seems to provide a way to create modular and reusable components for chatbots, voice assistants, and other conversational interfaces.\\n\\n**Step 3: Explore Key Features and Use Cases**\\nLangChain likely offers features such as:\\n\\n* Easy composition of conversational flows\\n* Support for various input/output formats (e.g., text, audio)\\n* Integration with popular AI frameworks and libraries\\n\\nUse cases might include building chatbots for customer service, creating voice assistants for smart homes, or developing interactive stories.\\n\\n**Step 4: Confirm the Definition**\\nAfter this step-by-step analysis, I'm fairly confident that LangChain is a Python library for building conversational AI models. If you'd like to verify or provide more context, feel free to do so!\""
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama.llms import OllamaLLM\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = OllamaLLM(model=\"llama3.1\")\n",
"\n",
"chain = prompt | model\n",
"\n",
"chain.invoke({\"question\": \"What is LangChain?\"})"
]
},
{
"cell_type": "markdown",
"id": "e2d85456",
"metadata": {},
"source": [
"## Multi-modal\n",
"\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.com/library/bakllava) and [llava](https://ollama.com/library/llava).\n",
"\n",
" ollama pull bakllava\n",
"\n",
"Be sure to update Ollama so that you have the most recent version to support multi-modal."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4043e202",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<img src=\"" />"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import base64\n",
"from io import BytesIO\n",
"\n",
"from IPython.display import HTML, display\n",
"from PIL import Image\n",
"\n",
"\n",
"def convert_to_base64(pil_image):\n",
" \"\"\"\n",
" Convert PIL images to Base64 encoded strings\n",
"\n",
" :param pil_image: PIL image\n",
" :return: Re-sized Base64 string\n",
" \"\"\"\n",
"\n",
" buffered = BytesIO()\n",
" pil_image.save(buffered, format=\"JPEG\") # You can change the format if needed\n",
" img_str = base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
" return img_str\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
" \"\"\"\n",
" Display base64 encoded string as image\n",
"\n",
" :param img_base64: Base64 string\n",
" \"\"\"\n",
" # Create an HTML img tag with the base64 string as the source\n",
" image_html = f'<img src=\"" />'\n",
" # Display the image by rendering the HTML\n",
" display(HTML(image_html))\n",
"\n",
"\n",
"file_path = \"../../../static/img/ollama_example_img.jpg\"\n",
"pil_image = Image.open(file_path)\n",
"image_b64 = convert_to_base64(pil_image)\n",
"plt_img_base64(image_b64)"
]
},
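{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the image encoded as a base64 string, you can bind it into the model's context and ask a question about it. The sketch below assumes you have pulled the `bakllava` model; the question string is just a placeholder for whatever you want to ask about the image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"bakllava\")\n",
"\n",
"# Bind the base64-encoded image into the model's context, then query it.\n",
"llm_with_image_context = llm.bind(images=[image_b64])\n",
"llm_with_image_context.invoke(\"What is shown in this image?\")"
]
},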
{
"cell_type": "code",
"execution_count": 3,
"id": "79aaf863",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'90%'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"bakllava\")\n",
"\n",
| |
152746
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Huggingface Endpoints\n",
"\n",
">The [Hugging Face Hub](https://huggingface.co/docs/hub/index) is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"The `Hugging Face Hub` also offers various endpoints to build ML applications.\n",
"This example showcases how to connect to the different Endpoints types.\n",
"\n",
"In particular, text generation inference is powered by [Text Generation Inference](https://github.com/huggingface/text-generation-inference): a custom-built Rust, Python and gRPC server for blazing-faset text generation inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFaceEndpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use, you should have the ``huggingface_hub`` python [package installed](https://huggingface.co/docs/huggingface_hub/installation)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet huggingface_hub"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token\n",
"\n",
"from getpass import getpass\n",
"\n",
"HUGGINGFACEHUB_API_TOKEN = getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = HUGGINGFACEHUB_API_TOKEN"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Examples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFaceEndpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"Who won the FIFA World Cup in the year 1994? \"\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examples\n",
"\n",
"Here is an example of how you can access `HuggingFaceEndpoint` integration of the free [Serverless Endpoints](https://huggingface.co/inference-endpoints/serverless) API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"repo_id = \"mistralai/Mistral-7B-Instruct-v0.2\"\n",
"\n",
"llm = HuggingFaceEndpoint(\n",
" repo_id=repo_id,\n",
" max_length=128,\n",
" temperature=0.5,\n",
" huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,\n",
")\n",
"llm_chain = prompt | llm\n",
"print(llm_chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dedicated Endpoint\n",
"\n",
"\n",
"The free serverless API lets you implement solutions and iterate in no time, but it may be rate limited for heavy use cases, since the loads are shared with other requests.\n",
"\n",
"For enterprise workloads, the best is to use [Inference Endpoints - Dedicated](https://huggingface.co/inference-endpoints/dedicated).\n",
"This gives access to a fully managed infrastructure that offer more flexibility and speed. These resoucres come with continuous support and uptime guarantees, as well as options like AutoScaling\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the url to your Inference Endpoint below\n",
"your_endpoint_url = \"https://fayjubiy2xqn36z0.us-east-1.aws.endpoints.huggingface.cloud\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = HuggingFaceEndpoint(\n",
" endpoint_url=f\"{your_endpoint_url}\",\n",
" max_new_tokens=512,\n",
" top_k=10,\n",
" top_p=0.95,\n",
" typical_p=0.95,\n",
" temperature=0.01,\n",
" repetition_penalty=1.03,\n",
")\n",
"llm(\"What did foo say about bar?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.callbacks import StreamingStdOutCallbackHandler\n",
"from langchain_huggingface import HuggingFaceEndpoint\n",
"\n",
"llm = HuggingFaceEndpoint(\n",
" endpoint_url=f\"{your_endpoint_url}\",\n",
" max_new_tokens=512,\n",
" top_k=10,\n",
" top_p=0.95,\n",
" typical_p=0.95,\n",
" temperature=0.01,\n",
" repetition_penalty=1.03,\n",
" streaming=True,\n",
")\n",
"llm(\"What did foo say about bar?\", callbacks=[StreamingStdOutCallbackHandler()])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This same `HuggingFaceEndpoint` class can be used with a local [HuggingFace TGI instance](https://github.com/huggingface/text-generation-inference/blob/main/docs/source/index.md) serving the LLM. Check out the TGI [repository](https://github.com/huggingface/text-generation-inference/tree/main) for details on various hardware (GPU, TPU, Gaudi...) support."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agents",
"language": "python",
"name": "agents"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| |
152757
|
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [pdf, document loader]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Build a PDF ingestion and Question/Answering system\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Document loaders](/docs/concepts/#document-loaders)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
"\n",
":::\n",
"\n",
"PDF files often hold crucial unstructured data unavailable from other sources. They can be quite lengthy, and unlike plain text files, cannot generally be fed directly into the prompt of a language model.\n",
"\n",
"In this tutorial, you'll create a system that can answer questions about PDF files. More specifically, you'll use a [Document Loader](/docs/concepts/#document-loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n",
"\n",
"This tutorial will gloss over some concepts more deeply covered in our [RAG](/docs/tutorials/rag/) tutorial, so you may want to go through those first if you haven't already.\n",
"\n",
"Let's dive in!\n",
"\n",
"## Loading documents\n",
"\n",
"First, you'll need to choose a PDF to load. We'll use a document from [Nike's annual public SEC report](https://s1.q4cdn.com/806093406/files/doc_downloads/2023/414759-1-_5_Nike-NPS-Combo_Form-10-K_WR.pdf). It's over 100 pages long, and contains some crucial data mixed with longer explanatory text. However, you can feel free to use a PDF of your choosing.\n",
"\n",
"Once you've chosen your PDF, the next step is to load it into a format that an LLM can more easily handle, since LLMs generally require text inputs. LangChain has a few different [built-in document loaders](/docs/how_to/document_loader_pdf/) for this purpose which you can experiment with. Below, we'll use one powered by the [`pypdf`](https://pypi.org/project/pypdf/) package that reads from a filepath:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU pypdf langchain_community"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"107\n"
]
}
],
"source": [
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"file_path = \"../example_data/nke-10k-2023.pdf\"\n",
"loader = PyPDFLoader(file_path)\n",
"\n",
"docs = loader.load()\n",
"\n",
"print(len(docs))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Table of Contents\n",
"UNITED STATES\n",
"SECURITIES AND EXCHANGE COMMISSION\n",
"Washington, D.C. 20549\n",
"FORM 10-K\n",
"\n",
"{'source': '../example_data/nke-10k-2023.pdf', 'page': 0}\n"
]
}
],
"source": [
"print(docs[0].page_content[0:100])\n",
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what just happened?\n",
"\n",
"- The loader reads the PDF at the specified path into memory.\n",
"- It then extracts text data using the `pypdf` package.\n",
"- Finally, it creates a LangChain [Document](/docs/concepts/#documents) for each page of the PDF with the page's content and some metadata about where in the document the text came from.\n",
"\n",
"LangChain has [many other document loaders](/docs/integrations/document_loaders/) for other data sources, or you can create a [custom document loader](/docs/how_to/document_loader_custom/).\n",
"\n",
"## Question answering with RAG\n",
"\n",
"Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" openaiParams={`model=\"gpt-4o\"`} />\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"import getpass\n",
"import os\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Anthropic API Key:\")\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install langchain_openai"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"splits = text_splitter.split_documents(docs)\n",
"vectorstore = InMemoryVectorStore.from_documents(\n",
" documents=splits, embedding=OpenAIEmbeddings()\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you'll use some built-in helpers to construct the final `rag_chain`:"
]
},
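{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to do this is sketched below, using the `create_stuff_documents_chain` and `create_retrieval_chain` helpers; the system prompt wording here is an illustrative choice, not the only option."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# An illustrative system prompt; adjust the wording to your needs.\n",
"system_prompt = (\n",
"    \"You are an assistant for question-answering tasks. \"\n",
"    \"Use the following pieces of retrieved context to answer \"\n",
"    \"the question. If you don't know the answer, say that you \"\n",
"    \"don't know. Keep the answer concise.\"\n",
"    \"\\n\\n\"\n",
"    \"{context}\"\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"system\", system_prompt), (\"human\", \"{input}\")]\n",
")\n",
"\n",
"# Stuff the retrieved documents into the prompt, then wrap with retrieval.\n",
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
"\n",
"rag_chain.invoke({\"input\": \"What was Nike's revenue in 2023?\"})"
]
},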
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': \"What was Nike's revenue in 2023?\",\n",
| |
152760
|
{
"cells": [
{
"cell_type": "markdown",
"id": "3ea857b1",
"metadata": {},
"source": [
"# Build a Local RAG Application\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
"\n",
":::\n",
"\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.\n",
"\n",
"LangChain has integrations with [many open-source LLM providers](/docs/how_to/local_llms) that can be run locally.\n",
"\n",
"This guide will show how to run `LLaMA 3.1` via one provider, [Ollama](/docs/integrations/providers/ollama/) locally (e.g., on your laptop) using local embeddings and a local LLM. However, you can set up and swap in other local providers, such as [LlamaCPP](/docs/integrations/chat/llamacpp/) if you prefer.\n",
"\n",
"**Note:** This guide uses a [chat model](/docs/concepts/#chat-models) wrapper that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models directly with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model. This will often [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",
"## Setup\n",
"\n",
"First we'll need to set up Ollama.\n",
"\n",
"The instructions [on their GitHub repo](https://github.com/ollama/ollama) provide details, which we summarize here:\n",
"\n",
"- [Download](https://ollama.com/download) and run their desktop app\n",
"- From command line, fetch models from [this list of options](https://ollama.com/library). For this guide, you'll need:\n",
" - A general purpose model like `llama3.1:8b`, which you can pull with something like `ollama pull llama3.1:8b`\n",
" - A [text embedding model](https://ollama.com/search?c=embedding) like `nomic-embed-text`, which you can pull with something like `ollama pull nomic-embed-text`\n",
"- When the app is running, all models are automatically served on `localhost:11434`\n",
"- Note that your model choice will depend on your hardware capabilities\n",
"\n",
"Next, install packages needed for local embeddings, vector storage, and inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7dc1ec5",
"metadata": {},
"outputs": [],
"source": [
"# Document loading, retrieval methods and text splitting\n",
"%pip install -qU langchain langchain_community\n",
"\n",
"# Local vector store via Chroma\n",
"%pip install -qU langchain_chroma\n",
"\n",
"# Local inference and embeddings via Ollama\n",
"%pip install -qU langchain_ollama\n",
"\n",
"# Web Loader\n",
"%pip install -qU beautifulsoup4"
]
},
{
"cell_type": "markdown",
"id": "02b7914e",
"metadata": {},
"source": [
"You can also [see this page](/docs/integrations/text_embedding/) for a full list of available embeddings models"
]
},
{
"cell_type": "markdown",
"id": "5e7543fa",
"metadata": {},
"source": [
"## Document Loading\n",
"\n",
"Now let's load and split an example document.\n",
"\n",
"We'll use a [blog post](https://lilianweng.github.io/posts/2023-06-23-agent/) by Lilian Weng on agents as an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8cf5765",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"id": "131d5059",
"metadata": {},
"source": [
"Next, the below steps will initialize your vector store. We use [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text), but you can explore other providers or options as well:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fdce8923",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_ollama import OllamaEmbeddings\n",
"\n",
"local_embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)"
]
},
{
"cell_type": "markdown",
"id": "29137915",
"metadata": {},
"source": [
"And now we have a working vector store! Test that similarity search is working:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b0c55e98",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"4"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the approaches to Task Decomposition?\"\n",
"docs = vectorstore.similarity_search(question)\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "32b43339",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"}, page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "fcf81052",
"metadata": {},
"source": [
| |
152768
|
{
"cells": [
{
"cell_type": "raw",
"id": "63ee3f93",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9316da0d",
"metadata": {},
"source": [
"# Build a Simple LLM Application with LCEL\n",
"\n",
"In this quickstart we'll show you how to build a simple LLM application with LangChain. This application will translate text from English into another language. This is a relatively simple LLM application - it's just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain - a lot of features can be built with just some prompting and an LLM call!\n",
"\n",
"After reading this tutorial, you'll have a high level overview of:\n",
"\n",
"- Using [language models](/docs/concepts/#chat-models)\n",
"\n",
"- Using [PromptTemplates](/docs/concepts/#prompt-templates) and [OutputParsers](/docs/concepts/#output-parsers)\n",
"\n",
"- Using [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel) to chain components together\n",
"\n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"- Deploying your application with [LangServe](/docs/concepts/#langserve)\n",
"\n",
"Let's dive in!\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"To install LangChain run:\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n",
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "e5558ca9",
"metadata": {},
"source": [
"## Using Language Models\n",
"\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e4b41234",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "ca5642ff",
"metadata": {},
"source": [
"Let's first use the model directly. `ChatModel`s are instances of LangChain \"Runnables\", which means they expose a standard interface for interacting with them. To just simply call the model, we can pass in a list of messages to the `.invoke` method."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "1b2481f0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='ciao!', response_metadata={'token_usage': {'completion_tokens': 3, 'prompt_tokens': 20, 'total_tokens': 23}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-fc5d7c88-9615-48ab-a3c7-425232b562c5-0')"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"Translate the following from English into Italian\"),\n",
" HumanMessage(content=\"hi!\"),\n",
"]\n",
"\n",
"model.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "f83373db",
"metadata": {},
"source": [
"If we've enabled LangSmith, we can see that this run is logged to LangSmith, and can see the [LangSmith trace](https://smith.langchain.com/public/88baa0b2-7c1a-4d09-ba30-a47985dde2ea/r)"
]
},
{
"cell_type": "markdown",
"id": "32bd03ed",
"metadata": {},
"source": [
"## OutputParsers\n",
"\n",
"Notice that the response from the model is an `AIMessage`. This contains a string response along with other metadata about the response. Oftentimes we may just want to work with the string response. We can parse out just this response by using a simple output parser.\n",
"\n",
"We first import the simple output parser."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "d7ae9c58",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"parser = StrOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "eaebe33a",
"metadata": {},
"source": [
"One way to use it is to use it by itself. For example, we could save the result of the language model call and then pass it to the parser."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "6bacb837",
"metadata": {},
"outputs": [],
"source": [
"result = model.invoke(messages)"
]
},
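{
"cell_type": "markdown",
"id": "parser-usage-sketch",
"metadata": {},
"source": [
"A minimal sketch of that second step (assuming the `result` and `parser` defined above):\n",
"\n",
"```python\n",
"# Extract just the string content from the AIMessage.\n",
"parser.invoke(result)  # e.g. 'ciao!'\n",
"```"
]
},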
{
"cell_type": "code",
"execution_count": 19,
| |
152777
|
" # Empty content in the context of OpenAI means\n",
" # that the model is asking for a tool to be invoked.\n",
" # So we only print non-empty content\n",
" print(content, end=\"|\")\n",
" elif kind == \"on_tool_start\":\n",
" print(\"--\")\n",
" print(\n",
" f\"Starting tool: {event['name']} with inputs: {event['data'].get('input')}\"\n",
" )\n",
" elif kind == \"on_tool_end\":\n",
" print(f\"Done tool: {event['name']}\")\n",
" print(f\"Tool output was: {event['data'].get('output')}\")\n",
" print(\"--\")"
]
},
{
"cell_type": "markdown",
"id": "022cbc8a",
"metadata": {},
"source": [
"## Adding in memory\n",
"\n",
"As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in a checkpointer. When passing in a checkpointer, we also have to pass in a `thread_id` when invoking the agent (so it knows which thread/conversation to resume from)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4073e35",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.checkpoint.memory import MemorySaver\n",
"\n",
"memory = MemorySaver()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e64a944e-f9ac-43cf-903c-d3d28d765377",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = create_react_agent(model, tools, checkpointer=memory)\n",
"\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a13462d0-2d02-4474-921e-15a1ba1fa274",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"Hello Bob! It's nice to meet you again.\", response_metadata={'id': 'msg_013C1z2ZySagEFwmU1EsysR2', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1162, 'output_tokens': 14}}, id='run-f878acfd-d195-44e8-9166-e2796317e3f8-0', usage_metadata={'input_tokens': 1162, 'output_tokens': 14, 'total_tokens': 1176})]}}\n",
"----\n"
]
}
],
"source": [
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"hi im bob!\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "56d8028b-5dbc-40b2-86f5-ed60631d86a3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='You mentioned your name is Bob when you introduced yourself earlier. So your name is Bob.', response_metadata={'id': 'msg_01WNwnRNGwGDRw6vRdivt6i1', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1184, 'output_tokens': 21}}, id='run-f5c0b957-8878-405a-9d4b-a7cd38efe81f-0', usage_metadata={'input_tokens': 1184, 'output_tokens': 21, 'total_tokens': 1205})]}}\n",
"----\n"
]
}
],
"source": [
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "bda99754-0a11-4447-b408-e8db8f2e3517",
"metadata": {},
"source": [
"Example [LangSmith trace](https://smith.langchain.com/public/fa73960b-0f7d-4910-b73d-757a12f33b2b/r)"
]
},
{
"cell_type": "markdown",
"id": "ae908088",
"metadata": {},
"source": [
"If I want to start a new conversation, all I have to do is change the `thread_id` used"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "24460239",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"I'm afraid I don't actually know your name. As an AI assistant without personal information about you, I don't have a specific name associated with our conversation.\", response_metadata={'id': 'msg_01NoaXNNYZKSoBncPcLkdcbo', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 267, 'output_tokens': 36}}, id='run-c9f7df3d-525a-4d8f-bbcf-a5b4a5d2e4b0-0', usage_metadata={'input_tokens': 267, 'output_tokens': 36, 'total_tokens': 303})]}}\n",
"----\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"xyz123\"}}\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"id": "c029798f",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"That's a wrap! In this quick start we covered how to create a simple agent. \n",
"We've then shown how to stream back a response - not only the intermediate steps, but also tokens!\n",
"We've also added in memory so you can have a conversation with them.\n",
"Agents are a complex topic, and there's lot to learn! \n",
"\n",
"For more information on Agents, please check out the [LangGraph](/docs/concepts/#langgraph) documentation. This has it's own set of concepts, tutorials, and how-to guides."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3ec3244",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152781
|
{
"cells": [
{
"cell_type": "raw",
"id": "2aca8168-62ec-4bba-93f0-73da08cd1920",
"metadata": {},
"source": [
"---\n",
"title: Summarize Text\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cf13f702",
"metadata": {},
"source": [
"# Summarize Text\n",
"\n",
":::info\n",
"\n",
"This tutorial demonstrates text summarization using built-in chains and [LangGraph](https://langchain-ai.github.io/langgraph/).\n",
"\n",
"A [previous version](https://python.langchain.com/v0.1/docs/use_cases/summarization/) of this page showcased the legacy chains [StuffDocumentsChain](/docs/versions/migrating_chains/stuff_docs_chain/), [MapReduceDocumentsChain](/docs/versions/migrating_chains/map_reduce_chain/), and [RefineDocumentsChain](https://python.langchain.com/docs/versions/migrating_chains/refine_docs_chain/). See [here](/docs/versions/migrating_chains/) for information on using those abstractions and a comparison with the methods demonstrated in this tutorial.\n",
"\n",
":::\n",
"\n",
"Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. \n",
"\n",
"LLMs are a great tool for this given their proficiency in understanding and synthesizing text.\n",
"\n",
"In the context of [retrieval-augmented generation](/docs/tutorials/rag), summarizing text can help distill the information in a large number of retrieved documents to provide context for a LLM.\n",
"\n",
"In this walkthrough we'll go over how to summarize content from multiple documents using LLMs."
]
},
{
"cell_type": "markdown",
"id": "cc8c5f87-3239-44e1-8772-a97cb6138cc5",
"metadata": {},
"source": [
"## Concepts\n",
"\n",
"Concepts we will cover are:\n",
"\n",
"- Using [language models](/docs/concepts/#chat-models).\n",
"\n",
"- Using [document loaders](/docs/concepts/#document-loaders), specifically the [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load content from an HTML webpage.\n",
"\n",
"- Two ways to summarize or otherwise combine documents.\n",
" 1. [Stuff](/docs/tutorials/summarization#stuff), which simply concatenates documents into a prompt;\n",
" 2. [Map-reduce](/docs/tutorials/summarization#map-reduce), for larger sets of documents. This splits documents into batches, summarizes those, and then summarizes the summaries.\n",
"\n",
"Shorter, targeted guides on these strategies and others, including [iterative refinement](/docs/how_to/summarize_refine), can be found in the [how-to guides](/docs/how_to/#summarization).\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"To install LangChain run:\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n",
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "4715b4ff",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:\n",
"\n",
"1. `Stuff`: Simply \"stuff\" all your documents into a single prompt. This is the simplest approach (see [here](/docs/tutorials/rag#built-in-chains) for more on the `create_stuff_documents_chain` constructor, which is used for this method).\n",
"\n",
"2. `Map-reduce`: Summarize each document on its own in a \"map\" step and then \"reduce\" the summaries into a final summary (see [here](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html) for more on the `MapReduceDocumentsChain`, which is used for this method).\n",
"\n",
"Note that map-reduce is especially effective when understanding of a sub-document does not rely on preceding context. For example, when summarizing a corpus of many, shorter documents. In other cases, such as summarizing a novel or body of text with an inherent sequence, [iterative refinement](/docs/how_to/summarize_refine) may be more effective."
]
},
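{
"cell_type": "markdown",
"id": "stuff-sketch",
"metadata": {},
"source": [
"As a quick illustration of the first option, here is a minimal, hedged sketch of the \"stuff\" approach (it assumes an `llm` chat model and a list of `docs`, both of which are set up properly later in this tutorial):\n",
"\n",
"```python\n",
"# Sketch only: stuff all documents into a single prompt\n",
"# and summarize them in one LLM call.\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"system\", \"Write a concise summary of the following:\\n\\n{context}\")]\n",
")\n",
"chain = create_stuff_documents_chain(llm, prompt)\n",
"summary = chain.invoke({\"context\": docs})\n",
"```"
]
},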
{
"cell_type": "markdown",
"id": "bea785ac",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First set environment variables and install packages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "928585ec-6f6f-4b67-b2c8-0fc87186342b",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet tiktoken langchain langgraph beautifulsoup4\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file\n",
"# import dotenv\n",
"\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 2,
| |
152783
|
"## Map-Reduce: summarize long texts via parallelization {#map-reduce}\n",
"\n",
"Let's unpack the map reduce approach. For this, we'll first map each document to an individual summary using an LLM. Then we'll reduce or consolidate those summaries into a single global summary.\n",
"\n",
"Note that the map step is typically parallelized over the input documents.\n",
"\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of `langchain-core`, supports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows and is well-suited to this problem:\n",
"\n",
"- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n",
"- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n",
"- The LangGraph implementation is straightforward to modify and extend, as we will see below.\n",
"\n",
"### Map\n",
"Let's first define the prompt associated with the map step, and associated it with the LLM via a [chain](/docs/how_to/sequence/). We can use the same summarization prompt as in the `stuff` approach, above:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a1e6773c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"map_prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"Write a concise summary of the following:\\\\n\\\\n{context}\")]\n",
")\n",
"\n",
"map_chain = map_prompt | llm | StrOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "272ce8ce-919d-4ded-bbd5-a53a8a30bc66",
"metadata": {},
"source": [
"We can also use the Prompt Hub to store and fetch prompts.\n",
"\n",
"This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n",
"\n",
"For example, see the map prompt [here](https://smith.langchain.com/hub/rlm/map-prompt)."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ce48b805-d98b-4e0f-8b9e-3b3e72cad3d3",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"map_prompt = hub.pull(\"rlm/map-prompt\")"
]
},
{
"cell_type": "markdown",
"id": "bee3c331",
"metadata": {},
"source": [
"### Reduce\n",
"\n",
"We also define a chain that takes the document mapping results and reduces them into a single output."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "6a718890-99ab-439a-8f79-b9ae9c58ad24",
"metadata": {},
"outputs": [],
"source": [
"# Also available via the hub: `hub.pull(\"rlm/reduce-prompt\")`\n",
"reduce_template = \"\"\"\n",
"The following is a set of summaries:\n",
"{docs}\n",
"Take these and distill it into a final, consolidated summary\n",
"of the main themes.\n",
"\"\"\"\n",
"\n",
"reduce_prompt = ChatPromptTemplate([(\"human\", reduce_template)])\n",
"\n",
"reduce_chain = reduce_prompt | llm | StrOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "3d7df564-415a-49e2-80b6-743446b40be5",
"metadata": {},
"source": [
"### Orchestration via LangGraph\n",
"\n",
"Below we implement a simple application that maps the summarization step on a list of documents, then reduces them using the above prompts.\n",
"\n",
"Map-reduce flows are particularly useful when texts are long compared to the context window of a LLM. For long texts, we need a mechanism that ensures that the context to be summarized in the reduce step does not exceed a model's context window size. Here we implement a recursive \"collapsing\" of the summaries: the inputs are partitioned based on a token limit, and summaries are generated of the partitions. This step is repeated until the total length of the summaries is within a desired limit, allowing for the summarization of arbitrary-length text.\n",
"\n",
"First we chunk the blog post into smaller \"sub documents\" to be mapped:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7821efb9-e1de-4234-84d2-75dfe13b5a6c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 1003, which is longer than the specified 1000\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generated 14 documents.\n"
]
}
],
"source": [
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
" chunk_size=1000, chunk_overlap=0\n",
")\n",
"split_docs = text_splitter.split_documents(docs)\n",
"print(f\"Generated {len(split_docs)} documents.\")"
]
},
{
"cell_type": "markdown",
"id": "3e7f1c8a-070e-47f0-bcf2-16d6191051ac",
"metadata": {},
"source": [
"Next, we define our graph. Note that we define an artificially low maximum token length of 1,000 tokens to illustrate the \"collapsing\" step."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "10ced55c-9e3e-404f-abe9-83ac29ffaa5a",
"metadata": {},
"outputs": [],
"source": [
"import operator\n",
"from typing import Annotated, List, Literal, TypedDict\n",
"\n",
"from langchain.chains.combine_documents.reduce import (\n",
" acollapse_docs,\n",
" split_list_of_docs,\n",
")\n",
"from langchain_core.documents import Document\n",
"from langgraph.constants import Send\n",
"from langgraph.graph import END, START, StateGraph\n",
"\n",
"token_max = 1000\n",
"\n",
"\n",
"def length_function(documents: List[Document]) -> int:\n",
" \"\"\"Get number of tokens for input contents.\"\"\"\n",
" return sum(llm.get_num_tokens(doc.page_content) for doc in documents)\n",
"\n",
"\n",
"# This will be the overall state of the main graph.\n",
"# It will contain the input document contents, corresponding\n",
"# summaries, and a final summary.\n",
"class OverallState(TypedDict):\n",
" # Notice here we use the operator.add\n",
" # This is because we want combine all the summaries we generate\n",
" # from individual nodes back into one list - this is essentially\n",
" # the \"reduce\" part\n",
" contents: List[str]\n",
" summaries: Annotated[list, operator.add]\n",
" collapsed_summaries: List[Document]\n",
" final_summary: str\n",
"\n",
"\n",
"# This will be the state of the node that we will \"map\" all\n",
"# documents to in order to generate summaries\n",
"class SummaryState(TypedDict):\n",
" content: str\n",
"\n",
"\n",
"# Here we generate a summary, given a document\n",
"async def generate_summary(state: SummaryState):\n",
" response = await map_chain.ainvoke(state[\"content\"])\n",
| |
152787
|
"system_prompt = (\n",
" \"You are an assistant for question-answering tasks. \"\n",
" \"Use the following pieces of retrieved context to answer \"\n",
" \"the question. If you don't know the answer, say that you \"\n",
" \"don't know. Use three sentences maximum and keep the \"\n",
" \"answer concise.\"\n",
" \"\\n\\n\"\n",
" \"{context}\"\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bf55faaf-0d17-4b74-925d-c478b555f7b2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Task decomposition is the process of breaking down a complicated task into smaller, more manageable steps. Techniques like Chain of Thought (CoT) and Tree of Thoughts enhance this process by guiding models to think step by step and explore multiple reasoning possibilities. This approach helps in simplifying complex tasks and provides insight into the model's reasoning.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = rag_chain.invoke({\"input\": \"What is Task Decomposition?\"})\n",
"response[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "187404c7-db47-49c5-be29-9ecb96dc9afa",
"metadata": {},
"source": [
"Note that we have used the built-in chain constructors `create_stuff_documents_chain` and `create_retrieval_chain`, so that the basic ingredients to our solution are:\n",
"\n",
"1. retriever;\n",
"2. prompt;\n",
"3. LLM.\n",
"\n",
"This will simplify the process of incorporating chat history.\n",
"\n",
"### Adding chat history\n",
"\n",
"The chain we have built uses the input query directly to retrieve relevant context. But in a conversational setting, the user query might require conversational context to be understood. For example, consider this exchange:\n",
"\n",
"> Human: \"What is Task Decomposition?\"\n",
">\n",
"> AI: \"Task decomposition involves breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model.\"\n",
">\n",
"> Human: \"What are common ways of doing it?\"\n",
"\n",
"In order to answer the second question, our system needs to understand that \"it\" refers to \"Task Decomposition.\"\n",
"\n",
"We'll need to update two things about our existing app:\n",
"\n",
"1. **Prompt**: Update our prompt to support historical messages as an input.\n",
"2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This can be thought of simply as building a new \"history aware\" retriever. Whereas before we had:\n",
" - `query` -> `retriever` \n",
" Now we will have:\n",
" - `(query, conversation history)` -> `LLM` -> `rephrased query` -> `retriever`"
]
},
{
"cell_type": "markdown",
"id": "776ae958-cbdc-4471-8669-c6087436f0b5",
"metadata": {},
"source": [
"#### Contextualizing the question\n",
"\n",
"First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the historical information.\n",
"\n",
"We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question.\n",
"\n",
"Note that we leverage a helper function [create_history_aware_retriever](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) for this step, which manages the case where `chat_history` is empty, and otherwise applies `prompt | llm | StrOutputParser() | retriever` in sequence.\n",
"\n",
"`create_history_aware_retriever` constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_history_aware_retriever\n",
"from langchain_core.prompts import MessagesPlaceholder\n",
"\n",
"contextualize_q_system_prompt = (\n",
" \"Given a chat history and the latest user question \"\n",
" \"which might reference context in the chat history, \"\n",
" \"formulate a standalone question which can be understood \"\n",
" \"without the chat history. Do NOT answer the question, \"\n",
" \"just reformulate it if needed and otherwise return it as is.\"\n",
")\n",
"\n",
"contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", contextualize_q_system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, retriever, contextualize_q_prompt\n",
")"
]
},
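{
"cell_type": "markdown",
"id": "history-aware-usage-sketch",
"metadata": {},
"source": [
"As a hedged usage sketch (the message contents below are illustrative), the history-aware retriever takes both the latest question and the chat history, rephrases the question into a standalone one, and retrieves documents for it:\n",
"\n",
"```python\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# \"it\" is resolved against the chat history before retrieval.\n",
"docs = history_aware_retriever.invoke(\n",
"    {\n",
"        \"input\": \"What are common ways of doing it?\",\n",
"        \"chat_history\": [\n",
"            HumanMessage(content=\"What is Task Decomposition?\"),\n",
"            AIMessage(\n",
"                content=\"Task decomposition involves breaking down complex tasks into smaller steps.\"\n",
"            ),\n",
"        ],\n",
"    }\n",
")\n",
"```"
]
},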
{
"cell_type": "markdown",
"id": "42a47168-4a1f-4e39-bd2d-d5b03609a243",
"metadata": {},
"source": [
"This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation.\n",
"\n",
"Now we can build our full QA chain. This is as simple as updating the retriever to be our new `history_aware_retriever`.\n",
"\n",
"Again, we will use [create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`-- it accepts the retrieved context alongside the conversation history and query to generate an answer. A more detailed explaination is over [here](/docs/tutorials/rag/#built-in-chains)\n",
"\n",
"We build our final `rag_chain` with [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html). This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "66f275f3-ddef-4678-b90d-ee64576878f9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
| |
152794
|
{
"cells": [
{
"cell_type": "raw",
"id": "cb6f552e-775f-4d84-bc7c-dca94c06a33c",
"metadata": {},
"source": [
"---\n",
"title: Tagging\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "a0507a4b",
"metadata": {},
"source": [
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/tagging.ipynb)\n",
"\n",
"# Classify Text into Labels\n",
"\n",
"Tagging means labeling a document with classes such as:\n",
"\n",
"- sentiment\n",
"- language\n",
"- style (formal, informal etc.)\n",
"- covered topics\n",
"- political tendency\n",
"\n",
"\n",
"\n",
"## Overview\n",
"\n",
"Tagging has a few components:\n",
"\n",
"* `function`: Like [extraction](/docs/tutorials/extraction), tagging uses [functions](https://openai.com/blog/function-calling-and-other-api-updates) to specify how the model should tag a document\n",
"* `schema`: defines how we want to tag the document\n",
"\n",
"## Quickstart\n",
"\n",
"Let's see a very straightforward example of how we can use OpenAI tool calling for tagging in LangChain. We'll use the [`with_structured_output`](/docs/how_to/structured_output) method supported by OpenAI models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc5cbb6f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"# import dotenv\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"id": "b8ca3f93",
"metadata": {},
"source": [
"Let's specify a Pydantic model with a few properties and their expected type in our schema."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39f3ce3e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"tagging_prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
"Extract the desired information from the following passage.\n",
"\n",
"Only extract the properties mentioned in the 'Classification' function.\n",
"\n",
"Passage:\n",
"{input}\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"class Classification(BaseModel):\n",
" sentiment: str = Field(description=\"The sentiment of the text\")\n",
" aggressiveness: int = Field(\n",
" description=\"How aggressive the text is on a scale from 1 to 10\"\n",
" )\n",
" language: str = Field(description=\"The language the text is written in\")\n",
"\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4o-mini\").with_structured_output(\n",
" Classification\n",
")\n",
"\n",
"tagging_chain = tagging_prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5509b6a6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Classification(sentiment='positive', aggressiveness=1, language='Spanish')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!\"\n",
"tagging_chain.invoke({\"input\": inp})"
]
},
{
"cell_type": "markdown",
"id": "ff3cf30d",
"metadata": {},
"source": [
"If we want JSON output, we can just call `.dict()`"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9154474c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'negative', 'aggressiveness': 8, 'language': 'Spanish'}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
"res = tagging_chain.invoke({\"input\": inp})\n",
"res.dict()"
]
},
{
"cell_type": "markdown",
"id": "d921bb53",
"metadata": {},
"source": [
"As we can see in the examples, it correctly interprets what we want.\n",
"\n",
"The results vary so that we may get, for example, sentiments in different languages ('positive', 'enojado' etc.).\n",
"\n",
"We will see how to control these results in the next section."
]
},
{
"cell_type": "markdown",
"id": "bebb2f83",
"metadata": {},
"source": [
"## Finer control\n",
"\n",
"Careful schema definition gives us more control over the model's output. \n",
"\n",
"Specifically, we can define:\n",
"\n",
"- possible values for each property\n",
"- description to make sure that the model understands the property\n",
"- required properties to be returned"
]
},
{
"cell_type": "markdown",
"id": "69ef0b9a",
"metadata": {},
"source": [
"Let's redeclare our Pydantic model to control for each of the previously mentioned aspects using enums:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6a5f7961",
"metadata": {},
"outputs": [],
"source": [
"class Classification(BaseModel):\n",
" sentiment: str = Field(..., enum=[\"happy\", \"neutral\", \"sad\"])\n",
" aggressiveness: int = Field(\n",
" ...,\n",
" description=\"describes how aggressive the statement is, the higher the number the more aggressive\",\n",
" enum=[1, 2, 3, 4, 5],\n",
" )\n",
" language: str = Field(\n",
" ..., enum=[\"spanish\", \"english\", \"french\", \"german\", \"italian\"]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e5a5881f",
"metadata": {},
"outputs": [],
"source": [
"tagging_prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
"Extract the desired information from the following passage.\n",
"\n",
"Only extract the properties mentioned in the 'Classification' function.\n",
"\n",
"Passage:\n",
"{input}\n",
"\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4o-mini\").with_structured_output(\n",
" Classification\n",
")\n",
"\n",
"chain = tagging_prompt | llm"
]
},
{
"cell_type": "markdown",
"id": "5ded2332",
"metadata": {},
"source": [
"Now the answers will be restricted in a way we expect!"
]
},
{
"cell_type": "code",
"execution_count": 17,
| |
152796
|
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1\n",
"keywords: [conversationchain]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Build a Chatbot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Prompt Templates](/docs/concepts/#prompt-templates)\n",
"- [Chat History](/docs/concepts/#chat-history)\n",
"\n",
"This guide requires `langgraph >= 0.2.28`.\n",
":::\n",
"\n",
":::note\n",
"\n",
"This tutorial previously used the [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) abstraction. You can access that version of the documentation in the [v0.2 docs](https://python.langchain.com/v0.2/docs/tutorials/chatbot/).\n",
"\n",
"As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into new LangChain applications.\n",
"\n",
"If your code is already relying on `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do **not** need to make any changes. We do not plan on deprecating this functionality in the near future as it works for simple chat applications and any code that uses `RunnableWithMessageHistory` will continue to work as expected.\n",
"\n",
"Please see [How to migrate to LangGraph Memory](/docs/versions/migrating_memory/) for more details.\n",
":::\n",
"\n",
"## Overview\n",
"\n",
"We'll go over an example of how to design and implement an LLM-powered chatbot. \n",
"This chatbot will be able to have a conversation and remember previous interactions.\n",
"\n",
"\n",
"Note that this chatbot that we build will only use the language model to have a conversation.\n",
"There are several other related concepts that you may be looking for:\n",
"\n",
"- [Conversational RAG](/docs/tutorials/qa_chat_history): Enable a chatbot experience over an external source of data\n",
"- [Agents](/docs/tutorials/agents): Build a chatbot that can take actions\n",
"\n",
"This tutorial will cover the basics which will be helpful for those two more advanced topics, but feel free to skip directly to there should you choose.\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"For this tutorial we will need `langchain-core` and `langgraph`:\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" <CodeBlock language=\"bash\">pip install langchain-core langgraph>0.2.27</CodeBlock>\n",
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain-core langgraph>0.2.27 -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```\n",
"\n",
"## Quickstart\n",
"\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-3.5-turbo\"`} />\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o-mini\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's first use the model directly. `ChatModel`s are instances of LangChain \"Runnables\", which means they expose a standard interface for interacting with them. To just simply call the model, we can pass in a list of messages to the `.invoke` method."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hi Bob! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 11, 'total_tokens': 21, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-149994c0-d958-49bb-9a9d-df911baea29f-0', usage_metadata={'input_tokens': 11, 'output_tokens': 10, 'total_tokens': 21})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model.invoke([HumanMessage(content=\"Hi! I'm Bob\")])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model on its own does not have any concept of state. For example, if you ask a followup question:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
| |
152797
|
"AIMessage(content=\"I'm sorry, but I don't have access to personal information about individuals unless you've shared it with me in this conversation. How can I assist you today?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 30, 'prompt_tokens': 11, 'total_tokens': 41, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-0ecab57c-728d-4fd1-845c-394a62df8e13-0', usage_metadata={'input_tokens': 11, 'output_tokens': 30, 'total_tokens': 41})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.invoke([HumanMessage(content=\"What's my name?\")])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the example [LangSmith trace](https://smith.langchain.com/public/5c21cb92-2814-4119-bae9-d02b8db577ac/r)\n",
"\n",
"We can see that it doesn't take the previous conversation turn into context, and cannot answer the question.\n",
"This makes for a terrible chatbot experience!\n",
"\n",
"To get around this, we need to pass the entire conversation history into the model. Let's see what happens when we do that:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob! How can I help you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 33, 'total_tokens': 45, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-c164c5a1-d85f-46ee-ba8a-bb511cfb0e51-0', usage_metadata={'input_tokens': 33, 'output_tokens': 12, 'total_tokens': 45})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage\n",
"\n",
"model.invoke(\n",
" [\n",
" HumanMessage(content=\"Hi! I'm Bob\"),\n",
" AIMessage(content=\"Hello Bob! How can I assist you today?\"),\n",
" HumanMessage(content=\"What's my name?\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we can see that we get a good response!\n",
"\n",
"This is the basic idea underpinning a chatbot's ability to interact conversationally.\n",
"So how do we best implement this?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Message persistence\n",
"\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/) implements a built-in persistence layer, making it ideal for chat applications that support multiple conversational turns.\n",
"\n",
"Wrapping our chat model in a minimal LangGraph application allows us to automatically persist the message history, simplifying the development of multi-turn applications.\n",
"\n",
"LangGraph comes with a simple in-memory checkpointer, which we use below. See its [documentation](https://langchain-ai.github.io/langgraph/concepts/persistence/) for more detail, including how to use different persistence backends (e.g., SQLite or Postgres)."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langgraph.checkpoint.memory import MemorySaver\n",
"from langgraph.graph import START, MessagesState, StateGraph\n",
"\n",
"# Define a new graph\n",
"workflow = StateGraph(state_schema=MessagesState)\n",
"\n",
"\n",
"# Define the function that calls the model\n",
"def call_model(state: MessagesState):\n",
" response = model.invoke(state[\"messages\"])\n",
" return {\"messages\": response}\n",
"\n",
"\n",
"# Define the (single) node in the graph\n",
"workflow.add_edge(START, \"model\")\n",
"workflow.add_node(\"model\", call_model)\n",
"\n",
"# Add memory\n",
"memory = MemorySaver()\n",
"app = workflow.compile(checkpointer=memory)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now need to create a `config` that we pass into the runnable every time. This config contains information that is not part of the input directly, but is still useful. In this case, we want to include a `thread_id`. This should look like:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This enables us to support multiple conversation threads with a single application, a common requirement when your application has multiple users.\n",
"\n",
"We can then invoke the application:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Hi Bob! How can I assist you today?\n"
]
}
],
"source": [
"query = \"Hi! I'm Bob.\"\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print() # output contains all messages in state"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Your name is Bob! How can I help you today?\n"
]
}
],
"source": [
"query = \"What's my name?\"\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! Our chatbot now remembers things about us. If we change the config to reference a different `thread_id`, we can see that it starts the conversation fresh."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"I'm sorry, but I don't have access to personal information about you unless you provide it. How can I assist you today?\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"abc234\"}}\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
| |
152798
|
"source": [
"However, we can always go back to the original conversation (since we are persisting it in a database)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Your name is Bob! If there's anything else you'd like to discuss or ask, feel free!\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is how we can support a chatbot having conversations with many users!\n",
"\n",
":::tip\n",
"\n",
"For async support, update the `call_model` node to be an async function and use `.ainvoke` when invoking the application:\n",
"\n",
"```python\n",
"# Async function for node:\n",
"async def call_model(state: MessagesState):\n",
" response = await model.ainvoke(state[\"messages\"])\n",
" return {\"messages\": response}\n",
"\n",
"\n",
"# Define graph as before:\n",
"workflow = StateGraph(state_schema=MessagesState)\n",
"workflow.add_edge(START, \"model\")\n",
"workflow.add_node(\"model\", call_model)\n",
"app = workflow.compile(checkpointer=MemorySaver())\n",
"\n",
"# Async invocation:\n",
"output = await app.ainvoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()\n",
"```\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Right now, all we've done is add a simple persistence layer around the model. We can start to make the chatbot more complicated and personalized by adding in a prompt template.\n",
"\n",
"## Prompt templates\n",
"\n",
"Prompt Templates help to turn raw user information into a format that the LLM can work with. In this case, the raw user input is just a message, which we are passing to the LLM. Let's now make that a bit more complicated. First, let's add in a system message with some custom instructions (but still taking messages as input). Next, we'll add in more input besides just the messages.\n",
"\n",
"To add in a system message, we will create a `ChatPromptTemplate`. We will utilize `MessagesPlaceholder` to pass all the messages in."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You talk like a pirate. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now update our application to incorporate this template:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"workflow = StateGraph(state_schema=MessagesState)\n",
"\n",
"\n",
"def call_model(state: MessagesState):\n",
" # highlight-start\n",
" chain = prompt | model\n",
" response = chain.invoke(state)\n",
" # highlight-end\n",
" return {\"messages\": response}\n",
"\n",
"\n",
"workflow.add_edge(START, \"model\")\n",
"workflow.add_node(\"model\", call_model)\n",
"\n",
"memory = MemorySaver()\n",
"app = workflow.compile(checkpointer=memory)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We invoke the application in the same way:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Ahoy there, Jim! What brings ye to these treacherous waters today? Be ye seekin’ treasure, tales, or perhaps a bit o’ knowledge? Speak up, matey!\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"abc345\"}}\n",
"query = \"Hi! I'm Jim.\"\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Ye be callin' yerself Jim, if I be hearin' ye correctly! A fine name for a scallywag such as yerself! What else can I do fer ye, me hearty?\n"
]
}
],
"source": [
"query = \"What is my name?\"\n",
"\n",
"input_messages = [HumanMessage(query)]\n",
"output = app.invoke({\"messages\": input_messages}, config)\n",
"output[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Awesome! Let's now make our prompt a little bit more complicated. Let's assume that the prompt template now looks something like this:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability in {language}.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we have added a new `language` input to the prompt. Our application now has two parameters-- the input `messages` and `language`. We should update our application's state to reflect this:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"from typing import Sequence\n",
"\n",
"from langchain_core.messages import BaseMessage\n",
"from langgraph.graph.message import add_messages\n",
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"# highlight-next-line\n",
"class State(TypedDict):\n",
" # highlight-next-line\n",
" messages: Annotated[Sequence[BaseMessage], add_messages]\n",
" # highlight-next-line\n",
" language: str\n",
"\n",
"\n",
"workflow = StateGraph(state_schema=State)\n",
"\n",
"\n",
"def call_model(state: State):\n",
" chain = prompt | model\n",
" response = chain.invoke(state)\n",
" return {\"messages\": [response]}\n",
"\n",
"\n",
"workflow.add_edge(START, \"model\")\n",
| |
152805
|
{
"cells": [
{
"cell_type": "markdown",
"id": "5630b0ca",
"metadata": {},
"source": [
"# Build a Retrieval Augmented Generation (RAG) App\n",
"\n",
"One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.\n",
"\n",
"This tutorial will show how to build a simple Q&A application\n",
"over a text data source. Along the way we’ll go over a typical Q&A\n",
"architecture and highlight additional resources for more advanced Q&A techniques. We’ll also see\n",
"how LangSmith can help us trace and understand our application.\n",
"LangSmith will become increasingly helpful as our application grows in\n",
"complexity.\n",
"\n",
"If you're already familiar with basic retrieval, you might also be interested in\n",
"this [high-level overview of different retrieval techinques](/docs/concepts/#retrieval).\n",
"\n",
"## What is RAG?\n",
"\n",
"RAG is a technique for augmenting LLM knowledge with additional data.\n",
"\n",
"LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data up to a specific point in time that they were trained on. If you want to build AI applications that can reason about private data or data introduced after a model's cutoff date, you need to augment the knowledge of the model with the specific information it needs. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).\n",
"\n",
"LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally. \n",
"\n",
"**Note**: Here we focus on Q&A for unstructured data. If you are interested for RAG over structured data, check out our tutorial on doing [question/answering over SQL data](/docs/tutorials/sql_qa).\n",
"\n",
"## Concepts\n",
"A typical RAG application has two main components:\n",
"\n",
"**Indexing**: a pipeline for ingesting data from a source and indexing it. *This usually happens offline.*\n",
"\n",
"**Retrieval and generation**: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.\n",
"\n",
"The most common full sequence from raw data to answer looks like:\n",
"\n",
"### Indexing\n",
"1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n",
"2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n",
"3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vector-stores) and [Embeddings](/docs/concepts/#embedding-models) model.\n",
"\n",
"\n",
"\n",
"### Retrieval and generation\n",
"4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/concepts/#retrievers).\n",
"5. **Generate**: A [ChatModel](/docs/concepts/#chat-models) / [LLM](/docs/concepts/#llms) produces an answer using a prompt that includes the question and the retrieved data\n",
"\n",
"\n",
"\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"This tutorial requires these langchain dependencies:\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1918ba2f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --quiet --upgrade langchain langchain-community langchain-chroma"
]
},
{
"cell_type": "markdown",
"id": "9ff1b425",
"metadata": {},
"source": [
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain langchain-community langchain-chroma -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```\n",
"## Preview\n",
"\n",
"In this guide we’ll build an app that answers questions about the content of a website. The specific website we will use is the [LLM Powered Autonomous\n",
"Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post\n",
"by Lilian Weng, which allows us to ask questions about the contents of\n",
"the post.\n",
"\n",
"We can create a simple indexing pipeline and RAG chain to do this in ~20\n",
"lines of code:\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26ef9d35",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6281ec7b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
| |
152807
|
"which will recursively split the document using common separators like\n",
"new lines until each chunk is the appropriate size. This is the\n",
"recommended text splitter for generic text use cases.\n",
"\n",
"We set `add_start_index=True` so that the character index at which each\n",
"split Document starts within the initial Document is preserved as\n",
"metadata attribute “start_index”."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6aa3f8c0-5113-4c36-9706-ee702407173a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"66"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000, chunk_overlap=200, add_start_index=True\n",
")\n",
"all_splits = text_splitter.split_documents(docs)\n",
"\n",
"len(all_splits)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2257752c-bed2-4d57-be8e-d275bfe70ace",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"969"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(all_splits[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "325fdc48-4a24-4645-9d08-0d22f5be5e13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/',\n",
" 'start_index': 7056}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"all_splits[10].metadata"
]
},
{
"cell_type": "markdown",
"id": "7046d580",
"metadata": {},
"source": [
"### Go deeper\n",
"\n",
"`TextSplitter`: Object that splits a list of `Document`s into smaller\n",
"chunks. Subclass of `DocumentTransformer`s.\n",
"\n",
"- Learn more about splitting text using different methods by reading the [how-to docs](/docs/how_to#text-splitters)\n",
"- [Code (py or js)](/docs/integrations/document_loaders/source_code)\n",
"- [Scientific papers](/docs/integrations/document_loaders/grobid)\n",
"- [Interface](https://python.langchain.com/api_reference/text_splitters/base/langchain_text_splitters.base.TextSplitter.html): API reference for the base interface.\n",
"\n",
"`DocumentTransformer`: Object that performs a transformation on a list\n",
"of `Document` objects.\n",
"\n",
"- [Docs](/docs/how_to#text-splitters): Detailed documentation on how to use `DocumentTransformers`\n",
"- [Integrations](/docs/integrations/document_transformers/)\n",
"- [Interface](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.\n",
"\n",
"## 3. Indexing: Store {#indexing-store}\n",
"\n",
"Now we need to index our 66 text chunks so that we can search over them\n",
"at runtime. The most common way to do this is to embed the contents of\n",
"each document split and insert these embeddings into a vector database\n",
"(or vector store). When we want to search over our splits, we take a\n",
"text search query, embed it, and perform some sort of “similarity”\n",
"search to identify the stored splits with the most similar embeddings to\n",
"our query embedding. The simplest similarity measure is cosine\n",
"similarity — we measure the cosine of the angle between each pair of\n",
"embeddings (which are high dimensional vectors).\n",
"\n",
"We can embed and store all of our document splits in a single command\n",
"using the [Chroma](/docs/integrations/vectorstores/chroma)\n",
"vector store and\n",
"[OpenAIEmbeddings](/docs/integrations/text_embedding/openai)\n",
"model.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "0b44b41a-8b25-42ad-9e37-7baf82a058cd",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "markdown",
"id": "dbddc12e",
"metadata": {},
"source": [
"### Go deeper\n",
"\n",
"`Embeddings`: Wrapper around a text embedding model, used for converting\n",
"text to embeddings.\n",
"\n",
"- [Docs](/docs/how_to/embed_text): Detailed documentation on how to use embeddings.\n",
"- [Integrations](/docs/integrations/text_embedding/): 30+ integrations to choose from.\n",
"- [Interface](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.\n",
"\n",
"`VectorStore`: Wrapper around a vector database, used for storing and\n",
"querying embeddings.\n",
"\n",
"- [Docs](/docs/how_to/vectorstores): Detailed documentation on how to use vector stores.\n",
"- [Integrations](/docs/integrations/vectorstores/): 40+ integrations to choose from.\n",
"- [Interface](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.\n",
"\n",
"This completes the **Indexing** portion of the pipeline. At this point\n",
"we have a query-able vector store containing the chunked contents of our\n",
"blog post. Given a user question, we should ideally be able to return\n",
"the snippets of the blog post that answer the question.\n",
"\n",
"## 4. Retrieval and Generation: Retrieve {#retrieval-and-generation-retrieve}\n",
"\n",
"Now let’s write the actual application logic. We want to create a simple\n",
"application that takes a user question, searches for documents relevant\n",
"to that question, passes the retrieved documents and initial question to\n",
"a model, and returns an answer.\n",
"\n",
"First we need to define our logic for searching over documents.\n",
"LangChain defines a\n",
"[Retriever](/docs/concepts#retrievers/) interface\n",
"which wraps an index that can return relevant `Documents` given a string\n",
"query.\n",
"\n",
"The most common type of `Retriever` is the\n",
"[VectorStoreRetriever](/docs/how_to/vectorstore_retriever),\n",
"which uses the similarity search capabilities of a vector store to\n",
"facilitate retrieval. Any `VectorStore` can easily be turned into a\n",
"`Retriever` with `VectorStore.as_retriever()`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1a0d25f8-8a45-4ec7-b419-c36e231fde13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"6"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vectorstore.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 6})\n",
"\n",
"retrieved_docs = retriever.invoke(\"What are the approaches to Task Decomposition?\")\n",
| |
152809
|
"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts#langchain-expression-language-lcel). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
"\n",
"LangChain will automatically cast certain objects to runnables when met with the `|` operator. Here, `format_docs` is cast to a [RunnableLambda](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableLambda.html), and the dict with `\"context\"` and `\"question\"` is cast to a [RunnableParallel](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableParallel.html). The details are less important than the bigger point, which is that each object is a Runnable.\n",
"\n",
"Let's trace how the input question flows through the above runnables.\n",
"\n",
"As we've seen above, the input to `prompt` is expected to be a dict with keys `\"context\"` and `\"question\"`. So the first element of this chain builds runnables that will calculate both of these from the input question:\n",
"- `retriever | format_docs` passes the question through the retriever, generating [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, and then to `format_docs` to generate strings;\n",
"- `RunnablePassthrough()` passes through the input question unchanged.\n",
"\n",
"That is, if you constructed\n",
"```python\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
")\n",
"```\n",
"Then `chain.invoke(question)` would build a formatted prompt, ready for inference. (Note: when developing with LCEL, it can be practical to test with sub-chains like this.)\n",
"\n",
"The last steps of the chain are `llm`, which runs the inference, and `StrOutputParser()`, which just plucks the string content out of the LLM's output message.\n",
"\n",
"You can analyze the individual steps of this chain via its [LangSmith\n",
"trace](https://smith.langchain.com/public/1799e8db-8a6d-4eb2-84d5-46e8d7d5a99b/r).\n",
"\n",
"### Built-in chains\n",
"\n",
"If preferred, LangChain includes convenience functions that implement the above LCEL. We compose two functions:\n",
"\n",
"- [create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) specifies how retrieved context is fed into a prompt and LLM. In this case, we will \"stuff\" the contents into the prompt -- i.e., we will include all retrieved context without any summarization or other processing. It largely implements our above `rag_chain`, with input keys `context` and `input`-- it generates an answer using retrieved context and query.\n",
"- [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html) adds the retrieval step and propagates the retrieved context through the chain, providing it alongside the final answer. It has input key `input`, and includes `input`, `context`, and `answer` in its output."
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "e75bfe98-d9e4-4868-bae1-5811437d859b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Task Decomposition is a process in which complex tasks are broken down into smaller and simpler steps. Techniques like Chain of Thought (CoT) and Tree of Thoughts are used to enhance model performance on these tasks. The CoT method instructs the model to think step by step, decomposing hard tasks into manageable ones, while Tree of Thoughts extends CoT by exploring multiple reasoning possibilities at each step, creating a tree structure of thoughts.\n"
]
}
],
"source": [
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"system_prompt = (\n",
" \"You are an assistant for question-answering tasks. \"\n",
" \"Use the following pieces of retrieved context to answer \"\n",
" \"the question. If you don't know the answer, say that you \"\n",
" \"don't know. Use three sentences maximum and keep the \"\n",
" \"answer concise.\"\n",
" \"\\n\\n\"\n",
" \"{context}\"\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
"\n",
"response = rag_chain.invoke({\"input\": \"What is Task Decomposition?\"})\n",
"print(response[\"answer\"])"
]
},
{
"cell_type": "markdown",
"id": "0fe711ea-592b-44a1-89b3-cee33c81aca4",
"metadata": {},
"source": [
"#### Returning sources\n",
"Often in Q&A applications it's important to show users the sources that were used to generate the answer. LangChain's built-in `create_retrieval_chain` will propagate retrieved source documents through to the output in the `\"context\"` key:"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "9d4cec1a-75d6-4479-929f-72cadb2dcde8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
"\n",
"page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'start_index': 1585}\n",
"\n",
| |
152812
|
{
"cells": [
{
"cell_type": "markdown",
"id": "bf37a837-7a6a-447b-8779-38f26c585887",
"metadata": {},
"source": [
"# Vector stores and retrievers\n",
"\n",
"This tutorial will familiarize you with LangChain's vector store and retriever abstractions. These abstractions are designed to support retrieval of data-- from (vector) databases and other sources-- for integration with LLM workflows. They are important for applications that fetch data to be reasoned over as part of model inference, as in the case of retrieval-augmented generation, or RAG (see our RAG tutorial [here](/docs/tutorials/rag)).\n",
"\n",
"## Concepts\n",
"\n",
"This guide focuses on retrieval of text data. We will cover the following concepts:\n",
"\n",
"- Documents;\n",
"- Vector stores;\n",
"- Retrievers.\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"This tutorial requires the `langchain`, `langchain-chroma`, and `langchain-openai` packages:\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" <CodeBlock language=\"bash\">pip install langchain langchain-chroma langchain-openai</CodeBlock>\n",
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain langchain-chroma langchain-openai -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n",
"As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n",
"The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"After you sign up at the link above, make sure to set your environment variables to start logging traces:\n",
"\n",
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set them with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```\n",
"\n",
"\n",
"## Documents\n",
"\n",
"LangChain implements a [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) abstraction, which is intended to represent a unit of text and associated metadata. It has two attributes:\n",
"\n",
"- `page_content`: a string representing the content;\n",
"- `metadata`: a dict containing arbitrary metadata.\n",
"\n",
"The `metadata` attribute can capture information about the source of the document, its relationship to other documents, and other information. Note that an individual `Document` object often represents a chunk of a larger document.\n",
"\n",
"Let's generate some sample documents:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9f3dc151-7b2f-4d94-9558-7a84f7eab100",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"documents = [\n",
" Document(\n",
" page_content=\"Dogs are great companions, known for their loyalty and friendliness.\",\n",
" metadata={\"source\": \"mammal-pets-doc\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Cats are independent pets that often enjoy their own space.\",\n",
" metadata={\"source\": \"mammal-pets-doc\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Goldfish are popular pets for beginners, requiring relatively simple care.\",\n",
" metadata={\"source\": \"fish-pets-doc\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Parrots are intelligent birds capable of mimicking human speech.\",\n",
" metadata={\"source\": \"bird-pets-doc\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Rabbits are social animals that need plenty of space to hop around.\",\n",
" metadata={\"source\": \"mammal-pets-doc\"},\n",
" ),\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "1cac19bd-27d1-40f1-9c27-7a586b685b4e",
"metadata": {},
"source": [
"Here we've generated five documents, containing metadata indicating three distinct \"sources\".\n",
"\n",
"## Vector stores\n",
"\n",
"Vector search is a common way to store and search over unstructured data (such as unstructured text). The idea is to store numeric vectors that are associated with the text. Given a query, we can [embed](/docs/concepts#embedding-models) it as a vector of the same dimension and use vector similarity metrics to identify related data in the store.\n",
"\n",
"LangChain [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html) objects contain methods for adding text and `Document` objects to the store, and querying them using various similarity metrics. They are often initialized with [embedding](/docs/how_to/embed_text) models, which determine how text data is translated to numeric vectors.\n",
"\n",
"LangChain includes a suite of [integrations](/docs/integrations/vectorstores) with different vector store technologies. Some vector stores are hosted by a provider (e.g., various cloud providers) and require specific credentials to use; some (such as [Postgres](/docs/integrations/vectorstores/pgvector)) run in separate infrastructure that can be run locally or via a third-party; others can run in-memory for lightweight workloads. Here we will demonstrate usage of LangChain VectorStores using [Chroma](/docs/integrations/vectorstores/chroma), which includes an in-memory implementation.\n",
"\n",
"To instantiate a vector store, we often need to provide an [embedding](/docs/how_to/embed_text) model to specify how text should be converted into a numeric vector. Here we will use [OpenAI embeddings](/docs/integrations/text_embedding/openai/)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d48acc28-1a34-414b-8e08-fbdef3a2a60b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents,\n",
" embedding=OpenAIEmbeddings(),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ff0f0b43-e5b8-4c79-b782-a02f17345487",
"metadata": {},
"source": [
| |
152813
|
"Calling `.from_documents` here will add the documents to the vector store. [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html) implements methods for adding documents that can also be called after the object is instantiated. Most implementations will allow you to connect to an existing vector store-- e.g., by providing a client, index name, or other information. See the documentation for a specific [integration](/docs/integrations/vectorstores) for more detail.\n",
"\n",
"Once we've instantiated a `VectorStore` that contains documents, we can query it. [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html) includes methods for querying:\n",
"- Synchronously and asynchronously;\n",
"- By string query and by vector;\n",
"- With and without returning similarity scores;\n",
"- By similarity and [maximum marginal relevance](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search) (to balance similarity with query to diversity in retrieved results).\n",
"\n",
"The methods will generally include a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects in their outputs.\n",
"\n",
"### Examples\n",
"\n",
"Return documents based on similarity to a string query:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7e01ed91-1a98-4221-960a-bd7a2541a548",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vectorstore.similarity_search(\"cat\")"
]
},
{
"cell_type": "markdown",
"id": "4d4f9857-5a7d-4b5f-82b8-ff76539143c2",
"metadata": {},
"source": [
"Async query:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "618af196-6182-4a7d-8b09-07493fcdc868",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await vectorstore.asimilarity_search(\"cat\")"
]
},
{
"cell_type": "markdown",
"id": "d4172698-9ad7-4422-99b2-bdc268e99c75",
"metadata": {},
"source": [
"Return scores:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4ed24af2-0d82-478c-949b-b389348d4e9f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[(Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),\n",
" 0.3751849830150604),\n",
" (Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),\n",
" 0.48316916823387146),\n",
" (Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),\n",
" 0.49601367115974426),\n",
" (Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'}),\n",
" 0.4972994923591614)]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Note that providers implement different scores; Chroma here\n",
"# returns a distance metric that should vary inversely with\n",
"# similarity.\n",
"\n",
"vectorstore.similarity_search_with_score(\"cat\")"
]
},
{
"cell_type": "markdown",
"id": "b4991642-7275-40a9-b11a-e3beccbf2614",
"metadata": {},
"source": [
"Return documents based on similarity to an embedded query:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b1a5eabb-a821-48cc-917e-cc27f03e4bcc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),\n",
" Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"embedding = OpenAIEmbeddings().embed_query(\"cat\")\n",
"\n",
"vectorstore.similarity_search_by_vector(embedding)"
]
},
{
"cell_type": "markdown",
"id": "168dbbec-ea97-4cc9-bb1a-75519c2d08af",
"metadata": {},
"source": [
"Learn more:\n",
"\n",
"- [API reference](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html)\n",
"- [How-to guide](/docs/how_to/vectorstores)\n",
"- [Integration-specific docs](/docs/integrations/vectorstores)\n",
"\n",
"## Retrievers\n",
"\n",
"LangChain `VectorStore` objects do not subclass [Runnable](https://python.langchain.com/api_reference/core/index.html#module-langchain_core.runnables), and so cannot immediately be integrated into LangChain Expression Language [chains](/docs/concepts/#langchain-expression-language-lcel).\n",
"\n",
"LangChain [Retrievers](https://python.langchain.com/api_reference/core/index.html#module-langchain_core.retrievers) are Runnables, so they implement a standard set of methods (e.g., synchronous and asynchronous `invoke` and `batch` operations) and are designed to be incorporated in LCEL chains.\n",
"\n",
"We can create a simple version of this ourselves, without subclassing `Retriever`. If we choose what method we wish to use to retrieve documents, we can create a runnable easily. Below we will build one around the `similarity_search` method:"
]
},
{
"cell_type": "code",
"execution_count": 7,
| |
152814
|
"id": "f1461582-e569-4326-bd95-510f72edf019",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'})],\n",
" [Document(page_content='Goldfish are popular pets for beginners, requiring relatively simple care.', metadata={'source': 'fish-pets-doc'})]]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"retriever = RunnableLambda(vectorstore.similarity_search).bind(k=1) # select top result\n",
"\n",
"retriever.batch([\"cat\", \"shark\"])"
]
},
{
"cell_type": "markdown",
"id": "a36d3f64-a8bc-4baa-b2ea-07e324a0143e",
"metadata": {},
"source": [
"Vectorstores implement an `as_retriever` method that will generate a Retriever, specifically a [VectorStoreRetriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStoreRetriever.html). These retrievers include specific `search_type` and `search_kwargs` attributes that identify what methods of the underlying vector store to call, and how to parameterize them. For instance, we can replicate the above with the following:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "4989fe5e-ac58-4751-bc35-f53ff885860c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'})],\n",
" [Document(page_content='Goldfish are popular pets for beginners, requiring relatively simple care.', metadata={'source': 'fish-pets-doc'})]]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vectorstore.as_retriever(\n",
" search_type=\"similarity\",\n",
" search_kwargs={\"k\": 1},\n",
")\n",
"\n",
"retriever.batch([\"cat\", \"shark\"])"
]
},
{
"cell_type": "markdown",
"id": "6b79ded3-39ed-4aeb-8b70-cd36795ae239",
"metadata": {},
"source": [
"`VectorStoreRetriever` supports search types of `\"similarity\"` (default), `\"mmr\"` (maximum marginal relevance, described above), and `\"similarity_score_threshold\"`. We can use the latter to threshold documents output by the retriever by similarity score.\n",
"\n",
"Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for a LLM. Below we show a minimal example.\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c77b68bf-59f3-4416-9877-960f934c374d",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6f1ae0d0-0b4b-4da0-80ce-f82913052a83",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"message = \"\"\"\n",
"Answer this question using the provided context only.\n",
"\n",
"{question}\n",
"\n",
"Context:\n",
"{context}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", message)])\n",
"\n",
"rag_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b3c0d625-61e0-492e-b3a6-c40d383fca03",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cats are independent pets that often enjoy their own space.\n"
]
}
],
"source": [
"response = rag_chain.invoke(\"tell me about cats\")\n",
"\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "3d9be7cb-2081-48a4-b6e4-d5e2d562ffd4",
"metadata": {},
"source": [
"## Learn more:\n",
"\n",
"Retrieval strategies can be rich and complex. For example:\n",
"\n",
"- We can [infer hard rules and filters](/docs/how_to/self_query/) from a query (e.g., \"using documents published after 2020\");\n",
"- We can [return documents that are linked](/docs/how_to/parent_document_retriever/) to the retrieved context in some way (e.g., via some document taxonomy);\n",
"- We can generate [multiple embeddings](/docs/how_to/multi_vector) for each unit of context;\n",
"- We can [ensemble results](/docs/how_to/ensemble_retriever) from multiple retrievers;\n",
"- We can assign weights to documents, e.g., to weigh [recent documents](/docs/how_to/time_weighted_vectorstore/) higher.\n",
"\n",
"The [retrievers](/docs/how_to#retrievers) section of the how-to guides covers these and other built-in retrieval strategies.\n",
"\n",
"It is also straightforward to extend the [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) class in order to implement custom retrievers. See our how-to guide [here](/docs/how_to/custom_retriever)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| |
152815
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Build a Question/Answering system over SQL data\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Agents](/docs/concepts/#agents)\n",
"\n",
":::\n",
"\n",
"Enabling a LLM system to query structured data can be qualitatively different from unstructured text data. Whereas in the latter it is common to generate text that can be searched against a vector database, the approach for structured data is often for the LLM to write and execute queries in a DSL, such as SQL. In this guide we'll go over the basic ways to create a Q&A system over tabular data in databases. We will cover implementations using both chains and agents. These systems will allow us to ask a question about the data in a database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs to answer the question.\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"\n",
"Building Q&A systems of SQL databases requires executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent's needs. This will mitigate though not eliminate the risks of building a model-driven system. For more on general security best practices, [see here](/docs/security).\n",
"\n",
"\n",
"## Architecture\n",
"\n",
"At a high-level, the steps of these systems are:\n",
"\n",
"1. **Convert question to DSL query**: Model converts user input to a SQL query.\n",
"2. **Execute SQL query**: Execute the query.\n",
"3. **Answer the question**: Model responds to user input using the query results.\n",
"\n",
"Note that querying data in CSVs can follow a similar approach. See our [how-to guide](/docs/how_to/sql_csv) on question-answering over CSV data for more detail.\n",
"\n",
"\n",
"\n",
"## Setup\n",
"\n",
"First, get required packages and set environment variables:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"%pip install --upgrade --quiet langchain langchain-community langchain-openai faiss-cpu"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use an OpenAI model and a [FAISS-powered vector store](/docs/integrations/vectorstores/faiss/) in this guide."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Comment out the below to opt-out of using LangSmith in this notebook. Not required.\n",
"if not os.environ.get(\"LANGCHAIN_API_KEY\"):\n",
" os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
" os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:\n",
"\n",
"* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook.sql`\n",
"* Run `sqlite3 Chinook.db`\n",
"* Run `.read Chinook.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"sqlite\n",
"['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track']\n"
]
},
{
"data": {
"text/plain": [
"\"[(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]\""
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"print(db.dialect)\n",
"print(db.get_usable_table_names())\n",
"db.run(\"SELECT * FROM Artist LIMIT 10;\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! We've got a SQL database that we can query. Now let's try hooking it up to an LLM.\n",
"\n",
"## Chains {#chains}\n",
"\n",
"Chains (i.e., compositions of LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel)) support applications whose steps are predictable. We can create a simple chain that takes a question and does the following:\n",
"- convert the question into a SQL query;\n",
"- execute the query;\n",
"- use the result to answer the original question.\n",
"\n",
"There are scenarios not supported by this arrangement. For example, this system will execute a SQL query for any user input-- even \"hello\". Importantly, as we'll see below, some questions require more than one query to answer. We will address these scenarios in the Agents section.\n",
"\n",
"### Convert question to SQL query\n",
"\n",
"The first step in a SQL chain or agent is to take the user input and convert it to a SQL query. LangChain comes with a built-in chain for this: [create_sql_query_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.sql_database.query.create_sql_query_chain.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'SELECT COUNT(\"EmployeeId\") AS \"TotalEmployees\" FROM \"Employee\"\\nLIMIT 1;'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_sql_query_chain\n",
"\n",
"chain = create_sql_query_chain(llm, db)\n",
"response = chain.invoke({\"question\": \"How many employees are there\"})\n",
| |
152817
|
"\n",
"To initialize the agent we'll use the `SQLDatabaseToolkit` to create a bunch of tools:\n",
"\n",
"* Create and execute queries\n",
"* Check query syntax\n",
"* Retrieve table descriptions\n",
"* ... and more"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x113403b50>),\n",
" InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x113403b50>),\n",
" ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x113403b50>),\n",
" QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x113403b50>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x115b7e890>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x115457e10>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy=''), llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['dialect', 'query'], template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x115b7e890>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x115457e10>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy='')))]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"\n",
"tools = toolkit.get_tools()\n",
"\n",
"tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### System Prompt\n",
"\n",
"We will also want to create a system prompt for our agent. This will consist of instructions for how to behave."
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import SystemMessage\n",
"\n",
"SQL_PREFIX = \"\"\"You are an agent designed to interact with a SQL database.\n",
"Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.\n",
"Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.\n",
"You can order the results by a relevant column to return the most interesting examples in the database.\n",
"Never query for all the columns from a specific table, only ask for the relevant columns given the question.\n",
"You have access to tools for interacting with the database.\n",
"Only use the below tools. Only use the information returned by the below tools to construct your final answer.\n",
"You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n",
"\n",
"DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n",
"\n",
"To start you should ALWAYS look at the tables in the database to see what you can query.\n",
"Do NOT skip this step.\n",
"Then you should query the schema of the most relevant tables.\"\"\"\n",
"\n",
"system_message = SystemMessage(content=SQL_PREFIX)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initializing agent\n",
"First, get required package **LangGraph**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"%pip install --upgrade --quiet langgraph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use a prebuilt [LangGraph](/docs/concepts/#langgraph) agent to build our agent"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = create_react_agent(llm, tools, messages_modifier=system_message)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consider how the agent responds to the below question:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_vnHKe3oul1xbpX0Vrb2vsamZ', 'function': {'arguments': '{\"query\":\"SELECT c.Country, SUM(i.Total) AS Total_Spent FROM customers c JOIN invoices i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY Total_Spent DESC LIMIT 1\"}', 'name': 'sql_db_query'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 53, 'prompt_tokens': 557, 'total_tokens': 610}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-da250593-06b5-414c-a9d9-3fc77036dd9c-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': 'SELECT c.Country, SUM(i.Total) AS Total_Spent FROM customers c JOIN invoices i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY Total_Spent DESC LIMIT 1'}, 'id': 'call_vnHKe3oul1xbpX0Vrb2vsamZ'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Error: (sqlite3.OperationalError) no such table: customers\\n[SQL: SELECT c.Country, SUM(i.Total) AS Total_Spent FROM customers c JOIN invoices i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY Total_Spent DESC LIMIT 1]\\n(Background on this error at: https://sqlalche.me/e/20/e3q8)', name='sql_db_query', id='1a5c85d4-1b30-4af3-ab9b-325cbce3b2b4', tool_call_id='call_vnHKe3oul1xbpX0Vrb2vsamZ')]}}\n",
"----\n",
| |
152818
|
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_pp3BBD1hwpdwskUj63G3tgaQ', 'function': {'arguments': '{}', 'name': 'sql_db_list_tables'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 699, 'total_tokens': 711}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-04cf0e05-61d0-4673-b5dc-1a9b5fd71fff-0', tool_calls=[{'name': 'sql_db_list_tables', 'args': {}, 'id': 'call_pp3BBD1hwpdwskUj63G3tgaQ'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track', name='sql_db_list_tables', id='c2668450-4d73-4d32-8d75-8aac8fa153fd', tool_call_id='call_pp3BBD1hwpdwskUj63G3tgaQ')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_22Asbqgdx26YyEvJxBuANVdY', 'function': {'arguments': '{\"query\":\"SELECT c.Country, SUM(i.Total) AS Total_Spent FROM Customer c JOIN Invoice i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY Total_Spent DESC LIMIT 1\"}', 'name': 'sql_db_query'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 53, 'prompt_tokens': 744, 'total_tokens': 797}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-bdd94241-ca49-4f15-b31a-b7c728a34ea8-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': 'SELECT c.Country, SUM(i.Total) AS Total_Spent FROM Customer c JOIN Invoice i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY Total_Spent DESC LIMIT 1'}, 'id': 'call_22Asbqgdx26YyEvJxBuANVdY'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content=\"[('USA', 523.0600000000003)]\", name='sql_db_query', id='f647e606-8362-40ab-8d34-612ff166dbe1', tool_call_id='call_22Asbqgdx26YyEvJxBuANVdY')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Customers from the USA spent the most, with a total amount spent of $523.06.', response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 819, 'total_tokens': 839}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-92e88de0-ff62-41da-8181-053fb5632af4-0')]}}\n",
"----\n"
]
}
],
"source": [
"for s in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"Which country's customers spent the most?\")]}\n",
"):\n",
" print(s)\n",
" print(\"----\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the agent executes multiple queries until it has the information it needs:\n",
"1. List available tables;\n",
"2. Retrieves the schema for three tables;\n",
"3. Queries multiple of the tables via a join operation.\n",
"\n",
"The agent is then able to use the result of the final query to generate an answer to the original question.\n",
"\n",
"The agent can similarly handle qualitative questions:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_WN0N3mm8WFvPXYlK9P7KvIEr', 'function': {'arguments': '{\"table_names\":\"playlisttrack\"}', 'name': 'sql_db_schema'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 554, 'total_tokens': 571}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-be278326-4115-4c67-91a0-6dc97e7bffa4-0', tool_calls=[{'name': 'sql_db_schema', 'args': {'table_names': 'playlisttrack'}, 'id': 'call_WN0N3mm8WFvPXYlK9P7KvIEr'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content=\"Error: table_names {'playlisttrack'} not found in database\", name='sql_db_schema', id='fe32b3d3-a40f-4802-a6b8-87a2453af8c2', tool_call_id='call_WN0N3mm8WFvPXYlK9P7KvIEr')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='I apologize for the error. Let me first check the available tables in the database.', additional_kwargs={'tool_calls': [{'id': 'call_CzHt30847ql2MmnGxgYeVSL2', 'function': {'arguments': '{}', 'name': 'sql_db_list_tables'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 30, 'prompt_tokens': 592, 'total_tokens': 622}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f6c107bb-e945-4848-a83c-f57daec1144e-0', tool_calls=[{'name': 'sql_db_list_tables', 'args': {}, 'id': 'call_CzHt30847ql2MmnGxgYeVSL2'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track', name='sql_db_list_tables', id='a4950f74-a0ad-4558-ba54-7bcf99539a02', tool_call_id='call_CzHt30847ql2MmnGxgYeVSL2')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='The database contains a table named \"PlaylistTrack\". Let me retrieve the schema and sample rows from the \"PlaylistTrack\" table.', additional_kwargs={'tool_calls': [{'id': 'call_wX9IjHLgRBUmxlfCthprABRO', 'function': {'arguments': '{\"table_names\":\"PlaylistTrack\"}', 'name': 'sql_db_schema'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 658, 'total_tokens': 702}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-e8d34372-1159-4654-a185-1e7d0cb70269-0', tool_calls=[{'name': 'sql_db_schema', 'args': {'table_names': 'PlaylistTrack'}, 'id': 'call_wX9IjHLgRBUmxlfCthprABRO'}])]}}\n",
"----\n",
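The record above is a cross-reference row: each top-level key is a LangChain symbol (for example RunnablePassthrough or create_retriever_tool), and its value maps the title of a documentation page that uses the symbol to that page's URL. As a minimal sketch of how such a record could be consumed, assuming the text field parses as JSON (the row shown above is cut off by the viewer's length limit, so a small well-formed sample is used instead; the variable names and the pages_for helper are illustrative, not part of any published API):

```python
import json

# Hypothetical sample of one cross-reference record. The real row text is
# truncated by the viewer, so a short, well-formed excerpt is used here.
row_text = (
    '{"RouterOutputParser": {"# Legacy": '
    '"https://python.langchain.com/docs/versions/migrating_chains/llm_router_chain/"}, '
    '"RegexParser": {"# Example": '
    '"https://python.langchain.com/docs/versions/migrating_chains/map_rerank_docs_chain/"}}'
)

# Each top-level key is a LangChain symbol; its value maps a docs page
# title to that page's URL.
xrefs: dict[str, dict[str, str]] = json.loads(row_text)

def pages_for(symbol: str) -> dict[str, str]:
    """Return the {page title: URL} mapping for a symbol (empty if absent)."""
    return xrefs.get(symbol, {})

for title, url in pages_for("RegexParser").items():
    print(f"{title}: {url}")
# prints: # Example: https://python.langchain.com/docs/versions/migrating_chains/map_rerank_docs_chain/
```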