One more complex type of example is one where the example is an entire conversation, usually one in which the model initially responds incorrectly and the user then tells the model how to correct its answer. This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show common errors and spell out exactly why they're wrong and what should be done instead.

#### 2. Number of examples

Once we have a dataset of examples, we need to think about how many examples should be in each prompt. The key tradeoff is that more examples generally improve performance, but larger prompts increase costs and latency. And beyond some threshold, having too many examples can start to confuse the model. Finding the right number of examples is highly dependent on the model, the task, the quality of the examples, and your cost and latency constraints. Anecdotally, the better the model is, the fewer examples it needs to perform well and the more quickly you hit steeply diminishing returns on adding more examples. But the best (and only reliable) way to answer this question is to run some experiments with different numbers of examples.

#### 3. Selecting examples

Assuming we are not adding our entire example dataset into each prompt, we need a way of selecting examples from our dataset based on a given input. We can do this:

- Randomly
- By (semantic or keyword-based) similarity of the inputs
- Based on some other constraints, like token size

LangChain has a number of [`ExampleSelectors`](/docs/concepts/#example-selectors) which make it easy to use any of these techniques. Generally, selecting by semantic similarity leads to the best model performance. But how much this matters is again model- and task-specific, and is something worth experimenting with.

#### 4. Formatting examples

Most state-of-the-art models these days are chat models, so we'll focus on formatting examples for those. Our basic options are to insert the examples:

- In the system prompt as a string
- As their own messages

If we insert our examples into the system prompt as a string, we'll need to make sure it's clear to the model where each example begins and which parts are the input versus output. Different models respond better to different syntaxes, like [ChatML](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chat-markup-language), XML, TypeScript, etc.

If we insert our examples as messages, where each example is represented as a sequence of Human and AI messages, we might also want to assign [names](/docs/concepts/#messages) to our messages, like `"example_user"` and `"example_assistant"`, to make it clear that these messages correspond to different actors than the latest input message.
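To make the second option concrete, here is a minimal sketch of inserting examples as their own, named messages; the sentiment-classification examples and system prompt are invented purely for illustration:

```python
# A minimal sketch of inserting few-shot examples as named messages.
# The classification examples below are invented for illustration.
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate

examples = [
    HumanMessage(content="I loved this movie!", name="example_user"),
    AIMessage(content="positive", name="example_assistant"),
    HumanMessage(content="The plot made no sense.", name="example_user"),
    AIMessage(content="negative", name="example_assistant"),
]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Classify the sentiment of the user's review as positive or negative."),
        *examples,              # few-shot examples inserted as their own messages
        ("human", "{review}"),  # the latest input message
    ]
)

# chain = prompt | chat_model  # pipe into any chat model that supports message names
```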
**Formatting tool call examples**

One area where formatting examples as messages can be tricky is when our example outputs have tool calls. This is because different models have different constraints on what types of message sequences are allowed when any tool calls are generated:

- Some models require that any AIMessage with tool calls be immediately followed by ToolMessages for every tool call.
- Some models additionally require that any ToolMessages be immediately followed by an AIMessage before the next HumanMessage.
- Some models require that tools are passed in to the model if there are any tool calls / ToolMessages in the chat history.

These requirements are model-specific and should be checked for the model you are using. If your model requires ToolMessages after tool calls and/or AIMessages after ToolMessages, and your examples only include expected tool calls and not the actual tool outputs, you can try adding dummy ToolMessages / AIMessages with generic contents to the end of each example to satisfy the API constraints. In these cases it's especially worth experimenting with inserting your examples as strings versus messages, as dummy messages can adversely affect certain models.

You can see a case study of how Anthropic and OpenAI respond to different few-shot prompting techniques on two different tool calling benchmarks [here](https://blog.langchain.dev/few-shot-prompting-to-improve-tool-calling-performance/).

### Retrieval

LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but it is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise). `Retrieval` is the process of providing relevant information to an LLM to improve its response for a given input. `Retrieval augmented generation` (`RAG`) ([paper](https://arxiv.org/abs/2005.11401)) is the process of grounding the LLM generation (output) using the retrieved information.

:::tip

* See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared).
* For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).

:::

RAG is only as good as the retrieved documents’ relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see the figure below) and will share some high-level strategic guidance in the following sections. You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app.

![](/img/rag_landscape.png)

#### Query Translation

First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries. **Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system. For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query (a short sketch of the latter follows at the end of this subsection).

| Name | When to use | Description |
|------|-------------|-------------|
| [Multi-query](/docs/how_to/MultiQueryRetriever/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. |
| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from the first + retrieval to answer the second) or in parallel (consolidate each answer into a final answer). |
| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. [Paper](https://arxiv.org/pdf/2310.06117). |
| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents, with the premise that doc-doc similarity search can produce more relevant matches. [Paper](https://arxiv.org/abs/2212.10496). |

:::tip

See our RAG from Scratch videos for a few different specific approaches:

- [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
- [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
- [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
- [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)

:::
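As a minimal sketch of the decomposition idea (the prompt wording and model choice are illustrative assumptions, not a prescribed API):

```python
# A minimal sketch of query decomposition: use an LLM to rewrite one raw user
# question into several standalone sub-questions before retrieval.
# The prompt wording and model choice are illustrative assumptions.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_template(
    "Break the following question into 2-4 standalone sub-questions, "
    "one per line:\n\n{question}"
)

decompose = (
    prompt
    | llm
    | StrOutputParser()
    | (lambda text: [line.strip() for line in text.splitlines() if line.strip()])
)

# sub_questions = decompose.invoke({"question": "How do I evaluate and deploy a RAG app?"})
# Each sub-question is then sent to the retriever, and the results are consolidated.
```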
#### Routing

Second, consider the data sources available to your RAG system. You may want to query across more than one database, or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.**

| Name | When to use | Description |
|------|-------------|-------------|
| [Logical routing](/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
| [Semantic routing](/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based on similarity. |

:::tip
| Name | Index Type | Uses an LLM | When to Use | Description |
|------|------------|-------------|-------------|-------------|
| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Re-ranking](/docs/integrations/retrievers/cohere-reranker/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |

:::tip

See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared) ([paper](https://arxiv.org/abs/2402.03367)), an approach for post-processing across multiple queries: rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of the multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).

:::

#### Generation

**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this idea to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.

We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):

- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above.
- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search if docs are not relevant to the query.
- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers with hallucinations, or answers that don't address the question.

![](/img/langgraph_rag.png)

| Name | When to use | Description |
|------|-------------|-------------|
| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval. |

:::tip

See several videos and cookbooks showcasing RAG with LangGraph:

- [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
- [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)

See our LangGraph RAG recipes with partners:

- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/3p_integrations/langchain)
- [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)

:::

### Text splitting

LangChain offers many different types of `text splitters`. These all live in the `langchain-text-splitters` package (a short usage sketch appears after the column list below).

Table columns:

- **Name**: Name of the text splitter
- **Classes**: Classes that implement this text splitter
- **Splits On**: How this text splitter splits text
- **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from.
- **Description**: Description of the splitter, including recommendation on when to use it.
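As a minimal usage sketch (the chunk sizes are arbitrary placeholders, not recommendations), the recursive character splitter is a common default:

```python
# A minimal sketch of using a splitter from the langchain-text-splitters package.
# Chunk sizes here are arbitrary placeholders; tune them for your documents.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # maximum characters per chunk
    chunk_overlap=50,  # characters shared between consecutive chunks
)

chunks = splitter.split_text("Some long document text ...")
# Or, to keep per-chunk metadata: splitter.split_documents(list_of_documents)
```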
# langchain

## 0.2.0

### Deleted

As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, llms, embedding models, vectorstores etc; instead, the user will be required to specify those explicitly.

The following functions and classes require an explicit LLM to be passed as an argument:

- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
- `langchain.chains.openai_functions.get_openapi_chain`
- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
- `langchain.indexes.VectorStoreIndexWrapper.query`
- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
- `langchain.chains.flare.FlareChain`

The following classes now require passing an explicit Embedding model as an argument:

- `langchain.indexes.VectorstoreIndexCreator`

The following code has been removed:

- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.

### Deprecated

We have two main types of deprecations:

1. Code that was moved from `langchain` into another package (e.g., `langchain-community`). If you try to import it from `langchain`, the import will keep working but will raise a deprecation warning. The warning will provide a replacement import statement.

```shell
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
```

```shell
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated.
Please replace deprecated imports:

>> from langchain.document_loaders import UnstructuredMarkdownLoader

with new imports of:

>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
```

We will continue supporting the imports in `langchain` until release 0.4, as long as the relevant package where the code lives is installed (e.g., as long as `langchain_community` is installed). However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we’re releasing a migration script via the LangChain CLI. See further instructions in the migration guide.

2. Code that has better alternatives available and will eventually be removed, so there’s only a single way to do things (e.g., the `predict_messages` method in ChatModels has been deprecated in favor of `invoke`). Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.

## 0.1.0 (Jan 5, 2024)

### Deleted

No deletions.
### Deprecated Deprecated classes and methods will be removed in 0.2.0 | Deprecated | Alternative | Reason | |---------------------------------|-----------------------------------|------------------------------------------------| | ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers | | create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood | | created_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood | | NatBotChain | | Not used | | create_openai_fn_chain | create_openai_fn_runnable | Use LCEL under the hood | | create_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood | | load_query_constructor_chain | load_query_constructor_runnable | Use LCEL under the hood | | VectorDBQA | RetrievalQA | More general to all retrievers | | Sequential Chain | LCEL | Obviated by LCEL | | SimpleSequentialChain | LCEL | Obviated by LCEL | | TransformChain | LCEL/RunnableLambda | Obviated by LCEL | | create_tagging_chain | create_structured_output_runnable | Use LCEL under the hood | | ChatAgent | create_react_agent | Use LCEL builder over a class | | ConversationalAgent | create_react_agent | Use LCEL builder over a class | | ConversationalChatAgent | create_json_chat_agent | Use LCEL builder over a class | | initialize_agent | Individual create agent methods | Individual create agent methods are more clear | | ZeroShotAgent | create_react_agent | Use LCEL builder over a class | | OpenAIFunctionsAgent | create_openai_functions_agent | Use LCEL builder over a class | | OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class | | SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class | | StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class | | XMLAgent | create_xml_agent | Use LCEL builder over a class |
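To make the "Obviated by LCEL" rows above concrete, here is a minimal sketch of composing two steps with LCEL in place of `SimpleSequentialChain`; the prompts and model choice are illustrative assumptions:

```python
# A minimal sketch of replacing SimpleSequentialChain with LCEL composition.
# The prompts and model choice here are illustrative assumptions.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

outline_prompt = ChatPromptTemplate.from_template("Write a short outline about {topic}.")
summary_prompt = ChatPromptTemplate.from_template("Summarize this outline:\n\n{outline}")

# Each step is prompt | llm | parser; a small lambda adapts the first step's
# output into the input expected by the second prompt.
chain = (
    outline_prompt
    | llm
    | StrOutputParser()
    | (lambda outline: {"outline": outline})
    | summary_prompt
    | llm
    | StrOutputParser()
)

# result = chain.invoke({"topic": "solar energy"})
```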
# langchain-core

## 0.1.x

#### Deprecated

- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
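As a small before/after sketch of this replacement (the `ChatOpenAI` model is only an illustrative stand-in for any chat model):

```python
# A minimal sketch of moving from the deprecated predict* methods to invoke/ainvoke.
# ChatOpenAI is only an illustrative stand-in; any chat model behaves the same way.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# Deprecated (removed in 0.2.0):
# text = model.predict("Tell me a joke")

# Preferred:
message = model.invoke("Tell me a joke")  # returns an AIMessage
text = message.content

# Async variant: message = await model.ainvoke("Tell me a joke")
```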
---
sidebar_position: 2
sidebar_label: Release policy
---

# LangChain release policy

The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages, etc.)

## Versioning

### `langchain`, `langchain-core`, and integration packages

`langchain`, `langchain-core`, `langchain-text-splitters`, and integration packages (`langchain-openai`, `langchain-anthropic`, etc.) follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, so we are currently versioning them with a major version of 0.

Minor version increases will occur for:

- Breaking changes for any public interfaces *not* marked as `beta`.

Patch version increases will occur for:

- Bug fixes,
- New features,
- Any changes to private interfaces,
- Any changes to `beta` features.

When upgrading between minor versions, users should review the list of breaking changes and deprecations.

From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**rc**N**. For example, 0.2.0rc1. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., 0.2.0rc2).

### `langchain-community`

`langchain-community` is currently on version `0.2.x`.

Minor version increases will occur for:

- Updates to the major/minor versions of required `langchain-x` dependencies. E.g., when updating the required version of `langchain-core` from `^0.2.x` to `0.3.0`.

Patch version increases will occur for:

- Bug fixes,
- New features,
- Any changes to private interfaces,
- Any changes to `beta` features,
- Breaking changes to integrations to reflect breaking changes in the third-party service.

Whenever possible we will avoid making breaking changes in patch versions. However, if an external API makes a breaking change, then breaking changes to the corresponding `langchain-community` integration can occur in a patch version.

### `langchain-experimental`

`langchain-experimental` is currently on version `0.0.x`. All changes will be accompanied by patch version increases.

## Release cadence

We expect to space out **minor** releases (e.g., from 0.2.x to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes. Patch versions are released frequently, up to a few times per week, as they contain bug fixes and new features.

## API stability

The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users. Even though both `langchain` and `langchain-core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.

- Breaking changes to the public API will result in a minor version bump (the second digit)
- Any bug fixes or new features will result in a patch version bump (the third digit)

We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.
### Stability of other packages

The stability of other packages in the LangChain ecosystem may vary:

- `langchain-community` is a community-maintained package that contains 3rd party integrations. While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions.
- Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.

### What is "API stability"?

API stability means:

- All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
- If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
- If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called.

### APIs marked as internal

Certain APIs are explicitly marked as “internal” in a couple of ways:

- Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
- Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it’s an internal API.
- **Exception:** Certain methods are prefixed with `_` but do not contain an implementation. These methods are *meant* to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.

## Deprecation policy

We will generally avoid deprecating features until a better alternative is available.

When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `langchain-core`. After that, the feature will be removed.

Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.

In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.
# LangChain v0.3 *Last updated: 09.16.24* ## What's changed * All packages have been upgraded from Pydantic 1 to Pydantic 2 internally. Use of Pydantic 2 in user code is fully supported with all packages without the need for bridges like `langchain_core.pydantic_v1` or `pydantic.v1`. * Pydantic 1 will no longer be supported as it reached its end-of-life in June 2024. * Python 3.8 will no longer be supported as its end-of-life is October 2024. **These are the only breaking changes.** ## What’s new The following features have been added during the development of 0.2.x: - Moved more integrations from `langchain-community` to their own `langchain-x` packages. This is a non-breaking change, as the legacy implementations are left in `langchain-community` and marked as deprecated. This allows us to better manage the dependencies of, test, and version these integrations. You can see all the latest integration packages in the [API reference](https://python.langchain.com/v0.2/api_reference/reference.html#integrations). - Simplified tool definition and usage. Read more [here](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/). - Added utilities for interacting with chat models: [universal model constructor](https://python.langchain.com/v0.2/docs/how_to/chat_models_universal_init/), [rate limiter](https://python.langchain.com/v0.2/docs/how_to/chat_model_rate_limiting/), [message utilities](https://python.langchain.com/v0.2/docs/how_to/#messages), - Added the ability to [dispatch custom events](https://python.langchain.com/v0.2/docs/how_to/callbacks_custom_events/). - Revamped integration docs and API reference. Read more [here](https://blog.langchain.dev/langchain-integration-docs-revamped/). - Marked as deprecated a number of legacy chains and added migration guides for all of them. These are slated for removal in `langchain` 1.0.0. See the deprecated chains and associated [migration guides here](https://python.langchain.com/v0.2/docs/versions/migrating_chains/). ## How to update your code If you're using `langchain` / `langchain-community` / `langchain-core` 0.0 or 0.1, we recommend that you first [upgrade to 0.2](https://python.langchain.com/v0.2/docs/versions/v0_2/). If you're using `langgraph`, upgrade to `langgraph>=0.2.20,<0.3`. This will work with either 0.2 or 0.3 versions of all the base packages. Here is a complete list of all packages that have been released and what we recommend upgrading your version constraints to. Any package that now requires `langchain-core` 0.3 had a minor version bump. Any package that is now compatible with both `langchain-core` 0.2 and 0.3 had a patch version bump. You can use the `langchain-cli` to update deprecated imports automatically. The CLI will handle updating deprecated imports that were introduced in LangChain 0.0.x and LangChain 0.1, as well as updating the `langchain_core.pydantic_v1` and `langchain.pydantic_v1` imports.
### Base packages | Package | Latest | Recommended constraint | |--------------------------|--------|------------------------| | langchain | 0.3.0 | >=0.3,&lt;0.4 | | langchain-community | 0.3.0 | >=0.3,&lt;0.4 | | langchain-text-splitters | 0.3.0 | >=0.3,&lt;0.4 | | langchain-core | 0.3.0 | >=0.3,&lt;0.4 | | langchain-experimental | 0.3.0 | >=0.3,&lt;0.4 | ### Downstream packages | Package | Latest | Recommended constraint | |-----------|--------|------------------------| | langgraph | 0.2.20 | >=0.2.20,&lt;0.3 | | langserve | 0.3.0 | >=0.3,&lt;0.4 | ### Integration packages | Package | Latest | Recommended constraint | | -------------------------------------- | ------- | -------------------------- | | langchain-ai21 | 0.2.0 | >=0.2,&lt;0.3 | | langchain-aws | 0.2.0 | >=0.2,&lt;0.3 | | langchain-anthropic | 0.2.0 | >=0.2,&lt;0.3 | | langchain-astradb | 0.4.1 | >=0.4.1,&lt;0.5 | | langchain-azure-dynamic-sessions | 0.2.0 | >=0.2,&lt;0.3 | | langchain-box | 0.2.0 | >=0.2,&lt;0.3 | | langchain-chroma | 0.1.4 | >=0.1.4,&lt;0.2 | | langchain-cohere | 0.3.0 | >=0.3,&lt;0.4 | | langchain-elasticsearch | 0.3.0 | >=0.3,&lt;0.4 | | langchain-exa | 0.2.0 | >=0.2,&lt;0.3 | | langchain-fireworks | 0.2.0 | >=0.2,&lt;0.3 | | langchain-groq | 0.2.0 | >=0.2,&lt;0.3 | | langchain-google-community | 2.0.0 | >=2,&lt;3 | | langchain-google-genai | 2.0.0 | >=2,&lt;3 | | langchain-google-vertexai | 2.0.0 | >=2,&lt;3 | | langchain-huggingface | 0.1.0 | >=0.1,&lt;0.2 | | langchain-ibm | 0.2.0 | >=0.2,&lt;0.3 | | langchain-milvus | 0.1.6 | >=0.1.6,&lt;0.2 | | langchain-mistralai | 0.2.0 | >=0.2,&lt;0.3 | | langchain-mongodb | 0.2.0 | >=0.2,&lt;0.3 | | langchain-nomic | 0.1.3 | >=0.1.3,&lt;0.2 | | langchain-ollama | 0.2.0 | >=0.2,&lt;0.3 | | langchain-openai | 0.2.0 | >=0.2,&lt;0.3 | | langchain-pinecone | 0.2.0 | >=0.2,&lt;0.3 | | langchain-postgres | 0.0.13 | >=0.0.13,&lt;0.1 | | langchain-prompty | 0.1.0 | >=0.1,&lt;0.2 | | langchain-qdrant | 0.1.4 | >=0.1.4,&lt;0.2 | | langchain-redis | 0.1.0 | >=0.1,&lt;0.2 | | langchain-sema4 | 0.2.0 | >=0.2,&lt;0.3 | | langchain-together | 0.2.0 | >=0.2,&lt;0.3 | | langchain-unstructured | 0.1.4 | >=0.1.4,&lt;0.2 | | langchain-upstage | 0.3.0 | >=0.3,&lt;0.4 | | langchain-voyageai | 0.2.0 | >=0.2,&lt;0.3 | | langchain-weaviate | 0.0.3 | >=0.0.3,&lt;0.1 | Once you've updated to recent versions of the packages, you may need to address the following issues stemming from the internal switch from Pydantic v1 to Pydantic v2: - If your code depends on Pydantic aside from LangChain, you will need to upgrade your pydantic version constraints to be `pydantic>=2,<3`. See [Pydantic’s migration guide](https://docs.pydantic.dev/latest/migration/) for help migrating your non-LangChain code to Pydantic v2 if you use pydantic v1. - There are a number of side effects to LangChain components caused by the internal switch from Pydantic v1 to v2. We have listed some of the common cases below together with the recommended solutions. ## Common issues when transitioning to Pydantic 2 ### 1. Do not use the `langchain_core.pydantic_v1` namespace Replace any usage of `langchain_core.pydantic_v1` or `langchain.pydantic_v1` with direct imports from `pydantic`. For example, ```python from langchain_core.pydantic_v1 import BaseModel ``` to: ```python from pydantic import BaseModel ``` This may require you to make additional updates to your Pydantic code given that there are a number of breaking changes in Pydantic 2. 
See the [Pydantic Migration](https://docs.pydantic.dev/latest/migration/) guide for how to upgrade your code from Pydantic 1 to 2.

### 2. Passing Pydantic objects to LangChain APIs

Users using the following APIs:

* `BaseChatModel.bind_tools`
* `BaseChatModel.with_structured_output`
* `Tool.from_function`
* `StructuredTool.from_function`

should ensure that they are passing Pydantic 2 objects to these APIs rather than Pydantic 1 objects (created via the `pydantic.v1` namespace of pydantic 2).

:::caution
While `v1` objects may be accepted by some of these APIs, users are advised to use Pydantic 2 objects to avoid future issues.
:::

### 3. Sub-classing LangChain models

Any sub-classing from existing LangChain models (e.g., `BaseTool`, `BaseChatModel`, `LLM`) should upgrade to use Pydantic 2 features. For example, any user code that's relying on Pydantic 1 features (e.g., `validator`) should be updated to the Pydantic 2 equivalent (e.g., `field_validator`), and any references to `pydantic.v1`, `langchain_core.pydantic_v1`, `langchain.pydantic_v1` should be replaced with imports from `pydantic`.

```python
from langchain_core.tools import BaseTool
from pydantic.v1 import validator, Field  # if pydantic 2 is installed
# from pydantic import validator, Field  # if pydantic 1 is installed
# from langchain_core.pydantic_v1 import validator, Field
# from langchain.pydantic_v1 import validator, Field


class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @validator('x')  # v1 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1
```

Should change to:

```python
from langchain_core.tools import BaseTool
from pydantic import Field, field_validator  # pydantic v2


class CustomTool(BaseTool):  # BaseTool is now Pydantic 2 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @field_validator('x')  # v2 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1

CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```

### 4. model_rebuild()

When sub-classing from LangChain models, users may need to add relevant imports to the file and rebuild the model. You can read more about `model_rebuild` [here](https://docs.pydantic.dev/latest/concepts/models/#rebuilding-model-schema).

```python
from langchain_core.output_parsers import BaseOutputParser


class FooParser(BaseOutputParser):
    ...
```

New code:

```python
from typing import Optional as Optional

from langchain_core.output_parsers import BaseOutputParser


class FooParser(BaseOutputParser):
    ...


FooParser.model_rebuild()
```

## Migrate using langchain-cli

The `langchain-cli` can help update deprecated LangChain imports in your code automatically. Please note that the `langchain-cli` only handles deprecated LangChain imports and cannot help to upgrade your code from pydantic 1 to pydantic 2. For help with the Pydantic 1 to 2 migration itself, please refer to the [Pydantic Migration Guidelines](https://docs.pydantic.dev/latest/migration/).

As of 0.0.31, the `langchain-cli` relies on [gritql](https://about.grit.io/) for applying code mods.

### Installation

```bash
pip install -U langchain-cli
langchain-cli --version # <-- Make sure the version is at least 0.0.31
```

### Usage

Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).

The `langchain-cli` will handle the `langchain_core.pydantic_v1` deprecation introduced in LangChain 0.3 as well as older deprecations (e.g., `from langchain.chat_models import ChatOpenAI`, which should be `from langchain_openai import ChatOpenAI`).

You will need to run the migration script **twice** as it only applies one import replacement per run.

For example, say that your code is still using the old import `from langchain.chat_models import ChatOpenAI`:

After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI`

After the second run, you’ll get: `from langchain_openai import ChatOpenAI`

```bash
# Run a first time
# Will replace from langchain.chat_models import ChatOpenAI
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply

# Run a second time to apply more import replacements
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply
```

### Other options

```bash
# See help menu
langchain-cli migrate --help
# Preview Changes without applying
langchain-cli migrate --diff [path to code]
# Approve changes interactively
langchain-cli migrate --interactive [path to code]
```
---
sidebar_position: 0
---

# Overview

## What’s new in LangChain?

The following features have been added during the development of 0.1.x:

- Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events).
- [Standardized tool calling support](https://blog.langchain.dev/tool-calling-with-langchain/)
- A standardized interface for [structuring output](https://github.com/langchain-ai/langchain/discussions/18154)
- [@chain decorator](https://python.langchain.com/docs/expression_language/how_to/decorator/) to more easily create **RunnableLambdas**
- https://python.langchain.com/docs/expression_language/how_to/inspect/
- In Python, better async support for many core abstractions (thank you [@cbornet](https://github.com/cbornet)!!)
- Include response metadata in `AIMessage` to make it easy to access raw output from the underlying models
- Tooling to visualize [your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/) or [your langgraph app](https://github.com/langchain-ai/langgraph/blob/main/examples/visualization.ipynb)
- Interoperability of chat message histories across most providers
- [Over 20 partner packages in Python](https://python.langchain.com/docs/integrations/providers/) for popular integrations

## What’s coming to LangChain?

- We’ve been working hard on [langgraph](https://langchain-ai.github.io/langgraph/). We will be building more capabilities on top of it and focusing on making it the go-to framework for agent architectures.
- Vectorstores V2! We’ll be revisiting our vectorstores abstractions to help improve usability and reliability.
- Better documentation and versioned docs!
- We’re planning a breaking release (0.3.0) sometime between July-September to [upgrade to full support of Pydantic 2](https://github.com/langchain-ai/langchain/discussions/19339), and will drop support for Pydantic 1 (including objects originating from the `v1` namespace of Pydantic 2).

## What changed?

The field has been evolving rapidly, and LangChain has evolved rapidly with it. This document serves to outline at a high level what has changed and why.

### TLDR

**As of 0.2.0:**

- This release completes the work that we started with release 0.1.0 by removing the dependency of `langchain` on `langchain-community`.
- The `langchain` package no longer requires `langchain-community`. Instead, `langchain-community` will now depend on `langchain-core` and `langchain`.
- User code that still relies on deprecated imports from `langchain` will continue to work as long as `langchain_community` is installed. These imports will start raising errors in release 0.4.x.

**As of 0.1.0:**

- `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of langchain code in production settings. You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/).

### Ecosystem organization

By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community. To improve the usability of LangChain in production, we split the single `langchain` package into multiple packages. This allowed us to create a good foundation architecture for the LangChain ecosystem and improve the usability of `langchain` in production.
Here is the high-level breakdown of the ecosystem:

- **langchain-core**: contains core abstractions involving LangChain Runnables, tooling for observability, and base implementations of important abstractions (e.g., Chat Models).
- **langchain:** contains generic code that is built using interfaces defined in `langchain-core`. This package is for code that generalizes well across different implementations of specific interfaces. For example, `create_tool_calling_agent` works across chat models that support [tool calling capabilities](https://blog.langchain.dev/tool-calling-with-langchain/).
- **langchain-community**: community-maintained 3rd party integrations. Contains integrations based on interfaces defined in **langchain-core**. Maintained by the LangChain community.
- **Partner Packages (e.g., langchain-[partner])**: Partner packages are packages dedicated to especially popular integrations (e.g., `langchain-openai`, `langchain-anthropic`, etc.). The dedicated packages generally benefit from better reliability and support.
- `langgraph`: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- `langserve`: Deploy LangChain chains as REST APIs.

In the 0.1.0 release, `langchain-community` was retained as a required dependency of `langchain`. This allowed imports of vectorstores, chat models, and other integrations to continue working through `langchain` rather than forcing users to update all of their imports to `langchain-community`.

For the 0.2.0 release, we’re removing the dependency of `langchain` on `langchain-community`. This is something we’ve been planning to do since the 0.1 release because we believe this is the right package architecture. Old imports will continue to work as long as `langchain-community` is installed. These imports will be removed in the 0.4.0 release.

To understand why we think breaking the dependency of `langchain` on `langchain-community` is best, we should understand what each package is meant to do.

`langchain` is meant to contain high-level chains and agent architectures. The logic in these should be specified at the level of abstractions like `ChatModel` and `Retriever`, and should not be specific to any one integration. This has two main benefits:

1. `langchain` is fairly lightweight. Here is the full list of required dependencies (after the split):

```toml
python = ">=3.8.1,<4.0"
langchain-core = "^0.2.0"
langchain-text-splitters = ">=0.0.1,<0.1"
langsmith = "^0.1.17"
pydantic = ">=1,<3"
SQLAlchemy = ">=1.4,<3"
requests = "^2"
PyYAML = ">=5.3"
numpy = "^1"
aiohttp = "^3.8.3"
tenacity = "^8.1.0"
jsonpatch = "^1.33"
```

2. `langchain` chains/agents are largely integration-agnostic, which makes it easy to experiment with different integrations and future-proofs your code should there be issues with one specific integration.

There is also a third, less tangible benefit, which is that being integration-agnostic forces us to find only those very generic abstractions and architectures which generalize well across integrations. Given how general the abilities of the foundational tech are, and how quickly the space is moving, having generic architectures is a good way of future-proofing your applications.

`langchain-community` is intended to have all integration-specific components that are not yet being maintained in separate `langchain-{partner}` packages. Today this is still the majority of integrations and a lot of code.
This code is primarily contributed by the community, while `langchain` is largely written by core maintainers. All of these integrations use optional dependencies and conditional imports, which prevents dependency bloat and conflicts but means compatible dependency versions are not made explicit. Given the volume of integrations in `langchain-community` and the speed at which integrations change, it’s very hard to follow semver versioning, and we currently don’t.

All of which is to say that there is no large benefit to `langchain` depending on `langchain-community`, and there are some obvious downsides: the functionality in `langchain` should be integration-agnostic anyway, `langchain-community` can’t be properly versioned, and depending on `langchain-community` increases the [vulnerability surface](https://github.com/langchain-ai/langchain/discussions/19083) of `langchain`.

For more context about the reasons for this organization, please see our blog: https://blog.langchain.dev/langchain-v0-1-0/
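As a rough sketch of what specifying logic "at the level of abstractions like `ChatModel` and `Retriever`" can look like in practice (the prompt wording and helper function are illustrative assumptions, not a LangChain API):

```python
# A minimal sketch of an integration-agnostic chain: it is written against the
# BaseChatModel / BaseRetriever abstractions, so callers can pass in any
# concrete integration. The prompt wording and helper are illustrative.
from langchain_core.language_models import BaseChatModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.retrievers import BaseRetriever
from langchain_core.runnables import RunnablePassthrough


def build_qa_chain(llm: BaseChatModel, retriever: BaseRetriever):
    prompt = ChatPromptTemplate.from_template(
        "Answer using only this context:\n\n{context}\n\nQuestion: {question}"
    )

    def format_docs(docs):
        return "\n\n".join(doc.page_content for doc in docs)

    return (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )

# Any integration can be swapped in by the caller, e.g.:
# chain = build_qa_chain(ChatOpenAI(...), my_vectorstore.as_retriever())
```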
---
sidebar_position: 1
---

# Migration

LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/docs/versions/v0_2/deprecations). This document contains a guide on upgrading to 0.2.x.

:::note Reference

- [Breaking Changes & Deprecations](/docs/versions/v0_2/deprecations)
- [Migrating legacy chains to LCEL](/docs/versions/migrating_chains)
- [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events)

:::

This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:

1. Install the 0.2.x versions of `langchain-core` and `langchain`, and upgrade to recent versions of other packages that you may be using (e.g., langgraph, langchain-community, langchain-openai, etc.).
2. Verify that your code runs properly with the new packages (e.g., unit tests pass).
3. Install a recent version of `langchain-cli`, and use the tool to replace old imports used by your code with the new imports. (See instructions below.)
4. Manually resolve any remaining deprecation warnings.
5. Re-run unit tests.
6. If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events).

## Upgrade to new imports

We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly.

The migration script has the following limitations:

1. It’s limited to helping users move from old imports to new imports. It does not help address other deprecations.
2. It can’t handle imports that involve `as`.
3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body).
4. It will likely miss some deprecated imports.

Here is an example of the import changes that the migration script can help apply automatically:

| From Package | To Package | Deprecated Import | New Import |
|--------------|------------|-------------------|------------|
| langchain | langchain-community | from langchain.vectorstores import InMemoryVectorStore | from langchain_community.vectorstores import InMemoryVectorStore |
| langchain-community | langchain_openai | from langchain_community.chat_models import ChatOpenAI | from langchain_openai import ChatOpenAI |
| langchain-community | langchain-core | from langchain_community.document_loaders import Blob | from langchain_core.document_loaders import Blob |
| langchain | langchain-core | from langchain.schema.document import Document | from langchain_core.documents import Document |
| langchain | langchain-text-splitters | from langchain.text_splitter import RecursiveCharacterTextSplitter | from langchain_text_splitters import RecursiveCharacterTextSplitter |

## Installation

```bash
pip install langchain-cli
langchain-cli --version # <-- Make sure the version is at least 0.0.22
```

## Usage

Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).

You will need to run the migration script **twice** as it only applies one import replacement per run.
For example, say your code still uses `from langchain.chat_models import ChatOpenAI`:

After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI`

After the second run, you’ll get: `from langchain_openai import ChatOpenAI`

```bash
# Run a first time
# Will replace from langchain.chat_models import ChatOpenAI
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply

# Run a second time to apply more import replacements
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply
```

### Other options

```bash
# See help menu
langchain-cli migrate --help
# Preview Changes without applying
langchain-cli migrate --diff [path to code]
# Run on code including ipython notebooks
# Apply all import updates except for updates from langchain to langchain-core
langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code]
```
---
sidebar_position: 3
sidebar_label: Changes
keywords: [retrievalqa, llmchain, conversationalretrievalchain]
---

# Deprecations and Breaking Changes

This page contains a list of deprecations and removals in the `langchain` and `langchain-core` packages. New features and improvements are not listed here. See the [overview](/docs/versions/v0_2/overview/) for a summary of what's new in this release.

## Breaking changes

As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, llms, embedding models, vectorstores etc; instead, the user will be required to specify those explicitly.

The following functions and classes require an explicit LLM to be passed as an argument:

- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
- `langchain.chains.openai_functions.get_openapi_chain`
- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
- `langchain.indexes.VectorStoreIndexWrapper.query`
- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
- `langchain.chains.flare.FlareChain`

The following classes now require passing an explicit Embedding model as an argument:

- `langchain.indexes.VectorstoreIndexCreator`

The following code has been removed:

- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.

Behavior was changed for the following code:

### @tool decorator

The `@tool` decorator now assigns the function doc-string as the tool description. Previously, the `@tool` decorator used to prepend the function signature.

Before 0.2.0:

```python
from langchain_core.tools import tool


@tool
def my_tool(x: str) -> str:
    """Some description."""
    return "something"


print(my_tool.description)
```

Would result in: `my_tool: (x: str) -> str - Some description.`

As of 0.2.0:

It will result in: `Some description.`

## Code that moved to another package

Code that was moved from `langchain` into another package (e.g., `langchain-community`) will keep working if you import it from `langchain`, but the import will raise a deprecation warning. The warning will provide a replacement import statement.

```shell
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
```

```shell
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated.
Please replace deprecated imports:

>> from langchain.document_loaders import UnstructuredMarkdownLoader

with new imports of:

>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
```

We will continue supporting the imports in `langchain` until release 0.4, as long as the relevant package where the code lives is installed (e.g., as long as `langchain_community` is installed). However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we’re releasing a migration script via the LangChain CLI. See further instructions in the migration guide.

## Code targeted for removal

Code that has better alternatives available will eventually be removed, so that there’s only a single way to do things (e.g., the `predict_messages` method in ChatModels has been deprecated in favor of `invoke`).
### astream events V1 If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events). ### langchain_core #### try_load_from_hub In module: `utils.loading` Deprecated: 0.1.30 Removal: 0.3.0 Alternative: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use https://smith.langchain.com/hub instead. #### BaseLanguageModel.predict In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLanguageModel.predict_messages In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLanguageModel.apredict In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLanguageModel.apredict_messages In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### RunTypeEnum In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Use string instead. #### TracerSessionV1Base In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionV1Create In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionV1 In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSessionBase In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### TracerSession In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: #### BaseRun In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### LLMRun In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### ChainRun In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### ToolRun In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0 Alternative: Run #### BaseChatModel.__call__ In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.call_as_llm In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.predict In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.predict_messages In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseChatModel.apredict In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseChatModel.apredict_messages In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLLM.__call__ In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.predict In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.predict_messages In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: invoke #### BaseLLM.apredict In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseLLM.apredict_messages In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0 Alternative: ainvoke #### BaseRetriever.get_relevant_documents In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0 Alternative: invoke #### BaseRetriever.aget_relevant_documents In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0 Alternative: ainvoke #### ChatPromptTemplate.from_role_strings In module: `prompts.chat` Deprecated: 0.0.1 Removal: Alternative: from_messages classmethod #### 
ChatPromptTemplate.from_strings In module: `prompts.chat` Deprecated: 0.0.1 Removal: Alternative: from_messages classmethod #### BaseTool.__call__
{ "cells": [ { "cell_type": "markdown", "id": "ed78c53c-55ad-4ea2-9cc2-a39a1963c098", "metadata": {}, "source": [ "# Migrating from StuffDocumentsChain\n", "\n", "[StuffDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.StuffDocumentsChain.html) combines documents by concatenating them into a single context window. It is a straightforward and effective strategy for combining documents for question-answering, summarization, and other purposes.\n", "\n", "[create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) is the recommended alternative. It functions the same as `StuffDocumentsChain`, with better support for streaming and batch functionality. Because it is a simple combination of [LCEL primitives](/docs/concepts/#langchain-expression-language-lcel), it is also easier to extend and incorporate into other LangChain applications.\n", "\n", "Below we will go through both `StuffDocumentsChain` and `create_stuff_documents_chain` on a simple example for illustrative purposes.\n", "\n", "Let's first load a chat model:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />" ] }, { "cell_type": "code", "execution_count": 1, "id": "dac0bef2-9453-46f2-a893-f7569b6a0170", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)" ] }, { "cell_type": "markdown", "id": "d4022d03-7b5e-4c81-98ff-5b82a2a4eaae", "metadata": {}, "source": [ "## Example\n", "\n", "Let's go through an example where we analyze a set of documents. We first generate some simple documents for illustrative purposes:" ] }, { "cell_type": "code", "execution_count": 2, "id": "24fa0ba9-e245-47d1-bc2e-6286dd884117", "metadata": {}, "outputs": [], "source": [ "from langchain_core.documents import Document\n", "\n", "documents = [\n", " Document(page_content=\"Apples are red\", metadata={\"title\": \"apple_book\"}),\n", " Document(page_content=\"Blueberries are blue\", metadata={\"title\": \"blueberry_book\"}),\n", " Document(page_content=\"Bananas are yelow\", metadata={\"title\": \"banana_book\"}),\n", "]" ] }, { "cell_type": "markdown", "id": "3a769128-205f-417d-a25d-519e7cb03be7", "metadata": {}, "source": [ "### Legacy\n", "\n", "<details open>\n", "\n", "Below we show an implementation with `StuffDocumentsChain`. We define the prompt template for a summarization task and instantiate a [LLMChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.llm.LLMChain.html) object for this purpose. We define how documents are formatted into the prompt and ensure consistency among the keys in the various prompts." ] }, { "cell_type": "code", "execution_count": 15, "id": "9734c0f3-64e7-4ae6-8578-df03b3dabb26", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import LLMChain, StuffDocumentsChain\n", "from langchain_core.prompts import ChatPromptTemplate, PromptTemplate\n", "\n", "# This controls how each document will be formatted. 
Specifically,\n", "# it will be passed to `format_document` - see that function for more\n", "# details.\n", "document_prompt = PromptTemplate(\n", " input_variables=[\"page_content\"], template=\"{page_content}\"\n", ")\n", "document_variable_name = \"context\"\n", "# The prompt here should take as an input variable the\n", "# `document_variable_name`\n", "prompt = ChatPromptTemplate.from_template(\"Summarize this content: {context}\")\n", "\n", "llm_chain = LLMChain(llm=llm, prompt=prompt)\n", "chain = StuffDocumentsChain(\n", " llm_chain=llm_chain,\n", " document_prompt=document_prompt,\n", " document_variable_name=document_variable_name,\n", ")" ] }, { "cell_type": "markdown", "id": "0cb733bf-eb71-4fae-a8f4-d522924020cb", "metadata": {}, "source": [ "We can now invoke our chain:" ] }, { "cell_type": "code", "execution_count": 19, "id": "d7d1ce10-bbee-4cb0-879d-7de4f69191c4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result = chain.invoke(documents)\n", "result[\"output_text\"]" ] }, { "cell_type": "code", "execution_count": 20, "id": "79b10d40-1521-433b-9026-6ec836ffeeb3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_documents': [Document(metadata={'title': 'apple_book'}, page_content='Apples are red'), Document(metadata={'title': 'blueberry_book'}, page_content='Blueberries are blue'), Document(metadata={'title': 'banana_book'}, page_content='Bananas are yelow')], 'output_text': 'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'}\n" ] } ], "source": [ "for chunk in chain.stream(documents):\n", " print(chunk)" ] }, { "cell_type": "markdown", "id": "b4cb6a5b-37ea-48cc-a096-b948d3ff7e9f", "metadata": {}, "source": [ "</details>\n", "\n", "### LCEL\n", "\n", "<details open>\n", "\n", "Below we show an implementation using `create_stuff_documents_chain`:" ] }, { "cell_type": "code", "execution_count": 21, "id": "de38f27a-c648-44be-8c37-0a458c2920a9", "metadata": {}, "outputs": [], "source": [ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "prompt = ChatPromptTemplate.from_template(\"Summarize this content: {context}\")\n", "chain = create_stuff_documents_chain(llm, prompt)" ] }, { "cell_type": "markdown", "id": "9d0e6996-9bf8-4097-9c1a-1c539eac3ed1", "metadata": {}, "source": [ "Invoking the chain, we obtain a similar result as before:" ] }, { "cell_type": "code", "execution_count": 24, "id": "f2d2bdfb-3a6a-464b-b4c2-e4252b2e53a0", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'This content describes the colors of different fruits: apples are red, blueberries are blue, and bananas are yellow.'" ] }, "execution_count": 24, "metadata": {},
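The notebook above is cut off, but one advantage it calls out is streaming support. A minimal sketch of what that looks like, assuming the `chain` built with `create_stuff_documents_chain` and the `documents` list from the cells above:

```python
# Stream the summary token-by-token instead of waiting for the full string.
for token in chain.stream({"context": documents}):
    print(token, end="|")
```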
149716
{ "cells": [ { "cell_type": "markdown", "id": "2c7bdc91-9b89-4e59-bc27-89508b024635", "metadata": {}, "source": [ "# Migrating from MapReduceDocumentsChain\n", "\n", "[MapReduceDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html) implements a map-reduce strategy over (potentially long) texts. The strategy is as follows:\n", "\n", "- Split a text into smaller documents;\n", "- Map a process onto the smaller documents;\n", "- Reduce or consolidate the results of the process into a final result.\n", "\n", "Note that the map step is typically parallelized over the input documents.\n", "\n", "A common process applied in this context is summarization, in which the map step summarizes individual documents, and the reduce step generates a summary of the summaries.\n", "\n", "In the reduce step, `MapReduceDocumentsChain` supports a recursive \"collapsing\" of the summaries: the inputs would be partitioned based on a token limit, and summaries would be generated of the partitions. This step would be repeated until the total length of the summaries was within a desired limit, allowing for the summarization of arbitrary-length text. This is particularly useful for models with smaller context windows.\n", "\n", "LangGraph suports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows, and confers a number of advantages for this problem:\n", "\n", "- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n", "- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n", "- The LangGraph implementation is easier to extend, as we will see below.\n", "\n", "Below we will go through both `MapReduceDocumentsChain` and a corresponding LangGraph implementation, first on a simple example for illustrative purposes, and second on a longer example text to demonstrate the recursive reduce step.\n", "\n", "Let's first load a chat model:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />" ] }, { "cell_type": "code", "execution_count": 1, "id": "0bdf886b-aeeb-407e-81b8-28bad59ad57a", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)" ] }, { "cell_type": "markdown", "id": "41cfb569-f7e6-48cb-a90e-45a482009971", "metadata": {}, "source": [ "## Basic example (short documents)\n", "\n", "Let's use the following 3 documents for illustrative purposes." ] }, { "cell_type": "code", "execution_count": 2, "id": "b221a71f-982b-4c08-8597-96c890e00965", "metadata": {}, "outputs": [], "source": [ "from langchain_core.documents import Document\n", "\n", "documents = [\n", " Document(page_content=\"Apples are red\", metadata={\"title\": \"apple_book\"}),\n", " Document(page_content=\"Blueberries are blue\", metadata={\"title\": \"blueberry_book\"}),\n", " Document(page_content=\"Bananas are yelow\", metadata={\"title\": \"banana_book\"}),\n", "]" ] }, { "cell_type": "markdown", "id": "c717514a-1b6d-4a0f-9093-ef594b9a0b17", "metadata": {}, "source": [ "### Legacy\n", "\n", "<details open>\n", " \n", "Below we show an implementation with `MapReduceDocumentsChain`. 
We define the prompt templates for the map and reduce steps, instantiate separate chains for these steps, and finally instantiate the `MapReduceDocumentsChain`:" ] }, { "cell_type": "code", "execution_count": 4, "id": "84ee3851-b4a9-4fbe-a78f-d05168715b91", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import MapReduceDocumentsChain, ReduceDocumentsChain\n", "from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n", "from langchain.chains.llm import LLMChain\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "# Map\n", "map_template = \"Write a concise summary of the following: {docs}.\"\n", "map_prompt = ChatPromptTemplate([(\"human\", map_template)])\n", "map_chain = LLMChain(llm=llm, prompt=map_prompt)\n", "\n", "\n", "# Reduce\n", "reduce_template = \"\"\"\n", "The following is a set of summaries:\n", "{docs}\n", "Take these and distill it into a final, consolidated summary\n", "of the main themes.\n", "\"\"\"\n", "reduce_prompt = ChatPromptTemplate([(\"human\", reduce_template)])\n", "reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)\n", "\n", "\n", "# Takes a list of documents, combines them into a single string, and passes this to an LLMChain\n", "combine_documents_chain = StuffDocumentsChain(\n", " llm_chain=reduce_chain, document_variable_name=\"docs\"\n", ")\n", "\n", "# Combines and iteratively reduces the mapped documents\n", "reduce_documents_chain = ReduceDocumentsChain(\n", " # This is final chain that is called.\n", " combine_documents_chain=combine_documents_chain,\n", " # If documents exceed context for `StuffDocumentsChain`\n", " collapse_documents_chain=combine_documents_chain,\n", " # The maximum number of tokens to group documents into.\n", " token_max=1000,\n", ")\n", "\n", "# Combining documents by mapping a chain over them, then combining results\n", "map_reduce_chain = MapReduceDocumentsChain(\n", " # Map chain\n", " llm_chain=map_chain,\n", " # Reduce chain\n", " reduce_documents_chain=reduce_documents_chain,\n", " # The variable name in the llm_chain to put the documents in\n", " document_variable_name=\"docs\",\n", " # Return the results of the map steps in the output\n", " return_intermediate_steps=False,\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "id": "4f57ed52-08a5-49f6-ab19-1be51a853a2f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fruits come in a variety of colors, with apples being red, blueberries being blue, and bananas being yellow.\n" ] } ], "source": [ "result = map_reduce_chain.invoke(documents)\n", "\n", "print(result[\"output_text\"])" ] }, { "cell_type": "markdown", "id": "46d29559-5948-4ce9-b7c5-fa6729cf2485", "metadata": {}, "source": [ "In the [LangSmith trace](https://smith.langchain.com/public/8d88a2c0-5d26-41f6-9176-d06549b17aa6/r) we observe four LLM calls: one summarizing each of the three input documents, and one summarizing the summaries." ] }, { "cell_type": "markdown", "id": "b5399533-8662-4fad-b885-e3df3d809c44", "metadata": {},
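The recursive "collapse" step described at the top of this guide can be easier to grasp as plain Python. The sketch below is illustrative only: the helper names `count_tokens`, `summarize`, and `collapse` are not part of the legacy chain's API, and it assumes the `llm` object from the cells above.

```python
def count_tokens(text: str) -> int:
    # Use the model's own token counter.
    return llm.get_num_tokens(text)


def summarize(text: str) -> str:
    return llm.invoke(f"Write a concise summary of the following: {text}").content


def collapse(summaries: list[str], token_max: int = 1000) -> str:
    # Repeatedly partition the summaries into groups that fit the token budget
    # and summarize each group, until everything fits in a single prompt.
    while count_tokens("\n".join(summaries)) > token_max:
        groups, current = [], []
        for s in summaries:
            if current and count_tokens("\n".join(current + [s])) > token_max:
                groups.append(current)
                current = []
            current.append(s)
        groups.append(current)
        summaries = [summarize("\n".join(g)) for g in groups]
    # Final consolidated summary of the (now short) list of summaries.
    return summarize("\n".join(summaries))
```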
149720
{ "cells": [ { "cell_type": "markdown", "id": "9db5ad7a-857e-46ea-9d0c-ba3fbe62fc81", "metadata": {}, "source": [ "# Migrating from MapRerankDocumentsChain\n", "\n", "[MapRerankDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain.html) implements a strategy for analyzing long texts. The strategy is as follows:\n", "\n", "- Split a text into smaller documents;\n", "- Map a process to the set of documents, where the process includes generating a score;\n", "- Rank the results by score and return the maximum.\n", "\n", "A common process in this scenario is question-answering using pieces of context from a document. Forcing the model to generate score along with its answer helps to select for answers generated only by relevant context.\n", "\n", "An [LangGraph](https://langchain-ai.github.io/langgraph/) implementation allows for the incorporation of [tool calling](/docs/concepts/#functiontool-calling) and other features for this problem. Below we will go through both `MapRerankDocumentsChain` and a corresponding LangGraph implementation on a simple example for illustrative purposes." ] }, { "cell_type": "markdown", "id": "39f11f9f-ac24-485e-bc15-285bebb9c12e", "metadata": {}, "source": [ "## Example\n", "\n", "Let's go through an example where we analyze a set of documents. Let's use the following 3 documents:" ] }, { "cell_type": "code", "execution_count": 2, "id": "ef975d40-6ea3-4280-84cb-fae4c285c72b", "metadata": {}, "outputs": [], "source": [ "from langchain_core.documents import Document\n", "\n", "documents = [\n", " Document(page_content=\"Alice has blue eyes\", metadata={\"title\": \"book_chapter_2\"}),\n", " Document(page_content=\"Bob has brown eyes\", metadata={\"title\": \"book_chapter_1\"}),\n", " Document(\n", " page_content=\"Charlie has green eyes\", metadata={\"title\": \"book_chapter_3\"}\n", " ),\n", "]" ] }, { "cell_type": "markdown", "id": "e3b99cfc-b99c-4da8-9c87-903e0249d227", "metadata": {}, "source": [ "### Legacy\n", "\n", "<details open>\n", "\n", "Below we show an implementation with `MapRerankDocumentsChain`. We define the prompt template for a question-answering task and instantiate a [LLMChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.llm.LLMChain.html) object for this purpose. We define how documents are formatted into the prompt and ensure consistency among the keys in the various prompts." ] }, { "cell_type": "code", "execution_count": 4, "id": "3b65e056-d739-4985-8bfc-0edf783f2b16", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import LLMChain, MapRerankDocumentsChain\n", "from langchain.output_parsers.regex import RegexParser\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import OpenAI\n", "\n", "document_variable_name = \"context\"\n", "llm = OpenAI()\n", "# The prompt here should take as an input variable the\n", "# `document_variable_name`\n", "# The actual prompt will need to be a lot more complex, this is just\n", "# an example.\n", "prompt_template = (\n", " \"What color are Bob's eyes? 
\"\n", " \"Output both your answer and a score (1-10) of how confident \"\n", " \"you are in the format: <Answer>\\nScore: <Score>.\\n\\n\"\n", " \"Provide no other commentary.\\n\\n\"\n", " \"Context: {context}\"\n", ")\n", "output_parser = RegexParser(\n", " regex=r\"(.*?)\\nScore: (.*)\",\n", " output_keys=[\"answer\", \"score\"],\n", ")\n", "prompt = PromptTemplate(\n", " template=prompt_template,\n", " input_variables=[\"context\"],\n", " output_parser=output_parser,\n", ")\n", "llm_chain = LLMChain(llm=llm, prompt=prompt)\n", "chain = MapRerankDocumentsChain(\n", " llm_chain=llm_chain,\n", " document_variable_name=document_variable_name,\n", " rank_key=\"score\",\n", " answer_key=\"answer\",\n", ")" ] }, { "cell_type": "code", "execution_count": 9, "id": "fe94c2e5-4c56-4604-a16c-055c196f4a57", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/langchain/libs/langchain/langchain/chains/llm.py:369: UserWarning: The apply_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n", " warnings.warn(\n" ] }, { "data": { "text/plain": [ "'Brown'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "response = chain.invoke(documents)\n", "response[\"output_text\"]" ] }, { "cell_type": "markdown", "id": "317e51c2-810f-463b-9da2-604fe95a8b48", "metadata": {}, "source": [ "Inspecting the [LangSmith trace](https://smith.langchain.com/public/7a071bd1-0283-4b90-898c-6e4a2b5a0593/r) for the above run, we can see three LLM calls-- one for each document-- and that the scoring mechanism mitigated against hallucinations.\n", "\n", "</details>\n", "\n", "### LangGraph\n", "\n", "<details open>\n", "\n", "Below we show a LangGraph implementation of this process. Note that our template is simplified, as we delegate the formatting instructions to the chat model's tool-calling features via the [.with_structured_output](/docs/how_to/structured_output/) method.\n", "\n", "Here we follow a basic [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflow to execute the LLM calls in parallel.\n", "\n", "We will need to install `langgraph`:" ] }, { "cell_type": "code", "execution_count": null, "id": "b8fab4f6-eed1-4662-8d3f-82846a2edfb3", "metadata": {}, "outputs": [], "source": [ "pip install -qU langgraph" ] }, { "cell_type": "code", "execution_count": 6, "id": "b8493533-7ab3-4f75-aab1-390340bff2ea", "metadata": {}, "outputs": [], "source": [ "import operator\n", "from typing import Annotated, List, TypedDict\n", "\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_openai import ChatOpenAI\n", "from langgraph.constants import Send\n", "from langgraph.graph import END, START, StateGraph\n", "\n", "\n", "class AnswerWithScore(TypedDict):\n", " answer: str\n", " score: Annotated[int, ..., \"Score from 1-10.\"]\n", "\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n", "\n",
149722
{ "cells": [ { "cell_type": "markdown", "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f", "metadata": {}, "source": [ "# Migrating from LLMChain\n", "\n", "[`LLMChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n", "\n", "Some advantages of switching to the LCEL implementation are:\n", "\n", "- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n", "- Easier streaming. `LLMChain` only supports streaming via callbacks.\n", "- Easier access to raw message outputs if desired. `LLMChain` only exposes these via a parameter or via callback." ] }, { "cell_type": "code", "execution_count": null, "id": "b99b47ec", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-openai" ] }, { "cell_type": "code", "execution_count": 1, "id": "717c8673", "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "markdown", "id": "e3621b62-a037-42b8-8faa-59575608bb8b", "metadata": {}, "source": [ "## Legacy\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 5, "id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'adjective': 'funny',\n", " 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains import LLMChain\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_openai import ChatOpenAI\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [(\"user\", \"Tell me a {adjective} joke\")],\n", ")\n", "\n", "legacy_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n", "\n", "legacy_result = legacy_chain({\"adjective\": \"funny\"})\n", "legacy_result" ] }, { "cell_type": "markdown", "id": "9f89e97b", "metadata": {}, "source": [ "Note that `LLMChain` by default returned a `dict` containing both the input and the output from `StrOutputParser`, so to extract the output, you need to access the `\"text\"` key." 
] }, { "cell_type": "code", "execution_count": 6, "id": "c7fa1618", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "legacy_result[\"text\"]" ] }, { "cell_type": "markdown", "id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95", "metadata": {}, "source": [ "</details>\n", "\n", "## LCEL\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 3, "id": "f0903025-9aa8-4a53-8336-074341c00e59", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Why was the math book sad?\\n\\nBecause it had too many problems.'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_openai import ChatOpenAI\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [(\"user\", \"Tell me a {adjective} joke\")],\n", ")\n", "\n", "chain = prompt | ChatOpenAI() | StrOutputParser()\n", "\n", "chain.invoke({\"adjective\": \"funny\"})" ] }, { "cell_type": "markdown", "id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f", "metadata": {}, "source": [ "If you'd like to mimic the `dict` packaging of input and output in `LLMChain`, you can use a [`RunnablePassthrough.assign`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) like:" ] }, { "cell_type": "code", "execution_count": 4, "id": "20f11321-834a-485a-a8ad-85734d572902", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'adjective': 'funny',\n", " 'text': 'Why did the scarecrow win an award? Because he was outstanding in his field!'}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnablePassthrough\n", "\n", "outer_chain = RunnablePassthrough().assign(text=chain)\n", "\n", "outer_chain.invoke({\"adjective\": \"funny\"})" ] }, { "cell_type": "markdown", "id": "b2717810", "metadata": {}, "source": [ "</details>\n", "\n", "## Next steps\n", "\n", "See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n", "\n", "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }
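As a small follow-up on the streaming advantage mentioned at the top of this guide, here is a minimal sketch using the LCEL `chain` defined above (`prompt | ChatOpenAI() | StrOutputParser()`); tokens arrive incrementally rather than only via callbacks:

```python
# Stream the joke token-by-token.
for chunk in chain.stream({"adjective": "funny"}):
    print(chunk, end="", flush=True)
```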
149729
{ "cells": [ { "cell_type": "markdown", "id": "d20aeaad-b3ca-4a7d-b02d-3267503965af", "metadata": {}, "source": [ "# Migrating from ConversationalChain\n", "\n", "[`ConversationChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.conversation.base.ConversationChain.html) incorporated a memory of previous messages to sustain a stateful conversation.\n", "\n", "Some advantages of switching to the Langgraph implementation are:\n", "\n", "- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n", "- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n", "- Streaming support. `ConversationChain` only supports streaming via callbacks.\n", "\n", "Langgraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) system supports multiple threads or sessions, which can be specified via the `\"thread_id\"` key in its configuration parameters." ] }, { "cell_type": "code", "execution_count": null, "id": "b99b47ec", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai" ] }, { "cell_type": "code", "execution_count": 2, "id": "717c8673", "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "markdown", "id": "00df631d-5121-4918-94aa-b88acce9b769", "metadata": {}, "source": [ "## Legacy\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 2, "id": "4f2cc6dc-d70a-4c13-9258-452f14290da6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': \"I'm Bob, how are you?\",\n", " 'history': '',\n", " 'response': \"Arrr matey, I be a pirate sailin' the high seas. What be yer business with me?\"}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains import ConversationChain\n", "from langchain.memory import ConversationBufferMemory\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_openai import ChatOpenAI\n", "\n", "template = \"\"\"\n", "You are a pirate. Answer the following questions as best you can.\n", "Chat history: {history}\n", "Question: {input}\n", "\"\"\"\n", "\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "memory = ConversationBufferMemory()\n", "\n", "chain = ConversationChain(\n", " llm=ChatOpenAI(),\n", " memory=memory,\n", " prompt=prompt,\n", ")\n", "\n", "chain({\"input\": \"I'm Bob, how are you?\"})" ] }, { "cell_type": "code", "execution_count": 3, "id": "53f2c723-178f-470a-8147-54e7cb982211", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'What is my name?',\n", " 'history': \"Human: I'm Bob, how are you?\\nAI: Arrr matey, I be a pirate sailin' the high seas. 
What be yer business with me?\",\n", " 'response': 'Your name be Bob, matey.'}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain({\"input\": \"What is my name?\"})" ] }, { "cell_type": "markdown", "id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f", "metadata": {}, "source": [ "</details>\n", "\n", "## Langgraph\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 4, "id": "a59b910c-0d02-41aa-bc99-441f11989cf8", "metadata": {}, "outputs": [], "source": [ "import uuid\n", "\n", "from langchain_openai import ChatOpenAI\n", "from langgraph.checkpoint.memory import MemorySaver\n", "from langgraph.graph import START, MessagesState, StateGraph\n", "\n", "model = ChatOpenAI(model=\"gpt-4o-mini\")\n", "\n", "# Define a new graph\n", "workflow = StateGraph(state_schema=MessagesState)\n", "\n", "\n", "# Define the function that calls the model\n", "def call_model(state: MessagesState):\n", " response = model.invoke(state[\"messages\"])\n", " return {\"messages\": response}\n", "\n", "\n", "# Define the two nodes we will cycle between\n", "workflow.add_edge(START, \"model\")\n", "workflow.add_node(\"model\", call_model)\n", "\n", "# Add memory\n", "memory = MemorySaver()\n", "app = workflow.compile(checkpointer=memory)\n", "\n", "\n", "# The thread id is a unique key that identifies\n", "# this particular conversation.\n", "# We'll just generate a random uuid here.\n", "thread_id = uuid.uuid4()\n", "config = {\"configurable\": {\"thread_id\": thread_id}}" ] }, { "cell_type": "code", "execution_count": 5, "id": "3a9df4bb-e804-4373-9a15-a29dc0371595", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "I'm Bob, how are you?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "Ahoy, Bob! I be feelin' as lively as a ship in full sail! How be ye on this fine day?\n" ] } ], "source": [ "query = \"I'm Bob, how are you?\"\n", "\n", "input_messages = [\n", " {\n", " \"role\": \"system\",\n", " \"content\": \"You are a pirate. Answer the following questions as best you can.\",\n", " },\n", " {\"role\": \"user\", \"content\": query},\n", "]\n", "for event in app.stream({\"messages\": input_messages}, config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()" ] }, { "cell_type": "code", "execution_count": 6, "id": "d3f77e69-fa3d-496c-968c-86371e1e8cf1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "What is my name?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n",
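The notebook is cut off above, but one advantage called out at the top is innate support for separate sessions. A minimal sketch, assuming the compiled `app` from the cells above: a fresh `thread_id` starts a new conversation with no memory of Bob.

```python
import uuid

# A new thread id means a new, empty conversation history.
new_config = {"configurable": {"thread_id": uuid.uuid4()}}

input_messages = [{"role": "user", "content": "What is my name?"}]
for event in app.stream({"messages": input_messages}, new_config, stream_mode="values"):
    event["messages"][-1].pretty_print()
# In this new thread the model has no prior messages, so it cannot know the name.
```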
149731
{ "cells": [ { "cell_type": "markdown", "id": "292a3c83-44a9-4426-bbec-f1a778d00d93", "metadata": {}, "source": [ "# Migrating from ConversationalRetrievalChain\n", "\n", "The [`ConversationalRetrievalChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n", "\n", "Advantages of switching to the LCEL implementation are similar to the [`RetrievalQA` migration guide](./retrieval_qa.ipynb):\n", "\n", "- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n", " - This means the class contains two sets of configurable prompts, LLMs, etc.\n", "- More easily return source documents.\n", "- Support for runnable methods like streaming and async operations.\n", "\n", "Here are equivalent implementations with custom prompts.\n", "We'll use the following ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:\n", "\n", "## Shared setup\n", "\n", "For both versions, we'll need to load the data with the `WebBaseLoader` document loader, split it with `RecursiveCharacterTextSplitter`, and add it to an in-memory `FAISS` vector store.\n", "\n", "We will also instantiate a chat model to use." ] }, { "cell_type": "code", "execution_count": null, "id": "b99b47ec", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu beautifulsoup4" ] }, { "cell_type": "code", "execution_count": 2, "id": "717c8673", "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "44119498-5a98-4077-9e2f-c75500e7eace", "metadata": {}, "outputs": [], "source": [ "# Load docs\n", "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai.chat_models import ChatOpenAI\n", "from langchain_openai.embeddings import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n", "data = loader.load()\n", "\n", "# Split\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n", "all_splits = text_splitter.split_documents(data)\n", "\n", "# Store splits\n", "vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n", "\n", "# LLM\n", "llm = ChatOpenAI()" ] }, { "cell_type": "markdown", "id": "8bc06416", "metadata": {}, "source": [ "## Legacy\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 5, "id": "8b471e7d-3ccb-4ab3-bc09-304c4b14a908", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'question': 'What are autonomous agents?',\n", " 'chat_history': '',\n", " 'answer': 'Autonomous agents are entities empowered with capabilities like planning, task decomposition, and memory to perform complex tasks independently. 
These agents can leverage tools like browsing the internet, reading documentation, executing code, and calling APIs to achieve their objectives. They are designed to handle tasks like scientific discovery and experimentation autonomously.'}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains import ConversationalRetrievalChain\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "condense_question_template = \"\"\"\n", "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n", "\n", "Chat History:\n", "{chat_history}\n", "Follow Up Input: {question}\n", "Standalone question:\"\"\"\n", "\n", "condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n", "\n", "qa_template = \"\"\"\n", "You are an assistant for question-answering tasks.\n", "Use the following pieces of retrieved context to answer\n", "the question. If you don't know the answer, say that you\n", "don't know. Use three sentences maximum and keep the\n", "answer concise.\n", "\n", "Chat History:\n", "{chat_history}\n", "\n", "Other context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "\n", "qa_prompt = ChatPromptTemplate.from_template(qa_template)\n", "\n", "convo_qa_chain = ConversationalRetrievalChain.from_llm(\n", " llm,\n", " vectorstore.as_retriever(),\n", " condense_question_prompt=condense_question_prompt,\n", " combine_docs_chain_kwargs={\n", " \"prompt\": qa_prompt,\n", " },\n", ")\n", "\n", "convo_qa_chain(\n", " {\n", " \"question\": \"What are autonomous agents?\",\n", " \"chat_history\": \"\",\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "43a8a23c", "metadata": {}, "source": [ "</details>\n", "\n", "## LCEL\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 7, "id": "35657a13-ad67-4af1-b1f9-f58606ae43b4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'What are autonomous agents?',\n", " 'chat_history': [],\n", " 'context': [Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:'),\n",
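The notebook is cut off above, but the shape of the output (`input`, `chat_history`, `context`, ...) comes from an LCEL-style retrieval chain. A minimal sketch of how such a chain can be assembled, assuming the `llm` and `vectorstore` from the shared setup; the prompts here are simplified stand-ins (note that `create_retrieval_chain` expects an `input` key rather than `question`):

```python
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Rephrase the follow-up question against the chat history.
condense_prompt = ChatPromptTemplate.from_template(
    "Given the chat history:\n{chat_history}\n\n"
    "Rephrase the follow up question as a standalone question: {input}"
)
# Answer using the retrieved context.
qa_prompt = ChatPromptTemplate.from_template(
    "Answer concisely using the context below.\n\nContext:\n{context}\n\nQuestion: {input}"
)

history_aware_retriever = create_history_aware_retriever(
    llm, vectorstore.as_retriever(), condense_prompt
)
combine_docs_chain = create_stuff_documents_chain(llm, qa_prompt)
convo_qa_chain = create_retrieval_chain(history_aware_retriever, combine_docs_chain)

convo_qa_chain.invoke({"input": "What are autonomous agents?", "chat_history": []})
```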
149735
{ "cells": [ { "cell_type": "raw", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to migrate from v0.0 chains\n", "\n", "LangChain has evolved since its initial release, and many of the original \"Chain\" classes \n", "have been deprecated in favor of the more flexible and powerful frameworks of LCEL and LangGraph. \n", "\n", "This guide will help you migrate your existing v0.0 chains to the new abstractions.\n", "\n", ":::info How deprecated implementations work\n", "Even though many of these implementations are deprecated, they are **still supported** in the codebase. \n", "However, they are not recommended for new development, and we recommend re-implementing them using the following guides!\n", "\n", "To see the planned removal version for each deprecated implementation, check their API reference.\n", ":::\n", "\n", ":::info Prerequisites\n", "\n", "These guides assume some familiarity with the following concepts:\n", "- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n", "- [LangGraph](https://langchain-ai.github.io/langgraph/)\n", ":::\n", "\n", "LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.\n", "\n", "### LCEL\n", "[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n", "\n", "1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n", "2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n", "\n", "### LangGraph\n", "[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of LCEL, allows for performant orchestrations of application components while maintaining concise and readable code. 
It includes built-in persistence, support for cycles, and prioritizes controllability.\n", "If LCEL grows unwieldy for larger or more complex chains, they may benefit from a LangGraph implementation.\n", "\n", "### Advantages\n", "Using these frameworks for existing v0.0 chains confers some advantages:\n", "\n", "- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n", "- The chains may be more easily extended or modified;\n", "- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n", "- If using LangGraph, the chain supports built-in persistence, allowing for conversational experiences via a \"memory\" of the chat history.\n", "- If using LangGraph, the steps of the chain can be streamed, allowing for greater control and customizability.\n", "\n", "\n", "The below pages assist with migration from various specific chains to LCEL and LangGraph:\n", "\n", "- [LLMChain](./llm_chain.ipynb)\n", "- [ConversationChain](./conversation_chain.ipynb)\n", "- [RetrievalQA](./retrieval_qa.ipynb)\n", "- [ConversationalRetrievalChain](./conversation_retrieval_chain.ipynb)\n", "- [StuffDocumentsChain](./stuff_docs_chain.ipynb)\n", "- [MapReduceDocumentsChain](./map_reduce_chain.ipynb)\n", "- [MapRerankDocumentsChain](./map_rerank_docs_chain.ipynb)\n", "- [RefineDocumentsChain](./refine_docs_chain.ipynb)\n", "- [LLMRouterChain](./llm_router_chain.ipynb)\n", "- [MultiPromptChain](./multi_prompt_chain.ipynb)\n", "- [LLMMathChain](./llm_math_chain.ipynb)\n", "- [ConstitutionalChain](./constitutional_chain.ipynb)\n", "\n", "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information." ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }
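To make the "unified interface" point above concrete, here is a minimal sketch (using `langchain-openai` as an example provider): a chain composed of LCEL objects is itself a `Runnable`, so `invoke`, `batch`, and `stream` all work on the composition.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Composing Runnables with `|` yields another Runnable.
chain = (
    ChatPromptTemplate.from_template("Tell me a fact about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

print(chain.invoke({"topic": "otters"}))
print(chain.batch([{"topic": "otters"}, {"topic": "owls"}]))
```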
149747
---
sidebar_position: 1
---

# How to migrate to LangGraph memory

As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into their LangChain application.

* Users that rely on `RunnableWithMessageHistory` or `BaseChatMessageHistory` do **not** need to make any changes, but are encouraged to consider using LangGraph for more complex use cases.
* Users that rely on deprecated memory abstractions from LangChain 0.0.x should follow this guide to upgrade to the new LangGraph persistence feature in LangChain 0.3.x.

## Why use LangGraph for memory?

The main advantages of persistence in LangGraph are:

- Built-in support for multiple users and conversations, which is a typical requirement for real-world conversational AI applications.
- Ability to save and resume complex conversations at any point. This helps with:
  - Error recovery
  - Allowing human intervention in AI workflows
  - Exploring different conversation paths ("time travel")
- Full compatibility with both traditional [language models](/docs/concepts/#llms) and modern [chat models](/docs/concepts/#chat-models). Early memory implementations in LangChain weren't designed for newer chat model APIs, causing issues with features like tool-calling. LangGraph memory can persist any custom state.
- Highly customizable, allowing you to fully control how memory works and use different storage backends.

## Evolution of memory in LangChain

The concept of memory has evolved significantly in LangChain since its initial release.

### LangChain 0.0.x memory

Broadly speaking, LangChain 0.0.x memory was used to handle three main use cases:

| Use Case | Example |
|----------|---------|
| Managing conversation history | Keep only the last `n` turns of the conversation between the user and the AI. |
| Extraction of structured information | Extract structured information from the conversation history, such as a list of facts learned about the user. |
| Composite memory implementations | Combine multiple memory sources, e.g., a list of known facts about the user along with facts learned during a given conversation. |

While the LangChain 0.0.x memory abstractions were useful, they were limited in their capabilities and not well suited for real-world conversational AI applications. These memory abstractions lacked built-in support for multi-user, multi-conversation scenarios, which are essential for practical conversational AI systems.

Most of these implementations have been officially deprecated in LangChain 0.3.x in favor of LangGraph persistence.

### RunnableWithMessageHistory and BaseChatMessageHistory

:::note
Please see [How to use BaseChatMessageHistory with LangGraph](./chat_history), if you would like to use `BaseChatMessageHistory` (with or without `RunnableWithMessageHistory`) in LangGraph.
:::

As of LangChain v0.1, we started recommending that users rely primarily on [BaseChatMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html#langchain_core.runnables.history.RunnableWithMessageHistory). `BaseChatMessageHistory` serves as a simple persistence for storing and retrieving messages in a conversation.
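For reference, a minimal sketch of that interface using the in-memory implementation shipped in `langchain_core` (the conversation content below is illustrative):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

# Store a short conversation.
history = InMemoryChatMessageHistory()
history.add_message(HumanMessage(content="hi! I'm Bob"))
history.add_message(AIMessage(content="Hello Bob! How can I help you today?"))

# Retrieve the stored messages.
print(history.messages)
```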
At that time, the only option for orchestrating LangChain chains was via [LCEL](https://python.langchain.com/docs/how_to/#langchain-expression-language-lcel). To incorporate memory with `LCEL`, users had to use the [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html#langchain_core.runnables.history.RunnableWithMessageHistory) interface. While sufficient for basic chat applications, many users found the API unintuitive and challenging to use.

As of LangChain v0.3, we recommend that **new** code takes advantage of LangGraph for both orchestration and persistence:

- Orchestration: In LangGraph, users define [graphs](https://langchain-ai.github.io/langgraph/concepts/low_level/) that specify the flow of the application. This allows users to keep using `LCEL` within individual nodes when `LCEL` is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
- Persistence: Users can rely on LangGraph's [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to store and retrieve data. LangGraph persistence is extremely flexible and can support a much wider range of use cases than the `RunnableWithMessageHistory` interface.

:::important
If you have been using `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do not need to make any changes. We do not plan on deprecating either functionality in the near future. This functionality is sufficient for simple chat applications and any code that uses `RunnableWithMessageHistory` will continue to work as expected.
:::

## Migrations

:::info Prerequisites

These guides assume some familiarity with the following concepts:
- [LangGraph](https://langchain-ai.github.io/langgraph/)
- [v0.0.x Memory](https://python.langchain.com/v0.1/docs/modules/memory/)
- [How to add persistence ("memory") to your graph](https://langchain-ai.github.io/langgraph/how-tos/persistence/)
:::

### 1. Managing conversation history

The goal of managing conversation history is to store and retrieve the history in a way that is optimal for a chat model to use. Often this involves trimming and / or summarizing the conversation history to keep the most relevant parts of the conversation while having the conversation fit inside the context window of the chat model.

Memory classes that fall into this category include:

| Memory Type | How to Migrate | Description |
|-------------|----------------|-------------|
| `ConversationBufferMemory` | [Link to Migration Guide](conversation_buffer_memory) | A basic memory implementation that simply stores the conversation history. |
| `ConversationStringBufferMemory` | [Link to Migration Guide](conversation_buffer_memory) | A special case of `ConversationBufferMemory` designed for LLMs and no longer relevant. |
| `ConversationBufferWindowMemory` | [Link to Migration Guide](conversation_buffer_window_memory) | Keeps the last `n` turns of the conversation. Drops the oldest turn when the buffer is full. |
| `ConversationTokenBufferMemory` | [Link to Migration Guide](conversation_buffer_window_memory) | Keeps only the most recent messages in the conversation under the constraint that the total number of tokens in the conversation does not exceed a certain limit. |
| `ConversationSummaryMemory` | [Link to Migration Guide](conversation_summary_memory) | Continually summarizes the conversation history. The summary is updated after each conversation turn. The abstraction returns the summary of the conversation history. |
| `ConversationSummaryBufferMemory` | [Link to Migration Guide](conversation_summary_memory) | Provides a running summary of the conversation together with the most recent messages in the conversation under the constraint that the total number of tokens in the conversation does not exceed a certain limit. |
| `VectorStoreRetrieverMemory` | See related [long-term memory agent tutorial](long_term_memory_agent) | Stores the conversation history in a vector store and retrieves the most relevant parts of past conversation based on the input. |
149749
{ "cells": [ { "cell_type": "markdown", "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f", "metadata": {}, "source": [ "# Migrating off ConversationBufferWindowMemory or ConversationTokenBufferMemory\n", "\n", "Follow this guide if you're trying to migrate off one of the old memory classes listed below:\n", "\n", "\n", "| Memory Type | Description |\n", "|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n", "| `ConversationBufferWindowMemory` | Keeps the last `n` messages of the conversation. Drops the oldest messages when there are more than `n` messages. |\n", "| `ConversationTokenBufferMemory` | Keeps only the most recent messages in the conversation under the constraint that the total number of tokens in the conversation does not exceed a certain limit. |\n", "\n", "`ConversationBufferWindowMemory` and `ConversationTokenBufferMemory` apply additional processing on top of the raw conversation history to trim the conversation history to a size that fits inside the context window of a chat model. \n", "\n", "This processing functionality can be accomplished using LangChain's built-in [trim_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html) function." ] }, { "cell_type": "markdown", "id": "79935247-acc7-4a05-a387-5d72c9c8c8cb", "metadata": {}, "source": [ ":::important\n", "\n", "We’ll begin by exploring a straightforward method that involves applying processing logic to the entire conversation history.\n", "\n", "While this approach is easy to implement, it has a downside: as the conversation grows, so does the latency, since the logic is re-applied to all previous exchanges in the conversation at each turn.\n", "\n", "More advanced strategies focus on incrementally updating the conversation history to avoid redundant processing.\n", "\n", "For instance, the langgraph [how-to guide on summarization](https://langchain-ai.github.io/langgraph/how-tos/memory/add-summary-conversation-history/) demonstrates\n", "how to maintain a running summary of the conversation while discarding older messages, ensuring they aren't re-processed during later turns.\n", ":::" ] }, { "cell_type": "markdown", "id": "d07f9459-9fb6-4942-99c9-64558aedd7d4", "metadata": {}, "source": [ "## Set up" ] }, { "cell_type": "code", "execution_count": 1, "id": "b99b47ec", "metadata": {}, "outputs": [], "source": [ "%%capture --no-stderr\n", "%pip install --upgrade --quiet langchain-openai langchain" ] }, { "cell_type": "code", "execution_count": 1, "id": "7127478f-4413-48be-bfec-d0cd91b8cf70", "metadata": {}, "outputs": [], "source": [ "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "markdown", "id": "d6a7bc93-21a9-44c8-842e-9cc82f1ada7c", "metadata": {}, "source": [ "## Legacy usage with LLMChain / Conversation Chain\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 3, "id": "371616e1-ca41-4a57-99e0-5fbf7d63f2ad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'text': 'Nice to meet you, Bob! How can I assist you today?', 'chat_history': []}\n", "{'text': 'Your name is Bob. 
How can I assist you further, Bob?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Nice to meet you, Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}\n" ] } ], "source": [ "from langchain.chains import LLMChain\n", "from langchain.memory import ConversationBufferWindowMemory\n", "from langchain_core.messages import SystemMessage\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.prompts.chat import (\n", " ChatPromptTemplate,\n", " HumanMessagePromptTemplate,\n", " MessagesPlaceholder,\n", ")\n", "from langchain_openai import ChatOpenAI\n", "\n", "prompt = ChatPromptTemplate(\n", " [\n", " SystemMessage(content=\"You are a helpful assistant.\"),\n", " MessagesPlaceholder(variable_name=\"chat_history\"),\n", " HumanMessagePromptTemplate.from_template(\"{text}\"),\n", " ]\n", ")\n", "\n", "# highlight-start\n", "memory = ConversationBufferWindowMemory(memory_key=\"chat_history\", return_messages=True)\n", "# highlight-end\n", "\n", "legacy_chain = LLMChain(\n", " llm=ChatOpenAI(),\n", " prompt=prompt,\n", " # highlight-next-line\n", " memory=memory,\n", ")\n", "\n", "legacy_result = legacy_chain.invoke({\"text\": \"my name is bob\"})\n", "print(legacy_result)\n", "\n", "legacy_result = legacy_chain.invoke({\"text\": \"what was my name\"})\n", "print(legacy_result)" ] }, { "cell_type": "markdown", "id": "f48cac47-c8b6-444c-8e1b-f7115c0b2d8d", "metadata": {}, "source": [ "</details>\n", "\n", "## Reimplementing ConversationBufferWindowMemory logic\n", "\n", "Let's first create appropriate logic to process the conversation history, and then we'll see how to integrate it into an application. You can later replace this basic setup with more advanced logic tailored to your specific needs.\n", "\n", "We'll use `trim_messages` to implement logic that keeps the last `n` messages of the conversation. It will drop the oldest messages when the number of messages exceeds `n`.\n", "\n", "In addition, we will also keep the system message if it's present -- when present, it's the first message in a conversation that includes instructions for the chat model." ] }, { "cell_type": "code", "execution_count": 4, "id": "0a92b3f3-0315-46ac-bb28-d07398dd23ea", "metadata": {}, "outputs": [], "source": [ "from langchain_core.messages import (\n", " AIMessage,\n", " BaseMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " trim_messages,\n", ")\n", "from langchain_openai import ChatOpenAI\n", "\n", "messages = [\n", " SystemMessage(\"you're a good assistant, you always respond with a joke.\"),\n", " HumanMessage(\"i wonder why it's called langchain\"),\n", " AIMessage(\n", " 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n", " ),\n", " HumanMessage(\"and who is harrison chasing anyways\"),\n", " AIMessage(\n", " \"Hmmm let me think.\\n\\nWhy, he's probably chasing after the last cup of coffee in the office!\"\n", " ),\n", " HumanMessage(\"why is 42 always the answer?\"),\n", " AIMessage(\n", " \"Because it’s the only number that’s constantly right, even when it doesn’t add up!\"\n", " ),\n", " HumanMessage(\"What did the cow say?\"),\n", "]" ] }, { "cell_type": "code",
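The cell above is cut off before the trimming logic itself, so here is a minimal sketch of the kind of `trim_messages` call this guide builds toward, applied to the `messages` list defined above (the parameter values are illustrative and can be tuned):

```python
selected_messages = trim_messages(
    messages,
    token_counter=len,  # count each message as 1 instead of counting tokens
    max_tokens=5,       # keep at most 5 messages, newest first
    strategy="last",
    # Most chat models expect the history to start with a HumanMessage
    # (optionally preceded by a SystemMessage).
    start_on="human",
    include_system=True,   # keep the SystemMessage if present
    allow_partial=False,
)

for msg in selected_messages:
    print(type(msg).__name__, ":", msg.content)
```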
149751
" include_system=True,\n", " allow_partial=False,\n", " )\n", "\n", " # highlight-end\n", " response = model.invoke(selected_messages)\n", " # We return a list, because this will get added to the existing list\n", " return {\"messages\": response}\n", "\n", "\n", "# Define the two nodes we will cycle between\n", "workflow.add_edge(START, \"model\")\n", "workflow.add_node(\"model\", call_model)\n", "\n", "\n", "# Adding memory is straight forward in langgraph!\n", "# highlight-next-line\n", "memory = MemorySaver()\n", "\n", "app = workflow.compile(\n", " # highlight-next-line\n", " checkpointer=memory\n", ")\n", "\n", "\n", "# The thread id is a unique key that identifies\n", "# this particular conversation.\n", "# We'll just generate a random uuid here.\n", "thread_id = uuid.uuid4()\n", "# highlight-next-line\n", "config = {\"configurable\": {\"thread_id\": thread_id}}\n", "\n", "input_message = HumanMessage(content=\"hi! I'm bob\")\n", "for event in app.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()\n", "\n", "# Here, let's confirm that the AI remembers our name!\n", "config = {\"configurable\": {\"thread_id\": thread_id}}\n", "input_message = HumanMessage(content=\"what was my name?\")\n", "for event in app.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()" ] }, { "cell_type": "markdown", "id": "84229e2e-a578-4b21-840a-814223406402", "metadata": {}, "source": [ "</details>\n", "\n", "## Usage with a pre-built langgraph agent\n", "\n", "This example shows usage of an Agent Executor with a pre-built agent constructed using the [create_tool_calling_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) function.\n", "\n", "If you are using one of the [old LangChain pre-built agents](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/), you should be able\n", "to replace that code with the new [langgraph pre-built agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/) which leverages\n", "native tool calling capabilities of chat models and will likely work better out of the box.\n", "\n", "<details open>" ] }, { "cell_type": "code", "execution_count": 8, "id": "f671db87-8f01-453e-81fd-4e603140a512", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "hi! I'm bob. 
What is my age?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "Tool Calls:\n", " get_user_age (call_jsMvoIFv970DhqqLCJDzPKsp)\n", " Call ID: call_jsMvoIFv970DhqqLCJDzPKsp\n", " Args:\n", " name: bob\n", "=================================\u001b[1m Tool Message \u001b[0m=================================\n", "Name: get_user_age\n", "\n", "42 years old\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "Bob, you are 42 years old.\n", "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "do you remember my name?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "Yes, your name is Bob.\n" ] } ], "source": [ "import uuid\n", "\n", "from langchain_core.messages import (\n", " AIMessage,\n", " BaseMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " trim_messages,\n", ")\n", "from langchain_core.tools import tool\n", "from langchain_openai import ChatOpenAI\n", "from langgraph.checkpoint.memory import MemorySaver\n", "from langgraph.prebuilt import create_react_agent\n", "\n", "\n", "@tool\n", "def get_user_age(name: str) -> str:\n", " \"\"\"Use this tool to find the user's age.\"\"\"\n", " # This is a placeholder for the actual implementation\n", " if \"bob\" in name.lower():\n", " return \"42 years old\"\n", " return \"41 years old\"\n", "\n", "\n", "memory = MemorySaver()\n", "model = ChatOpenAI()\n", "\n", "\n", "# highlight-start\n", "def state_modifier(state) -> list[BaseMessage]:\n", " \"\"\"Given the agent state, return a list of messages for the chat model.\"\"\"\n", " # We're using the message processor defined above.\n", " return trim_messages(\n", " state[\"messages\"],\n", " token_counter=len, # <-- len will simply count the number of messages rather than tokens\n", " max_tokens=5, # <-- allow up to 5 messages.\n", " strategy=\"last\",\n", " # Most chat models expect that chat history starts with either:\n", " # (1) a HumanMessage or\n", " # (2) a SystemMessage followed by a HumanMessage\n", " # start_on=\"human\" makes sure we produce a valid chat history\n", " start_on=\"human\",\n", " # Usually, we want to keep the SystemMessage\n", " # if it's present in the original history.\n", " # The SystemMessage has special instructions for the model.\n", " include_system=True,\n", " allow_partial=False,\n", " )\n", "\n", "\n", "# highlight-end\n", "\n", "app = create_react_agent(\n", " model,\n", " tools=[get_user_age],\n", " checkpointer=memory,\n", " # highlight-next-line\n", " state_modifier=state_modifier,\n", ")\n", "\n", "# The thread id is a unique key that identifies\n", "# this particular conversation.\n", "# We'll just generate a random uuid here.\n", "thread_id = uuid.uuid4()\n", "config = {\"configurable\": {\"thread_id\": thread_id}}\n", "\n", "# Tell the AI that our name is Bob, and ask it to use a tool to confirm\n", "# that it's capable of working like an agent.\n", "input_message = HumanMessage(content=\"hi! I'm bob. 
What is my age?\")\n", "\n", "for event in app.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()\n", "\n", "# Confirm that the chat bot has access to previous conversation\n", "# and can respond to the user saying that the user's name is Bob.\n", "input_message = HumanMessage(content=\"do you remember my name?\")\n", "\n", "for event in app.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " event[\"messages\"][-1].pretty_print()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f4d16e09-1d90-4153-8576-6d3996cb5a6c",
149788
# INVALID_PROMPT_INPUT A [prompt template](/docs/concepts#prompt-templates) received missing or invalid input variables. ## Troubleshooting The following may help resolve this error: - Double-check your prompt template to ensure that it is correct. - If you are using the default f-string format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{` (and if you want to render a double curly brace, you should use four curly braces: `{{{{`). - If you are using a [`MessagesPlaceholder`](/docs/concepts/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects. - If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`). - Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected. - If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
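For example, here is a minimal sketch of the double-escaping rule with the default f-string format (the template text and variable name below are just illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

# {question} is a real input variable; the literal braces around "answer"
# are doubled so the f-string parser does not treat them as input variables.
prompt = ChatPromptTemplate.from_template(
    'Reply with JSON like {{"answer": "..."}}. Question: {question}'
)

print(prompt.invoke({"question": "What is a prompt template?"}).to_string())
```

If `question` were missing from the input dict, or if the inner braces were left single, you would see this error.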
149828
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# MESSAGE_COERCION_FAILURE\n", "\n", "Instead of always requiring instances of `BaseMessage`, several modules in LangChain take `MessageLikeRepresentation`, which is defined as:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from typing import Union\n", "\n", "from langchain_core.prompts.chat import (\n", " BaseChatPromptTemplate,\n", " BaseMessage,\n", " BaseMessagePromptTemplate,\n", ")\n", "\n", "MessageLikeRepresentation = Union[\n", " Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate],\n", " tuple[\n", " Union[str, type],\n", " Union[str, list[dict], list[object]],\n", " ],\n", " str,\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These include OpenAI style message objects (`{ role: \"user\", content: \"Hello world!\" }`),\n", "tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts#humanmessage)).\n", "\n", "If one of these modules receives a value outside of one of these formats, you will receive an error like the following:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "ename": "ValueError", "evalue": "Message dict must contain 'role' and 'content' keys, got {'role': 'HumanMessage', 'random_field': 'random value'}", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mKeyError\u001b[0m Traceback (most recent call last)", "File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/messages/utils.py:318\u001b[0m, in \u001b[0;36m_convert_to_message\u001b[0;34m(message)\u001b[0m\n\u001b[1;32m 317\u001b[0m \u001b[38;5;66;03m# None msg content is not allowed\u001b[39;00m\n\u001b[0;32m--> 318\u001b[0m msg_content \u001b[38;5;241m=\u001b[39m \u001b[43mmsg_kwargs\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpop\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mcontent\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 319\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mKeyError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n", "\u001b[0;31mKeyError\u001b[0m: 'content'", "\nThe above exception was the direct cause of the following exception:\n", "\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[5], line 10\u001b[0m\n\u001b[1;32m 3\u001b[0m uncoercible_message \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 4\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mHumanMessage\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrandom_field\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrandom value\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 6\u001b[0m }\n\u001b[1;32m 8\u001b[0m model \u001b[38;5;241m=\u001b[39m ChatAnthropic(model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mclaude-3-5-sonnet-20240620\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m---> 10\u001b[0m 
\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\u001b[43m[\u001b[49m\u001b[43muncoercible_message\u001b[49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\n",
149830
"File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/language_models/chat_models.py:267\u001b[0m, in \u001b[0;36mBaseChatModel._convert_input\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 265\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m StringPromptValue(text\u001b[38;5;241m=\u001b[39m\u001b[38;5;28minput\u001b[39m)\n\u001b[1;32m 266\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(\u001b[38;5;28minput\u001b[39m, Sequence):\n\u001b[0;32m--> 267\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m ChatPromptValue(messages\u001b[38;5;241m=\u001b[39m\u001b[43mconvert_to_messages\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m)\n\u001b[1;32m 268\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 269\u001b[0m msg \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m 270\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mInvalid input type \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28minput\u001b[39m)\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 271\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mMust be a PromptValue, str, or list of BaseMessages.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 272\u001b[0m )\n", "File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/messages/utils.py:348\u001b[0m, in \u001b[0;36mconvert_to_messages\u001b[0;34m(messages)\u001b[0m\n\u001b[1;32m 346\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(messages, PromptValue):\n\u001b[1;32m 347\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m messages\u001b[38;5;241m.\u001b[39mto_messages()\n\u001b[0;32m--> 348\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m[\u001b[49m\u001b[43m_convert_to_message\u001b[49m\u001b[43m(\u001b[49m\u001b[43mm\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mm\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[43m]\u001b[49m\n", "File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/messages/utils.py:348\u001b[0m, in \u001b[0;36m<listcomp>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 346\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(messages, PromptValue):\n\u001b[1;32m 347\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m messages\u001b[38;5;241m.\u001b[39mto_messages()\n\u001b[0;32m--> 348\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m [\u001b[43m_convert_to_message\u001b[49m\u001b[43m(\u001b[49m\u001b[43mm\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;28;01mfor\u001b[39;00m m \u001b[38;5;129;01min\u001b[39;00m messages]\n",
149831
"File \u001b[0;32m~/langchain/oss-py/libs/core/langchain_core/messages/utils.py:321\u001b[0m, in \u001b[0;36m_convert_to_message\u001b[0;34m(message)\u001b[0m\n\u001b[1;32m 319\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mKeyError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 320\u001b[0m msg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mMessage dict must contain \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m and \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m keys, got \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mmessage\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m--> 321\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(msg) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n\u001b[1;32m 322\u001b[0m _message \u001b[38;5;241m=\u001b[39m _create_message_from_message_type(\n\u001b[1;32m 323\u001b[0m msg_type, msg_content, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mmsg_kwargs\n\u001b[1;32m 324\u001b[0m )\n\u001b[1;32m 325\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n", "\u001b[0;31mValueError\u001b[0m: Message dict must contain 'role' and 'content' keys, got {'role': 'HumanMessage', 'random_field': 'random value'}" ] } ], "source": [ "from langchain_anthropic import ChatAnthropic\n", "\n", "uncoercible_message = {\"role\": \"HumanMessage\", \"random_field\": \"random value\"}\n", "\n", "model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\")\n", "\n", "model.invoke([uncoercible_message])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Troubleshooting\n", "\n", "The following may help resolve this error:\n", "\n", "- Ensure that all inputs to chat models are an array of LangChain message classes or a supported message-like.\n", " - Check that there is no stringification or other unexpected transformation occuring.\n", "- Check the error's stack trace and add log or debugger statements." ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 2 }
149833
# MODEL_AUTHENTICATION Your model provider is denying you access to their service. ## Troubleshooting The following may help resolve this error: - Confirm that your API key or other credentials are correct. - If you are relying on an environment variable to authenticate, confirm that the variable name is correct and that it has a value set. - Note that environment variables can also be set by packages like `dotenv`. - For models, you can try explicitly passing an `api_key` parameter to rule out any environment variable issues like this: ```python model = ChatOpenAI(api_key="YOUR_KEY_HERE") ``` - If you are using a proxy or other custom endpoint, make sure that your custom provider does not expect an alternative authentication scheme.
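For example, a quick sanity check along these lines can rule out a missing environment variable before the model is constructed (the variable name assumes an OpenAI-style provider and may differ for yours):

```python
import os

from langchain_openai import ChatOpenAI

# Confirm the credential is actually visible to this process.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set in this environment.")

model = ChatOpenAI(api_key=api_key)
```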
149855
{ "cells": [ { "cell_type": "markdown", "id": "8c5eb99a", "metadata": {}, "source": [ "# How to inspect runnables\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "\n", ":::\n", "\n", "Once you create a runnable with [LangChain Expression Language](/docs/concepts/#langchain-expression-language), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n", "\n", "This guide shows some ways you can programmatically introspect the internal steps of chains. If you are instead interested in debugging issues in your chain, see [this section](/docs/how_to/debugging) instead.\n", "\n", "First, let's create an example chain. We will create one that does retrieval:" ] }, { "cell_type": "code", "execution_count": null, "id": "d816e954", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai faiss-cpu tiktoken" ] }, { "cell_type": "code", "execution_count": 2, "id": "139228c2", "metadata": {}, "outputs": [], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "model = ChatOpenAI()\n", "\n", "chain = (\n", " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")" ] }, { "cell_type": "markdown", "id": "849e3c42", "metadata": {}, "source": [ "## Get a graph\n", "\n", "You can use the `get_graph()` method to get a graph representation of the runnable:" ] }, { "cell_type": "code", "execution_count": null, "id": "2448b6c2", "metadata": {}, "outputs": [], "source": [ "chain.get_graph()" ] }, { "cell_type": "markdown", "id": "065b02fb", "metadata": {}, "source": [ "## Print a graph\n", "\n", "While that is not super legible, you can use the `print_ascii()` method to show that graph in a way that's easier to understand:" ] }, { "cell_type": "code", "execution_count": 5, "id": "d5ab1515", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " +---------------------------------+ \n", " | Parallel<context,question>Input | \n", " +---------------------------------+ \n", " ** ** \n", " *** *** \n", " ** ** \n", "+----------------------+ +-------------+ \n", "| VectorStoreRetriever | | Passthrough | \n", "+----------------------+ +-------------+ \n", " ** ** \n", " *** *** \n", " ** ** \n", " +----------------------------------+ \n", " | Parallel<context,question>Output | \n", " +----------------------------------+ \n", " * \n", " * \n", " * \n", " +--------------------+ \n", " | ChatPromptTemplate | \n", " +--------------------+ \n", " * \n", " * \n", " * \n", " +------------+ \n", " | ChatOpenAI | \n", " +------------+ \n", " * \n", " * \n", " * \n", " +-----------------+ \n", " | StrOutputParser 
| \n", " +-----------------+ \n", " * \n", " * \n", " * \n", " +-----------------------+ \n", " | StrOutputParserOutput | \n", " +-----------------------+ \n" ] } ], "source": [ "chain.get_graph().print_ascii()" ] }, { "cell_type": "markdown", "id": "2babf851", "metadata": {}, "source": [ "## Get the prompts\n", "\n", "You may want to see just the prompts that are used in a chain with the `get_prompts()` method:" ] }, { "cell_type": "code", "execution_count": 6, "id": "34b2118d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.get_prompts()" ] }, { "cell_type": "markdown", "id": "c5a74bd5", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to introspect your composed LCEL chains.\n", "\n", "Next, check out the other how-to guides on runnables in this section, or the related how-to guide on [debugging your chains](/docs/how_to/debugging)." ] }, { "cell_type": "code", "execution_count": null, "id": "ed965769", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
149856
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add retrieval to chatbots\n", "\n", "Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n", "\n", "## Setup\n", "\n", "You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] }, { "data": { "text/plain": [ "True" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%pip install -qU langchain langchain-openai langchain-chroma beautifulsoup4\n", "\n", "# Set env var OPENAI_API_KEY or load from a .env file:\n", "import dotenv\n", "\n", "dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also set up a chat model that we'll use for the below examples." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "chat = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a retriever\n", "\n", "We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. 
Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n", "\n", "Let's use a document loader to pull text from the docs:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import WebBaseLoader\n", "\n", "loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n", "data = loader.load()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we split it into smaller chunks that the LLM's context window can handle and store it in a vector database:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n", "all_splits = text_splitter.split_documents(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we embed and store those chunks in a vector database:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from langchain_chroma import Chroma\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally, let's create a retriever from our initialized vectorstore:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. 
You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# k is the number of chunks to retrieve\n", "retriever = vectorstore.as_retriever(k=4)\n", "\n",
149857
"docs = retriever.invoke(\"Can LangSmith help test my LLM applications?\")\n", "\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now we've got a retriever that can return related data from the LangSmith docs!\n", "\n", "## Document chains\n", "\n", "Now that we have a retriever that can return LangChain docs, let's create a chain that can use them as context to answer questions. We'll use a `create_stuff_documents_chain` helper function to \"stuff\" all of the input documents into the prompt. It will also handle formatting the docs as strings.\n", "\n", "In addition to a chat model, the function also expects a prompt that has a `context` variables, as well as a placeholder for chat history messages named `messages`. We'll create an appropriate prompt and pass it as shown below:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "\n", "SYSTEM_TEMPLATE = \"\"\"\n", "Answer the user's questions based on the below context. \n", "If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n", "\n", "<context>\n", "{context}\n", "</context>\n", "\"\"\"\n", "\n", "question_answering_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " SYSTEM_TEMPLATE,\n", " ),\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " ]\n", ")\n", "\n", "document_chain = create_stuff_documents_chain(chat, question_answering_prompt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can invoke this `document_chain` by itself to answer questions. Let's use the docs we retrieved above and the same question, `how can langsmith help with testing?`:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "document_chain.invoke(\n", " {\n", " \"context\": docs,\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good! For comparison, we can try it with no context docs and compare the result:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I don't know about LangSmith's specific capabilities for testing LLM applications. 
It's best to reach out to LangSmith directly to inquire about their services and how they can assist with testing your LLM applications.\"" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "document_chain.invoke(\n", " {\n", " \"context\": [],\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the LLM does not return any results.\n", "\n", "## Retrieval chains\n", "\n", "Let's combine this document chain with the retriever. Here's one way this can look:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "from typing import Dict\n", "\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "\n", "def parse_retriever_input(params: Dict):\n", " return params[\"messages\"][-1].content\n", "\n", "\n", "retrieval_chain = RunnablePassthrough.assign(\n", " context=parse_retriever_input | retriever,\n", ").assign(\n", " answer=document_chain,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.\n", "\n", "Invoking this chain combines both steps outlined above:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n", " 'context': [Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. 
You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
149858
" Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n", " 'answer': 'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retrieval_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good!\n", "\n", "## Query transformation\n", "\n", "Our retrieval chain is capable of answering questions about LangSmith, but there's a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.\n", "\n", "The chain in its current form will struggle with this. Consider a followup question to our original question like `Tell me more!`. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='playground. Here, you can modify the prompt and re-run it to observe the resulting changes to the output - as many times as needed!Currently, this feature supports only OpenAI and Anthropic models and works for LLM and Chat Model calls. 
We plan to extend its functionality to more LLM types, chains, agents, and retrievers in the future.What is the exact sequence of events?\\u200bIn complicated chains and agents, it can often be hard to understand what is going on under the hood. What calls are being', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.invoke(\"Tell me more!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can transform the query into a standalone query without any external references an LLM.\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='\"LangSmith LLM application testing and evaluation\"')" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import AIMessage, HumanMessage\n", "\n", "query_transform_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " (\n", " \"user\",\n", " \"Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. 
Only respond with the query, nothing else.\",\n", " ),\n", " ]\n", ")\n", "\n", "query_transformation_chain = query_transform_prompt | chat\n", "\n", "query_transformation_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " AIMessage(\n", " content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n", " ),\n", " HumanMessage(content=\"Tell me more!\"),\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! That transformed query would pull up context documents related to LLM application testing.\n", "\n", "Let's add this to our retrieval chain. We can wrap our retriever as follows:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import RunnableBranch\n", "\n",
149859
"query_transforming_retriever_chain = RunnableBranch(\n", " (\n", " lambda x: len(x.get(\"messages\", [])) == 1,\n", " # If only one message, then we just pass that message's content to retriever\n", " (lambda x: x[\"messages\"][-1].content) | retriever,\n", " ),\n", " # If messages, then we pass inputs to LLM chain to transform the query, then pass to retriever\n", " query_transform_prompt | chat | StrOutputParser() | retriever,\n", ").with_config(run_name=\"chat_retriever_chain\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "SYSTEM_TEMPLATE = \"\"\"\n", "Answer the user's questions based on the below context. \n", "If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n", "\n", "<context>\n", "{context}\n", "</context>\n", "\"\"\"\n", "\n", "question_answering_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " SYSTEM_TEMPLATE,\n", " ),\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " ]\n", ")\n", "\n", "document_chain = create_stuff_documents_chain(chat, question_answering_prompt)\n", "\n", "conversational_retrieval_chain = RunnablePassthrough.assign(\n", " context=query_transforming_retriever_chain,\n", ").assign(\n", " answer=document_chain,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! Let's invoke this new chain with the same inputs as earlier:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n", " 'context': [Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n", " 'answer': 'Yes, LangSmith can help test and evaluate LLM (Language Model) applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversational_retrieval_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " ]\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'),\n", " AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. 
Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'),\n", " HumanMessage(content='Tell me more!')],\n", " 'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
149862
{ "cells": [ { "cell_type": "raw", "id": "8165bd4c", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "keywords: [memory]\n", "---" ] }, { "cell_type": "markdown", "id": "f47033eb", "metadata": {}, "source": [ "# How to add message history\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Chat Messages](/docs/concepts/#message-types)\n", "- [LangGraph persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/)\n", "\n", ":::\n", "\n", ":::note\n", "\n", "This guide previously covered the [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) abstraction. You can access this version of the guide in the [v0.2 docs](https://python.langchain.com/v0.2/docs/how_to/message_history/).\n", "\n", "As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into new LangChain applications.\n", "\n", "If your code is already relying on `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do **not** need to make any changes. We do not plan on deprecating this functionality in the near future as it works for simple chat applications and any code that uses `RunnableWithMessageHistory` will continue to work as expected.\n", "\n", "Please see [How to migrate to LangGraph Memory](/docs/versions/migrating_memory/) for more details.\n", ":::\n", "\n", "Passing conversation state into and out a chain is vital when building a chatbot. LangGraph implements a built-in persistence layer, allowing chain states to be automatically persisted in memory, or external backends such as SQLite, Postgres or Redis. Details can be found in the LangGraph [persistence documentation](https://langchain-ai.github.io/langgraph/how-tos/persistence/).\n", "\n", "In this guide we demonstrate how to add persistence to arbitrary LangChain runnables by wrapping them in a minimal LangGraph application. This lets us persist the message history and other elements of the chain's state, simplifying the development of multi-turn applications. It also supports multiple threads, enabling a single application to interact separately with multiple users.\n", "\n", "## Setup\n", "\n", "Let's initialize a chat model:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", "/>\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "ca50d084-ae4b-4aea-9eb7-2ebc699df9bc", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "# %pip install -qU langchain langchain_anthropic\n", "\n", "# import os\n", "# from getpass import getpass\n", "\n", "# os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\", temperature=0)" ] }, { "cell_type": "markdown", "id": "1f6121bc-2080-4ccc-acf0-f77de4bc951d", "metadata": {}, "source": [ "## Example: message inputs\n", "\n", "Adding memory to a [chat model](/docs/concepts/#chat-models) provides a simple example. Chat models accept a list of messages as input and output a message. 
LangGraph includes a built-in `MessagesState` that we can use for this purpose.\n", "\n", "Below, we:\n", "1. Define the graph state to be a list of messages;\n", "2. Add a single node to the graph that calls a chat model;\n", "3. Compile the graph with an in-memory checkpointer to store messages between runs.\n", "\n", ":::info\n", "\n", "The output of a LangGraph application is its [state](https://langchain-ai.github.io/langgraph/concepts/low_level/). This can be any Python type, but in this context it will typically be a `TypedDict` that matches the schema of your runnable.\n", "\n", ":::" ] }, { "cell_type": "code", "execution_count": 2, "id": "f691a73a-a866-4354-9fff-8315605e2b8f", "metadata": {}, "outputs": [], "source": [ "from langchain_core.messages import HumanMessage\n", "from langgraph.checkpoint.memory import MemorySaver\n", "from langgraph.graph import START, MessagesState, StateGraph\n", "\n", "# Define a new graph\n", "workflow = StateGraph(state_schema=MessagesState)\n", "\n", "\n", "# Define the function that calls the model\n", "def call_model(state: MessagesState):\n", " response = llm.invoke(state[\"messages\"])\n", " # Update message history with response:\n", " return {\"messages\": response}\n", "\n", "\n", "# Define the (single) node in the graph\n", "workflow.add_edge(START, \"model\")\n", "workflow.add_node(\"model\", call_model)\n", "\n", "# Add memory\n", "memory = MemorySaver()\n", "app = workflow.compile(checkpointer=memory)" ] }, { "cell_type": "markdown", "id": "c0b396a8-f81e-4139-b4b2-75adf61d8179", "metadata": {}, "source": [ "When we run the application, we pass in a configuration `dict` that specifies a `thread_id`. This ID is used to distinguish conversational threads (e.g., between different users)." ] }, { "cell_type": "code", "execution_count": 3, "id": "e4309511-2140-4d91-8f5f-ea3661e6d179", "metadata": {}, "outputs": [], "source": [ "config = {\"configurable\": {\"thread_id\": \"abc123\"}}" ] }, { "cell_type": "markdown", "id": "108c45a2-4971-4120-ba64-9a4305a414bb", "metadata": {}, "source": [ "We can then invoke the application:" ] }, { "cell_type": "code", "execution_count": 4, "id": "72a5ff6c-501f-4151-8dd9-f600f70554be", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "It's nice to meet you, Bob! I'm Claude, an AI assistant created by Anthropic. How can I help you today?\n" ] } ], "source": [ "query = \"Hi! I'm Bob.\"\n", "\n", "input_messages = [HumanMessage(query)]\n", "output = app.invoke({\"messages\": input_messages}, config)\n", "output[\"messages\"][-1].pretty_print() # output contains all messages in state" ] }, { "cell_type": "code", "execution_count": 5, "id": "5931fb35-0fac-40e7-8ac6-b14cb4e926cd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
149869
"message_history = messages[\"messages\"]\n", "\n", "new_query = \"Pardon?\"\n", "\n", "messages = langgraph_agent_executor.invoke(\n", " {\"messages\": message_history + [(\"human\", new_query)]}\n", ")\n", "{\n", " \"input\": new_query,\n", " \"output\": messages[\"messages\"][-1].content,\n", "}" ] }, { "cell_type": "markdown", "id": "f4466a4d-e55e-4ece-bee8-2269a0b5677b", "metadata": {}, "source": [ "## Prompt Templates\n", "\n", "With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.\n", "\n", "With LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent), by default there is no prompt. You can achieve similar control over the agent in a few ways:\n", "\n", "1. Pass in a system message as input\n", "2. Initialize the agent with a system message\n", "3. Initialize the agent with a function to transform messages before passing to the model.\n", "\n", "Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.\n", "\n", "First up, using `AgentExecutor`:" ] }, { "cell_type": "code", "execution_count": 7, "id": "a9a11ccd-75e2-4c11-844d-a34870b0ff91", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'what is the value of magic_function(3)?',\n", " 'output': 'El valor de `magic_function(3)` es 5.'}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"You are a helpful assistant. Respond only in Spanish.\"),\n", " (\"human\", \"{input}\"),\n", " # Placeholders fill up a **list** of messages\n", " (\"placeholder\", \"{agent_scratchpad}\"),\n", " ]\n", ")\n", "\n", "\n", "agent = create_tool_calling_agent(model, tools, prompt)\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)\n", "\n", "agent_executor.invoke({\"input\": query})" ] }, { "cell_type": "markdown", "id": "bd5f5500-5ae4-4000-a9fd-8c5a2cc6404d", "metadata": {}, "source": [ "Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n", "\n", "LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n", "\n", "- A `SystemMessage`, which is added to the beginning of the list of messages.\n", "- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n", "- A `Callable`, which should take in full graph state. The output is then passed to the language model.\n", "- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should take in full graph state. The output is then passed to the language model.\n", "\n", "Here's how it looks in action:" ] }, { "cell_type": "code", "execution_count": 8, "id": "a9486805-676a-4d19-a5c4-08b41b172989", "metadata": {}, "outputs": [], "source": [ "from langchain_core.messages import SystemMessage\n", "from langgraph.prebuilt import create_react_agent\n", "\n", "system_message = \"You are a helpful assistant. Respond only in Spanish.\"\n", "# This could also be a SystemMessage object\n", "# system_message = SystemMessage(content=\"You are a helpful assistant. 
Respond only in Spanish.\")\n", "\n", "langgraph_agent_executor = create_react_agent(\n", " model, tools, state_modifier=system_message\n", ")\n", "\n", "\n", "messages = langgraph_agent_executor.invoke({\"messages\": [(\"user\", query)]})" ] }, { "cell_type": "markdown", "id": "fc6059fd-0df7-4b6f-a84c-b5874e983638", "metadata": {}, "source": [ "We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages.\n", "We can do all types of arbitrary formatting of messages here. In this cases, let's just add a SystemMessage to the start of the list of messages." ] }, { "cell_type": "code", "execution_count": 9, "id": "d369ab45-0c82-45f4-9d3e-8efb8dd47e2c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input': 'what is the value of magic_function(3)?', 'output': 'The value of magic_function(3) is 5. ¡Pandamonium!'}\n" ] } ], "source": [ "from langgraph.prebuilt import create_react_agent\n", "from langgraph.prebuilt.chat_agent_executor import AgentState\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"You are a helpful assistant. Respond only in Spanish.\"),\n", " (\"placeholder\", \"{messages}\"),\n", " ]\n", ")\n", "\n", "\n", "def _modify_state_messages(state: AgentState):\n", " return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages() + [\n", " (\"user\", \"Also say 'Pandamonium!' after the answer.\")\n", " ]\n", "\n", "\n", "langgraph_agent_executor = create_react_agent(\n", " model, tools, state_modifier=_modify_state_messages\n", ")\n", "\n", "\n", "messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n", "print(\n", " {\n", " \"input\": query,\n", " \"output\": messages[\"messages\"][-1].content,\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "68df3a09", "metadata": {}, "source": [ "## Memory" ] }, { "cell_type": "markdown", "id": "96e7ffc8", "metadata": {}, "source": [ "### In LangChain\n", "\n", "With LangChain's [AgentExecutor](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could add chat [Memory](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.memory) so it can engage in a multi-turn conversation." ] }, { "cell_type": "code", "execution_count": 10, "id": "b97beba5-8f74-430c-9399-91b77c8fa15c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hi Polly! The output of applying the magic function to the input 3 is 5.\n", "---\n",
149872
" (\"system\", \"You are a helpful assistant.\"),\n", " (\"placeholder\", \"{messages}\"),\n", " ]\n", ")\n", "\n", "\n", "def _modify_state_messages(state: AgentState):\n", " return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n", "\n", "\n", "langgraph_agent_executor = create_react_agent(\n", " model, tools, state_modifier=_modify_state_messages\n", ")\n", "\n", "for step in langgraph_agent_executor.stream(\n", " {\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"\n", "):\n", " print(step)" ] }, { "cell_type": "markdown", "id": "6898ccbc-42b1-4373-954a-2c7b3849fbb0", "metadata": {}, "source": [ "## `return_intermediate_steps`\n", "\n", "### In LangChain\n", "\n", "Setting this parameter on AgentExecutor allows users to access intermediate_steps, which pairs agent actions (e.g., tool invocations) with their outcomes.\n" ] }, { "cell_type": "code", "execution_count": 14, "id": "a2f720f3-c121-4be2-b498-92c16bb44b0a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[(ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_wjaAyTjI2LSYOq7C8QZYSxEs', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_c9aa9c0491'}, id='run-99e06b70-1ef6-4761-834b-87b6c5252e20', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_wjaAyTjI2LSYOq7C8QZYSxEs', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_wjaAyTjI2LSYOq7C8QZYSxEs', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_wjaAyTjI2LSYOq7C8QZYSxEs'), 5)]\n" ] } ], "source": [ "agent_executor = AgentExecutor(agent=agent, tools=tools, return_intermediate_steps=True)\n", "result = agent_executor.invoke({\"input\": query})\n", "print(result[\"intermediate_steps\"])" ] }, { "cell_type": "markdown", "id": "594f7567-302f-4fa8-85bb-025ac8322162", "metadata": {}, "source": [ "### In LangGraph\n", "\n", "By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state." 
] }, { "cell_type": "code", "execution_count": 15, "id": "ef23117a-5ccb-42ce-80c3-ea49a9d3a942", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='what is the value of magic_function(3)?', id='2d369331-8052-4167-bd85-9f6d8ad021ae'),\n", " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_oXiSQSe6WeWj7XIKXxZrO2IC', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-297e7fc9-726f-46a0-8c67-dc28ed1724d0-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_oXiSQSe6WeWj7XIKXxZrO2IC', 'type': 'tool_call'}], usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69}),\n", " ToolMessage(content='5', name='magic_function', id='46370faf-9598-423c-b94b-aca8cb4f035d', tool_call_id='call_oXiSQSe6WeWj7XIKXxZrO2IC'),\n", " AIMessage(content='The value of `magic_function(3)` is 5.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 78, 'total_tokens': 92}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'stop', 'logprobs': None}, id='run-f48efaff-0c2c-4632-bbf9-7ee626f73d02-0', usage_metadata={'input_tokens': 78, 'output_tokens': 14, 'total_tokens': 92})]}" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langgraph.prebuilt import create_react_agent\n", "\n", "langgraph_agent_executor = create_react_agent(model, tools=tools)\n", "\n", "messages = langgraph_agent_executor.invoke({\"messages\": [(\"human\", query)]})\n", "\n", "messages" ] }, { "cell_type": "markdown", "id": "45b528e5-57e1-450e-8d91-513eab53b543", "metadata": {}, "source": [ "## `max_iterations`\n", "\n", "### In LangChain\n", "\n", "`AgentExecutor` implements a `max_iterations` parameter, allowing users to abort a run that exceeds a specified number of iterations." ] }, { "cell_type": "code", "execution_count": 16, "id": "16f189a7-fc78-4cb5-aa16-a94ca06401a6", "metadata": {}, "outputs": [], "source": [ "@tool\n", "def magic_function(input: str) -> str:\n", " \"\"\"Applies a magic function to an input.\"\"\"\n", " return \"Sorry, there was an error. Please try again.\"\n", "\n", "\n", "tools = [magic_function]" ] }, { "cell_type": "code", "execution_count": 17, "id": "c96aefd7-6f6e-4670-aca6-1ac3d4e7871f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `magic_function` with `{'input': '3'}`\n", "\n", "\n",
149878
" 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "search.invoke(\"what is the weather in SF\")" ] }, { "cell_type": "markdown", "id": "e8097977", "metadata": {}, "source": [ "### Retriever\n", "\n", "We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/docs/tutorials/rag)." ] }, { "cell_type": "code", "execution_count": 8, "id": "9c9ce713", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n", "docs = loader.load()\n", "documents = RecursiveCharacterTextSplitter(\n", " chunk_size=1000, chunk_overlap=200\n", ").split_documents(docs)\n", "vector = FAISS.from_documents(documents, OpenAIEmbeddings())\n", "retriever = vector.as_retriever()" ] }, { "cell_type": "code", "execution_count": 9, "id": "dae53ec6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='# The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix=\"sample-experiment\", # The name of the experiment metadata={ \"version\": \"1.0.0\", \"revision_id\": \"beta\" },)import { Client, Run, Example } from \\'langsmith\\';import { runOnDataset } from \\'langchain/smith\\';import { EvaluationResult } from \\'langsmith/evaluation\\';const client = new Client();// Define dataset: these are your test casesconst datasetName = \"Sample Dataset\";const dataset = await client.createDataset(datasetName, { description: \"A sample dataset in LangSmith.\"});await client.createExamples({ inputs: [ { postfix: \"to LangSmith\" }, { postfix: \"to Evaluations in LangSmith\" }, ], outputs: [ { output: \"Welcome to LangSmith\" }, { output: \"Welcome to Evaluations in LangSmith\" }, ], datasetId: dataset.id,});// Define your evaluatorconst exactMatch = async ({ run, example }: { run: Run; example?:', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Introduction', 'language': 'en'})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.invoke(\"how to upload a dataset\")[0]" ] }, { "cell_type": "markdown", "id": "04aeca39", "metadata": {}, "source": [ "Now that we have populated our index that we will do doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it)" ] }, { "cell_type": "code", "execution_count": 10, "id": "117594b5", "metadata": {}, "outputs": [], "source": [ "from langchain.tools.retriever import create_retriever_tool" ] }, { "cell_type": "code", "execution_count": 11, "id": "7280b031", "metadata": {}, "outputs": [], "source": [ "retriever_tool = create_retriever_tool(\n", " retriever,\n", " \"langsmith_search\",\n", " \"Search for information about LangSmith. 
For any questions about LangSmith, you must use this tool!\",\n", ")" ] }, { "cell_type": "markdown", "id": "c3b47c1d", "metadata": {}, "source": [ "### Tools\n", "\n", "Now that we have created both, we can create a list of tools that we will use downstream." ] }, { "cell_type": "code", "execution_count": 12, "id": "b8e8e710", "metadata": {}, "outputs": [], "source": [ "tools = [search, retriever_tool]" ] }, { "cell_type": "markdown", "id": "e00068b0", "metadata": {}, "source": [ "## Using Language Models\n", "\n", "Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "69185491", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "model = ChatOpenAI(model=\"gpt-4\")" ] }, { "cell_type": "markdown", "id": "642ed8bf", "metadata": {}, "source": [ "You can call the language model by passing in a list of messages. By default, the response is a `content` string." ] }, { "cell_type": "code", "execution_count": 13, "id": "c96c960b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Hello! How can I assist you today?'" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "response = model.invoke([HumanMessage(content=\"hi!\")])\n", "response.content" ] }, { "cell_type": "markdown", "id": "47bf8210", "metadata": {}, "source": [ "We can now see what it is like to enable this model to do tool calling. In order to enable that we use `.bind_tools` to give the language model knowledge of these tools." ] }, { "cell_type": "code", "execution_count": 14, "id": "ba692a74", "metadata": {}, "outputs": [], "source": [ "model_with_tools = model.bind_tools(tools)" ] }, { "cell_type": "markdown", "id": "fd920b69", "metadata": {}, "source": [ "We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field." ] }, { "cell_type": "code", "execution_count": 18, "id": "b6a7e925", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ContentString: Hello! How can I assist you today?\n", "ToolCalls: []\n" ] } ], "source": [ "response = model_with_tools.invoke([HumanMessage(content=\"Hi!\")])\n", "\n", "print(f\"ContentString: {response.content}\")\n", "print(f\"ToolCalls: {response.tool_calls}\")" ] }, {
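The next step of this tutorial is cut off above. As a sketch of where it is headed (assuming `model_with_tools` and `HumanMessage` from the preceding cells), an input that actually requires a tool should populate `tool_calls` instead of `content`:

```python
# A sketch of the likely follow-up, assuming `model_with_tools` from above:
# a weather question should trigger a call to the search tool rather than a
# plain text answer.
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])

print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")
# Expect an empty content string and one tool call for the search tool,
# with arguments along the lines of {"query": "weather in SF"}.
```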
149885
{ "cells": [ { "cell_type": "markdown", "id": "ea37db49-d389-4291-be73-885d06c1fb7e", "metadata": {}, "source": [ "# How to use prompting alone (no tool calling) to do extraction\n", "\n", "Tool calling features are not required for generating structured output from LLMs. LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format.\n", "\n", "This approach relies on designing good prompts and then parsing the output of the LLMs to make them extract information well.\n", "\n", "To extract data without tool-calling features: \n", "\n", "1. Instruct the LLM to generate text following an expected format (e.g., JSON with a certain schema);\n", "2. Use [output parsers](/docs/concepts#output-parsers) to structure the model response into a desired Python object.\n", "\n", "First we select a LLM:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"model\" />\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "25487939-8713-4ec7-b774-e4a761ac8298", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:44.442501Z", "iopub.status.busy": "2024-09-10T20:35:44.442044Z", "iopub.status.idle": "2024-09-10T20:35:44.872217Z", "shell.execute_reply": "2024-09-10T20:35:44.871897Z" } }, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_anthropic.chat_models import ChatAnthropic\n", "\n", "model = ChatAnthropic(model_name=\"claude-3-sonnet-20240229\", temperature=0)" ] }, { "cell_type": "markdown", "id": "3e412374-3beb-4bbf-966b-400c1f66a258", "metadata": {}, "source": [ ":::tip\n", "This tutorial is meant to be simple, but generally should really include reference examples to squeeze out performance!\n", ":::" ] }, { "cell_type": "markdown", "id": "abc1a945-0f80-4953-add4-cd572b6f2a51", "metadata": {}, "source": [ "## Using PydanticOutputParser\n", "\n", "The following example uses the built-in `PydanticOutputParser` to parse the output of a chat model." ] }, { "cell_type": "code", "execution_count": 2, "id": "497eb023-c043-443d-ac62-2d4ea85fe1b0", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:44.873979Z", "iopub.status.busy": "2024-09-10T20:35:44.873840Z", "iopub.status.idle": "2024-09-10T20:35:44.878966Z", "shell.execute_reply": "2024-09-10T20:35:44.878718Z" } }, "outputs": [], "source": [ "from typing import List, Optional\n", "\n", "from langchain_core.output_parsers import PydanticOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from pydantic import BaseModel, Field, validator\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " name: str = Field(..., description=\"The name of the person\")\n", " height_in_meters: float = Field(\n", " ..., description=\"The height of the person expressed in meters.\"\n", " )\n", "\n", "\n", "class People(BaseModel):\n", " \"\"\"Identifying information about all people in a text.\"\"\"\n", "\n", " people: List[Person]\n", "\n", "\n", "# Set up a parser\n", "parser = PydanticOutputParser(pydantic_object=People)\n", "\n", "# Prompt\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Answer the user query. 
Wrap the output in `json` tags\\n{format_instructions}\",\n", " ),\n", " (\"human\", \"{query}\"),\n", " ]\n", ").partial(format_instructions=parser.get_format_instructions())" ] }, { "cell_type": "markdown", "id": "c31aa2c8-05a9-4a12-80c5-ea1250dea0ae", "metadata": {}, "source": [ "Let's take a look at what information is sent to the model" ] }, { "cell_type": "code", "execution_count": 3, "id": "20b99ffb-a114-49a9-a7be-154c525f8ada", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:44.880355Z", "iopub.status.busy": "2024-09-10T20:35:44.880277Z", "iopub.status.idle": "2024-09-10T20:35:44.881834Z", "shell.execute_reply": "2024-09-10T20:35:44.881601Z" } }, "outputs": [], "source": [ "query = \"Anna is 23 years old and she is 6 feet tall\"" ] }, { "cell_type": "code", "execution_count": 4, "id": "4f3a66ce-de19-4571-9e54-67504ae3fba7", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:44.883138Z", "iopub.status.busy": "2024-09-10T20:35:44.883049Z", "iopub.status.idle": "2024-09-10T20:35:44.885139Z", "shell.execute_reply": "2024-09-10T20:35:44.884801Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "System: Answer the user query. Wrap the output in `json` tags\n", "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n", "\n", "As an example, for the schema {\"properties\": {\"foo\": {\"title\": \"Foo\", \"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n", "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n", "\n", "Here is the output schema:\n", "```\n",
149886
"{\"$defs\": {\"Person\": {\"description\": \"Information about a person.\", \"properties\": {\"name\": {\"description\": \"The name of the person\", \"title\": \"Name\", \"type\": \"string\"}, \"height_in_meters\": {\"description\": \"The height of the person expressed in meters.\", \"title\": \"Height In Meters\", \"type\": \"number\"}}, \"required\": [\"name\", \"height_in_meters\"], \"title\": \"Person\", \"type\": \"object\"}}, \"description\": \"Identifying information about all people in a text.\", \"properties\": {\"people\": {\"items\": {\"$ref\": \"#/$defs/Person\"}, \"title\": \"People\", \"type\": \"array\"}}, \"required\": [\"people\"]}\n", "```\n", "Human: Anna is 23 years old and she is 6 feet tall\n" ] } ], "source": [ "print(prompt.format_prompt(query=query).to_string())" ] }, { "cell_type": "markdown", "id": "6f1048e0-1bfd-49f9-b697-74389a5ce69c", "metadata": {}, "source": [ "Having defined our prompt, we simply chain together the prompt, model and output parser:" ] }, { "cell_type": "code", "execution_count": 5, "id": "7e0041eb-37dc-4384-9fe3-6dd8c356371e", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:44.886765Z", "iopub.status.busy": "2024-09-10T20:35:44.886675Z", "iopub.status.idle": "2024-09-10T20:35:46.835960Z", "shell.execute_reply": "2024-09-10T20:35:46.835282Z" } }, "outputs": [ { "data": { "text/plain": [ "People(people=[Person(name='Anna', height_in_meters=1.83)])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain = prompt | model | parser\n", "chain.invoke({\"query\": query})" ] }, { "cell_type": "markdown", "id": "dd492fe4-110a-4b83-a191-79fffbc1055a", "metadata": {}, "source": [ "Check out the associated [Langsmith trace](https://smith.langchain.com/public/92ed52a3-92b9-45af-a663-0a9c00e5e396/r).\n", "\n", "Note that the schema shows up in two places: \n", "\n", "1. In the prompt, via `parser.get_format_instructions()`;\n", "2. In the chain, to receive the formatted output and structure it into a Python object (in this case, the Pydantic object `People`)." ] }, { "cell_type": "markdown", "id": "815b3b87-3bc6-4b56-835e-c6b6703cef5d", "metadata": {}, "source": [ "## Custom Parsing\n", "\n", "If desired, it's easy to create a custom prompt and parser with `LangChain` and `LCEL`.\n", "\n", "To create a custom parser, define a function to parse the output from the model (typically an [AIMessage](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html)) into an object of your choice.\n", "\n", "See below for a simple implementation of a JSON parser." 
] }, { "cell_type": "code", "execution_count": 6, "id": "b1f11912-c1bb-4a2a-a482-79bf3996961f", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:46.839577Z", "iopub.status.busy": "2024-09-10T20:35:46.839233Z", "iopub.status.idle": "2024-09-10T20:35:46.849663Z", "shell.execute_reply": "2024-09-10T20:35:46.849177Z" } }, "outputs": [], "source": [ "import json\n", "import re\n", "from typing import List, Optional\n", "\n", "from langchain_anthropic.chat_models import ChatAnthropic\n", "from langchain_core.messages import AIMessage\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from pydantic import BaseModel, Field, validator\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " name: str = Field(..., description=\"The name of the person\")\n", " height_in_meters: float = Field(\n", " ..., description=\"The height of the person expressed in meters.\"\n", " )\n", "\n", "\n", "class People(BaseModel):\n", " \"\"\"Identifying information about all people in a text.\"\"\"\n", "\n", " people: List[Person]\n", "\n", "\n", "# Prompt\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Answer the user query. Output your answer as JSON that \"\n", " \"matches the given schema: ```json\\n{schema}\\n```. \"\n", " \"Make sure to wrap the answer in ```json and ``` tags\",\n", " ),\n", " (\"human\", \"{query}\"),\n", " ]\n", ").partial(schema=People.schema())\n", "\n", "\n", "# Custom parser\n", "def extract_json(message: AIMessage) -> List[dict]:\n", " \"\"\"Extracts JSON content from a string where JSON is embedded between ```json and ``` tags.\n", "\n", " Parameters:\n", " text (str): The text containing the JSON content.\n", "\n", " Returns:\n", " list: A list of extracted JSON strings.\n", " \"\"\"\n", " text = message.content\n", " # Define the regular expression pattern to match JSON blocks\n", " pattern = r\"```json(.*?)```\"\n", "\n", " # Find all non-overlapping matches of the pattern in the string\n", " matches = re.findall(pattern, text, re.DOTALL)\n", "\n", " # Return the list of matched JSON strings, stripping any leading or trailing whitespace\n", " try:\n", " return [json.loads(match.strip()) for match in matches]\n", " except Exception:\n", " raise ValueError(f\"Failed to parse: {message}\")" ] }, { "cell_type": "code", "execution_count": 7, "id": "9260d5e8-3b6c-4639-9f3b-fb2f90239e4b", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:35:46.851870Z", "iopub.status.busy": "2024-09-10T20:35:46.851698Z", "iopub.status.idle": "2024-09-10T20:35:46.854786Z", "shell.execute_reply": "2024-09-10T20:35:46.854424Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "System: Answer the user query. Output your answer as JSON that matches the given schema: ```json\n",
149888
{ "cells": [ { "cell_type": "raw", "id": "f781411d", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "keywords: [charactertextsplitter]\n", "---" ] }, { "cell_type": "markdown", "id": "c3ee8d00", "metadata": {}, "source": [ "# How to split by character\n", "\n", "This is the simplest method. This splits based on a given character sequence, which defaults to `\"\\n\\n\"`. Chunk length is measured by number of characters.\n", "\n", "1. How the text is split: by single character separator.\n", "2. How the chunk size is measured: by number of characters.\n", "\n", "To obtain the string content directly, use `.split_text`.\n", "\n", "To create LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects (e.g., for use in downstream tasks), use `.create_documents`." ] }, { "cell_type": "code", "execution_count": null, "id": "bf8698ce-44b2-4944-b9a9-254344b537af", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-text-splitters" ] }, { "cell_type": "code", "execution_count": 1, "id": "313fb032", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'\n" ] } ], "source": [ "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "# Load an example document\n", "with open(\"state_of_the_union.txt\") as f:\n", " state_of_the_union = f.read()\n", "\n", "text_splitter = CharacterTextSplitter(\n", " separator=\"\\n\\n\",\n", " chunk_size=1000,\n", " chunk_overlap=200,\n", " length_function=len,\n", " is_separator_regex=False,\n", ")\n", "texts = text_splitter.create_documents([state_of_the_union])\n", "print(texts[0])" ] }, { "cell_type": "markdown", "id": "dadcb9d6", "metadata": {}, "source": [ "Use `.create_documents` to propagate metadata associated with each document to the output chunks:" ] }, { "cell_type": "code", "execution_count": 2, "id": "1affda60", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. 
\\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' metadata={'document': 1}\n" ] } ], "source": [ "metadatas = [{\"document\": 1}, {\"document\": 2}]\n", "documents = text_splitter.create_documents(\n", " [state_of_the_union, state_of_the_union], metadatas=metadatas\n", ")\n", "print(documents[0])" ] }, { "cell_type": "markdown", "id": "ee080e12-6f44-4311-b1ef-302520a41d66", "metadata": {}, "source": [ "Use `.split_text` to obtain the string content directly:" ] }, { "cell_type": "code", "execution_count": 7, "id": "2a830a9f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_splitter.split_text(state_of_the_union)[0]" ] }, { "cell_type": "code", "execution_count": null, "id": "a9a3b9cd", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
149897
{ "cells": [ { "cell_type": "markdown", "id": "0fee7096", "metadata": {}, "source": [ "# How to use the output-fixing parser\n", "\n", "This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.\n", "\n", "But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.\n", "\n", "For this example, we'll use the above Pydantic output parser. Here's what happens if we pass it a result that does not comply with the schema:" ] }, { "cell_type": "code", "execution_count": 1, "id": "9bad594d", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.output_parsers import PydanticOutputParser\n", "from langchain_openai import ChatOpenAI\n", "from pydantic import BaseModel, Field" ] }, { "cell_type": "code", "execution_count": 2, "id": "15283e0b", "metadata": {}, "outputs": [], "source": [ "class Actor(BaseModel):\n", " name: str = Field(description=\"name of an actor\")\n", " film_names: List[str] = Field(description=\"list of names of films they starred in\")\n", "\n", "\n", "actor_query = \"Generate the filmography for a random actor.\"\n", "\n", "parser = PydanticOutputParser(pydantic_object=Actor)" ] }, { "cell_type": "code", "execution_count": 3, "id": "072d2d4c", "metadata": {}, "outputs": [], "source": [ "misformatted = \"{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}\"" ] }, { "cell_type": "code", "execution_count": 4, "id": "4cbb35b3", "metadata": {}, "outputs": [ { "ename": "OutputParserException", "evalue": "Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)", "File \u001b[0;32m~/workplace/langchain/libs/langchain/langchain/output_parsers/pydantic.py:29\u001b[0m, in \u001b[0;36mPydanticOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 28\u001b[0m json_str \u001b[38;5;241m=\u001b[39m match\u001b[38;5;241m.\u001b[39mgroup()\n\u001b[0;32m---> 29\u001b[0m json_object \u001b[38;5;241m=\u001b[39m \u001b[43mjson\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mloads\u001b[49m\u001b[43m(\u001b[49m\u001b[43mjson_str\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstrict\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m)\u001b[49m\n\u001b[1;32m 30\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mpydantic_object\u001b[38;5;241m.\u001b[39mparse_obj(json_object)\n", "File \u001b[0;32m~/.pyenv/versions/3.10.1/lib/python3.10/json/__init__.py:359\u001b[0m, in \u001b[0;36mloads\u001b[0;34m(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[1;32m 358\u001b[0m kw[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mparse_constant\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m=\u001b[39m parse_constant\n\u001b[0;32m--> 359\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m 
\u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkw\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[43ms\u001b[49m\u001b[43m)\u001b[49m\n",
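The rest of this page is truncated above, but the step it builds toward is wrapping the failing parser so an LLM can repair the output. A sketch, assuming the `parser` and `misformatted` values from the earlier cells:

```python
# A sketch of the fixing step, assuming `parser` (the PydanticOutputParser for
# Actor) and `misformatted` from the cells above.
from langchain.output_parsers import OutputFixingParser
from langchain_openai import ChatOpenAI

new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())

print(new_parser.parse(misformatted))
# Expected: Actor(name='Tom Hanks', film_names=['Forrest Gump'])
```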
149900
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to propagate callbacks constructor\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", "\n", "Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs).\n", "\n", ":::warning\n", "Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior,\n", "and it's generally better to pass callbacks as a run time argument.\n", ":::\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0'))]] llm_output={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0')" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def on_llm_end(self, response: LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", callbacks=callbacks)\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain.invoke({\"number\": \"2\"})" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "You can see that we only see events from the chat model run - no chain events from the prompt or broader chain.\n", "\n", "## Next steps\n", "\n", "You've now learned how to pass callbacks into a constructor.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks at runtime](/docs/how_to/callbacks_runtime)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 4 }
149901
{ "cells": [ { "cell_type": "markdown", "id": "4d6c0c86", "metadata": {}, "source": [ "# How to retry when a parsing error occurs\n", "\n", "While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example." ] }, { "cell_type": "code", "execution_count": 1, "id": "f28526bd", "metadata": {}, "outputs": [], "source": [ "from langchain.output_parsers import OutputFixingParser\n", "from langchain_core.output_parsers import PydanticOutputParser\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import ChatOpenAI, OpenAI\n", "from pydantic import BaseModel, Field" ] }, { "cell_type": "code", "execution_count": 2, "id": "67c5e1ac", "metadata": {}, "outputs": [], "source": [ "template = \"\"\"Based on the user question, provide an Action and Action Input for what step should be taken.\n", "{format_instructions}\n", "Question: {query}\n", "Response:\"\"\"\n", "\n", "\n", "class Action(BaseModel):\n", " action: str = Field(description=\"action to take\")\n", " action_input: str = Field(description=\"input to the action\")\n", "\n", "\n", "parser = PydanticOutputParser(pydantic_object=Action)" ] }, { "cell_type": "code", "execution_count": 3, "id": "007aa87f", "metadata": {}, "outputs": [], "source": [ "prompt = PromptTemplate(\n", " template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n", " input_variables=[\"query\"],\n", " partial_variables={\"format_instructions\": parser.get_format_instructions()},\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "id": "10d207ff", "metadata": {}, "outputs": [], "source": [ "prompt_value = prompt.format_prompt(query=\"who is leo di caprios gf?\")" ] }, { "cell_type": "code", "execution_count": 5, "id": "68622837", "metadata": {}, "outputs": [], "source": [ "bad_response = '{\"action\": \"search\"}'" ] }, { "cell_type": "markdown", "id": "25631465", "metadata": {}, "source": [ "If we try to parse this response as is, we will get an error:" ] }, { "cell_type": "code", "execution_count": 6, "id": "894967c1", "metadata": {}, "outputs": [ { "ename": "OutputParserException", "evalue": "Failed to parse Action from completion {\"action\": \"search\"}. 
Got: 1 validation error for Action\naction_input\n field required (type=value_error.missing)", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mValidationError\u001b[0m Traceback (most recent call last)", "File \u001b[0;32m~/workplace/langchain/libs/langchain/langchain/output_parsers/pydantic.py:30\u001b[0m, in \u001b[0;36mPydanticOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 29\u001b[0m json_object \u001b[38;5;241m=\u001b[39m json\u001b[38;5;241m.\u001b[39mloads(json_str, strict\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[0;32m---> 30\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpydantic_object\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse_obj\u001b[49m\u001b[43m(\u001b[49m\u001b[43mjson_object\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 32\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (json\u001b[38;5;241m.\u001b[39mJSONDecodeError, ValidationError) \u001b[38;5;28;01mas\u001b[39;00m e:\n", "File \u001b[0;32m~/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/pydantic/main.py:526\u001b[0m, in \u001b[0;36mpydantic.main.BaseModel.parse_obj\u001b[0;34m()\u001b[0m\n", "File \u001b[0;32m~/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/pydantic/main.py:341\u001b[0m, in \u001b[0;36mpydantic.main.BaseModel.__init__\u001b[0;34m()\u001b[0m\n", "\u001b[0;31mValidationError\u001b[0m: 1 validation error for Action\naction_input\n field required (type=value_error.missing)", "\nDuring handling of the above exception, another exception occurred:\n", "\u001b[0;31mOutputParserException\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[6], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mparser\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse\u001b[49m\u001b[43m(\u001b[49m\u001b[43mbad_response\u001b[49m\u001b[43m)\u001b[49m\n",
149902
"File \u001b[0;32m~/workplace/langchain/libs/langchain/langchain/output_parsers/pydantic.py:35\u001b[0m, in \u001b[0;36mPydanticOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 33\u001b[0m name \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mpydantic_object\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\n\u001b[1;32m 34\u001b[0m msg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mFailed to parse \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m from completion \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mtext\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m. Got: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00me\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m---> 35\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m OutputParserException(msg, llm_output\u001b[38;5;241m=\u001b[39mtext)\n", "\u001b[0;31mOutputParserException\u001b[0m: Failed to parse Action from completion {\"action\": \"search\"}. Got: 1 validation error for Action\naction_input\n field required (type=value_error.missing)" ] } ], "source": [ "parser.parse(bad_response)" ] }, { "cell_type": "markdown", "id": "f6b64696", "metadata": {}, "source": [ "If we try to use the `OutputFixingParser` to fix this error, it will be confused - namely, it doesn't know what to actually put for action input." ] }, { "cell_type": "code", "execution_count": 7, "id": "78b2b40d", "metadata": {}, "outputs": [], "source": [ "fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())" ] }, { "cell_type": "code", "execution_count": 8, "id": "4fe1301d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Action(action='search', action_input='input')" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fix_parser.parse(bad_response)" ] }, { "cell_type": "markdown", "id": "9bd9ea7d", "metadata": {}, "source": [ "Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response." ] }, { "cell_type": "code", "execution_count": 9, "id": "7e8a8a28", "metadata": {}, "outputs": [], "source": [ "from langchain.output_parsers import RetryOutputParser" ] }, { "cell_type": "code", "execution_count": 10, "id": "5c86e141", "metadata": {}, "outputs": [], "source": [ "retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))" ] }, { "cell_type": "code", "execution_count": 11, "id": "9c04f731", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Action(action='search', action_input='leo di caprio girlfriend')" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retry_parser.parse_with_prompt(bad_response, prompt_value)" ] }, { "cell_type": "markdown", "id": "16827256-5801-4388-b6fa-608991e29961", "metadata": {}, "source": [ "We can also add the RetryOutputParser easily with a custom chain which transform the raw LLM/ChatModel output into a more workable format." 
] }, { "cell_type": "code", "execution_count": 1, "id": "7eaff2fb-56d3-481c-99a1-a968a49d0654", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Action(action='search', action_input='leo di caprio girlfriend')\n" ] } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "completion_chain = prompt | OpenAI(temperature=0)\n", "\n", "main_chain = RunnableParallel(\n", " completion=completion_chain, prompt_value=prompt\n", ") | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))\n", "\n", "\n", "main_chain.invoke({\"query\": \"who is leo di caprios gf?\"})" ] }, { "cell_type": "markdown", "id": "e3a2513a", "metadata": {}, "source": [ "Find out api documentation for [RetryOutputParser](https://python.langchain.com/api_reference/langchain/output_parsers/langchain.output_parsers.retry.RetryOutputParser.html#langchain.output_parsers.retry.RetryOutputParser)." ] }, { "cell_type": "code", "execution_count": null, "id": "a2f94fd8", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
149903
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to better prompt when doing SQL question-answering\n", "\n", "In this guide we'll go over prompting strategies to improve SQL query generation using [create_sql_query_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.sql_database.query.create_sql_query_chain.html). We'll largely focus on methods for getting relevant database-specific information in your prompt.\n", "\n", "We will cover: \n", "\n", "- How the dialect of the LangChain [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) impacts the prompt of the chain;\n", "- How to format schema information into the prompt using `SQLDatabase.get_context`;\n", "- How to build and select few-shot examples to assist the model.\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-experimental langchain-openai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Uncomment the below to use LangSmith. Not required.\n", "# import os\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:\n", "\n", "* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook_Sqlite.sql`\n", "* Run `sqlite3 Chinook.db`\n", "* Run `.read Chinook_Sqlite.sql`\n", "* Test `SELECT * FROM Artist LIMIT 10;`\n", "\n", "Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sqlite\n", "['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track']\n", "[(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]\n" ] } ], "source": [ "from langchain_community.utilities import SQLDatabase\n", "\n", "db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\", sample_rows_in_table_info=3)\n", "print(db.dialect)\n", "print(db.get_usable_table_names())\n", "print(db.run(\"SELECT * FROM Artist LIMIT 10;\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dialect-specific prompting\n", "\n", "One of the simplest things we can do is make our prompt specific to the SQL dialect we're using. 
When using the built-in [create_sql_query_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.sql_database.query.create_sql_query_chain.html) and [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html), this is handled for you for any of the following dialects:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['crate',\n", " 'duckdb',\n", " 'googlesql',\n", " 'mssql',\n", " 'mysql',\n", " 'mariadb',\n", " 'oracle',\n", " 'postgresql',\n", " 'sqlite',\n", " 'clickhouse',\n", " 'prestodb']" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains.sql_database.prompt import SQL_PROMPTS\n", "\n", "list(SQL_PROMPTS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, using our current DB we can see that we'll get a SQLite-specific prompt.\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\n", "Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\n", "Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n", "Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n", "Pay attention to use date('now') function to get the current date, if the question involves \"today\".\n", "\n", "Use the following format:\n", "\n", "Question: Question here\n", "SQLQuery: SQL Query to run\n", "SQLResult: Result of the SQLQuery\n", "Answer: Final answer here\n", "\n", "Only use the following tables:\n", "\u001b[33;1m\u001b[1;3m{table_info}\u001b[0m\n", "\n", "Question: \u001b[33;1m\u001b[1;3m{input}\u001b[0m\n" ] } ], "source": [ "from langchain.chains import create_sql_query_chain\n", "\n", "chain = create_sql_query_chain(llm, db)\n", "chain.get_prompts()[0].pretty_print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table definitions and example rows\n", "\n", "In most SQL chains, we'll need to feed the model at least part of the database schema. Without this it won't be able to write valid queries. Our database comes with some convenience methods to give us the relevant context. 
Specifically, we can get the table names, their schemas, and a sample of rows from each table.\n", "\n", "Here we will use `SQLDatabase.get_context`, which provides available tables and their schemas:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
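The output of the `get_context` call is truncated above. As a sketch of the call itself (assuming the `db` connection created in the setup), it returns a dict of schema information that can be interpolated into the prompt:

```python
# A sketch, assuming `db` (the SQLDatabase over Chinook.db) from above.
# get_context() returns schema context such as "table_info" and "table_names".
context = db.get_context()

print(list(context))
print(context["table_info"][:1000])  # CREATE TABLE statements plus sample rows
```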
149909
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to debug your LLM apps\n", "\n", "Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.\n", "\n", "There are three main methods for debugging:\n", "\n", "- Verbose Mode: This adds print statements for \"important\" events in your chain.\n", "- Debug Mode: This add logging statements for ALL events in your chain.\n", "- LangSmith Tracing: This logs events to [LangSmith](https://docs.smith.langchain.com/) to allow for visualization there.\n", "\n", "| | Verbose Mode | Debug Mode | LangSmith Tracing |\n", "|------------------------|--------------|------------|-------------------|\n", "| Free | ✅ | ✅ | ✅ |\n", "| UI | ❌ | ❌ | ✅ |\n", "| Persisted | ❌ | ❌ | ✅ |\n", "| See all events | ❌ | ✅ | ✅ |\n", "| See \"important\" events | ✅ | ❌ | ✅ |\n", "| Runs Locally | ✅ | ✅ | ❌ |\n", "\n", "\n", "## Tracing\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n", "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n", "The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "```shell\n", "export LANGCHAIN_TRACING_V2=\"true\"\n", "export LANGCHAIN_API_KEY=\"...\"\n", "```\n", "\n", "Or, if in a notebook, you can set them with:\n", "\n", "```python\n", "import getpass\n", "import os\n", "\n", "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "```\n", "\n", "Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", "/>\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?',\n", " 'output': 'The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan\\'s age in days, we first need his birthdate, which is July 30, 1970. Let\\'s calculate his age in days from his birthdate to today\\'s date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days:\\n- 53 years = 53 x 365 = 19,345 days\\n- Adding leap years from 1970 to 2023: There are 13 leap years (1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020). 
So, add 13 days.\\n- Total days from years and leap years = 19,345 + 13 = 19,358 days\\n- Add the days from July 30, 2023, to December 7, 2023 = 130 days\\n\\nTotal age in days = 19,358 + 130 = 19,488 days\\n\\nChristopher Nolan is 19,488 days old as of December 7, 2023.'}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.agents import AgentExecutor, create_tool_calling_agent\n", "from langchain_community.tools.tavily_search import TavilySearchResults\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "tools = [TavilySearchResults(max_results=1)]\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant.\",\n", " ),\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\"human\", \"{input}\"),\n", " (\"placeholder\", \"{agent_scratchpad}\"),\n", " ]\n", ")\n", "\n", "# Construct the Tools agent\n", "agent = create_tool_calling_agent(llm, tools, prompt)\n", "\n", "# Create an agent executor by passing in the agent and tools\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)\n", "agent_executor.invoke(\n", " {\"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\"}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We don't get much output, but since we set up LangSmith we can easily see what happened under the hood:\n", "\n", "https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## `set_debug` and `set_verbose`\n", "\n", "If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a chain run.\n", "\n", "There are a number of ways to enable printing at varying degrees of verbosity.\n", "\n", "Note: These still work even with LangSmith enabled, so you can have both turned on and running at the same time\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `set_verbose(True)`\n", "\n", "Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'director of the 2023 film Oppenheimer'}`\n",
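The cell that enables verbose mode is cut off above. As a sketch of how these global flags are typically toggled (assuming the `agent_executor` built earlier on this page):

```python
# A sketch of enabling the global debugging flags, assuming `agent_executor`
# from the cells above. set_verbose prints "important" events; set_debug logs
# every event, including raw inputs and outputs.
from langchain.globals import set_debug, set_verbose

set_verbose(True)
# set_debug(True)  # uncomment for maximum verbosity

agent_executor.invoke(
    {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}
)
```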
149916
# How to use LangChain with different Pydantic versions As of the `0.3` release, LangChain uses Pydantic 2 internally. Users should install Pydantic 2 and are advised to **avoid** using the `pydantic.v1` namespace of Pydantic 2 with LangChain APIs. If you're working with prior versions of LangChain, please see the following guide on [Pydantic compatibility](https://python.langchain.com/v0.2/docs/how_to/pydantic_compatibility).
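As a quick illustration (not part of the original guide), here is a minimal sketch of defining a Pydantic 2 model and passing it to a LangChain API such as `with_structured_output`; the chat model, model name, and example input are assumptions for demonstration purposes:

```python
# Minimal sketch: a plain Pydantic 2 model (imported from `pydantic`, not
# `pydantic.v1`) used with a LangChain chat model. Assumes `langchain-openai`
# is installed and OPENAI_API_KEY is set; the model name is illustrative.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age in years")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Ada Lovelace was born in 1815 and died at age 36.")
```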
149922
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to attach callbacks to a runnable\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "- [Chaining runnables](/docs/how_to/sequence)\n", "- [Attach runtime arguments to a Runnable](/docs/how_to/binding)\n", "\n", ":::\n", "\n", "If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain.\n", "\n", ":::important\n", "\n", "`with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components.\n", ":::\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chain RunnableSequence started\n", "Chain ChatPromptTemplate started\n", "Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]\n", "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'))]] llm_output={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n", "Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0')" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def on_llm_end(self, response: 
LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain_with_callbacks = chain.with_config(callbacks=callbacks)\n", "\n", "chain_with_callbacks.invoke({\"number\": \"2\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The bound callbacks will run for all nested module runs.\n", "\n", "## Next steps\n", "\n", "You've now learned how to attach callbacks to a chain.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks in at runtime](/docs/how_to/callbacks_runtime)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 4 }
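A related option (a minimal sketch, assuming the `chain` and `LoggingHandler` defined in the notebook above) is to pass callbacks for a single invocation via the `config` argument instead of binding them with `.with_config()`; see the runtime-callbacks guide linked above for the full treatment:

```python
# Pass callbacks for one invocation only, rather than binding them to the chain.
# Assumes `chain` and `LoggingHandler` from the example above.
chain.invoke({"number": "2"}, config={"callbacks": [LoggingHandler()]})
```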
149923
# Text embedding models :::info Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers. ::: The Embeddings class is designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc.) - this class is designed to provide a standard interface for all of them. Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space. The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself). `.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats. ## Get started ### Setup import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; <Tabs> <TabItem value="openai" label="OpenAI" default> To start we'll need to install the OpenAI partner package: ```bash pip install langchain-openai ``` Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running: ```bash export OPENAI_API_KEY="..." ``` If you'd prefer not to set an environment variable you can pass the key in directly via the `api_key` named parameter when initializing the OpenAIEmbeddings class: ```python from langchain_openai import OpenAIEmbeddings embeddings_model = OpenAIEmbeddings(api_key="...") ``` Otherwise you can initialize without any params: ```python from langchain_openai import OpenAIEmbeddings embeddings_model = OpenAIEmbeddings() ``` </TabItem> <TabItem value="cohere" label="Cohere"> To start we'll need to install the Cohere SDK package: ```bash pip install langchain-cohere ``` Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running: ```shell export COHERE_API_KEY="..." ``` If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when initializing the CohereEmbeddings class: ```python from langchain_cohere import CohereEmbeddings embeddings_model = CohereEmbeddings(cohere_api_key="...", model='embed-english-v3.0') ``` Otherwise you can initialize simply as shown below: ```python from langchain_cohere import CohereEmbeddings embeddings_model = CohereEmbeddings(model='embed-english-v3.0') ``` Do note that it is mandatory to pass the model parameter while initializing the CohereEmbeddings class. </TabItem> <TabItem value="huggingface" label="Hugging Face"> To start we'll need to install the Hugging Face partner package: ```bash pip install langchain-huggingface ``` You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub. 
```python from langchain_huggingface import HuggingFaceEmbeddings embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") ``` </TabItem> </Tabs> ### `embed_documents` #### Embed list of texts Use `.embed_documents` to embed a list of strings, recovering a list of embeddings: ```python embeddings = embeddings_model.embed_documents( [ "Hi there!", "Oh, hello!", "What's your name?", "My friends call me World", "Hello World!" ] ) len(embeddings), len(embeddings[0]) ``` ```output (5, 1536) ``` ### `embed_query` #### Embed single query Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing to other embedded pieces of texts). ```python embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?") embedded_query[:5] ``` ```output [0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038] ```
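Because both methods return plain lists of floats, you can experiment with similarity directly. Below is a minimal sketch (assuming the `embeddings` and `embedded_query` variables from above and that NumPy is installed) that scores each embedded document against the query with cosine similarity, which is the basic operation behind semantic search:

```python
import numpy as np

# Cosine similarity between the query vector and each document vector.
# Assumes `embeddings` and `embedded_query` from the snippets above.
doc_vectors = np.array(embeddings)
query_vector = np.array(embedded_query)

similarities = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
best_match = int(np.argmax(similarities))
print(best_match, similarities[best_match])
```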
149925
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add memory to chatbots\n", "\n", "A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:\n", "\n", "- Simply stuffing previous messages into a chat model prompt.\n", "- The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.\n", "- More complex modifications like synthesizing summaries for long running conversations.\n", "\n", "We'll go into more detail on a few techniques below!\n", "\n", ":::note\n", "\n", "This how-to guide previously built a chatbot using [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html). You can access this version of the guide in the [v0.2 docs](https://python.langchain.com/v0.2/docs/how_to/chatbots_memory/).\n", "\n", "As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of [LangGraph persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/) to incorporate `memory` into new LangChain applications.\n", "\n", "If your code is already relying on `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do **not** need to make any changes. We do not plan on deprecating this functionality in the near future as it works for simple chat applications and any code that uses `RunnableWithMessageHistory` will continue to work as expected.\n", "\n", "Please see [How to migrate to LangGraph Memory](/docs/versions/migrating_memory/) for more details.\n", ":::\n", "\n", "## Setup\n", "\n", "You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdin", "output_type": "stream", "text": [ "OpenAI API Key: ········\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain langchain-openai langgraph\n", "\n", "import getpass\n", "import os\n", "\n", "if not os.environ.get(\"OPENAI_API_KEY\"):\n", " os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also set up a chat model that we'll use for the below examples." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "model = ChatOpenAI(model=\"gpt-4o-mini\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Message passing\n", "\n", "The simplest form of memory is simply passing chat history messages into a chain. Here's an example:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "I said, \"I love programming\" in French: \"J'adore la programmation.\"\n" ] } ], "source": [ "from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " SystemMessage(\n", " content=\"You are a helpful assistant. 
Answer all questions to the best of your ability.\"\n", " ),\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " ]\n", ")\n", "\n", "chain = prompt | model\n", "\n", "ai_msg = chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(\n", " content=\"Translate from English to French: I love programming.\"\n", " ),\n", " AIMessage(content=\"J'adore la programmation.\"),\n", " HumanMessage(content=\"What did you just say?\"),\n", " ],\n", " }\n", ")\n", "print(ai_msg.content)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Automatic history management\n", "\n", "The previous examples pass messages to the chain (and model) explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also provides a way to build applications that have memory using LangGraph's [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/). You can [enable persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/) in LangGraph applications by providing a `checkpointer` when compiling the graph." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from langgraph.checkpoint.memory import MemorySaver\n", "from langgraph.graph import START, MessagesState, StateGraph\n", "\n", "workflow = StateGraph(state_schema=MessagesState)\n", "\n", "\n", "# Define the function that calls the model\n", "def call_model(state: MessagesState):\n", " system_prompt = (\n", " \"You are a helpful assistant. \"\n", " \"Answer all questions to the best of your ability.\"\n", " )\n", " messages = [SystemMessage(content=system_prompt)] + state[\"messages\"]\n", " response = model.invoke(messages)\n", " return {\"messages\": response}\n", "\n", "\n", "# Define the node and edge\n", "workflow.add_node(\"model\", call_model)\n", "workflow.add_edge(START, \"model\")\n", "\n", "# Add simple in-memory checkpointer\n", "# highlight-start\n", "memory = MemorySaver()\n", "app = workflow.compile(checkpointer=memory)\n", "# highlight-end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " We'll pass the latest input to the conversation here and let the LangGraph keep track of the conversation history using the checkpointer:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Translate to French: I love programming.', additional_kwargs={}, response_metadata={}, id='be5e7099-3149-4293-af49-6b36c8ccd71b'),\n", " AIMessage(content=\"J'aime programmer.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 4, 'prompt_tokens': 35, 'total_tokens': 39, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_e9627b5346', 'finish_reason': 'stop', 'logprobs': None}, id='run-8a753d7a-b97b-4d01-a661-626be6f41b38-0', usage_metadata={'input_tokens': 35, 'output_tokens': 4, 'total_tokens': 39})]}" ] },
149935
" data = graph.query(description_query, params={\"candidate\": entity})\n", " return data[0][\"context\"]\n", " except IndexError:\n", " return \"No information was found\"" ] }, { "cell_type": "markdown", "id": "bdecc24b-8065-4755-98cc-9c6d093d4897", "metadata": {}, "source": [ "You can observe that we have defined the Cypher statement used to retrieve information.\n", "Therefore, we can avoid generating Cypher statements and use the LLM agent to only populate the input parameters.\n", "To provide additional information to an LLM agent about when to use the tool and their input parameters, we wrap the function as a tool." ] }, { "cell_type": "code", "execution_count": 6, "id": "f4cde772-0d05-475d-a2f0-b53e1669bd13", "metadata": {}, "outputs": [], "source": [ "from typing import Optional, Type\n", "\n", "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForToolRun,\n", " CallbackManagerForToolRun,\n", ")\n", "from langchain_core.tools import BaseTool\n", "\n", "# Import things that are needed generically\n", "from pydantic import BaseModel, Field\n", "\n", "\n", "class InformationInput(BaseModel):\n", " entity: str = Field(description=\"movie or a person mentioned in the question\")\n", "\n", "\n", "class InformationTool(BaseTool):\n", " name = \"Information\"\n", " description = (\n", " \"useful for when you need to answer questions about various actors or movies\"\n", " )\n", " args_schema: Type[BaseModel] = InformationInput\n", "\n", " def _run(\n", " self,\n", " entity: str,\n", " run_manager: Optional[CallbackManagerForToolRun] = None,\n", " ) -> str:\n", " \"\"\"Use the tool.\"\"\"\n", " return get_information(entity)\n", "\n", " async def _arun(\n", " self,\n", " entity: str,\n", " run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n", " ) -> str:\n", " \"\"\"Use the tool asynchronously.\"\"\"\n", " return get_information(entity)" ] }, { "cell_type": "markdown", "id": "ff4820aa-2b57-4558-901f-6d984b326738", "metadata": {}, "source": [ "## OpenAI Agent\n", "\n", "LangChain expression language makes it very convenient to define an agent to interact with a graph database over the semantic layer." ] }, { "cell_type": "code", "execution_count": 7, "id": "6e959ac2-537d-4358-a43b-e3a47f68e1d6", "metadata": {}, "outputs": [], "source": [ "from typing import List, Tuple\n", "\n", "from langchain.agents import AgentExecutor\n", "from langchain.agents.format_scratchpad import format_to_openai_function_messages\n", "from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n", "from langchain_core.messages import AIMessage, HumanMessage\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "from langchain_core.utils.function_calling import convert_to_openai_function\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n", "tools = [InformationTool()]\n", "\n", "llm_with_tools = llm.bind(functions=[convert_to_openai_function(t) for t in tools])\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant that finds information about movies \"\n", " \" and recommends them. If tools require follow up questions, \"\n", " \"make sure to ask the user for clarification. Make sure to include any \"\n", " \"available options that need to be clarified in the follow up questions \"\n", " \"Do only the things the user specifically requested. 
\",\n", " ),\n", " MessagesPlaceholder(variable_name=\"chat_history\"),\n", " (\"user\", \"{input}\"),\n", " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", " ]\n", ")\n", "\n", "\n", "def _format_chat_history(chat_history: List[Tuple[str, str]]):\n", " buffer = []\n", " for human, ai in chat_history:\n", " buffer.append(HumanMessage(content=human))\n", " buffer.append(AIMessage(content=ai))\n", " return buffer\n", "\n", "\n", "agent = (\n", " {\n", " \"input\": lambda x: x[\"input\"],\n", " \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"])\n", " if x.get(\"chat_history\")\n", " else [],\n", " \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n", " x[\"intermediate_steps\"]\n", " ),\n", " }\n", " | prompt\n", " | llm_with_tools\n", " | OpenAIFunctionsAgentOutputParser()\n", ")\n", "\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)" ] }, { "cell_type": "code", "execution_count": 8, "id": "b0459833-fe84-4ebc-9823-a3a3ffd929e9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `Information` with `{'entity': 'Casino'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3mtype:Movie\n", "title: Casino\n", "year: 1995-11-22\n", "ACTED_IN: Joe Pesci, Robert De Niro, Sharon Stone, James Woods\n", "\u001b[0m\u001b[32;1m\u001b[1;3mThe movie \"Casino\" starred Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'Who played in Casino?',\n", " 'output': 'The movie \"Casino\" starred Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.'}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"Who played in Casino?\"})" ] }, { "cell_type": "code", "execution_count": null, "id": "c2759973-de8a-4624-8930-c90a21d6caa3", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py",
149938
{ "cells": [ { "cell_type": "markdown", "id": "d9172545", "metadata": {}, "source": [ "# How to retrieve using multiple vectors per document\n", "\n", "It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n", "\n", "LangChain implements a base [MultiVectorRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n", "\n", "The methods to create multiple vectors per document include:\n", "\n", "- Smaller chunks: split a document into smaller chunks, and embed those (this is [ParentDocumentRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)).\n", "- Summary: create a summary for each document, embed that along with (or instead of) the document.\n", "- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.\n", "\n", "Note that this also enables another method of adding embeddings - manually. This is useful because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.\n", "\n", "Below we walk through an example. First we instantiate some documents. We will index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store using [OpenAI](https://python.langchain.com/docs/integrations/text_embedding/openai/) embeddings, but any LangChain vector store or embeddings model will suffice." ] }, { "cell_type": "code", "execution_count": null, "id": "09cecd95-3499-465a-895a-944627ffb77f", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-chroma langchain langchain-openai > /dev/null" ] }, { "cell_type": "code", "execution_count": 1, "id": "18c1421a", "metadata": {}, "outputs": [], "source": [ "from langchain.storage import InMemoryByteStore\n", "from langchain_chroma import Chroma\n", "from langchain_community.document_loaders import TextLoader\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "loaders = [\n", " TextLoader(\"paul_graham_essay.txt\"),\n", " TextLoader(\"state_of_the_union.txt\"),\n", "]\n", "docs = []\n", "for loader in loaders:\n", " docs.extend(loader.load())\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n", "docs = text_splitter.split_documents(docs)\n", "\n", "# The vectorstore to use to index the child chunks\n", "vectorstore = Chroma(\n", " collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n", ")" ] }, { "cell_type": "markdown", "id": "fa17beda", "metadata": {}, "source": [ "## Smaller chunks\n", "\n", "Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. 
Note that this is what the [ParentDocumentRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html) does. Here we show what is going on under the hood.\n", "\n", "We will make a distinction between the vector store, which indexes embeddings of the (sub) documents, and the document store, which houses the \"parent\" documents and associates them with an identifier." ] }, { "cell_type": "code", "execution_count": 2, "id": "0e7b6b45", "metadata": {}, "outputs": [], "source": [ "import uuid\n", "\n", "from langchain.retrievers.multi_vector import MultiVectorRetriever\n", "\n", "# The storage layer for the parent documents\n", "store = InMemoryByteStore()\n", "id_key = \"doc_id\"\n", "\n", "# The retriever (empty to start)\n", "retriever = MultiVectorRetriever(\n", " vectorstore=vectorstore,\n", " byte_store=store,\n", " id_key=id_key,\n", ")\n", "\n", "doc_ids = [str(uuid.uuid4()) for _ in docs]" ] }, { "cell_type": "markdown", "id": "d4feded4-856a-4282-91c3-53aabc62e6ff", "metadata": {}, "source": [ "We next generate the \"sub\" documents by splitting the original documents. Note that we store the document identifier in the `metadata` of the corresponding [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) object." ] }, { "cell_type": "code", "execution_count": 3, "id": "5d23247d", "metadata": {}, "outputs": [], "source": [ "# The splitter to use to create smaller chunks\n", "child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n", "\n", "sub_docs = []\n", "for i, doc in enumerate(docs):\n", " _id = doc_ids[i]\n", " _sub_docs = child_text_splitter.split_documents([doc])\n", " for _doc in _sub_docs:\n", " _doc.metadata[id_key] = _id\n", " sub_docs.extend(_sub_docs)" ] }, { "cell_type": "markdown", "id": "8e0634f8-90d5-4250-981a-5257c8a6d455", "metadata": {}, "source": [ "Finally, we index the documents in our vector store and document store:" ] }, { "cell_type": "code", "execution_count": 4, "id": "92ed5861", "metadata": {}, "outputs": [], "source": [ "retriever.vectorstore.add_documents(sub_docs)\n", "retriever.docstore.mset(list(zip(doc_ids, docs)))" ] }, { "cell_type": "markdown", "id": "14c48c6d-850c-4317-9b6e-1ade92f2f710", "metadata": {}, "source": [ "The vector store alone will retrieve small chunks:" ] }, { "cell_type": "code", "execution_count": 5, "id": "8afed60c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '064eca46-a4c4-4789-8e3b-583f9597e54f', 'source': 'state_of_the_union.txt'})" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.vectorstore.similarity_search(\"justice breyer\")[0]" ] }, { "cell_type": "markdown", "id": "717097c7-61d9-4306-8625-ef8f1940c127", "metadata": {}, "source": [ "Whereas the retriever will return the larger parent document:" ] }, {
149940
] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retrieved_docs = retriever.invoke(\"justice breyer\")\n", "\n", "len(retrieved_docs[0].page_content)" ] }, { "cell_type": "markdown", "id": "097a5396", "metadata": {}, "source": [ "## Hypothetical Queries\n", "\n", "An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document, which might bear close semantic similarity to relevant queries in a [RAG](/docs/tutorials/rag) application. These questions can then be embedded and associated with the documents to improve retrieval.\n", "\n", "Below, we use the [with_structured_output](/docs/how_to/structured_output/) method to structure the LLM output into a list of strings." ] }, { "cell_type": "code", "execution_count": 16, "id": "03d85234-c33a-4a43-861d-47328e1ec2ea", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from pydantic import BaseModel, Field\n", "\n", "\n", "class HypotheticalQuestions(BaseModel):\n", " \"\"\"Generate hypothetical questions.\"\"\"\n", "\n", " questions: List[str] = Field(..., description=\"List of questions\")\n", "\n", "\n", "chain = (\n", " {\"doc\": lambda x: x.page_content}\n", " # Only asking for 3 hypothetical questions, but this could be adjusted\n", " | ChatPromptTemplate.from_template(\n", " \"Generate a list of exactly 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n", " )\n", " | ChatOpenAI(max_retries=0, model=\"gpt-4o\").with_structured_output(\n", " HypotheticalQuestions\n", " )\n", " | (lambda x: x.questions)\n", ")" ] }, { "cell_type": "markdown", "id": "6dddc40f-62af-413c-b944-f94a5e1f2f4e", "metadata": {}, "source": [ "Invoking the chain on a single document demonstrates that it outputs a list of questions:" ] }, { "cell_type": "code", "execution_count": 17, "id": "11d30554", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[\"What impact did the IBM 1401 have on the author's early programming experiences?\",\n", " \"How did the transition from using the IBM 1401 to microcomputers influence the author's programming journey?\",\n", " \"What role did Lisp play in shaping the author's understanding and approach to AI?\"]" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(docs[0])" ] }, { "cell_type": "markdown", "id": "dcffc572-7b20-4b77-857a-90ec360a8f7e", "metadata": {}, "source": [ "We can batch then batch the chain over all documents and assemble our vector store and document store as before:" ] }, { "cell_type": "code", "execution_count": 18, "id": "b2cd6e75", "metadata": {}, "outputs": [], "source": [ "# Batch chain over documents to generate hypothetical questions\n", "hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})\n", "\n", "\n", "# The vectorstore to use to index the child chunks\n", "vectorstore = Chroma(\n", " collection_name=\"hypo-questions\", embedding_function=OpenAIEmbeddings()\n", ")\n", "# The storage layer for the parent documents\n", "store = InMemoryByteStore()\n", "id_key = \"doc_id\"\n", "# The retriever (empty to start)\n", "retriever = MultiVectorRetriever(\n", " vectorstore=vectorstore,\n", " byte_store=store,\n", " id_key=id_key,\n", ")\n", "doc_ids = [str(uuid.uuid4()) for _ in docs]\n", "\n", "\n", "# Generate Document objects from hypothetical questions\n", "question_docs = []\n", "for i, question_list in enumerate(hypothetical_questions):\n", " question_docs.extend(\n", " 
[Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n", " )\n", "\n", "\n", "retriever.vectorstore.add_documents(question_docs)\n", "retriever.docstore.mset(list(zip(doc_ids, docs)))" ] }, { "cell_type": "markdown", "id": "75cba8ab-a06f-4545-85fc-cf49d0204b5e", "metadata": {}, "source": [ "Note that querying the underlying vector store will retrieve hypothetical questions that are semantically similar to the input query:" ] }, { "cell_type": "code", "execution_count": 19, "id": "7b442b90", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='What might be the potential benefits of nominating Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court?', metadata={'doc_id': '43292b74-d1b8-4200-8a8b-ea0cb57fbcdb'}),\n", " Document(page_content='How might the Bipartisan Infrastructure Law impact the economic competition between the U.S. and China?', metadata={'doc_id': '66174780-d00c-4166-9791-f0069846e734'}),\n", " Document(page_content='What factors led to the creation of Y Combinator?', metadata={'doc_id': '72003c4e-4cc9-4f09-a787-0b541a65b38c'}),\n", " Document(page_content='How did the ability to publish essays online change the landscape for writers and thinkers?', metadata={'doc_id': 'e8d2c648-f245-4bcc-b8d3-14e64a164b64'})]" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n", "\n", "sub_docs" ] }, { "cell_type": "markdown", "id": "63c32e43-5f4a-463b-a0c2-2101986f70e6", "metadata": {}, "source": [ "And invoking the retriever will return the corresponding document:" ] }, { "cell_type": "code", "execution_count": 20, "id": "7594b24e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "9194" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retrieved_docs = retriever.invoke(\"justice breyer\")\n", "len(retrieved_docs[0].page_content)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
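As a supplement to the notebook above: its introduction notes that you can also add embeddings manually, by writing queries yourself that should lead to a given parent document. Below is a minimal sketch of that idea, assuming the `retriever`, `doc_ids`, and `id_key` variables from the notebook (the query strings themselves are illustrative):

```python
from langchain_core.documents import Document

# Hand-written queries that should surface specific parent documents.
# Assumes `retriever`, `doc_ids`, and `id_key` as defined in the notebook;
# the parent documents are already in the docstore from the earlier steps.
manual_queries = {
    0: ["What did the author work on before college?"],
    1: ["What did the president say about Justice Breyer?"],
}

manual_docs = [
    Document(page_content=query, metadata={id_key: doc_ids[i]})
    for i, queries in manual_queries.items()
    for query in queries
]

retriever.vectorstore.add_documents(manual_docs)
```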
149941
{ "cells": [ { "cell_type": "raw", "id": "45e0127d", "metadata": {}, "source": [ "---\n", "sidebar_position: 4\n", "---" ] }, { "cell_type": "markdown", "id": "d8ca736e", "metadata": {}, "source": [ "# How to partially format prompt templates\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "\n", ":::\n", "\n", "Like partially binding arguments to a function, it can make sense to \"partial\" a prompt template - e.g. pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.\n", "\n", "LangChain supports this in two ways:\n", "\n", "1. Partial formatting with string values.\n", "2. Partial formatting with functions that return string values.\n", "\n", "In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.\n", "\n", "## Partial with strings\n", "\n", "One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "5f1942bd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "foobaz\n" ] } ], "source": [ "from langchain_core.prompts import PromptTemplate\n", "\n", "prompt = PromptTemplate.from_template(\"{foo}{bar}\")\n", "partial_prompt = prompt.partial(foo=\"foo\")\n", "print(partial_prompt.format(bar=\"baz\"))" ] }, { "cell_type": "markdown", "id": "79af4cea", "metadata": {}, "source": [ "You can also just initialize the prompt with the partialed variables.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "572fa26f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "foobaz\n" ] } ], "source": [ "prompt = PromptTemplate(\n", " template=\"{foo}{bar}\", input_variables=[\"bar\"], partial_variables={\"foo\": \"foo\"}\n", ")\n", "print(prompt.format(bar=\"baz\"))" ] }, { "cell_type": "markdown", "id": "ab12d50d", "metadata": {}, "source": [ "## Partial with functions\n", "\n", "The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables is inconvenient. 
In this case, it's handy to be able to partial the prompt with a function that always returns the current date.\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "c538703a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tell me a funny joke about the day 04/21/2024, 19:43:57\n" ] } ], "source": [ "from datetime import datetime\n", "\n", "\n", "def _get_datetime():\n", " now = datetime.now()\n", " return now.strftime(\"%m/%d/%Y, %H:%M:%S\")\n", "\n", "\n", "prompt = PromptTemplate(\n", " template=\"Tell me a {adjective} joke about the day {date}\",\n", " input_variables=[\"adjective\", \"date\"],\n", ")\n", "partial_prompt = prompt.partial(date=_get_datetime)\n", "print(partial_prompt.format(adjective=\"funny\"))" ] }, { "cell_type": "markdown", "id": "da80290e", "metadata": {}, "source": [ "You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "f86fce6d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tell me a funny joke about the day 04/21/2024, 19:43:57\n" ] } ], "source": [ "prompt = PromptTemplate(\n", " template=\"Tell me a {adjective} joke about the day {date}\",\n", " input_variables=[\"adjective\"],\n", " partial_variables={\"date\": _get_datetime},\n", ")\n", "print(prompt.format(adjective=\"funny\"))" ] }, { "cell_type": "markdown", "id": "3895b210", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to partially apply variables to your prompt templates.\n", "\n", "Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/docs/how_to/few_shot_examples_chat)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
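One small extension of the notebook above (a sketch, reusing its `_get_datetime` helper): string partials and function partials can be mixed on the same template, since `partial_variables` accepts both plain strings and zero-argument callables:

```python
from langchain_core.prompts import PromptTemplate

# Mix a string partial and a function partial on one template.
# Reuses the `_get_datetime` helper defined in the notebook above.
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about {topic} on {date}",
    input_variables=["topic"],
    partial_variables={"adjective": "funny", "date": _get_datetime},
)
print(prompt.format(topic="the weather"))
```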
149942
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to create custom callback handlers\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "\n", ":::\n", "\n", "LangChain has some built-in callback handlers, but you will often want to create your own handlers with custom logic.\n", "\n", "To create a custom callback handler, we need to determine the [event(s)](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) we want our callback handler to handle as well as what we want our callback handler to do when the event is triggered. Then all we need to do is attach the callback handler to the object, for example via [the constructor](/docs/how_to/callbacks_constructor) or [at runtime](/docs/how_to/callbacks_runtime).\n", "\n", "In the example below, we'll implement streaming with a custom handler.\n", "\n", "In our custom callback handler `MyCustomHandler`, we implement the `on_llm_new_token` handler to print the token we have just received. We then attach our custom handler to the model object as a constructor callback." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "My custom handler, token: Here\n", "My custom handler, token: 's\n", "My custom handler, token: a\n", "My custom handler, token: bear\n", "My custom handler, token: joke\n", "My custom handler, token: for\n", "My custom handler, token: you\n", "My custom handler, token: :\n", "My custom handler, token: \n", "\n", "Why\n", "My custom handler, token: di\n", "My custom handler, token: d the\n", "My custom handler, token: bear\n", "My custom handler, token: dissol\n", "My custom handler, token: ve\n", "My custom handler, token: in\n", "My custom handler, token: water\n", "My custom handler, token: ?\n", "My custom handler, token: \n", "Because\n", "My custom handler, token: it\n", "My custom handler, token: was\n", "My custom handler, token: a\n", "My custom handler, token: polar\n", "My custom handler, token: bear\n", "My custom handler, token: !\n" ] } ], "source": [ "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class MyCustomHandler(BaseCallbackHandler):\n", " def on_llm_new_token(self, token: str, **kwargs) -> None:\n", " print(f\"My custom handler, token: {token}\")\n", "\n", "\n", "prompt = ChatPromptTemplate.from_messages([\"Tell me a joke about {animal}\"])\n", "\n", "# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n", "# Additionally, we pass in our custom handler as a list to the callbacks parameter\n", "model = ChatAnthropic(\n", " model=\"claude-3-sonnet-20240229\", streaming=True, callbacks=[MyCustomHandler()]\n", ")\n", "\n", "chain = prompt | model\n", "\n", "response = chain.invoke({\"animal\": \"bears\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see [this reference 
page](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) for a list of events you can handle. Note that the `handle_chain_*` events run for most LCEL runnables.\n", "\n", "## Next steps\n", "\n", "You've now learned how to create your own custom callback handlers.\n", "\n", "Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/docs/how_to/callbacks_attach)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 2 }
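A common variation on the streaming handler above (a minimal sketch, not from the notebook) is to collect tokens rather than print them, so the stream can be inspected or post-processed after the run; the usage lines assume the `prompt` and streaming Anthropic model setup shown above:

```python
from typing import Any, List

from langchain_core.callbacks import BaseCallbackHandler


class CollectTokensHandler(BaseCallbackHandler):
    """Collect streamed tokens into a list instead of printing them."""

    def __init__(self) -> None:
        self.tokens: List[str] = []

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self.tokens.append(token)


# Usage sketch (assumes the `prompt` and streaming ChatAnthropic setup above):
# handler = CollectTokensHandler()
# model = ChatAnthropic(
#     model="claude-3-sonnet-20240229", streaming=True, callbacks=[handler]
# )
# (prompt | model).invoke({"animal": "bears"})
# print("".join(handler.tokens))
```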
149945
{ "cells": [ { "cell_type": "markdown", "id": "c95fcd15cd52c944", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "# How to split by HTML sections\n", "## Description and motivation\n", "Similar in concept to the [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter), the `HTMLSectionSplitter` is a \"structure-aware\" chunker that splits text at the element level and adds metadata for each header \"relevant\" to any given chunk.\n", "\n", "It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures.\n", "\n", "Use `xslt_path` to provide an absolute path to transform the HTML so that it can detect sections based on provided tags. The default is to use the `converting_to_header.xslt` file in the `data_connection/document_transformers` directory. This is for converting the html to a format/layout that is easier to detect sections. For example, `span` based on their font size can be converted to header tags to be detected as a section.\n", "\n", "## Usage examples\n", "### 1) How to split HTML strings:" ] }, { "cell_type": "code", "execution_count": 1, "id": "initial_id", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:49.208965400Z", "start_time": "2023-10-02T18:57:48.899756Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Foo \\n Some intro text about Foo.', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Bar main section \\n Some intro text about Bar. \\n Bar subsection 1 \\n Some text about the first subtopic of Bar. \\n Bar subsection 2 \\n Some text about the second subtopic of Bar.', metadata={'Header 2': 'Bar main section'}),\n", " Document(page_content='Baz \\n Some text about Baz \\n \\n \\n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import HTMLSectionSplitter\n", "\n", "html_string = \"\"\"\n", " <!DOCTYPE html>\n", " <html>\n", " <body>\n", " <div>\n", " <h1>Foo</h1>\n", " <p>Some intro text about Foo.</p>\n", " <div>\n", " <h2>Bar main section</h2>\n", " <p>Some intro text about Bar.</p>\n", " <h3>Bar subsection 1</h3>\n", " <p>Some text about the first subtopic of Bar.</p>\n", " <h3>Bar subsection 2</h3>\n", " <p>Some text about the second subtopic of Bar.</p>\n", " </div>\n", " <div>\n", " <h2>Baz</h2>\n", " <p>Some text about Baz</p>\n", " </div>\n", " <br>\n", " <p>Some concluding text about Foo</p>\n", " </div>\n", " </body>\n", " </html>\n", "\"\"\"\n", "\n", "headers_to_split_on = [(\"h1\", \"Header 1\"), (\"h2\", \"Header 2\")]\n", "\n", "html_splitter = HTMLSectionSplitter(headers_to_split_on)\n", "html_header_splits = html_splitter.split_text(html_string)\n", "html_header_splits" ] }, { "cell_type": "markdown", "id": "e29b4aade2a0070c", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "### 2) How to constrain chunk sizes:\n", "\n", "`HTMLSectionSplitter` can be used with other text splitters as part of a chunking pipeline. Internally, it uses the `RecursiveCharacterTextSplitter` when the section size is larger than the chunk size. 
It also considers the font size of the text to determine whether it is a section or not based on the determined font size threshold." ] }, { "cell_type": "code", "execution_count": 3, "id": "6ada8ea093ea0475", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:51.016141300Z", "start_time": "2023-10-02T18:57:50.647495400Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Foo \\n Some intro text about Foo.', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Bar main section \\n Some intro text about Bar.', metadata={'Header 2': 'Bar main section'}),\n", " Document(page_content='Bar subsection 1 \\n Some text about the first subtopic of Bar.', metadata={'Header 3': 'Bar subsection 1'}),\n", " Document(page_content='Bar subsection 2 \\n Some text about the second subtopic of Bar.', metadata={'Header 3': 'Bar subsection 2'}),\n", " Document(page_content='Baz \\n Some text about Baz \\n \\n \\n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "html_string = \"\"\"\n", " <!DOCTYPE html>\n", " <html>\n", " <body>\n", " <div>\n", " <h1>Foo</h1>\n", " <p>Some intro text about Foo.</p>\n", " <div>\n", " <h2>Bar main section</h2>\n", " <p>Some intro text about Bar.</p>\n", " <h3>Bar subsection 1</h3>\n", " <p>Some text about the first subtopic of Bar.</p>\n", " <h3>Bar subsection 2</h3>\n", " <p>Some text about the second subtopic of Bar.</p>\n", " </div>\n", " <div>\n", " <h2>Baz</h2>\n", " <p>Some text about Baz</p>\n", " </div>\n", " <br>\n", " <p>Some concluding text about Foo</p>\n", " </div>\n", " </body>\n", " </html>\n", "\"\"\"\n", "\n", "headers_to_split_on = [\n", " (\"h1\", \"Header 1\"),\n", " (\"h2\", \"Header 2\"),\n", " (\"h3\", \"Header 3\"),\n", " (\"h4\", \"Header 4\"),\n", "]\n", "\n", "html_splitter = HTMLSectionSplitter(headers_to_split_on)\n", "\n", "html_header_splits = html_splitter.split_text(html_string)\n", "\n", "chunk_size = 500\n", "chunk_overlap = 30\n", "text_splitter = RecursiveCharacterTextSplitter(\n", " chunk_size=chunk_size, chunk_overlap=chunk_overlap\n", ")\n", "\n", "# Split\n",
149950
{ "cells": [ { "cell_type": "raw", "id": "e2596041-9b76-4e74-836f-e6235086bbf0", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "keywords: [RunnableParallel, RunnableMap, LCEL]\n", "---" ] }, { "cell_type": "markdown", "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2", "metadata": {}, "source": [ "# How to invoke runnables in parallel\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence)\n", "\n", ":::\n", "\n", "The [`RunnableParallel`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableParallel.html) primitive is essentially a dict whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the overall input of the `RunnableParallel`. The final return value is a dict with the results of each value under its appropriate key.\n", "\n", "## Formatting with `RunnableParallels`\n", "\n", "`RunnableParallels` are useful for parallelizing operations, but can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. You can use them to split or fork the chain so that multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n", "\n", "```text\n", " Input\n", " / \\\n", " / \\\n", " Branch1 Branch2\n", " \\ /\n", " \\ /\n", " Combine\n", "```\n", "\n", "Below, the input to prompt is expected to be a map with keys `\"context\"` and `\"question\"`. The user input is just the question. 
So we need to get the context using our retriever and passthrough the user input under the `\"question\"` key.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "2627ffd7", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Harrison worked at Kensho.'" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "\n", "# The prompt expects input with keys for \"context\" and \"question\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "model = ChatOpenAI()\n", "\n", "retrieval_chain = (\n", " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")\n", "\n", "retrieval_chain.invoke(\"where did harrison work?\")" ] }, { "cell_type": "markdown", "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", "metadata": {}, "source": [ ":::tip\n", "Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n", ":::\n", "\n", "```\n", "{\"context\": retriever, \"question\": RunnablePassthrough()}\n", "```\n", "\n", "```\n", "RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n", "```\n", "\n", "```\n", "RunnableParallel(context=retriever, question=RunnablePassthrough())\n", "```\n", "\n", "See the section on [coercion for more](/docs/how_to/sequence/#coercion)." ] }, { "cell_type": "markdown", "id": "7c1b8baa-3a80-44f0-bb79-d22f79815d3d", "metadata": {}, "source": [ "## Using itemgetter as shorthand\n", "\n", "Note that you can use Python's `itemgetter` as shorthand to extract data from the map when combining with `RunnableParallel`. You can find more information about itemgetter in the [Python Documentation](https://docs.python.org/3/library/operator.html#operator.itemgetter). 
\n", "\n", "In the example below, we use itemgetter to extract specific keys from the map:" ] }, { "cell_type": "code", "execution_count": 3, "id": "84fc49e1-2daf-4700-ae33-a0a6ed47d5f6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Harrison ha lavorato a Kensho.'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from operator import itemgetter\n", "\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\n", "Answer in the following language: {language}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "chain = (\n", " {\n", " \"context\": itemgetter(\"question\") | retriever,\n", " \"question\": itemgetter(\"question\"),\n", " \"language\": itemgetter(\"language\"),\n", " }\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")\n", "\n", "chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})" ] }, {
- [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter)
- [How to: split by HTML sections](/docs/how_to/HTML_section_aware_splitter)
- [How to: split by character](/docs/how_to/character_text_splitter)
- [How to: split code](/docs/how_to/code_splitter)
- [How to: split Markdown by headers](/docs/how_to/markdown_header_metadata_splitter)
- [How to: recursively split JSON](/docs/how_to/recursive_json_splitter)
- [How to: split text into semantic chunks](/docs/how_to/semantic-chunker)
- [How to: split by tokens](/docs/how_to/split_by_token)

### Embedding models

[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it.

- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)

### Vector stores

[Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings.

- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)

### Retrievers

[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents.

- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
- [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever)
- [How to: use contextual compression to compress the data retrieved](/docs/how_to/contextual_compression)
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever)
- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
- [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder)
- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
- [How to: generate metadata filters](/docs/how_to/self_query)
- [How to: create a time-weighted retriever](/docs/how_to/time_weighted_vectorstore)
- [How to: use hybrid vector and keyword retrieval](/docs/how_to/hybrid)

### Indexing

Indexing is the process of keeping your vectorstore in-sync with the underlying data source.

- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)

### Tools

LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel)
- [How to: access the `RunnableConfig` from a tool](/docs/how_to/tool_configure)
- [How to: stream events from a tool](/docs/how_to/tool_stream_events)
- [How to: return artifacts from a tool](/docs/how_to/tool_artifacts/)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting)
- [How to: pass in runtime secrets](/docs/how_to/runnable_runtime_secrets)

### Multimodal

- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
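Since the index above only links out to the individual guides, here is a minimal, hedged sketch of the tool concept described in the Tools section: a plain function whose name, docstring, and signature become the description passed to the model. It assumes only the `langchain-core` package; the `add` function is illustrative.

```python
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""  # The docstring becomes the tool's description.
    return a + b


# The name, description, and argument schema are what get passed to a model.
print(add.name)
print(add.description)
print(add.args)
```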
{ "cells": [ { "cell_type": "markdown", "id": "4ef893cf-eac1-45e6-9eb6-72e9ca043200", "metadata": {}, "source": [ "# How to get your RAG application to return sources\n", "\n", "Often in Q&A applications it's important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.\n", "\n", "We'll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/docs/tutorials/rag).\n", "\n", "We will cover two approaches:\n", "\n", "1. Using the built-in [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n", "2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle.\n", "\n", "We will also show how to structure sources into the model response, such that a model can report what specific sources it used in generating its answer." ] }, { "cell_type": "markdown", "id": "487d8d79-5ee9-4aa4-9fdf-cd5f4303e099", "metadata": {}, "source": [ "## Setup\n", "\n", "### Dependencies\n", "\n", "We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n", "\n", "We'll use the following packages:" ] }, { "cell_type": "code", "execution_count": 1, "id": "28d272cd-4e31-40aa-bbb4-0be0a1f49a14", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai langchain-chroma beautifulsoup4" ] }, { "cell_type": "markdown", "id": "51ef48de-70b6-4f43-8e0b-ab9b84c9c02a", "metadata": {}, "source": [ "We need to set environment variable `OPENAI_API_KEY`, which can be done directly or loaded from a `.env` file like so:" ] }, { "cell_type": "code", "execution_count": null, "id": "143787ca-d8e6-4dc9-8281-4374f4d71720", "metadata": {}, "outputs": [], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n", "\n", "# import dotenv\n", "\n", "# dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "id": "1665e740-ce01-4f09-b9ed-516db0bd326f", "metadata": {}, "source": [ "### LangSmith\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "Note that LangSmith is not needed, but it is helpful. 
If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:" ] }, { "cell_type": "code", "execution_count": null, "id": "07411adb-3722-4f65-ab7f-8f6f57663d11", "metadata": {}, "outputs": [], "source": [ "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "markdown", "id": "fa6ba684-26cf-4860-904e-a4d51380c134", "metadata": {}, "source": [ "## Using `create_retrieval_chain`\n", "\n", "Let's first select a LLM:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "5e7513b0-81e5-4477-8007-101e523f271c", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI()" ] }, { "cell_type": "markdown", "id": "6b1bdfd7-8acf-4655-834d-ba7463a80fef", "metadata": {}, "source": [ "Here is Q&A app with sources we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/docs/tutorials/rag):" ] }, { "cell_type": "code", "execution_count": null, "id": "24a69b8c-024e-4e34-b827-9c9de46512a3", "metadata": {}, "outputs": [], "source": [ "import bs4\n", "from langchain.chains import create_retrieval_chain\n", "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_chroma import Chroma\n", "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "# 1. Load, chunk and index the contents of the blog to create a retriever.\n", "loader = WebBaseLoader(\n", " web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n", " bs_kwargs=dict(\n", " parse_only=bs4.SoupStrainer(\n", " class_=(\"post-content\", \"post-title\", \"post-header\")\n", " )\n", " ),\n", ")\n", "docs = loader.load()\n", "\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "splits = text_splitter.split_documents(docs)\n", "vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n", "retriever = vectorstore.as_retriever()\n", "\n", "\n", "# 2. Incorporate the retriever into a question-answering chain.\n", "system_prompt = (\n", " \"You are an assistant for question-answering tasks. \"\n", " \"Use the following pieces of retrieved context to answer \"\n", " \"the question. If you don't know the answer, say that you \"\n", " \"don't know. Use three sentences maximum and keep the \"\n", " \"answer concise.\"\n", " \"\\n\\n\"\n", " \"{context}\"\n", ")\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", system_prompt),\n", " (\"human\", \"{input}\"),\n", " ]\n", ")\n", "\n",
" Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, page_content='Fig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:')],\n", " 'answer': 'Task decomposition is a technique used in artificial intelligence to break down complex tasks into smaller and more manageable subtasks. This approach helps agents or models to tackle difficult problems by dividing them into simpler steps, improving performance and interpretability. Different methods like Chain of Thought and Tree of Thoughts have been developed to enhance task decomposition in AI systems.'}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "\n", "def format_docs(docs):\n", " return \"\\n\\n\".join(doc.page_content for doc in docs)\n", "\n", "\n", "# This Runnable takes a dict with keys 'input' and 'context',\n", "# formats them into a prompt, and generates a response.\n", "rag_chain_from_docs = (\n", " {\n", " \"input\": lambda x: x[\"input\"], # input query\n", " \"context\": lambda x: format_docs(x[\"context\"]), # context\n", " }\n", " | prompt # format query and context into prompt\n", " | llm # generate response\n", " | StrOutputParser() # coerce to string\n", ")\n", "\n", "# Pass input query to retriever\n", "retrieve_docs = (lambda x: x[\"input\"]) | retriever\n", "\n", "# Below, we chain `.assign` calls. This takes a dict and successively\n", "# adds keys-- \"context\" and \"answer\"-- where the value for each key\n", "# is determined by a Runnable. The Runnable operates on all existing\n", "# keys in the dict.\n", "chain = RunnablePassthrough.assign(context=retrieve_docs).assign(\n", " answer=rag_chain_from_docs\n", ")\n", "\n", "chain.invoke({\"input\": \"What is Task Decomposition\"})" ] }, { "cell_type": "markdown", "id": "b437da5d-ca09-4d15-9be2-c35e5a1ace77", "metadata": {}, "source": [ ":::tip\n", "\n", "Check out the [LangSmith trace](https://smith.langchain.com/public/1c055a3b-0236-4670-a3fb-023d418ba796/r)\n", "\n", ":::" ] }, { "cell_type": "markdown", "id": "c1c17797-d965-4fd2-b8d4-d386f25dd352", "metadata": {}, "source": [ "## Structure sources in model response\n", "\n", "Up to this point, we've simply propagated the documents returned from the retrieval step through to the final response. But this may not illustrate what subset of information the model relied on when generating its answer. Below, we show how to structure sources into the model response, allowing the model to report what specific context it relied on for its answer.\n", "\n", "Because the above LCEL implementation is composed of [Runnable](/docs/concepts/#runnable-interface) primitives, it is straightforward to extend. Below, we make a simple change:\n", "\n", "- We use the model's tool-calling features to generate [structured output](/docs/how_to/structured_output/), consisting of an answer and list of sources. The schema for the response is represented in the `AnswerWithSources` TypedDict, below.\n", "- We remove the `StrOutputParser()`, as we expect `dict` output in this scenario." 
] }, { "cell_type": "code", "execution_count": 17, "id": "8f916b14-1b0a-4975-a62f-52f1353bde15", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.runnables import RunnablePassthrough\n", "from typing_extensions import Annotated, TypedDict\n", "\n", "\n", "# Desired schema for response\n", "class AnswerWithSources(TypedDict):\n", " \"\"\"An answer to the question, with sources.\"\"\"\n", "\n", " answer: str\n", " sources: Annotated[\n", " List[str],\n", " ...,\n", " \"List of sources (author + year) used to answer the question\",\n", " ]\n", "\n", "\n", "# Our rag_chain_from_docs has the following changes:\n", "# - add `.with_structured_output` to the LLM;\n", "# - remove the output parser\n", "rag_chain_from_docs = (\n", " {\n", " \"input\": lambda x: x[\"input\"],\n", " \"context\": lambda x: format_docs(x[\"context\"]),\n", " }\n", " | prompt\n", " | llm.with_structured_output(AnswerWithSources)\n", ")\n", "\n", "retrieve_docs = (lambda x: x[\"input\"]) | retriever\n", "\n", "chain = RunnablePassthrough.assign(context=retrieve_docs).assign(\n", " answer=rag_chain_from_docs\n", ")\n", "\n", "response = chain.invoke({\"input\": \"What is Chain of Thought?\"})" ] }, { "cell_type": "code", "execution_count": 18, "id": "7a8fc0c5-afb3-4012-a467-3951996a6850", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"answer\": \"Chain of Thought (CoT) is a prompting technique that enhances model performance on complex tasks by instructing the model to \\\"think step by step\\\" to decompose hard tasks into smaller and simpler steps. It transforms big tasks into multiple manageable tasks and sheds light on the interpretation of the model's thinking process.\",\n", " \"sources\": [\n", " \"Wei et al. 2022\"\n", " ]\n", "}\n" ] } ], "source": [ "import json\n", "\n", "print(json.dumps(response[\"answer\"], indent=2))" ] }, { "cell_type": "markdown", "id": "7440f785-29c5-4c6b-9656-0d9d5efbac05", "metadata": {}, "source": [ ":::tip\n", "\n", "View [LangSmith trace](https://smith.langchain.com/public/0eeddf06-3a7b-4f27-974c-310ca8160f60/r)\n", "\n", ":::" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
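One optional, hedged addition to the chain above: because the retrieved `Document`s are carried through under the `"context"` key, they can be printed alongside the structured answer to sanity-check the cited sources against what was actually retrieved. This sketch reuses the `response` object from the last cell:

```python
# `response` carries "input", "context" (the retrieved Documents), and "answer".
for i, doc in enumerate(response["context"], start=1):
    print(f"Source {i}: {doc.metadata.get('source')}")
    print(doc.page_content[:200], "...\n")
```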
{ "cells": [ { "cell_type": "markdown", "id": "a05c860c", "metadata": {}, "source": [ "# How to split text by tokens \n", "\n", "Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model. " ] }, { "cell_type": "markdown", "id": "7683b36a", "metadata": {}, "source": [ "## tiktoken\n", "\n", ":::note\n", "[tiktoken](https://github.com/openai/tiktoken) is a fast `BPE` tokenizer created by `OpenAI`.\n", ":::\n", "\n", "\n", "We can use `tiktoken` to estimate tokens used. It will probably be more accurate for the OpenAI models.\n", "\n", "1. How the text is split: by character passed in.\n", "2. How the chunk size is measured: by `tiktoken` tokenizer.\n", "\n", "[CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html), [RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html), and [TokenTextSplitter](https://python.langchain.com/api_reference/langchain_text_splitters/base/langchain_text_splitters.base.TokenTextSplitter.html) can be used with `tiktoken` directly." ] }, { "cell_type": "code", "execution_count": null, "id": "6c4ef83e-f43a-4658-ad1a-3952e0a5bbe7", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-text-splitters tiktoken" ] }, { "cell_type": "code", "execution_count": 1, "id": "1ad2d0f2", "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "# This is a long document we can split up.\n", "with open(\"state_of_the_union.txt\") as f:\n", " state_of_the_union = f.read()" ] }, { "cell_type": "markdown", "id": "a3ba1d8a", "metadata": {}, "source": [ "To split with a [CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html) and then merge chunks with `tiktoken`, use its `.from_tiktoken_encoder()` method. Note that splits from this method can be larger than the chunk size measured by the `tiktoken` tokenizer.\n", "\n", "The `.from_tiktoken_encoder()` method takes either `encoding_name` as an argument (e.g. `cl100k_base`), or the `model_name` (e.g. `gpt-4`). All additional arguments like `chunk_size`, `chunk_overlap`, and `separators` are used to instantiate `CharacterTextSplitter`:" ] }, { "cell_type": "code", "execution_count": 6, "id": "825f7c0a", "metadata": {}, "outputs": [], "source": [ "text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n", " encoding_name=\"cl100k_base\", chunk_size=100, chunk_overlap=0\n", ")\n", "texts = text_splitter.split_text(state_of_the_union)" ] }, { "cell_type": "code", "execution_count": 3, "id": "ae35d165", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n", "\n", "Last year COVID-19 kept us apart. This year we are finally together again. \n", "\n", "Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. 
\n", "\n", "With a duty to one another to the American people to the Constitution.\n" ] } ], "source": [ "print(texts[0])" ] }, { "cell_type": "markdown", "id": "de5b6a6e", "metadata": {}, "source": [ "To implement a hard constraint on the chunk size, we can use `RecursiveCharacterTextSplitter.from_tiktoken_encoder`, where each split will be recursively split if it has a larger size:" ] }, { "cell_type": "code", "execution_count": 4, "id": "0262a991", "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n", " model_name=\"gpt-4\",\n", " chunk_size=100,\n", " chunk_overlap=0,\n", ")" ] }, { "cell_type": "markdown", "id": "04457e3a", "metadata": {}, "source": [ "We can also load a `TokenTextSplitter` splitter, which works with `tiktoken` directly and will ensure each split is smaller than chunk size." ] }, { "cell_type": "code", "execution_count": 8, "id": "4454c70e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Madam Speaker, Madam Vice President, our\n" ] } ], "source": [ "from langchain_text_splitters import TokenTextSplitter\n", "\n", "text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)\n", "\n", "texts = text_splitter.split_text(state_of_the_union)\n", "print(texts[0])" ] }, { "cell_type": "markdown", "id": "3bc155d0", "metadata": {}, "source": [ "Some written languages (e.g. Chinese and Japanese) have characters which encode to 2 or more tokens. Using the `TokenTextSplitter` directly can split the tokens for a character between two chunks causing malformed Unicode characters. Use `RecursiveCharacterTextSplitter.from_tiktoken_encoder` or `CharacterTextSplitter.from_tiktoken_encoder` to ensure chunks contain valid Unicode strings." ] }, { "cell_type": "markdown", "id": "55f95f06", "metadata": {}, "source": [ "## spaCy\n", "\n", ":::note\n", "[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.\n", ":::\n", "\n", "LangChain implements splitters based on the [spaCy tokenizer](https://spacy.io/api/tokenizer).\n", "\n", "1. How the text is split: by `spaCy` tokenizer.\n", "2. How the chunk size is measured: by number of characters." ] }, { "cell_type": "code", "execution_count": null, "id": "d0b9242f-690c-4819-b35a-bb68187281ed", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet spacy" ] }, { "cell_type": "code", "execution_count": 1, "id": "f1de7767", "metadata": {}, "outputs": [], "source": [ "# This is a long document we can split up.\n", "with open(\"state_of_the_union.txt\") as f:\n", " state_of_the_union = f.read()" ] }, { "cell_type": "code", "execution_count": 4, "id": "cef2b29e", "metadata": {}, "outputs": [ {
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to pass callbacks in at runtime\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", "\n", "In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n", "\n", "This prevents us from having to manually attach the handlers to each individual nested object. Here's an example:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chain RunnableSequence started\n", "Chain ChatPromptTemplate started\n", "Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]\n", "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'))]] llm_output={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n", "Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def 
on_llm_end(self, response: LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain.invoke({\"number\": \"2\"}, config={\"callbacks\": callbacks})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there are already existing callbacks associated with a module, these will run in addition to any passed in at runtime.\n", "\n", "## Next steps\n", "\n", "You've now learned how to pass callbacks at runtime.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks into a module constructor](/docs/how_to/custom_callbacks)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 2 }
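For contrast with the runtime approach above, here is a hedged sketch of attaching a handler at constructor time instead; constructor callbacks are scoped to the object they are passed to, so in this sketch only the chat model's events would be logged. It reuses the `LoggingHandler`, `prompt`, and model name from the example above:

```python
# Constructor callbacks are scoped to this object only, not the whole run.
llm_with_logging = ChatAnthropic(
    model="claude-3-sonnet-20240229", callbacks=[LoggingHandler()]
)

chain = prompt | llm_with_logging
chain.invoke({"number": "2"})
```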
{ "cells": [ { "cell_type": "markdown", "id": "612eac0a", "metadata": {}, "source": [ "# How to do retrieval with contextual compression\n", "\n", "One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.\n", "\n", "Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.\n", "\n", "To use the Contextual Compression Retriever, you'll need:\n", "\n", "- a base retriever\n", "- a Document Compressor\n", "\n", "The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.\n", "\n", "## Get started" ] }, { "cell_type": "code", "execution_count": 1, "id": "e0029369", "metadata": {}, "outputs": [], "source": [ "# Helper function for printing docs\n", "\n", "\n", "def pretty_print_docs(docs):\n", " print(\n", " f\"\\n{'-' * 100}\\n\".join(\n", " [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n", " )\n", " )" ] }, { "cell_type": "markdown", "id": "9d2360fc", "metadata": {}, "source": [ "## Using a vanilla vector store retriever\n", "Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "25c26947-958d-4219-8ca0-daa3a51bd344", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", "\n", "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n", "----------------------------------------------------------------------------------------------------\n", "Document 2:\n", "\n", "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. 
Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n", "\n", "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n", "\n", "We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n", "\n", "We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n", "\n", "We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n", "\n", "We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n", "----------------------------------------------------------------------------------------------------\n", "Document 3:\n", "\n", "And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n", "\n", "As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n", "\n", "While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n", "\n", "And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n", "\n", "So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n", "\n", "First, beat the opioid epidemic.\n", "----------------------------------------------------------------------------------------------------\n", "Document 4:\n", "\n", "Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n", "\n", "And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n", "\n", "That ends on my watch. \n", "\n", "Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n", "\n", "We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n", "\n", "Let’s pass the Paycheck Fairness Act and paid leave. \n", "\n", "Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. 
\n", "\n", "Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.\n" ] } ], "source": [ "from langchain_community.document_loaders import TextLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "documents = TextLoader(\"state_of_the_union.txt\").load()\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "texts = text_splitter.split_documents(documents)\n", "retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()\n", "\n", "docs = retriever.invoke(\"What did the president say about Ketanji Brown Jackson\")\n", "pretty_print_docs(docs)" ] }, { "cell_type": "markdown", "id": "3473c553", "metadata": {}, "source": [ "## Adding contextual compression with an `LLMChainExtractor`\n", "Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "d83e3c63-bcde-43e9-998e-35bf2ebef49b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n",
"\n", "We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n" ] } ], "source": [ "from langchain.retrievers.document_compressors import EmbeddingsFilter\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "embeddings = OpenAIEmbeddings()\n", "embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\n", "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=embeddings_filter, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "markdown", "id": "2074462b", "metadata": {}, "source": [ "## Stringing compressors and document transformers together\n", "Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example `TextSplitter`s can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsRedundantFilter` can be used to filter out redundant documents based on embedding similarity between documents.\n", "\n", "Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "617a1756", "metadata": {}, "outputs": [], "source": [ "from langchain.retrievers.document_compressors import DocumentCompressorPipeline\n", "from langchain_community.document_transformers import EmbeddingsRedundantFilter\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=\". \")\n", "redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)\n", "relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\n", "pipeline_compressor = DocumentCompressorPipeline(\n", " transformers=[splitter, redundant_filter, relevant_filter]\n", ")" ] }, { "cell_type": "code", "execution_count": 8, "id": "40b9c1db-7ac2-4257-935a-b107da50bb43", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson\n", "----------------------------------------------------------------------------------------------------\n", "Document 2:\n", "\n", "As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n", "\n", "While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year\n", "----------------------------------------------------------------------------------------------------\n", "Document 3:\n", "\n", "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. 
A consensus builder\n", "----------------------------------------------------------------------------------------------------\n", "Document 4:\n", "\n", "Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n", "\n", "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n", "\n", "We can do both\n" ] } ], "source": [ "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=pipeline_compressor, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "code", "execution_count": null, "id": "78581dcb", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
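As a closing, hedged sketch (reusing `compression_retriever` from the cells above and assuming `langchain-openai` is installed), the compression retriever can be dropped into a small LCEL question-answering chain just like any other retriever:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n{context}\n\nQuestion: {question}"
)

# The compressed documents are passed to the prompt as the "context" value.
qa_chain = (
    {"context": compression_retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

qa_chain.invoke("What did the president say about Ketanji Brown Jackson?")
```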
"Params: {'model_name': 'CustomChatModel'}\n" ] } ], "source": [ "llm = CustomLLM(n=5)\n", "print(llm)" ] }, { "cell_type": "code", "execution_count": 3, "id": "8cd49199", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'This '" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.invoke(\"This is a foobar thing\")" ] }, { "cell_type": "code", "execution_count": 4, "id": "511b3cb1-9c6f-49b6-9002-a2ec490632b0", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'world'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await llm.ainvoke(\"world\")" ] }, { "cell_type": "code", "execution_count": 5, "id": "d9d5bec2-d60a-4ebd-a97d-ac32c98ab02f", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "['woof ', 'meow ']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.batch([\"woof woof woof\", \"meow meow meow\"])" ] }, { "cell_type": "code", "execution_count": 6, "id": "fe246b29-7a93-4bef-8861-389445598c25", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "['woof ', 'meow ']" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await llm.abatch([\"woof woof woof\", \"meow meow meow\"])" ] }, { "cell_type": "code", "execution_count": 7, "id": "3a67c38f-b83b-4eb9-a231-441c55ee8c82", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "h|e|l|l|o|" ] } ], "source": [ "async for token in llm.astream(\"hello\"):\n", " print(token, end=\"|\", flush=True)" ] }, { "cell_type": "markdown", "id": "b62c282b-3a35-4529-aac4-2c2f0916790e", "metadata": {}, "source": [ "Let's confirm that in integrates nicely with other `LangChain` APIs." 
] }, { "cell_type": "code", "execution_count": 15, "id": "d5578e74-7fa8-4673-afee-7a59d442aaff", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate" ] }, { "cell_type": "code", "execution_count": 16, "id": "672ff664-8673-4832-9f4f-335253880141", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompt = ChatPromptTemplate.from_messages(\n", " [(\"system\", \"you are a bot\"), (\"human\", \"{input}\")]\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "id": "c400538a-9146-4c93-9fac-293d8f9ca6bf", "metadata": { "tags": [] }, "outputs": [], "source": [ "llm = CustomLLM(n=7)\n", "chain = prompt | llm" ] }, { "cell_type": "code", "execution_count": 18, "id": "080964af-3e2d-4573-85cb-0d7cc58a6f42", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'event': 'on_chain_start', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'name': 'RunnableSequence', 'tags': [], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}\n", "{'event': 'on_prompt_start', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}\n", "{'event': 'on_prompt_end', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}, 'output': ChatPromptValue(messages=[SystemMessage(content='you are a bot'), HumanMessage(content='hello there!')])}}\n", "{'event': 'on_llm_start', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'input': {'prompts': ['System: you are a bot\\nHuman: hello there!']}}}\n", "{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'S'}}\n", "{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'S'}}\n", "{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'y'}}\n", "{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'y'}}\n" ] } ], "source": [ "idx = 0\n", "async for event in chain.astream_events({\"input\": \"hello there!\"}, version=\"v1\"):\n", " print(event)\n", " idx += 1\n", " if idx > 7:\n", " # Truncate\n", " break" ] }, { "cell_type": "markdown", "id": "a85e848a-5316-4318-b770-3f8fd34f4231", "metadata": {}, "source": [ "## Contributing\n", "\n", "We appreciate all chat model integration contributions. \n", "\n", "Here's a checklist to help make sure your contribution gets added to LangChain:\n", "\n", "Documentation:\n", "\n",
{ "cells": [ { "cell_type": "markdown", "id": "fc0db1bc", "metadata": {}, "source": [ "# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n", "\n", "Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n", "\n", "By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n", "\n", "To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n", "\n", "The [LongContextReorder](https://python.langchain.com/api_reference/community/document_transformers/langchain_community.document_transformers.long_context_reorder.LongContextReorder.html) document transformer implements this re-ordering procedure. Below we demonstrate an example." ] }, { "cell_type": "code", "execution_count": null, "id": "2074fdaa-edff-468a-970f-6f5f26e93d4a", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet sentence-transformers langchain-chroma langchain langchain-openai langchain-huggingface > /dev/null" ] }, { "cell_type": "markdown", "id": "c97eaaf2-34b7-4770-9949-e1abc4ca5226", "metadata": {}, "source": [ "First we embed some artificial documents and index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store. We will use [Hugging Face](/docs/integrations/text_embedding/huggingfacehub/) embeddings, but any LangChain vector store or embeddings model will suffice." ] }, { "cell_type": "code", "execution_count": 2, "id": "49cbcd8e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='This is a document about the Boston Celtics'),\n", " Document(page_content='The Celtics are my favourite team.'),\n", " Document(page_content='L. 
Kornet is one of the best Celtics players.'),\n", " Document(page_content='The Boston Celtics won the game by 20 points'),\n", " Document(page_content='Larry Bird was an iconic NBA player.'),\n", " Document(page_content='Elden Ring is one of the best games in the last 15 years.'),\n", " Document(page_content='Basquetball is a great sport.'),\n", " Document(page_content='I simply love going to the movies'),\n", " Document(page_content='Fly me to the moon is one of my favourite songs.'),\n", " Document(page_content='This is just a random text.')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_chroma import Chroma\n", "from langchain_huggingface import HuggingFaceEmbeddings\n", "\n", "# Get embeddings.\n", "embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n", "\n", "texts = [\n", " \"Basquetball is a great sport.\",\n", " \"Fly me to the moon is one of my favourite songs.\",\n", " \"The Celtics are my favourite team.\",\n", " \"This is a document about the Boston Celtics\",\n", " \"I simply love going to the movies\",\n", " \"The Boston Celtics won the game by 20 points\",\n", " \"This is just a random text.\",\n", " \"Elden Ring is one of the best games in the last 15 years.\",\n", " \"L. Kornet is one of the best Celtics players.\",\n", " \"Larry Bird was an iconic NBA player.\",\n", "]\n", "\n", "# Create a retriever\n", "retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(\n", " search_kwargs={\"k\": 10}\n", ")\n", "query = \"What can you tell me about the Celtics?\"\n", "\n", "# Get relevant documents ordered by relevance score\n", "docs = retriever.invoke(query)\n", "docs" ] }, { "cell_type": "markdown", "id": "175d031a-43fa-42f4-93c4-2ba52c3c3ee5", "metadata": {}, "source": [ "Note that documents are returned in descending order of relevance to the query. The `LongContextReorder` document transformer will implement the re-ordering described above:" ] }, { "cell_type": "code", "execution_count": 3, "id": "9a1181f2-a3dc-4614-9233-2196ab65939e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='The Celtics are my favourite team.'),\n", " Document(page_content='The Boston Celtics won the game by 20 points'),\n", " Document(page_content='Elden Ring is one of the best games in the last 15 years.'),\n", " Document(page_content='I simply love going to the movies'),\n", " Document(page_content='This is just a random text.'),\n", " Document(page_content='Fly me to the moon is one of my favourite songs.'),\n", " Document(page_content='Basquetball is a great sport.'),\n", " Document(page_content='Larry Bird was an iconic NBA player.'),\n", " Document(page_content='L. 
Kornet is one of the best Celtics players.'),\n", " Document(page_content='This is a document about the Boston Celtics')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.document_transformers import LongContextReorder\n", "\n", "# Reorder the documents:\n", "# Less relevant document will be at the middle of the list and more\n", "# relevant elements at beginning / end.\n", "reordering = LongContextReorder()\n", "reordered_docs = reordering.transform_documents(docs)\n", "\n", "# Confirm that the 4 relevant documents are at beginning and end.\n", "reordered_docs" ] }, { "cell_type": "markdown", "id": "a8d2ef0c-c397-4d8d-8118-3f7acf86d241", "metadata": {}, "source": [ "Below, we show how to incorporate the re-ordered documents into a simple question-answering chain:" ] }, { "cell_type": "code", "execution_count": 5, "id": "8bbea705-d5b9-4ed5-9957-e12547283622", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "The Celtics are a professional basketball team and one of the most iconic franchises in the NBA. They are highly regarded and have a large fan base. The team has had many successful seasons and is often considered one of the top teams in the league. They have a strong history and have produced many great players, such as Larry Bird and L. Kornet. The team is based in Boston and is often referred to as the Boston Celtics.\n" ] } ], "source": [ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI()\n", "\n", "prompt_template = \"\"\"\n",
"Given these texts:\n", "-----\n", "{context}\n", "-----\n", "Please answer the following question:\n", "{query}\n", "\"\"\"\n", "\n", "prompt = PromptTemplate(\n", " template=prompt_template,\n", " input_variables=[\"context\", \"query\"],\n", ")\n", "\n", "# Create and invoke the chain:\n", "chain = create_stuff_documents_chain(llm, prompt)\n", "response = chain.invoke({\"context\": reordered_docs, \"query\": query})\n", "print(response)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
{ "cells": [ { "cell_type": "markdown", "id": "5436020b", "metadata": {}, "source": [ "# How to create tools\n", "\n", "When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:\n", "\n", "| Attribute | Type | Description |\n", "|---------------|---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n", "| name | str | Must be unique within a set of tools provided to an LLM or agent. |\n", "| description | str | Describes what the tool does. Used as context by the LLM or agent. |\n", "| args_schema | pydantic.BaseModel | Optional but recommended, and required if using callback handlers. It can be used to provide more information (e.g., few-shot examples) or validation for expected parameters. |\n", "| return_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result direcly to the user. |\n", "\n", "LangChain supports the creation of tools from:\n", "\n", "1. Functions;\n", "2. LangChain [Runnables](/docs/concepts#runnable-interface);\n", "3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n", "\n", "Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n", "\n", "In this guide we provide an overview of these methods.\n", "\n", ":::tip\n", "\n", "Models will perform better if the tools have well chosen names, descriptions and JSON schemas.\n", ":::" ] }, { "cell_type": "markdown", "id": "c7326b23", "metadata": {}, "source": [ "## Creating tools from functions\n", "\n", "### @tool decorator\n", "\n", "This `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description - so a docstring MUST be provided. 
" ] }, { "cell_type": "code", "execution_count": 1, "id": "cc7005cd-072f-4d37-8453-6297468e5192", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:52.645451Z", "iopub.status.busy": "2024-09-10T20:25:52.645081Z", "iopub.status.idle": "2024-09-10T20:25:53.030958Z", "shell.execute_reply": "2024-09-10T20:25:53.030669Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "multiply\n", "Multiply two numbers.\n", "{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}\n" ] } ], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "# Let's inspect some of the attributes associated with the tool.\n", "print(multiply.name)\n", "print(multiply.description)\n", "print(multiply.args)" ] }, { "cell_type": "markdown", "id": "96698b67-993a-4c97-b867-333132e1eb14", "metadata": {}, "source": [ "Or create an **async** implementation, like this:" ] }, { "cell_type": "code", "execution_count": 2, "id": "0c0991db-b997-4611-be37-4346e660506b", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.032544Z", "iopub.status.busy": "2024-09-10T20:25:53.032420Z", "iopub.status.idle": "2024-09-10T20:25:53.035349Z", "shell.execute_reply": "2024-09-10T20:25:53.035123Z" } }, "outputs": [], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "async def amultiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b" ] }, { "cell_type": "markdown", "id": "8f0edc51-c586-414c-8941-c8abe779943f", "metadata": {}, "source": [ "Note that `@tool` supports parsing of annotations, nested schemas, and other features:" ] }, { "cell_type": "code", "execution_count": 3, "id": "5626423f-053e-4a66-adca-1d794d835397", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.036658Z", "iopub.status.busy": "2024-09-10T20:25:53.036574Z", "iopub.status.idle": "2024-09-10T20:25:53.041154Z", "shell.execute_reply": "2024-09-10T20:25:53.040964Z" } }, "outputs": [ { "data": { "text/plain": [ "{'description': 'Multiply a by the maximum of b.',\n", " 'properties': {'a': {'description': 'scale factor',\n", " 'title': 'A',\n", " 'type': 'string'},\n", " 'b': {'description': 'list of ints over which to take maximum',\n", " 'items': {'type': 'integer'},\n", " 'title': 'B',\n", " 'type': 'array'}},\n", " 'required': ['a', 'b'],\n", " 'title': 'multiply_by_maxSchema',\n", " 'type': 'object'}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Annotated, List\n", "\n", "\n", "@tool\n", "def multiply_by_max(\n", " a: Annotated[str, \"scale factor\"],\n", " b: Annotated[List[int], \"list of ints over which to take maximum\"],\n", ") -> int:\n", " \"\"\"Multiply a by the maximum of b.\"\"\"\n", " return a * max(b)\n", "\n", "\n", "multiply_by_max.args_schema.schema()" ] }, { "cell_type": "markdown", "id": "98d6eee9", "metadata": {}, "source": [ "You can also customize the tool name and JSON args by passing them into the tool decorator." ] }, { "cell_type": "code", "execution_count": 4, "id": "9216d03a-f6ea-4216-b7e1-0661823a4c0b",
150032
"\n", "print(multiply.invoke({\"a\": 2, \"b\": 3}))\n", "print(await multiply.ainvoke({\"a\": 2, \"b\": 3}))" ] }, { "cell_type": "markdown", "id": "97aba6cc-4bdf-4fab-aff3-d89e7d9c3a09", "metadata": {}, "source": [ "## How to create async tools\n", "\n", "LangChain Tools implement the [Runnable interface 🏃](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html).\n", "\n", "All Runnables expose the `invoke` and `ainvoke` methods (as well as other methods like `batch`, `abatch`, `astream` etc).\n", "\n", "So even if you only provide an `sync` implementation of a tool, you could still use the `ainvoke` interface, but there\n", "are some important things to know:\n", "\n", "* LangChain's by default provides an async implementation that assumes that the function is expensive to compute, so it'll delegate execution to another thread.\n", "* If you're working in an async codebase, you should create async tools rather than sync tools, to avoid incuring a small overhead due to that thread.\n", "* If you need both sync and async implementations, use `StructuredTool.from_function` or sub-class from `BaseTool`.\n", "* If implementing both sync and async, and the sync code is fast to run, override the default LangChain async implementation and simply call the sync code.\n", "* You CANNOT and SHOULD NOT use the sync `invoke` with an `async` tool." ] }, { "cell_type": "code", "execution_count": 11, "id": "6615cb77-fd4c-4676-8965-f92cc71d4944", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.142587Z", "iopub.status.busy": "2024-09-10T20:25:53.142504Z", "iopub.status.idle": "2024-09-10T20:25:53.147205Z", "shell.execute_reply": "2024-09-10T20:25:53.146995Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "10\n" ] } ], "source": [ "from langchain_core.tools import StructuredTool\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(func=multiply)\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(\n", " await calculator.ainvoke({\"a\": 2, \"b\": 5})\n", ") # Uses default LangChain async implementation incurs small overhead" ] }, { "cell_type": "code", "execution_count": 12, "id": "bb2af583-eadd-41f4-a645-bf8748bd3dcd", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.148383Z", "iopub.status.busy": "2024-09-10T20:25:53.148307Z", "iopub.status.idle": "2024-09-10T20:25:53.152684Z", "shell.execute_reply": "2024-09-10T20:25:53.152486Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "10\n" ] } ], "source": [ "from langchain_core.tools import StructuredTool\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "async def amultiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(\n", " await calculator.ainvoke({\"a\": 2, \"b\": 5})\n", ") # Uses use provided amultiply without additional overhead" ] }, { "cell_type": "markdown", "id": "c80ffdaa-e4ba-4a70-8500-32bf4f60cc1a", "metadata": {}, "source": [ "You should not and cannot use `.invoke` when providing only an async definition." 
] }, { "cell_type": "code", "execution_count": 13, "id": "4ad0932c-8610-4278-8c57-f9218f654c8a", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.153849Z", "iopub.status.busy": "2024-09-10T20:25:53.153773Z", "iopub.status.idle": "2024-09-10T20:25:53.158312Z", "shell.execute_reply": "2024-09-10T20:25:53.158130Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Raised not implemented error. You should not be doing this.\n" ] } ], "source": [ "@tool\n", "async def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "try:\n", " multiply.invoke({\"a\": 2, \"b\": 3})\n", "except NotImplementedError:\n", " print(\"Raised not implemented error. You should not be doing this.\")" ] }, { "cell_type": "markdown", "id": "f9c746a7-88d7-4afb-bcb8-0e98b891e8b6", "metadata": {}, "source": [ "## Handling Tool Errors \n", "\n", "If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n", "\n", "A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n", "\n", "When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n", "\n", "You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n", "\n", "Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`." ] }, { "cell_type": "code", "execution_count": 14, "id": "7094c0e8-6192-4870-a942-aad5b5ae48fd", "metadata": { "execution": { "iopub.execute_input": "2024-09-10T20:25:53.159440Z", "iopub.status.busy": "2024-09-10T20:25:53.159364Z", "iopub.status.idle": "2024-09-10T20:25:53.160922Z", "shell.execute_reply": "2024-09-10T20:25:53.160712Z" } },
150039
"id": "8ba1764d-0272-4f98-adcf-b48cb2c0a315", "metadata": {}, "source": [ "### Invoking the tool\n", "\n", "Great! We're able to generate tool invocations. But what if we want to actually call the tool? To do so we'll need to pass the generated tool args to our tool. As a simple example we'll just extract the arguments of the first tool_call:" ] }, { "cell_type": "code", "execution_count": 12, "id": "4f5325ca-e5dc-4d1a-ba36-b085a029c90a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "92" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from operator import itemgetter\n", "\n", "chain = llm_with_tools | (lambda x: x.tool_calls[0][\"args\"]) | multiply\n", "chain.invoke(\"What's four times 23\")" ] }, { "cell_type": "markdown", "id": "79a9eb63-383d-4dd4-a162-08b4a52ef4d9", "metadata": {}, "source": [ "Check out the [LangSmith trace here](https://smith.langchain.com/public/16bbabb9-fc9b-41e5-a33d-487c42df4f85/r)." ] }, { "cell_type": "markdown", "id": "0521d3d5", "metadata": {}, "source": [ "## Agents\n", "\n", "Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n", "\n", "LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts#agents).\n", "\n", "We'll use the [tool calling agent](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n", "\n", "![agent](../../static/img/tool_agent.svg)" ] }, { "cell_type": "code", "execution_count": 13, "id": "21723cf4-9421-4a8d-92a6-eeeb8f4367f1", "metadata": {}, "outputs": [], "source": [ "from langchain import hub\n", "from langchain.agents import AgentExecutor, create_tool_calling_agent" ] }, { "cell_type": "code", "execution_count": 14, "id": "6be83879-9da3-4dd9-b147-a79f76affd7a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m System Message \u001b[0m================================\n", "\n", "You are a helpful assistant\n", "\n", "=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n", "\n", "\u001b[33;1m\u001b[1;3m{chat_history}\u001b[0m\n", "\n", "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "\u001b[33;1m\u001b[1;3m{input}\u001b[0m\n", "\n", "=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n", "\n", "\u001b[33;1m\u001b[1;3m{agent_scratchpad}\u001b[0m\n" ] } ], "source": [ "# Get the prompt to use - can be replaced with any prompt that includes variables \"agent_scratchpad\" and \"input\"!\n", "prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n", "prompt.pretty_print()" ] }, { "cell_type": "markdown", "id": "616f9714-5b18-4eed-b88a-d38e4cb1de99", "metadata": {}, "source": [ "Agents are also great because they make it easy to use multiple tools." 
] }, { "cell_type": "code", "execution_count": 15, "id": "95c86d32-ee45-4c87-a28c-14eff19b49e9", "metadata": {}, "outputs": [], "source": [ "@tool\n", "def add(first_int: int, second_int: int) -> int:\n", " \"Add two integers.\"\n", " return first_int + second_int\n", "\n", "\n", "@tool\n", "def exponentiate(base: int, exponent: int) -> int:\n", " \"Exponentiate the base to the exponent power.\"\n", " return base**exponent\n", "\n", "\n", "tools = [multiply, add, exponentiate]" ] }, { "cell_type": "code", "execution_count": 16, "id": "17b09ac6-c9b7-4340-a8a0-3d3061f7888c", "metadata": {}, "outputs": [], "source": [ "# Construct the tool calling agent\n", "agent = create_tool_calling_agent(llm, tools, prompt)" ] }, { "cell_type": "code", "execution_count": 17, "id": "675091d2-cac9-45c4-a5d7-b760ee6c1986", "metadata": {}, "outputs": [], "source": [ "# Create an agent executor by passing in the agent and tools\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)" ] }, { "cell_type": "markdown", "id": "a6099ab6-2fa6-452d-b73c-7fb65daab451", "metadata": {}, "source": [ "With an agent, we can ask questions that require arbitrarily-many uses of our tools:" ] }, { "cell_type": "code", "execution_count": 18, "id": "f7dbb240-809e-4e41-8f63-1a4636e8e26d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `exponentiate` with `{'base': 3, 'exponent': 5}`\n", "\n", "\n", "\u001b[0m\u001b[38;5;200m\u001b[1;3m243\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `add` with `{'first_int': 12, 'second_int': 3}`\n", "\n", "\n", "\u001b[0m\u001b[33;1m\u001b[1;3m15\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `multiply` with `{'first_int': 243, 'second_int': 15}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m3645\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `exponentiate` with `{'base': 405, 'exponent': 2}`\n", "\n", "\n",
150042
"Note that we didn't need to wrap the custom function `(lambda x: x.content[:5])` in a `RunnableLambda` constructor because the `model` on the left of the pipe operator is already a Runnable. The custom function is **coerced** into a runnable. See [this section](/docs/how_to/sequence/#coercion) for more information.\n", "\n", "## Passing run metadata\n", "\n", "Runnable lambdas can optionally accept a [RunnableConfig](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig) parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs." ] }, { "cell_type": "code", "execution_count": 5, "id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'foo': 'bar'}\n", "Tokens Used: 62\n", "\tPrompt Tokens: 56\n", "\tCompletion Tokens: 6\n", "Successful Requests: 1\n", "Total Cost (USD): $9.6e-05\n" ] } ], "source": [ "import json\n", "\n", "from langchain_core.runnables import RunnableConfig\n", "\n", "\n", "def parse_or_fix(text: str, config: RunnableConfig):\n", " fixing_chain = (\n", " ChatPromptTemplate.from_template(\n", " \"Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}\"\n", " \" Don't narrate, just respond with the fixed data.\"\n", " )\n", " | model\n", " | StrOutputParser()\n", " )\n", " for _ in range(3):\n", " try:\n", " return json.loads(text)\n", " except Exception as e:\n", " text = fixing_chain.invoke({\"input\": text, \"error\": e}, config)\n", " return \"Failed to parse\"\n", "\n", "\n", "from langchain_community.callbacks import get_openai_callback\n", "\n", "with get_openai_callback() as cb:\n", " output = RunnableLambda(parse_or_fix).invoke(\n", " \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n", " )\n", " print(output)\n", " print(cb)" ] }, { "cell_type": "code", "execution_count": 6, "id": "1a5e709e-9d75-48c7-bb9c-503251990505", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'foo': 'bar'}\n", "Tokens Used: 62\n", "\tPrompt Tokens: 56\n", "\tCompletion Tokens: 6\n", "Successful Requests: 1\n", "Total Cost (USD): $9.6e-05\n" ] } ], "source": [ "from langchain_community.callbacks import get_openai_callback\n", "\n", "with get_openai_callback() as cb:\n", " output = RunnableLambda(parse_or_fix).invoke(\n", " \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n", " )\n", " print(output)\n", " print(cb)" ] }, { "cell_type": "markdown", "id": "922b48bd", "metadata": {}, "source": [ "## Streaming\n", "\n", ":::note\n", "[RunnableLambda](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.\n", ":::\n", "\n", "You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a chain.\n", "\n", "The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. 
Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.\n", "\n", "These are useful for:\n", "- implementing a custom output parser\n", "- modifying the output of a previous step, while preserving streaming capabilities\n", "\n", "Here's an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:" ] }, { "cell_type": "code", "execution_count": 7, "id": "29f55c38", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "lion, tiger, wolf, gorilla, panda" ] } ], "source": [ "from typing import Iterator, List\n", "\n", "prompt = ChatPromptTemplate.from_template(\n", " \"Write a comma-separated list of 5 animals similar to: {animal}. Do not include numbers\"\n", ")\n", "\n", "str_chain = prompt | model | StrOutputParser()\n", "\n", "for chunk in str_chain.stream({\"animal\": \"bear\"}):\n", " print(chunk, end=\"\", flush=True)" ] }, { "cell_type": "markdown", "id": "46345323", "metadata": {}, "source": [ "Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list:" ] }, { "cell_type": "code", "execution_count": 8, "id": "f08b8a5b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['lion']\n", "['tiger']\n", "['wolf']\n", "['gorilla']\n", "['raccoon']\n" ] } ], "source": [ "# This is a custom parser that splits an iterator of llm tokens\n", "# into a list of strings separated by commas\n", "def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:\n", " # hold partial input until we get a comma\n", " buffer = \"\"\n", " for chunk in input:\n", " # add current chunk to buffer\n", " buffer += chunk\n", " # while there are commas in the buffer\n", " while \",\" in buffer:\n", " # split buffer on comma\n", " comma_index = buffer.index(\",\")\n", " # yield everything before the comma\n", " yield [buffer[:comma_index].strip()]\n", " # save the rest for the next iteration\n", " buffer = buffer[comma_index + 1 :]\n", " # yield the last chunk\n", " yield [buffer.strip()]\n", "\n", "\n", "list_chain = str_chain | split_into_list\n", "\n", "for chunk in list_chain.stream({\"animal\": \"bear\"}):\n", " print(chunk, flush=True)" ] }, { "cell_type": "markdown", "id": "0a5adb69", "metadata": {}, "source": [ "Invoking it gives a full array of values:" ] }, { "cell_type": "code", "execution_count": 9, "id": "9ea4ddc6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['lion', 'tiger', 'wolf', 'gorilla', 'raccoon']" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" }
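 ], "source": [ "list_chain.invoke({\"animal\": \"bear\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The async case is analogous. As a rough sketch (reusing the `str_chain` defined above), an async generator with the signature `AsyncIterator[str] -> AsyncIterator[List[str]]` can be piped into a chain in the same way and consumed with `astream`:\n", "\n", "```python\n", "from typing import AsyncIterator, List\n", "\n", "\n", "async def asplit_into_list(input: AsyncIterator[str]) -> AsyncIterator[List[str]]:\n", "    buffer = \"\"\n", "    async for chunk in input:\n", "        buffer += chunk\n", "        while \",\" in buffer:\n", "            comma_index = buffer.index(\",\")\n", "            yield [buffer[:comma_index].strip()]\n", "            buffer = buffer[comma_index + 1 :]\n", "    yield [buffer.strip()]\n", "\n", "\n", "alist_chain = str_chain | asplit_into_list\n", "\n", "async for chunk in alist_chain.astream({\"animal\": \"bear\"}):\n", "    print(chunk, flush=True)\n", "```" ] }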
150046
"It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by `RunnablePassthrough.assign`-ing the tool output. This will take whatever the input is to the RunnablePassrthrough components (assumed to be a dictionary) and add a key to it while still passing through everything that's currently in the input:" ] }, { "cell_type": "code", "execution_count": 23, "id": "45404406-859d-4caa-8b9d-5838162c80a0", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'name': 'multiply',\n", " 'arguments': {'x': 13, 'y': 4.14137281},\n", " 'output': 53.83784653}" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnablePassthrough\n", "\n", "chain = (\n", " prompt | model | JsonOutputParser() | RunnablePassthrough.assign(output=invoke_tool)\n", ")\n", "chain.invoke({\"input\": \"what's thirteen times 4.14137281\"})" ] }, { "cell_type": "markdown", "id": "1797fe82-ea35-4cba-834a-1caf9740d184", "metadata": {}, "source": [ "## What's next?\n", "\n", "This how-to guide shows the \"happy path\" when the model correctly outputs all the required tool information.\n", "\n", "In reality, if you're using more complex tools, you will start encountering errors from the model, especially for models that have not been fine tuned for tool calling and for less capable models.\n", "\n", "You will need to be prepared to add strategies to improve the output from the model; e.g.,\n", "\n", "1. Provide few shot examples.\n", "2. Add error handling (e.g., catch the exception and feed it back to the LLM to ask it to correct its previous output)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }
150047
{ "cells": [ { "cell_type": "raw", "id": "beba2e0e", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "id": "bb0735c0", "metadata": {}, "source": [ "# How to use few shot examples in chat models\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Example selectors](/docs/concepts/#example-selectors)\n", "- [Chat models](/docs/concepts/#chat-model)\n", "- [Vectorstores](/docs/concepts/#vector-stores)\n", "\n", ":::\n", "\n", "This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n", "\n", "There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html?highlight=fewshot#langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate) as a flexible starting point, and you can modify or replace them as you see fit.\n", "\n", "The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n", "\n", "**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/docs/concepts/#message-types) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/docs/how_to/few_shot_examples/) guide." ] }, { "cell_type": "markdown", "id": "d716f2de-cc29-4823-9360-a808c7bfdb86", "metadata": { "tags": [] }, "source": [ "## Fixed Examples\n", "\n", "The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.\n", "\n", "The basic components of the template are:\n", "- `examples`: A list of dictionary examples to include in the final prompt.\n", "- `example_prompt`: converts each example into 1 or more messages through its [`format_messages`](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=format_messages#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.\n", "\n", "Below is a simple demonstration. First, define the examples you'd like to include. 
Let's give the LLM an unfamiliar mathematical operator, denoted by the \"🦜\" emoji:" ] }, { "cell_type": "code", "execution_count": 1, "id": "5b79e400", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai langchain-chroma\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "markdown", "id": "30856d92", "metadata": {}, "source": [ "If we try to ask the model what the result of this expression is, it will fail:" ] }, { "cell_type": "code", "execution_count": 4, "id": "174dec5b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='The expression \"2 🦜 9\" is not a standard mathematical operation or equation. It appears to be a combination of the number 2 and the parrot emoji 🦜 followed by the number 9. It does not have a specific mathematical meaning.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 17, 'total_tokens': 71}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-aad12dda-5c47-4a1e-9949-6fe94e03242a-0', usage_metadata={'input_tokens': 17, 'output_tokens': 54, 'total_tokens': 71})" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "model = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.0)\n", "\n", "model.invoke(\"What is 2 🦜 9?\")" ] }, { "cell_type": "markdown", "id": "e6d58385", "metadata": {}, "source": [ "Now let's see what happens if we give the LLM some examples to work with. We'll define some below:" ] }, { "cell_type": "code", "execution_count": 5, "id": "0fc5a02a-6249-4e92-95c3-30fff9671e8b", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n", "\n", "examples = [\n", " {\"input\": \"2 🦜 2\", \"output\": \"4\"},\n", " {\"input\": \"2 🦜 3\", \"output\": \"5\"},\n", "]" ] }, { "cell_type": "markdown", "id": "e8710ecc-2aa0-4172-a74c-250f6bc3d9e2", "metadata": {}, "source": [ "Next, assemble them into the few-shot prompt template." ] }, { "cell_type": "code", "execution_count": 6, "id": "65e72ad1-9060-47d0-91a1-bc130c8b98ac", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[HumanMessage(content='2 🦜 2'), AIMessage(content='4'), HumanMessage(content='2 🦜 3'), AIMessage(content='5')]\n" ] } ], "source": [ "# This is a prompt template used to format each individual example.\n", "example_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"human\", \"{input}\"),\n", " (\"ai\", \"{output}\"),\n", " ]\n", ")\n", "few_shot_prompt = FewShotChatMessagePromptTemplate(\n", " example_prompt=example_prompt,\n", " examples=examples,\n", ")\n", "\n", "print(few_shot_prompt.invoke({}).to_messages())" ] }, { "cell_type": "markdown", "id": "5490bd59-b28f-46a4-bbdf-0191802dd3c5", "metadata": {}, "source": [ "Finally, we assemble the final prompt as shown below, passing `few_shot_prompt` directly into the `from_messages` factory method, and use it with a model:" ] }, { "cell_type": "code", "execution_count": 7, "id": "9f86d6d9-50de-41b6-b6c7-0f9980cc0187", "metadata": { "tags": [] }, "outputs": [], "source": [
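 "# A minimal sketch of the assembly described above (the system message is illustrative):\n", "final_prompt = ChatPromptTemplate.from_messages(\n", "    [\n", "        (\"system\", \"You are a wondrous wizard of math.\"),\n", "        few_shot_prompt,\n", "        (\"human\", \"{input}\"),\n", "    ]\n", ")" ] }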
150050
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to combine results from multiple retrievers\n", "\n", "The [EnsembleRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n", "\n", "By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n", "\n", "The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.\n", "\n", "## Basic usage\n", "\n", "Below we demonstrate ensembling of a [BM25Retriever](https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet rank_bm25 > /dev/null" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from langchain.retrievers import EnsembleRetriever\n", "from langchain_community.retrievers import BM25Retriever\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "doc_list_1 = [\n", " \"I like apples\",\n", " \"I like oranges\",\n", " \"Apples and oranges are fruits\",\n", "]\n", "\n", "# initialize the bm25 retriever and faiss retriever\n", "bm25_retriever = BM25Retriever.from_texts(\n", " doc_list_1, metadatas=[{\"source\": 1}] * len(doc_list_1)\n", ")\n", "bm25_retriever.k = 2\n", "\n", "doc_list_2 = [\n", " \"You like apples\",\n", " \"You like oranges\",\n", "]\n", "\n", "embedding = OpenAIEmbeddings()\n", "faiss_vectorstore = FAISS.from_texts(\n", " doc_list_2, embedding, metadatas=[{\"source\": 2}] * len(doc_list_2)\n", ")\n", "faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={\"k\": 2})\n", "\n", "# initialize the ensemble retriever\n", "ensemble_retriever = EnsembleRetriever(\n", " retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='I like apples', metadata={'source': 1}),\n", " Document(page_content='You like apples', metadata={'source': 2}),\n", " Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),\n", " Document(page_content='You like oranges', metadata={'source': 2})]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "docs = ensemble_retriever.invoke(\"apples\")\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Runtime Configuration\n", "\n", "We can also 
configure the individual retrievers at runtime using [configurable fields](/docs/how_to/configure). Below we update the \"top-k\" parameter for the FAISS retriever specifically:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from langchain_core.runnables import ConfigurableField\n", "\n", "faiss_retriever = faiss_vectorstore.as_retriever(\n", " search_kwargs={\"k\": 2}\n", ").configurable_fields(\n", " search_kwargs=ConfigurableField(\n", " id=\"search_kwargs_faiss\",\n", " name=\"Search Kwargs\",\n", " description=\"The search kwargs to use\",\n", " )\n", ")\n", "\n", "ensemble_retriever = EnsembleRetriever(\n", " retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n", ")" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='I like apples', metadata={'source': 1}),\n", " Document(page_content='You like apples', metadata={'source': 2}),\n", " Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "config = {\"configurable\": {\"search_kwargs_faiss\": {\"k\": 1}}}\n", "docs = ensemble_retriever.invoke(\"apples\", config=config)\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that this only returns one source from the FAISS retriever, because we pass in the relevant configuration at run time" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 4 }
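The Reciprocal Rank Fusion step referenced above can be summarized with a small illustrative sketch (this is not the `EnsembleRetriever` source code): each document's fused score is a weighted sum of `1 / (rank + c)` over the retrievers that returned it, where `c` is a smoothing constant (60 in the cited paper), and documents are returned in order of decreasing fused score.

```python
# Illustrative Reciprocal Rank Fusion (RRF) scoring, not the library implementation.
def rrf_ranking(ranked_lists, weights, c=60):
    """ranked_lists: one list of doc ids per retriever, best match first."""
    scores = {}
    for ranking, weight in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (rank + c)
    return sorted(scores, key=scores.get, reverse=True)


# Two retrievers with equal weight; "b" ranks well in both and comes out first.
print(rrf_ranking([["a", "b", "c"], ["b", "d"]], weights=[0.5, 0.5]))
```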
150054
{ "cells": [ { "cell_type": "markdown", "id": "72b1b316", "metadata": {}, "source": [ "# How to parse JSON output\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [Output parsers](/docs/concepts/#output-parsers)\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Structured output](/docs/how_to/structured_output)\n", "- [Chaining runnables together](/docs/how_to/sequence/)\n", "\n", ":::\n", "\n", "While some model providers support [built-in ways to return structured output](/docs/how_to/structured_output), not all do. We can use an output parser to help users to specify an arbitrary JSON schema via the prompt, query a model for outputs that conform to that schema, and finally parse that schema as JSON.\n", "\n", ":::note\n", "Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON.\n", ":::" ] }, { "cell_type": "markdown", "id": "ae909b7a", "metadata": {}, "source": [ "The [`JsonOutputParser`](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) is one built-in option for prompting for and then parsing JSON output. While it is similar in functionality to the [`PydanticOutputParser`](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html), it also supports streaming back partial JSON objects.\n", "\n", "Here's an example of how it can be used alongside [Pydantic](https://docs.pydantic.dev/) to conveniently declare the expected schema:" ] }, { "cell_type": "code", "execution_count": null, "id": "dd9d9110", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "4ccf45a3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'setup': \"Why couldn't the bicycle stand up by itself?\",\n", " 'punchline': 'Because it was two tired!'}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import JsonOutputParser\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import ChatOpenAI\n", "from pydantic import BaseModel, Field\n", "\n", "model = ChatOpenAI(temperature=0)\n", "\n", "\n", "# Define your desired data structure.\n", "class Joke(BaseModel):\n", " setup: str = Field(description=\"question to set up a joke\")\n", " punchline: str = Field(description=\"answer to resolve the joke\")\n", "\n", "\n", "# And a query intented to prompt a language model to populate the data structure.\n", "joke_query = \"Tell me a joke.\"\n", "\n", "# Set up a parser + inject instructions into the prompt template.\n", "parser = JsonOutputParser(pydantic_object=Joke)\n", "\n", "prompt = PromptTemplate(\n", " template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n", " input_variables=[\"query\"],\n", " partial_variables={\"format_instructions\": parser.get_format_instructions()},\n", ")\n", "\n", "chain = prompt | model | parser\n", "\n", "chain.invoke({\"query\": joke_query})" ] }, { "cell_type": "markdown", "id": "51ffa2e3", "metadata": {}, "source": [ "Note that we are passing 
`format_instructions` from the parser directly into the prompt. You can and should experiment with adding your own formatting hints in the other parts of your prompt to either augment or replace the default instructions:" ] }, { "cell_type": "code", "execution_count": 3, "id": "72de9c82", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'The output should be formatted as a JSON instance that conforms to the JSON schema below.\\n\\nAs an example, for the schema {\"properties\": {\"foo\": {\"title\": \"Foo\", \"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\\nthe object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\\n\\nHere is the output schema:\\n```\\n{\"properties\": {\"setup\": {\"title\": \"Setup\", \"description\": \"question to set up a joke\", \"type\": \"string\"}, \"punchline\": {\"title\": \"Punchline\", \"description\": \"answer to resolve the joke\", \"type\": \"string\"}}, \"required\": [\"setup\", \"punchline\"]}\\n```'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "parser.get_format_instructions()" ] }, { "cell_type": "markdown", "id": "37d801be", "metadata": {}, "source": [ "## Streaming\n", "\n", "As mentioned above, a key difference between the `JsonOutputParser` and the `PydanticOutputParser` is that the `JsonOutputParser` output parser supports streaming partial chunks. Here's what that looks like:" ] }, { "cell_type": "code", "execution_count": 4, "id": "0309256d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{}\n", "{'setup': ''}\n", "{'setup': 'Why'}\n", "{'setup': 'Why couldn'}\n", "{'setup': \"Why couldn't\"}\n", "{'setup': \"Why couldn't the\"}\n", "{'setup': \"Why couldn't the bicycle\"}\n", "{'setup': \"Why couldn't the bicycle stand\"}\n", "{'setup': \"Why couldn't the bicycle stand up\"}\n", "{'setup': \"Why couldn't the bicycle stand up by\"}\n", "{'setup': \"Why couldn't the bicycle stand up by itself\"}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\"}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': ''}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because'}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because it'}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because it was'}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because it was two'}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because it was two tired'}\n", "{'setup': \"Why couldn't the bicycle stand up by itself?\", 'punchline': 'Because it was two tired!'}\n" ] } ], "source": [ "for s in chain.stream({\"query\": joke_query}):\n", " print(s)" ] }, { "cell_type": "markdown", "id": "344bd968", "metadata": {}, "source": [
150056
# How to create and query vector stores :::info Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores. ::: One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you. ## Get started This guide showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model interfaces](/docs/how_to/embed_text) before diving into this. Before using the vectorstore at all, we need to load some data and initialize an embedding model. We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. ```python import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') ``` ```python from langchain_community.document_loaders import TextLoader from langchain_openai import OpenAIEmbeddings from langchain_text_splitters import CharacterTextSplitter # Load the document, split it into chunks, embed each chunk and load it into the vector store. raw_documents = TextLoader('state_of_the_union.txt').load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) documents = text_splitter.split_documents(raw_documents) ``` import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; There are many great vector store options, here are a few that are free, open-source, and run entirely on your local machine. Review all integrations for many great hosted offerings. <Tabs> <TabItem value="chroma" label="Chroma" default> This walkthrough uses the `chroma` vector database, which runs on your local machine as a library. ```bash pip install langchain-chroma ``` ```python from langchain_chroma import Chroma db = Chroma.from_documents(documents, OpenAIEmbeddings()) ``` </TabItem> <TabItem value="faiss" label="FAISS"> This walkthrough uses the `FAISS` vector database, which makes use of the Facebook AI Similarity Search (FAISS) library. ```bash pip install faiss-cpu ``` ```python from langchain_community.vectorstores import FAISS db = FAISS.from_documents(documents, OpenAIEmbeddings()) ``` </TabItem> <TabItem value="lance" label="Lance"> This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format. ```bash pip install lancedb ``` ```python from langchain_community.vectorstores import LanceDB import lancedb db = lancedb.connect("/tmp/lancedb") table = db.create_table( "my_table", data=[ { "vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite", ) db = LanceDB.from_documents(documents, OpenAIEmbeddings()) ``` </TabItem> </Tabs> ## Similarity search All vectorstores expose a `similarity_search` method. This will take incoming documents, create an embedding of them, and then find all documents with the most similar embedding. ```python query = "What did the president say about Ketanji Brown Jackson" docs = db.similarity_search(query) print(docs[0].page_content) ``` ```output Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. 
And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ``` ### Similarity search by vector It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string. ```python embedding_vector = OpenAIEmbeddings().embed_query(query) docs = db.similarity_search_by_vector(embedding_vector) print(docs[0].page_content) ``` ```output Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ``` ## Async Operations
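Vector store methods generally have async counterparts prefixed with `a` (e.g., `asimilarity_search`). A minimal sketch, assuming the `db` and `query` from above and an async context such as a notebook or an `async def` function:

```python
# Async counterpart of similarity_search
docs = await db.asimilarity_search(query)
print(docs[0].page_content)
```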
150058
{ "cells": [ { "cell_type": "raw", "id": "27598444", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_position: 3\n", "keywords: [structured output, json, information extraction, with_structured_output]\n", "---" ] }, { "cell_type": "markdown", "id": "6e3f0f72", "metadata": {}, "source": [ "# How to return structured data from a model\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [Function/tool calling](/docs/concepts/#functiontool-calling)\n", ":::\n", "\n", "It is often useful to have a model return output that matches a specific schema. One common use-case is extracting data from text to insert into a database or use with some other downstream system. This guide covers a few strategies for getting structured outputs from a model.\n", "\n", "## The `.with_structured_output()` method\n", "\n", "<span data-heading-keywords=\"with_structured_output\"></span>\n", "\n", ":::info Supported models\n", "\n", "You can find a [list of models that support this method here](/docs/integrations/chat/).\n", "\n", ":::\n", "\n", "This is the easiest and most reliable way to get structured outputs. `with_structured_output()` is implemented for models that provide native APIs for structuring outputs, like tool/function calling or JSON mode, and makes use of these capabilities under the hood.\n", "\n", "This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a TypedDict class, [JSON Schema](https://json-schema.org/) or a Pydantic class. If TypedDict or JSON Schema are used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then a Pydantic object will be returned.\n", "\n", "As an example, let's get a model to generate a joke and separate the setup from the punchline:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", "/>\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "6d55008f", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)" ] }, { "cell_type": "markdown", "id": "a808a401-be1f-49f9-ad13-58dd68f7db5f", "metadata": {}, "source": [ "### Pydantic class\n", "\n", "If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class. The key advantage of using Pydantic is that the model-generated output will be validated. Pydantic will raise an error if any required fields are missing or if any fields are of the wrong type." 
] }, { "cell_type": "code", "execution_count": 4, "id": "070bf702", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Optional\n", "\n", "from pydantic import BaseModel, Field\n", "\n", "\n", "# Pydantic\n", "class Joke(BaseModel):\n", " \"\"\"Joke to tell user.\"\"\"\n", "\n", " setup: str = Field(description=\"The setup of the joke\")\n", " punchline: str = Field(description=\"The punchline to the joke\")\n", " rating: Optional[int] = Field(\n", " default=None, description=\"How funny the joke is, from 1 to 10\"\n", " )\n", "\n", "\n", "structured_llm = llm.with_structured_output(Joke)\n", "\n", "structured_llm.invoke(\"Tell me a joke about cats\")" ] }, { "cell_type": "markdown", "id": "00890a47-3cdf-4805-b8f1-6d110f0633d3", "metadata": {}, "source": [ ":::tip\n", "Beyond just the structure of the Pydantic class, the name of the Pydantic class, the docstring, and the names and provided descriptions of parameters are very important. Most of the time `with_structured_output` is using a model's function/tool calling API, and you can effectively think of all of this information as being added to the model prompt.\n", ":::" ] }, { "cell_type": "markdown", "id": "deddb6d3", "metadata": {}, "source": [ "### TypedDict or JSON Schema\n", "\n", "If you don't want to use Pydantic, explicitly don't want validation of the arguments, or want to be able to stream the model outputs, you can define your schema using a TypedDict class. We can optionally use a special `Annotated` syntax supported by LangChain that allows you to specify the default value and description of a field. Note, the default value is *not* filled in automatically if the model doesn't generate it, it is only used in defining the schema that is passed to the model.\n", "\n", ":::info Requirements\n", "\n", "- Core: `langchain-core>=0.2.26`\n", "- Typing extensions: It is highly recommended to import `Annotated` and `TypedDict` from `typing_extensions` instead of `typing` to ensure consistent behavior across Python versions.\n", "\n", ":::" ] }, { "cell_type": "code", "execution_count": 8, "id": "70d82891-42e8-424a-919e-07d83bcfec61", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'setup': 'Why was the cat sitting on the computer?',\n", " 'punchline': 'Because it wanted to keep an eye on the mouse!',\n", " 'rating': 7}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing_extensions import Annotated, TypedDict\n", "\n", "\n", "# TypedDict\n", "class Joke(TypedDict):\n", " \"\"\"Joke to tell user.\"\"\"\n", "\n", " setup: Annotated[str, ..., \"The setup of the joke\"]\n", "\n", " # Alternatively, we could have specified setup as:\n", "\n", " # setup: str # no default, no description\n", " # setup: Annotated[str, ...] # no default, no description\n", " # setup: Annotated[str, \"foo\"] # default, no description\n", "\n", " punchline: Annotated[str, ..., \"The punchline of the joke\"]\n", " rating: Annotated[Optional[int], None, \"How funny the joke is, from 1 to 10\"]\n", "\n", "\n", "structured_llm = llm.with_structured_output(Joke)\n", "\n", "structured_llm.invoke(\"Tell me a joke about cats\")" ] }, { "cell_type": "markdown", "id": "e4d7b4dc-f617-4ea8-aa58-847c228791b4", "metadata": {}, "source": [
150061
"For models that support more than one means of structuring outputs (i.e., they support both tool calling and JSON mode), you can specify which method to use with the `method=` argument.\n", "\n", ":::info JSON mode\n", "\n", "If using JSON mode you'll have to still specify the desired schema in the model prompt. The schema you pass to `with_structured_output` will only be used for parsing the model outputs, it will not be passed to the model the way it is with tool calling.\n", "\n", "To see if the model you're using supports JSON mode, check its entry in the [API reference](https://python.langchain.com/api_reference/langchain/index.html).\n", "\n", ":::" ] }, { "cell_type": "code", "execution_count": 15, "id": "df0370e3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'setup': 'Why was the cat sitting on the computer?',\n", " 'punchline': 'Because it wanted to keep an eye on the mouse!'}" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structured_llm = llm.with_structured_output(None, method=\"json_mode\")\n", "\n", "structured_llm.invoke(\n", " \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n", ")" ] }, { "cell_type": "markdown", "id": "91e95aa2", "metadata": {}, "source": [ "### (Advanced) Raw outputs\n", "\n", "LLMs aren't perfect at generating structured output, especially as schemas become complex. You can avoid raising exceptions and handle the raw output yourself by passing `include_raw=True`. This changes the output format to contain the raw message output, the `parsed` value (if successful), and any resulting errors:" ] }, { "cell_type": "code", "execution_count": 17, "id": "10ed2842", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\",\"rating\":7}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 33, 'prompt_tokens': 93, 'total_tokens': 126}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-d880d7e2-df08-4e9e-ad92-dfc29f2fd52f-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}, 'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 93, 'output_tokens': 33, 'total_tokens': 126}),\n", " 'parsed': {'setup': 'Why was the cat sitting on the computer?',\n", " 'punchline': 'Because it wanted to keep an eye on the mouse!',\n", " 'rating': 7},\n", " 'parsing_error': None}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "structured_llm = llm.with_structured_output(Joke, include_raw=True)\n", "\n", "structured_llm.invoke(\"Tell me a joke about cats\")" ] }, { "cell_type": "markdown", "id": "5e92a98a", "metadata": {}, "source": [ "## Prompting and parsing model outputs directly\n", "\n", "Not all models support `.with_structured_output()`, since not all models have tool calling or JSON mode support. 
For such models you'll need to directly prompt the model to use a specific format, and use an output parser to extract the structured response from the raw model output.\n", "\n", "### Using `PydanticOutputParser`\n", "\n", "The following example uses the built-in [`PydanticOutputParser`](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html) to parse the output of a chat model prompted to match the given Pydantic schema. Note that we are adding `format_instructions` directly to the prompt from a method on the parser:" ] }, { "cell_type": "code", "execution_count": 31, "id": "6e514455", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.output_parsers import PydanticOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from pydantic import BaseModel, Field\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " name: str = Field(..., description=\"The name of the person\")\n", " height_in_meters: float = Field(\n", " ..., description=\"The height of the person expressed in meters.\"\n", " )\n", "\n", "\n", "class People(BaseModel):\n", " \"\"\"Identifying information about all people in a text.\"\"\"\n", "\n", " people: List[Person]\n", "\n", "\n", "# Set up a parser\n", "parser = PydanticOutputParser(pydantic_object=People)\n", "\n", "# Prompt\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Answer the user query. Wrap the output in `json` tags\\n{format_instructions}\",\n", " ),\n", " (\"human\", \"{query}\"),\n", " ]\n", ").partial(format_instructions=parser.get_format_instructions())" ] }, { "cell_type": "markdown", "id": "082fa166", "metadata": {}, "source": [ "Let’s take a look at what information is sent to the model:" ] }, { "cell_type": "code", "execution_count": 37, "id": "3d73d33d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "System: Answer the user query. Wrap the output in `json` tags\n", "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n", "\n", "As an example, for the schema {\"properties\": {\"foo\": {\"title\": \"Foo\", \"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n", "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n", "\n", "Here is the output schema:\n", "```\n",
150063
{ "cells": [ { "cell_type": "markdown", "id": "4da7ae91-4973-4e97-a570-fa24024ec65d", "metadata": {}, "source": [ "# How to do query validation as part of SQL question-answering\n", "\n", "Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In this guide we'll go over some strategies for validating our queries and handling invalid queries.\n", "\n", "We will cover: \n", "\n", "1. Appending a \"query validator\" step to the query generation;\n", "2. Prompt engineering to reduce the incidence of errors.\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables:" ] }, { "cell_type": "code", "execution_count": null, "id": "5d40d5bc-3647-4b5d-808a-db470d40fe7a", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-openai" ] }, { "cell_type": "code", "execution_count": null, "id": "71f46270-e1c6-45b4-b36e-ea2e9f860eba", "metadata": {}, "outputs": [], "source": [ "# Uncomment the below to use LangSmith. Not required.\n", "# import os\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "id": "a0a2151b-cecf-4559-92a1-ca48824fed18", "metadata": {}, "source": [ "The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:\n", "\n", "* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook_Sqlite.sql`\n", "* Run `sqlite3 Chinook.db`\n", "* Run `.read Chinook_Sqlite.sql`\n", "* Test `SELECT * FROM Artist LIMIT 10;`\n", "\n", "Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:" ] }, { "cell_type": "code", "execution_count": 1, "id": "8cedc936-5268-4bfa-b838-bdcc1ee9573c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sqlite\n", "['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track']\n", "[(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]\n" ] } ], "source": [ "from langchain_community.utilities import SQLDatabase\n", "\n", "db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n", "print(db.dialect)\n", "print(db.get_usable_table_names())\n", "print(db.run(\"SELECT * FROM Artist LIMIT 10;\"))" ] }, { "cell_type": "markdown", "id": "2d203315-fab7-4621-80da-41e9bf82d803", "metadata": {}, "source": [ "## Query checker\n", "\n", "Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. 
Suppose we have the following SQL query chain:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "d81ebf69-75ad-4c92-baa9-fd152b8e622a", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI()" ] }, { "cell_type": "code", "execution_count": 4, "id": "ec66bb76-b1ad-48ad-a7d4-b518e9421b86", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import create_sql_query_chain\n", "\n", "chain = create_sql_query_chain(llm, db)" ] }, { "cell_type": "markdown", "id": "da01023d-cc05-43e3-a38d-ed9d56d3ad15", "metadata": {}, "source": [ "And we want to validate its outputs. We can do so by extending the chain with a second prompt and model call:" ] }, { "cell_type": "code", "execution_count": 5, "id": "16686750-d8ee-4c60-8d67-b28281cb6164", "metadata": {}, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "system = \"\"\"Double check the user's {dialect} query for common mistakes, including:\n", "- Using NOT IN with NULL values\n", "- Using UNION when UNION ALL should have been used\n", "- Using BETWEEN for exclusive ranges\n", "- Data type mismatch in predicates\n", "- Properly quoting identifiers\n", "- Using the correct number of arguments for functions\n", "- Casting to the correct data type\n", "- Using the proper columns for joins\n", "\n", "If there are any of the above mistakes, rewrite the query.\n", "If there are no mistakes, just reproduce the original query with no further commentary.\n", "\n", "Output the final SQL query only.\"\"\"\n", "prompt = ChatPromptTemplate.from_messages(\n", " [(\"system\", system), (\"human\", \"{query}\")]\n", ").partial(dialect=db.dialect)\n", "validation_chain = prompt | llm | StrOutputParser()\n", "\n", "full_chain = {\"query\": chain} | validation_chain" ] }, { "cell_type": "code", "execution_count": 10, "id": "28ef9c6e-21fa-4b62-8aa4-8cd398ce4c4d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SELECT AVG(i.Total) AS AverageInvoice\n", "FROM Invoice i\n", "JOIN Customer c ON i.CustomerId = c.CustomerId\n", "WHERE c.Country = 'USA'\n", "AND c.Fax IS NULL\n", "AND i.InvoiceDate >= '2003-01-01' \n", "AND i.InvoiceDate < '2010-01-01'\n" ] } ], "source": [ "query = full_chain.invoke(\n", " {\n", " \"question\": \"What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010\"\n", " }\n", ")\n", "print(query)" ] }, { "cell_type": "markdown", "id": "228a1b87-4e44-4d86-bed7-fd2d7a91fb23", "metadata": {}, "source": [
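The record is truncated here. As a hedged sketch of the second strategy listed in the introduction (prompt engineering to reduce the incidence of errors), the common-mistakes checklist can be folded into the generation prompt itself, so a single model call both writes and sanity-checks the query. The prompt wording below is illustrative, not the notebook's own, and it reuses the `db`, `llm`, and `StrOutputParser` objects from above.

```python
# Hedged sketch: one prompt that both generates and double-checks the query.
combined_system = """You are a {dialect} expert. Given an input question, write a syntactically
correct {dialect} query to answer it. Before responding, double check the query for the common
mistakes listed earlier (NOT IN with NULLs, UNION vs UNION ALL, exclusive BETWEEN ranges, data
type mismatches, identifier quoting, function arity, casts, join columns).

Only use the following tables:
{table_info}

Output the final SQL query only."""

combined_prompt = ChatPromptTemplate.from_messages(
    [("system", combined_system), ("human", "{question}")]
).partial(dialect=db.dialect, table_info=db.get_table_info())

combined_chain = combined_prompt | llm | StrOutputParser()
combined_chain.invoke({"question": "How many artists are there?"})
```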
150066
{ "cells": [ { "cell_type": "raw", "id": "b5fc1fc7-c4c5-418f-99da-006c604a7ea6", "metadata": {}, "source": [ "---\n", "title: Custom Retriever\n", "---" ] }, { "cell_type": "markdown", "id": "ff6f3c79-0848-4956-9115-54f6b2134587", "metadata": {}, "source": [ "# How to create a custom Retriever\n", "\n", "## Overview\n", "\n", "Many LLM applications involve retrieving information from external data sources using a `Retriever`. \n", "\n", "A retriever is responsible for retrieving a list of relevant `Documents` to a given user `query`.\n", "\n", "The retrieved documents are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the to generate an appropriate response (e.g., answering a user question based on a knowledge base).\n", "\n", "## Interface\n", "\n", "To create your own retriever, you need to extend the `BaseRetriever` class and implement the following methods:\n", "\n", "| Method | Description | Required/Optional |\n", "|--------------------------------|--------------------------------------------------|-------------------|\n", "| `_get_relevant_documents` | Get documents relevant to a query. | Required |\n", "| `_aget_relevant_documents` | Implement to provide async native support. | Optional |\n", "\n", "\n", "The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n", "\n", ":::tip\n", "By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n", ":::\n", "\n", "\n", ":::info\n", "You can use a `RunnableLambda` or `RunnableGenerator` to implement a retriever.\n", "\n", "The main benefit of implementing a retriever as a `BaseRetriever` vs. a `RunnableLambda` (a custom [runnable function](/docs/how_to/functions)) is that a `BaseRetriever` is a well\n", "known LangChain entity so some tooling for monitoring may implement specialized behavior for retrievers. Another difference\n", "is that a `BaseRetriever` will behave slightly differently from `RunnableLambda` in some APIs; e.g., the `start` event\n", "in `astream_events` API will be `on_retriever_start` instead of `on_chain_start`.\n", ":::\n" ] }, { "cell_type": "markdown", "id": "2be9fe82-0757-41d1-a647-15bed11fd3bf", "metadata": {}, "source": [ "## Example\n", "\n", "Let's implement a toy retriever that returns all documents whose text contains the text in the user query." 
] }, { "cell_type": "code", "execution_count": 26, "id": "bdf61902-2984-493b-a002-d4fced6df590", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.callbacks import CallbackManagerForRetrieverRun\n", "from langchain_core.documents import Document\n", "from langchain_core.retrievers import BaseRetriever\n", "\n", "\n", "class ToyRetriever(BaseRetriever):\n", " \"\"\"A toy retriever that contains the top k documents that contain the user query.\n", "\n", " This retriever only implements the sync method _get_relevant_documents.\n", "\n", " If the retriever were to involve file access or network access, it could benefit\n", " from a native async implementation of `_aget_relevant_documents`.\n", "\n", " As usual, with Runnables, there's a default async implementation that's provided\n", " that delegates to the sync implementation running on another thread.\n", " \"\"\"\n", "\n", " documents: List[Document]\n", " \"\"\"List of documents to retrieve from.\"\"\"\n", " k: int\n", " \"\"\"Number of top results to return\"\"\"\n", "\n", " def _get_relevant_documents(\n", " self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n", " ) -> List[Document]:\n", " \"\"\"Sync implementations for retriever.\"\"\"\n", " matching_documents = []\n", " for document in documents:\n", " if len(matching_documents) > self.k:\n", " return matching_documents\n", "\n", " if query.lower() in document.page_content.lower():\n", " matching_documents.append(document)\n", " return matching_documents\n", "\n", " # Optional: Provide a more efficient native implementation by overriding\n", " # _aget_relevant_documents\n", " # async def _aget_relevant_documents(\n", " # self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n", " # ) -> List[Document]:\n", " # \"\"\"Asynchronously get documents relevant to a query.\n", "\n", " # Args:\n", " # query: String to find relevant documents for\n", " # run_manager: The callbacks handler to use\n", "\n", " # Returns:\n", " # List of relevant documents\n", " # \"\"\"" ] }, { "cell_type": "markdown", "id": "2eac1f28-29c1-4888-b3aa-b4fa70c73b4c", "metadata": {}, "source": [ "## Test it 🧪" ] }, { "cell_type": "code", "execution_count": 21, "id": "ea868db5-48cc-4ec2-9b0a-1ab94c32b302", "metadata": {}, "outputs": [], "source": [ "documents = [\n", " Document(\n", " page_content=\"Dogs are great companions, known for their loyalty and friendliness.\",\n", " metadata={\"type\": \"dog\", \"trait\": \"loyalty\"},\n", " ),\n", " Document(\n", " page_content=\"Cats are independent pets that often enjoy their own space.\",\n", " metadata={\"type\": \"cat\", \"trait\": \"independence\"},\n", " ),\n", " Document(\n", " page_content=\"Goldfish are popular pets for beginners, requiring relatively simple care.\",\n", " metadata={\"type\": \"fish\", \"trait\": \"low maintenance\"},\n", " ),\n", " Document(\n", " page_content=\"Parrots are intelligent birds capable of mimicking human speech.\",\n", " metadata={\"type\": \"bird\", \"trait\": \"intelligence\"},\n", " ),\n", " Document(\n", " page_content=\"Rabbits are social animals that need plenty of space to hop around.\",\n", " metadata={\"type\": \"rabbit\", \"trait\": \"social\"},\n", " ),\n", "]\n", "retriever = ToyRetriever(documents=documents, k=3)" ] }, { "cell_type": "code", "execution_count": 22, "id": "18be85e9-6ef0-4ee0-ae5d-a0810c38b254", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Cats are independent pets that often enjoy their own 
space.', metadata={'type': 'cat', 'trait': 'independence'}),\n", " Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})]" ] }, "execution_count": 22, "metadata": {},
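The record is cut off here. As a hedged aside on the `RunnableLambda` alternative mentioned in the info box above (an illustration, not part of the original notebook), the same keyword-matching logic can be wrapped as a plain runnable function. It composes into chains identically, but it is traced as a generic chain step rather than as a retriever.

```python
# Hedged sketch: the RunnableLambda alternative to subclassing BaseRetriever.
from langchain_core.runnables import RunnableLambda


def keyword_match(query: str, k: int = 3) -> list:
    """Return up to k documents whose text contains the query (case-insensitive)."""
    return [
        doc for doc in documents if query.lower() in doc.page_content.lower()
    ][:k]


lambda_retriever = RunnableLambda(keyword_match)
lambda_retriever.invoke("cat")  # same matches, but emits on_chain_start rather than on_retriever_start
```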
150068
{ "cells": [ { "cell_type": "markdown", "id": "e5715368", "metadata": {}, "source": [ "# How to track token usage in ChatModels\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "\n", ":::\n", "\n", "Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n", "\n", "This guide requires `langchain-openai >= 0.1.9`." ] }, { "cell_type": "code", "execution_count": null, "id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai" ] }, { "cell_type": "markdown", "id": "598ae1e2-a52d-4459-81fd-cdc68b06742a", "metadata": {}, "source": [ "## Using LangSmith\n", "\n", "You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n", "\n", "## Using AIMessage.usage_metadata\n", "\n", "A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n", "\n", "LangChain `AIMessage` objects include a [usage_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n", "\n", "Examples:\n", "\n", "**OpenAI**:" ] }, { "cell_type": "code", "execution_count": 1, "id": "b39bf807-4125-4db4-bbf7-28a46afff6b4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# # !pip install -qU langchain-openai\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o-mini\")\n", "openai_response = llm.invoke(\"hello\")\n", "openai_response.usage_metadata" ] }, { "cell_type": "markdown", "id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73", "metadata": {}, "source": [ "**Anthropic**:" ] }, { "cell_type": "code", "execution_count": 2, "id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# !pip install -qU langchain-anthropic\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "anthropic_response = llm.invoke(\"hello\")\n", "anthropic_response.usage_metadata" ] }, { "cell_type": "markdown", "id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f", "metadata": {}, "source": [ "### Using AIMessage.response_metadata\n", "\n", "Metadata from the model response is also included in the AIMessage [response_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. 
Note that different providers adopt different conventions for representing token counts:" ] }, { "cell_type": "code", "execution_count": 3, "id": "f156f9da-21f2-4c81-a714-54cbf9ad393e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n", "\n", "Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n" ] } ], "source": [ "print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n", "print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')" ] }, { "cell_type": "markdown", "id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7", "metadata": {}, "source": [ "### Streaming\n", "\n", "Some providers support token count metadata in a streaming context.\n", "\n", "#### OpenAI\n", "\n", "For example, OpenAI will return a message [chunk](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.9` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n", "\n", ":::note\n", "By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n", ":::\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "07f0c872-6b6c-4fed-a129-9b5a858505be", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content='' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content='Hello' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content='!' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content=' How' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content=' can' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content=' I' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content=' assist' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n", "content=' you' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
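The source cell for the stream above is truncated. A hedged sketch of the call that produces output like this, with `stream_usage=True` enabled as described in the note above, might look as follows; the aggregation step is an illustrative assumption.

```python
# Hedged sketch: stream with usage metadata enabled (langchain-openai >= 0.1.9).
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

aggregate = None
for chunk in llm.stream("hello", stream_usage=True):
    print(chunk)
    # Message chunks support addition, so we can accumulate them into one message.
    aggregate = chunk if aggregate is None else aggregate + chunk

print(aggregate.usage_metadata)  # e.g. {'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```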
150075
{ "cells": [ { "cell_type": "markdown", "id": "b8982428", "metadata": {}, "source": [ "# Run models locally\n", "\n", "## Use case\n", "\n", "The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n", "\n", "This has at least two important benefits:\n", "\n", "1. `Privacy`: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n", "2. `Cost`: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n", "\n", "## Overview\n", "\n", "Running an LLM locally requires a few things:\n", "\n", "1. `Open-source LLM`: An open-source LLM that can be freely modified and shared \n", "2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n", "\n", "### Open-source LLMs\n", "\n", "Users can now gain access to a rapidly growing set of [open-source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better). \n", "\n", "These LLMs can be assessed across at least two dimensions (see figure):\n", " \n", "1. `Base model`: What is the base-model and how was it trained?\n", "2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n", "\n", "![Image description](../../static/img/OSS_LLM_overview.png)\n", "\n", "The relative performance of these models can be assessed using several leaderboards, including:\n", "\n", "1. [LmSys](https://chat.lmsys.org/?arena)\n", "2. [GPT4All](https://gpt4all.io/index.html)\n", "3. [HuggingFace](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)\n", "\n", "### Inference\n", "\n", "A few frameworks for this have emerged to support inference of open-source LLMs on various devices:\n", "\n", "1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n", "2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n", "3. [`Ollama`](https://ollama.ai/): Bundles model weights and environment into an app that runs on device and serves the LLM\n", "4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n", "\n", "In general, these frameworks will do a few things:\n", "\n", "1. `Quantization`: Reduce the memory footprint of the raw model weights\n", "2. 
`Efficient implementation for inference`: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n", "\n", "In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n", "\n", "![Image description](../../static/img/llama-memory-weights.png)\n", "\n", "With less precision, we radically decrease the memory needed to store the LLM.\n", "\n", "In addition, we can see the importance of GPU memory bandwidth in this [sheet](https://docs.google.com/spreadsheets/d/1OehfHHNSn66BP2h3Bxp2NJTVX97icU0GmCXF6pK23H8/edit#gid=0)!\n", "\n", "A Mac M2 Max is 5-6x faster than an M1 for inference due to the larger GPU memory bandwidth.\n", "\n", "![Image description](../../static/img/llama_t_put.png)\n", "\n", "### Formatting prompts\n", "\n", "Some providers have [chat model](/docs/concepts/#chat-models) wrappers that take care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailored for your specific model.\n", "\n", "This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n", "\n", "## Quickstart\n", "\n", "[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n", " \n", "The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n", " \n", "* [Download and run](https://ollama.ai/download) the app\n", "* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n", "* When the app is running, all models are automatically served on `localhost:11434`\n" ] }, { "cell_type": "code", "execution_count": null, "id": "29450fc9", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain_ollama" ] }, { "cell_type": "code", "execution_count": 2, "id": "86178adb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'...Neil Armstrong!\\n\\nOn July 20, 1969, Neil Armstrong became the first person to set foot on the lunar surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he stepped off the lunar module Eagle onto the Moon\\'s surface.\\n\\nWould you like to know more about the Apollo 11 mission or Neil Armstrong\\'s achievements?'" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_ollama import OllamaLLM\n", "\n", "llm = OllamaLLM(model=\"llama3.1:8b\")\n", "\n", "llm.invoke(\"The first man on the moon was ...\")" ] }, { "cell_type": "markdown", "id": "674cc672", "metadata": {}, "source": [ "Stream tokens as they are being generated:" ] }, { "cell_type": "code", "execution_count": 3, "id": "1386a852", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "...|" ] }, { "name": "stdout", "output_type": "stream", "text": [
150076
"Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| \"|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|.\"||" ] } ], "source": [ "for chunk in llm.stream(\"The first man on the moon was ...\"):\n", " print(chunk, end=\"|\", flush=True)" ] }, { "cell_type": "markdown", "id": "e5731060", "metadata": {}, "source": [ "Ollama also includes a chat model wrapper that handles formatting conversation turns:" ] }, { "cell_type": "code", "execution_count": 4, "id": "f14a778a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='The answer is a historic one!\\n\\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\\n\\n\"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nArmstrong was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\\n\\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_ollama import ChatOllama\n", "\n", "chat_model = ChatOllama(model=\"llama3.1:8b\")\n", "\n", "chat_model.invoke(\"Who was the first man on the moon?\")" ] }, { "cell_type": "markdown", "id": "5cb27414", "metadata": {}, "source": [ "## Environment\n", "\n", "Inference speed is a challenge when running models locally (see above).\n", "\n", "To minimize latency, it is desirable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n", "\n", "And even with GPU, the available GPU memory bandwidth (as noted above) is important.\n", "\n", "### Running Apple silicon GPU\n", "\n", "`Ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n", " \n", "Other frameworks require the user to set up the environment to utilize the Apple GPU.\n", "\n", "For example, `llama.cpp` python bindings can be configured to use the GPU via [Metal](https://developer.apple.com/metal/).\n", "\n", "Metal is a graphics and compute API created by Apple providing near-direct access to the GPU. 
\n", "\n", "See the [`llama.cpp`](/docs/integrations/llms/llamacpp) setup [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to enable this.\n", "\n", "In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n", "\n", "E.g., for me:\n", "\n", "```\n", "conda activate /Users/rlm/miniforge3/envs/llama\n", "```\n", "\n", "With the above confirmed, then:\n", "\n", "```\n", "CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir\n", "```" ] }, { "cell_type": "markdown", "id": "c382e79a", "metadata": {}, "source": [ "## LLMs\n", "\n", "There are various ways to gain access to quantized model weights.\n", "\n", "1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized model are available for download and can be run with framework such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n", "2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n", "3. [`Ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n", "\n", "### Ollama\n", "\n", "With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n", "\n", "* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n", "* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n", "* See the full set of parameters on the [API reference page](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.ollama.Ollama.html)" ] }, { "cell_type": "code", "execution_count": 42, "id": "8ecd2f78", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "' Sure! Here\\'s the answer, broken down step by step:\\n\\nThe first man on the moon was... Neil Armstrong.\\n\\nHere\\'s how I arrived at that answer:\\n\\n1. The first manned mission to land on the moon was Apollo 11.\\n2. The mission included three astronauts: Neil Armstrong, Edwin \"Buzz\" Aldrin, and Michael Collins.\\n3. Neil Armstrong was the mission commander and the first person to set foot on the moon.\\n4. On July 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nSo, the first man on the moon was Neil Armstrong!'" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm = OllamaLLM(model=\"llama2:13b\")\n", "llm.invoke(\"The first man on the moon was ... think step by step\")" ] }, { "cell_type": "markdown", "id": "07c8c0d1", "metadata": {}, "source": [ "### Llama.cpp\n", "\n", "Llama.cpp is compatible with a [broad set of models](https://github.com/ggerganov/llama.cpp).\n", "\n",
150077
"For example, below we run inference on `llama2-13b` with 4 bit quantization downloaded from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-GGML/tree/main).\n", "\n", "As noted above, see the [API reference](https://python.langchain.com/api_reference/langchain/llms/langchain.llms.llamacpp.LlamaCpp.html?highlight=llamacpp#langchain.llms.llamacpp.LlamaCpp) for the full set of parameters. \n", "\n", "From the [llama.cpp API reference docs](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.llamacpp.LlamaCpp.html), a few are worth commenting on:\n", "\n", "`n_gpu_layers`: number of layers to be loaded into GPU memory\n", "\n", "* Value: 1\n", "* Meaning: Only one layer of the model will be loaded into GPU memory (1 is often sufficient).\n", "\n", "`n_batch`: number of tokens the model should process in parallel \n", "\n", "* Value: n_batch\n", "* Meaning: It's recommended to choose a value between 1 and n_ctx (which in this case is set to 2048)\n", "\n", "`n_ctx`: Token context window\n", "\n", "* Value: 2048\n", "* Meaning: The model will consider a window of 2048 tokens at a time\n", "\n", "`f16_kv`: whether the model should use half-precision for the key/value cache\n", "\n", "* Value: True\n", "* Meaning: The model will use half-precision, which can be more memory efficient; Metal only supports True." ] }, { "cell_type": "code", "execution_count": null, "id": "5eba38dc", "metadata": {}, "outputs": [], "source": [ "%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n", "%env FORCE_CMAKE=1\n", "%pip install --upgrade --quiet llama-cpp-python --no-cache-dirclear" ] }, { "cell_type": "code", "execution_count": null, "id": "a88bf0c8-e989-4bcd-bcb7-4d7757e684f2", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import LlamaCpp\n", "from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n", "\n", "llm = LlamaCpp(\n", " model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n", " n_gpu_layers=1,\n", " n_batch=512,\n", " n_ctx=2048,\n", " f16_kv=True,\n", " callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n", " verbose=True,\n", ")" ] }, { "cell_type": "markdown", "id": "f56f5168", "metadata": {}, "source": [ "The console log will show the below to indicate Metal was enabled properly from steps above:\n", "```\n", "ggml_metal_init: allocating\n", "ggml_metal_init: using MPS\n", "```" ] }, { "cell_type": "code", "execution_count": 45, "id": "7890a077", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Llama.generate: prefix-match hit\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " and use logical reasoning to figure out who the first man on the moon was.\n", "\n", "Here are some clues:\n", "\n", "1. The first man on the moon was an American.\n", "2. He was part of the Apollo 11 mission.\n", "3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\n", "4. His last name is Armstrong.\n", "\n", "Now, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\n", "Therefore, the first man on the moon was Neil Armstrong!" 
] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "llama_print_timings: load time = 9623.21 ms\n", "llama_print_timings: sample time = 143.77 ms / 203 runs ( 0.71 ms per token, 1412.01 tokens per second)\n", "llama_print_timings: prompt eval time = 485.94 ms / 7 tokens ( 69.42 ms per token, 14.40 tokens per second)\n", "llama_print_timings: eval time = 6385.16 ms / 202 runs ( 31.61 ms per token, 31.64 tokens per second)\n", "llama_print_timings: total time = 7279.28 ms\n" ] }, { "data": { "text/plain": [ "\" and use logical reasoning to figure out who the first man on the moon was.\\n\\nHere are some clues:\\n\\n1. The first man on the moon was an American.\\n2. He was part of the Apollo 11 mission.\\n3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\\n4. His last name is Armstrong.\\n\\nNow, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\\nTherefore, the first man on the moon was Neil Armstrong!\"" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.invoke(\"The first man on the moon was ... Let's think step by step\")" ] }, { "cell_type": "markdown", "id": "831ddf7c", "metadata": {}, "source": [ "### GPT4All\n", "\n", "We can use model weights downloaded from [GPT4All](/docs/integrations/llms/gpt4all) model explorer.\n", "\n", "Similar to what is shown above, we can run inference and use [the API reference](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.gpt4all.GPT4All.html) to set parameters of interest." ] }, { "cell_type": "code", "execution_count": null, "id": "e27baf6e", "metadata": {}, "outputs": [], "source": [ "%pip install gpt4all" ] }, { "cell_type": "code", "execution_count": null, "id": "915ecd4c-8f6b-4de3-a787-b64cb7c682b4", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import GPT4All\n", "\n", "llm = GPT4All(\n", " model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\"\n", ")" ] }, { "cell_type": "code", "execution_count": 47, "id": "e3d4526f", "metadata": {}, "outputs": [ { "data": { "text/plain": [
150080
{ "cells": [ { "cell_type": "raw", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "keywords: [Runnable, Runnables, RunnableSequence, LCEL, chain, chains, chaining]\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to chain runnables\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [Output parser](/docs/concepts/#output-parsers)\n", "\n", ":::\n", "\n", "One point about [LangChain Expression Language](/docs/concepts/#langchain-expression-language) is that any two runnables can be \"chained\" together into sequences. The output of the previous runnable's `.invoke()` call is passed as input to the next runnable. This can be done using the pipe operator (`|`), or the more explicit `.pipe()` method, which does the same thing.\n", "\n", "The resulting [`RunnableSequence`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html) is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like [LangSmith](/docs/how_to/debugging).\n", "\n", "## The pipe operator: `|`\n", "\n", "To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"model\"\n", "/>\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "if \"ANTHROPIC_API_KEY\" not in os.environ:\n", " os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n", "\n", "model = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n", "\n", "chain = prompt | model | StrOutputParser()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. 
We can then invoke the resulting sequence like any other runnable:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Here's a bear joke for you:\\n\\nWhy did the bear dissolve in water?\\nBecause it was a polar bear!\"" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Coercion\n", "\n", "We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.\n", "\n", "For example, let's say we wanted to compose the joke generating chain with another chain that evaluates whether or not the generated joke was funny.\n", "\n", "We would need to be careful with how we format the input into the next chain. In the below example, the dict in the chain is automatically parsed and converted into a [`RunnableParallel`](/docs/how_to/parallel), which runs all of its values in parallel and returns a dict with the results.\n", "\n", "This happens to be the same format the next prompt template expects. Here it is in action:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Haha, that\\'s a clever play on words! Using \"polar\" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing. I appreciate a good pun or wordplay joke.'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "\n", "analysis_prompt = ChatPromptTemplate.from_template(\"is this a funny joke? {joke}\")\n", "\n", "composed_chain = {\"joke\": chain} | analysis_prompt | model | StrOutputParser()\n", "\n", "composed_chain.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Haha, that's a cute and punny joke! I like how it plays on the idea of beets blushing or turning red like someone blushing. Food puns can be quite amusing. While not a total knee-slapper, it's a light-hearted, groan-worthy dad joke that would make me chuckle and shake my head. Simple vegetable humor!\"" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "composed_chain_with_lambda = (\n", " chain\n", " | (lambda input: {\"joke\": input})\n", " | analysis_prompt\n", " | model\n", " | StrOutputParser()\n", ")\n", "\n", "composed_chain_with_lambda.invoke({\"topic\": \"beets\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, keep in mind that using functions like this may interfere with operations like streaming. See [this section](/docs/how_to/functions) for more information." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The `.pipe()` method\n", "\n", "We could also compose the same sequence using the `.pipe()` method. Here's what that looks like:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [
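The `.pipe()` cell is truncated here. A hedged sketch of the composition it describes, reusing the `chain`, `analysis_prompt`, and `model` objects defined above, might look like this:

```python
# Hedged sketch: the same logical flow as the pipe-operator chain, built with .pipe().
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

composed_chain_with_pipe = (
    RunnableParallel({"joke": chain})
    .pipe(analysis_prompt)
    .pipe(model)
    .pipe(StrOutputParser())
)

composed_chain_with_pipe.invoke({"topic": "bears"})
```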
150087
{ "cells": [ { "cell_type": "markdown", "id": "674a0d41-e3e3-4423-a995-25d40128c518", "metadata": {}, "source": [ "# How to do question answering over CSVs\n", "\n", "LLMs are great for building question-answering systems over various types of data sources. In this section we'll go over how to build Q&A systems over data stored in a CSV file(s). Like working with SQL databases, the key to working with CSV files is to give an LLM access to tools for querying and interacting with the data. The two main ways to do this are to either:\n", "\n", "* **RECOMMENDED**: Load the CSV(s) into a SQL database, and use the approaches outlined in the [SQL tutorial](/docs/tutorials/sql_qa).\n", "* Give the LLM access to a Python environment where it can use libraries like Pandas to interact with the data.\n", "\n", "We will cover both approaches in this guide.\n", "\n", "## ⚠️ Security note ⚠️\n", "\n", "Both approaches mentioned above carry significant risks. Using SQL requires executing model-generated SQL queries. Using a library like Pandas requires letting the model execute Python code. Since it is easier to tightly scope SQL connection permissions and sanitize SQL queries than it is to sandbox Python environments, **we HIGHLY recommend interacting with CSV data via SQL.** For more on general security best practices, [see here](/docs/security)." ] }, { "cell_type": "markdown", "id": "d20c20d7-71e1-4808-9012-48278f3a9b94", "metadata": {}, "source": [ "## Setup\n", "Dependencies for this guide:" ] }, { "cell_type": "code", "execution_count": null, "id": "c3fcf245-b0aa-4aee-8f0a-9c9cf94b065e", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai langchain-community langchain-experimental pandas" ] }, { "cell_type": "markdown", "id": "7f2e34a3-0978-4856-8844-d8dfc6d5ac51", "metadata": {}, "source": [ "Set required environment variables:" ] }, { "cell_type": "code", "execution_count": 1, "id": "53913d79-4a11-4bc6-bb49-dea2cc8c453b", "metadata": {}, "outputs": [], "source": [ "# Using LangSmith is recommended but not required. 
Uncomment below lines to use.\n", "# import os\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "markdown", "id": "c23b4232-2f6a-4eb5-b0cb-1d48a9e02fcc", "metadata": {}, "source": [ "Download the [Titanic dataset](https://www.kaggle.com/datasets/yasserh/titanic-dataset) if you don't already have it:" ] }, { "cell_type": "code", "execution_count": null, "id": "1c9099c7-5247-4edb-ba5d-10c3c4c60db4", "metadata": {}, "outputs": [], "source": [ "!wget https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv -O titanic.csv" ] }, { "cell_type": "code", "execution_count": 1, "id": "ad029641-6d6c-44cc-b16f-2d5472672adf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(887, 8)\n", "['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'Siblings/Spouses Aboard', 'Parents/Children Aboard', 'Fare']\n" ] } ], "source": [ "import pandas as pd\n", "\n", "df = pd.read_csv(\"titanic.csv\")\n", "print(df.shape)\n", "print(df.columns.tolist())" ] }, { "cell_type": "markdown", "id": "1779ab07-b715-49e5-ab2a-2e6be7d02927", "metadata": {}, "source": [ "## SQL\n", "\n", "Using SQL to interact with CSV data is the recommended approach because it is easier to limit permissions and sanitize queries than with arbitrary Python.\n", "\n", "Most SQL databases make it easy to load a CSV file in as a table ([DuckDB](https://duckdb.org/docs/data/csv/overview.html), [SQLite](https://www.sqlite.org/csv.html), etc.). Once you've done this you can use all of the chain and agent-creating techniques outlined in the [SQL tutorial](/docs/tutorials/sql_qa). Here's a quick example of how we might do this with SQLite:" ] }, { "cell_type": "code", "execution_count": 2, "id": "f61e9886-4713-4c88-87d4-dab439687f43", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "887" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.utilities import SQLDatabase\n", "from sqlalchemy import create_engine\n", "\n", "engine = create_engine(\"sqlite:///titanic.db\")\n", "df.to_sql(\"titanic\", engine, index=False)" ] }, { "cell_type": "code", "execution_count": 3, "id": "3275fc91-3777-4f78-8edf-d148001684b0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sqlite\n", "['titanic']\n", "[(1, 2, 'Master. Alden Gates Caldwell', 'male', 0.83, 0, 2, 29.0), (0, 3, 'Master. Eino Viljami Panula', 'male', 1.0, 4, 1, 39.6875), (1, 3, 'Miss. Eleanor Ileen Johnson', 'female', 1.0, 1, 1, 11.1333), (1, 2, 'Master. Richard F Becker', 'male', 1.0, 2, 1, 39.0), (1, 1, 'Master. Hudson Trevor Allison', 'male', 0.92, 1, 2, 151.55), (1, 3, 'Miss. Maria Nakid', 'female', 1.0, 0, 2, 15.7417), (0, 3, 'Master. Sidney Leonard Goodwin', 'male', 1.0, 5, 2, 46.9), (1, 3, 'Miss. Helene Barbara Baclini', 'female', 0.75, 2, 1, 19.2583), (1, 3, 'Miss. Eugenie Baclini', 'female', 0.75, 2, 1, 19.2583), (1, 2, 'Master. Viljo Hamalainen', 'male', 0.67, 1, 1, 14.5), (1, 3, 'Master. Bertram Vere Dean', 'male', 1.0, 1, 2, 20.575), (1, 3, 'Master. Assad Alexander Thomas', 'male', 0.42, 0, 1, 8.5167), (1, 2, 'Master. Andre Mallet', 'male', 1.0, 0, 2, 37.0042), (1, 2, 'Master. George Sibley Richards', 'male', 0.83, 1, 1, 18.75)]\n" ] } ], "source": [ "db = SQLDatabase(engine=engine)\n",
150088
"print(db.dialect)\n", "print(db.get_usable_table_names())\n", "print(db.run(\"SELECT * FROM titanic WHERE Age < 2;\"))" ] }, { "cell_type": "markdown", "id": "42f5a3c3-707c-4331-9f5f-0cb4919763dd", "metadata": {}, "source": [ "And create a [SQL agent](/docs/tutorials/sql_qa) to interact with it:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "e868a586-4f4e-4b1d-ab11-fae1271dd551", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI()" ] }, { "cell_type": "code", "execution_count": 7, "id": "edd92649-b178-47bd-b2b7-d5d4e14b3512", "metadata": {}, "outputs": [], "source": [ "from langchain_community.agent_toolkits import create_sql_agent\n", "\n", "agent_executor = create_sql_agent(llm, db=db, agent_type=\"openai-tools\", verbose=True)" ] }, { "cell_type": "code", "execution_count": 8, "id": "7aefe929-5e39-4ed1-b135-aaf88edce2eb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new SQL Agent Executor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `sql_db_list_tables` with `{}`\n", "\n", "\n", "\u001b[0m\u001b[38;5;200m\u001b[1;3mtitanic\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `sql_db_schema` with `{'table_names': 'titanic'}`\n", "\n", "\n", "\u001b[0m\u001b[33;1m\u001b[1;3m\n", "CREATE TABLE titanic (\n", "\t\"Survived\" BIGINT, \n", "\t\"Pclass\" BIGINT, \n", "\t\"Name\" TEXT, \n", "\t\"Sex\" TEXT, \n", "\t\"Age\" FLOAT, \n", "\t\"Siblings/Spouses Aboard\" BIGINT, \n", "\t\"Parents/Children Aboard\" BIGINT, \n", "\t\"Fare\" FLOAT\n", ")\n", "\n", "/*\n", "3 rows from titanic table:\n", "Survived\tPclass\tName\tSex\tAge\tSiblings/Spouses Aboard\tParents/Children Aboard\tFare\n", "0\t3\tMr. Owen Harris Braund\tmale\t22.0\t1\t0\t7.25\n", "1\t1\tMrs. John Bradley (Florence Briggs Thayer) Cumings\tfemale\t38.0\t1\t0\t71.2833\n", "1\t3\tMiss. Laina Heikkinen\tfemale\t26.0\t0\t0\t7.925\n", "*/\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `sql_db_query` with `{'query': 'SELECT AVG(Age) AS Average_Age FROM titanic WHERE Survived = 1'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[(28.408391812865496,)]\u001b[0m\u001b[32;1m\u001b[1;3mThe average age of survivors in the Titanic dataset is approximately 28.41 years.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': \"what's the average age of survivors\",\n", " 'output': 'The average age of survivors in the Titanic dataset is approximately 28.41 years.'}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"what's the average age of survivors\"})" ] }, { "cell_type": "markdown", "id": "4d1eb128-842b-4018-87ab-bb269147f6ec", "metadata": {}, "source": [ "This approach easily generalizes to multiple CSVs, since we can just load each of them into our database as its own table. See the [Multiple CSVs](/docs/how_to/sql_csv#multiple-csvs) section below." ] }, { "cell_type": "markdown", "id": "fe7f2d91-2377-49dd-97a3-19d48a750715", "metadata": {}, "source": [ "## Pandas\n", "\n", "Instead of SQL we can also use data analysis libraries like pandas and the code generating abilities of LLMs to interact with CSV data. 
Again, **this approach is not fit for production use cases unless you have extensive safeguards in place**. For this reason, our code-execution utilities and constructors live in the `langchain-experimental` package.\n", "\n", "### Chain\n", "\n", "Most LLMs have been trained on enough pandas Python code that they can generate it just by being asked to:" ] }, { "cell_type": "code", "execution_count": 9, "id": "27c84b27-9367-4c58-8a88-ade1fbf6683c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "```python\n", "correlation = df['Age'].corr(df['Fare'])\n", "correlation\n", "```\n" ] } ], "source": [ "ai_msg = llm.invoke(\n", " \"I have a pandas DataFrame 'df' with columns 'Age' and 'Fare'. Write code to compute the correlation between the two columns. Return Markdown for a Python code snippet and nothing else.\"\n", ")\n", "print(ai_msg.content)" ] }, { "cell_type": "markdown", "id": "f5e84003-5c39-496b-afa7-eaa50a01b7bb", "metadata": {}, "source": [ "We can combine this ability with a Python-executing tool to create a simple data analysis chain. We'll first want to load our CSV table as a dataframe, and give the tool access to this dataframe:" ] }, { "cell_type": "code", "execution_count": 10, "id": "16abe312-b1a3-413f-bb9a-0e613d1e550b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "32.30542018038331" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_experimental.tools import PythonAstREPLTool\n", "\n", "df = pd.read_csv(\"titanic.csv\")\n", "tool = PythonAstREPLTool(locals={\"df\": df})\n", "tool.invoke(\"df['Fare'].mean()\")" ] }, { "cell_type": "markdown",
150090
" [\n", " (\n", " \"system\",\n", " system,\n", " ),\n", " (\"human\", \"{question}\"),\n", " # This MessagesPlaceholder allows us to optionally append an arbitrary number of messages\n", " # at the end of the prompt using the 'chat_history' arg.\n", " MessagesPlaceholder(\"chat_history\", optional=True),\n", " ]\n", ")\n", "\n", "\n", "def _get_chat_history(x: dict) -> list:\n", " \"\"\"Parse the chain output up to this point into a list of chat history messages to insert in the prompt.\"\"\"\n", " ai_msg = x[\"ai_msg\"]\n", " tool_call_id = x[\"ai_msg\"].additional_kwargs[\"tool_calls\"][0][\"id\"]\n", " tool_msg = ToolMessage(tool_call_id=tool_call_id, content=str(x[\"tool_output\"]))\n", " return [ai_msg, tool_msg]\n", "\n", "\n", "chain = (\n", " RunnablePassthrough.assign(ai_msg=prompt | llm_with_tools)\n", " .assign(tool_output=itemgetter(\"ai_msg\") | parser | tool)\n", " .assign(chat_history=_get_chat_history)\n", " .assign(response=prompt | llm | StrOutputParser())\n", " .pick([\"tool_output\", \"response\"])\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "id": "ff6e98ec-52f1-4ffd-9ea8-bacedfa29f28", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'tool_output': 0.11232863699941616,\n", " 'response': 'The correlation between age and fare is approximately 0.1123.'}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke({\"question\": \"What's the correlation between age and fare\"})" ] }, { "cell_type": "markdown", "id": "245a5a91-c6d2-4a40-9b9f-eb38f78c9d22", "metadata": {}, "source": [ "Here's the LangSmith trace for this run: https://smith.langchain.com/public/14e38d70-45b1-4b81-8477-9fd2b7c07ea6/r" ] }, { "cell_type": "markdown", "id": "6c24b4f4-abbf-4891-b200-814eb9c35bec", "metadata": {}, "source": [ "### Agent\n", "\n", "For complex questions it can be helpful for an LLM to be able to iteratively execute code while maintaining the inputs and outputs of its previous executions. This is where Agents come into play. They allow an LLM to decide how many times a tool needs to be invoked and keep track of the executions it's made so far. The [create_pandas_dataframe_agent](https://python.langchain.com/api_reference/experimental/agents/langchain_experimental.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html) is a built-in agent that makes it easy to work with dataframes:" ] }, { "cell_type": "code", "execution_count": 18, "id": "35ea904e-795f-411b-bef8-6484dbb6e35c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `python_repl_ast` with `{'query': \"df[['Age', 'Fare']].corr().iloc[0,1]\"}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m0.11232863699941621\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `python_repl_ast` with `{'query': \"df[['Fare', 'Survived']].corr().iloc[0,1]\"}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m0.2561785496289603\u001b[0m\u001b[32;1m\u001b[1;3mThe correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\n", "\n", "Therefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': \"What's the correlation between age and fare? 
is that greater than the correlation between fare and survival?\",\n", " 'output': 'The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\\n\\nTherefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).'}" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_experimental.agents import create_pandas_dataframe_agent\n", "\n", "agent = create_pandas_dataframe_agent(llm, df, agent_type=\"openai-tools\", verbose=True)\n", "agent.invoke(\n", " {\n", " \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "a65322f3-b13c-4949-82b2-4517b9a0859d", "metadata": {}, "source": [ "Here's the LangSmith trace for this run: https://smith.langchain.com/public/6a86aee2-4f22-474a-9264-bd4c7283e665/r" ] }, { "cell_type": "markdown", "id": "68492261-faef-47e7-8009-e20ef1420d5a", "metadata": {}, "source": [ "### Multiple CSVs {#multiple-csvs}\n", "\n", "To handle multiple CSVs (or dataframes) we just need to pass multiple dataframes to our Python tool. Our `create_pandas_dataframe_agent` constructor can do this out of the box, we can pass in a list of dataframes instead of just one. If we're constructing a chain ourselves, we can do something like:" ] }, { "cell_type": "code", "execution_count": 19, "id": "77a70e1b-d3ee-4fa6-a4a0-d2e5005e6c8a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.14384991262954416" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_1 = df[[\"Age\", \"Fare\"]]\n", "df_2 = df[[\"Fare\", \"Survived\"]]\n", "\n", "tool = PythonAstREPLTool(locals={\"df_1\": df_1, \"df_2\": df_2})\n", "llm_with_tool = llm.bind_tools(tools=[tool], tool_choice=tool.name)\n", "df_template = \"\"\"```python\n", "{df_name}.head().to_markdown()\n", ">>> {df_head}\n", "```\"\"\"\n", "df_context = \"\\n\\n\".join(\n",
150091
" df_template.format(df_head=_df.head().to_markdown(), df_name=df_name)\n", " for _df, df_name in [(df_1, \"df_1\"), (df_2, \"df_2\")]\n", ")\n", "\n", "system = f\"\"\"You have access to a number of pandas dataframes. \\\n", "Here is a sample of rows from each dataframe and the python code that was used to generate the sample:\n", "\n", "{df_context}\n", "\n", "Given a user question about the dataframes, write the Python code to answer it. \\\n", "Don't assume you have access to any libraries other than built-in Python ones and pandas. \\\n", "Make sure to refer only to the variables mentioned above.\"\"\"\n", "prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", \"{question}\")])\n", "\n", "chain = prompt | llm_with_tool | parser | tool\n", "chain.invoke(\n", " {\n", " \"question\": \"return the difference in the correlation between age and fare and the correlation between fare and survival\"\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "7043363f-4ab1-41de-9318-c556e4ae66bc", "metadata": {}, "source": [ "Here's the LangSmith trace for this run: https://smith.langchain.com/public/cc2a7d7f-7c5a-4e77-a10c-7b5420fcd07f/r" ] }, { "cell_type": "markdown", "id": "a2256d09-23c2-4e52-bfc6-c84eba538586", "metadata": {}, "source": [ "### Sandboxed code execution\n", "\n", "There are a number of tools like [E2B](/docs/integrations/tools/e2b_data_analysis) and [Bearly](/docs/integrations/tools/bearly) that provide sandboxed environments for Python code execution, to allow for safer code-executing chains and agents." ] }, { "cell_type": "markdown", "id": "1728e791-f114-41e6-aa12-0436fdeeedae", "metadata": {}, "source": [ "## Next steps\n", "\n", "For more advanced data analysis applications we recommend checking out:\n", "\n", "* [SQL tutorial](/docs/tutorials/sql_qa): Many of the challenges of working with SQL db's and CSV's are generic to any structured data type, so it's useful to read the SQL techniques even if you're using Pandas for CSV data analysis.\n", "* [Tool use](/docs/how_to/tool_calling): Guides on general best practices when working with chains and agents that invoke tools\n", "* [Agents](/docs/tutorials/agents): Understand the fundamentals of building LLM agents.\n", "* Integrations: Sandboxed envs like [E2B](/docs/integrations/tools/e2b_data_analysis) and [Bearly](/docs/integrations/tools/bearly), utilities like [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), related agents like [Spark DataFrame agent](/docs/integrations/tools/spark_sql)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
150097
{ "cells": [ { "cell_type": "raw", "id": "d35de667-0352-4bfb-a890-cebe7f676fe7", "metadata": {}, "source": [ "---\n", "sidebar_position: 5\n", "keywords: [RunnablePassthrough, LCEL]\n", "---" ] }, { "cell_type": "markdown", "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2", "metadata": {}, "source": [ "# How to pass through arguments from one step to the next\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Calling runnables in parallel](/docs/how_to/parallel/)\n", "- [Custom functions](/docs/how_to/functions/)\n", "\n", ":::\n", "\n", "\n", "When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically is used in conjuction with a [RunnableParallel](/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.\n", "\n", "See the example below:" ] }, { "cell_type": "code", "execution_count": null, "id": "e169b952", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "03988b8d-d54c-4492-8707-1594372cf093", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'passed': {'num': 1}, 'modified': 2}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n", "\n", "runnable = RunnableParallel(\n", " passed=RunnablePassthrough(),\n", " modified=lambda x: x[\"num\"] + 1,\n", ")\n", "\n", "runnable.invoke({\"num\": 1})" ] }, { "cell_type": "markdown", "id": "702c7acc-cd31-4037-9489-647df192fd7c", "metadata": {}, "source": [ "As seen above, `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. \n", "\n", "We also set a second key in the map with `modified`. This uses a lambda to set a single value adding 1 to the num, which resulted in `modified` key with the value of `2`." 
] }, { "cell_type": "markdown", "id": "15187a3b-d666-4b9b-a258-672fc51fe0e2", "metadata": {}, "source": [ "## Retrieval Example\n", "\n", "In the example below, we see a more real-world use case where we use `RunnablePassthrough` along with `RunnableParallel` in a chain to properly format inputs to a prompt:" ] }, { "cell_type": "code", "execution_count": 3, "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Harrison worked at Kensho.'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "model = ChatOpenAI()\n", "\n", "retrieval_chain = (\n", " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")\n", "\n", "retrieval_chain.invoke(\"where did harrison work?\")" ] }, { "cell_type": "markdown", "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", "metadata": {}, "source": [ "Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key. The `RunnablePassthrough` allows us to pass on the user's question to the prompt and model. \n", "\n", "## Next steps\n", "\n", "Now you've learned how to pass data through your chains to help to help format the data flowing through your chains.\n", "\n", "To learn more, see the other how-to guides on runnables in this section." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
150101
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to force models to call a tool\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [LangChain Tools](/docs/concepts/#tools)\n", "- [How to use a model to call tools](/docs/how_to/tool_calling)\n", ":::\n", "\n", "In order to force our LLM to select a specific tool, we can use the `tool_choice` parameter to ensure certain behavior. First, let's define our model and tools:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "def add(a: int, b: int) -> int:\n", " \"\"\"Adds a and b.\"\"\"\n", " return a + b\n", "\n", "\n", "@tool\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiplies a and b.\"\"\"\n", " return a * b\n", "\n", "\n", "tools = [add, multiply]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, we can force our tool to call the multiply tool by using the following code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "llm_forced_to_multiply = llm.bind_tools(tools, tool_choice=\"Multiply\")\n", "llm_forced_to_multiply.invoke(\"what is 2 + 4\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Even if we pass it something that doesn't require multiplcation - it will still call the tool!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also just force our tool to select at least one of our tools by passing in the \"any\" (or \"required\" which is OpenAI specific) keyword to the `tool_choice` parameter." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice=\"any\")\n", "llm_forced_to_use_tool.invoke(\"What day is today?\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 4 }
150102
{ "cells": [ { "cell_type": "raw", "id": "94c3ad61", "metadata": {}, "source": [ "---\n", "sidebar_position: 3\n", "---" ] }, { "cell_type": "markdown", "id": "b91e03f1", "metadata": {}, "source": [ "# How to use few shot examples\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Example selectors](/docs/concepts/#example-selectors)\n", "- [LLMs](/docs/concepts/#llms)\n", "- [Vectorstores](/docs/concepts/#vector-stores)\n", "\n", ":::\n", "\n", "In this guide, we'll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n", "\n", "A few-shot prompt template can be constructed from either a set of examples, or from an [Example Selector](https://python.langchain.com/api_reference/core/example_selectors/langchain_core.example_selectors.base.BaseExampleSelector.html) class responsible for choosing a subset of examples from the defined set.\n", "\n", "This guide will cover few-shotting with string prompt templates. For a guide on few-shotting with chat messages for chat models, see [here](/docs/how_to/few_shot_examples_chat/).\n", "\n", "## Create a formatter for the few-shot examples\n", "\n", "Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object." ] }, { "cell_type": "code", "execution_count": 1, "id": "4e70bce2", "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts import PromptTemplate\n", "\n", "example_prompt = PromptTemplate.from_template(\"Question: {question}\\n{answer}\")" ] }, { "cell_type": "markdown", "id": "50846ad4", "metadata": {}, "source": [ "## Creating the example set\n", "\n", "Next, we'll create a list of few-shot examples. Each example should be a dictionary representing an example input to the formatter prompt we defined above." 
] }, { "cell_type": "code", "execution_count": 2, "id": "a44be840", "metadata": {}, "outputs": [], "source": [ "examples = [\n", " {\n", " \"question\": \"Who lived longer, Muhammad Ali or Alan Turing?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"When was the founder of craigslist born?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the founder of craigslist?\n", "Intermediate answer: Craigslist was founded by Craig Newmark.\n", "Follow up: When was Craig Newmark born?\n", "Intermediate answer: Craig Newmark was born on December 6, 1952.\n", "So the final answer is: December 6, 1952\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"Who was the maternal grandfather of George Washington?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the mother of George Washington?\n", "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n", "Follow up: Who was the father of Mary Ball Washington?\n", "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n", "So the final answer is: Joseph Ball\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"Are both the directors of Jaws and Casino Royale from the same country?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who is the director of Jaws?\n", "Intermediate Answer: The director of Jaws is Steven Spielberg.\n", "Follow up: Where is Steven Spielberg from?\n", "Intermediate Answer: The United States.\n", "Follow up: Who is the director of Casino Royale?\n", "Intermediate Answer: The director of Casino Royale is Martin Campbell.\n", "Follow up: Where is Martin Campbell from?\n", "Intermediate Answer: New Zealand.\n", "So the final answer is: No\n", "\"\"\",\n", " },\n", "]" ] }, { "cell_type": "markdown", "id": "3d1ec9d5", "metadata": {}, "source": [ "Let's test the formatting prompt with one of our examples:" ] }, { "cell_type": "code", "execution_count": 13, "id": "8c6e48ad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Question: Who lived longer, Muhammad Ali or Alan Turing?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\n" ] } ], "source": [ "print(example_prompt.invoke(examples[0]).to_string())" ] }, { "cell_type": "markdown", "id": "dad66af1", "metadata": {}, "source": [ "### Pass the examples and formatter to `FewShotPromptTemplate`\n", "\n", "Finally, create a [`FewShotPromptTemplate`](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) object. This object takes in the few-shot examples and the formatter for the few-shot examples. 
When this `FewShotPromptTemplate` is formatted, it formats the passed examples using the `example_prompt`, and then adds them to the final prompt before `suffix`:" ] }, { "cell_type": "code", "execution_count": 14, "id": "e76fa1ba", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Question: Who lived longer, Muhammad Ali or Alan Turing?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\n", "\n", "Question: When was the founder of craigslist born?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the founder of craigslist?\n", "Intermediate answer: Craigslist was founded by Craig Newmark.\n", "Follow up: When was Craig Newmark born?\n", "Intermediate answer: Craig Newmark was born on December 6, 1952.\n", "So the final answer is: December 6, 1952\n", "\n", "\n", "Question: Who was the maternal grandfather of George Washington?\n", "\n", "Are follow up questions needed here: Yes.\n",
150106
{ "cells": [ { "cell_type": "raw", "id": "ee14951b", "metadata": {}, "source": [ "---\n", "sidebar_position: 0\n", "---" ] }, { "cell_type": "markdown", "id": "105cddce", "metadata": {}, "source": [ "# How to use a vectorstore as a retriever\n", "\n", "A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.\n", "It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.\n", "\n", "In this guide we will cover:\n", "\n", "1. How to instantiate a retriever from a vectorstore;\n", "2. How to specify the search type for the retriever;\n", "3. How to specify additional search parameters, such as threshold scores and top-k.\n", "\n", "## Creating a retriever from a vectorstore\n", "\n", "You can build a retriever from a vectorstore using its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method. Let's walk through an example.\n", "\n", "First we instantiate a vectorstore. We will use an in-memory [FAISS](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) vectorstore:" ] }, { "cell_type": "code", "execution_count": 1, "id": "174e3c69", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import TextLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "loader = TextLoader(\"state_of_the_union.txt\")\n", "\n", "documents = loader.load()\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "texts = text_splitter.split_documents(documents)\n", "embeddings = OpenAIEmbeddings()\n", "vectorstore = FAISS.from_documents(texts, embeddings)" ] }, { "cell_type": "markdown", "id": "6f6e65a1-5eb4-4165-b06b-9bb40624a8d8", "metadata": {}, "source": [ "We can then instantiate a retriever:" ] }, { "cell_type": "code", "execution_count": 2, "id": "52df5f55", "metadata": {}, "outputs": [], "source": [ "retriever = vectorstore.as_retriever()" ] }, { "cell_type": "markdown", "id": "08f8b820-5912-49c1-9d76-40be0571dffb", "metadata": {}, "source": [ "This creates a retriever (specifically a [VectorStoreRetriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStoreRetriever.html)), which we can use in the usual way:" ] }, { "cell_type": "code", "execution_count": 3, "id": "32334fda", "metadata": {}, "outputs": [], "source": [ "docs = retriever.invoke(\"what did the president say about ketanji brown jackson?\")" ] }, { "cell_type": "markdown", "id": "fd7b19f0", "metadata": {}, "source": [ "## Maximum marginal relevance retrieval\n", "By default, the vector store retriever uses similarity search. If the underlying vector store supports maximum marginal relevance search, you can specify that as the search type.\n", "\n", "This effectively specifies what method on the underlying vectorstore is used (e.g., `similarity_search`, `max_marginal_relevance_search`, etc.)." 
] }, { "cell_type": "code", "execution_count": 4, "id": "b286ac04", "metadata": {}, "outputs": [], "source": [ "retriever = vectorstore.as_retriever(search_type=\"mmr\")" ] }, { "cell_type": "code", "execution_count": 5, "id": "07f937f7", "metadata": {}, "outputs": [], "source": [ "docs = retriever.invoke(\"what did the president say about ketanji brown jackson?\")" ] }, { "cell_type": "markdown", "id": "6ce77789", "metadata": {}, "source": [ "## Passing search parameters\n", "\n", "We can pass parameters to the underlying vectorstore's search methods using `search_kwargs`.\n", "\n", "### Similarity score threshold retrieval\n", "\n", "For example, we can set a similarity score threshold and only return documents with a score above that threshold." ] }, { "cell_type": "code", "execution_count": 6, "id": "dbb38a03", "metadata": {}, "outputs": [], "source": [ "retriever = vectorstore.as_retriever(\n", " search_type=\"similarity_score_threshold\", search_kwargs={\"score_threshold\": 0.5}\n", ")" ] }, { "cell_type": "code", "execution_count": 7, "id": "56f6c9ae", "metadata": {}, "outputs": [], "source": [ "docs = retriever.invoke(\"what did the president say about ketanji brown jackson?\")" ] }, { "cell_type": "markdown", "id": "329f5b26", "metadata": {}, "source": [ "### Specifying top k\n", "\n", "We can also limit the number of documents `k` returned by the retriever." ] }, { "cell_type": "code", "execution_count": 8, "id": "d712c91d", "metadata": {}, "outputs": [], "source": [ "retriever = vectorstore.as_retriever(search_kwargs={\"k\": 1})" ] }, { "cell_type": "code", "execution_count": 9, "id": "a79b573b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "docs = retriever.invoke(\"what did the president say about ketanji brown jackson?\")\n", "len(docs)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
150108
{ "cells": [ { "cell_type": "markdown", "id": "9122e4b9-4883-4e6e-940b-ab44a70f0951", "metadata": {}, "source": [ "# How to load documents from a directory\n", "\n", "LangChain's [DirectoryLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html) implements functionality for reading files from disk into LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Here we demonstrate:\n", "\n", "- How to load from a filesystem, including use of wildcard patterns;\n", "- How to use multithreading for file I/O;\n", "- How to use custom loader classes to parse specific file types (e.g., code);\n", "- How to handle errors, such as those due to decoding." ] }, { "cell_type": "code", "execution_count": 1, "id": "1c1e3796-bee8-4882-8065-6b98e48ec53a", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import DirectoryLoader" ] }, { "cell_type": "markdown", "id": "e3cdb7bb-1f58-4a7a-af83-599443127834", "metadata": {}, "source": [ "`DirectoryLoader` accepts a `loader_cls` kwarg, which defaults to [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file). [Unstructured](https://unstructured-io.github.io/unstructured/) supports parsing for a number of formats, such as PDF and HTML. Here we use it to read in a markdown (.md) file.\n", "\n", "We can use the `glob` parameter to control which files to load. Note that here it doesn't load the `.rst` file or the `.html` files." ] }, { "cell_type": "code", "execution_count": 2, "id": "bd2fcd1f-8286-499b-b43a-0c17084ae8ee", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "20" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\")\n", "docs = loader.load()\n", "len(docs)" ] }, { "cell_type": "code", "execution_count": 3, "id": "9ff1503d-3ac0-4172-99ec-15c9a4a707d8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Security\n", "\n", "LangChain has a large ecosystem of integrations with various external resources like local\n" ] } ], "source": [ "print(docs[0].page_content[:100])" ] }, { "cell_type": "markdown", "id": "b8b1cee8-626a-461a-8d33-1c56120f1cc0", "metadata": {}, "source": [ "## Show a progress bar\n", "\n", "By default a progress bar will not be shown. To show a progress bar, install the `tqdm` library (e.g. `pip install tqdm`), and set the `show_progress` parameter to `True`." ] }, { "cell_type": "code", "execution_count": 4, "id": "cfa48224-5d02-4aa7-93c7-ce48241645d5", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 54.56it/s]\n" ] } ], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", show_progress=True)\n", "docs = loader.load()" ] }, { "cell_type": "markdown", "id": "5e02c922-6a4b-48e6-8c46-5015553eafbe", "metadata": {}, "source": [ "## Use multithreading\n", "\n", "By default the loading happens in one thread. In order to utilize several threads set the `use_multithreading` flag to true." 
] }, { "cell_type": "code", "execution_count": 5, "id": "aae1c580-6d7c-409c-bfc8-3049fa8bdbf9", "metadata": {}, "outputs": [], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", use_multithreading=True)\n", "docs = loader.load()" ] }, { "cell_type": "markdown", "id": "5add3f54-f303-4006-90c9-540a90ab8c46", "metadata": {}, "source": [ "## Change loader class\n", "By default this uses the `UnstructuredLoader` class. To customize the loader, specify the loader class in the `loader_cls` kwarg. Below we show an example using [TextLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.text.TextLoader.html):" ] }, { "cell_type": "code", "execution_count": 6, "id": "d369ee78-ea24-48cc-9f46-1f5cd4b56f48", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import TextLoader\n", "\n", "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", loader_cls=TextLoader)\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": 7, "id": "2863d7dd-2d56-4fef-8bfd-95c48a6b4a71", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# Security\n", "\n", "LangChain has a large ecosystem of integrations with various external resources like loc\n" ] } ], "source": [ "print(docs[0].page_content[:100])" ] }, { "cell_type": "markdown", "id": "c97ed37b-38c0-4f31-9403-d3a5d5444f78", "metadata": {}, "source": [ "Notice that while the `UnstructuredLoader` parses Markdown headers, `TextLoader` does not.\n", "\n", "If you need to load Python source code files, use the `PythonLoader`:" ] }, { "cell_type": "code", "execution_count": 8, "id": "5ef483a8-57d3-45e5-93be-37c8416c543c", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PythonLoader\n", "\n", "loader = DirectoryLoader(\"../../../../../\", glob=\"**/*.py\", loader_cls=PythonLoader)" ] }, { "cell_type": "markdown", "id": "61dd1428-8246-47e3-b1da-f6a3d6f05566", "metadata": {}, "source": [ "## Auto-detect file encodings with TextLoader\n", "\n", "`DirectoryLoader` can help manage errors due to variations in file encodings. Below we will attempt to load in a collection of files, one of which includes non-UTF8 encodings." ] }, { "cell_type": "code", "execution_count": 9, "id": "e69db7ae-0385-4129-968f-17c42c7a635c", "metadata": {}, "outputs": [],
150115
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 6\n", "keywords: [RunnablePassthrough, assign, LCEL]\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add values to a chain's state\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Calling runnables in parallel](/docs/how_to/parallel/)\n", "- [Custom functions](/docs/how_to/functions/)\n", "- [Passing data through](/docs/how_to/passthrough)\n", "\n", ":::\n", "\n", "An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html#langchain_core.runnables.passthrough.RunnablePassthrough.assign) static method takes an input value and adds the extra arguments passed to the assign function.\n", "\n", "This is useful in the common [LangChain Expression Language](/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'extra': {'num': 1, 'mult': 3}, 'modified': 2}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n", "\n", "runnable = RunnableParallel(\n", " extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n", " modified=lambda x: x[\"num\"] + 1,\n", ")\n", "\n", "runnable.invoke({\"num\": 1})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's break down what's happening here.\n", "\n", "- The input to the chain is `{\"num\": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input.\n", "- The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{\"num\": 1}`), and assigns a new key called `mult`. The value is `lambda x: x[\"num\"] * 3)`, which is `3`. Thus, the result is `{\"num\": 1, \"mult\": 3}`.\n", "- `{\"num\": 1, \"mult\": 3}` is returned to the `RunnableParallel` call, and is set as the value to the key `extra`.\n", "- At the same time, the `modified` key is called. The result is `2`, since the lambda extracts a key called `\"num\"` from its input and adds one.\n", "\n", "Thus, the result is `{'extra': {'num': 1, 'mult': 3}, 'modified': 2}`.\n", "\n", "## Streaming\n", "\n", "One convenient feature of this method is that it allows values to pass through as soon as they are available. 
To show this off, we'll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'question': 'where did harrison work?'}\n", "{'context': [Document(page_content='harrison worked at kensho')]}\n", "{'output': ''}\n", "{'output': 'H'}\n", "{'output': 'arrison'}\n", "{'output': ' worked'}\n", "{'output': ' at'}\n", "{'output': ' Kens'}\n", "{'output': 'ho'}\n", "{'output': '.'}\n", "{'output': ''}\n" ] } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "model = ChatOpenAI()\n", "\n", "generation_chain = prompt | model | StrOutputParser()\n", "\n", "retrieval_chain = {\n", " \"context\": retriever,\n", " \"question\": RunnablePassthrough(),\n", "} | RunnablePassthrough.assign(output=generation_chain)\n", "\n", "stream = retrieval_chain.stream(\"where did harrison work?\")\n", "\n", "for chunk in stream:\n", " print(chunk)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the first chunk contains the original `\"question\"` since that is immediately available. The second chunk contains `\"context\"` since the retriever finishes second. Finally, the output from the `generation_chain` streams in chunks as soon as it is available.\n", "\n", "## Next steps\n", "\n", "Now you've learned how to pass data through your chains to help format the data flowing through your chains.\n", "\n", "To learn more, see the other how-to guides on runnables in this section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 4 }
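Because `RunnablePassthrough.assign()` returns an ordinary runnable, several assign steps can also be composed with `|` to build up the chain state one key at a time, with each step seeing the keys added by the previous ones. A minimal sketch (the key names here are illustrative):

```python
# Sketch only: compose two assign steps; the second can read the key added by the first.
from langchain_core.runnables import RunnablePassthrough

chain = RunnablePassthrough.assign(mult=lambda x: x["num"] * 3) | RunnablePassthrough.assign(
    total=lambda x: x["num"] + x["mult"]
)

chain.invoke({"num": 1})
# -> {'num': 1, 'mult': 3, 'total': 4}
```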
150119
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:659\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)\u001b[0m\n\u001b[1;32m 657\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m error_to_raise:\n\u001b[1;32m 658\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_tool_error(error_to_raise)\n\u001b[0;32m--> 659\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m error_to_raise\n\u001b[1;32m 660\u001b[0m output \u001b[38;5;241m=\u001b[39m _format_output(content, artifact, tool_call_id, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname, status)\n\u001b[1;32m 661\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_tool_end(output, color\u001b[38;5;241m=\u001b[39mcolor, name\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n", "File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:622\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)\u001b[0m\n\u001b[1;32m 620\u001b[0m context \u001b[38;5;241m=\u001b[39m copy_context()\n\u001b[1;32m 621\u001b[0m context\u001b[38;5;241m.\u001b[39mrun(_set_config_context, child_config)\n\u001b[0;32m--> 622\u001b[0m tool_args, tool_kwargs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_to_args_and_kwargs\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 623\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m signature(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_run)\u001b[38;5;241m.\u001b[39mparameters\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrun_manager\u001b[39m\u001b[38;5;124m\"\u001b[39m):\n\u001b[1;32m 624\u001b[0m tool_kwargs[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrun_manager\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m run_manager\n", "File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:545\u001b[0m, in \u001b[0;36mBaseTool._to_args_and_kwargs\u001b[0;34m(self, tool_input)\u001b[0m\n\u001b[1;32m 544\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_to_args_and_kwargs\u001b[39m(\u001b[38;5;28mself\u001b[39m, tool_input: Union[\u001b[38;5;28mstr\u001b[39m, Dict]) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tuple[Tuple, Dict]:\n\u001b[0;32m--> 545\u001b[0m tool_input \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_parse_input\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 546\u001b[0m \u001b[38;5;66;03m# For backwards compatibility, if run_input is a string,\u001b[39;00m\n\u001b[1;32m 547\u001b[0m \u001b[38;5;66;03m# pass as a positional argument.\u001b[39;00m\n\u001b[1;32m 548\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(tool_input, \u001b[38;5;28mstr\u001b[39m):\n",
150124
"\n", "ts_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.TS, chunk_size=60, chunk_overlap=0\n", ")\n", "ts_docs = ts_splitter.create_documents([TS_CODE])\n", "ts_docs" ] }, { "cell_type": "markdown", "id": "ee2361f8", "metadata": {}, "source": [ "## Markdown\n", "\n", "Here's an example using the Markdown text splitter:\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "ac9295d3", "metadata": {}, "outputs": [], "source": [ "markdown_text = \"\"\"\n", "# 🦜️🔗 LangChain\n", "\n", "⚡ Building applications with LLMs through composability ⚡\n", "\n", "## Quick Install\n", "\n", "# Hopefully this code block isn't split\n", "pip install langchain\n", "\n", "As an open-source project in a rapidly developing field, we are extremely open to contributions.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 3, "id": "3a0cb17a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='# 🦜️🔗 LangChain'),\n", " Document(page_content='⚡ Building applications with LLMs through composability ⚡'),\n", " Document(page_content='## Quick Install'),\n", " Document(page_content=\"# Hopefully this code block isn't split\"),\n", " Document(page_content='pip install langchain'),\n", " Document(page_content='As an open-source project in a rapidly developing field, we'),\n", " Document(page_content='are extremely open to contributions.')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "md_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n", ")\n", "md_docs = md_splitter.create_documents([markdown_text])\n", "md_docs" ] }, { "cell_type": "markdown", "id": "7aa306f6", "metadata": {}, "source": [ "## Latex\n", "\n", "Here's an example on Latex text:\n" ] }, { "cell_type": "code", "execution_count": 10, "id": "77d1049d", "metadata": {}, "outputs": [], "source": [ "latex_text = \"\"\"\n", "\\documentclass{article}\n", "\n", "\\begin{document}\n", "\n", "\\maketitle\n", "\n", "\\section{Introduction}\n", "Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\n", "\n", "\\subsection{History of LLMs}\n", "The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\n", "\n", "\\subsection{Applications of LLMs}\n", "LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. 
They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n", "\n", "\\end{document}\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 11, "id": "4dbc47e1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='\\\\documentclass{article}\\n\\n\\x08egin{document}\\n\\n\\\\maketitle'),\n", " Document(page_content='\\\\section{Introduction}'),\n", " Document(page_content='Large language models (LLMs) are a type of machine learning'),\n", " Document(page_content='model that can be trained on vast amounts of text data to'),\n", " Document(page_content='generate human-like language. In recent years, LLMs have'),\n", " Document(page_content='made significant advances in a variety of natural language'),\n", " Document(page_content='processing tasks, including language translation, text'),\n", " Document(page_content='generation, and sentiment analysis.'),\n", " Document(page_content='\\\\subsection{History of LLMs}'),\n", " Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,'),\n", " Document(page_content='but they were limited by the amount of data that could be'),\n", " Document(page_content='processed and the computational power available at the'),\n", " Document(page_content='time. In the past decade, however, advances in hardware and'),\n", " Document(page_content='software have made it possible to train LLMs on massive'),\n", " Document(page_content='datasets, leading to significant improvements in'),\n", " Document(page_content='performance.'),\n", " Document(page_content='\\\\subsection{Applications of LLMs}'),\n", " Document(page_content='LLMs have many applications in industry, including'),\n", " Document(page_content='chatbots, content creation, and virtual assistants. They'),\n", " Document(page_content='can also be used in academia for research in linguistics,'),\n", " Document(page_content='psychology, and computational linguistics.'),\n", " Document(page_content='\\\\end{document}')]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "latex_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n", ")\n", "latex_docs = latex_splitter.create_documents([latex_text])\n", "latex_docs" ] }, { "cell_type": "markdown", "id": "c29adadf", "metadata": {}, "source": [ "## HTML\n", "\n", "Here's an example using an HTML text splitter:\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "0fc78794", "metadata": {}, "outputs": [], "source": [ "html_text = \"\"\"\n", "<!DOCTYPE html>\n", "<html>\n", " <head>\n", " <title>🦜️🔗 LangChain</title>\n", " <style>\n", " body {\n", " font-family: Arial, sans-serif;\n", " }\n", " h1 {\n", " color: darkblue;\n", " }\n", " </style>\n", " </head>\n", " <body>\n", " <div>\n", " <h1>🦜️🔗 LangChain</h1>\n", " <p>⚡ Building applications with LLMs through composability ⚡</p>\n", " </div>\n", " <div>\n", " As an open-source project in a rapidly developing field, we are extremely open to contributions.\n", " </div>\n", " </body>\n", "</html>\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 13, "id": "e3e3fca1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='<!DOCTYPE html>\\n<html>'),\n", " Document(page_content='<head>\\n <title>🦜️🔗 LangChain</title>'),\n",
150125
" Document(page_content='<style>\\n body {\\n font-family: Aria'),\n", " Document(page_content='l, sans-serif;\\n }\\n h1 {'),\n", " Document(page_content='color: darkblue;\\n }\\n </style>\\n </head'),\n", " Document(page_content='>'),\n", " Document(page_content='<body>'),\n", " Document(page_content='<div>\\n <h1>🦜️🔗 LangChain</h1>'),\n", " Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡'),\n", " Document(page_content='</p>\\n </div>'),\n", " Document(page_content='<div>\\n As an open-source project in a rapidly dev'),\n", " Document(page_content='eloping field, we are extremely open to contributions.'),\n", " Document(page_content='</div>\\n </body>\\n</html>')]" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "html_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.HTML, chunk_size=60, chunk_overlap=0\n", ")\n", "html_docs = html_splitter.create_documents([html_text])\n", "html_docs" ] }, { "cell_type": "markdown", "id": "fcaf7abf", "metadata": {}, "source": [ "## Solidity\n", "Here's an example using the Solidity text splitter:" ] }, { "cell_type": "code", "execution_count": 14, "id": "49a1df11", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='pragma solidity ^0.8.20;'),\n", " Document(page_content='contract HelloWorld {\\n function add(uint a, uint b) pure public returns(uint) {\\n return a + b;\\n }\\n}')]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SOL_CODE = \"\"\"\n", "pragma solidity ^0.8.20;\n", "contract HelloWorld {\n", " function add(uint a, uint b) pure public returns(uint) {\n", " return a + b;\n", " }\n", "}\n", "\"\"\"\n", "\n", "sol_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.SOL, chunk_size=128, chunk_overlap=0\n", ")\n", "sol_docs = sol_splitter.create_documents([SOL_CODE])\n", "sol_docs" ] }, { "cell_type": "markdown", "id": "edd0052c", "metadata": {}, "source": [ "## C#\n", "Here's an example using the C# text splitter:\n" ] }, { "cell_type": "code", "execution_count": 15, "id": "1524ae0f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='using System;'),\n", " Document(page_content='class Program\\n{\\n static void Main()\\n {\\n int age = 30; // Change the age value as needed'),\n", " Document(page_content='// Categorize the age without any console output\\n if (age < 18)\\n {\\n // Age is under 18'),\n", " Document(page_content='}\\n else if (age >= 18 && age < 65)\\n {\\n // Age is an adult\\n }\\n else\\n {'),\n", " Document(page_content='// Age is a senior citizen\\n }\\n }\\n}')]" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "C_CODE = \"\"\"\n", "using System;\n", "class Program\n", "{\n", " static void Main()\n", " {\n", " int age = 30; // Change the age value as needed\n", "\n", " // Categorize the age without any console output\n", " if (age < 18)\n", " {\n", " // Age is under 18\n", " }\n", " else if (age >= 18 && age < 65)\n", " {\n", " // Age is an adult\n", " }\n", " else\n", " {\n", " // Age is a senior citizen\n", " }\n", " }\n", "}\n", "\"\"\"\n", "c_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.CSHARP, chunk_size=128, chunk_overlap=0\n", ")\n", "c_docs = c_splitter.create_documents([C_CODE])\n", "c_docs" ] }, { "cell_type": "markdown", "id": "af9de667-230e-4c2a-8c5f-122a28515d97", "metadata": {}, 
"source": [ "## Haskell\n", "Here's an example using the Haskell text splitter:" ] }, { "cell_type": "code", "execution_count": 3, "id": "688185b5", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='main :: IO ()'),\n", " Document(page_content='main = do\\n putStrLn \"Hello, World!\"\\n-- Some'),\n", " Document(page_content='sample functions\\nadd :: Int -> Int -> Int\\nadd x y'),\n", " Document(page_content='= x + y')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "HASKELL_CODE = \"\"\"\n", "main :: IO ()\n", "main = do\n", " putStrLn \"Hello, World!\"\n", "-- Some sample functions\n", "add :: Int -> Int -> Int\n", "add x y = x + y\n", "\"\"\"\n", "haskell_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.HASKELL, chunk_size=50, chunk_overlap=0\n", ")\n", "haskell_docs = haskell_splitter.create_documents([HASKELL_CODE])\n", "haskell_docs" ] }, { "cell_type": "markdown", "id": "4a11f7cd-cd85-430c-b307-5b5b5f07f8db", "metadata": {}, "source": [ "## PHP\n", "Here's an example using the PHP text splitter:" ] }, { "cell_type": "code", "execution_count": 2, "id": "90c66e7e-87a5-4a81-bece-7949aabf2369", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='<?php\\nnamespace foo;'),\n", " Document(page_content='class Hello {'),\n", " Document(page_content='public function __construct() { }\\n}'),\n", " Document(page_content='function hello() {\\n echo \"Hello World!\";\\n}'),\n", " Document(page_content='interface Human {\\n public function breath();\\n}'),\n", " Document(page_content='trait Foo { }\\nenum Color\\n{\\n case Red;'),\n", " Document(page_content='case Blue;\\n}')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "PHP_CODE = \"\"\"<?php\n", "namespace foo;\n", "class Hello {\n", " public function __construct() { }\n", "}\n",
150127
{ "cells": [ { "cell_type": "markdown", "id": "7414502a-4532-4da3-aef0-71aac4d0d4dd", "metadata": {}, "source": [ "# How to load web pages\n", "\n", "This guide covers how to load web pages into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) format that we use downstream. Web pages contain text, images, and other multimedia elements, and are typically represented with HTML. They may include links to other pages or resources.\n", "\n", "LangChain integrates with a host of parsers that are appropriate for web pages. The right parser will depend on your needs. Below we demonstrate two possibilities:\n", "\n", "- [Simple and fast](/docs/how_to/document_loader_web#simple-and-fast-text-extraction) parsing, in which we recover one `Document` per web page with its content represented as a \"flattened\" string;\n", "- [Advanced](/docs/how_to/document_loader_web#advanced-parsing) parsing, in which we recover multiple `Document` objects per page, allowing one to identify and traverse sections, links, tables, and other structures.\n", "\n", "## Setup\n", "\n", "For the \"simple and fast\" parsing, we will need `langchain-community` and the `beautifulsoup4` library:" ] }, { "cell_type": "code", "execution_count": null, "id": "89bc7be9-ab50-4c5a-860a-deee7b469f67", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-community beautifulsoup4" ] }, { "cell_type": "markdown", "id": "a07f5ca3-e2b7-4d9c-b1f2-7547856cbdf7", "metadata": {}, "source": [ "For advanced parsing, we will use `langchain-unstructured`:" ] }, { "cell_type": "code", "execution_count": null, "id": "8a3ef1fc-dfde-4814-b7f6-b6c0c649f044", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-unstructured" ] }, { "cell_type": "markdown", "id": "4ef11005-1bd0-43a3-8d52-ea823c830c34", "metadata": {}, "source": [ "## Simple and fast text extraction\n", "\n", "If you are looking for a simple string representation of text that is embedded in a web page, the method below is appropriate. It will return a list of `Document` objects -- one per page -- containing a single string of the page's text. Under the hood it uses the `beautifulsoup4` Python library.\n", "\n", "LangChain document loaders implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document objects`. We will use these below." ] }, { "cell_type": "code", "execution_count": 1, "id": "7faeccbc-4e56-4b88-99db-2274ed0680c1", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "USER_AGENT environment variable not set, consider setting it to identify your requests.\n" ] } ], "source": [ "import bs4\n", "from langchain_community.document_loaders import WebBaseLoader\n", "\n", "page_url = \"https://python.langchain.com/docs/how_to/chatbots_memory/\"\n", "\n", "loader = WebBaseLoader(web_paths=[page_url])\n", "docs = []\n", "async for doc in loader.alazy_load():\n", " docs.append(doc)\n", "\n", "assert len(docs) == 1\n", "doc = docs[0]" ] }, { "cell_type": "code", "execution_count": 2, "id": "21199a0d-3bd2-4410-a060-763649b14691", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'source': 'https://python.langchain.com/docs/how_to/chatbots_memory/', 'title': 'How to add memory to chatbots | \\uf8ffü¶úÔ∏è\\uf8ffüîó LangChain', 'description': 'A key feature of chatbots is their ability to use content of previous conversation turns as context. 
This state management can take several forms, including:', 'language': 'en'}\n", "\n", "How to add memory to chatbots | ü¶úÔ∏èüîó LangChain\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "Skip to main contentShare your thoughts on AI agents. Take the 3-min survey.IntegrationsAPI ReferenceMoreContributingPeopleLangSmithLangGraphLangChain HubLangChain JS/TSv0.3v0.3v0.2v0.1üí¨SearchIntroductionTutorialsBuild a Question Answering application over a Graph DatabaseTutorialsBuild a Simple LLM Application with LCELBuild a Query Analysis SystemBuild a ChatbotConversational RAGBuild an Extraction ChainBuild an AgentTaggingd\n" ] } ], "source": [ "print(f\"{doc.metadata}\\n\")\n", "print(doc.page_content[:500].strip())" ] }, { "cell_type": "markdown", "id": "23189e91-5237-4a9e-a4bb-cb79e130c364", "metadata": {}, "source": [ "This is essentially a dump of the text from the page's HTML. It may contain extraneous information like headings and navigation bars. If you are familiar with the expected HTML, you can specify desired `<div>` classes and other parameters via BeautifulSoup. Below we parse only the body text of the article:" ] }, { "cell_type": "code", "execution_count": 3, "id": "4211b1a6-e636-415b-a556-ae01969399a7", "metadata": {}, "outputs": [], "source": [ "loader = WebBaseLoader(\n", " web_paths=[page_url],\n", " bs_kwargs={\n", " \"parse_only\": bs4.SoupStrainer(class_=\"theme-doc-markdown markdown\"),\n", " },\n", " bs_get_text_kwargs={\"separator\": \" | \", \"strip\": True},\n", ")\n", "\n", "docs = []\n", "async for doc in loader.alazy_load():\n", " docs.append(doc)\n", "\n", "assert len(docs) == 1\n", "doc = docs[0]" ] }, { "cell_type": "code", "execution_count": 4, "id": "7edf6ed0-e22f-4c64-b986-8ba019c14757", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'source': 'https://python.langchain.com/docs/how_to/chatbots_memory/'}\n", "\n", "How to add memory to chatbots | A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including: | Simply stuffing previous messages into a chat model prompt. | The above, but trimming old messages to reduce the amount of distracting information the model has to deal with. | More complex modifications like synthesizing summaries for long running conversations. | We'll go into more detail on a few techniq\n" ] } ], "source": [ "print(f\"{doc.metadata}\\n\")\n", "print(doc.page_content[:500])" ] }, { "cell_type": "code", "execution_count": 5, "id": "6ab1ba2b-3b22-4c5d-8ad3-f6809d075d26", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
150129
"{'https://python.langchain.com/docs/how_to/chatbots_memory/': \"You'll need to install a few packages, and have your OpenAI API key set as an environment variable named OPENAI_API_KEY:\\n%pip install --upgrade --quiet langchain langchain-openai\\n\\n# Set env var OPENAI_API_KEY or load from a .env file:\\nimport dotenv\\n\\ndotenv.load_dotenv()\\n[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\\nYou should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.[0m[33m\\n[0mNote: you may need to restart the kernel to use updated packages.\\n\",\n", " 'https://python.langchain.com/docs/how_to/chatbots_tools/': \"For this guide, we'll be using a tool calling agent with a single tool for searching the web. The default will be powered by Tavily, but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\\nYou'll need to sign up for an account on the Tavily website, and install the following packages:\\n%pip install --upgrade --quiet langchain-community langchain-openai tavily-python\\n\\n# Set env var OPENAI_API_KEY or load from a .env file:\\nimport dotenv\\n\\ndotenv.load_dotenv()\\nYou will also need your OpenAI key set as OPENAI_API_KEY and your Tavily API key set as TAVILY_API_KEY.\\n\"}" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from collections import defaultdict\n", "\n", "setup_text = defaultdict(str)\n", "\n", "for doc in setup_docs:\n", " url = doc.metadata[\"url\"]\n", " setup_text[url] += f\"{doc.page_content}\\n\"\n", "\n", "dict(setup_text)" ] }, { "cell_type": "markdown", "id": "5cd42892-24a6-4969-92c8-c928680be9b5", "metadata": {}, "source": [ "### Vector search over page content\n", "\n", "Once we have loaded the page contents into LangChain `Document` objects, we can index them (e.g., for a RAG application) in the usual way. Below we use OpenAI [embeddings](/docs/concepts/#embedding-models), although any LangChain embeddings model will suffice." ] }, { "cell_type": "code", "execution_count": null, "id": "5a6cbb01-6e0d-418f-9f76-2031622bebb0", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-openai" ] }, { "cell_type": "code", "execution_count": 11, "id": "598e612c-180d-494d-8caa-761c89f84eae", "metadata": {}, "outputs": [], "source": [ "import getpass\n", "import os\n", "\n", "if \"OPENAI_API_KEY\" not in os.environ:\n", " os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" ] }, { "cell_type": "code", "execution_count": 12, "id": "5eeaeb54-ea03-4634-8a79-b60c22ab2b66", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO: HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n", "INFO: HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Page https://python.langchain.com/docs/how_to/chatbots_tools/: You'll need to sign up for an account on the Tavily website, and install the following packages:\n", "\n", "Page https://python.langchain.com/docs/how_to/chatbots_tools/: For this guide, we'll be using a tool calling agent with a single tool for searching the web. The default will be powered by Tavily, but you can switch it out for any similar tool. 
```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

vector_store = InMemoryVectorStore.from_documents(setup_docs, OpenAIEmbeddings())
retrieved_docs = vector_store.similarity_search("Install Tavily", k=2)
for doc in retrieved_docs:
    print(f'Page {doc.metadata["url"]}: {doc.page_content[:300]}\n')
```

```
Page https://python.langchain.com/docs/how_to/chatbots_tools/: You'll need to sign up for an account on the Tavily website, and install the following packages:

Page https://python.langchain.com/docs/how_to/chatbots_tools/: For this guide, we'll be using a tool calling agent with a single tool for searching the web. The default will be powered by Tavily, but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.
```

## Other web page loaders

For a list of available LangChain web page loaders, please see [this table](/docs/integrations/document_loaders/#webpages).
"id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d", "metadata": {}, "source": [ ":::note\n", "Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n", "\n", "We're focusing on streaming concepts, not necessarily the results of the chains.\n", ":::" ] }, { "cell_type": "markdown", "id": "6adf65b7-aa47-4321-98c7-a0abe43b833a", "metadata": {}, "source": [ "### Non-streaming components\n", "\n", "Some built-in components like Retrievers do not offer any `streaming`. What happens if we try to `stream` them? 🤨" ] }, { "cell_type": "code", "execution_count": 10, "id": "b9b1c00d-8b44-40d0-9e2b-8a70d238f82b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[Document(page_content='harrison worked at kensho'),\n", " Document(page_content='harrison likes spicy food')]]" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\", \"harrison likes spicy food\"],\n", " embedding=OpenAIEmbeddings(),\n", ")\n", "retriever = vectorstore.as_retriever()\n", "\n", "chunks = [chunk for chunk in retriever.stream(\"where did harrison work?\")]\n", "chunks" ] }, { "cell_type": "markdown", "id": "6fd3e71b-439e-418f-8a8a-5232fba3d9fd", "metadata": {}, "source": [ "Stream just yielded the final result from that component.\n", "\n", "This is OK 🥹! 
Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult, or just doesn't make sense.

:::tip
An LCEL chain constructed using non-streaming components will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.
:::

```python
retrieval_chain = (
    {
        "context": retriever.with_config(run_name="Docs"),
        "question": RunnablePassthrough(),
    }
    | prompt
    | model
    | StrOutputParser()
)
```

```python
for chunk in retrieval_chain.stream(
    "Where did harrison work? " "Write 3 made up sentences about this place."
):
    print(chunk, end="|", flush=True)
```

```
Base|d on| the| given| context|,| Harrison| worke|d at| K|ens|ho|.|

Here| are| |3| |made| up| sentences| about| this| place|:|

1|.| K|ens|ho| was| a| cutting|-|edge| technology| company| known| for| its| innovative| solutions| in| artificial| intelligence| an|d data| analytics|.|

2|.| The| modern| office| space| at| K|ens|ho| feature|d open| floor| plans|,| collaborative| work|sp|aces|,| an|d a| vib|rant| atmosphere| that| fos|tere|d creativity| an|d team|work|.|

3|.| With| its| prime| location| in| the| heart| of| the| city|,| K|ens|ho| attracte|d top| talent| from| aroun|d the| worl|d,| creating| a| diverse| an|d dynamic| work| environment|.|
```

Now that we've seen how `stream` and `astream` work, let's venture into the world of streaming events. 🏞️

## Using Stream Events

Event Streaming is a **beta** API. This API may change a bit based on feedback.

:::note
This guide demonstrates the `V2` API and requires langchain-core >= 0.2. For the `V1` API compatible with older versions of LangChain, see [here](https://python.langchain.com/v0.1/docs/expression_language/streaming/#using-stream-events).
:::

```python
import langchain_core

langchain_core.__version__
```

For the `astream_events` API to work properly:

* Use `async` throughout the code to the extent possible (e.g., async tools etc.)
* Propagate callbacks if defining custom functions / runnables
* Whenever using runnables without LCEL, make sure to call `.astream()` on LLMs rather than `.ainvoke` to force the LLM to stream tokens
* Let us know if anything doesn't work as expected! :)

### Event Reference

Below is a reference table that shows some events that might be emitted by the various Runnable objects.

:::note
When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed.
This means that `inputs` will often be included only for `end` events rather than for `start` events.
:::

| event | name | chunk | input | output |
|---------------------|--------------|-------|-------------------------------------------------|--------|
| on_chat_model_start | [model name] |       | \{"messages": [[SystemMessage, HumanMessage]]\} |        |
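To make the event names in the table concrete, here is a minimal sketch of consuming `astream_events` directly on a chat model (reusing the `model` object from the chain above; the exact sequence of events you see depends on the runnable you stream):

```python
async def show_events():
    # Each event is a dict with keys such as "event", "name", "run_id", and "data".
    async for event in model.astream_events("hello", version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            # Token chunks arrive under data["chunk"].
            print(event["data"]["chunk"].content, end="|")
        else:
            print(f"\n[{kind}] name={event['name']}")

# In a notebook cell: await show_events()
# In a script: import asyncio; asyncio.run(show_events())
```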
"For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/docs/concepts/#langchain-expression-language) chains." ] }, { "cell_type": "code", "execution_count": 5, "id": "820244ae-74b4-4593-b392-822979dd91b8", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "USER_AGENT environment variable not set, consider setting it to identify your requests.\n" ] } ], "source": [ "import bs4\n", "from langchain.chains import create_retrieval_chain\n", "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_core.vectorstores import InMemoryVectorStore\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "loader = WebBaseLoader(\n", " web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n", " bs_kwargs=dict(\n", " parse_only=bs4.SoupStrainer(\n", " class_=(\"post-content\", \"post-title\", \"post-header\")\n", " )\n", " ),\n", ")\n", "docs = loader.load()\n", "\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "splits = text_splitter.split_documents(docs)\n", "vectorstore = InMemoryVectorStore(embedding=OpenAIEmbeddings())\n", "vectorstore.add_documents(splits)\n", "retriever = vectorstore.as_retriever()" ] }, { "cell_type": "markdown", "id": "776ae958-cbdc-4471-8669-c6087436f0b5", "metadata": {}, "source": [ "### Prompt\n", "\n", "We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question." ] }, { "cell_type": "code", "execution_count": 6, "id": "2b685428-8b82-4af1-be4f-7232c5d55b73", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import create_history_aware_retriever\n", "from langchain_core.prompts import MessagesPlaceholder\n", "\n", "contextualize_q_system_prompt = (\n", " \"Given a chat history and the latest user question \"\n", " \"which might reference context in the chat history, \"\n", " \"formulate a standalone question which can be understood \"\n", " \"without the chat history. 
```python
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder

contextualize_q_system_prompt = (
    "Given a chat history and the latest user question "
    "which might reference context in the chat history, "
    "formulate a standalone question which can be understood "
    "without the chat history. Do NOT answer the question, "
    "just reformulate it if needed and otherwise return it as is."
)

contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)
```

### Assembling the chain

We can then instantiate the history-aware retriever:

```python
history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_q_prompt
)
```

This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation.

Now we can build our full QA chain.

As in the [RAG tutorial](/docs/tutorials/rag), we will use [create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`; it accepts the retrieved context alongside the conversation history and query to generate an answer.

We build our final `rag_chain` with [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html). This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output.

```python
system_prompt = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer "
    "the question. If you don't know the answer, say that you "
    "don't know. Use three sentences maximum and keep the "
    "answer concise."
    "\n\n"
    "{context}"
)
qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)

question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
```
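As a quick check of those input and output keys, here is a minimal sketch of invoking `rag_chain` with a hand-written chat history (the question and prior turn below are made up purely for illustration):

```python
from langchain_core.messages import AIMessage, HumanMessage

# A made-up prior exchange, only to illustrate the "chat_history" input key.
chat_history = [
    HumanMessage(content="What is Task Decomposition?"),
    AIMessage(content="Task decomposition means breaking a complex task into smaller steps."),
]

result = rag_chain.invoke(
    {"input": "What are common ways of doing it?", "chat_history": chat_history}
)

print(result["answer"])  # the generated answer
# result also contains "input", "chat_history", and "context" (the retrieved documents)
```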
### Stateful management of chat history

We have added application logic for incorporating chat history, but we are still manually plumbing it through our application. In production, a Q&A application will usually persist the chat history into a database and be able to read and update it appropriately.

[LangGraph](https://langchain-ai.github.io/langgraph/) implements a built-in [persistence layer](https://langchain-ai.github.io/langgraph/concepts/persistence/), making it ideal for chat applications that support multiple conversational turns.

Wrapping our chat model in a minimal LangGraph application allows us to automatically persist the message history, simplifying the development of multi-turn applications.
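As a rough sketch of what that wrapping can look like (assuming the `llm` chat model defined earlier; the thread ID and message below are made up), one option is to compile a one-node `StateGraph` with an in-memory checkpointer:

```python
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

workflow = StateGraph(state_schema=MessagesState)

def call_model(state: MessagesState):
    # Invoke the chat model on the accumulated message history.
    response = llm.invoke(state["messages"])
    return {"messages": response}

workflow.add_node("model", call_model)
workflow.add_edge(START, "model")

# MemorySaver persists each thread's messages in memory; swap in a
# database-backed checkpointer for production use.
app = workflow.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "abc123"}}  # hypothetical thread ID
result = app.invoke({"messages": [HumanMessage(content="Hi, I'm Bob.")]}, config)
print(result["messages"][-1].content)
```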