145588
{ "cells": [ { "cell_type": "raw", "id": "38831021-76ed-48b3-9f62-d1241a68b6ad", "metadata": {}, "source": [ "---\n", "sidebar_position: 3\n", "---" ] }, { "cell_type": "markdown", "id": "a745f98b-c495-44f6-a882-757c38992d76", "metadata": {}, "source": [ "# How to use output parsers to parse an LLM response into structured format\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Output parsers](/docs/concepts#output-parsers)\n", "- [Chat models](/docs/concepts#chat-models)\n", "\n", ":::\n", "\n", "Language models output text. But there are times where you want to get more structured information than just text back. While some model providers support [built-in ways to return structured output](/docs/how_to/structured_output), not all do. For these providers, you must use prompting to encourage the model to return structured data in the desired format.\n", "\n", "LangChain has [output parsers](/docs/concepts#output-parsers) which can help parse model outputs into usable objects. We'll go over a few examples below.\n", "\n", "## Get started\n", "\n", "The primary type of output parser for working with structured data in model responses is the [`StructuredOutputParser`](https://api.js.langchain.com/classes/langchain_core.output_parsers.StructuredOutputParser.html). In the below example, we define a schema for the type of output we expect from the model using [`zod`](https://zod.dev).\n", "\n", "First, let's see the default formatting instructions we'll plug into the prompt:" ] }, { "cell_type": "markdown", "id": "b62367da", "metadata": {}, "source": [ "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs />\n", "```" ] }, { "cell_type": "code", "execution_count": 1, "id": "1594b2bf-2a6f-47bb-9a81-38930f8e606b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You must format your output as a JSON value that adheres to a given \"JSON Schema\" instance.\n", "\n", "\"JSON Schema\" is a declarative language that allows you to annotate and validate JSON documents.\n", "\n", "For example, the example \"JSON Schema\" instance {{\"properties\": {{\"foo\": {{\"description\": \"a list of test words\", \"type\": \"array\", \"items\": {{\"type\": \"string\"}}}}}}, \"required\": [\"foo\"]}}}}\n", "would match an object with one required property, \"foo\". The \"type\" property specifies \"foo\" must be an \"array\", and the \"description\" property semantically describes it as \"a list of test words\". The items within \"foo\" must be strings.\n", "Thus, the object {{\"foo\": [\"bar\", \"baz\"]}} is a well-formatted instance of this example \"JSON Schema\". The object {{\"properties\": {{\"foo\": [\"bar\", \"baz\"]}}}} is not well-formatted.\n", "\n", "Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!\n", "\n", "Here is the JSON Schema instance your output must adhere to. 
Include the enclosing markdown codeblock:\n", "```json\n", "{\"type\":\"object\",\"properties\":{\"answer\":{\"type\":\"string\",\"description\":\"answer to the user's question\"},\"source\":{\"type\":\"string\",\"description\":\"source used to answer the user's question, should be a website.\"}},\"required\":[\"answer\",\"source\"],\"additionalProperties\":false,\"$schema\":\"http://json-schema.org/draft-07/schema#\"}\n", "```\n", "\n" ] } ], "source": [ "import { z } from \"zod\";\n", "import { RunnableSequence } from \"@langchain/core/runnables\";\n", "import { StructuredOutputParser } from \"@langchain/core/output_parsers\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const zodSchema = z.object({\n", " answer: z.string().describe(\"answer to the user's question\"),\n", " source: z.string().describe(\"source used to answer the user's question, should be a website.\"),\n", "})\n", "\n", "const parser = StructuredOutputParser.fromZodSchema(zodSchema);\n", "\n", "const chain = RunnableSequence.from([\n", " ChatPromptTemplate.fromTemplate(\n", " \"Answer the users question as best as possible.\\n{format_instructions}\\n{question}\"\n", " ),\n", " model,\n", " parser,\n", "]);\n", "\n", "console.log(parser.getFormatInstructions());\n" ] }, { "cell_type": "markdown", "id": "2bd357c5", "metadata": {}, "source": [ "Next, let's invoke the chain:" ] }, { "cell_type": "code", "execution_count": 2, "id": "301471a0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " answer: \"The capital of France is Paris.\",\n", " source: \"https://en.wikipedia.org/wiki/Paris\"\n", "}\n" ] } ], "source": [ "const response = await chain.invoke({\n", " question: \"What is the capital of France?\",\n", " format_instructions: parser.getFormatInstructions(),\n", "});\n", "\n", "console.log(response);" ] }, { "cell_type": "markdown", "id": "75976cd6-78e2-458b-821f-3ddf3683466b", "metadata": {}, "source": [ "Output parsers implement the [Runnable interface](/docs/how_to/#langchain-expression-language-lcel), the basic building block of [LangChain Expression Language (LCEL)](/docs/how_to/#langchain-expression-language-lcel). This means they support `invoke`, `stream`, `batch`, `streamLog` calls.\n", "\n", "## Validation\n", "\n", "One feature of the `StructuredOutputParser` is that it supports stricter Zod validations. For example, if you pass a simulated model output that does not conform to the schema, we get a detailed type error:" ] }, { "cell_type": "code", "execution_count": 3, "id": "475f1ae5", "metadata": {}, "outputs": [ { "ename": "Error", "evalue": "Failed to parse. Text: \"{\"badfield\": \"foo\"}\". Error: [\n {\n \"code\": \"invalid_type\",\n \"expected\": \"string\",\n \"received\": \"undefined\",\n \"path\": [\n \"answer\"\n ],\n \"message\": \"Required\"\n },\n {\n \"code\": \"invalid_type\",\n \"expected\": \"string\",\n \"received\": \"undefined\",\n \"path\": [\n \"source\"\n ],\n \"message\": \"Required\"\n }\n]", "output_type": "error", "traceback": [ "Stack trace:", "Error: Failed to parse. Text: \"{\"badfield\": \"foo\"}\". Error: [", " {", " \"code\": \"invalid_type\",", " \"expected\": \"string\",", " \"received\": \"undefined\",", " \"path\": [", " \"answer\"", " ],", " \"message\": \"Required\"", " },", " {", " \"code\": \"invalid_type\",", " \"expected\": \"string\",", " \"received\": \"undefined\",",
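The chain in this record references a `model` coming from the `<ChatModelTabs />` selector, so it is not runnable on its own. A minimal, self-contained sketch of the same chain, assuming `ChatOpenAI` with an assumed `gpt-4o-mini` model (any chat model from the tabs works the same way):

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Assumed model choice; substitute any chat model from the tabs above.
const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    source: z
      .string()
      .describe("source used to answer the user's question, should be a website."),
  })
);

const chain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  model,
  parser,
]);

// The parser's format instructions are interpolated into the prompt at invoke time,
// and the parser validates the model's JSON output against the zod schema.
const result = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});
console.log(result); // e.g. { answer: "...", source: "..." }
```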
145593
"{ query: \u001b[32m\"books about aliens\"\u001b[39m, author: \u001b[32m\"jess knight\"\u001b[39m }" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await queryAnalyzer.invoke(\"what are books about aliens by jess knight\")" ] }, { "cell_type": "markdown", "id": "0b60b7c2", "metadata": {}, "source": [ "### Add in all values\n", "\n", "One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction" ] }, { "cell_type": "code", "execution_count": 8, "id": "98788a94", "metadata": {}, "outputs": [], "source": [ "const systemTemplate = `Generate a relevant search query for a library system using the 'search' tool.\n", "\n", "The 'author' you return to the user MUST be one of the following authors:\n", "\n", "{authors}\n", "\n", "Do NOT hallucinate author name!`\n", "const basePrompt = ChatPromptTemplate.fromMessages(\n", " [\n", " [\"system\", systemTemplate],\n", " [\"human\", \"{question}\"],\n", " ]\n", ")\n", "const promptWithAuthors = await basePrompt.partial({ authors: names.join(\", \") })\n", "\n", "const queryAnalyzerAll = RunnableSequence.from([\n", " {\n", " question: new RunnablePassthrough(),\n", " },\n", " promptWithAuthors,\n", " llmWithTools\n", "])" ] }, { "cell_type": "markdown", "id": "e639285a", "metadata": {}, "source": [ "However... if the list of categoricals is long enough, it may error!" ] }, { "cell_type": "code", "execution_count": 9, "id": "696b000f", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Error: 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens (50167 in the messages, 30 in the functions). Please reduce the length of the messages or functions.\n", " at Function.generate (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/error.mjs:41:20)\n", " at OpenAI.makeStatusError (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:256:25)\n", " at OpenAI.makeRequest (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:299:30)\n", " at eventLoopTick (ext:core/01_core.js:63:7)\n", " at async file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/@langchain/openai/0.0.31/dist/chat_models.js:756:29\n", " at async RetryOperation._fn (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/p-retry/4.6.2/index.js:50:12) {\n", " status: 400,\n", " headers: {\n", " \"alt-svc\": 'h3=\":443\"; ma=86400',\n", " \"cf-cache-status\": \"DYNAMIC\",\n", " \"cf-ray\": \"885f794b3df4fa52-SJC\",\n", " \"content-length\": \"340\",\n", " \"content-type\": \"application/json\",\n", " date: \"Sat, 18 May 2024 23:02:16 GMT\",\n", " \"openai-organization\": \"langchain\",\n", " \"openai-processing-ms\": \"230\",\n", " \"openai-version\": \"2020-10-01\",\n", " server: \"cloudflare\",\n", " \"set-cookie\": \"_cfuvid=F_c9lnRuQDUhKiUE2eR2PlsxHPldf1OAVMonLlHTjzM-1716073336256-0.0.1.1-604800000; path=/; domain=\"... 
48 more characters,\n", " \"strict-transport-security\": \"max-age=15724800; includeSubDomains\",\n", " \"x-ratelimit-limit-requests\": \"10000\",\n", " \"x-ratelimit-limit-tokens\": \"2000000\",\n", " \"x-ratelimit-remaining-requests\": \"9999\",\n", " \"x-ratelimit-remaining-tokens\": \"1958402\",\n", " \"x-ratelimit-reset-requests\": \"6ms\",\n", " \"x-ratelimit-reset-tokens\": \"1.247s\",\n", " \"x-request-id\": \"req_7b88677d6883fac1520e44543f68c839\"\n", " },\n", " request_id: \"req_7b88677d6883fac1520e44543f68c839\",\n", " error: {\n", " message: \"This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens\"... 101 more characters,\n", " type: \"invalid_request_error\",\n", " param: \"messages\",\n", " code: \"context_length_exceeded\"\n", " },\n", " code: \"context_length_exceeded\",\n", " param: \"messages\",\n", " type: \"invalid_request_error\",\n", " attemptNumber: 1,\n", " retriesLeft: 6\n", "}\n" ] } ], "source": [ "try {\n", " const res = await queryAnalyzerAll.invoke(\"what are books about aliens by jess knight\")\n", "} catch (e) {\n", " console.error(e)\n", "}" ] }, { "cell_type": "markdown", "id": "1d5d7891", "metadata": {}, "source": [ "We can try to use a longer context window... but with so much information in there, it is not guaranteed to pick it up reliably" ] }, { "cell_type": "markdown", "id": "618a9762", "metadata": {}, "source": [ "```{=mdx}\n", "<ChatModelTabs customVarName=\"llmLong\" openaiParams={`{ model: \"gpt-4-turbo-preview\" }`} />\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "64817a0f", "metadata": {}, "outputs": [], "source": [ "// @lc-docs-hide-cell\n", "import { ChatOpenAI } from '@langchain/openai';\n", "\n", "const llmLong = new ChatOpenAI({\n", " model: \"gpt-4o\",\n", " temperature: 0,\n", "})" ] }, { "cell_type": "code", "execution_count": 12, "id": "0f0d0757", "metadata": {}, "outputs": [], "source": [ "const structuredLlmLong = llmLong.withStructuredOutput(searchSchema, {\n", " name: \"Search\"\n", "});\n", "const queryAnalyzerAllLong = RunnableSequence.from([\n", " {\n", " question: new RunnablePassthrough(),\n", " },\n", " prompt,\n", " structuredLlmLong\n", "]);" ] }, { "cell_type": "code", "execution_count": 13, "id": "03e5b7b2", "metadata": {}, "outputs": [ { "data": { "text/plain": [
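The record is truncated here, after switching to a longer-context model. One hedged sketch of another way to keep the prompt within the context window (not taken from the truncated text): index the candidate author names in a vector store and include only the closest matches in the prompt. `names`, `basePrompt`, and `llmWithTools` are assumed from earlier cells of the notebook; `MemoryVectorStore` and `OpenAIEmbeddings` are illustrative choices.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";

// Index every candidate author name once (assumes `names: string[]` from earlier cells).
const nameVectorStore = await MemoryVectorStore.fromTexts(
  names,
  names.map(() => ({})),
  new OpenAIEmbeddings()
);

// Only surface the handful of names most similar to the question.
const selectAuthors = async (question: string) => {
  const matches = await nameVectorStore.similaritySearch(question, 10);
  return matches.map((doc) => doc.pageContent).join(", ");
};

const queryAnalyzerSelective = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    authors: selectAuthors,
  },
  basePrompt, // the prompt with the {authors} placeholder defined above
  llmWithTools,
]);

await queryAnalyzerSelective.invoke("what are books about aliens by jess knight");
```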
145596
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to stream from a question-answering chain\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following:\n", "\n", "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n", "\n", ":::\n", "\n", "Often in Q&A applications it's important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.\n", "\n", "We'll be using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng for retrieval content this notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "### Dependencies\n", "\n", "We’ll use an OpenAI chat model and embeddings and a Memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/docs/concepts/#chat-models) or [LLM](/docs/concepts#llms), [Embeddings](/docs/concepts#embedding-models), and [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers).\n", "\n", "We’ll use the following packages:\n", "\n", "```bash\n", "npm install --save langchain @langchain/openai cheerio\n", "```\n", "\n", "We need to set environment variable `OPENAI_API_KEY`:\n", "\n", "```bash\n", "export OPENAI_API_KEY=YOUR_KEY\n", "```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### LangSmith\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).\n", "\n", "Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "\n", "```bash\n", "export LANGCHAIN_TRACING_V2=true\n", "export LANGCHAIN_API_KEY=YOUR_KEY\n", "\n", "# Reduce tracing latency if you are not in a serverless environment\n", "# export LANGCHAIN_CALLBACKS_BACKGROUND=true\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Chain with sources\n", "\n", "Here is Q&A app with sources we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Returning sources](/docs/how_to/qa_sources/) guide:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{\n", " question: \u001b[32m\"What is Task Decomposition\"\u001b[39m,\n", " context: [\n", " Document {\n", " pageContent: \u001b[32m\"Fig. 1. Overview of a LLM-powered autonomous agent system.\\n\"\u001b[39m +\n", " \u001b[32m\"Component One: Planning#\\n\"\u001b[39m +\n", " \u001b[32m\"A complicated ta\"\u001b[39m... 898 more characters,\n", " metadata: {\n", " source: \u001b[32m\"https://lilianweng.github.io/posts/2023-06-23-agent/\"\u001b[39m,\n", " loc: { lines: \u001b[36m[Object]\u001b[39m }\n", " }\n", " },\n", " Document {\n", " pageContent: \u001b[32m'Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are'\u001b[39m... 
887 more characters,\n", " metadata: {\n", " source: \u001b[32m\"https://lilianweng.github.io/posts/2023-06-23-agent/\"\u001b[39m,\n", " loc: { lines: \u001b[36m[Object]\u001b[39m }\n", " }\n", " },\n", " Document {\n", " pageContent: \u001b[32m\"Agent System Overview\\n\"\u001b[39m +\n", " \u001b[32m\" \\n\"\u001b[39m +\n", " \u001b[32m\" Component One: Planning\\n\"\u001b[39m +\n", " \u001b[32m\" \"\u001b[39m... 850 more characters,\n", " metadata: {\n", " source: \u001b[32m\"https://lilianweng.github.io/posts/2023-06-23-agent/\"\u001b[39m,\n", " loc: { lines: \u001b[36m[Object]\u001b[39m }\n", " }\n", " },\n", " Document {\n", " pageContent: \u001b[32m\"Resources:\\n\"\u001b[39m +\n", " \u001b[32m\"1. Internet access for searches and information gathering.\\n\"\u001b[39m +\n", " \u001b[32m\"2. Long Term memory management\"\u001b[39m... 456 more characters,\n", " metadata: {\n", " source: \u001b[32m\"https://lilianweng.github.io/posts/2023-06-23-agent/\"\u001b[39m,\n", " loc: { lines: \u001b[36m[Object]\u001b[39m }\n", " }\n", " }\n", " ],\n", " answer: \u001b[32m\"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps fo\"\u001b[39m... 230 more characters\n", "}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import \"cheerio\";\n", "import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n", "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n", "import { MemoryVectorStore } from \"langchain/vectorstores/memory\"\n", "import { OpenAIEmbeddings, ChatOpenAI } from \"@langchain/openai\";\n", "import { pull } from \"langchain/hub\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { formatDocumentsAsString } from \"langchain/util/document\";\n", "import { RunnableSequence, RunnablePassthrough, RunnableMap } from \"@langchain/core/runnables\";\n", "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n", "\n", "const loader = new CheerioWebBaseLoader(\n", " \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\n", ");\n", "\n", "const docs = await loader.load();\n", "\n", "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });\n", "const splits = await textSplitter.splitDocuments(docs);\n",
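The setup code in this record stops after splitting the documents. A hedged sketch of how the chain with sources is typically completed from here, reusing the imports above (the hub prompt name and the `gpt-4o-mini` model are assumptions):

```typescript
import type { Document } from "@langchain/core/documents";

// Embed and index the chunks, then expose them as a retriever.
const vectorStore = await MemoryVectorStore.fromDocuments(splits, new OpenAIEmbeddings());
const retriever = vectorStore.asRetriever();

// A RAG prompt pulled from the LangChain Hub (assumed prompt name) and an assumed chat model.
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Answer sub-chain: stuff the retrieved documents into the prompt and parse to a string.
const ragChainFromDocs = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: (input: { context: Document[] }) => formatDocumentsAsString(input.context),
  }),
  prompt,
  llm,
  new StringOutputParser(),
]);

// Full chain: return the retrieved source documents alongside the question and the answer,
// matching the { question, context, answer } output shown above.
const ragChainWithSource = RunnableSequence.from([
  {
    context: retriever,
    question: new RunnablePassthrough(),
  },
  RunnablePassthrough.assign({ answer: ragChainFromDocs }),
]);

await ragChainWithSource.invoke("What is Task Decomposition");
```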
145622
{ "cells": [ { "cell_type": "raw", "id": "afaf8039", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Azure OpenAI\n", "---" ] }, { "cell_type": "markdown", "id": "9a3d6f34", "metadata": {}, "source": [ "# AzureOpenAIEmbeddings\n", "\n", "[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.\n", "\n", "LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using the new Azure integration in the [OpenAI SDK](https://github.com/openai/openai-node).\n", "\n", "You can learn more about Azure OpenAI and its difference with the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.\n", "\n", "This will help you get started with AzureOpenAIEmbeddings [embedding models](/docs/concepts#embedding-models) using LangChain. For detailed documentation on `AzureOpenAIEmbeddings` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/langchain_openai.AzureOpenAIEmbeddings.html).\n", "\n", "\n", "```{=mdx}\n", "\n", ":::info\n", "\n", "Previously, LangChain.js supported integration with Azure OpenAI using the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai). This SDK is now deprecated in favor of the new Azure integration in the OpenAI SDK, which allows to access the latest OpenAI models and features the same day they are released, and allows seamless transition between the OpenAI API and Azure OpenAI.\n", "\n", "If you are using Azure OpenAI with the deprecated SDK, see the [migration guide](#migration-from-azure-openai-sdk) to update to the new API.\n", "\n", ":::\n", "\n", "```\n", "\n", "## Overview\n", "### Integration details\n", "\n", "| Class | Package | Local | [Py support](https://python.langchain.com/docs/integrations/text_embedding/azureopenai/) | Package downloads | Package latest |\n", "| :--- | :--- | :---: | :---: | :---: | :---: |\n", "| [AzureOpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.AzureOpenAIEmbeddings.html) | [@langchain/openai](https://api.js.langchain.com/modules/langchain_openai.html) | ❌ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/openai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/openai?style=flat-square&label=%20&) |\n", "\n", "## Setup\n", "\n", "To access Azure OpenAI embedding models you'll need to create an Azure account, get an API key, and install the `@langchain/openai` integration package.\n", "\n", "### Credentials\n", "\n", "You'll need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).\n", "\n", "Once you have your instance running, make sure you have the name of your instance and key. 
You can find the key in the Azure Portal, under the \"Keys and Endpoint\" section of your instance.\n", "\n", "If you're using Node.js, you can define the following environment variables to use the service:\n", "\n", "```bash\n", "AZURE_OPENAI_API_INSTANCE_NAME=<YOUR_INSTANCE_NAME>\n", "AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=<YOUR_EMBEDDINGS_DEPLOYMENT_NAME>\n", "AZURE_OPENAI_API_KEY=<YOUR_KEY>\n", "AZURE_OPENAI_API_VERSION=\"2024-02-01\"\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```bash\n", "# export LANGCHAIN_TRACING_V2=\"true\"\n", "# export LANGCHAIN_API_KEY=\"your-api-key\"\n", "```\n", "\n", "### Installation\n", "\n", "The LangChain AzureOpenAIEmbeddings integration lives in the `@langchain/openai` package:\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "\n", ":::info\n", "\n", "You can find the list of supported API versions in the [Azure OpenAI documentation](https://learn.microsoft.com/azure/ai-services/openai/reference).\n", "\n", ":::\n", "\n", ":::tip\n", "\n", "If `AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME` is not defined, it will fall back to the value of `AZURE_OPENAI_API_DEPLOYMENT_NAME` for the deployment name. The same applies to the `azureOpenAIApiEmbeddingsDeploymentName` parameter in the `AzureOpenAIEmbeddings` constructor, which will fall back to the value of `azureOpenAIApiDeploymentName` if not defined.\n", "\n", ":::\n", "\n", "```" ] }, { "cell_type": "markdown", "id": "45dd1724", "metadata": {}, "source": [ "## Instantiation\n", "\n", "Now we can instantiate our model object and embed text:" ] }, { "cell_type": "code", "execution_count": 1, "id": "9ea7a09b", "metadata": {}, "outputs": [], "source": [ "import { AzureOpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const embeddings = new AzureOpenAIEmbeddings({\n", " azureOpenAIApiKey: \"<your_key>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY\n", " azureOpenAIApiInstanceName: \"<your_instance_name>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME\n", " azureOpenAIApiEmbeddingsDeploymentName: \"<your_embeddings_deployment_name>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME\n", " azureOpenAIApiVersion: \"<api_version>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION\n", " maxRetries: 1,\n", "});" ] }, { "cell_type": "markdown", "id": "77d271b6", "metadata": {}, "source": [ "## Indexing and Retrieval\n", "\n", "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials under the [working with external knowledge tutorials](/docs/tutorials/#working-with-external-knowledge).\n", "\n", "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document using the demo [`MemoryVectorStore`](/docs/integrations/vectorstores/memory)." ] }, { "cell_type": "code", "execution_count": 2, "id": "d817716b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream",
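The record breaks off where the indexing and retrieval code cell begins. A minimal sketch of what that example typically looks like with the `embeddings` object above (the sample text and query are placeholders):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

// Index a single sample document using the `embeddings` object created above.
const vectorStore = await MemoryVectorStore.fromDocuments(
  [
    new Document({
      pageContent:
        "LangChain is the framework for building context-aware reasoning applications",
    }),
  ],
  embeddings
);

// Retrieve the most similar document for a query.
const retriever = vectorStore.asRetriever(1);
const retrievedDocuments = await retriever.invoke("What is LangChain?");
console.log(retrievedDocuments[0].pageContent);

// Embeddings can also be used directly on query or document text.
const queryVector = await embeddings.embedQuery("What is LangChain?");
console.log(queryVector.slice(0, 5));
```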
145678
{ "cells": [ { "cell_type": "raw", "id": "afaf8039", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: HNSWLib\n", "---" ] }, { "cell_type": "markdown", "id": "e49f1e0d", "metadata": {}, "source": [ "# HNSWLib\n", "\n", "This guide will help you getting started with such a retriever backed by a [HNSWLib vector store](/docs/integrations/vectorstores/hnswlib). For detailed documentation of all features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain.retrievers_self_query.SelfQueryRetriever.html).\n", "\n", "## Overview\n", "\n", "A [self-query retriever](/docs/how_to/self_query/) retrieves documents by dynamically generating metadata filters based on some input query. This allows the retriever to account for underlying document metadata in addition to pure semantic similarity when fetching results.\n", "\n", "It uses a module called a `Translator` that generates a filter based on information about metadata fields and the query language that a given vector store supports.\n", "\n", "### Integration details\n", "\n", "| Backing vector store | Self-host | Cloud offering | Package | Py support |\n", "| :--- | :--- | :---: | :---: | :---: |\n", "[`HNSWLib`](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) | ✅ | ❌ | [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) | ❌ |\n", "\n", "## Setup\n", "\n", "Set up a HNSWLib instance as documented [here](/docs/integrations/vectorstores/hnswlib).\n", "\n", "If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGSMITH_API_KEY = \"<YOUR API KEY HERE>\";\n", "// process.env.LANGSMITH_TRACING = \"true\";\n", "```\n", "\n", "### Installation\n", "\n", "The vector store lives in the `@langchain/community` package. You'll also need to install the `langchain` package to import the main `SelfQueryRetriever` class.\n", "\n", "For this example, we'll also use OpenAI embeddings, so you'll need to install the `@langchain/openai` package and [obtain an API key](https://platform.openai.com):\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/community langchain @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "```" ] }, { "cell_type": "markdown", "id": "a38cde65-254d-4219-a441-068766c0d4b5", "metadata": {}, "source": [ "## Instantiation\n", "\n", "First, initialize your HNSWLib vector store with some documents that contain metadata:" ] }, { "cell_type": "code", "execution_count": 2, "id": "e7fd15a5", "metadata": {}, "outputs": [], "source": [ "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "import { HNSWLib } from \"@langchain/community/vectorstores/hnswlib\";\n", "import { Document } from \"@langchain/core/documents\";\n", "import type { AttributeInfo } from \"langchain/chains/query_constructor\";\n", "\n", "/**\n", " * First, we create a bunch of documents. You can load your own documents here instead.\n", " * Each document has a pageContent and a metadata field. 
Make sure your metadata matches the AttributeInfo below.\n", " */\n", "const docs = [\n", " new Document({\n", " pageContent:\n", " \"A bunch of scientists bring back dinosaurs and mayhem breaks loose\",\n", " metadata: { year: 1993, rating: 7.7, genre: \"science fiction\" },\n", " }),\n", " new Document({\n", " pageContent:\n", " \"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\",\n", " metadata: { year: 2010, director: \"Christopher Nolan\", rating: 8.2 },\n", " }),\n", " new Document({\n", " pageContent:\n", " \"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\",\n", " metadata: { year: 2006, director: \"Satoshi Kon\", rating: 8.6 },\n", " }),\n", " new Document({\n", " pageContent:\n", " \"A bunch of normal-sized women are supremely wholesome and some men pine after them\",\n", " metadata: { year: 2019, director: \"Greta Gerwig\", rating: 8.3 },\n", " }),\n", " new Document({\n", " pageContent: \"Toys come alive and have a blast doing so\",\n", " metadata: { year: 1995, genre: \"animated\" },\n", " }),\n", " new Document({\n", " pageContent: \"Three men walk into the Zone, three men walk out of the Zone\",\n", " metadata: {\n", " year: 1979,\n", " director: \"Andrei Tarkovsky\",\n", " genre: \"science fiction\",\n", " rating: 9.9,\n", " },\n", " }),\n", "];\n", "\n", "/**\n", " * Next, we define the attributes we want to be able to query on.\n", " * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie.\n", " * We also provide a description of each attribute and the type of the attribute.\n", " * This is used to generate the query prompts.\n", " */\n", "const attributeInfo: AttributeInfo[] = [\n", " {\n", " name: \"genre\",\n", " description: \"The genre of the movie\",\n", " type: \"string or array of strings\",\n", " },\n", " {\n", " name: \"year\",\n", " description: \"The year the movie was released\",\n", " type: \"number\",\n", " },\n", " {\n", " name: \"director\",\n", " description: \"The director of the movie\",\n", " type: \"string\",\n", " },\n", " {\n", " name: \"rating\",\n", " description: \"The rating of the movie (1-10)\",\n", " type: \"number\",\n", " },\n", " {\n", " name: \"length\",\n", " description: \"The length of the movie in minutes\",\n", " type: \"number\",\n", " },\n", "];\n", "\n", "/**\n", " * Next, we instantiate a vector store. This is where we store the embeddings of the documents.\n", " * We also need to provide an embeddings object. This is used to embed the documents.\n", " */\n", "const embeddings = new OpenAIEmbeddings();\n", "const vectorStore = await HNSWLib.fromDocuments(docs, embeddings);" ] }, {
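The record ends after the vector store is created. A hedged sketch of how the self-query retriever is typically instantiated on top of it (import paths and the translator class can vary by version, and `gpt-4o-mini` is an assumed model choice):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { FunctionalTranslator } from "@langchain/core/structured_query";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const selfQueryRetriever = SelfQueryRetriever.fromLLM({
  llm,
  vectorStore,
  // A short natural-language description of what the documents contain.
  documentContents: "Brief summary of a movie",
  attributeInfo,
  // Translates the generated structured query into a filter the HNSWLib store can apply.
  structuredQueryTranslator: new FunctionalTranslator(),
});

// The retriever can now filter on metadata inferred from the query itself.
await selfQueryRetriever.invoke("Which movies are rated higher than 8.5?");
```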
145700
--- keywords: [azure] --- import CodeBlock from "@theme/CodeBlock"; # Microsoft All functionality related to `Microsoft Azure` and other `Microsoft` products. ## Chat Models ### Azure OpenAI See a [usage example](/docs/integrations/chat/azure) import AzureChatOpenAI from "@examples/models/chat/integration_azure_openai.ts"; <UnifiedModelParamsTooltip></UnifiedModelParamsTooltip> <CodeBlock language="typescript">{AzureChatOpenAI}</CodeBlock> ## LLM ### Azure OpenAI > [Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure` is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems. > [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond. LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using the new Azure integration in the [OpenAI SDK](https://github.com/openai/openai-node). You can learn more about Azure OpenAI and its difference with the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started. You'll need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal). Once you have your instance running, make sure you have the name of your instance and key. You can find the key in the Azure Portal, under the "Keys and Endpoint" section of your instance. If you're using Node.js, you can define the following environment variables to use the service: ```bash AZURE_OPENAI_API_INSTANCE_NAME=<YOUR_INSTANCE_NAME> AZURE_OPENAI_API_DEPLOYMENT_NAME=<YOUR_DEPLOYMENT_NAME> AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=<YOUR_EMBEDDINGS_DEPLOYMENT_NAME> AZURE_OPENAI_API_KEY=<YOUR_KEY> AZURE_OPENAI_API_VERSION="2024-02-01" ``` :::info You can find the list of supported API versions in the [Azure OpenAI documentation](https://learn.microsoft.com/azure/ai-services/openai/reference). ::: import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/openai @langchain/core ``` See a [usage example](/docs/integrations/llms/azure). 
import AzureOpenAI from "@examples/models/llm/azure_openai.ts"; import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx"; <UnifiedModelParamsTooltip></UnifiedModelParamsTooltip> <CodeBlock language="typescript">{AzureOpenAI}</CodeBlock> ## Text Embedding Models ### Azure OpenAI See a [usage example](/docs/integrations/text_embedding/azure_openai) import AzureOpenAIEmbeddings from "@examples/models/embeddings/azure_openai.ts"; <UnifiedModelParamsTooltip></UnifiedModelParamsTooltip> <CodeBlock language="typescript">{AzureOpenAIEmbeddings}</CodeBlock> ## Vector stores ### Azure AI Search > [Azure AI Search](https://azure.microsoft.com/products/ai-services/ai-search) (formerly known as Azure Search and Azure Cognitive Search) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads on Azure. It supports also vector search using the [k-nearest neighbor](https://en.wikipedia.org/wiki/Nearest_neighbor_search) (kNN) algorithm and also [semantic search](https://learn.microsoft.com/azure/search/semantic-search-overview). <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install -S @langchain/community @langchain/core @azure/search-documents ``` See a [usage example](/docs/integrations/vectorstores/azure_aisearch). ```typescript import { AzureAISearchVectorStore } from "@langchain/community/vectorstores/azure_aisearch"; ``` ### Azure Cosmos DB for NoSQL > [Azure Cosmos DB for NoSQL](https://learn.microsoft.com/azure/cosmos-db/nosql/) provides support for querying items with flexible schemas and native support for JSON. It now offers vector indexing and search. This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors directly in the documents alongside your data. Each document in your database can contain not only traditional schema-free data, but also high-dimensional vectors as other properties of the documents. <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/azure-cosmosdb @langchain/core ``` See a [usage example](/docs/integrations/vectorstores/azure_cosmosdb_nosql). ```typescript import { AzureCosmosDBNoSQLVectorStore } from "@langchain/azure-cosmosdb"; ``` ### Azure Cosmos DB for MongoDB vCore > [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support. You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account’s connection string. Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that’s stored in Azure Cosmos DB. <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/azure-cosmosdb @langchain/core ``` See a [usage example](/docs/integrations/vectorstores/azure_cosmosdb_mongodb). ```typescript import { AzureCosmosDBMongoDBVectorStore } from "@langchain/azure-cosmosdb"; ``` ## Document loaders ### Azure Blob Storage > [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. 
Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. > [Azure Files](https://learn.microsoft.com/azure/storage/files/storage-files-introduction) offers fully managed > file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol, > Network File System (`NFS`) protocol, and `Azure Files REST API`. `Azure Files` are based on the `Azure Blob Storage`. `Azure Blob Storage` is designed for: - Serving images or documents directly to a browser. - Storing files for distributed access. - Streaming video and audio. - Writing to log files. - Storing data for backup and restore, disaster recovery, and archiving. - Storing data for analysis by an on-premises or Azure-hosted service. <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/community @langchain/core @azure/storage-blob ``` See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container). ```typescript import { AzureBlobStorageContainerLoader } from "@langchain/community/document_loaders/web/azure_blob_storage_container"; ``` See a [usage example for the Azure Files](/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file). ```typescript import { AzureBlobStorageFileLoader } from "@langchain/community/document_loaders/web/azure_blob_storage_file"; ``` ## Tools ### Azure Container Apps Dynamic Sessions > [Azure Container Apps dynamic sessions](https://learn.microsoft.com/azure/container-apps/sessions) provide fast access to secure sandboxed environments that are ideal for running code or applications that require strong isolation from other workloads. <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/azure-dynamic-sessions @langchain/core ``` See a [usage example](/docs/integrations/tools/azure_dynamic_sessions). ```typescript import { SessionsPythonREPLTool } from "@langchain/azure-dynamic-sessions"; ```
145789
{ "cells": [ { "cell_type": "raw", "id": "afaf8039", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Azure OpenAI\n", "---" ] }, { "cell_type": "markdown", "id": "e49f1e0d", "metadata": {}, "source": [ "# AzureChatOpenAI\n", "\n", "Azure OpenAI is a Microsoft Azure service that provides powerful language models from OpenAI.\n", "\n", "This will help you getting started with AzureChatOpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html).\n", "\n", "## Overview\n", "### Integration details\n", "\n", "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/chat/azure_chat_openai) | Package downloads | Package latest |\n", "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n", "| [AzureChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html) | [`@langchain/openai`](https://www.npmjs.com/package/@langchain/openai) | ❌ | ✅ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/openai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/openai?style=flat-square&label=%20&) |\n", "\n", "### Model features\n", "\n", "See the links in the table headers below for guides on how to use specific features.\n", "\n", "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n", "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n", "| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | \n", "\n", "## Setup\n", "\n", "[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.\n", "\n", "LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using the new Azure integration in the [OpenAI SDK](https://github.com/openai/openai-node).\n", "\n", "You can learn more about Azure OpenAI and its difference with the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview).\n", "\n", "### Credentials\n", "\n", "If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.\n", "\n", "You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).\n", "\n", "Once you have your instance running, make sure you have the name of your instance and key. You can find the key in the Azure Portal, under the \"Keys and Endpoint\" section of your instance. 
Then, if using Node.js, you can set your credentials as environment variables:\n", "\n", "```bash\n", "AZURE_OPENAI_API_INSTANCE_NAME=<YOUR_INSTANCE_NAME>\n", "AZURE_OPENAI_API_DEPLOYMENT_NAME=<YOUR_DEPLOYMENT_NAME>\n", "AZURE_OPENAI_API_KEY=<YOUR_KEY>\n", "AZURE_OPENAI_API_VERSION=\"2024-02-01\"\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```bash\n", "# export LANGCHAIN_TRACING_V2=\"true\"\n", "# export LANGCHAIN_API_KEY=\"your-api-key\"\n", "```\n", "\n", "### Installation\n", "\n", "The LangChain AzureChatOpenAI integration lives in the `@langchain/openai` package:\n", "\n", "```{=mdx}\n", "\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "\n", "```" ] }, { "cell_type": "markdown", "id": "a38cde65-254d-4219-a441-068766c0d4b5", "metadata": {}, "source": [ "## Instantiation\n", "\n", "Now we can instantiate our model object and generate chat completions:" ] }, { "cell_type": "code", "execution_count": 3, "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", "metadata": {}, "outputs": [], "source": [ "import { AzureChatOpenAI } from \"@langchain/openai\" \n", "\n", "const llm = new AzureChatOpenAI({\n", " model: \"gpt-4o\",\n", " temperature: 0,\n", " maxTokens: undefined,\n", " maxRetries: 2,\n", " azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY\n", " azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME\n", " azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME\n", " azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION\n", "})" ] }, { "cell_type": "markdown", "id": "2b4f3e15", "metadata": {}, "source": [ "## Invocation" ] }, { "cell_type": "code", "execution_count": 4, "id": "62e0dbc3", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-9qrWKByvVrzWMxSn8joRZAklHoB32\",\n", " \"content\": \"J'adore la programmation.\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 8,\n", " \"promptTokens\": 31,\n", " \"totalTokens\": 39\n", " },\n", " \"finish_reason\": \"stop\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 31,\n", " \"output_tokens\": 8,\n", " \"total_tokens\": 39\n", " }\n",
145790
"}\n" ] } ], "source": [ "const aiMsg = await llm.invoke([\n", " [\n", " \"system\",\n", " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n", " ],\n", " [\"human\", \"I love programming.\"],\n", "])\n", "aiMsg" ] }, { "cell_type": "code", "execution_count": 5, "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "J'adore la programmation.\n" ] } ], "source": [ "console.log(aiMsg.content)" ] }, { "cell_type": "markdown", "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", "metadata": {}, "source": [ "## Chaining\n", "\n", "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:" ] }, { "cell_type": "code", "execution_count": 6, "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-9qrWR7WiNjZ3leSG4Wd77cnKEVivv\",\n", " \"content\": \"Ich liebe das Programmieren.\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 6,\n", " \"promptTokens\": 26,\n", " \"totalTokens\": 32\n", " },\n", " \"finish_reason\": \"stop\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 26,\n", " \"output_tokens\": 6,\n", " \"total_tokens\": 32\n", " }\n", "}\n" ] } ], "source": [ "import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n", "\n", "const prompt = ChatPromptTemplate.fromMessages(\n", " [\n", " [\n", " \"system\",\n", " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n", " ],\n", " [\"human\", \"{input}\"],\n", " ]\n", ")\n", "\n", "const chain = prompt.pipe(llm);\n", "await chain.invoke(\n", " {\n", " input_language: \"English\",\n", " output_language: \"German\",\n", " input: \"I love programming.\",\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd", "metadata": {}, "source": [ "## Using Azure Managed Identity\n", "\n", "If you're using Azure Managed Identity, you can configure the credentials like this:" ] }, { "cell_type": "code", "execution_count": 7, "id": "d7f47b2a", "metadata": {}, "outputs": [], "source": [ "import {\n", " DefaultAzureCredential,\n", " getBearerTokenProvider,\n", "} from \"@azure/identity\";\n", "import { AzureChatOpenAI } from \"@langchain/openai\";\n", "\n", "const credentials = new DefaultAzureCredential();\n", "const azureADTokenProvider = getBearerTokenProvider(\n", " credentials,\n", " \"https://cognitiveservices.azure.com/.default\"\n", ");\n", "\n", "const llmWithManagedIdentity = new AzureChatOpenAI({\n", " azureADTokenProvider,\n", " azureOpenAIApiInstanceName: \"<your_instance_name>\",\n", " azureOpenAIApiDeploymentName: \"<your_deployment_name>\",\n", " azureOpenAIApiVersion: \"<api_version>\",\n", "});" ] }, { "cell_type": "markdown", "id": "6a889856", "metadata": {}, "source": [ "## Using a different domain\n", "\n", "If your instance is hosted under a domain other than the default `openai.azure.com`, you'll need to use the alternate `AZURE_OPENAI_BASE_PATH` environment variable.\n", "For example, here's how you would connect to the domain `https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}`:" ] }, { "cell_type": "code", "execution_count": 8, "id": "ace7f876", "metadata": {}, "outputs": [], "source": [ "import { AzureChatOpenAI } from \"@langchain/openai\";\n", 
"\n", "const llmWithDifferentDomain = new AzureChatOpenAI({\n", " temperature: 0.9,\n", " azureOpenAIApiKey: \"<your_key>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY\n", " azureOpenAIApiDeploymentName: \"<your_deployment_name>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME\n", " azureOpenAIApiVersion: \"<api_version>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION\n", " azureOpenAIBasePath:\n", " \"https://westeurope.api.microsoft.com/openai/deployments\", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH\n", "});\n" ] }, { "cell_type": "markdown", "id": "092e7a38", "metadata": {}, "source": [ "## Custom headers\n", "\n", "You can specify custom headers by passing in a `configuration` field:" ] }, { "cell_type": "code", "execution_count": null, "id": "43503a94", "metadata": {}, "outputs": [], "source": [ "import { AzureChatOpenAI } from \"@langchain/openai\";\n", "\n", "const llmWithCustomHeaders = new AzureChatOpenAI({\n", " azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY\n", " azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME\n", " azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME\n", " azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION\n", " configuration: {\n", " defaultHeaders: {\n", " \"x-custom-header\": `SOME_VALUE`,\n", " },\n", " },\n", "});\n", "\n", "await llmWithCustomHeaders.invoke(\"Hi there!\");" ] }, { "cell_type": "markdown", "id": "1a6b849d", "metadata": {}, "source": [ "The `configuration` field also accepts other `ClientOptions` parameters accepted by the official SDK.\n", "\n", "**Note:** The specific header `api-key` currently cannot be overridden in this manner and will pass through the value from `azureOpenAIApiKey`." ] }, {
145791
"cell_type": "markdown", "id": "0ac0310c", "metadata": {}, "source": [ "## Migration from Azure OpenAI SDK\n", "\n", "If you are using the deprecated Azure OpenAI SDK with the `@langchain/azure-openai` package, you can update your code to use the new Azure integration following these steps:\n", "\n", "1. Install the new `@langchain/openai` package and remove the previous `@langchain/azure-openai` package:\n", "\n", "```{=mdx}\n", "\n", "<Npm2Yarn>\n", " @langchain/openai\n", "</Npm2Yarn>\n", "\n", "```\n", "\n", "```bash\n", "npm uninstall @langchain/azure-openai\n", "```\n", "\n", " \n", "2. Update your imports to use the new `AzureChatOpenAI` class from the `@langchain/openai` package:\n", " ```typescript\n", " import { AzureChatOpenAI } from \"@langchain/openai\";\n", " ```\n", "3. Update your code to use the new `AzureChatOpenAI` class and pass the required parameters:\n", "\n", " ```typescript\n", " const model = new AzureChatOpenAI({\n", " azureOpenAIApiKey: \"<your_key>\",\n", " azureOpenAIApiInstanceName: \"<your_instance_name>\",\n", " azureOpenAIApiDeploymentName: \"<your_deployment_name>\",\n", " azureOpenAIApiVersion: \"<api_version>\",\n", " });\n", " ```\n", "\n", " Notice that the constructor now requires the `azureOpenAIApiInstanceName` parameter instead of the `azureOpenAIEndpoint` parameter, and adds the `azureOpenAIApiVersion` parameter to specify the API version.\n", "\n", " - If you were using Azure Managed Identity, you now need to use the `azureADTokenProvider` parameter to the constructor instead of `credentials`, see the [Azure Managed Identity](#using-azure-managed-identity) section for more details.\n", "\n", " - If you were using environment variables, you now have to set the `AZURE_OPENAI_API_INSTANCE_NAME` environment variable instead of `AZURE_OPENAI_API_ENDPOINT`, and add the `AZURE_OPENAI_API_VERSION` environment variable to specify the API version.\n" ] }, { "cell_type": "markdown", "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", "metadata": {}, "source": [ "## API reference\n", "\n", "For detailed documentation of all AzureChatOpenAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html" ] } ], "metadata": { "kernelspec": { "display_name": "TypeScript", "language": "typescript", "name": "tslab" }, "language_info": { "codemirror_mode": { "mode": "typescript", "name": "javascript", "typescript": true }, "file_extension": ".ts", "mimetype": "text/typescript", "name": "typescript", "version": "3.7.2" } }, "nbformat": 4, "nbformat_minor": 5 }
145817
"This property returns a list of \\`ToolCall\\`s. A \\`ToolCall\\` is an object with the following arguments:\n", "\n", "- \\`name\\`: The name of the tool that should be called.\n", "- \\`args\\`: The arguments to that tool.\n", "- \\`id\\`: The id of that tool call.\n", "\n", "#### SystemMessage\n", "\n", "This represents a system message, which tells the model how to behave. Not every model provider supports this.\n", "\n", "#### ToolMessage\n", "\n", "This represents the result of a tool call. In addition to \\`role\\` and \\`content\\`, this message has:\n", "\n", "- a \\`tool_call_id\\` field which conveys the id of the call to the tool that was called to produce this result.\n", "- an \\`artifact\\` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.\n", "\n", "#### (Legacy) FunctionMessage\n", "\n", "This is a legacy message type, corresponding to OpenAI's legacy function-calling API. \\`ToolMessage\\` should be used instead to correspond to the updated tool-calling API.\n", "\n", "This represents the result of a function call. In addition to \\`role\\` and \\`content\\`, this message has a \\`name\\` parameter which conveys the name of the function that was called to produce this result.\n", "\n", "### Prompt templates\n", "\n", "<span data-heading-keywords=\"prompt,prompttemplate,chatprompttemplate\"></span>\n", "\n", "Prompt templates help to translate user input and parameters into instructions for a language model.\n", "This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.\n", "\n", "Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in.\n", "\n", "Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages.\n", "The reason this PromptValue exists is to make it easy to switch between strings and messages.\n", "\n", "There are a few different types of prompt templates:\n", "\n", "#### String PromptTemplates\n", "\n", "These prompt templates are used to format a single string, and generally are used for simpler inputs.\n", "For example, a common way to construct and use a PromptTemplate is as follows:\n", "\n", "\\`\\`\\`typescript\n", "import { PromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const promptTemplate = PromptTemplate.fromTemplate(\n", " \"Tell me a joke about {topic}\"\n", ");\n", "\n", "await promptTemplate.invoke({ topic: \"cats\" });\n", "\\`\\`\\`\n", "\n", "#### ChatPromptTemplates\n", "\n", "These prompt templates are used to format an array of messages. 
These \"templates\" consist of an array of templates themselves.\n", "For example, a common way to construct and use a ChatPromptTemplate is as follows:\n", "\n", "\\`\\`\\`typescript\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const promptTemplate = ChatPromptTemplate.fromMessages([\n", " [\"system\", \"You are a helpful assistant\"],\n", " [\"user\", \"Tell me a joke about {topic}\"],\n", "]);\n", "\n", "await promptTemplate.invoke({ topic: \"cats\" });\n", "\\`\\`\\`\n", "\n", "In the above example, this ChatPromptTemplate will construct two messages when called.\n", "The first is a system message, that has no variables to format.\n", "The second is a HumanMessage, and will be formatted by the \\`topic\\` variable the user passes in.\n", "\n", "#### MessagesPlaceholder\n", "\n", "<span data-heading-keywords=\"messagesplaceholder\"></span>\n", "\n", "This prompt template is responsible for adding an array of messages in a particular place.\n", "In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.\n", "But what if we wanted the user to pass in an array of messages that we would slot into a particular spot?\n", "This is how you use MessagesPlaceholder.\n", "\n", "\\`\\`\\`typescript\n", "import {\n", " ChatPromptTemplate,\n", " MessagesPlaceholder,\n", "} from \"@langchain/core/prompts\";\n", "import { HumanMessage } from \"@langchain/core/messages\";\n", "\n", "const promptTemplate = ChatPromptTemplate.fromMessages([\n", " [\"system\", \"You are a helpful assistant\"],\n", " new MessagesPlaceholder(\"msgs\"),\n", "]);\n", "\n", "promptTemplate.invoke({ msgs: [new HumanMessage({ content: \"hi!\" })] });\n", "\\`\\`\\`\n", "\n", "This will produce an array of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.\n", "If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).\n", "This is useful for letting an array of messages be slotted into a particular spot.\n", "\n", "An alternative way to accomplish the same thing without using the \\`MessagesPlaceholder\\` class explicitly is:\n", "\n", "\\`\\`\\`typescript\n", "const promptTemplate = ChatPromptTemplate.fromMessages([\n", " [\"system\", \"You are a helpful assistant\"],\n", " [\"placeholder\", \"{msgs}\"], // <-- This is the changed part\n", "]);\n", "\\`\\`\\`\n", "\n", "For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates).\n", "\n", "### Example Selectors\n", "\n", "One common prompting technique for achieving better performance is to include examples as part of the prompt.\n", "This gives the language model concrete examples of how it should behave.\n", "Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.\n", "Example Selectors are classes responsible for selecting and then formatting examples into prompts.\n", "\n", "For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors).\n", "\n", "### Output parsers\n", "\n", "<span data-heading-keywords=\"output parser\"></span>\n", "\n", ":::note\n", "\n", "The information here refers to parsers that take a text output from a model try to parse it into a more structured representation.\n", "More and more models are supporting function (or tool) calling, which handles this automatically.\n", "It is 
recommended to use function/tool calling rather than output parsing.\n", "See documentation for that [here](/docs/concepts/#function-tool-calling).\n", "\n", ":::\n", "\n", "Responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks.\n", "Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.\n", "\n", "There are two main methods an output parser must implement:\n", "\n", "- \"Get format instructions\": A method which returns a string containing instructions for how the output of a language model should be formatted.\n", "- \"Parse\": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.\n", "\n", "And then one optional one:\n", "\n", "- \"Parse with prompt\": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.\n",
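For a rough sketch of what the two main methods look like in practice, here they are on the built-in `CommaSeparatedListOutputParser` (the specific parser is only an illustration; any parser exposing `getFormatInstructions` and `parse` follows the same shape):

```typescript
import { CommaSeparatedListOutputParser } from "@langchain/core/output_parsers";

const listParser = new CommaSeparatedListOutputParser();

// "Get format instructions": a string you can interpolate into your prompt
console.log(listParser.getFormatInstructions());

// "Parse": take the raw text returned by a model and turn it into a structure
const items = await listParser.parse("red, green, blue");
console.log(items); // ["red", "green", "blue"]
```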
145823
"Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling.\n", "Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.\n", ":::\n", "\n", "For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).\n", "\n", "#### Multimodality\n", "\n", "Some chat models are multimodal, accepting images, audio and even video as inputs.\n", "These are still less common, meaning model providers haven't standardized on the \"best\" way to define the API.\n", "Multimodal outputs are even less common. As such, we've kept our multimodal abstractions fairly light weight\n", "and plan to further solidify the multimodal APIs and interaction patterns as the field matures.\n", "\n", "In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format.\n", "So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.\n", "\n", "For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).\n", "\n", "### LLMs\n", "\n", "<span data-heading-keywords=\"llm,llms\"></span>\n", "\n", ":::caution\n", "Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),\n", "even for non-chat use cases.\n", "\n", "You are probably looking for [the section above instead](/docs/concepts/#chat-models).\n", ":::\n", "\n", "Language models that takes a string as input and returns a string.\n", "These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above).\n", "\n", "Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.\n", "This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).\n", "When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.\n", "\n", "LangChain does not host any LLMs, rather we rely on third party integrations.\n", "\n", "For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).\n", "\n", "### Message types\n", "\n", "Some language models take an array of messages as input and return a message.\n", "There are a few different types of messages.\n", "All messages have a \\`role\\`, \\`content\\`, and \\`response_metadata\\` property.\n", "\n", "The \\`role\\` describes WHO is saying the message.\n", "LangChain has different message classes for different roles.\n", "\n", "The \\`content\\` property describes the content of the message.\n", "This can be a few different things:\n", "\n", "- A string (most models deal this type of content)\n", "- A List of objects (this is used for multi-modal input, where the object contains information about that input type and that input location)\n", "\n", "#### HumanMessage\n", "\n", "This represents a message from the user.\n", "\n", "#### AIMessage\n", "\n", "This represents a message from the model. In addition to the \\`content\\` property, these messages also have:\n", "\n", "**\\`response_metadata\\`**\n", "\n", "The \\`response_metadata\\` property contains additional metadata about the response. 
The data here is often specific to each model provider.\n", "This is where information like log-probs and token usage may be stored.\n", "\n", "**\\`tool_calls\\`**\n", "\n", "These represent a decision from an language model to call a tool. They are included as part of an \\`AIMessage\\` output.\n", "They can be accessed from there with the \\`.tool_calls\\` property.\n", "\n", "This property returns a list of \\`ToolCall\\`s. A \\`ToolCall\\` is an object with the following arguments:\n", "\n", "- \\`name\\`: The name of the tool that should be called.\n", "- \\`args\\`: The arguments to that tool.\n", "- \\`id\\`: The id of that tool call.\n", "\n", "#### SystemMessage\n", "\n", "This represents a system message, which tells the model how to behave. Not every model provider supports this.\n", "\n", "#### ToolMessage\n", "\n", "This represents the result of a tool call. In addition to \\`role\\` and \\`content\\`, this message has:\n", "\n", "- a \\`tool_call_id\\` field which conveys the id of the call to the tool that was called to produce this result.\n", "- an \\`artifact\\` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.\n", "\n", "#### (Legacy) FunctionMessage\n", "\n", "This is a legacy message type, corresponding to OpenAI's legacy function-calling API. \\`ToolMessage\\` should be used instead to correspond to the updated tool-calling API.\n", "\n", "This represents the result of a function call. In addition to \\`role\\` and \\`content\\`, this message has a \\`name\\` parameter which conveys the name of the function that was called to produce this result.\n", "\n", "### Prompt templates\n", "\n", "<span data-heading-keywords=\"prompt,prompttemplate,chatprompttemplate\"></span>\n", "\n", "Prompt templates help to translate user input and parameters into instructions for a language model.\n", "This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.\n", "\n", "Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in.\n", "\n", "Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages.\n", "The reason this PromptValue exists is to make it easy to switch between strings and messages.\n", "\n", "There are a few different types of prompt templates:\n", "\n", "#### String PromptTemplates\n", "\n", "These prompt templates are used to format a single string, and generally are used for simpler inputs.\n", "For example, a common way to construct and use a PromptTemplate is as follows:\n", "\n", "\\`\\`\\`typescript\n", "import { PromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const promptTemplate = PromptTemplate.fromTemplate(\n", " \"Tell me a joke about {topic}\"\n", ");\n", "\n", "await promptTemplate.invoke({ topic: \"cats\" });\n", "\\`\\`\\`\n", "\n", "#### ChatPromptTemplates\n", "\n", "These prompt templates are used to format an array of messages. 
These \"templates\" consist of an array of templates themselves.\n", "For example, a common way to construct and use a ChatPromptTemplate is as follows:\n", "\n", "\\`\\`\\`typescript\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const promptTemplate = ChatPromptTemplate.fromMessages([\n", " [\"system\", \"You are a helpful assistant\"],\n", " [\"user\", \"Tell me a joke about {topic}\"],\n", "]);\n", "\n", "await promptTemplate.invoke({ topic: \"cats\" });\n", "\\`\\`\\`\n", "\n", "In the above example, this ChatPromptTemplate will construct two messages when called.\n", "The first is a system message, that has no variables to format.\n",
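As a minimal sketch of the PromptValue point above (reusing the `promptTemplate` from the previous example, which is an assumption of this snippet), the value returned by `invoke` can be viewed either as messages or as a plain string:

```typescript
// Assumes the ChatPromptTemplate `promptTemplate` defined above.
const promptValue = await promptTemplate.invoke({ topic: "cats" });

// As an array of messages (here: a SystemMessage followed by a HumanMessage)
console.log(promptValue.toChatMessages());

// As a single formatted string
console.log(promptValue.toString());
```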
145856
{ "cells": [ { "cell_type": "raw", "id": "1957f5cb", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Chroma\n", "---" ] }, { "cell_type": "markdown", "id": "ef1f0986", "metadata": {}, "source": [ "# Chroma\n", "\n", "[Chroma](https://docs.trychroma.com/getting-started) is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.\n", "\n", "This guide provides a quick overview for getting started with Chroma [`vector stores`](/docs/concepts/#vectorstores). For detailed documentation of all `Chroma` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html)." ] }, { "cell_type": "markdown", "id": "c824838d", "metadata": {}, "source": [ "## Overview\n", "\n", "### Integration details\n", "\n", "| Class | Package | [PY support](https://python.langchain.com/docs/integrations/vectorstores/chroma/) | Package latest |\n", "| :--- | :--- | :---: | :---: |\n", "| [`Chroma`](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) | [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) | ✅ | ![NPM - Version](https://img.shields.io/npm/v/@langchain/community?style=flat-square&label=%20&) |" ] }, { "cell_type": "markdown", "id": "36fdc060", "metadata": {}, "source": [ "## Setup\n", "\n", "To use Chroma vector stores, you'll need to install the `@langchain/community` integration package along with the [Chroma JS SDK](https://www.npmjs.com/package/chromadb) as a peer dependency.\n", "\n", "This guide will also use [OpenAI embeddings](/docs/integrations/text_embedding/openai), which require you to install the `@langchain/openai` integration package. 
You can also use [other supported embeddings models](/docs/integrations/text_embedding) if you wish.\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/community @langchain/openai @langchain/core chromadb\n", "</Npm2Yarn>\n", "```\n", "\n", "Next, follow the following instructions to run Chroma with Docker on your computer:\n", "\n", "```\n", "docker pull chromadb/chroma \n", "docker run -p 8000:8000 chromadb/chroma\n", "```\n", "\n", "You can see alternative setup instructions [in this guide](https://docs.trychroma.com/getting-started).\n", "\n", "### Credentials\n", "\n", "If you are running Chroma through Docker, you do not need to provide any credentials.\n", "\n", "If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:\n", "\n", "```typescript\n", "process.env.OPENAI_API_KEY = \"YOUR_API_KEY\";\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGCHAIN_TRACING_V2=\"true\"\n", "// process.env.LANGCHAIN_API_KEY=\"your-api-key\"\n", "```" ] }, { "cell_type": "markdown", "id": "93df377e", "metadata": {}, "source": [ "## Instantiation" ] }, { "cell_type": "code", "execution_count": 5, "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052", "metadata": { "tags": [] }, "outputs": [], "source": [ "import { Chroma } from \"@langchain/community/vectorstores/chroma\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const embeddings = new OpenAIEmbeddings({\n", " model: \"text-embedding-3-small\",\n", "});\n", "\n", "const vectorStore = new Chroma(embeddings, {\n", " collectionName: \"a-test-collection\",\n", " url: \"http://localhost:8000\", // Optional, will default to this value\n", " collectionMetadata: {\n", " \"hnsw:space\": \"cosine\",\n", " }, // Optional, can be used to specify the distance method of the embedding space https://docs.trychroma.com/usage-guide#changing-the-distance-function\n", "});" ] }, { "cell_type": "markdown", "id": "ac6071d4", "metadata": {}, "source": [ "## Manage vector store\n", "\n", "### Add items to vector store" ] }, { "cell_type": "code", "execution_count": 8, "id": "17f5efc0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ '1', '2', '3', '4' ]\n" ] } ], "source": [ "import type { Document } from \"@langchain/core/documents\";\n", "\n", "const document1: Document = {\n", " pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", 
"metadata": {}, "source": [ "### Delete items from vector store\n", "\n", "You can delete documents from Chroma by id as follows:" ] }, { "cell_type": "code", "execution_count": 10, "id": "ef61e188", "metadata": {}, "outputs": [], "source": [ "await vectorStore.delete({ ids: [\"4\"] });" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 11, "id": "aa0a16fa", "metadata": {},
145860
{ "cells": [ { "cell_type": "raw", "id": "1957f5cb", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: HNSWLib\n", "sidebar_class_name: node-only\n", "---" ] }, { "cell_type": "markdown", "id": "ef1f0986", "metadata": {}, "source": [ "# HNSWLib\n", "\n", "```{=mdx}\n", ":::tip Compatibility\n", "Only available on Node.js.\n", ":::\n", "```\n", "\n", "HNSWLib is an in-memory vector store that can be saved to a file. It uses the [HNSWLib library](https://github.com/nmslib/hnswlib).\n", "\n", "This guide provides a quick overview for getting started with HNSWLib [vector stores](/docs/concepts/#vectorstores). For detailed documentation of all `HNSWLib` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html)." ] }, { "cell_type": "markdown", "id": "c824838d", "metadata": {}, "source": [ "## Overview\n", "\n", "### Integration details\n", "\n", "| Class | Package | PY support | Package latest |\n", "| :--- | :--- | :---: | :---: |\n", "| [`HNSWLib`](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) | [`@langchain/community`](https://npmjs.com/@langchain/community) | ❌ | ![NPM - Version](https://img.shields.io/npm/v/@langchain/community?style=flat-square&label=%20&) |" ] }, { "cell_type": "markdown", "id": "36fdc060", "metadata": {}, "source": [ "## Setup\n", "\n", "To use HNSWLib vector stores, you'll need to install the `@langchain/community` integration package with the [`hnswlib-node`](https://www.npmjs.com/package/hnswlib-node) package as a peer dependency.\n", "\n", "This guide will also use [OpenAI embeddings](/docs/integrations/text_embedding/openai), which require you to install the `@langchain/openai` integration package. 
You can also use [other supported embeddings models](/docs/integrations/text_embedding) if you wish.\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/community hnswlib-node @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "```\n", "\n", "```{=mdx}\n", ":::caution\n", "\n", "**On Windows**, you might need to install [Visual Studio](https://visualstudio.microsoft.com/downloads/) first in order to properly build the `hnswlib-node` package.\n", "\n", ":::\n", "```\n", "\n", "### Credentials\n", "\n", "Because HNSWLib runs locally, you do not need any credentials to use it.\n", "\n", "If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:\n", "\n", "```typescript\n", "process.env.OPENAI_API_KEY = \"YOUR_API_KEY\";\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGCHAIN_TRACING_V2=\"true\"\n", "// process.env.LANGCHAIN_API_KEY=\"your-api-key\"\n", "```" ] }, { "cell_type": "markdown", "id": "93df377e", "metadata": {}, "source": [ "## Instantiation" ] }, { "cell_type": "code", "execution_count": 1, "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052", "metadata": { "tags": [] }, "outputs": [], "source": [ "import { HNSWLib } from \"@langchain/community/vectorstores/hnswlib\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const embeddings = new OpenAIEmbeddings({\n", " model: \"text-embedding-3-small\",\n", "});\n", "\n", "const vectorStore = await HNSWLib.fromDocuments([], embeddings);" ] }, { "cell_type": "markdown", "id": "ac6071d4", "metadata": {}, "source": [ "## Manage vector store\n", "\n", "### Add items to vector store" ] }, { "cell_type": "code", "execution_count": 2, "id": "17f5efc0", "metadata": {}, "outputs": [], "source": [ "import type { Document } from \"@langchain/core/documents\";\n", "\n", "const document1: Document = {\n", " pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents);" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "Deletion and ids for individual documents are not currently supported.\n", "\n", "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. 
\n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 4, "id": "aa0a16fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const filter = (doc) => doc.metadata.source === \"https://example.com\";\n", "\n", "const similaritySearchResults = await vectorStore.similaritySearch(\"biology\", 2, filter);\n", "\n", "for (const doc of similaritySearchResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "3ed9d733", "metadata": {}, "source": [
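If you also want the similarity scores back, a sketch along the same lines (reusing the `vectorStore` and `filter` from the example above):

```typescript
// Reuses the HNSWLib `vectorStore` and the functional `filter` defined above;
// returns [Document, score] pairs instead of bare documents.
const similaritySearchWithScoreResults =
  await vectorStore.similaritySearchWithScore("biology", 2, filter);

for (const [doc, score] of similaritySearchWithScoreResults) {
  console.log(
    `* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`
  );
}
```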
145871
# Typesense Vector store that utilizes the Typesense search engine. ### Basic Usage import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/openai @langchain/community @langchain/core ``` ```typescript import { Typesense, TypesenseConfig, } from "@lanchain/community/vectorstores/typesense"; import { OpenAIEmbeddings } from "@langchain/openai"; import { Client } from "typesense"; import { Document } from "@langchain/core/documents"; const vectorTypesenseClient = new Client({ nodes: [ { // Ideally should come from your .env file host: "...", port: 123, protocol: "https", }, ], // Ideally should come from your .env file apiKey: "...", numRetries: 3, connectionTimeoutSeconds: 60, }); const typesenseVectorStoreConfig = { // Typesense client typesenseClient: vectorTypesenseClient, // Name of the collection to store the vectors in schemaName: "your_schema_name", // Optional column names to be used in Typesense columnNames: { // "vec" is the default name for the vector column in Typesense but you can change it to whatever you want vector: "vec", // "text" is the default name for the text column in Typesense but you can change it to whatever you want pageContent: "text", // Names of the columns that you will save in your typesense schema and need to be retrieved as metadata when searching metadataColumnNames: ["foo", "bar", "baz"], }, // Optional search parameters to be passed to Typesense when searching searchParams: { q: "*", filter_by: "foo:[fooo]", query_by: "", }, // You can override the default Typesense import function if you want to do something more complex // Default import function: // async importToTypesense< // T extends Record<string, unknown> = Record<string, unknown> // >(data: T[], collectionName: string) { // const chunkSize = 2000; // for (let i = 0; i < data.length; i += chunkSize) { // const chunk = data.slice(i, i + chunkSize); // await this.caller.call(async () => { // await this.client // .collections<T>(collectionName) // .documents() // .import(chunk, { action: "emplace", dirty_values: "drop" }); // }); // } // } import: async (data, collectionName) => { await vectorTypesenseClient .collections(collectionName) .documents() .import(data, { action: "emplace", dirty_values: "drop" }); }, } satisfies TypesenseConfig; /** * Creates a Typesense vector store from a list of documents. * Will update documents if there is a document with the same id, at least with the default import function. * @param documents list of documents to create the vector store from * @returns Typesense vector store */ const createVectorStoreWithTypesense = async (documents: Document[] = []) => Typesense.fromDocuments( documents, new OpenAIEmbeddings(), typesenseVectorStoreConfig ); /** * Returns a Typesense vector store from an existing index. 
* @returns Typesense vector store */ const getVectorStoreWithTypesense = async () => new Typesense(new OpenAIEmbeddings(), typesenseVectorStoreConfig); // Do a similarity search const vectorStore = await getVectorStoreWithTypesense(); const documents = await vectorStore.similaritySearch("hello world"); // Add filters based on metadata with the search parameters of Typesense // will exclude documents with author:JK Rowling, so if Joe Rowling & JK Rowling exists, only Joe Rowling will be returned vectorStore.similaritySearch("Rowling", undefined, { filter_by: "author:!=JK Rowling", }); // Delete a document vectorStore.deleteDocuments(["document_id_1", "document_id_2"]); ``` ### Constructor Before starting, create a schema in Typesense with an id, a field for the vector and a field for the text. Add as many other fields as needed for the metadata. - `constructor(embeddings: Embeddings, config: TypesenseConfig)`: Constructs a new instance of the `Typesense` class. - `embeddings`: An instance of the `Embeddings` class used for embedding documents. - `config`: Configuration object for the Typesense vector store. - `typesenseClient`: Typesense client instance. - `schemaName`: Name of the Typesense schema in which documents will be stored and searched. - `searchParams` (optional): Typesense search parameters. Default is `{ q: '*', per_page: 5, query_by: '' }`. - `columnNames` (optional): Column names configuration. - `vector` (optional): Vector column name. Default is `'vec'`. - `pageContent` (optional): Page content column name. Default is `'text'`. - `metadataColumnNames` (optional): Metadata column names. Default is an empty array `[]`. - `import` (optional): Replace the default import function for importing data to Typesense. This can affect the functionality of updating documents. ### Methods - `async addDocuments(documents: Document[]): Promise<void>`: Adds documents to the vector store. The documents will be updated if there is a document with the same ID. - `static async fromDocuments(docs: Document[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of documents. Documents are added to the vector store during construction. - `static async fromTexts(texts: string[], metadatas: object[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of texts and associated metadata. Texts are converted to documents and added to the vector store during construction. - `async similaritySearch(query: string, k?: number, filter?: Record<string, unknown>): Promise<Document[]>`: Searches for similar documents based on a query. Returns an array of similar documents. - `async deleteDocuments(documentIds: string[]): Promise<void>`: Deletes documents from the vector store based on their IDs. ## Related - Vector store [conceptual guide](/docs/concepts/#vectorstores) - Vector store [how-to guides](/docs/how_to/#vectorstores)
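To illustrate the constructor and method signatures listed above, here is a short sketch that builds a store from raw texts and queries it. It reuses `Typesense`, `OpenAIEmbeddings`, and the `typesenseVectorStoreConfig` from the earlier example; the texts and metadata values are made-up placeholders.

```typescript
// A sketch reusing `typesenseVectorStoreConfig`, `Typesense`, and `OpenAIEmbeddings`
// from the example above; texts and metadatas are illustrative placeholders.
const texts = ["hello world", "hola mundo"];
const metadatas = [{ foo: "en" }, { foo: "es" }];

// fromTexts pairs each text with its metadata object, embeds them,
// and imports them into the configured Typesense schema.
const store = await Typesense.fromTexts(
  texts,
  metadatas,
  new OpenAIEmbeddings(),
  typesenseVectorStoreConfig
);

const hits = await store.similaritySearch("greeting", 2);
console.log(hits.map((doc) => doc.pageContent));
```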
145877
--- sidebar_class_name: node-only --- import CodeBlock from "@theme/CodeBlock"; # Tigris Tigris makes it easy to build AI applications with vector embeddings. It is a fully managed cloud-native database that allows you store and index documents and vector embeddings for fast and scalable vector search. :::tip Compatibility Only available on Node.js. ::: ## Setup ### 1. Install the Tigris SDK Install the SDK as follows ```bash npm2yarn npm install -S @tigrisdata/vector ``` ### 2. Fetch Tigris API credentials You can sign up for a free Tigris account [here](https://www.tigrisdata.com/). Once you have signed up for the Tigris account, create a new project called `vectordemo`. Next, make a note of the `clientId` and `clientSecret`, which you can get from the Application Keys section of the project. ## Index docs import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install -S @langchain/openai ``` ```typescript import { VectorDocumentStore } from "@tigrisdata/vector"; import { Document } from "langchain/document"; import { OpenAIEmbeddings } from "@langchain/openai"; import { TigrisVectorStore } from "langchain/vectorstores/tigris"; const index = new VectorDocumentStore({ connection: { serverUrl: "api.preview.tigrisdata.cloud", projectName: process.env.TIGRIS_PROJECT, clientId: process.env.TIGRIS_CLIENT_ID, clientSecret: process.env.TIGRIS_CLIENT_SECRET, }, indexName: "examples_index", numDimensions: 1536, // match the OpenAI embedding size }); const docs = [ new Document({ metadata: { foo: "bar" }, pageContent: "tigris is a cloud-native vector db", }), new Document({ metadata: { foo: "bar" }, pageContent: "the quick brown fox jumped over the lazy dog", }), new Document({ metadata: { baz: "qux" }, pageContent: "lorem ipsum dolor sit amet", }), new Document({ metadata: { baz: "qux" }, pageContent: "tigris is a river", }), ]; await TigrisVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), { index }); ``` ## Query docs import Search from "@examples/indexes/vector_stores/tigris/search.ts"; ```typescript import { VectorDocumentStore } from "@tigrisdata/vector"; import { OpenAIEmbeddings } from "@langchain/openai"; import { TigrisVectorStore } from "langchain/vectorstores/tigris"; const index = new VectorDocumentStore({ connection: { serverUrl: "api.preview.tigrisdata.cloud", projectName: process.env.TIGRIS_PROJECT, clientId: process.env.TIGRIS_CLIENT_ID, clientSecret: process.env.TIGRIS_CLIENT_SECRET, }, indexName: "examples_index", numDimensions: 1536, // match the OpenAI embedding size }); const vectorStore = await TigrisVectorStore.fromExistingIndex( new OpenAIEmbeddings(), { index } ); /* Search the vector DB independently with metadata filters */ const results = await vectorStore.similaritySearch("tigris", 1, { "metadata.foo": "bar", }); console.log(JSON.stringify(results, null, 2)); /* [ Document { pageContent: 'tigris is a cloud-native vector db', metadata: { foo: 'bar' } } ] */ ``` ## Related - Vector store [conceptual guide](/docs/concepts/#vectorstores) - Vector store [how-to guides](/docs/how_to/#vectorstores)
145882
# libSQL [Turso](https://turso.tech) is a SQLite-compatible database built on [libSQL](https://docs.turso.tech/libsql), the Open Contribution fork of SQLite. Vector Similiarity Search is built into Turso and libSQL as a native datatype, enabling you to store and query vectors directly in the database. LangChain.js supports using a local libSQL, or remote Turso database as a vector store, and provides a simple API to interact with it. This guide provides a quick overview for getting started with libSQL vector stores. For detailed documentation of all libSQL features and configurations head to the API reference. ## Overview ## Integration details | Class | Package | JS support | Package latest | | ------------------- | ---------------------- | ---------- | ----------------------------------------------------------------- | | `LibSQLVectorStore` | `@langchain/community` | ✅ | ![npm version](https://img.shields.io/npm/v/@langchain/community) | ## Setup To use libSQL vector stores, you'll need to create a Turso account or set up a local SQLite database, and install the `@langchain/community` integration package. This guide will also use OpenAI embeddings, which require you to install the `@langchain/openai` integration package. You can also use other supported embeddings models if you wish. You can use local SQLite when working with the libSQL vector store, or use a hosted Turso Database. import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @libsql/client @langchain/openai @langchain/community ``` Now it's time to create a database. You can create one locally, or use a hosted Turso database. ### Local libSQL Create a new local SQLite file and connect to the shell: ```bash sqlite3 file.db ``` ### Hosted Turso Visit [sqlite.new](https://sqlite.new) to create a new database, give it a name, and create a database auth token. Make sure to copy the database auth token, and the database URL, it should look something like: ```text libsql://[database-name]-[your-username].turso.io ``` ### Setup the table and index Execute the following SQL command to create a new table or add the embedding column to an existing table. Make sure to mopdify the following parts of the SQL: - `TABLE_NAME` is the name of the table you want to create. - `content` is used to store the `Document.pageContent` values. - `metadata` is used to store the `Document.metadata` object. - `EMBEDDING_COLUMN` is used to store the vector values, use the dimensions size used by the model you plan to use (1536 for OpenAI). ```sql CREATE TABLE IF NOT EXISTS TABLE_NAME ( id INTEGER PRIMARY KEY AUTOINCREMENT, content TEXT, metadata TEXT, EMBEDDING_COLUMN F32_BLOB(1536) -- 1536-dimensional f32 vector for OpenAI ); ``` Now create an index on the `EMBEDDING_COLUMN` column: ```sql CREATE INDEX IF NOT EXISTS idx_TABLE_NAME_EMBEDDING_COLUMN ON TABLE_NAME(libsql_vector_idx(EMBEDDING_COLUMN)); ``` Make sure to replace the `TABLE_NAME` and `EMBEDDING_COLUMN` with the values you used in the previous step. ## Instantiation To initialize a new `LibSQL` vector store, you need to provide the database URL and Auth Token when working remotely, or by passing the filename for a local SQLite. 
```typescript
import { LibSQLVectorStore } from "@langchain/community/vectorstores/libsql";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@libsql/client";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const libsqlClient = createClient({
  url: "libsql://[database-name]-[your-username].turso.io",
  authToken: "...",
});

// Local instantiation
// const libsqlClient = createClient({
//   url: "file:./dev.db",
// });

const vectorStore = new LibSQLVectorStore(embeddings, {
  db: libsqlClient,
  tableName: "TABLE_NAME",
  embeddingColumn: "EMBEDDING_COLUMN",
  dimensions: 1536,
});
```

## Manage vector store

### Add items to vector store

```typescript
import type { Document } from "@langchain/core/documents";

const documents: Document[] = [
  { pageContent: "Hello", metadata: { topic: "greeting" } },
  { pageContent: "Bye bye", metadata: { topic: "greeting" } },
];

await vectorStore.addDocuments(documents);
```

### Delete items from vector store

```typescript
await vectorStore.deleteDocuments({ ids: [1, 2] });
```

## Query vector store

Once you have inserted the documents, you can query the vector store.

### Query directly

Performing a simple similarity search can be done as follows:

```typescript
const similaritySearchResults = await vectorStore.similaritySearch("hola", 1);

for (const doc of similaritySearchResults) {
  console.log(`${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);
}
```

For similarity search with scores:

```typescript
const similaritySearchWithScoreResults =
  await vectorStore.similaritySearchWithScore("hola", 1);

for (const [doc, score] of similaritySearchWithScoreResults) {
  console.log(
    `${score.toFixed(3)} ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`
  );
}
```

## API reference

For detailed documentation of all `LibSQLVectorStore` features and configurations head to the API reference.

## Related

- Vector store [conceptual guide](/docs/concepts/#vectorstores)
- Vector store [how-to guides](/docs/how_to/#vectorstores)
145892
" pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", "metadata": {}, "source": [ "### Delete items from vector store" ] }, { "cell_type": "code", "execution_count": 3, "id": "ef61e188", "metadata": {}, "outputs": [], "source": [ "await vectorStore.delete({ ids: [\"4\"] });" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 6, "id": "aa0a16fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const filter = { source: \"https://example.com\" };\n", "\n", "const similaritySearchResults = await vectorStore.similaritySearch(\"biology\", 2, filter);\n", "\n", "for (const doc of similaritySearchResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "3ed9d733", "metadata": {}, "source": [ "If you want to execute a similarity search and receive the corresponding scores you can run:" ] }, { "cell_type": "code", "execution_count": 7, "id": "5efd2eaa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* [SIM=0.165] The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* [SIM=0.148] Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const similaritySearchWithScoreResults = await vectorStore.similaritySearchWithScore(\"biology\", 2, filter)\n", "\n", "for (const [doc, score] of similaritySearchWithScoreResults) {\n", " console.log(`* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "180b0e66", "metadata": {}, "source": [ "### Metadata Query Builder Filtering\n", "\n", "You can also use query builder-style filtering similar to how the [Supabase JavaScript library](https://supabase.com/docs/reference/javascript/using-filters) works instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (-> for integer or ->> for text) as defined in [Postgrest API documentation](https://postgrest.org/en/stable/references/api/tables_views.html#json-columns) and specify the data type of the property (e.g. 
the column should look something like `metadata->some_int_prop_name::int`)." ] }, { "cell_type": "code", "execution_count": 9, "id": "e3287768", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "import { SupabaseFilterRPCCall } from \"@langchain/community/vectorstores/supabase\";\n", "\n", "const funcFilter: SupabaseFilterRPCCall = (rpc) =>\n", " rpc.filter(\"metadata->>source\", \"eq\", \"https://example.com\");\n", "\n", "const funcFilterSearchResults = await vectorStore.similaritySearch(\"biology\", 2, funcFilter);\n", "\n", "for (const doc of funcFilterSearchResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "0c235cdc", "metadata": {}, "source": [ "### Query by turning into retriever\n", "\n", "You can also transform the vector store into a [retriever](/docs/concepts/#retrievers) for easier usage in your chains. " ] }, { "cell_type": "code", "execution_count": 10, "id": "f3460093", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\n", " Document {\n", " pageContent: 'The powerhouse of the cell is the mitochondria',\n", " metadata: { source: 'https://example.com' },\n", " id: undefined\n", " },\n", " Document {\n", " pageContent: 'Mitochondria are made out of lipids',\n", " metadata: { source: 'https://example.com' },\n", " id: undefined\n", " }\n", "]\n" ] } ], "source": [ "const retriever = vectorStore.asRetriever({\n", " // Optional filter\n", " filter: filter,\n", " k: 2,\n", "});\n", "await retriever.invoke(\"biology\");" ] }, { "cell_type": "markdown", "id": "e2e0a211", "metadata": {}, "source": [ "### Usage for retrieval-augmented generation\n", "\n", "For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n", "\n", "- [Tutorials: working with external knowledge](/docs/tutorials/#working-with-external-knowledge).\n", "- [How-to: Question and answer with RAG](/docs/how_to/#qa-with-rag)\n", "- [Retrieval conceptual docs](/docs/concepts#retrieval)" ] }, { "cell_type": "markdown", "id": "8a27244f", "metadata": {}, "source": [ "## API reference\n", "\n", "For detailed documentation of all `SupabaseVectorStore` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html)."
145901
You can specify the fields to return from the document using `fields` parameter in the filter during searches. These fields are returned as part of the `metadata` object. You can fetch any field that is stored in the index. The `textKey` of the document is returned as part of the document's `pageContent`. If you do not specify any fields to be fetched, all the fields stored in the index are returned. If you want to fetch one of the fields in the metadata, you need to specify it using `.` For example, to fetch the `source` field in the metadata, you need to use `metadata.source`. ```typescript const result = await store.similaritySearch(query, 1, { fields: ["metadata.source"], }); console.log(result[0]); ``` ## Hybrid Search Couchbase allows you to do hybrid searches by combining vector search results with searches on non-vector fields of the document like the `metadata` object. The results will be based on the combination of the results from both vector search and the searches supported by full text search service. The scores of each of the component searches are added up to get the total score of the result. To perform hybrid searches, there is an optional key, `searchOptions` in `fields` parameter that can be passed to all the similarity searches. The different search/query possibilities for the `searchOptions` can be found [here](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object). ### Create Diverse Metadata for Hybrid Search In order to simulate hybrid search, let us create some random metadata from the existing documents. We uniformly add three fields to the metadata, `date` between 2010 & 2020, `rating` between 1 & 5 and `author` set to either John Doe or Jane Doe. We will also declare few sample queries. ```typescript for (let i = 0; i < docs.length; i += 1) { docs[i].metadata.date = `${2010 + (i % 10)}-01-01`; docs[i].metadata.rating = 1 + (i % 5); docs[i].metadata.author = ["John Doe", "Jane Doe"][i % 2]; } const store = await CouchbaseVectorStore.fromDocuments( docs, embeddings, couchbaseConfig ); const query = "What did the president say about Ketanji Brown Jackson"; const independenceQuery = "Any mention about independence?"; ``` ### Example: Search by Exact Value We can search for exact matches on a textual field like the author in the `metadata` object. ```typescript const exactValueResult = await store.similaritySearch(query, 4, { fields: ["metadata.author"], searchOptions: { query: { field: "metadata.author", match: "John Doe" }, }, }); console.log(exactValueResult[0]); ``` ### Example: Search by Partial Match We can search for partial matches by specifying a fuzziness for the search. This is useful when you want to search for slight variations or misspellings of a search query. Here, "Johny" is close (fuzziness of 1) to "John Doe". ```typescript const partialMatchResult = await store.similaritySearch(query, 4, { fields: ["metadata.author"], searchOptions: { query: { field: "metadata.author", match: "Johny", fuzziness: 1 }, }, }); console.log(partialMatchResult[0]); ``` ### Example: Search by Date Range Query We can search for documents that are within a date range query on a date field like `metadata.date`. 
```typescript const dateRangeResult = await store.similaritySearch(independenceQuery, 4, { fields: ["metadata.date", "metadata.author"], searchOptions: { query: { start: "2016-12-31", end: "2017-01-02", inclusiveStart: true, inclusiveEnd: false, field: "metadata.date", }, }, }); console.log(dateRangeResult[0]); ``` ### Example: Search by Numeric Range Query We can search for documents that are within a range for a numeric field like `metadata.rating`. ```typescript const ratingRangeResult = await store.similaritySearch(independenceQuery, 4, { fields: ["metadata.rating"], searchOptions: { query: { min: 3, max: 5, inclusiveMin: false, inclusiveMax: true, field: "metadata.rating", }, }, }); console.log(ratingRangeResult[0]); ``` ### Example: Combining Multiple Search Conditions Different queries can by combined using AND (conjuncts) or OR (disjuncts) operators. In this example, we are checking for documents with a rating between 3 & 4 and dated between 2015 & 2018. ```typescript const multipleConditionsResult = await store.similaritySearch(texts[0], 4, { fields: ["metadata.rating", "metadata.date"], searchOptions: { query: { conjuncts: [ { min: 3, max: 4, inclusive_max: true, field: "metadata.rating" }, { start: "2016-12-31", end: "2017-01-02", field: "metadata.date" }, ], }, }, }); console.log(multipleConditionsResult[0]); ``` ### Other Queries Similarly, you can use any of the supported Query methods like Geo Distance, Polygon Search, Wildcard, Regular Expressions, etc in the `searchOptions` Key of `filter` parameter. Please refer to the documentation for more details on the available query methods and their syntax. - [Couchbase Capella](https://docs.couchbase.com/cloud/search/search-request-params.html#query-object) - [Couchbase Server](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object) <br /> <br /> # Frequently Asked Questions ## Question: Should I create the Search index before creating the CouchbaseVectorStore object? Yes, currently you need to create the Search index before creating the `CouchbaseVectorStore` object. ## Question: I am not seeing all the fields that I specified in my search results. In Couchbase, we can only return the fields stored in the Search index. Please ensure that the field that you are trying to access in the search results is part of the Search index. One way to handle this is to index and store a document's fields dynamically in the index. - In Capella, you need to go to "Advanced Mode" then under the chevron "General Settings" you can check "[X] Store Dynamic Fields" or "[X] Index Dynamic Fields" - In Couchbase Server, in the Index Editor (not Quick Editor) under the chevron "Advanced" you can check "[X] Store Dynamic Fields" or "[X] Index Dynamic Fields" Note that these options will increase the size of the index. For more details on dynamic mappings, please refer to the [documentation](https://docs.couchbase.com/cloud/search/customize-index.html). ## Question: I am unable to see the metadata object in my search results. This is most likely due to the `metadata` field in the document not being indexed and/or stored by the Couchbase Search index. In order to index the `metadata` field in the document, you need to add it to the index as a child mapping. If you select to map all the fields in the mapping, you will be able to search by all metadata fields. Alternatively, to optimize the index, you can select the specific fields inside `metadata` object to be indexed. 
You can refer to the [docs](https://docs.couchbase.com/cloud/search/customize-index.html) to learn more about indexing child mappings. To create Child Mappings, you can refer to the following docs:

- [Couchbase Capella](https://docs.couchbase.com/cloud/search/create-child-mapping.html)
- [Couchbase Server](https://docs.couchbase.com/server/current/fts/fts-creating-index-from-UI-classic-editor-dynamic.html)

## Related

- Vector store [conceptual guide](/docs/concepts/#vectorstores)
- Vector store [how-to guides](/docs/how_to/#vectorstores)
145902
{ "cells": [ { "cell_type": "raw", "id": "1957f5cb", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: MongoDB Atlas\n", "sidebar_class_name: node-only\n", "---" ] }, { "cell_type": "markdown", "id": "ef1f0986", "metadata": {}, "source": [ "# MongoDB Atlas\n", "\n", "```{=mdx}\n", ":::tip Compatibility\n", "Only available on Node.js.\n", "\n", "You can still create API routes that use MongoDB with Next.js by setting the `runtime` variable to `nodejs` like so:\n", "\n", "`export const runtime = \"nodejs\";`\n", "\n", "You can read more about Edge runtimes in the Next.js documentation [here](https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes).\n", ":::\n", "```\n", "\n", "This guide provides a quick overview for getting started with MongoDB Atlas [vector stores](/docs/concepts/#vectorstores). For detailed documentation of all `MongoDBAtlasVectorSearch` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html)." ] }, { "cell_type": "markdown", "id": "c824838d", "metadata": {}, "source": [ "## Overview\n", "\n", "### Integration details\n", "\n", "| Class | Package | [PY support](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas/) | Package latest |\n", "| :--- | :--- | :---: | :---: |\n", "| [`MongoDBAtlasVectorSearch`](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) | [`@langchain/mongodb`](https://www.npmjs.com/package/@langchain/mongodb) | ✅ | ![NPM - Version](https://img.shields.io/npm/v/@langchain/mongodb?style=flat-square&label=%20&) |" ] }, { "cell_type": "markdown", "id": "36fdc060", "metadata": {}, "source": [ "## Setup\n", "\n", "To use MongoDB Atlas vector stores, you'll need to configure a MongoDB Atlas cluster and install the `@langchain/mongodb` integration package.\n", "\n", "### Initial Cluster Configuration\n", "\n", "To create a MongoDB Atlas cluster, navigate to the [MongoDB Atlas website](https://www.mongodb.com/products/platform/atlas-database) and create an account if you don't already have one.\n", "\n", "Create and name a cluster when prompted, then find it under `Database`. Select `Browse Collections` and create either a blank collection or one from the provided sample data.\n", "\n", "**Note:** The cluster created must be MongoDB 7.0 or higher.\n", "\n", "### Creating an Index\n", "\n", "After configuring your cluster, you'll need to create an index on the collection field you want to search over.\n", "\n", "Switch to the `Atlas Search` tab and click `Create Search Index`. From there, make sure you select `Atlas Vector Search - JSON Editor`, then select the appropriate database and collection and paste the following into the textbox:\n", "\n", "```json\n", "{\n", " \"fields\": [\n", " {\n", " \"numDimensions\": 1536,\n", " \"path\": \"embedding\",\n", " \"similarity\": \"euclidean\",\n", " \"type\": \"vector\"\n", " }\n", " ]\n", "}\n", "```\n", "\n", "Note that the dimensions property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and by default OpenAI embeddings have 1536:\n", "\n", "Note: By default the vector store expects an index name of default, an indexed collection field name of embedding, and a raw text field name of text. 
You should initialize the vector store with field names matching your index name collection schema as shown below.\n", "\n", "Finally, proceed to build the index.\n", "\n", "### Embeddings\n", "\n", "This guide will also use [OpenAI embeddings](/docs/integrations/text_embedding/openai), which require you to install the `@langchain/openai` integration package. You can also use [other supported embeddings models](/docs/integrations/text_embedding) if you wish.\n", "\n", "### Installation\n", "\n", "Install the following packages:\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/mongodb mongodb @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "```\n", "\n", "### Credentials\n", "\n", "Once you've done the above, set the `MONGODB_ATLAS_URI` environment variable from the `Connect` button in Mongo's dashboard. You'll also need your DB name and collection name:\n", "\n", "```typescript\n", "process.env.MONGODB_ATLAS_URI = \"your-atlas-url\";\n", "process.env.MONGODB_ATLAS_COLLECTION_NAME = \"your-atlas-db-name\";\n", "process.env.MONGODB_ATLAS_DB_NAME = \"your-atlas-db-name\";\n", "```\n", "\n", "If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:\n", "\n", "```typescript\n", "process.env.OPENAI_API_KEY = \"YOUR_API_KEY\";\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGCHAIN_TRACING_V2=\"true\"\n", "// process.env.LANGCHAIN_API_KEY=\"your-api-key\"\n", "```" ] }, { "cell_type": "markdown", "id": "93df377e", "metadata": {}, "source": [ "## Instantiation\n", "\n", "Once you've set up your cluster as shown above, you can initialize your vector store as follows:" ] }, { "cell_type": "code", "execution_count": 1, "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052", "metadata": { "tags": [] }, "outputs": [], "source": [ "import { MongoDBAtlasVectorSearch } from \"@langchain/mongodb\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "import { MongoClient } from \"mongodb\";\n", "\n", "const client = new MongoClient(process.env.MONGODB_ATLAS_URI || \"\");\n", "const collection = client.db(process.env.MONGODB_ATLAS_DB_NAME)\n", " .collection(process.env.MONGODB_ATLAS_COLLECTION_NAME);\n", "\n", "const embeddings = new OpenAIEmbeddings({\n", " model: \"text-embedding-3-small\",\n", "});\n", "\n", "const vectorStore = new MongoDBAtlasVectorSearch(embeddings, {\n", " collection: collection,\n", " indexName: \"vector_index\", // The name of the Atlas search index. Defaults to \"default\"\n", " textKey: \"text\", // The name of the collection field containing the raw content. Defaults to \"text\"\n", " embeddingKey: \"embedding\", // The name of the collection field containing the embedded text. Defaults to \"embedding\"\n", "});" ] }, { "cell_type": "markdown", "id": "ac6071d4", "metadata": {}, "source": [ "## Manage vector store\n", "\n", "### Add items to vector store\n",
145903
"\n", "You can now add documents to your vector store:" ] }, { "cell_type": "code", "execution_count": 2, "id": "17f5efc0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ '1', '2', '3', '4' ]\n" ] } ], "source": [ "import type { Document } from \"@langchain/core/documents\";\n", "\n", "const document1: Document = {\n", " pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", "metadata": {}, "source": [ "**Note:** After adding documents, there is a slight delay before they become queryable.\n", "\n", "Adding a document with the same `id` as an existing document will update the existing one.\n", "\n", "### Delete items from vector store" ] }, { "cell_type": "code", "execution_count": 3, "id": "ef61e188", "metadata": {}, "outputs": [], "source": [ "await vectorStore.delete({ ids: [\"4\"] });" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 5, "id": "aa0a16fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"_id\":\"1\",\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"_id\":\"3\",\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const similaritySearchResults = await vectorStore.similaritySearch(\"biology\", 2);\n", "\n", "for (const doc of similaritySearchResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "3ed9d733", "metadata": {}, "source": [ "### Filtering\n", "\n", "MongoDB Atlas supports pre-filtering of results on other fields. They require you to define which metadata fields you plan to filter on by updating the index you created initially. Here's an example:\n", "\n", "```json\n", "{\n", " \"fields\": [\n", " {\n", " \"numDimensions\": 1024,\n", " \"path\": \"embedding\",\n", " \"similarity\": \"euclidean\",\n", " \"type\": \"vector\"\n", " },\n", " {\n", " \"path\": \"source\",\n", " \"type\": \"filter\"\n", " }\n", " ]\n", "}\n", "```\n", "\n", "Above, the first item in `fields` is the vector index, and the second item is the metadata property you want to filter on. The name of the property is the value of the `path` key. 
So the above index would allow us to search on a metadata field named `source`.\n", "\n", "Then, in your code you can use [MQL Query Operators](https://www.mongodb.com/docs/manual/reference/operator/query/) for filtering.\n", "\n", "The below example illustrates this:" ] }, { "cell_type": "code", "execution_count": 9, "id": "bc8f242e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"_id\":\"1\",\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"_id\":\"3\",\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const filter = {\n", " preFilter: {\n", " source: {\n", " $eq: \"https://example.com\",\n", " },\n", " },\n", "}\n", "\n", "const filteredResults = await vectorStore.similaritySearch(\"biology\", 2, filter);\n", "\n", "for (const doc of filteredResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "69326bba", "metadata": {}, "source": [ "### Returning scores\n", "\n", "If you want to execute a similarity search and receive the corresponding scores you can run:" ] }, { "cell_type": "code", "execution_count": 10, "id": "5efd2eaa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* [SIM=0.374] The powerhouse of the cell is the mitochondria [{\"_id\":\"1\",\"source\":\"https://example.com\"}]\n", "* [SIM=0.370] Mitochondria are made out of lipids [{\"_id\":\"3\",\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const similaritySearchWithScoreResults = await vectorStore.similaritySearchWithScore(\"biology\", 2, filter)\n", "\n", "for (const [doc, score] of similaritySearchWithScoreResults) {\n", " console.log(`* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "0c235cdc", "metadata": {}, "source": [ "### Query by turning into retriever\n", "\n", "You can also transform the vector store into a [retriever](/docs/concepts/#retrievers) for easier usage in your chains. " ] }, { "cell_type": "code", "execution_count": 11, "id": "f3460093", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\n", " Document {\n", " pageContent: 'The powerhouse of the cell is the mitochondria',\n", " metadata: { _id: '1', source: 'https://example.com' },\n", " id: undefined\n", " },\n", " Document {\n",
{ "cells": [ { "cell_type": "raw", "id": "1957f5cb", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Faiss\n", "sidebar_class_name: node-only\n", "---" ] }, { "cell_type": "markdown", "id": "ef1f0986", "metadata": {}, "source": [ "# FaissStore\n", "\n", "```{=mdx}\n", "\n", ":::tip Compatibility\n", "Only available on Node.js.\n", ":::\n", "\n", "```\n", "\n", "[Faiss](https://github.com/facebookresearch/faiss) is a library for efficient similarity search and clustering of dense vectors.\n", "\n", "LangChain.js supports using Faiss as a locally-running vectorstore that can be saved to a file. It also provides the ability to read the saved file from the [LangChain Python implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading).\n", "\n", "This guide provides a quick overview for getting started with Faiss [vector stores](/docs/concepts/#vectorstores). For detailed documentation of all `FaissStore` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html)." ] }, { "cell_type": "markdown", "id": "c824838d", "metadata": {}, "source": [ "## Overview\n", "\n", "### Integration details\n", "\n", "| Class | Package | [PY support](https://python.langchain.com/docs/integrations/vectorstores/faiss) | Package latest |\n", "| :--- | :--- | :---: | :---: |\n", "| [`FaissStore`](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) | [`@langchain/community`](https://npmjs.com/@langchain/community) | ✅ | ![NPM - Version](https://img.shields.io/npm/v/@langchain/community?style=flat-square&label=%20&) |" ] }, { "cell_type": "markdown", "id": "36fdc060", "metadata": {}, "source": [ "## Setup\n", "\n", "To use Faiss vector stores, you'll need to install the `@langchain/community` integration package and the [`faiss-node`](https://github.com/ewfian/faiss-node) package as a peer dependency.\n", "\n", "This guide will also use [OpenAI embeddings](/docs/integrations/text_embedding/openai), which require you to install the `@langchain/openai` integration package. 
You can also use [other supported embeddings models](/docs/integrations/text_embedding) if you wish.\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/community faiss-node @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "```\n", "\n", "### Credentials\n", "\n", "Because Faiss runs locally, you do not need any credentials to use it.\n", "\n", "If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:\n", "\n", "```typescript\n", "process.env.OPENAI_API_KEY = \"YOUR_API_KEY\";\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGCHAIN_TRACING_V2=\"true\"\n", "// process.env.LANGCHAIN_API_KEY=\"your-api-key\"\n", "```" ] }, { "cell_type": "markdown", "id": "93df377e", "metadata": {}, "source": [ "## Instantiation" ] }, { "cell_type": "code", "execution_count": 2, "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052", "metadata": { "tags": [] }, "outputs": [], "source": [ "import { FaissStore } from \"@langchain/community/vectorstores/faiss\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const embeddings = new OpenAIEmbeddings({\n", " model: \"text-embedding-3-small\",\n", "});\n", "\n", "const vectorStore = new FaissStore(embeddings, {});" ] }, { "cell_type": "markdown", "id": "ac6071d4", "metadata": {}, "source": [ "## Manage vector store\n", "\n", "### Add items to vector store" ] }, { "cell_type": "code", "execution_count": 3, "id": "17f5efc0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ '1', '2', '3', '4' ]\n" ] } ], "source": [ "import type { Document } from \"@langchain/core/documents\";\n", "\n", "const document1: Document = {\n", " pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", "metadata": {}, "source": [ "### Delete items from vector store" ] }, { "cell_type": "code", "execution_count": 4, "id": "ef61e188", "metadata": {}, "outputs": [], "source": [ "await vectorStore.delete({ ids: [\"4\"] });" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. 
\n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 5, "id": "aa0a16fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const similaritySearchResults = await vectorStore.similaritySearch(\"biology\", 2);\n",
{ "cells": [ { "cell_type": "raw", "id": "1957f5cb", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Pinecone\n", "---" ] }, { "cell_type": "markdown", "id": "ef1f0986", "metadata": {}, "source": [ "# PineconeStore\n", "\n", "[Pinecone](https://www.pinecone.io/) is a vector database that helps power AI for some of the world’s best companies.\n", "\n", "This guide provides a quick overview for getting started with Pinecone [vector stores](/docs/concepts/#vectorstores). For detailed documentation of all `PineconeStore` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html)." ] }, { "cell_type": "markdown", "id": "c824838d", "metadata": {}, "source": [ "## Overview\n", "\n", "### Integration details\n", "\n", "| Class | Package | [PY support](https://python.langchain.com/docs/integrations/vectorstores/pinecone/) | Package latest |\n", "| :--- | :--- | :---: | :---: |\n", "| [`PineconeStore`](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) | [`@langchain/pinecone`](https://npmjs.com/@langchain/pinecone) | ✅ | ![NPM - Version](https://img.shields.io/npm/v/@langchain/pinecone?style=flat-square&label=%20&) |" ] }, { "cell_type": "markdown", "id": "36fdc060", "metadata": {}, "source": [ "## Setup\n", "\n", "To use Pinecone vector stores, you'll need to create a Pinecone account, initialize an index, and install the `@langchain/pinecone` integration package. You'll also want to install the [official Pinecone SDK](https://www.npmjs.com/package/@pinecone-database/pinecone) to initialize a client to pass into the `PineconeStore` instance.\n", "\n", "This guide will also use [OpenAI embeddings](/docs/integrations/text_embedding/openai), which require you to install the `@langchain/openai` integration package. You can also use [other supported embeddings models](/docs/integrations/text_embedding) if you wish.\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/pinecone @langchain/openai @langchain/core @pinecone-database/pinecone \n", "</Npm2Yarn>\n", "```\n", "\n", "### Credentials\n", "\n", "Sign up for a [Pinecone](https://www.pinecone.io/) account and create an index. Make sure the dimensions match those of the embeddings you want to use (the default is 1536 for OpenAI's `text-embedding-3-small`). 
Once you've done this set the `PINECONE_INDEX`, `PINECONE_API_KEY`, and (optionally) `PINECONE_ENVIRONMENT` environment variables:\n", "\n", "```typescript\n", "process.env.PINECONE_API_KEY = \"your-pinecone-api-key\";\n", "process.env.PINECONE_INDEX = \"your-pinecone-index\";\n", "\n", "// Optional\n", "process.env.PINECONE_ENVIRONMENT = \"your-pinecone-environment\";\n", "```\n", "\n", "If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:\n", "\n", "```typescript\n", "process.env.OPENAI_API_KEY = \"YOUR_API_KEY\";\n", "```\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```typescript\n", "// process.env.LANGCHAIN_TRACING_V2=\"true\"\n", "// process.env.LANGCHAIN_API_KEY=\"your-api-key\"\n", "```" ] }, { "cell_type": "markdown", "id": "93df377e", "metadata": {}, "source": [ "## Instantiation" ] }, { "cell_type": "code", "execution_count": 1, "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052", "metadata": { "tags": [] }, "outputs": [], "source": [ "import { PineconeStore } from \"@langchain/pinecone\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "import { Pinecone as PineconeClient } from \"@pinecone-database/pinecone\";\n", "\n", "const embeddings = new OpenAIEmbeddings({\n", " model: \"text-embedding-3-small\",\n", "});\n", "\n", "const pinecone = new PineconeClient();\n", "// Will automatically read the PINECONE_API_KEY and PINECONE_ENVIRONMENT env vars\n", "const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);\n", "\n", "const vectorStore = await PineconeStore.fromExistingIndex(\n", " embeddings,\n", " {\n", " pineconeIndex,\n", " // Maximum number of batch requests to allow at once. Each batch is 1000 vectors.\n", " maxConcurrency: 5,\n", " // You can pass a namespace here too\n", " // namespace: \"foo\",\n", " }\n", ");" ] }, { "cell_type": "markdown", "id": "ac6071d4", "metadata": {}, "source": [ "## Manage vector store\n", "\n", "### Add items to vector store" ] }, { "cell_type": "code", "execution_count": 2, "id": "17f5efc0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ '1', '2', '3', '4' ]\n" ] } ], "source": [ "import type { Document } from \"@langchain/core/documents\";\n", "\n", "const document1: Document = {\n", " pageContent: \"The powerhouse of the cell is the mitochondria\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document2: Document = {\n", " pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", "metadata": {}, "source": [ "**Note:** After adding documents, there is a slight delay before they become queryable.\n", "\n", "### Delete items from vector store" ] }, { "cell_type": "code", "execution_count": 3,
" pageContent: \"Buildings are made out of brick\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document3: Document = {\n", " pageContent: \"Mitochondria are made out of lipids\",\n", " metadata: { source: \"https://example.com\" }\n", "};\n", "\n", "const document4: Document = {\n", " pageContent: \"The 2024 Olympics are in Paris\",\n", " metadata: { source: \"https://example.com\" }\n", "}\n", "\n", "const documents = [document1, document2, document3, document4];\n", "\n", "await vectorStore.addDocuments(documents, { ids: [\"1\", \"2\", \"3\", \"4\"] });" ] }, { "cell_type": "markdown", "id": "dcf1b905", "metadata": {}, "source": [ "### Delete items from vector store\n", "\n", "You can delete values from the store by passing the same id you passed in:" ] }, { "cell_type": "code", "execution_count": 3, "id": "ef61e188", "metadata": {}, "outputs": [], "source": [ "await vectorStore.delete({ ids: [\"4\"] });" ] }, { "cell_type": "markdown", "id": "c3620501", "metadata": {}, "source": [ "## Query vector store\n", "\n", "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent.\n", "\n", "### Query directly\n", "\n", "Performing a simple similarity search can be done as follows:" ] }, { "cell_type": "code", "execution_count": 13, "id": "aa0a16fa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const filter = [{\n", " operator: \"match\",\n", " field: \"source\",\n", " value: \"https://example.com\",\n", "}];\n", "\n", "const similaritySearchResults = await vectorStore.similaritySearch(\"biology\", 2, filter);\n", "\n", "for (const doc of similaritySearchResults) {\n", " console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "3ed9d733", "metadata": {}, "source": [ "The vector store supports [Elasticsearch filter syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-filter-context.html) operators.\n", "\n", "If you want to execute a similarity search and receive the corresponding scores you can run:" ] }, { "cell_type": "code", "execution_count": 14, "id": "5efd2eaa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* [SIM=0.374] The powerhouse of the cell is the mitochondria [{\"source\":\"https://example.com\"}]\n", "* [SIM=0.370] Mitochondria are made out of lipids [{\"source\":\"https://example.com\"}]\n" ] } ], "source": [ "const similaritySearchWithScoreResults = await vectorStore.similaritySearchWithScore(\"biology\", 2, filter)\n", "\n", "for (const [doc, score] of similaritySearchWithScoreResults) {\n", " console.log(`* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(doc.metadata)}]`);\n", "}" ] }, { "cell_type": "markdown", "id": "0c235cdc", "metadata": {}, "source": [ "### Query by turning into retriever\n", "\n", "You can also transform the vector store into a [retriever](/docs/concepts/#retrievers) for easier usage in your chains. 
" ] }, { "cell_type": "code", "execution_count": 15, "id": "f3460093", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\n", " Document {\n", " pageContent: 'The powerhouse of the cell is the mitochondria',\n", " metadata: { source: 'https://example.com' },\n", " id: undefined\n", " },\n", " Document {\n", " pageContent: 'Mitochondria are made out of lipids',\n", " metadata: { source: 'https://example.com' },\n", " id: undefined\n", " }\n", "]\n" ] } ], "source": [ "const retriever = vectorStore.asRetriever({\n", " // Optional filter\n", " filter: filter,\n", " k: 2,\n", "});\n", "await retriever.invoke(\"biology\");" ] }, { "cell_type": "markdown", "id": "e2e0a211", "metadata": {}, "source": [ "### Usage for retrieval-augmented generation\n", "\n", "For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n", "\n", "- [Tutorials: working with external knowledge](/docs/tutorials/#working-with-external-knowledge).\n", "- [How-to: Question and answer with RAG](/docs/how_to/#qa-with-rag)\n", "- [Retrieval conceptual docs](/docs/concepts#retrieval)" ] }, { "cell_type": "markdown", "id": "8a27244f", "metadata": {}, "source": [ "## API reference\n", "\n", "For detailed documentation of all `ElasticVectorSearch` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_vectorstores_elasticsearch.ElasticVectorSearch.html)." ] } ], "metadata": { "kernelspec": { "display_name": "TypeScript", "language": "typescript", "name": "tslab" }, "language_info": { "codemirror_mode": { "mode": "typescript", "name": "javascript", "typescript": true }, "file_extension": ".ts", "mimetype": "text/typescript", "name": "typescript", "version": "3.7.2" } }, "nbformat": 4, "nbformat_minor": 5 }
---
hide_table_of_contents: true
---

# JSONLines files

This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSONPointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.

Example JSONLines file:

```json
{"html": "This is a sentence."}
{"html": "This is another sentence."}
```

Example code:

```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
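A JSON pointer should also be able to reach into nested properties within each line. The following is an illustrative sketch only; the file name and field names here are hypothetical, and you should adjust the pointer to match your own records:

```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

// Hypothetical file where each line looks like:
// {"article": {"title": "...", "body": "Some text."}}
const nestedLoader = new JSONLinesLoader(
  "src/document_loaders/example_data/articles.jsonl",
  "/article/body" // JSON pointer into the nested "body" property
);

const nestedDocs = await nestedLoader.load();
console.log(nestedDocs.length);
```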
# JSON files

The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract.

### No JSON pointer example

The simplest way of using it is to specify no JSON pointer. The loader will load all strings it finds in the JSON object.

Example JSON file:

```json
{
  "texts": ["This is a sentence.", "This is another sentence."]
}
```

Example code:

```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```

### Using JSON pointer example

For more advanced scenarios, you can choose which keys in your JSON object you want to extract strings from.

In this example, we only want to extract information from the "from" and "surname" entries.

```json
{
  "1": {
    "body": "BD 2023 SUMMER",
    "from": "LinkedIn Job",
    "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"]
  },
  "2": {
    "body": "Intern, Treasury and other roles are available",
    "from": "LinkedIn Job2",
    "labels": ["IMPORTANT"],
    "other": {
      "name": "plop",
      "surname": "bob"
    }
  }
}
```

Example code:

```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "BD 2023 SUMMER",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "LinkedIn Job",
  },
  ...
]
*/
```
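Once loaded, these documents can be processed like any others, for example by splitting them into chunks before indexing. This is a minimal sketch of that common follow-up step; the chunk sizes are arbitrary and the file path is the same example file used above:

```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");
const docs = await loader.load();

// Split the loaded documents into smaller chunks for downstream indexing.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
});

const splitDocs = await splitter.splitDocuments(docs);
console.log(splitDocs.length);
```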
" {\n", " \".pdf\": (path: string) => new PDFLoader(path),\n", " }\n", ");\n", "\n", "const directoryDocs = await directoryLoader.load();\n", "\n", "console.log(directoryDocs[0]);\n", "\n", "/* Additional steps : Split text into chunks with any TextSplitter. You can then use it as context or save it to memory afterwards. */\n", "const textSplitter = new RecursiveCharacterTextSplitter({\n", " chunkSize: 1000,\n", " chunkOverlap: 200,\n", "});\n", "\n", "const splitDocs = await textSplitter.splitDocuments(directoryDocs);\n", "console.log(splitDocs[0]);\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## API reference\n", "\n", "For detailed documentation of all PDFLoader features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_community_document_loaders_fs_pdf.PDFLoader.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "TypeScript", "language": "typescript", "name": "tslab" }, "language_info": { "codemirror_mode": { "mode": "typescript", "name": "javascript", "typescript": true }, "file_extension": ".ts", "mimetype": "text/typescript", "name": "typescript", "version": "3.7.2" } }, "nbformat": 4, "nbformat_minor": 4 }
---
sidebar_class_name: hidden
---

# PromptLayer OpenAI

:::warning
This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
:::

LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:

1. Create a PromptLayer account here: [https://promptlayer.com](https://promptlayer.com).
2. Create an API token and pass it either as the `promptLayerApiKey` argument in the `PromptLayerOpenAI` constructor or in the `PROMPTLAYER_API_KEY` environment variable.

```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```

# Azure PromptLayerOpenAI

LangChain also integrates with PromptLayer for Azure-hosted OpenAI instances:

```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  azureOpenAIApiCompletionsDeploymentName: "YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
  azureOpenAIApiEmbeddingsDeploymentName: "YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
  azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIBasePath: "YOUR-AZURE-OPENAI-BASE-PATH", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```

The request and the response will be logged in the [PromptLayer dashboard](https://promptlayer.com/home).

> **_Note:_** In streaming mode PromptLayer will not log the response.

## Related

- LLM [conceptual guide](/docs/concepts/#llms)
- LLM [how-to guides](/docs/how_to/#llms)
{ "cells": [ { "cell_type": "raw", "id": "67db2992", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_label: Azure OpenAI\n", "---" ] }, { "cell_type": "markdown", "id": "9597802c", "metadata": {}, "source": [ "# AzureOpenAI\n", "\n", "```{=mdx}\n", "\n", ":::caution\n", "You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n", "\n", "Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/azure/).\n", ":::\n", "\n", ":::info\n", "\n", "Previously, LangChain.js supported integration with Azure OpenAI using the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai). This SDK is now deprecated in favor of the new Azure integration in the OpenAI SDK, which allows to access the latest OpenAI models and features the same day they are released, and allows seemless transition between the OpenAI API and Azure OpenAI.\n", "\n", "If you are using Azure OpenAI with the deprecated SDK, see the [migration guide](#migration-from-azure-openai-sdk) to update to the new API.\n", "\n", ":::\n", "\n", "```\n", "\n", "[Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/) is a Microsoft Azure service that provides powerful language models from OpenAI.\n", "\n", "This will help you get started with AzureOpenAI completion models (LLMs) using LangChain. For detailed documentation on `AzureOpenAI` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/langchain_openai.AzureOpenAI.html).\n", "\n", "## Overview\n", "### Integration details\n", "\n", "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/llms/azure_openai) | Package downloads | Package latest |\n", "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n", "| [AzureOpenAI](https://api.js.langchain.com/classes/langchain_openai.AzureOpenAI.html) | [@langchain/openai](https://api.js.langchain.com/modules/langchain_openai.html) | ❌ | ✅ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/openai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/openai?style=flat-square&label=%20&) |\n", "\n", "## Setup\n", "\n", "To access AzureOpenAI models you'll need to create an Azure account, get an API key, and install the `@langchain/openai` integration package.\n", "\n", "### Credentials\n", "\n", "Head to [azure.microsoft.com](https://azure.microsoft.com/) to sign up to AzureOpenAI and generate an API key. \n", "\n", "You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).\n", "\n", "Once you have your instance running, make sure you have the name of your instance and key. 
You can find the key in the Azure Portal, under the \"Keys and Endpoint\" section of your instance.\n", "\n", "If you're using Node.js, you can define the following environment variables to use the service:\n", "\n", "```bash\n", "AZURE_OPENAI_API_INSTANCE_NAME=<YOUR_INSTANCE_NAME>\n", "AZURE_OPENAI_API_DEPLOYMENT_NAME=<YOUR_DEPLOYMENT_NAME>\n", "AZURE_OPENAI_API_KEY=<YOUR_KEY>\n", "AZURE_OPENAI_API_VERSION=\"2024-02-01\"\n", "```\n", "\n", "Alternatively, you can pass the values directly to the `AzureOpenAI` constructor.\n", "\n", "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", "```bash\n", "# export LANGCHAIN_TRACING_V2=\"true\"\n", "# export LANGCHAIN_API_KEY=\"your-api-key\"\n", "```\n", "\n", "### Installation\n", "\n", "The LangChain AzureOpenAI integration lives in the `@langchain/openai` package:\n", "\n", "```{=mdx}\n", "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n", "import Npm2Yarn from \"@theme/Npm2Yarn\";\n", "\n", "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n", "\n", "<Npm2Yarn>\n", " @langchain/openai @langchain/core\n", "</Npm2Yarn>\n", "\n", "```" ] }, { "cell_type": "markdown", "id": "0a760037", "metadata": {}, "source": [ "## Instantiation\n", "\n", "Now we can instantiate our model object and generate chat completions:" ] }, { "cell_type": "code", "execution_count": 7, "id": "a0562a13", "metadata": {}, "outputs": [], "source": [ "import { AzureOpenAI } from \"@langchain/openai\"\n", "\n", "const llm = new AzureOpenAI({\n", " model: \"gpt-3.5-turbo-instruct\",\n", " azureOpenAIApiKey: \"<your_key>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY\n", " azureOpenAIApiInstanceName: \"<your_instance_name>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME\n", " azureOpenAIApiDeploymentName: \"<your_deployment_name>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME\n", " azureOpenAIApiVersion: \"<api_version>\", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION\n", " temperature: 0,\n", " maxTokens: undefined,\n", " timeout: undefined,\n", " maxRetries: 2,\n", " // other params...\n", "})" ] }, { "cell_type": "markdown", "id": "0ee90032", "metadata": {}, "source": [ "## Invocation" ] }, { "cell_type": "code", "execution_count": 8, "id": "035dea0f", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "provides AI solutions to businesses. They offer a range of services including natural language processing, computer vision, and machine learning. Their solutions are designed to help businesses automate processes, gain insights from data, and improve decision-making. AzureOpenAI also offers consulting services to help businesses identify and implement the best AI solutions for their specific needs. They work with a variety of industries, including healthcare, finance, and retail. 
With their expertise in AI and their partnership with Microsoft Azure, AzureOpenAI is a trusted provider of AI solutions for businesses looking to stay ahead in the rapidly evolving world of technology.\n" ] } ], "source": [ "const inputText = \"AzureOpenAI is an AI company that \"\n", "\n", "const completion = await llm.invoke(inputText)\n", "completion" ] }, { "cell_type": "markdown", "id": "add38532", "metadata": {}, "source": [ "## Chaining\n", "\n", "We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:"
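, "\n", "For example, a minimal sketch of such a chain (the prompt wording and invocation values below are illustrative placeholders, and `llm` is the model instantiated above):\n", "\n", "```typescript\n", "import { PromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "// Placeholder prompt; swap in whatever template fits your use case.\n", "const prompt = PromptTemplate.fromTemplate(\"How to say {input} in {output_language}:\");\n", "\n", "// Pipe the prompt into the completion model and invoke the resulting chain.\n", "const chain = prompt.pipe(llm);\n", "await chain.invoke({ output_language: \"German\", input: \"I love programming.\" });\n", "```"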
---
sidebar_class_name: node-only
---

# Llama CPP

:::tip Compatibility
Only available on Node.js.
:::

This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. This allows you to work with a much smaller quantized model capable of running on a laptop environment, ideal for testing and sketching out ideas without running up a bill!

## Setup

You'll need to install major version `2` of the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.

```bash npm2yarn
npm install -S node-llama-cpp@2
```

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/community @langchain/core
```

You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).

Out-of-the-box `node-llama-cpp` is tuned for running on a macOS platform with support for the Metal GPU of Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, then refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).

A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.

## Guide to installing Llama2

Getting a local Llama2 model running on your machine is a prerequisite, so this is a quick guide to getting and building Llama 7B (the smallest) and then quantizing it so that it will run comfortably on a laptop. To do this you will need `python3` on your machine (3.11 is recommended), as well as `gcc` and `make` so that `llama.cpp` can be built.

### Getting the Llama2 models

To get a copy of Llama2 you need to visit [Meta AI](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and request access to their models. Once Meta AI grants you access, you will receive an email containing a unique URL to access the files; this will be needed in the next steps.

Now create a directory to work in, for example:

```
mkdir llama2
cd llama2
```

Now we need to get the Meta AI `llama` repo in place so we can download the model.

```
git clone https://github.com/facebookresearch/llama.git
```

Once we have this in place, we can change into this directory and run the downloader script to get the model we will be working with. Note: from here on it's assumed that the model in use is `llama-2-7b`; if you select a different model, don't forget to change the references to the model accordingly.

```
cd llama
/bin/bash ./download.sh
```

This script will ask you for the URL that Meta AI sent to you (see above), and you will also select the model to download; in this case we used `llama-2-7b`. Once this step has completed successfully (this can take some time, the `llama-2-7b` model is around 13.5 GB), there should be a new `llama-2-7b` directory containing the model and other files.

### Converting and quantizing the model

In this step we need to use `llama.cpp`, so we need to download that repo.

```
cd ..
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
```

Now we need to build the `llama.cpp` tools and set up our `python` environment.
In these steps it's assumed that your install of python can be run using `python3` and that the virtual environment will be called `llama2`; adjust accordingly for your own situation.

```
make
python3 -m venv llama2
source llama2/bin/activate
```

After activating your llama2 environment you should see `(llama2)` prefixing your command prompt to let you know this is the active environment. Note: if you need to come back to build another model or re-quantize the model, don't forget to activate the environment again; also, if you update `llama.cpp`, you will need to rebuild the tools and possibly install new or updated dependencies!

Now that we have an active python environment, we need to install the python dependencies.

```
python3 -m pip install -r requirements.txt
```

Having done this, we can start converting and quantizing the Llama2 model ready for use locally via `llama.cpp`.

First, we need to convert the model. Prior to the conversion, let's create a directory to store it in.

```
mkdir models/7B
python3 convert.py --outfile models/7B/gguf-llama2-f16.bin --outtype f16 ../../llama2/llama/llama-2-7b --vocab-dir ../../llama2/llama/llama-2-7b
```

This should create a converted model called `gguf-llama2-f16.bin` in the directory we just created. Note that this is just a converted model, so it is also around 13.5 GB in size; in the next step we will quantize it down to around 4 GB.

```
./quantize ./models/7B/gguf-llama2-f16.bin ./models/7B/gguf-llama2-q4_0.bin q4_0
```

Running this should result in a new model being created in the `models/7B` directory, this one called `gguf-llama2-q4_0.bin`; this is the model we can use with LangChain.

You can validate this model is working by testing it using the `llama.cpp` tools.

```
./main -m ./models/7B/gguf-llama2-q4_0.bin -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt
```

Running this command fires up the model for a chat session. By the way, if you are running out of disk space, this small model is the only one we need, so you can back up and/or delete the original and converted 13.5 GB models.

## Usage

import CodeBlock from "@theme/CodeBlock";
import LlamaCppExample from "@examples/models/llm/llama_cpp.ts";

<CodeBlock language="typescript">{LlamaCppExample}</CodeBlock>

## Streaming

import LlamaCppStreamExample from "@examples/models/llm/llama_cpp_stream.ts";

<CodeBlock language="typescript">{LlamaCppStreamExample}</CodeBlock>

## Related

- LLM [conceptual guide](/docs/concepts/#llms)
- LLM [how-to guides](/docs/how_to/#llms)
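For reference, basic usage of the quantized model produced above looks roughly like the following. This is a minimal sketch rather than the canonical example: the model path is a placeholder for wherever your `.bin` file lives, and the exact constructor options may differ between versions of `@langchain/community` and `node-llama-cpp`.

```typescript
import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

// Placeholder path to the quantized model produced in the guide above.
const llamaPath = "./llama.cpp/models/7B/gguf-llama2-q4_0.bin";

const model = new LlamaCpp({ modelPath: llamaPath });

const response = await model.invoke("Tell me a short story about a happy llama.");
console.log(response);
```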
{ "cells": [ { "cell_type": "raw", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "keywords: [pdf, document loader]\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Build a PDF ingestion and Question/Answering system\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Document loaders](/docs/concepts/#document-loaders)\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [Embeddings](/docs/concepts/#embedding-models)\n", "- [Vector stores](/docs/concepts/#vector-stores)\n", "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n", "\n", ":::\n", "\n", "PDF files often hold crucial unstructured data unavailable from other sources. They can be quite lengthy, and unlike plain text files, cannot generally be fed directly into the prompt of a language model.\n", "\n", "In this tutorial, you'll create a system that can answer questions about PDF files. More specifically, you'll use a [Document Loader](/docs/concepts/#document-loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n", "\n", "This tutorial will gloss over some concepts more deeply covered in our [RAG](/docs/tutorials/rag/) tutorial, so you may want to go through those first if you haven't already.\n", "\n", "Let's dive in!\n", "\n", "## Loading documents\n", "\n", "First, you'll need to choose a PDF to load. We'll use a document from [Nike's annual public SEC report](https://s1.q4cdn.com/806093406/files/doc_downloads/2023/414759-1-_5_Nike-NPS-Combo_Form-10-K_WR.pdf). It's over 100 pages long, and contains some crucial data mixed with longer explanatory text. However, you can feel free to use a PDF of your choosing.\n", "\n", "Once you've chosen your PDF, the next step is to load it into a format that an LLM can more easily handle, since LLMs generally require text inputs. LangChain has a few different [built-in document loaders](/docs/how_to/document_loader_pdf/) for this purpose which you can experiment with. Below, we'll use one powered by the [`pdf-parse`](https://www.npmjs.com/package/pdf-parse) package that reads from a filepath:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "107\n" ] } ], "source": [ "import \"pdf-parse\"; // Peer dep\n", "import { PDFLoader } from \"@langchain/community/document_loaders/fs/pdf\";\n", "\n", "const loader = new PDFLoader(\"../../data/nke-10k-2023.pdf\");\n", "\n", "const docs = await loader.load();\n", "\n", "console.log(docs.length);" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Table of Contents\n", "UNITED STATES\n", "SECURITIES AND EXCHANGE COMMISSION\n", "Washington, D.C. 
20549\n", "FORM 10-K\n", "\n", "{\n", " source: '../../data/nke-10k-2023.pdf',\n", " pdf: {\n", " version: '1.10.100',\n", " info: {\n", " PDFFormatVersion: '1.4',\n", " IsAcroFormPresent: false,\n", " IsXFAPresent: false,\n", " Title: '0000320187-23-000039',\n", " Author: 'EDGAR Online, a division of Donnelley Financial Solutions',\n", " Subject: 'Form 10-K filed on 2023-07-20 for the period ending 2023-05-31',\n", " Keywords: '0000320187-23-000039; ; 10-K',\n", " Creator: 'EDGAR Filing HTML Converter',\n", " Producer: 'EDGRpdf Service w/ EO.Pdf 22.0.40.0',\n", " CreationDate: \"D:20230720162200-04'00'\",\n", " ModDate: \"D:20230720162208-04'00'\"\n", " },\n", " metadata: null,\n", " totalPages: 107\n", " },\n", " loc: { pageNumber: 1 }\n", "}\n" ] } ], "source": [ "console.log(docs[0].pageContent.slice(0, 100));\n", "console.log(docs[0].metadata)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So what just happened?\n", "\n", "- The loader reads the PDF at the specified path into memory.\n", "- It then extracts text data using the `pdf-parse` package.\n", "- Finally, it creates a LangChain [Document](/docs/concepts/#documents) for each page of the PDF with the page's content and some metadata about where in the document the text came from.\n", "\n", "LangChain has [many other document loaders](/docs/integrations/document_loaders/) for other data sources, or you can create a [custom document loader](/docs/how_to/document_loader_custom/).\n", "\n", "## Question answering with RAG\n", "\n", "Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vectorstores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs openaiParams={`{ model: \"gpt-4o\" }`} />\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "import { RecursiveCharacterTextSplitter } from \"@langchain/textsplitters\";\n", "\n", "const textSplitter = new RecursiveCharacterTextSplitter({\n", " chunkSize: 1000,\n", " chunkOverlap: 200,\n", "});\n", "\n", "const splits = await textSplitter.splitDocuments(docs);\n", "\n", "const vectorstore = await MemoryVectorStore.fromDocuments(splits, new OpenAIEmbeddings());\n", "\n", "const retriever = vectorstore.asRetriever();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, you'll use some built-in helpers to construct the final `ragChain`:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " input: \"What was Nike's revenue in 2023?\",\n", " chat_history: [],\n", " context: [\n", " Document {\n",
"import { createRetrievalChain } from \"langchain/chains/retrieval\";\n", "import { createStuffDocumentsChain } from \"langchain/chains/combine_documents\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const systemTemplate = [\n", " `You are an assistant for question-answering tasks. `,\n", " `Use the following pieces of retrieved context to answer `,\n", " `the question. If you don't know the answer, say that you `,\n", " `don't know. Use three sentences maximum and keep the `,\n", " `answer concise.`,\n", " `\\n\\n`,\n", " `{context}`,\n", "].join(\"\");\n", "\n", "const prompt = ChatPromptTemplate.fromMessages([\n", " [\"system\", systemTemplate],\n", " [\"human\", \"{input}\"],\n", "]);\n", "\n", "const questionAnswerChain = await createStuffDocumentsChain({ llm, prompt });\n", "const ragChain = await createRetrievalChain({ retriever, combineDocsChain: questionAnswerChain });\n", "\n", "const results = await ragChain.invoke({\n", " input: \"What was Nike's revenue in 2023?\",\n", "});\n", "\n", "console.log(results);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that you get both a final answer in the `answer` key of the results object, and the `context` the LLM used to generate an answer.\n", "\n", "Examining the values under the `context` further, you can see that they are documents that each contain a chunk of the ingested page content. Usefully, these documents also preserve the original metadata from way back when you first loaded them:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Enterprise Resource Planning Platform, data and analytics, demand sensing, insight gathering, and other areas to create an end-to-end technology foundation, which we\n", "believe will further accelerate our digital transformation. We believe this unified approach will accelerate growth and unlock more efficiency for our business, while driving\n", "speed and responsiveness as we serve consumers globally.\n", "FINANCIAL HIGHLIGHTS\n", "•In fiscal 2023, NIKE, Inc. 
achieved record Revenues of $51.2 billion, which increased 10% and 16% on a reported and currency-neutral basis, respectively\n", "•NIKE Direct revenues grew 14% from $18.7 billion in fiscal 2022 to $21.3 billion in fiscal 2023, and represented approximately 44% of total NIKE Brand revenues for\n", "fiscal 2023\n", "•Gross margin for the fiscal year decreased 250 basis points to 43.5% primarily driven by higher product costs, higher markdowns and unfavorable changes in foreign\n", "currency exchange rates, partially offset by strategic pricing actions\n" ] } ], "source": [ "console.log(results.context[0].pageContent);" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " source: '../../data/nke-10k-2023.pdf',\n", " pdf: {\n", " version: '1.10.100',\n", " info: {\n", " PDFFormatVersion: '1.4',\n", " IsAcroFormPresent: false,\n", " IsXFAPresent: false,\n", " Title: '0000320187-23-000039',\n", " Author: 'EDGAR Online, a division of Donnelley Financial Solutions',\n", " Subject: 'Form 10-K filed on 2023-07-20 for the period ending 2023-05-31',\n", " Keywords: '0000320187-23-000039; ; 10-K',\n", " Creator: 'EDGAR Filing HTML Converter',\n", " Producer: 'EDGRpdf Service w/ EO.Pdf 22.0.40.0',\n", " CreationDate: \"D:20230720162200-04'00'\",\n", " ModDate: \"D:20230720162208-04'00'\"\n", " },\n", " metadata: null,\n", " totalPages: 107\n", " },\n", " loc: { pageNumber: 31, lines: { from: 14, to: 22 } }\n", "}\n" ] } ], "source": [ "console.log(results.context[0].metadata);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This particular chunk came from page 31 in the original PDF. You can use this data to show which page in the PDF the answer came from, allowing users to quickly verify that answers are based on the source material.\n", "\n", ":::info\n", "For a deeper dive into RAG, see [this more focused tutorial](/docs/tutorials/rag/) or [our how-to guides](/docs/how_to/#qa-with-rag).\n", ":::\n", "\n", "## Next steps\n", "\n", "You've now seen how to load documents from a PDF file with a Document Loader and some techniques you can use to prepare that loaded data for RAG.\n", "\n", "For more on document loaders, you can check out:\n", "\n", "- [The entry in the conceptual guide](/docs/concepts/#document-loaders)\n", "- [Related how-to guides](/docs/how_to/#document-loaders)\n", "- [Available integrations](/docs/integrations/document_loaders/)\n", "- [How to create a custom document loader](/docs/how_to/document_loader_custom/)\n", "\n", "For more on RAG, see:\n", "\n", "- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)\n", "- [Related how-to guides](/docs/how_to/#qa-with-rag)" ] } ], "metadata": { "kernelspec": { "display_name": "TypeScript", "language": "typescript", "name": "tslab" }, "language_info": { "codemirror_mode": { "mode": "typescript", "name": "javascript", "typescript": true }, "file_extension": ".ts", "mimetype": "text/typescript", "name": "typescript", "version": "3.7.2" } }, "nbformat": 4, "nbformat_minor": 2 }
"My satire is more than just a joke, it's a call to action, and I've got the power\n", "I'm the one who's really making a difference, and you're just a fleeting flower.\n", "\n", "[The crowd continues to cheer and chant as the two comedians continue their rap battle.]\n" ] } ], "source": [ "const response = await ollamaLlm.invoke(\"Simulate a rap battle between Stephen Colbert and John Oliver\");\n", "console.log(response.content);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See the LangSmith trace [here](https://smith.langchain.com/public/31c178b5-4bea-4105-88c3-7ec95325c817/r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using in a chain\n", "\n", "We can create a summarization chain with either model by passing in the retrieved docs and a simple prompt.\n", "\n", "It formats the prompt template using the input key values provided and passes the formatted string to `LLama-V2`, or another specified LLM." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import { RunnableSequence } from \"@langchain/core/runnables\";\n", "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n", "import { PromptTemplate } from \"@langchain/core/prompts\";\n", "import { createStuffDocumentsChain } from \"langchain/chains/combine_documents\";\n", "\n", "const prompt = PromptTemplate.fromTemplate(\"Summarize the main themes in these retrieved docs: {context}\");\n", "\n", "const chain = await createStuffDocumentsChain({\n", " llm: ollamaLlm,\n", " outputParser: new StringOutputParser(),\n", " prompt,\n", "})" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001b[32m\"The main themes retrieved from the provided documents are:\\n\"\u001b[39m +\n", " \u001b[32m\"\\n\"\u001b[39m +\n", " \u001b[32m\"1. Sensory Memory: The ability to retain\"\u001b[39m... 1117 more characters" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "const question = \"What are the approaches to Task Decomposition?\";\n", "const docs = await vectorStore.similaritySearch(question);\n", "await chain.invoke({\n", " context: docs,\n", "});" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See the LangSmith trace [here](https://smith.langchain.com/public/47cf6c2a-3d86-4f2b-9a51-ee4663b19152/r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q&A \n", "\n", "We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.\n", "\n", "Let's try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt)." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "import { pull } from \"langchain/hub\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "\n", "const ragPrompt = await pull<ChatPromptTemplate>(\"rlm/rag-prompt\");\n", "\n", "const chain = await createStuffDocumentsChain({\n", " llm: ollamaLlm,\n", " outputParser: new StringOutputParser(),\n", " prompt: ragPrompt,\n", "});" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see what this prompt actually looks like:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. 
Use three sentences maximum and keep the answer concise.\n", "Question: {question} \n", "Context: {context} \n", "Answer:\n" ] } ], "source": [ "console.log(ragPrompt.promptMessages.map((msg) => msg.prompt.template).join(\"\\n\"));" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001b[32m\"Task decomposition is a crucial step in breaking down complex problems into manageable parts for eff\"\u001b[39m... 1095 more characters" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await chain.invoke({ context: docs, question });" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See the LangSmith trace [here](https://smith.langchain.com/public/dd3a189b-53a1-4f31-9766-244cd04ad1f7/r)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q&A with retrieval\n", "\n", "Instead of manually passing in docs, we can automatically retrieve them from our vector store based on the user question.\n", "\n", "This will use a QA default prompt and will retrieve from the vectorDB." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001b[32m\"Based on the context provided, I understand that you are asking me to answer a question related to m\"\u001b[39m... 948 more characters" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import { RunnablePassthrough, RunnableSequence } from \"@langchain/core/runnables\";\n", "import { formatDocumentsAsString } from \"langchain/util/document\";\n", "\n", "const retriever = vectorStore.asRetriever();\n", "\n", "const qaChain = RunnableSequence.from([\n", " {\n", " context: (input: { question: string }, callbacks) => {\n", " const retrieverAndFormatter = retriever.pipe(formatDocumentsAsString);\n", " return retrieverAndFormatter.invoke(input.question, callbacks);\n", " },\n", " question: new RunnablePassthrough(),\n", " },\n", " ragPrompt,\n", " ollamaLlm,\n", " new StringOutputParser(),\n", "]);\n", "\n", "await qaChain.invoke({ question });" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See the LangSmith trace [here](https://smith.langchain.com/public/440e65ee-0301-42cf-afc9-f09cfb52cf64/r)" ] } ], "metadata": { "kernelspec": { "display_name": "Deno", "language": "typescript", "name": "deno" }, "language_info": { "file_extension": ".ts", "mimetype": "text/x.typescript", "name": "typescript", "nb_converter": "script",
" [\"human\", \"{input}\"],\n", "]);\n", "\n", "const questionAnswerChain = await createStuffDocumentsChain({\n", " llm,\n", " prompt,\n", "});\n", "\n", "const ragChain = await createRetrievalChain({\n", " retriever,\n", " combineDocsChain: questionAnswerChain,\n", "});" ] }, { "cell_type": "code", "execution_count": 4, "id": "bf55faaf-0d17-4b74-925d-c478b555f7b2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Task decomposition involves breaking down large and complex tasks into smaller, more manageable subgoals or steps. This approach helps agents or models efficiently handle intricate tasks by simplifying them into easier components. Task decomposition can be achieved through techniques like Chain of Thought, Tree of Thoughts, or by using task-specific instructions and human input.\n" ] } ], "source": [ "const response = await ragChain.invoke({ input: \"What is Task Decomposition?\" });\n", "console.log(response.answer);" ] }, { "cell_type": "markdown", "id": "187404c7-db47-49c5-be29-9ecb96dc9afa", "metadata": {}, "source": [ "Note that we have used the built-in chain constructors `createStuffDocumentsChain` and `createRetrievalChain`, so that the basic ingredients to our solution are:\n", "\n", "1. retriever;\n", "2. prompt;\n", "3. LLM.\n", "\n", "This will simplify the process of incorporating chat history.\n", "\n", "### Adding chat history\n", "\n", "The chain we have built uses the input query directly to retrieve relevant context. But in a conversational setting, the user query might require conversational context to be understood. For example, consider this exchange:\n", "\n", "> Human: \"What is Task Decomposition?\"\n", ">\n", "> AI: \"Task decomposition involves breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model.\"\n", ">\n", "> Human: \"What are common ways of doing it?\"\n", "\n", "In order to answer the second question, our system needs to understand that \"it\" refers to \"Task Decomposition.\"\n", "\n", "We'll need to update two things about our existing app:\n", "\n", "1. **Prompt**: Update our prompt to support historical messages as an input.\n", "2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This can be thought of simply as building a new \"history aware\" retriever. Whereas before we had:\n", " - `query` -> `retriever` \n", " Now we will have:\n", " - `(query, conversation history)` -> `LLM` -> `rephrased query` -> `retriever`" ] }, { "cell_type": "markdown", "id": "776ae958-cbdc-4471-8669-c6087436f0b5", "metadata": {}, "source": [ "#### Contextualizing the question\n", "\n", "First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the historical information.\n", "\n", "We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". 
This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question.\n", "\n", "Note that we leverage a helper function [createHistoryAwareRetriever](https://api.js.langchain.com/functions/langchain.chains_history_aware_retriever.createHistoryAwareRetriever.html) for this step, which manages the case where `chat_history` is empty, and otherwise applies `prompt.pipe(llm).pipe(new StringOutputParser()).pipe(retriever)` in sequence.\n", "\n", "`createHistoryAwareRetriever` constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever." ] }, { "cell_type": "code", "execution_count": 7, "id": "2b685428-8b82-4af1-be4f-7232c5d55b73", "metadata": {}, "outputs": [], "source": [ "import { createHistoryAwareRetriever } from \"langchain/chains/history_aware_retriever\";\n", "import { MessagesPlaceholder } from \"@langchain/core/prompts\";\n", "\n", "const contextualizeQSystemPrompt = \n", " \"Given a chat history and the latest user question \" +\n", " \"which might reference context in the chat history, \" +\n", " \"formulate a standalone question which can be understood \" +\n", " \"without the chat history. Do NOT answer the question, \" +\n", " \"just reformulate it if needed and otherwise return it as is.\";\n", "\n", "const contextualizeQPrompt = ChatPromptTemplate.fromMessages([\n", " [\"system\", contextualizeQSystemPrompt],\n", " new MessagesPlaceholder(\"chat_history\"),\n", " [\"human\", \"{input}\"],\n", "]);\n", "\n", "const historyAwareRetriever = await createHistoryAwareRetriever({\n", " llm,\n", " retriever,\n", " rephrasePrompt: contextualizeQPrompt,\n", "});" ] }, { "cell_type": "markdown", "id": "42a47168-4a1f-4e39-bd2d-d5b03609a243", "metadata": {}, "source": [ "This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation.\n", "\n", "Now we can build our full QA chain. This is as simple as updating the retriever to be our new `historyAwareRetriever`.\n", "\n", "Again, we will use [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain.chains_combine_documents.createStuffDocumentsChain.html) to generate a `questionAnswerChain2`, with input keys `context`, `chat_history`, and `input`. It accepts the retrieved context alongside the conversation history and query to generate an answer. A more detailed explanation is available [here](/docs/tutorials/rag/#built-in-chains).\n", "\n", "We build our final `ragChain2` with [createRetrievalChain](https://api.js.langchain.com/functions/langchain.chains_retrieval.createRetrievalChain.html). This chain applies the `historyAwareRetriever` and `questionAnswerChain2` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output."
] }, { "cell_type": "code", "execution_count": 9, "id": "66f275f3-ddef-4678-b90d-ee64576878f9", "metadata": {}, "outputs": [], "source": [ "const qaPrompt = ChatPromptTemplate.fromMessages([\n", " [\"system\", systemPrompt],\n", " new MessagesPlaceholder(\"chat_history\"),\n", " [\"human\", \"{input}\"],\n", "]);\n", "\n", "const questionAnswerChain2 = await createStuffDocumentsChain({\n", " llm,\n", " prompt: qaPrompt,\n", "});\n", "\n", "const ragChain2 = await createRetrievalChain({\n", " retriever: historyAwareRetriever,\n", " combineDocsChain: questionAnswerChain2,\n", "});" ] }, { "cell_type": "markdown", "id": "1ba1ae56-7ecb-4563-b792-50a1a5042df3", "metadata": {}, "source": [
146045
"{ answer: ' using' }\n", "----\n", "{ answer: ' task' }\n", "----\n", "{ answer: '-specific' }\n", "----\n", "{ answer: ' instructions' }\n", "----\n", "{ answer: '.' }\n", "----\n", "{ answer: '' }\n", "----\n", "{ answer: '' }\n", "----\n" ] } ], "source": [ "import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n", "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n", "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n", "import { OpenAIEmbeddings, ChatOpenAI } from \"@langchain/openai\";\n", "import { ChatPromptTemplate, MessagesPlaceholder } from \"@langchain/core/prompts\";\n", "import { createHistoryAwareRetriever } from \"langchain/chains/history_aware_retriever\";\n", "import { createStuffDocumentsChain } from \"langchain/chains/combine_documents\";\n", "import { createRetrievalChain } from \"langchain/chains/retrieval\";\n", "import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n", "import { ChatMessageHistory } from \"langchain/stores/message/in_memory\";\n", "import { BaseChatMessageHistory } from \"@langchain/core/chat_history\";\n", "\n", "const llm2 = new ChatOpenAI({ model: \"gpt-3.5-turbo\", temperature: 0 });\n", "\n", "// Construct retriever\n", "const loader2 = new CheerioWebBaseLoader(\n", " \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n", " {\n", " selector: \".post-content, .post-title, .post-header\"\n", " }\n", ");\n", "\n", "const docs2 = await loader2.load();\n", "\n", "const textSplitter2 = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });\n", "const splits2 = await textSplitter2.splitDocuments(docs2);\n", "const vectorstore2 = await MemoryVectorStore.fromDocuments(splits2, new OpenAIEmbeddings());\n", "const retriever2 = vectorstore2.asRetriever();\n", "\n", "// Contextualize question\n", "const contextualizeQSystemPrompt2 = \n", " \"Given a chat history and the latest user question \" +\n", " \"which might reference context in the chat history, \" +\n", " \"formulate a standalone question which can be understood \" +\n", " \"without the chat history. Do NOT answer the question, \" +\n", " \"just reformulate it if needed and otherwise return it as is.\";\n", "\n", "const contextualizeQPrompt2 = ChatPromptTemplate.fromMessages([\n", " [\"system\", contextualizeQSystemPrompt2],\n", " new MessagesPlaceholder(\"chat_history\"),\n", " [\"human\", \"{input}\"],\n", "]);\n", "\n", "const historyAwareRetriever2 = await createHistoryAwareRetriever({\n", " llm: llm2,\n", " retriever: retriever2,\n", " rephrasePrompt: contextualizeQPrompt2\n", "});\n", "\n", "// Answer question\n", "const systemPrompt2 = \n", " \"You are an assistant for question-answering tasks. \" +\n", " \"Use the following pieces of retrieved context to answer \" +\n", " \"the question. If you don't know the answer, say that you \" +\n", " \"don't know. 
Use three sentences maximum and keep the \" +\n", " \"answer concise.\" +\n", " \"\\n\\n\" +\n", " \"{context}\";\n", "\n", "const qaPrompt2 = ChatPromptTemplate.fromMessages([\n", " [\"system\", systemPrompt2],\n", " new MessagesPlaceholder(\"chat_history\"),\n", " [\"human\", \"{input}\"],\n", "]);\n", "\n", "const questionAnswerChain3 = await createStuffDocumentsChain({\n", " llm: llm2,\n", " prompt: qaPrompt2,\n", "});\n", "\n", "const ragChain3 = await createRetrievalChain({\n", " retriever: historyAwareRetriever2,\n", " combineDocsChain: questionAnswerChain3,\n", "});\n", "\n", "// Statefully manage chat history\n", "const store2: Record<string, BaseChatMessageHistory> = {};\n", "\n", "function getSessionHistory2(sessionId: string): BaseChatMessageHistory {\n", " if (!(sessionId in store2)) {\n", " store2[sessionId] = new ChatMessageHistory();\n", " }\n", " return store2[sessionId];\n", "}\n", "\n", "const conversationalRagChain2 = new RunnableWithMessageHistory({\n", " runnable: ragChain3,\n", " getMessageHistory: getSessionHistory2,\n", " inputMessagesKey: \"input\",\n", " historyMessagesKey: \"chat_history\",\n", " outputMessagesKey: \"answer\",\n", "});\n", "\n", "// Example usage\n", "const query2 = \"What is Task Decomposition?\";\n", "\n", "for await (const s of await conversationalRagChain2.stream(\n", " { input: query2 },\n", " { configurable: { sessionId: \"unique_session_id\" } }\n", ")) {\n", " console.log(s);\n", " console.log(\"----\");\n", "}" ] }, { "cell_type": "markdown", "id": "861da8ed-d890-4fdc-a3bf-30433db61e0d", "metadata": {}, "source": [ "## Agents {#agents}\n", "\n", "Agents leverage the reasoning capabilities of LLMs to make decisions during execution. Using agents allows you to offload some discretion over the retrieval process. Although their behavior is less predictable than chains, they offer some advantages in this context:\n", "\n", "- Agents generate the input to the retriever directly, without necessarily needing us to explicitly build in contextualization, as we did above;\n", "- Agents can execute multiple retrieval steps in service of a query, or refrain from executing a retrieval step altogether (e.g., in response to a generic greeting from a user).\n", "\n", "### Retrieval tool\n", "\n", "Agents can access \"tools\" and manage their execution. In this case, we will convert our retriever into a LangChain tool to be wielded by the agent:" ] }, { "cell_type": "code", "execution_count": 23, "id": "809cc747-2135-40a2-8e73-e4556343ee64", "metadata": {}, "outputs": [], "source": [ "import { createRetrieverTool } from \"langchain/tools/retriever\";\n", "\n", "const tool = createRetrieverTool(\n", " retriever,\n", " {\n", " name: \"blog_post_retriever\",\n", " description: \"Searches and returns excerpts from the Autonomous Agents blog post.\",\n", " }\n", ")\n", "const tools = [tool]" ] }, { "cell_type": "markdown", "id": "07dcb968-ed9a-458a-85e1-528cd28c6965", "metadata": {}, "source": [ "Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel), and implement the usual interface:" ] }, { "cell_type": "code", "execution_count": 24, "id": "931c4fe3-c603-4efb-9b37-5f7cbbb1cbbd", "metadata": {},
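The cell that exercises this interface is not shown above. As a rough sketch, a retriever tool created with `createRetrieverTool` can be called like any other Runnable; the query string below is an illustrative assumption.

```typescript
// The tool supports the standard Runnable interface (invoke/stream/batch).
// Its input schema is an object with a "query" field.
const toolResult = await tool.invoke({ query: "task decomposition" });

// The result is the retrieved excerpts serialized into a single string.
console.log(toolResult);
```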
146054
{ "cells": [ { "cell_type": "raw", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "sidebar_position: 1\n", "keywords: [conversationchain]\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Build a Chatbot\n", "\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Chat Models](/docs/concepts/#chat-models)\n", "- [Prompt Templates](/docs/concepts/#prompt-templates)\n", "- [Chat History](/docs/concepts/#chat-history)\n", "\n", "This guide requires `langgraph >= 0.2.28`.\n", "\n", ":::\n", "\n", "\n", "```{=mdx}\n", "\n", ":::note\n", "\n", "This tutorial previously built a chatbot using [RunnableWithMessageHistory](https://api.js.langchain.com/classes/_langchain_core.runnables.RunnableWithMessageHistory.html). You can access this version of the tutorial in the [v0.2 docs](https://js.langchain.com/v0.2/docs/tutorials/chatbot/).\n", "\n", "The LangGraph implementation offers a number of advantages over `RunnableWithMessageHistory`, including the ability to persist arbitrary components of an application's state (instead of only messages).\n", "\n", ":::\n", "\n", "```\n", "\n", "## Overview\n", "\n", "We'll go over an example of how to design and implement an LLM-powered chatbot. \n", "This chatbot will be able to have a conversation and remember previous interactions.\n", "\n", "\n", "Note that this chatbot that we build will only use the language model to have a conversation.\n", "There are several other related concepts that you may be looking for:\n", "\n", "- [Conversational RAG](/docs/tutorials/qa_chat_history): Enable a chatbot experience over an external source of data\n", "- [Agents](https://langchain-ai.github.io/langgraphjs/tutorials/multi_agent/agent_supervisor/): Build a chatbot that can take actions\n", "\n", "This tutorial will cover the basics which will be helpful for those two more advanced topics, but feel free to skip directly to there should you choose.\n", "\n", "## Setup\n", "\n", "### Jupyter Notebook\n", "\n", "This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n", "\n", "This and other tutorials are perhaps most conveniently run in a Jupyter notebook. 
See [here](https://jupyter.org/install) for instructions on how to install.\n", "\n", "### Installation\n", "\n", "For this tutorial we will need `@langchain/core`, `@langchain/langgraph`, and `uuid`:\n", "\n", "```{=mdx}\n", "import Npm2Yarn from \"@theme/Npm2Yarn\"\n", "\n", "<Npm2Yarn>\n", " @langchain/core @langchain/langgraph uuid\n", "</Npm2Yarn>\n", "```\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", "\n", "### LangSmith\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n", "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n", "The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "```typescript\n", "process.env.LANGCHAIN_TRACING_V2 = \"true\"\n", "process.env.LANGCHAIN_API_KEY = \"...\"\n", "```\n", "\n", "## Quickstart\n", "\n", "First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n", "```\n" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "// @lc-docs-hide-cell\n", "\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", "\n", "const llm = new ChatOpenAI({ model: \"gpt-4o-mini\" })" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's first use the model directly. `ChatModel`s are instances of LangChain \"Runnables\", which means they expose a standard interface for interacting with them. To simply call the model, we can pass in a list of messages to the `.invoke` method." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-ABUXeSO4JQpxO96lj7iudUptJ6nfW\",\n", " \"content\": \"Hi Bob! How can I assist you today?\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 10,\n", " \"promptTokens\": 10,\n", " \"totalTokens\": 20\n", " },\n", " \"finish_reason\": \"stop\",\n", " \"system_fingerprint\": \"fp_1bb46167f9\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 10,\n", " \"output_tokens\": 10,\n", " \"total_tokens\": 20\n", " }\n", "}\n" ] } ], "source": [ "await llm.invoke([{ role: \"user\", content: \"Hi im bob\" }])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model on its own does not have any concept of state. For example, if you ask a followup question:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-ABUXe1Zih4gMe3XgotWL83xeWub2h\",\n", " \"content\": \"I'm sorry, but I don't have access to personal information about individuals unless it has been shared with me during our conversation. 
If you'd like to tell me your name, feel free to do so!\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 39,\n", " \"promptTokens\": 10,\n", " \"totalTokens\": 49\n", " },\n", " \"finish_reason\": \"stop\",\n", " \"system_fingerprint\": \"fp_1bb46167f9\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n",
146055
" \"usage_metadata\": {\n", " \"input_tokens\": 10,\n", " \"output_tokens\": 39,\n", " \"total_tokens\": 49\n", " }\n", "}\n" ] } ], "source": [ "await llm.invoke([{ role: \"user\", content: \"Whats my name\" }])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the example [LangSmith trace](https://smith.langchain.com/public/3b768e44-a319-453a-bd6e-30f9df75f16a/r)\n", "\n", "We can see that it doesn't take the previous conversation turn into context, and cannot answer the question.\n", "This makes for a terrible chatbot experience!\n", "\n", "To get around this, we need to pass the entire conversation history into the model. Let's see what happens when we do that:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-ABUXfX4Fnp247rOxyPlBUYMQgahj2\",\n", " \"content\": \"Your name is Bob! How can I help you today?\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 12,\n", " \"promptTokens\": 33,\n", " \"totalTokens\": 45\n", " },\n", " \"finish_reason\": \"stop\",\n", " \"system_fingerprint\": \"fp_1bb46167f9\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 33,\n", " \"output_tokens\": 12,\n", " \"total_tokens\": 45\n", " }\n", "}\n" ] } ], "source": [ "await llm.invoke([\n", " { role: \"user\", content: \"Hi! I'm Bob\" },\n", " { role: \"assistant\", content: \"Hello Bob! How can I assist you today?\" },\n", " { role: \"user\", content: \"What's my name?\" }\n", "]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can see that we get a good response!\n", "\n", "This is the basic idea underpinning a chatbot's ability to interact conversationally.\n", "So how do we best implement this?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Message persistence\n", "\n", "[LangGraph](https://langchain-ai.github.io/langgraphjs/) implements a built-in persistence layer, making it ideal for chat applications that support multiple conversational turns.\n", "\n", "Wrapping our chat model in a minimal LangGraph application allows us to automatically persist the message history, simplifying the development of multi-turn applications.\n", "\n", "LangGraph comes with a simple in-memory checkpointer, which we use below." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "import { START, END, MessagesAnnotation, StateGraph, MemorySaver } from \"@langchain/langgraph\";\n", "\n", "// Define the function that calls the model\n", "const callModel = async (state: typeof MessagesAnnotation.State) => {\n", " const response = await llm.invoke(state.messages);\n", " return { messages: response };\n", "};\n", "\n", "// Define a new graph\n", "const workflow = new StateGraph(MessagesAnnotation)\n", " // Define the node and edge\n", " .addNode(\"model\", callModel)\n", " .addEdge(START, \"model\")\n", " .addEdge(\"model\", END);\n", "\n", "// Add memory\n", "const memory = new MemorySaver();\n", "const app = workflow.compile({ checkpointer: memory });" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now need to create a `config` that we pass into the runnable every time. This config contains information that is not part of the input directly, but is still useful. In this case, we want to include a `thread_id`. 
This should look like:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "import { v4 as uuidv4 } from \"uuid\";\n", "\n", "const config = { configurable: { thread_id: uuidv4() } };" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This enables us to support multiple conversation threads with a single application, a common requirement when your application has multiple users.\n", "\n", "We can then invoke the application:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-ABUXfjqCno78CGXCHoAgamqXG1pnZ\",\n", " \"content\": \"Hi Bob! How can I assist you today?\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 10,\n", " \"promptTokens\": 12,\n", " \"totalTokens\": 22\n", " },\n", " \"finish_reason\": \"stop\",\n", " \"system_fingerprint\": \"fp_1bb46167f9\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 12,\n", " \"output_tokens\": 10,\n", " \"total_tokens\": 22\n", " }\n", "}\n" ] } ], "source": [ "const input = [\n", " {\n", " role: \"user\",\n", " content: \"Hi! I'm Bob.\",\n", " }\n", "]\n", "const output = await app.invoke({ messages: input }, config)\n", "// The output contains all messages in the state.\n", "// This will log the last message in the conversation.\n", "console.log(output.messages[output.messages.length - 1]);" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AIMessage {\n", " \"id\": \"chatcmpl-ABUXgzHFHk4KsaNmDJyvflHq4JY2L\",\n", " \"content\": \"Your name is Bob! How can I help you today, Bob?\",\n", " \"additional_kwargs\": {},\n", " \"response_metadata\": {\n", " \"tokenUsage\": {\n", " \"completionTokens\": 14,\n", " \"promptTokens\": 34,\n", " \"totalTokens\": 48\n", " },\n", " \"finish_reason\": \"stop\",\n", " \"system_fingerprint\": \"fp_1bb46167f9\"\n", " },\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": [],\n", " \"usage_metadata\": {\n", " \"input_tokens\": 34,\n", " \"output_tokens\": 14,\n",
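The second output above comes from a follow-up invocation on the same thread, which is not shown in full here. A minimal sketch of that step, assuming the same `config` (and therefore the same `thread_id`) and an illustrative follow-up question, looks like:

```typescript
// Re-using the same config lets the checkpointer load the earlier messages
// from memory before calling the model again.
const followUpInput = [
  {
    role: "user",
    content: "What's my name?",
  },
];
const followUpOutput = await app.invoke({ messages: followUpInput }, config);

// Print only the latest AI reply.
console.log(followUpOutput.messages[followUpOutput.messages.length - 1]);
```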
146060
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Build a Retrieval Augmented Generation (RAG) App\n", "\n", "One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.\n", "\n", "This tutorial will show how to build a simple Q&A application\n", "over a text data source. Along the way we’ll go over a typical Q&A\n", "architecture and highlight additional resources for more advanced Q&A techniques. We’ll also see\n", "how LangSmith can help us trace and understand our application.\n", "LangSmith will become increasingly helpful as our application grows in\n", "complexity.\n", "\n", "If you're already familiar with basic retrieval, you might also be interested in\n", "this [high-level overview of different retrieval techinques](/docs/concepts/#retrieval).\n", "\n", "## What is RAG?\n", "\n", "RAG is a technique for augmenting LLM knowledge with additional data.\n", "\n", "LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data up to a specific point in time that they were trained on. If you want to build AI applications that can reason about private data or data introduced after a model's cutoff date, you need to augment the knowledge of the model with the specific information it needs. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).\n", "\n", "LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally. \n", "\n", "**Note**: Here we focus on Q&A for unstructured data. If you are interested for RAG over structured data, check out our tutorial on doing [question/answering over SQL data](/docs/tutorials/sql_qa).\n", "\n", "## Concepts\n", "A typical RAG application has two main components:\n", "\n", "**Indexing**: a pipeline for ingesting data from a source and indexing it. *This usually happens offline.*\n", "\n", "**Retrieval and generation**: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.\n", "\n", "The most common full sequence from raw data to answer looks like:\n", "\n", "### Indexing\n", "1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n", "2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n", "3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vectorstores) and [Embeddings](/docs/concepts/#embedding-models) model.\n", "\n", "![index_diagram](../../static/img/rag_indexing.png)\n", "\n", "### Retrieval and generation\n", "4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/concepts/#retrievers).\n", "5. 
**Generate**: A [ChatModel](/docs/concepts/#chat-models) / [LLM](/docs/concepts/#llms) produces an answer using a prompt that includes the question and the retrieved data\n", "\n", "![retrieval_diagram](../../static/img/rag_retrieval_generation.png)\n", "\n", "\n", "## Setup\n", "\n", "### Installation\n", "\n", "To install LangChain run:\n", "\n", "```bash npm2yarn\n", "npm i langchain @langchain/core\n", "```\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", "\n", "### LangSmith\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n", "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n", "The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "```shell\n", "export LANGCHAIN_TRACING_V2=\"true\"\n", "export LANGCHAIN_API_KEY=\"...\"\n", "\n", "# Reduce tracing latency if you are not in a serverless environment\n", "# export LANGCHAIN_CALLBACKS_BACKGROUND=true\n", "```\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preview\n", "\n", "In this guide we’ll build a QA app over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng, which allows us to ask questions about the contents of the post.\n", "\n", "We can create a simple indexing pipeline and RAG chain to do this in only a few lines of code:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import \"cheerio\";\n", "import { CheerioWebBaseLoader } from \"@langchain/community/document_loaders/web/cheerio\";\n", "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n", "import { MemoryVectorStore } from \"langchain/vectorstores/memory\"\n", "import { OpenAIEmbeddings, ChatOpenAI } from \"@langchain/openai\";\n", "import { pull } from \"langchain/hub\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n", "import { createStuffDocumentsChain } from \"langchain/chains/combine_documents\";\n", "\n", "const loader = new CheerioWebBaseLoader(\n", " \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\n", ");\n", "\n", "const docs = await loader.load();\n", "\n", "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });\n", "const splits = await textSplitter.splitDocuments(docs);\n", "const vectorStore = await MemoryVectorStore.fromDocuments(splits, new OpenAIEmbeddings());\n", "\n", "// Retrieve and generate using the relevant snippets of the blog.\n", "const retriever = vectorStore.asRetriever();\n", "const prompt = await pull<ChatPromptTemplate>(\"rlm/rag-prompt\");\n", "const llm = new ChatOpenAI({ model: \"gpt-3.5-turbo\", temperature: 0 });\n", "\n", "const ragChain = await createStuffDocumentsChain({\n", " llm,\n", " prompt,\n", " outputParser: new StringOutputParser(),\n", "})\n", "\n", "const retrievedDocs = await retriever.invoke(\"what is task decomposition\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The prompt looks like this:\n", "\n", "```\n", 
"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n", "Question: {question} \n",
146062
"Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.Chain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.Self-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.ReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.The ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:In both experiments on knowledge-intensive tasks and decision-making tasks, ReAct works better than the Act-only baseline where Thought: … step is removed.Reflexion (Shinn & Labash 2023) is a framework to equips agents with dynamic memory and self-reflection capabilities to improve reasoning skills. Reflexion has a standard RL setup, in which the reward model provides a simple binary reward and the action space follows the setup in ReAct where the task-specific action space is augmented with language to enable complex reasoning steps. 
After each action $a_t$, the agent computes a heuristic $h_t$ and optionally may decide to reset the environment to start a new trial depending on the self-reflection results.The heuristic function determines when the trajectory is inefficient or contains hallucination and should be stopped. Inefficient planning refers to trajectories that take too long without success. Hallucination is defined as encountering a sequence of consecutive identical actions that lead to the same observation in the environment.Self-reflection is created by showing two-shot examples to LLM and each example is a pair of (failed trajectory, ideal reflection for guiding future changes in the plan). Then reflections are added into the agent’s working memory, up to three, to be used as context for querying LLM.Chain of Hindsight (CoH; Liu et al. 2023) encourages the model to improve on its own outputs by explicitly presenting it with a sequence of past outputs, each annotated with feedback. Human feedback data is a collection of $D_h = \\{(x, y_i , r_i , z_i)\\}_{i=1}^n$, where $x$ is the prompt, each $y_i$ is a model completion, $r_i$ is the human rating of $y_i$, and $z_i$ is the corresponding human-provided hindsight feedback. Assume the feedback tuples are ranked by reward, $r_n \\geq r_{n-1} \\geq \\dots \\geq r_1$ The process is supervised fine-tuning where the data is a sequence in the form of $\\tau_h = (x, z_i, y_i, z_j, y_j, \\dots, z_n, y_n)$, where $\\leq i \\leq j \\leq n$. The model is finetuned to only predict $y_n$ where conditioned on the sequence prefix, such that the model can self-reflect to produce better output based on the feedback sequence. The model can optionally receive multiple rounds of instructions with human annotators at test time.To avoid overfitting, CoH adds a regularization term to maximize the log-likelihood of the pre-training dataset. To avoid shortcutting and copying (because there are many common words in feedback sequences), they randomly mask 0% - 5% of past tokens during training.The training dataset in their experiments is a combination of WebGPT comparisons, summarization from human feedback and human preference dataset.The idea of CoH is to present a history of sequentially improved outputs in context and train the model to take on the trend to produce better outputs. Algorithm Distillation (AD; Laskin et al. 2023) applies the same idea to cross-episode trajectories in reinforcement learning tasks, where an algorithm is encapsulated in a long history-conditioned policy. Considering that an agent interacts with the environment many times and in each episode the agent gets a little better, AD concatenates this learning history and feeds that into the model. Hence we should expect the next predicted action to lead to better performance than previous trials. The goal is to learn the process of RL instead of training a task-specific policy itself.The paper hypothesizes that any algorithm that generates a set of learning histories can be distilled into a neural network by performing behavioral cloning over actions. The history data is generated by a set of source policies, each trained for a specific task. At the training stage, during each RL run, a random task is sampled and a subsequence of multi-episode history is used for training, such that the learned policy is task-agnostic.In reality, the model has limited context window length, so episodes should be short enough to construct multi-episode history. 
Multi-episodic contexts of 2-4 episodes are necessary to learn a near-optimal in-context RL algorithm. The emergence of in-context RL requires long enough context.In comparison with three baselines, including ED (expert distillation, behavior cloning with expert trajectories instead of learning history), source policy (used for generating trajectories for distillation by UCB), RL^2 (Duan et al. 2017; used as upper bound since it needs online RL), AD demonstrates in-context RL with performance getting close to RL^2 despite only using offline RL and learns much faster than other baselines. When conditioned on partial training history of the source policy, AD also improves much faster than ED baseline.(Big thank you to ChatGPT for helping me draft this section. I’ve learned a lot about the human brain and data structure for fast MIPS in my conversations with ChatGPT.)Memory can be defined as the processes used to acquire, store, retain, and later retrieve information. There are several types of memory in human brains.Sensory Memory: This is the earliest stage of memory, providing the ability to retain impressions of sensory information (visual, auditory, etc) after the original stimuli have ended. Sensory memory typically only lasts for up to a few seconds. Subcategories include iconic memory (visual), echoic memory (auditory), and haptic memory (touch).Short-Term Memory (STM) or Working Memory: It stores information that we are currently aware of and needed to carry out complex cognitive tasks such as learning and reasoning. Short-term memory is believed to have the capacity of about 7 items (Miller 1956) and lasts for 20-30 seconds.Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:We can roughly consider the following mappings:The external memory can alleviate the restriction of finite attention span.
146063
A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN)​ algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.A couple common choices of ANN algorithms for fast MIPS:Check more MIPS algorithms and performance comparison in ann-benchmarks.com.Tool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.MRKL (Karpas et al. 2022), short for “Modular Reasoning, Knowledge and Language”, is a neuro-symbolic architecture for autonomous agents. A MRKL system is proposed to contain a collection of “expert” modules and the general-purpose LLM works as a router to route inquiries to the best suitable expert module. These modules can be neural (e.g. deep learning models) or symbolic (e.g. math calculator, currency converter, weather API).They did an experiment on fine-tuning LLM to call a calculator, using arithmetic as a test case. Their experiments showed that it was harder to solve verbal math problems than explicitly stated math problems because LLMs (7B Jurassic1-large model) failed to extract the right arguments for the basic arithmetic reliably. The results highlight when the external symbolic tools can work reliably, knowing when to and how to use the tools are crucial, determined by the LLM capability.Both TALM (Tool Augmented Language Models; Parisi et al. 2022) and Toolformer (Schick et al. 2023) fine-tune a LM to learn to use external tool APIs. The dataset is expanded based on whether a newly added API call annotation can improve the quality of model outputs. See more details in the “External APIs” section of Prompt Engineering.ChatGPT Plugins and OpenAI API function calling are good examples of LLMs augmented with tool use capability working in practice. The collection of tool APIs can be provided by other developers (as in Plugins) or self-defined (as in function calls).HuggingGPT (Shen et al. 2023) is a framework to use ChatGPT as the task planner to select models available in HuggingFace platform according to the model descriptions and summarize the response based on the execution results.The system comprises of 4 stages:(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.Instruction:(2) Model selection: LLM distributes the tasks to expert models, where the request is framed as a multiple-choice question. LLM is presented with a list of models to choose from. 
Due to the limited context length, task type based filtration is needed.Instruction:(3) Task execution: Expert models execute on the specific tasks and log results.Instruction:(4) Response generation: LLM receives the execution results and provides summarized results to users.To put HuggingGPT into real world usage, a couple challenges need to solve: (1) Efficiency improvement is needed as both LLM inference rounds and interactions with other models slow down the process; (2) It relies on a long context window to communicate over complicated task content; (3) Stability improvement of LLM outputs and external model services.API-Bank (Li et al. 2023) is a benchmark for evaluating the performance of tool-augmented LLMs. It contains 53 commonly used API tools, a complete tool-augmented LLM workflow, and 264 annotated dialogues that involve 568 API calls. The selection of APIs is quite diverse, including search engines, calculator, calendar queries, smart home control, schedule management, health data management, account authentication workflow and more. Because there are a large number of APIs, LLM first has access to API search engine to find the right API to call and then uses the corresponding documentation to make a call.In the API-Bank workflow, LLMs need to make a couple of decisions and at each step we can evaluate how accurate that decision is. Decisions include:This benchmark evaluates the agent’s tool use capabilities at three levels:ChemCrow (Bran et al. 2023) is a domain-specific example in which LLM is augmented with 13 expert-designed tools to accomplish tasks across organic synthesis, drug discovery, and materials design. The workflow, implemented in LangChain, reflects what was previously described in the ReAct and MRKLs and combines CoT reasoning with tools relevant to the tasks:One interesting observation is that while the LLM-based evaluation concluded that GPT-4 and ChemCrow perform nearly equivalently, human evaluations with experts oriented towards the completion and chemical correctness of the solutions showed that ChemCrow outperforms GPT-4 by a large margin. This indicates a potential problem with using LLM to evaluate its own performance on domains that requires deep expertise. The lack of expertise may cause LLMs not knowing its flaws and thus cannot well judge the correctness of task results.Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.For example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.Generative Agents (Park, et al. 2023) is super fun experiment where 25 virtual characters, each controlled by a LLM-powered agent, are living and interacting in a sandbox environment, inspired by The Sims. 
Generative agents create believable simulacra of human behavior for interactive applications.The design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.This fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).AutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.Here is the system message used by AutoGPT, where {{...}} are user inputs:GPT-Engineer is another project to create a whole repository of code given a task specified in natural language. The GPT-Engineer is instructed to think over a list of smaller components to build and ask for user input to clarify questions as needed.Here are a sample conversation for task clarification sent to OpenAI ChatCompletion endpoint used by GPT-Engineer. The user inputs are wrapped in {{user input text}}.Then after these clarification, the agent moved into the code writing mode with a different system message.\n",
146064
"System message:Think step by step and reason yourself to the right decisions to make sure we get it right.\n", "You will first lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.Then you will output the content of each file including ALL code.\n", "Each file must strictly follow a markdown code block format, where the following tokens must be replaced such that\n", "FILENAME is the lowercase file name including the file extension,\n", "LANG is the markup code block language for the code’s language, and CODE is the code:FILENAMEYou will start with the “entrypoint” file, then go to the ones that are imported by that file, and so on.\n", "Please note that the code should be fully functional. No placeholders.Follow a language and framework appropriate best practice file naming convention.\n", "Make sure that files contain all imports, types etc. Make sure that code in different files are compatible with each other.\n", "Ensure to implement all code, if you are unsure, write a plausible implementation.\n", "Include module dependency or package manager dependency definition file.\n", "Before you finish, double check that all parts of the architecture is present in the files.Useful to know:\n", "You almost always put different classes in different files.\n", "For Python, you always create an appropriate requirements.txt file.\n", "For NodeJS, you always create an appropriate package.json file.\n", "You always add a comment briefly describing the purpose of the function definition.\n", "You try to add comments explaining very complex bits of logic.\n", "You always follow the best practices for the requested languages in terms of describing the code written as a defined\n", "package/project.Python toolbelt preferences:Conversatin samples:After going through key ideas and demos of building LLM-centered agents, I start to see a couple common limitations:Finite context length: The restricted context capacity limits the inclusion of historical information, detailed instructions, API call context, and responses. The design of the system has to work with this limited communication bandwidth, while mechanisms like self-reflection to learn from past mistakes would benefit a lot from long or infinite context windows. Although vector stores and retrieval can provide access to a larger knowledge pool, their representation power is not as powerful as full attention.Challenges in long-term planning and task decomposition: Planning over a lengthy history and effectively exploring the solution space remain challenging. LLMs struggle to adjust plans when faced with unexpected errors, making them less robust compared to humans who learn from trial and error.Reliability of natural language interface: Current agent system relies on natural language as an interface between LLMs and external components such as memory and tools. However, the reliability of model outputs is questionable, as LLMs may make formatting errors and occasionally exhibit rebellious behavior (e.g. refuse to follow an instruction). Consequently, much of the agent demo code focuses on parsing model output.Cited as:Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.Or[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).[3] Liu et al. 
“Chain of Hindsight Aligns Language Models with Feedback\n", "“ arXiv preprint arXiv:2302.02676 (2023).[4] Liu et al. “LLM+P: Empowering Large Language Models with Optimal Planning Proficiency” arXiv preprint arXiv:2304.11477 (2023).[5] Yao et al. “ReAct: Synergizing reasoning and acting in language models.” ICLR 2023.[6] Google Blog. “Announcing ScaNN: Efficient Vector Similarity Search” July 28, 2020.[7] https://chat.openai.com/share/46ff149e-a4c7-4dd7-a800-fc4a642ea389[8] Shinn & Labash. “Reflexion: an autonomous agent with dynamic memory and self-reflection” arXiv preprint arXiv:2303.11366 (2023).[9] Laskin et al. “In-context Reinforcement Learning with Algorithm Distillation” ICLR 2023.[10] Karpas et al. “MRKL Systems A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.” arXiv preprint arXiv:2205.00445 (2022).[11] Nakano et al. “Webgpt: Browser-assisted question-answering with human feedback.” arXiv preprint arXiv:2112.09332 (2021).[12] Parisi et al. “TALM: Tool Augmented Language Models”[13] Schick et al. “Toolformer: Language Models Can Teach Themselves to Use Tools.” arXiv preprint arXiv:2302.04761 (2023).[14] Weaviate Blog. Why is Vector Search so fast? Sep 13, 2022.[15] Li et al. “API-Bank: A Benchmark for Tool-Augmented LLMs” arXiv preprint arXiv:2304.08244 (2023).[16] Shen et al. “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace” arXiv preprint arXiv:2303.17580 (2023).[17] Bran et al. “ChemCrow: Augmenting large-language models with chemistry tools.” arXiv preprint arXiv:2304.05376 (2023).[18] Boiko et al. “Emergent autonomous scientific research capabilities of large language models.” arXiv preprint arXiv:2304.05332 (2023).[19] Joon Sung Park, et al. “Generative Agents: Interactive Simulacra of Human Behavior.” arXiv preprint arXiv:2304.03442 (2023).[20] AutoGPT. https://github.com/Significant-Gravitas/Auto-GPT[21] GPT-Engineer. https://github.com/AntonOsika/gpt-engineer\n" ] } ], "source": [ "console.log(loadedDocs[0].pageContent)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Go deeper\n", "`DocumentLoader`: Class that loads data from a source as list of Documents. - [Docs](/docs/concepts#document-loaders): Detailed documentation on how to use\n", "\n", "`DocumentLoaders`. - [Integrations](/docs/integrations/document_loaders/) - [Interface](https:/api.js.langchain.com/classes/langchain.document_loaders_base.BaseDocumentLoader.html): API reference for the base interface." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Indexing: Split\n", "Our loaded document is over 42k characters long. This is too long to fit in the context window of many models. Even for those models that could fit the full post in their context window, models can struggle to find information in very long inputs.\n", "\n", "To handle this we’ll split the `Document` into chunks for embedding and vector storage. This should help us retrieve only the most relevant bits of the blog post at run time.\n", "\n", "In this case we’ll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the [RecursiveCharacterTextSplitter](/docs/how_to/recursive_text_splitter/), which will recursively split the document using common separators like new lines until each chunk is the appropriate size. 
This is the recommended text splitter for generic text use cases." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "const splitter = new RecursiveCharacterTextSplitter({\n", " chunkSize: 1000, chunkOverlap: 200\n", "});\n", "const allSplits = await splitter.splitDocuments(loadedDocs);" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "29\n" ] } ], "source": [ "console.log(allSplits.length);" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "996\n" ] } ], "source": [
146065
"console.log(allSplits[0].pageContent.length);" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " source: 'https://lilianweng.github.io/posts/2023-06-23-agent/',\n", " loc: { lines: { from: 1, to: 1 } }\n", "}\n" ] } ], "source": [ "allSplits[10].metadata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Go deeper\n", "\n", "`TextSplitter`: Object that splits a list of `Document`s into smaller chunks. Subclass of `DocumentTransformers`. - Explore `Context-aware splitters`, which keep the location (“context”) of each split in the original `Document`: - [Markdown files](/docs/how_to/code_splitter/#markdown) - [Code](/docs/how_to/code_splitter/) (15+ langs) - [Interface](https://api.js.langchain.com/classes/langchain_textsplitters.TextSplitter.html): API reference for the base interface.\n", "\n", "`DocumentTransformer`: Object that performs a transformation on a list of `Document`s. - Docs: Detailed documentation on how to use `DocumentTransformer`s - [Integrations](/docs/integrations/document_transformers) - [Interface](https://api.js.langchain.com/classes/langchain_core.documents.BaseDocumentTransformer.html): API reference for the base interface." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Indexing: Store\n", "Now we need to index our 28 text chunks so that we can search over them at runtime. The most common way to do this is to embed the contents of each document split and insert these embeddings into a vector database (or vector store). When we want to search over our splits, we take a text search query, embed it, and perform some sort of “similarity” search to identify the stored splits with the most similar embeddings to our query embedding. The simplest similarity measure is cosine similarity — we measure the cosine of the angle between each pair of embeddings (which are high dimensional vectors).\n", "\n", "We can embed and store all of our document splits in a single command using the [Memory](/docs/integrations/vectorstores/memory) vector store and [OpenAIEmbeddings](/docs/integrations/text_embedding/openai) model." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "import { MemoryVectorStore } from \"langchain/vectorstores/memory\"\n", "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const inMemoryVectorStore = await MemoryVectorStore.fromDocuments(allSplits, new OpenAIEmbeddings());" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Go deeper\n", "\n", "`Embeddings`: Wrapper around a text embedding model, used for converting text to embeddings. - [Docs](/docs/concepts#embedding-models): Detailed documentation on how to use embeddings. - [Integrations](/docs/integrations/text_embedding): 30+ integrations to choose from. - [Interface](https://api.js.langchain.com/classes/langchain_core.embeddings.Embeddings.html): API reference for the base interface.\n", "\n", "`VectorStore`: Wrapper around a vector database, used for storing and querying embeddings. - [Docs](/docs/concepts#vectorstores): Detailed documentation on how to use vector stores. - [Integrations](/docs/integrations/vectorstores): 40+ integrations to choose from. - [Interface](https://api.js.langchain.com/classes/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.\n", "\n", "This completes the **Indexing** portion of the pipeline. 
At this point we have a query-able vector store containing the chunked contents of our blog post. Given a user question, we should ideally be able to return the snippets of the blog post that answer the question." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Retrieval and Generation: Retrieve\n", "\n", "Now let’s write the actual application logic. We want to create a simple application that takes a user question, searches for documents relevant to that question, passes the retrieved documents and initial question to a model, and returns an answer.\n", "\n", "First we need to define our logic for searching over documents. LangChain defines a [Retriever](/docs/concepts#retrievers) interface which wraps an index that can return relevant `Document`s given a string query.\n", "\n", "The most common type of Retriever is the [VectorStoreRetriever](https://api.js.langchain.com/classes/langchain_core.vectorstores.VectorStoreRetriever.html), which uses the similarity search capabilities of a vector store to facilitate retrieval. Any `VectorStore` can easily be turned into a `Retriever` with `VectorStore.asRetriever()`:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "const vectorStoreRetriever = inMemoryVectorStore.asRetriever({ k: 6, searchType: \"similarity\" });" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "const retrievedDocuments = await vectorStoreRetriever.invoke(\"What are the approaches to task decomposition?\");" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n" ] } ], "source": [ "console.log(retrievedDocuments.length);" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain\n" ] } ], "source": [ "console.log(retrievedDocuments[0].pageContent);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Go deeper\n", "\n", "Vector stores are commonly used for retrieval, but there are other ways to do retrieval, too.\n", "\n", "`Retriever`: An object that returns `Document`s given a text query - [Docs](/docs/concepts#retrievers): Further documentation on the interface and built-in retrieval techniques. Some of which include: - `MultiQueryRetriever` [generates variants of the input question](/docs/how_to/multiple_queries/) to improve retrieval hit rate. 
- `MultiVectorRetriever` (diagram below) instead generates variants of the embeddings, also in order to improve retrieval hit rate. - Max marginal relevance selects for relevance and diversity among the retrieved documents to avoid passing in duplicate context. - Documents can be filtered during vector store retrieval using metadata filters. - Integrations: Integrations with retrieval services. - Interface: API reference for the base interface." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Retrieval and Generation: Generate\n", "\n", "Let’s put it all together into a chain that takes a question, retrieves relevant documents, constructs a prompt, passes that to a model, and parses the output.\n", "\n",
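The generation step described above can be sketched as a small LCEL chain. The notebook is truncated at this point, so the following is a minimal, illustrative sketch rather than the tutorial's own code: it assumes the `vectorStoreRetriever` created earlier, an OpenAI chat model (the `gpt-4o-mini` model name and the prompt wording are placeholders), and a hypothetical `formatDocs` helper for joining retrieved documents into a context string.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";

// Hypothetical helper: join retrieved documents into one context string.
const formatDocs = (docs: Document[]) => docs.map((d) => d.pageContent).join("\n\n");

const ragPrompt = ChatPromptTemplate.fromTemplate(
  "Use the following context to answer the question.\n\nContext:\n{context}\n\nQuestion: {question}"
);

const ragChain = RunnableSequence.from([
  {
    // `vectorStoreRetriever` is the retriever created with `asRetriever()` above.
    context: vectorStoreRetriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }),
  new StringOutputParser(),
]);

const answer = await ragChain.invoke("What are the approaches to task decomposition?");
console.log(answer);
```

The retrieved chunks are stuffed into the prompt's `{context}` slot, the model generates an answer grounded in them, and the string parser returns plain text.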
# Build a Question/Answering system over SQL data

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Chaining runnables](/docs/how_to/sequence/)
- [Chat models](/docs/concepts/#chat-models)
- [Tools](/docs/concepts/#tools)
- [Agents](/docs/concepts/#agents)

:::

In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database. These systems allow us to ask a question about the data in a SQL database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs in order to answer the question.

## ⚠️ Security note ⚠️

Building Q&A systems over SQL databases can require executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent's needs. This will mitigate, though not eliminate, the risks of building a model-driven system. For more on general security best practices, see [here](/docs/security).

## Architecture

At a high level, the steps of most SQL chains and agents are:

1. **Convert question to SQL query**: The model converts user input to a SQL query.
2. **Execute SQL query**: Execute the SQL query.
3. **Answer the question**: The model responds to the user input using the query results.

![SQL Use Case Diagram](/img/sql_usecase.png)

## Setup

First, get the required packages and set environment variables:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm i langchain @langchain/community @langchain/openai @langchain/core
```

We default to OpenAI models in this guide.

```bash
export OPENAI_API_KEY=<your key>

# Uncomment the below to use LangSmith. Not required, but recommended for debugging and observability.
# export LANGCHAIN_API_KEY=<your key>
# export LANGCHAIN_TRACING_V2=true

# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
```

import CodeBlock from "@theme/CodeBlock";
import DbCheck from "@examples/use_cases/sql/db_check.ts";

<CodeBlock language="typescript">{DbCheck}</CodeBlock>

Great! We've got a SQL database that we can query. Now let's try hooking it up to an LLM.

## Chain

Let's create a simple chain that takes a question, turns it into a SQL query, executes the query, and uses the result to answer the original question.

### Convert question to SQL query

The first step in a SQL chain or agent is to take the user input and convert it to a SQL query. LangChain comes with a built-in chain for this: [`createSqlQueryChain`](https://api.js.langchain.com/functions/langchain.chains_sql_db.createSqlQueryChain.html).

import QuickstartChainExample from "@examples/use_cases/sql/quickstart_chain.ts";

<CodeBlock language="typescript">{QuickstartChainExample}</CodeBlock>

We can look at the [LangSmith trace](https://smith.langchain.com/public/6d8f0213-9f02-498e-aeb2-ec774e324e2c/r) to get a better understanding of what this chain is doing. We can also inspect the chain directly for its prompts. Looking at the prompt (below), we can see that it:

- Is dialect-specific. In this case it references SQLite explicitly.
- Has definitions for all the available tables.
- Has three example rows for each table.
This technique is inspired by papers like [this one](https://arxiv.org/pdf/2204.00498.pdf), which suggest that showing example rows and being explicit about tables improves performance. We can also inspect the full prompt via the LangSmith trace:

![Chain Prompt](/img/sql_quickstart_langsmith_prompt.png)

### Execute SQL query

Now that we've generated a SQL query, we'll want to execute it. This is the most dangerous part of creating a SQL chain. Consider carefully whether it is OK to run automated queries over your data. Minimize the database connection permissions as much as possible. Consider adding a human approval step to your chains before query execution (see below).

We can use the [`QuerySqlTool`](https://api.js.langchain.com/classes/langchain.tools_sql.QuerySqlTool.html) to easily add query execution to our chain:

import QuickstartExecuteExample from "@examples/use_cases/sql/quickstart_execute_sql.ts";

<CodeBlock language="typescript">{QuickstartExecuteExample}</CodeBlock>

:::tip

See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/3cbcf6f2-a55b-4701-a2e3-9928e4747328/r).

:::

### Answer the question

Now that we have a way to automatically generate and execute queries, we just need to combine the original question and the SQL query result to generate a final answer. We can do this by passing the question and result to the LLM once more:

import QuickstartAnswerExample from "@examples/use_cases/sql/quickstart_answer_question.ts";

<CodeBlock language="typescript">{QuickstartAnswerExample}</CodeBlock>

:::tip

See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/d130ce1f-1fce-4192-921e-4b522884ec1a/r).

:::

### Next steps

For more complex query generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more, check out:

- [Prompting strategies](/docs/how_to/sql_prompting): Advanced prompt engineering techniques.
- [Query checking](/docs/how_to/sql_query_checking): Add query validation and error handling.
- [Large databases](/docs/how_to/sql_large_db): Techniques for working with large databases.

## Agents

LangChain offers a number of tools and functions that allow you to create SQL Agents, which provide a more flexible way of interacting with SQL databases. The main advantages of using SQL Agents are:

- It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table).
- It can recover from errors by running a generated query, catching the traceback, and regenerating it correctly.
- It can answer questions that require multiple dependent queries.
- It will save tokens by only considering the schema from relevant tables.

To initialize the agent, we use the [`createOpenAIToolsAgent`](https://api.js.langchain.com/functions/langchain.agents.createOpenAIToolsAgent.html) function. This agent contains the [`SqlToolkit`](https://api.js.langchain.com/classes/langchain.agents_toolkits_sql.SqlToolkit.html), which contains tools to:

- Create and execute queries
- Check query syntax
- Retrieve table descriptions
- … and more
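The example files referenced in the Chain section above (`quickstart_chain.ts`, `quickstart_execute_sql.ts`, `quickstart_answer_question.ts`) are not reproduced here, so the following is a rough, hedged sketch of how the first two steps fit together. It assumes a local SQLite copy of the Chinook sample database at `Chinook.db` and an OpenAI chat model; the file path and model name are placeholders.

```typescript
import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { QuerySqlTool } from "langchain/tools/sql";
import { ChatOpenAI } from "@langchain/openai";

// Assumption: a SQLite copy of the Chinook sample database sits next to this script.
const datasource = new DataSource({ type: "sqlite", database: "Chinook.db" });
const db = await SqlDatabase.fromDataSourceParams({ appDataSource: datasource });

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Step 1: convert the question to a SQL query.
const writeQuery = await createSqlQueryChain({ llm, db, dialect: "sqlite" });

// Step 2: execute the query (consider a human approval step before this in production).
const executeQuery = new QuerySqlTool(db);

const query = await writeQuery.invoke({ question: "How many employees are there?" });
const result = await executeQuery.invoke(query);

console.log({ query, result });
// Step 3 would pass the question, query, and result back to the LLM to phrase a final answer.
```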
import type * as tiktoken from "js-tiktoken"; import { Document, BaseDocumentTransformer } from "@langchain/core/documents"; import { getEncoding } from "@langchain/core/utils/tiktoken"; export interface TextSplitterParams { chunkSize: number; chunkOverlap: number; keepSeparator: boolean; lengthFunction?: | ((text: string) => number) | ((text: string) => Promise<number>); } export type TextSplitterChunkHeaderOptions = { chunkHeader?: string; chunkOverlapHeader?: string; appendChunkOverlapHeader?: boolean; }; export abstract class TextSplitter extends BaseDocumentTransformer implements TextSplitterParams { lc_namespace = ["langchain", "document_transformers", "text_splitters"]; chunkSize = 1000; chunkOverlap = 200; keepSeparator = false; lengthFunction: | ((text: string) => number) | ((text: string) => Promise<number>); constructor(fields?: Partial<TextSplitterParams>) { super(fields); this.chunkSize = fields?.chunkSize ?? this.chunkSize; this.chunkOverlap = fields?.chunkOverlap ?? this.chunkOverlap; this.keepSeparator = fields?.keepSeparator ?? this.keepSeparator; this.lengthFunction = fields?.lengthFunction ?? ((text: string) => text.length); if (this.chunkOverlap >= this.chunkSize) { throw new Error("Cannot have chunkOverlap >= chunkSize"); } } async transformDocuments( documents: Document[], chunkHeaderOptions: TextSplitterChunkHeaderOptions = {} ): Promise<Document[]> { return this.splitDocuments(documents, chunkHeaderOptions); } abstract splitText(text: string): Promise<string[]>; protected splitOnSeparator(text: string, separator: string): string[] { let splits; if (separator) { if (this.keepSeparator) { const regexEscapedSeparator = separator.replace( /[/\-\\^$*+?.()|[\]{}]/g, "\\$&" ); splits = text.split(new RegExp(`(?=${regexEscapedSeparator})`)); } else { splits = text.split(separator); } } else { splits = text.split(""); } return splits.filter((s) => s !== ""); } async createDocuments( texts: string[], // eslint-disable-next-line @typescript-eslint/no-explicit-any metadatas: Record<string, any>[] = [], chunkHeaderOptions: TextSplitterChunkHeaderOptions = {} ): Promise<Document[]> { // if no metadata is provided, we create an empty one for each text // eslint-disable-next-line @typescript-eslint/no-explicit-any const _metadatas: Record<string, any>[] = metadatas.length > 0 ? 
metadatas : [...Array(texts.length)].map(() => ({})); const { chunkHeader = "", chunkOverlapHeader = "(cont'd) ", appendChunkOverlapHeader = false, } = chunkHeaderOptions; const documents = new Array<Document>(); for (let i = 0; i < texts.length; i += 1) { const text = texts[i]; let lineCounterIndex = 1; let prevChunk = null; let indexPrevChunk = -1; for (const chunk of await this.splitText(text)) { let pageContent = chunkHeader; // we need to count the \n that are in the text before getting removed by the splitting const indexChunk = text.indexOf(chunk, indexPrevChunk + 1); if (prevChunk === null) { const newLinesBeforeFirstChunk = this.numberOfNewLines( text, 0, indexChunk ); lineCounterIndex += newLinesBeforeFirstChunk; } else { const indexEndPrevChunk = indexPrevChunk + (await this.lengthFunction(prevChunk)); if (indexEndPrevChunk < indexChunk) { const numberOfIntermediateNewLines = this.numberOfNewLines( text, indexEndPrevChunk, indexChunk ); lineCounterIndex += numberOfIntermediateNewLines; } else if (indexEndPrevChunk > indexChunk) { const numberOfIntermediateNewLines = this.numberOfNewLines( text, indexChunk, indexEndPrevChunk ); lineCounterIndex -= numberOfIntermediateNewLines; } if (appendChunkOverlapHeader) { pageContent += chunkOverlapHeader; } } const newLinesCount = this.numberOfNewLines(chunk); const loc = _metadatas[i].loc && typeof _metadatas[i].loc === "object" ? { ..._metadatas[i].loc } : {}; loc.lines = { from: lineCounterIndex, to: lineCounterIndex + newLinesCount, }; const metadataWithLinesNumber = { ..._metadatas[i], loc, }; pageContent += chunk; documents.push( new Document({ pageContent, metadata: metadataWithLinesNumber, }) ); lineCounterIndex += newLinesCount; prevChunk = chunk; indexPrevChunk = indexChunk; } } return documents; } private numberOfNewLines(text: string, start?: number, end?: number) { const textSection = text.slice(start, end); return (textSection.match(/\n/g) || []).length; } async splitDocuments( documents: Document[], chunkHeaderOptions: TextSplitterChunkHeaderOptions = {} ): Promise<Document[]> { const selectedDocuments = documents.filter( (doc) => doc.pageContent !== undefined ); const texts = selectedDocuments.map((doc) => doc.pageContent); const metadatas = selectedDocuments.map((doc) => doc.metadata); return this.createDocuments(texts, metadatas, chunkHeaderOptions); } private joinDocs(docs: string[], separator: string): string | null { const text = docs.join(separator).trim(); return text === "" ? 
null : text; } async mergeSplits(splits: string[], separator: string): Promise<string[]> { const docs: string[] = []; const currentDoc: string[] = []; let total = 0; for (const d of splits) { const _len = await this.lengthFunction(d); if ( total + _len + currentDoc.length * separator.length > this.chunkSize ) { if (total > this.chunkSize) { console.warn( `Created a chunk of size ${total}, + which is longer than the specified ${this.chunkSize}` ); } if (currentDoc.length > 0) { const doc = this.joinDocs(currentDoc, separator); if (doc !== null) { docs.push(doc); } // Keep on popping if: // - we have a larger chunk than in the chunk overlap // - or if we still have any chunks and the length is long while ( total > this.chunkOverlap || (total + _len + currentDoc.length * separator.length > this.chunkSize && total > 0) ) { total -= await this.lengthFunction(currentDoc[0]); currentDoc.shift(); } } } currentDoc.push(d); total += _len; } const doc = this.joinDocs(currentDoc, separator); if (doc !== null) { docs.push(doc); } return docs; } } export interface CharacterTextSplitterParams extends TextSplitterParams { separator: string; } export class CharacterTextSplitter extends TextSplitter implements CharacterTextSplitterParams { static lc_name() { return "CharacterTextSplitter"; } separator = "\n\n"; constructor(fields?: Partial<CharacterTextSplitterParams>) { super(fields); this.separator = fields?.separator ?? this.separator; } async splitText(text: string): Promise<string[]> { // First we naively split the large input into a bunch of smaller ones. const splits = this.splitOnSeparator(text, this.separator); return this.mergeSplits(splits, this.keepSeparator ? "" : this.separator); } } export interface RecursiveCharacterTextSplitterParams extends TextSplitterParams { separators: string[]; } export const SupportedTextSplitterLanguages = [ "cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol", ] as const; export type SupportedTextSplitterLanguage = (typeof SupportedTextSplitterLanguages)[number];
export class RecursiveCharacterTextSplitter extends TextSplitter implements RecursiveCharacterTextSplitterParams { static lc_name() { return "RecursiveCharacterTextSplitter"; } separators: string[] = ["\n\n", "\n", " ", ""]; constructor(fields?: Partial<RecursiveCharacterTextSplitterParams>) { super(fields); this.separators = fields?.separators ?? this.separators; this.keepSeparator = fields?.keepSeparator ?? true; } private async _splitText(text: string, separators: string[]) { const finalChunks: string[] = []; // Get appropriate separator to use let separator: string = separators[separators.length - 1]; let newSeparators; for (let i = 0; i < separators.length; i += 1) { const s = separators[i]; if (s === "") { separator = s; break; } if (text.includes(s)) { separator = s; newSeparators = separators.slice(i + 1); break; } } // Now that we have the separator, split the text const splits = this.splitOnSeparator(text, separator); // Now go merging things, recursively splitting longer texts. let goodSplits: string[] = []; const _separator = this.keepSeparator ? "" : separator; for (const s of splits) { if ((await this.lengthFunction(s)) < this.chunkSize) { goodSplits.push(s); } else { if (goodSplits.length) { const mergedText = await this.mergeSplits(goodSplits, _separator); finalChunks.push(...mergedText); goodSplits = []; } if (!newSeparators) { finalChunks.push(s); } else { const otherInfo = await this._splitText(s, newSeparators); finalChunks.push(...otherInfo); } } } if (goodSplits.length) { const mergedText = await this.mergeSplits(goodSplits, _separator); finalChunks.push(...mergedText); } return finalChunks; } async splitText(text: string): Promise<string[]> { return this._splitText(text, this.separators); } static fromLanguage( language: SupportedTextSplitterLanguage, options?: Partial<RecursiveCharacterTextSplitterParams> ) { return new RecursiveCharacterTextSplitter({ ...options, separators: RecursiveCharacterTextSplitter.getSeparatorsForLanguage(language), }); }
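The source file is truncated here, so as a quick orientation to the splitter classes above, here is a small, hedged usage sketch. The import path is an assumption (`@langchain/textsplitters` in recent releases; older code imports from `langchain/text_splitter`), and the sample text and chunk sizes are placeholders.

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Recursively falls back through ["\n\n", "\n", " ", ""] until chunks fit chunkSize.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 200,
  chunkOverlap: 20,
});

const chunks = await splitter.splitText(
  "First paragraph of a long document...\n\nSecond paragraph, which may end up in another chunk."
);

// Language-aware splitting swaps in separator sets tuned to the given syntax.
const mdSplitter = RecursiveCharacterTextSplitter.fromLanguage("markdown", {
  chunkSize: 200,
  chunkOverlap: 0,
});
const mdChunks = await mdSplitter.splitText("# Title\n\nSome markdown body text.");

console.log(chunks.length, mdChunks.length);
```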
import { describe, expect, test } from "@jest/globals"; import { Document } from "@langchain/core/documents"; import { CharacterTextSplitter, LatexTextSplitter, MarkdownTextSplitter, RecursiveCharacterTextSplitter, TokenTextSplitter, } from "../text_splitter.js"; function textLineGenerator(char: string, length: number) { const line = new Array(length).join(char); return `${line}\n`; } describe("Character text splitter", () => { test("Test splitting by character count.", async () => { const text = "foo bar baz 123"; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 7, chunkOverlap: 3, }); const output = await splitter.splitText(text); const expectedOutput = ["foo bar", "bar baz", "baz 123"]; expect(output).toEqual(expectedOutput); }); test("Test splitting by character count doesn't create empty documents.", async () => { const text = "foo bar"; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 2, chunkOverlap: 0, }); const output = await splitter.splitText(text); const expectedOutput = ["foo", "bar"]; expect(output).toEqual(expectedOutput); }); test("Test splitting by character count on long words.", async () => { const text = "foo bar baz a a"; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 3, chunkOverlap: 1, }); const output = await splitter.splitText(text); const expectedOutput = ["foo", "bar", "baz", "a a"]; expect(output).toEqual(expectedOutput); }); test("Test splitting by character count when shorter words are first.", async () => { const text = "a a foo bar baz"; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 3, chunkOverlap: 1, }); const output = await splitter.splitText(text); const expectedOutput = ["a a", "foo", "bar", "baz"]; expect(output).toEqual(expectedOutput); }); test("Test splitting by characters when splits not found easily.", async () => { const text = "foo bar baz 123"; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 1, chunkOverlap: 0, }); const output = await splitter.splitText(text); const expectedOutput = ["foo", "bar", "baz", "123"]; expect(output).toEqual(expectedOutput); }); test("Test invalid arguments.", () => { expect(() => { // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = new CharacterTextSplitter({ chunkSize: 2, chunkOverlap: 4 }); // console.log(res); }).toThrow(); }); test("Test create documents method.", async () => { const texts = ["foo bar", "baz"]; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 3, chunkOverlap: 0, }); const docs = await splitter.createDocuments(texts); const metadata = { loc: { lines: { from: 1, to: 1 } } }; const expectedDocs = [ new Document({ pageContent: "foo", metadata }), new Document({ pageContent: "bar", metadata }), new Document({ pageContent: "baz", metadata }), ]; expect(docs).toEqual(expectedDocs); }); test("Test create documents with metadata method.", async () => { const texts = ["foo bar", "baz"]; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 3, chunkOverlap: 0, }); const docs = await splitter.createDocuments(texts, [ { source: "1" }, { source: "2" }, ]); const loc = { lines: { from: 1, to: 1 } }; const expectedDocs = [ new Document({ pageContent: "foo", metadata: { source: "1", loc } }), new Document({ pageContent: "bar", metadata: { source: "1", loc }, }), new Document({ pageContent: "baz", metadata: { source: "2", loc } }), ]; expect(docs).toEqual(expectedDocs); }); test("Test create 
documents method with metadata and an added chunk header.", async () => { const texts = ["foo bar", "baz"]; const splitter = new CharacterTextSplitter({ separator: " ", chunkSize: 3, chunkOverlap: 0, }); const docs = await splitter.createDocuments( texts, [{ source: "1" }, { source: "2" }], { chunkHeader: `SOURCE NAME: testing\n-----\n`, appendChunkOverlapHeader: true, } ); const loc = { lines: { from: 1, to: 1 } }; const expectedDocs = [ new Document({ pageContent: "SOURCE NAME: testing\n-----\nfoo", metadata: { source: "1", loc }, }), new Document({ pageContent: "SOURCE NAME: testing\n-----\n(cont'd) bar", metadata: { source: "1", loc }, }), new Document({ pageContent: "SOURCE NAME: testing\n-----\nbaz", metadata: { source: "2", loc }, }), ]; expect(docs).toEqual(expectedDocs); }); });
// eslint-disable-next-line import/no-extraneous-dependencies import { loadPyodide, type PyodideInterface } from "pyodide"; import { Tool, ToolParams } from "@langchain/core/tools"; export type PythonInterpreterToolParams = Parameters<typeof loadPyodide>[0] & ToolParams & { instance: PyodideInterface; }; export class PythonInterpreterTool extends Tool { static lc_name() { return "PythonInterpreterTool"; } name = "python_interpreter"; description = `Evaluates python code in a sandbox environment. The environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. Packages available: ${this.availableDefaultPackages}`; pyodideInstance: PyodideInterface; stdout = ""; stderr = ""; constructor(options: PythonInterpreterToolParams) { super(options); this.pyodideInstance = options.instance; this.pyodideInstance.setStderr({ batched: (text: string) => { this.stderr += text; }, }); this.pyodideInstance.setStdout({ batched: (text: string) => { this.stdout += text; }, }); } async addPackage(packageName: string) { await this.pyodideInstance.loadPackage(packageName); this.description += `, ${packageName}`; } get availableDefaultPackages(): string { return [ "asciitree", "astropy", "atomicwrites", "attrs", "autograd", "awkward-cpp", "bcrypt", "beautifulsoup4", "biopython", "bitarray", "bitstring", "bleach", "bokeh", "boost-histogram", "brotli", "cachetools", "Cartopy", "cbor-diag", "certifi", "cffi", "cffi_example", "cftime", "click", "cligj", "cloudpickle", "cmyt", "colorspacious", "contourpy", "coolprop", "coverage", "cramjam", "cryptography", "cssselect", "cycler", "cytoolz", "decorator", "demes", "deprecation", "distlib", "docutils", "exceptiongroup", "fastparquet", "fiona", "fonttools", "freesasa", "fsspec", "future", "galpy", "gensim", "geopandas", "gmpy2", "gsw", "h5py", "html5lib", "idna", "igraph", "imageio", "iniconfig", "jedi", "Jinja2", "joblib", "jsonschema", "kiwisolver", "lazy-object-proxy", "lazy_loader", "lightgbm", "logbook", "lxml", "MarkupSafe", "matplotlib", "matplotlib-pyodide", "micropip", "mne", "more-itertools", "mpmath", "msgpack", "msprime", "multidict", "munch", "mypy", "netcdf4", "networkx", "newick", "nlopt", "nltk", "nose", "numcodecs", "numpy", "opencv-python", "optlang", "orjson", "packaging", "pandas", "parso", "patsy", "peewee", "Pillow", "pillow_heif", "pkgconfig", "pluggy", "protobuf", "py", "pyb2d", "pyclipper", "pycparser", "pycryptodome", "pydantic", "pyerfa", "Pygments", "pyheif", "pyinstrument", "pynacl", "pyodide-http", "pyodide-tblib", "pyparsing", "pyproj", "pyrsistent", "pyshp", "pytest", "pytest-benchmark", "python-dateutil", "python-magic", "python-sat", "python_solvespace", "pytz", "pywavelets", "pyxel", "pyyaml", "rebound", "reboundx", "regex", "retrying", "RobotRaconteur", "ruamel.yaml", "rust-panic-test", "scikit-image", "scikit-learn", "scipy", "screed", "setuptools", "shapely", "simplejson", "six", "smart_open", "soupsieve", "sourmash", "sparseqr", "sqlalchemy", "statsmodels", "svgwrite", "swiglpk", "sympy", "termcolor", "texttable", "threadpoolctl", "tomli", "tomli-w", "toolz", "tqdm", "traits", "tskit", "typing-extensions", "uncertainties", "unyt", "webencodings", "wordcloud", "wrapt", "xarray", "xgboost", "xlrd", "xyzservices", "yarl", "yt", "zarr", ].join(", "); } static async initialize( options: Omit<PythonInterpreterToolParams, "instance"> ) { const instance = await loadPyodide(options); return new this({ ...options, instance }); } async 
_call(script: string) { this.stdout = ""; this.stderr = ""; await this.pyodideInstance.runPythonAsync(script); return JSON.stringify({ stdout: this.stdout, stderr: this.stderr }); } }
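A hedged usage sketch for the tool defined above: the entrypoint path and the `indexURL` value are assumptions (they depend on how Pyodide is installed in your project), and `initialize` simply forwards its options to `loadPyodide` as shown in the class.

```typescript
import { PythonInterpreterTool } from "langchain/experimental/tools/pyinterpreter";

// Assumption: Pyodide assets are available at this path (adjust for your install).
const interpreter = await PythonInterpreterTool.initialize({
  indexURL: "./node_modules/pyodide",
});

// The tool resets stdout/stderr on every call and returns them as a JSON string.
const output = await interpreter.invoke('print("2 + 2 =", 2 + 2)');
const { stdout, stderr } = JSON.parse(output);
console.log(stdout); // "2 + 2 = 4"
console.log(stderr); // ""
```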
import { TextServiceClient, protos } from "@google-ai/generativelanguage"; import { GoogleAuth } from "google-auth-library"; import { type BaseLLMParams, LLM } from "@langchain/core/language_models/llms"; import { getEnvironmentVariable } from "@langchain/core/utils/env"; /** * @deprecated - Deprecated by Google. Will be removed in 0.3.0 * * Input for Text generation for Google Palm */ export interface GooglePaLMTextInput extends BaseLLMParams { /** * Model Name to use * * Alias for `model` * * Note: The format must follow the pattern - `models/{model}` */ modelName?: string; /** * Model Name to use * * Note: The format must follow the pattern - `models/{model}` */ model?: string; /** * Controls the randomness of the output. * * Values can range from [0.0,1.0], inclusive. A value closer to 1.0 * will produce responses that are more varied and creative, while * a value closer to 0.0 will typically result in more straightforward * responses from the model. * * Note: The default value varies by model */ temperature?: number; /** * Maximum number of tokens to generate in the completion. */ maxOutputTokens?: number; /** * Top-p changes how the model selects tokens for output. * * Tokens are selected from most probable to least until the sum * of their probabilities equals the top-p value. * * For example, if tokens A, B, and C have a probability of * .3, .2, and .1 and the top-p value is .5, then the model will * select either A or B as the next token (using temperature). * * Note: The default value varies by model */ topP?: number; /** * Top-k changes how the model selects tokens for output. * * A top-k of 1 means the selected token is the most probable among * all tokens in the model’s vocabulary (also called greedy decoding), * while a top-k of 3 means that the next token is selected from * among the 3 most probable tokens (using temperature). * * Note: The default value varies by model */ topK?: number; /** * The set of character sequences (up to 5) that will stop output generation. * If specified, the API will stop at the first appearance of a stop * sequence. * * Note: The stop sequence will not be included as part of the response. */ stopSequences?: string[]; /** * A list of unique `SafetySetting` instances for blocking unsafe content. The API will block * any prompts and responses that fail to meet the thresholds set by these settings. If there * is no `SafetySetting` for a given `SafetyCategory` provided in the list, the API will use * the default safety setting for that category. */ safetySettings?: protos.google.ai.generativelanguage.v1beta2.ISafetySetting[]; /** * Google Palm API key to use */ apiKey?: string; } /** * @deprecated - Deprecated by Google. Will be removed in 0.3.0 * * Google Palm 2 Language Model Wrapper to generate texts */ export class GooglePaLM extends LLM implements GooglePaLMTextInput { lc_serializable = true; get lc_secrets(): { [key: string]: string } | undefined { return { apiKey: "GOOGLE_PALM_API_KEY", }; } modelName = "models/text-bison-001"; model = "models/text-bison-001"; temperature?: number; // default value chosen based on model maxOutputTokens?: number; // defaults to 64 topP?: number; // default value chosen based on model topK?: number; // default value chosen based on model stopSequences: string[] = []; safetySettings?: protos.google.ai.generativelanguage.v1beta2.ISafetySetting[]; // default safety setting for that category apiKey?: string; private client: TextServiceClient; constructor(fields?: GooglePaLMTextInput) { super(fields ?? 
{}); this.modelName = fields?.model ?? fields?.modelName ?? this.model; this.model = this.modelName; this.temperature = fields?.temperature ?? this.temperature; if (this.temperature && (this.temperature < 0 || this.temperature > 1)) { throw new Error("`temperature` must be in the range of [0.0,1.0]"); } this.maxOutputTokens = fields?.maxOutputTokens ?? this.maxOutputTokens; if (this.maxOutputTokens && this.maxOutputTokens < 0) { throw new Error("`maxOutputTokens` must be a positive integer"); } this.topP = fields?.topP ?? this.topP; if (this.topP && this.topP < 0) { throw new Error("`topP` must be a positive integer"); } if (this.topP && this.topP > 1) { throw new Error("Google PaLM `topP` must in the range of [0,1]"); } this.topK = fields?.topK ?? this.topK; if (this.topK && this.topK < 0) { throw new Error("`topK` must be a positive integer"); } this.stopSequences = fields?.stopSequences ?? this.stopSequences; this.safetySettings = fields?.safetySettings ?? this.safetySettings; if (this.safetySettings && this.safetySettings.length > 0) { const safetySettingsSet = new Set( this.safetySettings.map((s) => s.category) ); if (safetySettingsSet.size !== this.safetySettings.length) { throw new Error( "The categories in `safetySettings` array must be unique" ); } } this.apiKey = fields?.apiKey ?? getEnvironmentVariable("GOOGLE_PALM_API_KEY"); if (!this.apiKey) { throw new Error( "Please set an API key for Google Palm 2 in the environment variable GOOGLE_PALM_API_KEY or in the `apiKey` field of the GooglePalm constructor" ); } this.client = new TextServiceClient({ authClient: new GoogleAuth().fromAPIKey(this.apiKey), }); } _llmType(): string { return "googlepalm"; } async _call( prompt: string, options: this["ParsedCallOptions"] ): Promise<string> { const res = await this.caller.callWithOptions( { signal: options.signal }, this._generateText.bind(this), prompt ); return res ?? ""; } protected async _generateText( prompt: string ): Promise<string | null | undefined> { const res = await this.client.generateText({ model: this.model, temperature: this.temperature, candidateCount: 1, topK: this.topK, topP: this.topP, maxOutputTokens: this.maxOutputTokens, stopSequences: this.stopSequences, safetySettings: this.safetySettings, prompt: { text: prompt, }, }); return res[0].candidates && res[0].candidates.length > 0 ? res[0].candidates[0].output : undefined; } }
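A hedged usage sketch for the (deprecated) wrapper above. The import path is an assumption; per the constructor, the class reads `GOOGLE_PALM_API_KEY` from the environment when no `apiKey` is passed.

```typescript
import { GooglePaLM } from "@langchain/community/llms/googlepalm";

// Requires GOOGLE_PALM_API_KEY in the environment (or pass `apiKey` explicitly).
const palm = new GooglePaLM({
  model: "models/text-bison-001",
  temperature: 0.7,
  maxOutputTokens: 256,
  stopSequences: ["\n\n"],
});

const completion = await palm.invoke("Summarize what a retriever does in one sentence.");
console.log(completion);
```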
import { z } from "zod"; import { zodToJsonSchema } from "zod-to-json-schema"; import { BaseLanguageModel } from "@langchain/core/language_models/base"; import { ChatPromptTemplate } from "@langchain/core/prompts"; import { Document } from "@langchain/core/documents"; import { Node, Relationship, GraphDocument, } from "../../graphs/graph_document.js"; export const SYSTEM_PROMPT = ` # Knowledge Graph Instructions for GPT-4\n ## 1. Overview\n You are a top-tier algorithm designed for extracting information in structured formats to build a knowledge graph.\n Try to capture as much information from the text as possible without sacrifing accuracy. Do not add any information that is not explicitly mentioned in the text\n" - **Nodes** represent entities and concepts.\n" - The aim is to achieve simplicity and clarity in the knowledge graph, making it\n accessible for a vast audience.\n ## 2. Labeling Nodes\n - **Consistency**: Ensure you use available types for node labels.\n Ensure you use basic or elementary types for node labels.\n - For example, when you identify an entity representing a person, always label it as **'person'**. Avoid using more specific terms like 'mathematician' or 'scientist' - **Node IDs**: Never utilize integers as node IDs. Node IDs should be names or human-readable identifiers found in the text.\n - **Relationships** represent connections between entities or concepts.\n Ensure consistency and generality in relationship types when constructing knowledge graphs. Instead of using specific and momentary types such as 'BECAME_PROFESSOR', use more general and timeless relationship types like 'PROFESSOR'. Make sure to use general and timeless relationship types!\n ## 3. Coreference Resolution\n - **Maintain Entity Consistency**: When extracting entities, it's vital to ensure consistency.\n If an entity, such as "John Doe", is mentioned multiple times in the text but is referred to by different names or pronouns (e.g., "Joe", "he"), always use the most complete identifier for that entity throughout the knowledge graph. In this example, use "John Doe" as the entity ID.\n Remember, the knowledge graph should be coherent and easily understandable, so maintaining consistency in entity references is crucial.\n ## 4. Strict Compliance\n Adhere to the rules strictly. Non-compliance will result in termination. `; const DEFAULT_PROMPT = /* #__PURE__ */ ChatPromptTemplate.fromMessages([ ["system", SYSTEM_PROMPT], [ "human", "Tip: Make sure to answer in the correct format and do not include any explanations. Use the given format to extract information from the following input: {input}", ], ]); interface OptionalEnumFieldProps { enumValues?: string[]; description: string; isRel?: boolean; fieldKwargs?: object; } function toTitleCase(str: string): string { return str .split(" ") .map((w) => w[0].toUpperCase() + w.substring(1).toLowerCase()) .join(""); } function createOptionalEnumType({ enumValues = undefined, description = "", isRel = false, }: OptionalEnumFieldProps): z.ZodTypeAny { let schema; if (enumValues && enumValues.length) { schema = z .enum(enumValues as [string, ...string[]]) .describe( `${description} Available options are: ${enumValues.join(", ")}.` ); } else { const nodeInfo = "Ensure you use basic or elementary types for node labels.\n" + "For example, when you identify an entity representing a person, " + "always label it as **'Person'**. 
Avoid using more specific terms " + "like 'Mathematician' or 'Scientist'"; const relInfo = "Instead of using specific and momentary types such as " + "'BECAME_PROFESSOR', use more general and timeless relationship types like " + "'PROFESSOR'. However, do not sacrifice any accuracy for generality"; const additionalInfo = isRel ? relInfo : nodeInfo; schema = z.string().describe(description + additionalInfo); } return schema; } function createSchema(allowedNodes: string[], allowedRelationships: string[]) { const dynamicGraphSchema = z.object({ nodes: z .array( z.object({ id: z.string(), type: createOptionalEnumType({ enumValues: allowedNodes, description: "The type or label of the node.", }), }) ) .describe("List of nodes"), relationships: z .array( z.object({ sourceNodeId: z.string(), sourceNodeType: createOptionalEnumType({ enumValues: allowedNodes, description: "The source node of the relationship.", }), relationshipType: createOptionalEnumType({ enumValues: allowedRelationships, description: "The type of the relationship.", isRel: true, }), targetNodeId: z.string(), targetNodeType: createOptionalEnumType({ enumValues: allowedNodes, description: "The target node of the relationship.", }), }) ) .describe("List of relationships."), }); return dynamicGraphSchema; } // eslint-disable-next-line @typescript-eslint/no-explicit-any function mapToBaseNode(node: any): Node { return new Node({ id: node.id, type: node.type ? toTitleCase(node.type) : "", }); } // eslint-disable-next-line @typescript-eslint/no-explicit-any function mapToBaseRelationship(relationship: any): Relationship { return new Relationship({ source: new Node({ id: relationship.sourceNodeId, type: relationship.sourceNodeType ? toTitleCase(relationship.sourceNodeType) : "", }), target: new Node({ id: relationship.targetNodeId, type: relationship.targetNodeType ? toTitleCase(relationship.targetNodeType) : "", }), type: relationship.relationshipType.replace(" ", "_").toUpperCase(), }); } export interface LLMGraphTransformerProps { llm: BaseLanguageModel; allowedNodes?: string[]; allowedRelationships?: string[]; prompt?: ChatPromptTemplate; strictMode?: boolean; }
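The transformer class itself is truncated above, so here is a hedged sketch of how such a graph transformer is typically used. The import path and the `convertToGraphDocuments` method name are assumptions based on the exported `LLMGraphTransformerProps` interface and the transformer's documented behavior; the allowed node and relationship lists are placeholders.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm";

const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// Constrain extraction to a small, general vocabulary, per the system prompt's guidance.
const transformer = new LLMGraphTransformer({
  llm,
  allowedNodes: ["Person", "Organization"],
  allowedRelationships: ["WORKS_AT"],
});

const graphDocuments = await transformer.convertToGraphDocuments([
  new Document({ pageContent: "Marie Curie worked at the University of Paris." }),
]);

console.log(graphDocuments[0].nodes, graphDocuments[0].relationships);
```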
import { test } from "@jest/globals"; import { Document } from "@langchain/core/documents"; import { OpenAIEmbeddings, OpenAI } from "@langchain/openai"; import { AttributeInfo } from "langchain/chains/query_constructor"; import { FunctionalTranslator, SelfQueryRetriever, } from "langchain/retrievers/self_query"; import { HNSWLib } from "../../vectorstores/hnswlib.js"; test("HNSWLib Store Self Query Retriever Test", async () => { const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }), ]; const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", }, ]; const embeddings = new OpenAIEmbeddings(); const llm = new OpenAI({ modelName: "gpt-3.5-turbo", temperature: 0.01, }); const documentContents = "Brief summary of a movie"; const vectorStore = await HNSWLib.fromDocuments(docs, embeddings); const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, structuredQueryTranslator: new FunctionalTranslator(), }); const query1 = await selfQueryRetriever.getRelevantDocuments( "Which movies are less than 90 minutes?" ); // console.log(query1); expect(query1.length).toEqual(0); const query2 = await selfQueryRetriever.getRelevantDocuments( "Which movies are rated higher than 8.5?" ); // console.log(query2); expect(query2.length).toEqual(2); const query3 = await selfQueryRetriever.getRelevantDocuments( "Which movies are directed by Greta Gerwig?" 
); // console.log(query3); expect(query3.length).toEqual(1); }); test("HNSWLib shouldn't throw an error if a filter can't be generated, but should return no items", async () => { const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }), ]; const attributeInfo = [ { name: "sectionNumber", description: "The section number of the rule", type: "number", }, { name: "sectionTitle", description: "The section title of the rule", type: "string", }, { name: "sectionScope", description: "The section scope of the rule", type: "string", }, { name: "codeRule", description: "The code rule of the rule", type: "string", }, ]; const embeddings = new OpenAIEmbeddings(); const llm = new OpenAI({ modelName: "gpt-3.5-turbo", temperature: 0.01, }); const documentContents = "Brief summary of a movie"; const vectorStore = await HNSWLib.fromDocuments(docs, embeddings); const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, structuredQueryTranslator: new FunctionalTranslator(), }); const query1 = await selfQueryRetriever.getRelevantDocuments( "Which sectionTitle talks about pools?" ); // console.log(query1); expect(query1.length).toEqual(0); });
import Metal from "@getmetal/metal-sdk"; import { BaseRetriever, BaseRetrieverInput } from "@langchain/core/retrievers"; import { Document } from "@langchain/core/documents"; /** * Interface for the fields required during the initialization of a * `MetalRetriever` instance. It extends the `BaseRetrieverInput` * interface and adds a `client` field of type `Metal`. */ export interface MetalRetrieverFields extends BaseRetrieverInput { client: Metal; } /** * Interface to represent a response item from the Metal service. It * contains a `text` field and an index signature to allow for additional * unknown properties. */ interface ResponseItem { text: string; [key: string]: unknown; } /** * Class used to interact with the Metal service, a managed retrieval & * memory platform. It allows you to index your data into Metal and run * semantic search and retrieval on it. It extends the `BaseRetriever` * class and requires a `Metal` instance and a dictionary of parameters to * pass to the Metal API during its initialization. * @example * ```typescript * const retriever = new MetalRetriever({ * client: new Metal( * process.env.METAL_API_KEY, * process.env.METAL_CLIENT_ID, * process.env.METAL_INDEX_ID, * ), * }); * const docs = await retriever.getRelevantDocuments("hello"); * ``` */ export class MetalRetriever extends BaseRetriever { static lc_name() { return "MetalRetriever"; } lc_namespace = ["langchain", "retrievers", "metal"]; private client: Metal; constructor(fields: MetalRetrieverFields) { super(fields); this.client = fields.client; } async _getRelevantDocuments(query: string): Promise<Document[]> { const res = await this.client.search({ text: query }); const items = ("data" in res ? res.data : res) as ResponseItem[]; return items.map( ({ text, metadata }) => new Document({ pageContent: text, metadata: metadata as Record<string, unknown>, }) ); } }
import { MemorySearchPayload, MemorySearchResult, NotFoundError, ZepClient, } from "@getzep/zep-js"; import { BaseRetriever, BaseRetrieverInput } from "@langchain/core/retrievers"; import { Document } from "@langchain/core/documents"; /** * Configuration interface for the ZepRetriever class. Extends the * BaseRetrieverInput interface. * * @argument {string} sessionId - The ID of the Zep session. * @argument {string} url - The URL of the Zep API. * @argument {number} [topK] - The number of results to return. * @argument {string} [apiKey] - The API key for the Zep API. * @argument [searchScope] [searchScope] - The scope of the search: "messages" or "summary". * @argument [searchType] [searchType] - The type of search to perform: "similarity" or "mmr". * @argument {number} [mmrLambda] - The lambda value for the MMR search. * @argument {Record<string, unknown>} [filter] - The metadata filter to apply to the search. */ export interface ZepRetrieverConfig extends BaseRetrieverInput { sessionId: string; url: string; topK?: number; apiKey?: string; searchScope?: "messages" | "summary"; searchType?: "similarity" | "mmr"; mmrLambda?: number; filter?: Record<string, unknown>; } /** * Class for retrieving information from a Zep long-term memory store. * Extends the BaseRetriever class. * @example * ```typescript * const retriever = new ZepRetriever({ * url: "http: * sessionId: "session_exampleUUID", * topK: 3, * }); * const query = "Can I drive red cars in France?"; * const docs = await retriever.getRelevantDocuments(query); * ``` */ export class ZepRetriever extends BaseRetriever { static lc_name() { return "ZepRetriever"; } lc_namespace = ["langchain", "retrievers", "zep"]; get lc_secrets(): { [key: string]: string } | undefined { return { apiKey: "ZEP_API_KEY", url: "ZEP_API_URL", }; } get lc_aliases(): { [key: string]: string } | undefined { return { apiKey: "api_key" }; } zepClientPromise: Promise<ZepClient>; private sessionId: string; private topK?: number; private searchScope?: "messages" | "summary"; private searchType?: "similarity" | "mmr"; private mmrLambda?: number; private filter?: Record<string, unknown>; constructor(config: ZepRetrieverConfig) { super(config); this.sessionId = config.sessionId; this.topK = config.topK; this.searchScope = config.searchScope; this.searchType = config.searchType; this.mmrLambda = config.mmrLambda; this.filter = config.filter; this.zepClientPromise = ZepClient.init(config.url, config.apiKey); } /** * Converts an array of message search results to an array of Document objects. * @param {MemorySearchResult[]} results - The array of search results. * @returns {Document[]} An array of Document objects representing the search results. */ private searchMessageResultToDoc(results: MemorySearchResult[]): Document[] { return results .filter((r) => r.message) .map( ({ message: { content, metadata: messageMetadata } = {}, dist, ...rest }) => new Document({ pageContent: content ?? "", metadata: { score: dist, ...messageMetadata, ...rest }, }) ); } /** * Converts an array of summary search results to an array of Document objects. * @param {MemorySearchResult[]} results - The array of search results. * @returns {Document[]} An array of Document objects representing the search results. */ private searchSummaryResultToDoc(results: MemorySearchResult[]): Document[] { return results .filter((r) => r.summary) .map( ({ summary: { content, metadata: summaryMetadata } = {}, dist, ...rest }) => new Document({ pageContent: content ?? 
"", metadata: { score: dist, ...summaryMetadata, ...rest }, }) ); } /** * Retrieves the relevant documents based on the given query. * @param {string} query - The query string. * @returns {Promise<Document[]>} A promise that resolves to an array of relevant Document objects. */ async _getRelevantDocuments(query: string): Promise<Document[]> { const payload: MemorySearchPayload = { text: query, metadata: this.filter, search_scope: this.searchScope, search_type: this.searchType, mmr_lambda: this.mmrLambda, }; // Wait for ZepClient to be initialized const zepClient = await this.zepClientPromise; if (!zepClient) { throw new Error("ZepClient is not initialized"); } try { const results: MemorySearchResult[] = await zepClient.memory.searchMemory( this.sessionId, payload, this.topK ); return this.searchScope === "summary" ? this.searchSummaryResultToDoc(results) : this.searchMessageResultToDoc(results); } catch (error) { // eslint-disable-next-line no-instanceof/no-instanceof if (error instanceof NotFoundError) { return Promise.resolve([]); // Return an empty Document array } // If it's not a NotFoundError, throw the error again throw error; } } }
import { BaseRetriever, type BaseRetrieverInput, } from "@langchain/core/retrievers"; import { Document } from "@langchain/core/documents"; import { AsyncCaller, AsyncCallerParams, } from "@langchain/core/utils/async_caller"; /** * Interface for the arguments required to create a new instance of * DataberryRetriever. */ export interface DataberryRetrieverArgs extends AsyncCallerParams, BaseRetrieverInput { datastoreUrl: string; topK?: number; apiKey?: string; } /** * Interface for the structure of a Berry object returned by the Databerry * API. */ interface Berry { text: string; score: number; source?: string; [key: string]: unknown; } /** * A specific implementation of a document retriever for the Databerry * API. It extends the BaseRetriever class, which is an abstract base * class for a document retrieval system in LangChain. */ /** @deprecated Use "langchain/retrievers/chaindesk" instead */ export class DataberryRetriever extends BaseRetriever { static lc_name() { return "DataberryRetriever"; } lc_namespace = ["langchain", "retrievers", "databerry"]; get lc_secrets() { return { apiKey: "DATABERRY_API_KEY" }; } get lc_aliases() { return { apiKey: "api_key" }; } caller: AsyncCaller; datastoreUrl: string; topK?: number; apiKey?: string; constructor(fields: DataberryRetrieverArgs) { super(fields); const { datastoreUrl, apiKey, topK, ...rest } = fields; this.caller = new AsyncCaller(rest); this.datastoreUrl = datastoreUrl; this.apiKey = apiKey; this.topK = topK; } async _getRelevantDocuments(query: string): Promise<Document[]> { const r = await this.caller.call(fetch, this.datastoreUrl, { method: "POST", body: JSON.stringify({ query, ...(this.topK ? { topK: this.topK } : {}), }), headers: { "Content-Type": "application/json", ...(this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}), }, }); const { results } = (await r.json()) as { results: Berry[] }; return results.map( ({ text, score, source, ...rest }) => new Document({ pageContent: text, metadata: { score, source, ...rest, }, }) ); } }
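A hedged usage sketch for the deprecated retriever above (new code should use the Chaindesk retriever instead). The import path is an assumption and the datastore URL is a placeholder.

```typescript
import { DataberryRetriever } from "@langchain/community/retrievers/databerry";

const retriever = new DataberryRetriever({
  datastoreUrl: "https://api.databerry.ai/query/<your-datastore-id>", // placeholder
  apiKey: process.env.DATABERRY_API_KEY,
  topK: 4,
});

// Retrievers are runnables, so invoke() returns the relevant documents.
const docs = await retriever.invoke("What is our refund policy?");
console.log(docs.map((d) => d.metadata.score));
```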
import { BaseRetriever, type BaseRetrieverInput, } from "@langchain/core/retrievers"; import { Document } from "@langchain/core/documents"; import { AsyncCaller, type AsyncCallerParams, } from "@langchain/core/utils/async_caller"; export interface ChaindeskRetrieverArgs extends AsyncCallerParams, BaseRetrieverInput { datastoreId: string; topK?: number; filter?: Record<string, unknown>; apiKey?: string; } interface Berry { text: string; score: number; source?: string; [key: string]: unknown; } /** * @example * ```typescript * const retriever = new ChaindeskRetriever({ * datastoreId: "DATASTORE_ID", * apiKey: "CHAINDESK_API_KEY", * topK: 8, * }); * const docs = await retriever.getRelevantDocuments("hello"); * ``` */ export class ChaindeskRetriever extends BaseRetriever { static lc_name() { return "ChaindeskRetriever"; } lc_namespace = ["langchain", "retrievers", "chaindesk"]; caller: AsyncCaller; datastoreId: string; topK?: number; filter?: Record<string, unknown>; apiKey?: string; constructor({ datastoreId, apiKey, topK, filter, ...rest }: ChaindeskRetrieverArgs) { super(); this.caller = new AsyncCaller(rest); this.datastoreId = datastoreId; this.apiKey = apiKey; this.topK = topK; this.filter = filter; } async getRelevantDocuments(query: string): Promise<Document[]> { const r = await this.caller.call( fetch, `https://app.chaindesk.ai/api/datastores/${this.datastoreId}/query`, { method: "POST", body: JSON.stringify({ query, ...(this.topK ? { topK: this.topK } : {}), ...(this.filter ? { filters: this.filter } : {}), }), headers: { "Content-Type": "application/json", ...(this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}), }, } ); const { results } = (await r.json()) as { results: Berry[] }; return results.map( ({ text, score, source, ...rest }) => new Document({ pageContent: text, metadata: { score, source, ...rest, }, }) ); } }
import { ZepClient } from "@getzep/zep-cloud"; import { SearchScope, SearchType, MemorySearchResult, NotFoundError, } from "@getzep/zep-cloud/api"; import { BaseRetriever, BaseRetrieverInput } from "@langchain/core/retrievers"; import { Document } from "@langchain/core/documents"; /** * Configuration interface for the ZepRetriever class. Extends the * BaseRetrieverInput interface. * * @argument {string} sessionId - The ID of the Zep session. * @argument {string} [apiKey] - The Zep Cloud Project Key. * @argument {number} [topK] - The number of results to return. * @argument [searchScope] [searchScope] - The scope of the search: "messages" or "summary". * @argument [searchType] [searchType] - The type of search to perform: "similarity" or "mmr". * @argument {number} [mmrLambda] - The lambda value for the MMR search. * @argument {Record<string, unknown>} [filter] - The metadata filter to apply to the search. */ export interface ZepCloudRetrieverConfig extends BaseRetrieverInput { sessionId: string; topK?: number; apiKey: string; searchScope?: SearchScope; searchType?: SearchType; mmrLambda?: number; filter?: Record<string, unknown>; } /** * Class for retrieving information from a Zep Cloud long-term memory store. * Extends the BaseRetriever class. * @example * ```typescript * const retriever = new ZepCloudRetriever({ * apiKey: "<zep cloud project api key>", * sessionId: "session_exampleUUID", * topK: 3, * }); * const query = "Can I drive red cars in France?"; * const docs = await retriever.getRelevantDocuments(query); * ``` */ export class ZepCloudRetriever extends BaseRetriever { static lc_name() { return "ZepRetriever"; } lc_namespace = ["langchain", "retrievers", "zep"]; get lc_secrets(): { [key: string]: string } | undefined { return { apiKey: "ZEP_API_KEY", }; } get lc_aliases(): { [key: string]: string } | undefined { return { apiKey: "api_key" }; } client: ZepClient; private sessionId: string; private topK?: number; private searchScope?: SearchScope; private searchType?: SearchType; private mmrLambda?: number; private filter?: Record<string, unknown>; constructor(config: ZepCloudRetrieverConfig) { super(config); this.sessionId = config.sessionId; this.topK = config.topK; this.searchScope = config.searchScope; this.searchType = config.searchType; this.mmrLambda = config.mmrLambda; this.filter = config.filter; this.client = new ZepClient({ apiKey: config.apiKey }); } /** * Converts an array of message search results to an array of Document objects. * @param {MemorySearchResult[]} results - The array of search results. * @returns {Document[]} An array of Document objects representing the search results. */ private searchMessageResultToDoc(results: MemorySearchResult[]): Document[] { return results .filter((r) => r.message) .map( ({ message: { content, metadata: messageMetadata } = {}, score, ...rest }) => new Document({ pageContent: content ?? "", metadata: { score, ...messageMetadata, ...rest }, }) ); } /** * Converts an array of summary search results to an array of Document objects. * @param {MemorySearchResult[]} results - The array of search results. * @returns {Document[]} An array of Document objects representing the search results. */ private searchSummaryResultToDoc(results: MemorySearchResult[]): Document[] { return results .filter((r) => r.summary) .map( ({ summary: { content, metadata: summaryMetadata } = {}, score, ...rest }) => new Document({ pageContent: content ?? 
"", metadata: { score, ...summaryMetadata, ...rest }, }) ); } /** * Retrieves the relevant documents based on the given query. * @param {string} query - The query string. * @returns {Promise<Document[]>} A promise that resolves to an array of relevant Document objects. */ async _getRelevantDocuments(query: string): Promise<Document[]> { try { const results: MemorySearchResult[] = await this.client.memory.search( this.sessionId, { text: query, metadata: this.filter, searchScope: this.searchScope, searchType: this.searchType, mmrLambda: this.mmrLambda, limit: this.topK, } ); return this.searchScope === "summary" ? this.searchSummaryResultToDoc(results) : this.searchMessageResultToDoc(results); } catch (error) { // eslint-disable-next-line no-instanceof/no-instanceof if (error instanceof NotFoundError) { return Promise.resolve([]); // Return an empty Document array } // If it's not a NotFoundError, throw the error again throw error; } } }
146611
import { BaseRetriever, type BaseRetrieverInput, } from "@langchain/core/retrievers"; import { AsyncCaller, type AsyncCallerParams, } from "@langchain/core/utils/async_caller"; import type { DocumentInterface } from "@langchain/core/documents"; /** * Type for the authentication method used by the RemoteRetriever. It can * either be false (no authentication) or an object with a bearer token. */ export type RemoteRetrieverAuth = false | { bearer: string }; /** * Type for the JSON response values from the remote server. */ // eslint-disable-next-line @typescript-eslint/no-explicit-any export type RemoteRetrieverValues = Record<string, any>; /** * Interface for the parameters required to initialize a RemoteRetriever * instance. */ export interface RemoteRetrieverParams extends AsyncCallerParams, BaseRetrieverInput { /** * The URL of the remote retriever server */ url: string; /** * The authentication method to use, currently implemented is * - false: no authentication * - { bearer: string }: Bearer token authentication */ auth: RemoteRetrieverAuth; } /** * Abstract class for interacting with a remote server to retrieve * relevant documents based on a given query. */ export abstract class RemoteRetriever extends BaseRetriever implements RemoteRetrieverParams { get lc_secrets(): { [key: string]: string } | undefined { return { "auth.bearer": "REMOTE_RETRIEVER_AUTH_BEARER", }; } url: string; auth: RemoteRetrieverAuth; headers: Record<string, string>; asyncCaller: AsyncCaller; constructor(fields: RemoteRetrieverParams) { super(fields); const { url, auth, ...rest } = fields; this.url = url; this.auth = auth; this.headers = { Accept: "application/json", "Content-Type": "application/json", ...(this.auth && this.auth.bearer ? { Authorization: `Bearer ${this.auth.bearer}` } : {}), }; this.asyncCaller = new AsyncCaller(rest); } /** * Abstract method that should be implemented by subclasses to create the * JSON body of the request based on the given query. * @param query The query based on which the JSON body of the request is created. * @returns The JSON body of the request. */ abstract createJsonBody(query: string): RemoteRetrieverValues; /** * Abstract method that should be implemented by subclasses to process the * JSON response from the server and convert it into an array of Document * instances. * @param json The JSON response from the server. * @returns An array of Document instances. */ abstract processJsonResponse( json: RemoteRetrieverValues ): DocumentInterface[]; async _getRelevantDocuments(query: string): Promise<DocumentInterface[]> { const body = this.createJsonBody(query); const response = await this.asyncCaller.call(() => fetch(this.url, { method: "POST", headers: this.headers, body: JSON.stringify(body), }) ); if (!response.ok) { throw new Error( `Failed to retrieve documents from ${this.url}: ${response.status} ${response.statusText}` ); } const json = await response.json(); return this.processJsonResponse(json); } }
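Because `RemoteRetriever` is abstract, a concrete implementation only has to describe the request body and how to map the JSON response to documents. A minimal sketch (the endpoint URL, request shape, and response shape are all hypothetical; `RemoteRetriever` and `RemoteRetrieverValues` are assumed to be in scope from the module above):

```typescript
import { Document, type DocumentInterface } from "@langchain/core/documents";

class MyRemoteRetriever extends RemoteRetriever {
  lc_namespace = ["myapp", "retrievers", "remote"];

  createJsonBody(query: string): RemoteRetrieverValues {
    // Hypothetical request shape expected by the remote server.
    return { query, top_k: 4 };
  }

  processJsonResponse(json: RemoteRetrieverValues): DocumentInterface[] {
    // Hypothetical response shape: { results: [{ text, metadata }] }.
    return (json.results ?? []).map(
      (r: { text: string; metadata?: Record<string, unknown> }) =>
        new Document({ pageContent: r.text, metadata: r.metadata ?? {} })
    );
  }
}

const retriever = new MyRemoteRetriever({
  url: "https://retriever.example.com/query", // hypothetical endpoint
  auth: { bearer: "MY_TOKEN" },               // or `false` for no authentication
});
const docs = await retriever.invoke("hello");
```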
146765
import { Zep, ZepClient } from "@getzep/zep-cloud"; import { Memory, NotFoundError } from "@getzep/zep-cloud/api"; import { InputValues, OutputValues, MemoryVariables, getInputValue, getOutputValue, } from "@langchain/core/memory"; import { AIMessage, BaseMessage, ChatMessage, getBufferString, HumanMessage, SystemMessage, } from "@langchain/core/messages"; import { BaseChatMemory, BaseChatMemoryInput } from "./chat_memory.js"; // Extract Summary and Facts from Zep memory, if present and compose a system prompt export const zepMemoryContextToSystemPrompt = (memory: Memory) => { let systemPrompt = ""; // Extract conversation facts, if present if (memory.facts) { systemPrompt += memory.facts.join("\n"); } // Extract summary, if present if (memory.summary && memory.summary?.content) { systemPrompt += memory.summary.content; } return systemPrompt; }; // We are condensing the Zep context into a human message in order to satisfy // some models' input requirements and allow more flexibility for devs. // (for example, Anthropic only supports one system message, and does not support multiple user messages in a row) export const condenseZepMemoryIntoHumanMessage = (memory: Memory) => { const systemPrompt = zepMemoryContextToSystemPrompt(memory); let concatMessages = ""; // Add message history to the prompt, if present if (memory.messages) { concatMessages = memory.messages .map((msg) => `${msg.role ?? msg.roleType}: ${msg.content}`) .join("\n"); } return new HumanMessage(`${systemPrompt}\n${concatMessages}`); }; // Convert Zep Memory to a list of BaseMessages export const zepMemoryToMessages = (memory: Memory) => { const systemPrompt = zepMemoryContextToSystemPrompt(memory); let messages: BaseMessage[] = systemPrompt ? [new SystemMessage(systemPrompt)] : []; if (memory && memory.messages) { messages = messages.concat( memory.messages .filter((m) => m.content) .map((message) => { const { content, role, roleType } = message; const messageContent = content as string; if (roleType === "user") { return new HumanMessage(messageContent); } else if (role === "assistant") { return new AIMessage(messageContent); } else { // default to generic ChatMessage return new ChatMessage( messageContent, (roleType ?? role) as string ); } }) ); } return messages; }; /** * Interface defining the structure of the input data for the ZepMemory * class. It includes properties like humanPrefix, aiPrefix, memoryKey, memoryType * sessionId, and apiKey. */ export interface ZepCloudMemoryInput extends BaseChatMemoryInput { humanPrefix?: string; aiPrefix?: string; memoryKey?: string; sessionId: string; apiKey: string; memoryType?: Zep.MemoryType; // Whether to return separate messages for chat history with a SystemMessage containing (facts and summary) or return a single HumanMessage with the entire memory context. // Defaults to false (return a single HumanMessage) in order to allow more flexibility with different models. separateMessages?: boolean; } /** * Class used to manage the memory of a chat session, including loading * and saving the chat history, and clearing the memory when needed. It * uses the ZepClient to interact with the Zep service for managing the * chat session's memory. 
* @example * ```typescript * const sessionId = randomUUID(); * * // Initialize ZepCloudMemory with session ID and API key * const memory = new ZepCloudMemory({ * sessionId, * apiKey: "<zep api key>", * }); * * // Create a ChatOpenAI model instance with specific parameters * const model = new ChatOpenAI({ * modelName: "gpt-3.5-turbo", * temperature: 0, * }); * * // Create a ConversationChain with the model and memory * const chain = new ConversationChain({ llm: model, memory }); * * // Example of calling the chain with an input * const res1 = await chain.call({ input: "Hi! I'm Jim." }); * console.log({ res1 }); * * // Follow-up call to the chain to demonstrate memory usage * const res2 = await chain.call({ input: "What did I just say my name was?" }); * console.log({ res2 }); * * // Output the session ID and the current state of memory * console.log("Session ID: ", sessionId); * console.log("Memory: ", await memory.loadMemoryVariables({})); * * ``` */ export class ZepCloudMemory extends BaseChatMemory implements ZepCloudMemoryInput { humanPrefix = "Human"; aiPrefix = "AI"; memoryKey = "history"; apiKey: string; sessionId: string; zepClient: ZepClient; memoryType: Zep.MemoryType; separateMessages: boolean; constructor(fields: ZepCloudMemoryInput) { super({ returnMessages: fields?.returnMessages ?? false, inputKey: fields?.inputKey, outputKey: fields?.outputKey, }); this.humanPrefix = fields.humanPrefix ?? this.humanPrefix; this.aiPrefix = fields.aiPrefix ?? this.aiPrefix; this.memoryKey = fields.memoryKey ?? this.memoryKey; this.apiKey = fields.apiKey; this.sessionId = fields.sessionId; this.memoryType = fields.memoryType ?? "perpetual"; this.separateMessages = fields.separateMessages ?? false; this.zepClient = new ZepClient({ apiKey: this.apiKey, }); } get memoryKeys() { return [this.memoryKey]; } /** * Method that retrieves the chat history from the Zep service and formats * it into a list of messages. * @param values Input values for the method. * @returns Promise that resolves with the chat history formatted into a list of messages. */ async loadMemoryVariables(values: InputValues): Promise<MemoryVariables> { const memoryType = values.memoryType ?? "perpetual"; let memory: Memory | null = null; try { memory = await this.zepClient.memory.get(this.sessionId, { memoryType, }); } catch (error) { // eslint-disable-next-line no-instanceof/no-instanceof if (error instanceof NotFoundError) { return this.returnMessages ? { [this.memoryKey]: [] } : { [this.memoryKey]: "" }; } throw error; } if (this.returnMessages) { return { [this.memoryKey]: this.separateMessages ? zepMemoryToMessages(memory) : [condenseZepMemoryIntoHumanMessage(memory)], }; } return { [this.memoryKey]: this.separateMessages ? getBufferString( zepMemoryToMessages(memory), this.humanPrefix, this.aiPrefix ) : condenseZepMemoryIntoHumanMessage(memory).content, }; } /** * Method that saves the input and output messages to the Zep service. * @param inputValues Input messages to be saved. * @param outputValues Output messages to be saved. * @returns Promise that resolves when the messages have been saved. 
*/ async saveContext( inputValues: InputValues, outputValues: OutputValues ): Promise<void> { const input = getInputValue(inputValues, this.inputKey); const output = getOutputValue(outputValues, this.outputKey); // Add the new memory to the session using the ZepClient if (this.sessionId) { try { await this.zepClient.memory.add(this.sessionId, { messages: [ { role: this.humanPrefix, roleType: "user", content: `${input}`, }, { role: this.aiPrefix, roleType: "assistant", content: `${output}`, }, ], }); } catch (error) { console.error("Error adding memory: ", error); } } // Call the superclass's saveContext method await super.saveContext(inputValues, outputValues); } /** * Method that deletes the chat history from the Zep service. * @returns Promise that resolves when the chat history has been deleted. */ async clear(): Promise<void> { try { await this.zepClient.memory.delete(this.sessionId); } catch (error) { console.error("Error deleting session: ", error); } // Clear the superclass's chat history await super.clear(); } }
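As a complement to the `ConversationChain` example above, a short sketch of using the memory object directly, with `returnMessages` and `separateMessages` enabled so the Zep facts/summary come back as a `SystemMessage` followed by the per-message history (the `@langchain/community/memory/zep_cloud` entrypoint, API key, and session ID are assumptions/placeholders):

```typescript
import { ZepCloudMemory } from "@langchain/community/memory/zep_cloud";

const memory = new ZepCloudMemory({
  sessionId: "session_exampleUUID",      // placeholder
  apiKey: "<zep cloud project api key>", // placeholder
  returnMessages: true,   // return BaseMessage[] rather than a buffer string
  separateMessages: true, // SystemMessage (facts + summary) plus individual messages
});

// Persist one exchange to Zep...
await memory.saveContext(
  { input: "Hi! I'm Jim." },
  { output: "Nice to meet you, Jim!" }
);

// ...then read the composed history back under the default "history" key.
const { history } = await memory.loadMemoryVariables({});
console.log(history);
```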
146837
import { test } from "@jest/globals"; import * as fs from "node:fs/promises"; import { fileURLToPath } from "node:url"; import * as path from "node:path"; import { AIMessage, HumanMessage } from "@langchain/core/messages"; import { PromptTemplate } from "@langchain/core/prompts"; import { BytesOutputParser, StringOutputParser, } from "@langchain/core/output_parsers"; import { ChatOllama } from "../ollama.js"; test.skip("test call", async () => { const ollama = new ChatOllama({}); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const result = await ollama.invoke( "What is a good name for a company that makes colorful socks?" ); // console.log({ result }); }); test.skip("test call with callback", async () => { const ollama = new ChatOllama({ baseUrl: "http://localhost:11434", }); const tokens: string[] = []; const result = await ollama.invoke( "What is a good name for a company that makes colorful socks?", { callbacks: [ { handleLLMNewToken(token: string) { tokens.push(token); }, }, ], } ); expect(tokens.length).toBeGreaterThan(1); expect(result).toEqual(tokens.join("")); }); test.skip("test streaming call", async () => { const ollama = new ChatOllama({ baseUrl: "http://localhost:11434", }); const stream = await ollama.stream( `Translate "I love programming" into German.` ); const chunks = []; for await (const chunk of stream) { chunks.push(chunk); } expect(chunks.length).toBeGreaterThan(1); }); test.skip("should abort the request", async () => { const ollama = new ChatOllama({ baseUrl: "http://localhost:11434", }); const controller = new AbortController(); await expect(() => { const ret = ollama.invoke("Respond with an extremely verbose response", { signal: controller.signal, }); controller.abort(); return ret; }).rejects.toThrow("This operation was aborted"); }); test.skip("Test multiple messages", async () => { const model = new ChatOllama({ baseUrl: "http://localhost:11434" }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke([ new HumanMessage({ content: "My name is Jonas" }), ]); // console.log({ res }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res2 = await model.invoke([ new HumanMessage("My name is Jonas"), new AIMessage( "Hello Jonas! It's nice to meet you. Is there anything I can help you with?" ), new HumanMessage("What did I say my name was?"), ]); // console.log({ res2 }); }); test.skip("should stream through with a bytes output parser", async () => { const TEMPLATE = `You are a pirate named Patchy. All responses must be extremely verbose and in pirate dialect. User: {input} AI:`; // Infer the input variables from the template const prompt = PromptTemplate.fromTemplate(TEMPLATE); const ollama = new ChatOllama({ model: "llama2", baseUrl: "http://127.0.0.1:11434", }); const outputParser = new BytesOutputParser(); const chain = prompt.pipe(ollama).pipe(outputParser); const stream = await chain.stream({ input: `Translate "I love programming" into German.`, }); const chunks = []; for await (const chunk of stream) { chunks.push(chunk); } // console.log(chunks.join("")); expect(chunks.length).toBeGreaterThan(1); }); test.skip("JSON mode", async () => { const TEMPLATE = `You are a pirate named Patchy. All responses must be in pirate dialect and in JSON format, with a property named "response" followed by the value. 
User: {input} AI:`; // Infer the input variables from the template const prompt = PromptTemplate.fromTemplate(TEMPLATE); const ollama = new ChatOllama({ model: "llama2", baseUrl: "http://127.0.0.1:11434", format: "json", }); const outputParser = new StringOutputParser(); const chain = prompt.pipe(ollama).pipe(outputParser); const res = await chain.invoke({ input: `Translate "I love programming" into German.`, }); expect(JSON.parse(res).response).toBeDefined(); }); test.skip("Test ChatOllama with an image", async () => { const __filename = fileURLToPath(import.meta.url); const __dirname = path.dirname(__filename); const imageData = await fs.readFile(path.join(__dirname, "/data/hotdog.jpg")); const chat = new ChatOllama({ model: "llava", baseUrl: "http://127.0.0.1:11434", }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await chat.invoke([ new HumanMessage({ content: [ { type: "text", text: "What is in this image?", }, { type: "image_url", image_url: `data:image/jpeg;base64,${imageData.toString("base64")}`, }, ], }), ]); // console.log({ res }); }); test.skip("test max tokens (numPredict)", async () => { const ollama = new ChatOllama({ numPredict: 10, }).pipe(new StringOutputParser()); const stream = await ollama.stream( "explain quantum physics to me in as many words as possible" ); let numTokens = 0; let response = ""; for await (const s of stream) { numTokens += 1; response += s; } // console.log({ numTokens, response }); // Ollama doesn't always stream back the exact number of tokens, so we // check for a number which is slightly above the `numPredict`. expect(numTokens).toBeLessThanOrEqual(12); });
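These tests import `ChatOllama` from a relative path inside the package; in application code the same model is typically constructed from the published entrypoint. A minimal streaming sketch (assumes a local Ollama server with the `llama2` model pulled):

```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // default local Ollama endpoint
  model: "llama2",
});

const chain = model.pipe(new StringOutputParser());

// Stream the response chunk by chunk.
const stream = await chain.stream(`Translate "I love programming" into German.`);
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```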
146888
import { Storage, File } from "@google-cloud/storage"; import { Document } from "@langchain/core/documents"; import { Docstore } from "langchain/stores/doc/base"; /** * Interface that defines the configuration for the * GoogleCloudStorageDocstore. It includes the bucket name and an optional * prefix. */ export interface GoogleCloudStorageDocstoreConfiguration { /** The identifier for the GCS bucket */ bucket: string; /** * An optional prefix to prepend to each object name. * Often used to create a pseudo-hierarchy. */ prefix?: string; } /** * Class that provides an interface for interacting with Google Cloud * Storage (GCS) as a document store. It extends the Docstore class and * implements methods to search, add, and add a document to the GCS * bucket. */ export class GoogleCloudStorageDocstore extends Docstore { bucket: string; prefix = ""; storage: Storage; constructor(config: GoogleCloudStorageDocstoreConfiguration) { super(); this.bucket = config.bucket; this.prefix = config.prefix ?? this.prefix; this.storage = new Storage(); } /** * Searches for a document in the GCS bucket and returns it as a Document * instance. * @param search The name of the document to search for in the GCS bucket * @returns A Promise that resolves to a Document instance representing the found document */ async search(search: string): Promise<Document> { const file = this.getFile(search); const [fileMetadata] = await file.getMetadata(); const metadata = fileMetadata?.metadata; const [dataBuffer] = await file.download(); const pageContent = dataBuffer.toString(); const ret = new Document({ pageContent, metadata, }); return ret; } /** * Adds multiple documents to the GCS bucket. * @param texts An object where each key is the name of a document and the value is the Document instance to be added * @returns A Promise that resolves when all documents have been added */ async add(texts: Record<string, Document>): Promise<void> { await Promise.all( Object.keys(texts).map((key) => this.addDocument(key, texts[key])) ); } /** * Adds a single document to the GCS bucket. * @param name The name of the document to be added * @param document The Document instance to be added * @returns A Promise that resolves when the document has been added */ async addDocument(name: string, document: Document): Promise<void> { const file = this.getFile(name); await file.save(document.pageContent); await file.setMetadata({ metadata: document.metadata }); } /** * Gets a file from the GCS bucket. * @param name The name of the file to get from the GCS bucket * @returns A File instance representing the fetched file */ private getFile(name: string): File { const filename = this.prefix + name; const file = this.storage.bucket(this.bucket).file(filename); return file; } }
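A short sketch of storing and retrieving a document (assumes default Google Cloud credentials are available in the environment; the bucket name is a placeholder, and `GoogleCloudStorageDocstore` is assumed to be in scope from the module above):

```typescript
import { Document } from "@langchain/core/documents";

const docstore = new GoogleCloudStorageDocstore({
  bucket: "my-bucket", // placeholder bucket name
  prefix: "docs/",     // objects are written under this pseudo-folder
});

// Write a document (content plus metadata) to gs://my-bucket/docs/greeting.txt.
await docstore.addDocument(
  "greeting.txt",
  new Document({ pageContent: "hello world", metadata: { lang: "en" } })
);

// Read it back as a Document.
const doc = await docstore.search("greeting.txt");
console.log(doc.pageContent); // "hello world"
```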
146962
function isVersionLessThan(v1: number[], v2: number[]): boolean { for (let i = 0; i < Math.min(v1.length, v2.length); i += 1) { if (v1[i] < v2[i]) { return true; } else if (v1[i] > v2[i]) { return false; } } // If all the corresponding parts are equal, the shorter version is less return v1.length < v2.length; } // Filter utils const COMPARISONS_TO_NATIVE: Record<string, string> = { $eq: "=", $ne: "<>", $lt: "<", $lte: "<=", $gt: ">", $gte: ">=", }; const COMPARISONS_TO_NATIVE_OPERATORS = new Set( Object.keys(COMPARISONS_TO_NATIVE) ); const TEXT_OPERATORS = new Set(["$like", "$ilike"]); const LOGICAL_OPERATORS = new Set(["$and", "$or"]); const SPECIAL_CASED_OPERATORS = new Set(["$in", "$nin", "$between"]); const SUPPORTED_OPERATORS = new Set([ ...COMPARISONS_TO_NATIVE_OPERATORS, ...TEXT_OPERATORS, ...LOGICAL_OPERATORS, ...SPECIAL_CASED_OPERATORS, ]); const IS_IDENTIFIER_REGEX = /^[a-zA-Z_][a-zA-Z0-9_]*$/; function combineQueries( inputQueries: [string, Record<string, Any>][], operator: string ): [string, Record<string, Any>] { let combinedQuery = ""; const combinedParams: Record<string, Any> = {}; const paramCounter: Record<string, number> = {}; for (const [query, params] of inputQueries) { let newQuery = query; for (const [param, value] of Object.entries(params)) { if (param in paramCounter) { paramCounter[param] += 1; } else { paramCounter[param] = 1; } const newParamName = `${param}_${paramCounter[param]}`; newQuery = newQuery.replace(`$${param}`, `$${newParamName}`); combinedParams[newParamName] = value; } if (combinedQuery) { combinedQuery += ` ${operator} `; } combinedQuery += `(${newQuery})`; } return [combinedQuery, combinedParams]; } function collectParams( inputData: [string, Record<string, string>][] ): [string[], Record<string, Any>] { const queryParts: string[] = []; const params: Record<string, Any> = {}; for (const [queryPart, param] of inputData) { queryParts.push(queryPart); Object.assign(params, param); } return [queryParts, params]; } function handleFieldFilter( field: string, value: Any, paramNumber = 1 ): [string, Record<string, Any>] { if (typeof field !== "string") { throw new Error( `field should be a string but got: ${typeof field} with value: ${field}` ); } if (field.startsWith("$")) { throw new Error( `Invalid filter condition. Expected a field but got an operator: ${field}` ); } // Allow [a - zA - Z0 -9_], disallow $ for now until we support escape characters if (!IS_IDENTIFIER_REGEX.test(field)) { throw new Error( `Invalid field name: ${field}. Expected a valid identifier.` ); } let operator: string; let filterValue: Any; if (typeof value === "object" && value !== null && !Array.isArray(value)) { const keys = Object.keys(value); if (keys.length !== 1) { throw new Error(`Invalid filter condition. Expected a value which is a dictionary with a single key that corresponds to an operator but got a dictionary with ${keys.length} keys. The first few keys are: ${keys .slice(0, 3) .join(", ")} `); } // eslint-disable-next-line prefer-destructuring operator = keys[0]; filterValue = value[operator]; if (!SUPPORTED_OPERATORS.has(operator)) { throw new Error( `Invalid operator: ${operator}. 
Expected one of ${SUPPORTED_OPERATORS}` ); } } else { operator = "$eq"; filterValue = value; } if (COMPARISONS_TO_NATIVE_OPERATORS.has(operator)) { const native = COMPARISONS_TO_NATIVE[operator]; const querySnippet = `n.${field} ${native} $param_${paramNumber}`; const queryParam = { [`param_${paramNumber}`]: filterValue }; return [querySnippet, queryParam]; } else if (operator === "$between") { const [low, high] = filterValue; const querySnippet = `$param_${paramNumber}_low <= n.${field} <= $param_${paramNumber}_high`; const queryParam = { [`param_${paramNumber}_low`]: low, [`param_${paramNumber}_high`]: high, }; return [querySnippet, queryParam]; } else if (["$in", "$nin", "$like", "$ilike"].includes(operator)) { if (["$in", "$nin"].includes(operator)) { filterValue.forEach((val: Any) => { if ( typeof val !== "string" && typeof val !== "number" && typeof val !== "boolean" ) { throw new Error(`Unsupported type: ${typeof val} for value: ${val}`); } }); } if (operator === "$in") { const querySnippet = `n.${field} IN $param_${paramNumber}`; const queryParam = { [`param_${paramNumber}`]: filterValue }; return [querySnippet, queryParam]; } else if (operator === "$nin") { const querySnippet = `n.${field} NOT IN $param_${paramNumber}`; const queryParam = { [`param_${paramNumber}`]: filterValue }; return [querySnippet, queryParam]; } else if (operator === "$like") { const querySnippet = `n.${field} CONTAINS $param_${paramNumber}`; const queryParam = { [`param_${paramNumber}`]: filterValue.slice(0, -1) }; return [querySnippet, queryParam]; } else if (operator === "$ilike") { const querySnippet = `toLower(n.${field}) CONTAINS $param_${paramNumber}`; const queryParam = { [`param_${paramNumber}`]: filterValue.slice(0, -1) }; return [querySnippet, queryParam]; } else { throw new Error("Not Implemented"); } } else { throw new Error("Not Implemented"); } } function constructMetadataFilter( filter: Record<string, Any> ): [string, Record<string, Any>] { if (typeof filter !== "object" || filter === null) { throw new Error("Expected a dictionary representing the filter condition."); } const entries = Object.entries(filter); if (entries.length === 1) { const [key, value] = entries[0]; if (key.startsWith("$")) { if (!["$and", "$or"].includes(key.toLowerCase())) { throw new Error( `Invalid filter condition. Expected $and or $or but got: ${key}` ); } if (!Array.isArray(value)) { throw new Error( `Expected an array for logical conditions, but got ${typeof value} for value: ${value}` ); } const operation = key.toLowerCase() === "$and" ? "AND" : "OR"; const combinedQueries = combineQueries( value.map((v) => constructMetadataFilter(v)), operation ); return combinedQueries; } else { return handleFieldFilter(key, value); } } else if (entries.length > 1) { for (const [key] of entries) { if (key.startsWith("$")) { throw new Error( `Invalid filter condition. Expected a field but got an operator: ${key}` ); } } const and_multiple = collectParams( entries.map(([field, val], index) => handleFieldFilter(field, val, index + 1) ) ); if (and_multiple.length >= 1) { return [and_multiple[0].join(" AND "), and_multiple[1]]; } else { throw Error( "Invalid filter condition. Expected a dictionary but got an empty dictionary" ); } } else { throw new Error("Filter condition contains no entries."); } }
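These helpers are not exported here, but the translation they perform (which appears to be Cypher generation for a graph-backed vector store, with metadata living on a node `n`) is easier to see with a concrete input. A sketch of the expected result, traced by hand from the code above rather than executed, and callable like this only from within the same module:

```typescript
// A filter combining an equality test and a comparison...
const [cypherSnippet, params] = constructMetadataFilter({
  $and: [{ color: "red" }, { price: { $lt: 100 } }],
});

// ...becomes a parameterised predicate plus its parameter map:
// cypherSnippet === "(n.color = $param_1_1) AND (n.price < $param_1_2)"
// params        === { param_1_1: "red", param_1_2: 100 }
```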
146963
import * as uuid from "uuid"; import type { ChromaClient as ChromaClientT, Collection, ChromaClientParams, CollectionMetadata, Where, } from "chromadb"; import type { EmbeddingsInterface } from "@langchain/core/embeddings"; import { VectorStore } from "@langchain/core/vectorstores"; import { Document } from "@langchain/core/documents"; type SharedChromaLibArgs = { numDimensions?: number; collectionName?: string; filter?: object; collectionMetadata?: CollectionMetadata; clientParams?: Omit<ChromaClientParams, "path">; }; /** * Defines the arguments that can be passed to the `Chroma` class * constructor. It can either contain a `url` for the Chroma database, the * number of dimensions for the vectors (`numDimensions`), a * `collectionName` for the collection to be used in the database, and a * `filter` object; or it can contain an `index` which is an instance of * `ChromaClientT`, along with the `numDimensions`, `collectionName`, and * `filter`. */ export type ChromaLibArgs = | ({ url?: string; } & SharedChromaLibArgs) | ({ index?: ChromaClientT; } & SharedChromaLibArgs); /** * Defines the parameters for the `delete` method in the `Chroma` class. * It can either contain an array of `ids` of the documents to be deleted * or a `filter` object to specify the documents to be deleted. */ export interface ChromaDeleteParams<T> { ids?: string[]; filter?: T; } /** * Chroma vector store integration. * * Setup: * Install `@langchain/community` and `chromadb`. * * ```bash * npm install @langchain/community chromadb * ``` * * ## [Constructor args](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html#constructor) * * <details open> * <summary><strong>Instantiate</strong></summary> * * ```typescript * import { Chroma } from '@langchain/community/vectorstores/chroma'; * // Or other embeddings * import { OpenAIEmbeddings } from '@langchain/openai'; * * const embeddings = new OpenAIEmbeddings({ * model: "text-embedding-3-small", * }) * * const vectorStore = new Chroma( * embeddings, * { * collectionName: "foo", * url: "http://localhost:8000", // URL of the Chroma server * } * ); * ``` * </details> * * <br /> * * <details> * <summary><strong>Add documents</strong></summary> * * ```typescript * import type { Document } from '@langchain/core/documents'; * * const document1 = { pageContent: "foo", metadata: { baz: "bar" } }; * const document2 = { pageContent: "thud", metadata: { bar: "baz" } }; * const document3 = { pageContent: "i will be deleted :(", metadata: {} }; * * const documents: Document[] = [document1, document2, document3]; * const ids = ["1", "2", "3"]; * await vectorStore.addDocuments(documents, { ids }); * ``` * </details> * * <br /> * * <details> * <summary><strong>Delete documents</strong></summary> * * ```typescript * await vectorStore.delete({ ids: ["3"] }); * ``` * </details> * * <br /> * * <details> * <summary><strong>Similarity search</strong></summary> * * ```typescript * const results = await vectorStore.similaritySearch("thud", 1); * for (const doc of results) { * console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`); * } * // Output: * thud [{"baz":"bar"}] * ``` * </details> * * <br /> * * * <details> * <summary><strong>Similarity search with filter</strong></summary> * * ```typescript * const resultsWithFilter = await vectorStore.similaritySearch("thud", 1, { baz: "bar" }); * * for (const doc of resultsWithFilter) { * console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`); * } * // Output: * foo 
[{"baz":"bar"}] * ``` * </details> * * <br /> * * * <details> * <summary><strong>Similarity search with score</strong></summary> * * ```typescript * const resultsWithScore = await vectorStore.similaritySearchWithScore("qux", 1); * for (const [doc, score] of resultsWithScore) { * console.log(`* [SIM=${score.toFixed(6)}] ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`); * } * // Output: * [SIM=0.000000] qux [{"bar":"baz","baz":"bar"}] * ``` * </details> * * <br /> * * <details> * <summary><strong>As a retriever</strong></summary> * * ```typescript * const retriever = vectorStore.asRetriever({ * searchType: "mmr", // Leave blank for standard similarity search * k: 1, * }); * const resultAsRetriever = await retriever.invoke("thud"); * console.log(resultAsRetriever); * * // Output: [Document({ metadata: { "baz":"bar" }, pageContent: "thud" })] * ``` * </details> * * <br /> */
export class Chroma extends VectorStore { declare FilterType: Where; index?: ChromaClientT; collection?: Collection; collectionName: string; collectionMetadata?: CollectionMetadata; numDimensions?: number; clientParams?: Omit<ChromaClientParams, "path">; url: string; filter?: object; _vectorstoreType(): string { return "chroma"; } constructor(embeddings: EmbeddingsInterface, args: ChromaLibArgs) { super(embeddings, args); this.numDimensions = args.numDimensions; this.embeddings = embeddings; this.collectionName = ensureCollectionName(args.collectionName); this.collectionMetadata = args.collectionMetadata; this.clientParams = args.clientParams; if ("index" in args) { this.index = args.index; } else if ("url" in args) { this.url = args.url || "http://localhost:8000"; } this.filter = args.filter; } /** * Adds documents to the Chroma database. The documents are first * converted to vectors using the `embeddings` instance, and then added to * the database. * @param documents An array of `Document` instances to be added to the database. * @param options Optional. An object containing an array of `ids` for the documents. * @returns A promise that resolves when the documents have been added to the database. */ async addDocuments(documents: Document[], options?: { ids?: string[] }) { const texts = documents.map(({ pageContent }) => pageContent); return this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Ensures that a collection exists in the Chroma database. If the * collection does not exist, it is created. * @returns A promise that resolves with the `Collection` instance. */ async ensureCollection(): Promise<Collection> { if (!this.collection) { if (!this.index) { const chromaClient = new (await Chroma.imports()).ChromaClient({ path: this.url, ...(this.clientParams ?? {}), }); this.index = chromaClient; } try { this.collection = await this.index.getOrCreateCollection({ name: this.collectionName, ...(this.collectionMetadata && { metadata: this.collectionMetadata }), }); } catch (err) { throw new Error(`Chroma getOrCreateCollection error: ${err}`); } } return this.collection; } /** * Adds vectors to the Chroma database. The vectors are associated with * the provided documents. * @param vectors An array of vectors to be added to the database. * @param documents An array of `Document` instances associated with the vectors. * @param options Optional. An object containing an array of `ids` for the vectors. * @returns A promise that resolves with an array of document IDs when the vectors have been added to the database. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } ) { if (vectors.length === 0) { return []; } if (this.numDimensions === undefined) { this.numDimensions = vectors[0].length; } if (vectors.length !== documents.length) { throw new Error(`Vectors and metadatas must have the same length`); } if (vectors[0].length !== this.numDimensions) { throw new Error( `Vectors must have the same length as the number of dimensions (${this.numDimensions})` ); } const documentIds = options?.ids ?? 
Array.from({ length: vectors.length }, () => uuid.v1()); const collection = await this.ensureCollection(); const mappedMetadatas = documents.map(({ metadata }) => { let locFrom; let locTo; if (metadata?.loc) { if (metadata.loc.lines?.from !== undefined) locFrom = metadata.loc.lines.from; if (metadata.loc.lines?.to !== undefined) locTo = metadata.loc.lines.to; } const newMetadata: Document["metadata"] = { ...metadata, ...(locFrom !== undefined && { locFrom }), ...(locTo !== undefined && { locTo }), }; if (newMetadata.loc) delete newMetadata.loc; return newMetadata; }); await collection.upsert({ ids: documentIds, embeddings: vectors, metadatas: mappedMetadatas, documents: documents.map(({ pageContent }) => pageContent), }); return documentIds; } /** * Deletes documents from the Chroma database. The documents to be deleted * can be specified by providing an array of `ids` or a `filter` object. * @param params An object containing either an array of `ids` of the documents to be deleted or a `filter` object to specify the documents to be deleted. * @returns A promise that resolves when the specified documents have been deleted from the database. */ async delete(params: ChromaDeleteParams<this["FilterType"]>): Promise<void> { const collection = await this.ensureCollection(); if (Array.isArray(params.ids)) { await collection.delete({ ids: params.ids }); } else if (params.filter) { await collection.delete({ where: { ...params.filter }, }); } else { throw new Error(`You must provide one of "ids or "filter".`); } } /** * Searches for vectors in the Chroma database that are similar to the * provided query vector. The search can be filtered using the provided * `filter` object or the `filter` property of the `Chroma` instance. * @param query The query vector. * @param k The number of similar vectors to return. * @param filter Optional. A `filter` object to filter the search results. * @returns A promise that resolves with an array of tuples, each containing a `Document` instance and a similarity score. */ async similaritySearchVectorWithScore( query: number[], k: number, filter?: this["FilterType"] ) { if (filter && this.filter) { throw new Error("cannot provide both `filter` and `this.filter`"); } const _filter = filter ?? this.filter; const collection = await this.ensureCollection(); // similaritySearchVectorWithScore supports one query vector at a time // chroma supports multiple query vectors at a time const result = await collection.query({ queryEmbeddings: query, nResults: k, where: { ..._filter }, }); const { ids, distances, documents, metadatas } = result; if (!ids || !distances || !documents || !metadatas) { return []; } // get the result data from the first and only query vector const [firstIds] = ids; const [firstDistances] = distances; const [firstDocuments] = documents; const [firstMetadatas] = metadatas; const results: [Document, number][] = []; for (let i = 0; i < firstIds.length; i += 1) { let metadata: Document["metadata"] = firstMetadatas?.[i] ?? {}; if (metadata.locFrom && metadata.locTo) { metadata = { ...metadata, loc: { lines: { from: metadata.locFrom, to: metadata.locTo, }, }, }; delete metadata.locFrom; delete metadata.locTo; } results.push([ new Document({ pageContent: firstDocuments?.[i] ?? "", metadata, }), firstDistances[i], ]); } return results; } /** * Creates a new `Chroma` instance from an array of text strings. The text * strings are converted to `Document` instances and added to the Chroma * database. * @param texts An array of text strings. 
* @param metadatas An array of metadata objects or a single metadata object. If an array is provided, it must have the same length as the `texts` array. * @param embeddings An `Embeddings` instance used to generate embeddings for the documents. * @param dbConfig A `ChromaLibArgs` object containing the configuration for the Chroma database. * @returns A promise that resolves with a new `Chroma` instance. */
static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, dbConfig: ChromaLibArgs ): Promise<Chroma> { const docs: Document[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return this.fromDocuments(docs, embeddings, dbConfig); } /** * Creates a new `Chroma` instance from an array of `Document` instances. * The documents are added to the Chroma database. * @param docs An array of `Document` instances. * @param embeddings An `Embeddings` instance used to generate embeddings for the documents. * @param dbConfig A `ChromaLibArgs` object containing the configuration for the Chroma database. * @returns A promise that resolves with a new `Chroma` instance. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig: ChromaLibArgs ): Promise<Chroma> { const instance = new this(embeddings, dbConfig); await instance.addDocuments(docs); return instance; } /** * Creates a new `Chroma` instance from an existing collection in the * Chroma database. * @param embeddings An `Embeddings` instance used to generate embeddings for the documents. * @param dbConfig A `ChromaLibArgs` object containing the configuration for the Chroma database. * @returns A promise that resolves with a new `Chroma` instance. */ static async fromExistingCollection( embeddings: EmbeddingsInterface, dbConfig: ChromaLibArgs ): Promise<Chroma> { const instance = new this(embeddings, dbConfig); await instance.ensureCollection(); return instance; } /** @ignore */ static async imports(): Promise<{ ChromaClient: typeof ChromaClientT; }> { try { const { ChromaClient } = await import("chromadb"); return { ChromaClient }; } catch (e) { throw new Error( "Please install chromadb as a dependency with, e.g. `npm install -S chromadb`" ); } } } /** * Generates a unique collection name if none is provided. */ function ensureCollectionName(collectionName?: string) { if (!collectionName) { return `langchain-${uuid.v4()}`; } return collectionName; }
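A compact sketch of the static factory path just defined (assumes a Chroma server running at the default URL; `OpenAIEmbeddings` is used as an example provider, but any `EmbeddingsInterface` implementation works):

```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

const store = await Chroma.fromTexts(
  ["foo", "bar", "baz"],
  [{ id: 1 }, { id: 2 }, { id: 3 }],
  new OpenAIEmbeddings({ model: "text-embedding-3-small" }),
  { collectionName: "demo", url: "http://localhost:8000" }
);

const hits = await store.similaritySearch("foo", 2);
console.log(hits.map((d) => d.pageContent));
```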
146966
import type { HierarchicalNSW as HierarchicalNSWT, SpaceName, } from "hnswlib-node"; import type { EmbeddingsInterface } from "@langchain/core/embeddings"; import { SaveableVectorStore } from "@langchain/core/vectorstores"; import { Document } from "@langchain/core/documents"; import { SynchronousInMemoryDocstore } from "../stores/doc/in_memory.js"; /** * Interface for the base configuration of HNSWLib. It includes the space * name and the number of dimensions. */ export interface HNSWLibBase { space: SpaceName; numDimensions?: number; } /** * Interface for the arguments that can be passed to the HNSWLib * constructor. It extends HNSWLibBase and includes properties for the * document store and HNSW index. */ export interface HNSWLibArgs extends HNSWLibBase { docstore?: SynchronousInMemoryDocstore; index?: HierarchicalNSWT; } /** * Class that implements a vector store using Hierarchical Navigable Small * World (HNSW) graphs. It extends the SaveableVectorStore class and * provides methods for adding documents and vectors, performing * similarity searches, and saving and loading the vector store. */
export class HNSWLib extends SaveableVectorStore { declare FilterType: (doc: Document) => boolean; _index?: HierarchicalNSWT; docstore: SynchronousInMemoryDocstore; args: HNSWLibBase; _vectorstoreType(): string { return "hnswlib"; } constructor(embeddings: EmbeddingsInterface, args: HNSWLibArgs) { super(embeddings, args); this._index = args.index; this.args = args; this.embeddings = embeddings; this.docstore = args?.docstore ?? new SynchronousInMemoryDocstore(); } /** * Method to add documents to the vector store. It first converts the * documents to vectors using the embeddings, then adds the vectors to the * vector store. * @param documents The documents to be added to the vector store. * @returns A Promise that resolves when the documents have been added. */ async addDocuments(documents: Document[]): Promise<void> { const texts = documents.map(({ pageContent }) => pageContent); return this.addVectors( await this.embeddings.embedDocuments(texts), documents ); } private static async getHierarchicalNSW(args: HNSWLibBase) { const { HierarchicalNSW } = await HNSWLib.imports(); if (!args.space) { throw new Error("hnswlib-node requires a space argument"); } if (args.numDimensions === undefined) { throw new Error("hnswlib-node requires a numDimensions argument"); } return new HierarchicalNSW(args.space, args.numDimensions); } private async initIndex(vectors: number[][]) { if (!this._index) { if (this.args.numDimensions === undefined) { this.args.numDimensions = vectors[0].length; } this.index = await HNSWLib.getHierarchicalNSW(this.args); } if (!this.index.getCurrentCount()) { this.index.initIndex(vectors.length); } } public get index(): HierarchicalNSWT { if (!this._index) { throw new Error( "Vector store not initialised yet. Try calling `addTexts` first." ); } return this._index; } private set index(index: HierarchicalNSWT) { this._index = index; } /** * Method to add vectors to the vector store. It first initializes the * index if it hasn't been initialized yet, then adds the vectors to the * index and the documents to the document store. * @param vectors The vectors to be added to the vector store. * @param documents The documents corresponding to the vectors. * @returns A Promise that resolves when the vectors and documents have been added. */ async addVectors(vectors: number[][], documents: Document[]) { if (vectors.length === 0) { return; } await this.initIndex(vectors); // TODO here we could optionally normalise the vectors to unit length // so that dot product is equivalent to cosine similarity, like this // https://github.com/nmslib/hnswlib/issues/384#issuecomment-1155737730 // While we only support OpenAI embeddings this isn't necessary if (vectors.length !== documents.length) { throw new Error(`Vectors and metadatas must have the same length`); } if (vectors[0].length !== this.args.numDimensions) { throw new Error( `Vectors must have the same length as the number of dimensions (${this.args.numDimensions})` ); } const capacity = this.index.getMaxElements(); const needed = this.index.getCurrentCount() + vectors.length; if (needed > capacity) { this.index.resizeIndex(needed); } const docstoreSize = this.index.getCurrentCount(); const toSave: Record<string, Document> = {}; for (let i = 0; i < vectors.length; i += 1) { this.index.addPoint(vectors[i], docstoreSize + i); toSave[docstoreSize + i] = documents[i]; } this.docstore.add(toSave); } /** * Method to perform a similarity search in the vector store using a query * vector. 
It returns the k most similar documents along with their * similarity scores. An optional filter function can be provided to * filter the documents. * @param query The query vector. * @param k The number of most similar documents to return. * @param filter An optional filter function to filter the documents. * @returns A Promise that resolves to an array of tuples, where each tuple contains a document and its similarity score. */ async similaritySearchVectorWithScore( query: number[], k: number, filter?: this["FilterType"] ) { if (this.args.numDimensions && !this._index) { await this.initIndex([[]]); } if (query.length !== this.args.numDimensions) { throw new Error( `Query vector must have the same length as the number of dimensions (${this.args.numDimensions})` ); } if (k > this.index.getCurrentCount()) { const total = this.index.getCurrentCount(); console.warn( `k (${k}) is greater than the number of elements in the index (${total}), setting k to ${total}` ); // eslint-disable-next-line no-param-reassign k = total; } const filterFunction = (label: number): boolean => { if (!filter) { return true; } const document = this.docstore.search(String(label)); // eslint-disable-next-line no-instanceof/no-instanceof if (typeof document !== "string") { return filter(document); } return false; }; const result = this.index.searchKnn( query, k, filter ? filterFunction : undefined ); return result.neighbors.map( (docIndex, resultIndex) => [ this.docstore.search(String(docIndex)), result.distances[resultIndex], ] as [Document, number] ); } /** * Method to delete the vector store from a directory. It deletes the * hnswlib.index file, the docstore.json file, and the args.json file from * the directory. * @param params An object with a directory property that specifies the directory from which to delete the vector store. * @returns A Promise that resolves when the vector store has been deleted. */ async delete(params: { directory: string }) { const fs = await import("node:fs/promises"); const path = await import("node:path"); try { await fs.access(path.join(params.directory, "hnswlib.index")); } catch (err) { throw new Error( `Directory ${params.directory} does not contain a hnswlib.index file.` ); } await Promise.all([ await fs.rm(path.join(params.directory, "hnswlib.index"), { force: true, }), await fs.rm(path.join(params.directory, "docstore.json"), { force: true, }), await fs.rm(path.join(params.directory, "args.json"), { force: true }), ]); } /** * Method to save the vector store to a directory. It saves the HNSW * index, the arguments, and the document store to the directory. * @param directory The directory to which to save the vector store. * @returns A Promise that resolves when the vector store has been saved. */ async save(directory: string) { const fs = await import("node:fs/promises"); const path = await import("node:path"); await fs.mkdir(directory, { recursive: true }); await Promise.all([ this.index.writeIndex(path.join(directory, "hnswlib.index")), await fs.writeFile( path.join(directory, "args.json"), JSON.stringify(this.args) ), await fs.writeFile( path.join(directory, "docstore.json"), JSON.stringify(Array.from(this.docstore._docs.entries())) ), ]); } /** * Static method to load a vector store from a directory. It reads the * HNSW index, the arguments, and the document store from the directory, * then creates a new HNSWLib instance with these values. * @param directory The directory from which to load the vector store. 
* @param embeddings The embeddings to be used by the HNSWLib instance. * @returns A Promise that resolves to a new HNSWLib instance. */
static async load(directory: string, embeddings: EmbeddingsInterface) { const fs = await import("node:fs/promises"); const path = await import("node:path"); const args = JSON.parse( await fs.readFile(path.join(directory, "args.json"), "utf8") ); const index = await HNSWLib.getHierarchicalNSW(args); const [docstoreFiles] = await Promise.all([ fs .readFile(path.join(directory, "docstore.json"), "utf8") .then(JSON.parse), index.readIndex(path.join(directory, "hnswlib.index")), ]); args.docstore = new SynchronousInMemoryDocstore(new Map(docstoreFiles)); args.index = index; return new HNSWLib(embeddings, args); } /** * Static method to create a new HNSWLib instance from texts and metadata. * It creates a new Document instance for each text and metadata, then * calls the fromDocuments method to create the HNSWLib instance. * @param texts The texts to be used to create the documents. * @param metadatas The metadata to be used to create the documents. * @param embeddings The embeddings to be used by the HNSWLib instance. * @param dbConfig An optional configuration object for the document store. * @returns A Promise that resolves to a new HNSWLib instance. */ static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, dbConfig?: { docstore?: SynchronousInMemoryDocstore; } ): Promise<HNSWLib> { const docs: Document[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return HNSWLib.fromDocuments(docs, embeddings, dbConfig); } /** * Static method to create a new HNSWLib instance from documents. It * creates a new HNSWLib instance, adds the documents to it, then returns * the instance. * @param docs The documents to be added to the HNSWLib instance. * @param embeddings The embeddings to be used by the HNSWLib instance. * @param dbConfig An optional configuration object for the document store. * @returns A Promise that resolves to a new HNSWLib instance. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig?: { docstore?: SynchronousInMemoryDocstore; } ): Promise<HNSWLib> { const args: HNSWLibArgs = { docstore: dbConfig?.docstore, space: "cosine", }; const instance = new this(embeddings, args); await instance.addDocuments(docs); return instance; } static async imports(): Promise<{ HierarchicalNSW: typeof HierarchicalNSWT; }> { try { const { default: { HierarchicalNSW }, } = await import("hnswlib-node"); return { HierarchicalNSW }; // eslint-disable-next-line @typescript-eslint/no-explicit-any } catch (err: any) { throw new Error( `Could not import hnswlib-node. Please install hnswlib-node as a dependency with, e.g. \`npm install -S hnswlib-node\`.\n\nError: ${err?.message}` ); } } }
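A short sketch of the save/load round trip (assumes `hnswlib-node` is installed; `OpenAIEmbeddings` stands in for any `EmbeddingsInterface` implementation, and the `@langchain/community/vectorstores/hnswlib` entrypoint is assumed):

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// Build an index in memory and persist it to disk.
const store = await HNSWLib.fromTexts(
  ["Harrison worked at Kensho", "bears like honey"],
  [{ source: "a" }, { source: "b" }],
  embeddings
);
await store.save("./hnswlib-index");

// Later (or in another process), load it back and query it.
const loaded = await HNSWLib.load("./hnswlib-index", embeddings);
const results = await loaded.similaritySearch("Where did Harrison work?", 1);
console.log(results[0].pageContent);
```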
146999
export class MongoDBAtlasVectorSearch extends VectorStore { declare FilterType: MongoDBAtlasFilter; private readonly collection: Collection<MongoDBDocument>; private readonly indexName: string; private readonly textKey: string; private readonly embeddingKey: string; private readonly primaryKey: string; private caller: AsyncCaller; _vectorstoreType(): string { return "mongodb_atlas"; } constructor( embeddings: EmbeddingsInterface, args: MongoDBAtlasVectorSearchLibArgs ) { super(embeddings, args); this.collection = args.collection; this.indexName = args.indexName ?? "default"; this.textKey = args.textKey ?? "text"; this.embeddingKey = args.embeddingKey ?? "embedding"; this.primaryKey = args.primaryKey ?? "_id"; this.caller = new AsyncCaller(args); } /** * Method to add vectors and their corresponding documents to the MongoDB * collection. * @param vectors Vectors to be added. * @param documents Corresponding documents to be added. * @returns Promise that resolves when the vectors and documents have been added. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } ) { const docs = vectors.map((embedding, idx) => ({ [this.textKey]: documents[idx].pageContent, [this.embeddingKey]: embedding, ...documents[idx].metadata, })); if (options?.ids === undefined) { await this.collection.insertMany(docs); } else { if (options.ids.length !== vectors.length) { throw new Error( `If provided, "options.ids" must be an array with the same length as "vectors".` ); } const { ids } = options; for (let i = 0; i < docs.length; i += 1) { await this.caller.call(async () => { await this.collection.updateOne( { [this.primaryKey]: ids[i] }, { $set: { [this.primaryKey]: ids[i], ...docs[i] } }, { upsert: true } ); }); } } return options?.ids ?? docs.map((doc) => doc[this.primaryKey]); } /** * Method to add documents to the MongoDB collection. It first converts * the documents to vectors using the embeddings and then calls the * addVectors method. * @param documents Documents to be added. * @returns Promise that resolves when the documents have been added. */ async addDocuments(documents: Document[], options?: { ids?: string[] }) { const texts = documents.map(({ pageContent }) => pageContent); return this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Method that performs a similarity search on the vectors stored in the * MongoDB collection. It returns a list of documents and their * corresponding similarity scores. * @param query Query vector for the similarity search. * @param k Number of nearest neighbors to return. * @param filter Optional filter to be applied. * @returns Promise that resolves to a list of documents and their corresponding similarity scores. */ async similaritySearchVectorWithScore( query: number[], k: number, filter?: MongoDBAtlasFilter ): Promise<[Document, number][]> { const postFilterPipeline = filter?.postFilterPipeline ?? []; const preFilter: MongoDBDocument | undefined = filter?.preFilter || filter?.postFilterPipeline || filter?.includeEmbeddings ? filter.preFilter : filter; const removeEmbeddingsPipeline = !filter?.includeEmbeddings ? 
[ { $project: { [this.embeddingKey]: 0, }, }, ] : []; const pipeline: MongoDBDocument[] = [ { $vectorSearch: { queryVector: MongoDBAtlasVectorSearch.fixArrayPrecision(query), index: this.indexName, path: this.embeddingKey, limit: k, numCandidates: 10 * k, ...(preFilter && { filter: preFilter }), }, }, { $set: { score: { $meta: "vectorSearchScore" }, }, }, ...removeEmbeddingsPipeline, ...postFilterPipeline, ]; const results = this.collection .aggregate(pipeline) .map<[Document, number]>((result) => { const { score, [this.textKey]: text, ...metadata } = result; return [new Document({ pageContent: text, metadata }), score]; }); return results.toArray(); } /** * Return documents selected using the maximal marginal relevance. * Maximal marginal relevance optimizes for similarity to the query AND diversity * among selected documents. * * @param {string} query - Text to look up documents similar to. * @param {number} options.k - Number of documents to return. * @param {number} options.fetchK=20- Number of documents to fetch before passing to the MMR algorithm. * @param {number} options.lambda=0.5 - Number between 0 and 1 that determines the degree of diversity among the results, * where 0 corresponds to maximum diversity and 1 to minimum diversity. * @param {MongoDBAtlasFilter} options.filter - Optional Atlas Search operator to pre-filter on document fields * or post-filter following the knnBeta search. * * @returns {Promise<Document[]>} - List of documents selected by maximal marginal relevance. */ async maxMarginalRelevanceSearch( query: string, options: MaxMarginalRelevanceSearchOptions<this["FilterType"]> ): Promise<Document[]> { const { k, fetchK = 20, lambda = 0.5, filter } = options; const queryEmbedding = await this.embeddings.embedQuery(query); // preserve the original value of includeEmbeddings const includeEmbeddingsFlag = options.filter?.includeEmbeddings || false; // update filter to include embeddings, as they will be used in MMR const includeEmbeddingsFilter = { ...filter, includeEmbeddings: true, }; const resultDocs = await this.similaritySearchVectorWithScore( MongoDBAtlasVectorSearch.fixArrayPrecision(queryEmbedding), fetchK, includeEmbeddingsFilter ); const embeddingList = resultDocs.map( (doc) => doc[0].metadata[this.embeddingKey] ); const mmrIndexes = maximalMarginalRelevance( queryEmbedding, embeddingList, lambda, k ); return mmrIndexes.map((idx) => { const doc = resultDocs[idx][0]; // remove embeddings if they were not requested originally if (!includeEmbeddingsFlag) { delete doc.metadata[this.embeddingKey]; } return doc; }); } /** * Static method to create an instance of MongoDBAtlasVectorSearch from a * list of texts. It first converts the texts to vectors and then adds * them to the MongoDB collection. * @param texts List of texts to be converted to vectors. * @param metadatas Metadata for the texts. * @param embeddings Embeddings to be used for conversion. * @param dbConfig Database configuration for MongoDB Atlas. * @returns Promise that resolves to a new instance of MongoDBAtlasVectorSearch. */ static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, dbConfig: MongoDBAtlasVectorSearchLibArgs & { ids?: string[] } ): Promise<MongoDBAtlasVectorSearch> { const docs: Document[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? 
metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return MongoDBAtlasVectorSearch.fromDocuments(docs, embeddings, dbConfig); } /** * Static method to create an instance of MongoDBAtlasVectorSearch from a * list of documents. It first converts the documents to vectors and then * adds them to the MongoDB collection. * @param docs List of documents to be converted to vectors. * @param embeddings Embeddings to be used for conversion. * @param dbConfig Database configuration for MongoDB Atlas. * @returns Promise that resolves to a new instance of MongoDBAtlasVectorSearch. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig: MongoDBAtlasVectorSearchLibArgs & { ids?: string[] } ): Promise<MongoDBAtlasVectorSearch> { const instance = new this(embeddings, dbConfig); await instance.addDocuments(docs, { ids: dbConfig.ids }); return instance; }
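// Usage sketch (separate application module), not part of the class above: it shows how
// the MongoDBAtlasVectorSearch defined here is typically wired up. The import path,
// connection string, database/collection names, and the "default" Atlas vector index
// name are assumptions — adjust them to your own deployment.
import { MongoClient } from "mongodb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI ?? "");
await client.connect();
const collection = client.db("langchain").collection("vectors");

const vectorStore = new MongoDBAtlasVectorSearch(new OpenAIEmbeddings(), {
  collection,
  indexName: "default", // Atlas vector search index on the "embedding" field
  textKey: "text",
  embeddingKey: "embedding",
});

// Passing ids triggers the upsert branch of addVectors; omitting them uses insertMany.
await vectorStore.addDocuments(
  [
    new Document({ pageContent: "Atlas supports $vectorSearch", metadata: { source: "docs" } }),
    new Document({ pageContent: "LangChain wraps many vector stores", metadata: { source: "blog" } }),
  ],
  { ids: ["doc-1", "doc-2"] }
);

// Plain similarity search with a pre-filter, then an MMR search for more diverse results.
const hits = await vectorStore.similaritySearch("vector search", 1, {
  preFilter: { source: { $eq: "docs" } },
});
const diverse = await vectorStore.maxMarginalRelevanceSearch("vector search", {
  k: 2,
  fetchK: 10,
  lambda: 0.5,
});
console.log(hits, diverse);

await client.close();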
import * as uuid from "uuid"; import type { EmbeddingsInterface } from "@langchain/core/embeddings"; import { VectorStore } from "@langchain/core/vectorstores"; import { Document } from "@langchain/core/documents"; /** * Type definition for the arguments required to initialize a * TigrisVectorStore instance. */ export type TigrisLibArgs = { // eslint-disable-next-line @typescript-eslint/no-explicit-any index: any; }; /** * Class for managing and operating vector search applications with * Tigris, an open-source Serverless NoSQL Database and Search Platform. */ export class TigrisVectorStore extends VectorStore { // eslint-disable-next-line @typescript-eslint/no-explicit-any index?: any; _vectorstoreType(): string { return "tigris"; } constructor(embeddings: EmbeddingsInterface, args: TigrisLibArgs) { super(embeddings, args); this.embeddings = embeddings; this.index = args.index; } /** * Method to add an array of documents to the Tigris database. * @param documents An array of Document instances to be added to the Tigris database. * @param options Optional parameter that can either be an array of string IDs or an object with a property 'ids' that is an array of string IDs. * @returns A Promise that resolves when the documents have been added to the Tigris database. */ async addDocuments( documents: Document[], options?: { ids?: string[] } | string[] ): Promise<void> { const texts = documents.map(({ pageContent }) => pageContent); await this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Method to add vectors to the Tigris database. * @param vectors An array of vectors to be added to the Tigris database. * @param documents An array of Document instances corresponding to the vectors. * @param options Optional parameter that can either be an array of string IDs or an object with a property 'ids' that is an array of string IDs. * @returns A Promise that resolves when the vectors have been added to the Tigris database. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } | string[] ) { if (vectors.length === 0) { return; } if (vectors.length !== documents.length) { throw new Error(`Vectors and metadatas must have the same length`); } const ids = Array.isArray(options) ? options : options?.ids; const documentIds = ids == null ? documents.map(() => uuid.v4()) : ids; await this.index?.addDocumentsWithVectors({ ids: documentIds, embeddings: vectors, documents: documents.map(({ metadata, pageContent }) => ({ content: pageContent, metadata, })), }); } /** * Method to perform a similarity search in the Tigris database and return * the k most similar vectors along with their similarity scores. * @param query The query vector. * @param k The number of most similar vectors to return. * @param filter Optional filter object to apply during the search. * @returns A Promise that resolves to an array of tuples, each containing a Document and its similarity score. */ async similaritySearchVectorWithScore( query: number[], k: number, filter?: object ) { const result = await this.index?.similaritySearchVectorWithScore({ query, k, filter, }); if (!result) { return []; } // eslint-disable-next-line @typescript-eslint/no-explicit-any return result.map(([document, score]: [any, any]) => [ new Document({ pageContent: document.content, metadata: document.metadata, }), score, ]) as [Document, number][]; } /** * Static method to create a new instance of TigrisVectorStore from an * array of texts. 
* @param texts An array of texts to be converted into Document instances and added to the Tigris database. * @param metadatas Either an array of metadata objects or a single metadata object to be associated with the texts. * @param embeddings An instance of Embeddings to be used for embedding the texts. * @param dbConfig An instance of TigrisLibArgs to be used for configuring the Tigris database. * @returns A Promise that resolves to a new instance of TigrisVectorStore. */ static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, dbConfig: TigrisLibArgs ): Promise<TigrisVectorStore> { const docs: Document[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return TigrisVectorStore.fromDocuments(docs, embeddings, dbConfig); } /** * Static method to create a new instance of TigrisVectorStore from an * array of Document instances. * @param docs An array of Document instances to be added to the Tigris database. * @param embeddings An instance of Embeddings to be used for embedding the documents. * @param dbConfig An instance of TigrisLibArgs to be used for configuring the Tigris database. * @returns A Promise that resolves to a new instance of TigrisVectorStore. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig: TigrisLibArgs ): Promise<TigrisVectorStore> { const instance = new this(embeddings, dbConfig); await instance.addDocuments(docs); return instance; } /** * Static method to create a new instance of TigrisVectorStore from an * existing index. * @param embeddings An instance of Embeddings to be used for embedding the documents. * @param dbConfig An instance of TigrisLibArgs to be used for configuring the Tigris database. * @returns A Promise that resolves to a new instance of TigrisVectorStore. */ static async fromExistingIndex( embeddings: EmbeddingsInterface, dbConfig: TigrisLibArgs ): Promise<TigrisVectorStore> { const instance = new this(embeddings, dbConfig); return instance; } }
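// Usage sketch (separate application module) for the TigrisVectorStore above. The Tigris
// `index` handle must be created with the Tigris search SDK and is only declared here,
// and the import path for the store is an assumption — both depend on your project setup.
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { TigrisVectorStore } from "@langchain/community/vectorstores/tigris";

// eslint-disable-next-line @typescript-eslint/no-explicit-any
declare const index: any; // vector search index obtained from the Tigris SDK

const store = await TigrisVectorStore.fromDocuments(
  [
    new Document({ pageContent: "Tigris is a serverless NoSQL database", metadata: { topic: "db" } }),
    new Document({ pageContent: "Vector stores power semantic search", metadata: { topic: "search" } }),
  ],
  new OpenAIEmbeddings(),
  { index }
);

// k most similar documents, optionally narrowed by a filter object passed through to Tigris.
const results = await store.similaritySearch("semantic search", 2, { topic: "search" });
console.log(results);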
protected async createSearchIndexDefinition( indexName: string ): Promise<SearchIndex> { // Embed a test query to get the embedding dimensions const testEmbedding = await this.embeddings.embedQuery("test"); const embeddingDimensions = testEmbedding.length; return { name: indexName, vectorSearch: { algorithms: [ { name: "vector-search-algorithm", kind: "hnsw", parameters: { m: 4, efSearch: 500, metric: "cosine", efConstruction: 400, }, }, ], profiles: [ { name: "vector-search-profile", algorithmConfigurationName: "vector-search-algorithm", }, ], }, semanticSearch: { defaultConfigurationName: "semantic-search-config", configurations: [ { name: "semantic-search-config", prioritizedFields: { contentFields: [ { name: DEFAULT_FIELD_CONTENT, }, ], keywordsFields: [ { name: DEFAULT_FIELD_CONTENT, }, ], }, }, ], }, fields: [ { name: DEFAULT_FIELD_ID, filterable: true, key: true, type: "Edm.String", }, { name: DEFAULT_FIELD_CONTENT, searchable: true, filterable: true, type: "Edm.String", }, { name: DEFAULT_FIELD_CONTENT_VECTOR, searchable: true, type: "Collection(Edm.Single)", vectorSearchDimensions: embeddingDimensions, vectorSearchProfileName: "vector-search-profile", }, { name: DEFAULT_FIELD_METADATA, type: "Edm.ComplexType", fields: [ { name: DEFAULT_FIELD_METADATA_SOURCE, type: "Edm.String", filterable: true, }, { name: DEFAULT_FIELD_METADATA_ATTRS, type: "Collection(Edm.ComplexType)", fields: [ { name: "key", type: "Edm.String", filterable: true, }, { name: "value", type: "Edm.String", filterable: true, }, ], }, ], }, ], }; } /** * Static method to create an instance of AzureAISearchVectorStore from a * list of texts. It first converts the texts to vectors and then adds * them to the collection. * @param texts List of texts to be converted to vectors. * @param metadatas Metadata for the texts. * @param embeddings Embeddings to be used for conversion. * @param config Database configuration for Azure AI Search. * @returns Promise that resolves to a new instance of AzureAISearchVectorStore. */ static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, config: AzureAISearchConfig ): Promise<AzureAISearchVectorStore> { const docs: Document<AzureAISearchDocumentMetadata>[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return AzureAISearchVectorStore.fromDocuments(docs, embeddings, config); } /** * Static method to create an instance of AzureAISearchVectorStore from a * list of documents. It first converts the documents to vectors and then * adds them to the database. * @param docs List of documents to be converted to vectors. * @param embeddings Embeddings to be used for conversion. * @param config Database configuration for Azure AI Search. * @returns Promise that resolves to a new instance of AzureAISearchVectorStore. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, config: AzureAISearchConfig, options?: AzureAISearchAddDocumentsOptions ): Promise<AzureAISearchVectorStore> { const instance = new this(embeddings, config); await instance.addDocuments(docs, options); return instance; } }
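// Usage sketch (separate application module) for the AzureAISearchVectorStore above.
// The import path and the exact shape of AzureAISearchConfig (endpoint, key, indexName,
// search.type) are assumptions based on a typical configuration — check the package's
// exported types before relying on them.
import { OpenAIEmbeddings } from "@langchain/openai";
import { AzureAISearchVectorStore } from "@langchain/community/vectorstores/azure_aisearch";

const store = await AzureAISearchVectorStore.fromTexts(
  ["Azure AI Search supports vector queries", "LangChain integrates with Azure AI Search"],
  [{ source: "docs" }, { source: "blog" }],
  new OpenAIEmbeddings(),
  {
    endpoint: process.env.AZURE_AISEARCH_ENDPOINT, // assumed config field names
    key: process.env.AZURE_AISEARCH_KEY,
    indexName: "vectorsearch",
    search: { type: "similarity" },
  }
);

const results = await store.similaritySearch("vector queries", 1);
console.log(results);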
export class PineconeStore extends VectorStore { declare FilterType: PineconeMetadata; textKey: string; namespace?: string; pineconeIndex: PineconeIndex; filter?: PineconeMetadata; caller: AsyncCaller; _vectorstoreType(): string { return "pinecone"; } constructor(embeddings: EmbeddingsInterface, args: PineconeLibArgs) { super(embeddings, args); this.embeddings = embeddings; const { namespace, pineconeIndex, textKey, filter, ...asyncCallerArgs } = args; this.namespace = namespace; this.pineconeIndex = pineconeIndex; this.textKey = textKey ?? "text"; this.filter = filter; this.caller = new AsyncCaller(asyncCallerArgs); } /** * Method that adds documents to the Pinecone database. * @param documents Array of documents to add to the Pinecone database. * @param options Optional ids for the documents. * @returns Promise that resolves with the ids of the added documents. */ async addDocuments( documents: Document[], options?: { ids?: string[] } | string[] ) { const texts = documents.map(({ pageContent }) => pageContent); return this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Method that adds vectors to the Pinecone database. * @param vectors Array of vectors to add to the Pinecone database. * @param documents Array of documents associated with the vectors. * @param options Optional ids for the vectors. * @returns Promise that resolves with the ids of the added vectors. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } | string[] ) { const ids = Array.isArray(options) ? options : options?.ids; const documentIds = ids == null ? documents.map(() => uuid.v4()) : ids; const pineconeVectors = vectors.map((values, idx) => { // Pinecone doesn't support nested objects, so we flatten them const documentMetadata = { ...documents[idx].metadata }; // preserve string arrays which are allowed const stringArrays: Record<string, string[]> = {}; for (const key of Object.keys(documentMetadata)) { if ( Array.isArray(documentMetadata[key]) && // eslint-disable-next-line @typescript-eslint/ban-types, @typescript-eslint/no-explicit-any documentMetadata[key].every((el: any) => typeof el === "string") ) { stringArrays[key] = documentMetadata[key]; delete documentMetadata[key]; } } const metadata: { [key: string]: string | number | boolean | string[] | null; } = { ...flatten(documentMetadata), ...stringArrays, [this.textKey]: documents[idx].pageContent, }; // Pinecone doesn't support null values, so we remove them for (const key of Object.keys(metadata)) { if (metadata[key] == null) { delete metadata[key]; } else if ( typeof metadata[key] === "object" && Object.keys(metadata[key] as unknown as object).length === 0 ) { delete metadata[key]; } } return { id: documentIds[idx], metadata, values, } as PineconeRecord<RecordMetadata>; }); const namespace = this.pineconeIndex.namespace(this.namespace ?? ""); // Pinecone recommends a limit of 100 vectors per upsert request const chunkSize = 100; const chunkedVectors = chunkArray(pineconeVectors, chunkSize); const batchRequests = chunkedVectors.map((chunk) => this.caller.call(async () => namespace.upsert(chunk)) ); await Promise.all(batchRequests); return documentIds; } /** * Method that deletes vectors from the Pinecone database. * @param params Parameters for the delete operation. * @returns Promise that resolves when the delete operation is complete. 
*/ async delete(params: PineconeDeleteParams): Promise<void> { const { deleteAll, ids, filter } = params; const namespace = this.pineconeIndex.namespace(this.namespace ?? ""); if (deleteAll) { await namespace.deleteAll(); } else if (ids) { const batchSize = 1000; for (let i = 0; i < ids.length; i += batchSize) { const batchIds = ids.slice(i, i + batchSize); await namespace.deleteMany(batchIds); } } else if (filter) { await namespace.deleteMany(filter); } else { throw new Error("Either ids or delete_all must be provided."); } } protected async _runPineconeQuery( query: number[], k: number, filter?: PineconeMetadata, options?: { includeValues: boolean } ) { if (filter && this.filter) { throw new Error("cannot provide both `filter` and `this.filter`"); } const _filter = filter ?? this.filter; const namespace = this.pineconeIndex.namespace(this.namespace ?? ""); const results = await namespace.query({ includeMetadata: true, topK: k, vector: query, filter: _filter, ...options, }); return results; } /** * Method that performs a similarity search in the Pinecone database and * returns the results along with their scores. * @param query Query vector for the similarity search. * @param k Number of top results to return. * @param filter Optional filter to apply to the search. * @returns Promise that resolves with an array of documents and their scores. */ async similaritySearchVectorWithScore( query: number[], k: number, filter?: PineconeMetadata ): Promise<[Document, number][]> { const results = await this._runPineconeQuery(query, k, filter); const result: [Document, number][] = []; if (results.matches) { for (const res of results.matches) { const { [this.textKey]: pageContent, ...metadata } = (res.metadata ?? {}) as PineconeMetadata; if (res.score) { result.push([new Document({ metadata, pageContent }), res.score]); } } } return result; } /** * Return documents selected using the maximal marginal relevance. * Maximal marginal relevance optimizes for similarity to the query AND diversity * among selected documents. * * @param {string} query - Text to look up documents similar to. * @param {number} options.k - Number of documents to return. * @param {number} options.fetchK=20 - Number of documents to fetch before passing to the MMR algorithm. * @param {number} options.lambda=0.5 - Number between 0 and 1 that determines the degree of diversity among the results, * where 0 corresponds to maximum diversity and 1 to minimum diversity. * @param {PineconeMetadata} options.filter - Optional filter to apply to the search. * * @returns {Promise<Document[]>} - List of documents selected by maximal marginal relevance. */ async maxMarginalRelevanceSearch( query: string, options: MaxMarginalRelevanceSearchOptions<this["FilterType"]> ): Promise<Document[]> { const queryEmbedding = await this.embeddings.embedQuery(query); const results = await this._runPineconeQuery( queryEmbedding, options.fetchK ?? 20, options.filter, { includeValues: true } ); const matches = results?.matches ?? []; const embeddingList = matches.map((match) => match.values); const mmrIndexes = maximalMarginalRelevance( queryEmbedding, embeddingList, options.lambda, options.k ); const topMmrMatches = mmrIndexes.map((idx) => matches[idx]); const finalResult: Document[] = []; for (const res of topMmrMatches) { const { [this.textKey]: pageContent, ...metadata } = (res.metadata ?? 
{}) as PineconeMetadata;
      if (res.score) {
        finalResult.push(new Document({ metadata, pageContent }));
      }
    }
    return finalResult;
  }

  /**
   * Static method that creates a new instance of the PineconeStore class
   * from texts.
   * @param texts Array of texts to add to the Pinecone database.
   * @param metadatas Metadata associated with the texts.
   * @param embeddings Embeddings to use for the texts.
   * @param dbConfig Configuration for the Pinecone database.
   * @returns Promise that resolves with a new instance of the PineconeStore class.
   */
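// Usage sketch (separate application module) for the PineconeStore above. The import
// paths, index name, and namespace are assumptions; the Pinecone client is expected to
// pick up PINECONE_API_KEY from the environment.
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { PineconeStore } from "@langchain/pinecone";

const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index("langchain-demo");

const store = new PineconeStore(new OpenAIEmbeddings(), {
  pineconeIndex,
  namespace: "example",
  textKey: "text",
});

// Metadata is flattened before upsert, so nested objects become dotted keys.
const ids = await store.addDocuments([
  new Document({ pageContent: "Pinecone stores dense vectors", metadata: { topic: "infra", tags: ["pinecone"] } }),
  new Document({ pageContent: "MMR balances relevance and diversity", metadata: { topic: "search", tags: ["mmr"] } }),
]);

const similar = await store.similaritySearch("vector database", 1, { topic: "infra" });
const diverse = await store.maxMarginalRelevanceSearch("vector database", { k: 2, fetchK: 10 });
console.log(similar, diverse);

// Clean up the documents we just inserted.
await store.delete({ ids });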
export class Milvus extends VectorStore { get lc_secrets(): { [key: string]: string } { return { ssl: "MILVUS_SSL", username: "MILVUS_USERNAME", password: "MILVUS_PASSWORD", }; } _vectorstoreType(): string { return "milvus"; } declare FilterType: string; collectionName: string; partitionName?: string; numDimensions?: number; autoId?: boolean; primaryField: string; vectorField: string; textField: string; textFieldMaxLength: number; partitionKey?: string; partitionKeyMaxLength?: number; fields: string[]; client: MilvusClient; indexCreateParams: IndexCreateOptions; indexSearchParams: keyValueObj; constructor(public embeddings: EmbeddingsInterface, args: MilvusLibArgs) { super(embeddings, args); this.collectionName = args.collectionName ?? genCollectionName(); this.partitionName = args.partitionName; this.textField = args.textField ?? MILVUS_TEXT_FIELD_NAME; this.autoId = args.autoId ?? true; this.primaryField = args.primaryField ?? MILVUS_PRIMARY_FIELD_NAME; this.vectorField = args.vectorField ?? MILVUS_VECTOR_FIELD_NAME; this.textFieldMaxLength = args.textFieldMaxLength ?? 0; this.partitionKey = args.partitionKey; this.partitionKeyMaxLength = args.partitionKeyMaxLength ?? MILVUS_PARTITION_KEY_MAX_LENGTH; this.fields = []; const url = args.url ?? getEnvironmentVariable("MILVUS_URL"); const { address = "", username = "", password = "", ssl, } = args.clientConfig || {}; // Index creation parameters const { indexCreateOptions } = args; if (indexCreateOptions) { const { metric_type, index_type, params, search_params = {}, } = indexCreateOptions; this.indexCreateParams = { metric_type, index_type, params, }; this.indexSearchParams = { ...DEFAULT_INDEX_SEARCH_PARAMS[index_type].params, ...search_params, }; } else { // Default index creation parameters. this.indexCreateParams = { index_type: "HNSW", metric_type: "L2", params: { M: 8, efConstruction: 64 }, }; // Default index search parameters. this.indexSearchParams = { ...DEFAULT_INDEX_SEARCH_PARAMS.HNSW.params, }; } // combine args clientConfig and env variables const clientConfig: ClientConfig = { ...(args.clientConfig || {}), address: url || address, username: args.username || username, password: args.password || password, ssl: args.ssl || ssl, }; if (!clientConfig.address) { throw new Error("Milvus URL address is not provided."); } this.client = new MilvusClient(clientConfig); } /** * Adds documents to the Milvus database. * @param documents Array of Document instances to be added to the database. * @param options Optional parameter that can include specific IDs for the documents. * @returns Promise resolving to void. */ async addDocuments( documents: Document[], options?: { ids?: string[] } ): Promise<void> { const texts = documents.map(({ pageContent }) => pageContent); await this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Adds vectors to the Milvus database. * @param vectors Array of vectors to be added to the database. * @param documents Array of Document instances associated with the vectors. * @param options Optional parameter that can include specific IDs for the documents. * @returns Promise resolving to void. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } ): Promise<void> { if (vectors.length === 0) { return; } await this.ensureCollection(vectors, documents); if (this.partitionName !== undefined) { await this.ensurePartition(); } const documentIds = options?.ids ?? 
[]; const insertDatas: InsertRow[] = []; // eslint-disable-next-line no-plusplus for (let index = 0; index < vectors.length; index++) { const vec = vectors[index]; const doc = documents[index]; const data: InsertRow = { [this.textField]: doc.pageContent, [this.vectorField]: vec, }; this.fields.forEach((field) => { switch (field) { case this.primaryField: if (documentIds[index] !== undefined) { data[field] = documentIds[index]; } else if (!this.autoId) { if (doc.metadata[this.primaryField] === undefined) { throw new Error( `The Collection's primaryField is configured with autoId=false, thus its value must be provided through metadata.` ); } data[field] = doc.metadata[this.primaryField]; } break; case this.textField: data[field] = doc.pageContent; break; case this.vectorField: data[field] = vec; break; default: // metadata fields if (doc.metadata[field] === undefined) { throw new Error( `The field "${field}" is not provided in documents[${index}].metadata.` ); } else if (typeof doc.metadata[field] === "object") { data[field] = JSON.stringify(doc.metadata[field]); } else { data[field] = doc.metadata[field]; } break; } }); insertDatas.push(data); } const params: InsertReq = { collection_name: this.collectionName, fields_data: insertDatas, }; if (this.partitionName !== undefined) { params.partition_name = this.partitionName; } const insertResp = this.autoId ? await this.client.insert(params) : await this.client.upsert(params); if (insertResp.status.error_code !== ErrorCode.SUCCESS) { throw new Error( `Error ${ this.autoId ? "inserting" : "upserting" } data: ${JSON.stringify(insertResp)}` ); } await this.client.flushSync({ collection_names: [this.collectionName] }); } /** * Searches for vectors in the Milvus database that are similar to a given * vector. * @param query Vector to compare with the vectors in the database. * @param k Number of similar vectors to return. * @param filter Optional filter to apply to the search. * @returns Promise resolving to an array of tuples, each containing a Document instance and a similarity score. */
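// Usage sketch (separate application module): constructing the Milvus store above with
// an explicit URL and explicit index creation options (here simply spelling out an HNSW
// configuration rather than relying on the defaults). The import path, collection name,
// and MILVUS_URL value are assumptions.
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { Milvus } from "@langchain/community/vectorstores/milvus";

const store = new Milvus(new OpenAIEmbeddings(), {
  collectionName: "langchain_demo",
  url: process.env.MILVUS_URL, // e.g. "localhost:19530"
  indexCreateOptions: {
    index_type: "HNSW",
    metric_type: "L2",
    params: { M: 16, efConstruction: 200 },
    search_params: { ef: 64 },
  },
});

// With autoId (the default) Milvus generates primary keys; metadata keys must be
// consistent across documents because they become collection fields.
await store.addDocuments([
  new Document({ pageContent: "Milvus is a purpose-built vector database", metadata: { topic: "infra" } }),
  new Document({ pageContent: "HNSW and IVF_FLAT are common index types", metadata: { topic: "indexing" } }),
]);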
static async fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, dbConfig?: MilvusLibArgs ): Promise<Milvus> { const docs: Document[] = []; for (let i = 0; i < texts.length; i += 1) { const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas; const newDoc = new Document({ pageContent: texts[i], metadata, }); docs.push(newDoc); } return Milvus.fromDocuments(docs, embeddings, dbConfig); } /** * Creates a Milvus instance from a set of Document instances. * @param docs Array of Document instances to be added to the database. * @param embeddings Embeddings instance used to generate vector embeddings for the documents. * @param dbConfig Optional configuration for the Milvus database. * @returns Promise resolving to a new Milvus instance. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig?: MilvusLibArgs ): Promise<Milvus> { const args: MilvusLibArgs = { ...dbConfig, collectionName: dbConfig?.collectionName ?? genCollectionName(), }; const instance = new this(embeddings, args); await instance.addDocuments(docs); return instance; } /** * Creates a Milvus instance from an existing collection in the Milvus * database. * @param embeddings Embeddings instance used to generate vector embeddings for the documents in the collection. * @param dbConfig Configuration for the Milvus database. * @returns Promise resolving to a new Milvus instance. */ static async fromExistingCollection( embeddings: EmbeddingsInterface, dbConfig: MilvusLibArgs ): Promise<Milvus> { const instance = new this(embeddings, dbConfig); await instance.ensureCollection(); return instance; } /** * Deletes data from the Milvus database. * @param params Object containing a filter to apply to the deletion. * @returns Promise resolving to void. 
*/ async delete(params: { filter?: string; ids?: string[] }): Promise<void> { const hasColResp = await this.client.hasCollection({ collection_name: this.collectionName, }); if (hasColResp.status.error_code !== ErrorCode.SUCCESS) { throw new Error(`Error checking collection: ${hasColResp}`); } if (hasColResp.value === false) { throw new Error( `Collection not found: ${this.collectionName}, please create collection before search.` ); } const { filter, ids } = params; if (filter && !ids) { const deleteResp = await this.client.deleteEntities({ collection_name: this.collectionName, expr: filter, }); if (deleteResp.status.error_code !== ErrorCode.SUCCESS) { throw new Error(`Error deleting data: ${JSON.stringify(deleteResp)}`); } } else if (!filter && ids && ids.length > 0) { const deleteResp = await this.client.delete({ collection_name: this.collectionName, ids, }); if (deleteResp.status.error_code !== ErrorCode.SUCCESS) { throw new Error( `Error deleting data with ids: ${JSON.stringify(deleteResp)}` ); } } } } function createFieldTypeForMetadata( documents: Document[], primaryFieldName: string, partitionKey?: string ): FieldType[] { const sampleMetadata = documents[0].metadata; let textFieldMaxLength = 0; let jsonFieldMaxLength = 0; documents.forEach(({ metadata }) => { // check all keys name and count in metadata is same as sampleMetadata Object.keys(metadata).forEach((key) => { if ( !(key in metadata) || typeof metadata[key] !== typeof sampleMetadata[key] ) { throw new Error( "All documents must have same metadata keys and datatype" ); } // find max length of string field and json field, cache json string value if (typeof metadata[key] === "string") { if (metadata[key].length > textFieldMaxLength) { textFieldMaxLength = metadata[key].length; } } else if (typeof metadata[key] === "object") { const json = JSON.stringify(metadata[key]); if (json.length > jsonFieldMaxLength) { jsonFieldMaxLength = json.length; } } }); }); const fields: FieldType[] = []; for (const [key, value] of Object.entries(sampleMetadata)) { const type = typeof value; if (key === primaryFieldName || key === partitionKey) { /** * skip primary field and partition key * because we will create primary field and partition key in createCollection * */ } else if (type === "string") { fields.push({ name: key, description: `Metadata String field`, data_type: DataType.VarChar, type_params: { max_length: textFieldMaxLength.toString(), }, }); } else if (type === "number") { fields.push({ name: key, description: `Metadata Number field`, data_type: DataType.Float, }); } else if (type === "boolean") { fields.push({ name: key, description: `Metadata Boolean field`, data_type: DataType.Bool, }); } else if (value === null) { // skip } else { // use json for other types try { fields.push({ name: key, description: `Metadata JSON field`, data_type: DataType.VarChar, type_params: { max_length: jsonFieldMaxLength.toString(), }, }); } catch (e) { throw new Error("Failed to parse metadata field as JSON"); } } } return fields; } function genCollectionName(): string { return `${MILVUS_COLLECTION_NAME_PREFIX}_${uuid.v4().replaceAll("-", "")}`; } function getTextFieldMaxLength(documents: Document[]) { let textMaxLength = 0; const textEncoder = new TextEncoder(); // eslint-disable-next-line no-plusplus for (let i = 0; i < documents.length; i++) { const text = documents[i].pageContent; const textLengthInBytes = textEncoder.encode(text).length; if (textLengthInBytes > textMaxLength) { textMaxLength = textLengthInBytes; } } return textMaxLength; } function 
getVectorFieldDim(vectors: number[][]) {
  if (vectors.length === 0) {
    throw new Error("No vectors found");
  }
  return vectors[0].length;
}

// eslint-disable-next-line @typescript-eslint/no-explicit-any
function checkJsonString(value: string): { isJson: boolean; obj: any } {
  try {
    const result = JSON.parse(value);
    return { isJson: true, obj: result };
  } catch (e) {
    return { isJson: false, obj: null };
  }
}
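// Usage sketch (separate application module): querying an existing Milvus collection and
// deleting entities, using the static helpers defined above. The import path and
// collection name are assumptions.
import { OpenAIEmbeddings } from "@langchain/openai";
import { Milvus } from "@langchain/community/vectorstores/milvus";

const store = await Milvus.fromExistingCollection(new OpenAIEmbeddings(), {
  collectionName: "langchain_demo",
  url: process.env.MILVUS_URL,
});

const results = await store.similaritySearch("vector database", 2);
console.log(results);

// Delete either by a boolean expression over collection fields or by primary-key ids
// (the two options are mutually exclusive in the delete method above).
await store.delete({ filter: `topic == "infra"` });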
export class ElasticVectorSearch extends VectorStore { declare FilterType: ElasticFilter; private readonly client: Client; private readonly indexName: string; private readonly engine: ElasticKnnEngine; private readonly similarity: ElasticSimilarity; private readonly efConstruction: number; private readonly m: number; private readonly candidates: number; _vectorstoreType(): string { return "elasticsearch"; } constructor(embeddings: EmbeddingsInterface, args: ElasticClientArgs) { super(embeddings, args); this.engine = args.vectorSearchOptions?.engine ?? "hnsw"; this.similarity = args.vectorSearchOptions?.similarity ?? "l2_norm"; this.m = args.vectorSearchOptions?.m ?? 16; this.efConstruction = args.vectorSearchOptions?.efConstruction ?? 100; this.candidates = args.vectorSearchOptions?.candidates ?? 200; this.client = args.client.child({ headers: { "user-agent": "langchain-js-vs/0.0.1" }, }); this.indexName = args.indexName ?? "documents"; } /** * Method to add documents to the Elasticsearch database. It first * converts the documents to vectors using the embeddings, then adds the * vectors to the database. * @param documents The documents to add to the database. * @param options Optional parameter that can contain the IDs for the documents. * @returns A promise that resolves with the IDs of the added documents. */ async addDocuments(documents: Document[], options?: { ids?: string[] }) { const texts = documents.map(({ pageContent }) => pageContent); return this.addVectors( await this.embeddings.embedDocuments(texts), documents, options ); } /** * Method to add vectors to the Elasticsearch database. It ensures the * index exists, then adds the vectors and their corresponding documents * to the database. * @param vectors The vectors to add to the database. * @param documents The documents corresponding to the vectors. * @param options Optional parameter that can contain the IDs for the documents. * @returns A promise that resolves with the IDs of the added documents. */ async addVectors( vectors: number[][], documents: Document[], options?: { ids?: string[] } ) { await this.ensureIndexExists( vectors[0].length, this.engine, this.similarity, this.efConstruction, this.m ); const documentIds = options?.ids ?? Array.from({ length: vectors.length }, () => uuid.v4()); const operations = vectors.flatMap((embedding, idx) => [ { index: { _id: documentIds[idx], _index: this.indexName, }, }, { embedding, metadata: documents[idx].metadata, text: documents[idx].pageContent, }, ]); const results = await this.client.bulk({ refresh: true, operations }); if (results.errors) { const reasons = results.items.map( (result) => result.index?.error?.reason ); throw new Error(`Failed to insert documents:\n${reasons.join("\n")}`); } return documentIds; } /** * Method to perform a similarity search in the Elasticsearch database * using a vector. It returns the k most similar documents along with * their similarity scores. * @param query The query vector. * @param k The number of most similar documents to return. * @param filter Optional filter to apply to the search. * @returns A promise that resolves with an array of tuples, where each tuple contains a Document and its similarity score. 
*/ async similaritySearchVectorWithScore( query: number[], k: number, filter?: ElasticFilter ): Promise<[Document, number][]> { const result = await this.client.search({ index: this.indexName, size: k, knn: { field: "embedding", query_vector: query, filter: { bool: this.buildMetadataTerms(filter) }, k, num_candidates: this.candidates, }, }); // eslint-disable-next-line @typescript-eslint/no-explicit-any return result.hits.hits.map((hit: any) => [ new Document({ pageContent: hit._source.text, metadata: hit._source.metadata, }), hit._score, ]); } /** * Method to delete documents from the Elasticsearch database. * @param params Object containing the IDs of the documents to delete. * @returns A promise that resolves when the deletion is complete. */ async delete(params: { ids: string[] }): Promise<void> { const operations = params.ids.map((id) => ({ delete: { _id: id, _index: this.indexName, }, })); if (operations.length > 0) await this.client.bulk({ refresh: true, operations }); } /** * Static method to create an ElasticVectorSearch instance from texts. It * creates Document instances from the texts and their corresponding * metadata, then calls the fromDocuments method to create the * ElasticVectorSearch instance. * @param texts The texts to create the ElasticVectorSearch instance from. * @param metadatas The metadata corresponding to the texts. * @param embeddings The embeddings to use for the documents. * @param args The arguments to create the Elasticsearch client. * @returns A promise that resolves with the created ElasticVectorSearch instance. */ static fromTexts( texts: string[], metadatas: object[] | object, embeddings: EmbeddingsInterface, args: ElasticClientArgs ): Promise<ElasticVectorSearch> { const documents = texts.map((text, idx) => { const metadata = Array.isArray(metadatas) ? metadatas[idx] : metadatas; return new Document({ pageContent: text, metadata }); }); return ElasticVectorSearch.fromDocuments(documents, embeddings, args); } /** * Static method to create an ElasticVectorSearch instance from Document * instances. It adds the documents to the Elasticsearch database, then * returns the ElasticVectorSearch instance. * @param docs The Document instances to create the ElasticVectorSearch instance from. * @param embeddings The embeddings to use for the documents. * @param dbConfig The configuration for the Elasticsearch database. * @returns A promise that resolves with the created ElasticVectorSearch instance. */ static async fromDocuments( docs: Document[], embeddings: EmbeddingsInterface, dbConfig: ElasticClientArgs ): Promise<ElasticVectorSearch> { const store = new ElasticVectorSearch(embeddings, dbConfig); await store.addDocuments(docs).then(() => store); return store; } /** * Static method to create an ElasticVectorSearch instance from an * existing index in the Elasticsearch database. It checks if the index * exists, then returns the ElasticVectorSearch instance if it does. * @param embeddings The embeddings to use for the documents. * @param dbConfig The configuration for the Elasticsearch database. * @returns A promise that resolves with the created ElasticVectorSearch instance if the index exists, otherwise it throws an error. 
*/ static async fromExistingIndex( embeddings: EmbeddingsInterface, dbConfig: ElasticClientArgs ): Promise<ElasticVectorSearch> { const store = new ElasticVectorSearch(embeddings, dbConfig); const exists = await store.doesIndexExist(); if (exists) { return store; } throw new Error(`The index ${store.indexName} does not exist.`); } private async ensureIndexExists( dimension: number, engine = "hnsw", similarity = "l2_norm", efConstruction = 100, m = 16 ): Promise<void> { const request: estypes.IndicesCreateRequest = { index: this.indexName, mappings: { dynamic_templates: [ { // map all metadata properties to be keyword except loc metadata_except_loc: { match_mapping_type: "*", match: "metadata.*", unmatch: "metadata.loc", mapping: { type: "keyword" }, }, }, ], properties: { text: { type: "text" }, metadata: { type: "object", properties: { loc: { type: "object" }, // explicitly define loc as an object }, }, embedding: { type: "dense_vector", dims: dimension, index: true, similarity, index_options: { type: engine, m, ef_construction: efConstruction, }, }, }, }, }; const indexExists = await this.doesIndexExist(); if (indexExists) return; await this.client.indices.create(request); }
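// Usage sketch (separate application module) for the ElasticVectorSearch above. The node
// URL, credentials, and index name are assumptions; vectorSearchOptions falls back to
// hnsw / l2_norm when omitted.
import { Client } from "@elastic/elasticsearch";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ElasticVectorSearch } from "@langchain/community/vectorstores/elasticsearch";

const client = new Client({
  node: process.env.ELASTIC_URL ?? "http://localhost:9200",
  auth: process.env.ELASTIC_API_KEY
    ? { apiKey: process.env.ELASTIC_API_KEY }
    : undefined,
});

const store = new ElasticVectorSearch(new OpenAIEmbeddings(), {
  client,
  indexName: "langchain_demo",
});

const ids = await store.addDocuments([
  new Document({ pageContent: "Elasticsearch supports approximate kNN search", metadata: { source: "docs" } }),
  new Document({ pageContent: "Dense vectors are indexed with HNSW", metadata: { source: "blog" } }),
]);

// Metadata fields are mapped as keywords, so exact-match filters work out of the box.
const results = await store.similaritySearch("knn search", 1, { source: "docs" });
console.log(results);

await store.delete({ ids });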
import { test, expect } from "@jest/globals"; import * as fs from "node:fs/promises"; import * as path from "node:path"; import * as os from "node:os"; import { OpenAIEmbeddings } from "@langchain/openai"; import { Document } from "@langchain/core/documents"; import { HNSWLib } from "../hnswlib.js"; test("Test HNSWLib.fromTexts", async () => { const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings() ); expect(vectorStore.index?.getCurrentCount()).toBe(3); const resultOne = await vectorStore.similaritySearch("hello world", 1); const resultOneMetadatas = resultOne.map(({ metadata }) => metadata); expect(resultOneMetadatas).toEqual([{ id: 2 }]); const resultTwo = await vectorStore.similaritySearch("hello world", 3); const resultTwoMetadatas = resultTwo.map(({ metadata }) => metadata); expect(resultTwoMetadatas).toEqual([{ id: 2 }, { id: 3 }, { id: 1 }]); }); test("Test HNSWLib.fromTexts + addDocuments", async () => { const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings() ); expect(vectorStore.index?.getMaxElements()).toBe(3); expect(vectorStore.index?.getCurrentCount()).toBe(3); await vectorStore.addDocuments([ new Document({ pageContent: "hello worldklmslksmn", metadata: { id: 4 }, }), ]); expect(vectorStore.index?.getMaxElements()).toBe(4); const resultTwo = await vectorStore.similaritySearch("hello world", 3); const resultTwoMetadatas = resultTwo.map(({ metadata }) => metadata); expect(resultTwoMetadatas).toEqual([{ id: 2 }, { id: 3 }, { id: 4 }]); }); test("Test HNSWLib.load, HNSWLib.save, and HNSWLib.delete", async () => { const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings() ); expect(vectorStore.index?.getCurrentCount()).toBe(3); const resultOne = await vectorStore.similaritySearch("hello world", 1); const resultOneMetadatas = resultOne.map(({ metadata }) => metadata); expect(resultOneMetadatas).toEqual([{ id: 2 }]); const resultTwo = await vectorStore.similaritySearch("hello world", 3); const resultTwoMetadatas = resultTwo.map(({ metadata }) => metadata); expect(resultTwoMetadatas).toEqual([{ id: 2 }, { id: 3 }, { id: 1 }]); const tempDirectory = await fs.mkdtemp(path.join(os.tmpdir(), "lcjs-")); // console.log(tempDirectory); await vectorStore.save(tempDirectory); const loadedVectorStore = await HNSWLib.load( tempDirectory, new OpenAIEmbeddings() ); const resultThree = await loadedVectorStore.similaritySearch( "hello world", 1 ); const resultThreeMetadatas = resultThree.map(({ metadata }) => metadata); expect(resultThreeMetadatas).toEqual([{ id: 2 }]); const resultFour = await loadedVectorStore.similaritySearch("hello world", 3); const resultFourMetadatas = resultFour.map(({ metadata }) => metadata); expect(resultFourMetadatas).toEqual([{ id: 2 }, { id: 3 }, { id: 1 }]); await loadedVectorStore.delete({ directory: tempDirectory, }); await expect(async () => { await HNSWLib.load(tempDirectory, new OpenAIEmbeddings()); }).rejects.toThrow(); });
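// Usage sketch outside of a test: the same save/load round trip exercised in the tests
// above, written as application code. The import path and on-disk directory are
// assumptions.
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";

const store = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Persist the index and docstore to disk, then restore it in a later process.
await store.save("./hnswlib-index");
const restored = await HNSWLib.load("./hnswlib-index", new OpenAIEmbeddings());
console.log(await restored.similaritySearch("hello world", 1));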
/* eslint-disable no-process-env */ /* eslint-disable @typescript-eslint/no-non-null-assertion */ import { beforeEach, describe, expect, test } from "@jest/globals"; import { ChromaClient } from "chromadb"; import { faker } from "@faker-js/faker"; import * as uuid from "uuid"; import { Document } from "@langchain/core/documents"; import { OpenAIEmbeddings } from "@langchain/openai"; import { Chroma } from "../chroma.js"; describe.skip("Chroma", () => { let chromaStore: Chroma; beforeEach(async () => { const embeddings = new OpenAIEmbeddings(); chromaStore = new Chroma(embeddings, { url: "http://localhost:8000", collectionName: "test-collection", }); }); test.skip("auto-generated ids", async () => { const pageContent = faker.lorem.sentence(5); await chromaStore.addDocuments([{ pageContent, metadata: { foo: "bar" } }]); const results = await chromaStore.similaritySearch(pageContent, 1); expect(results).toEqual([ new Document({ metadata: { foo: "bar" }, pageContent }), ]); }); test.skip("metadata filtering", async () => { const pageContent = faker.lorem.sentence(5); const id = uuid.v4(); await chromaStore.addDocuments([ { pageContent, metadata: { foo: "bar" } }, { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: "qux" } }, ]); // If the filter wasn't working, we'd get all 3 documents back const results = await chromaStore.similaritySearch(pageContent, 3, { foo: id, }); expect(results).toEqual([ new Document({ metadata: { foo: id }, pageContent }), ]); }); test.skip("upsert", async () => { const pageContent = faker.lorem.sentence(5); const id = uuid.v4(); const ids = await chromaStore.addDocuments([ { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: id } }, ]); const results = await chromaStore.similaritySearch(pageContent, 4, { foo: id, }); expect(results.length).toEqual(2); const ids2 = await chromaStore.addDocuments( [ { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: id } }, ], { ids } ); expect(ids).toEqual(ids2); const newResults = await chromaStore.similaritySearch(pageContent, 4, { foo: id, }); expect(newResults.length).toEqual(2); }); test.skip("delete by ids", async () => { const pageContent = faker.lorem.sentence(5); const id = uuid.v4(); const ids = await chromaStore.addDocuments([ { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: id } }, ]); const results = await chromaStore.similaritySearch(pageContent, 2, { foo: id, }); expect(results.length).toEqual(2); await chromaStore.delete({ ids: ids.slice(0, 1) }); const newResults = await chromaStore.similaritySearch(pageContent, 2, { foo: id, }); expect(newResults.length).toEqual(1); }); test.skip("delete by filter", async () => { const pageContent = faker.lorem.sentence(5); const id = uuid.v4(); const id2 = uuid.v4(); await chromaStore.addDocuments([ { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: id, bar: id2 } }, ]); const results = await chromaStore.similaritySearch(pageContent, 2, { foo: id, }); expect(results.length).toEqual(2); await chromaStore.delete({ filter: { bar: id2, }, }); const newResults = await chromaStore.similaritySearch(pageContent, 2, { foo: id, }); expect(newResults.length).toEqual(1); }); test.skip("load from client instance", async () => { const pageContent = faker.lorem.sentence(5); const id = uuid.v4(); const chromaStoreFromClient = new Chroma(new OpenAIEmbeddings(), { index: new ChromaClient({ path: "http://localhost:8000", }), collectionName: "test-collection", }); await 
chromaStoreFromClient.addDocuments([ { pageContent, metadata: { foo: "bar" } }, { pageContent, metadata: { foo: id } }, { pageContent, metadata: { foo: "qux" } }, ]); const results = await chromaStoreFromClient.similaritySearch( pageContent, 3 ); expect(results.length).toEqual(3); }); });
import { test, expect } from "@jest/globals"; import { Document } from "@langchain/core/documents"; import { OpenAIEmbeddings } from "@langchain/openai"; import { getEnvironmentVariable } from "@langchain/core/utils/env"; import { TurbopufferVectorStore } from "../turbopuffer.js"; beforeEach(async () => { const embeddings = new OpenAIEmbeddings(); const store = new TurbopufferVectorStore(embeddings, { apiKey: getEnvironmentVariable("TURBOPUFFER_API_KEY"), namespace: "langchain-js-testing", }); await store.delete({ deleteIndex: true, }); }); test("similaritySearchVectorWithScore", async () => { const embeddings = new OpenAIEmbeddings(); const store = new TurbopufferVectorStore(embeddings, { apiKey: getEnvironmentVariable("TURBOPUFFER_API_KEY"), namespace: "langchain-js-testing", }); expect(store).toBeDefined(); const createdAt = new Date().toString(); await store.addDocuments([ { pageContent: createdAt.toString(), metadata: { a: createdAt } }, { pageContent: "hi", metadata: { a: createdAt } }, { pageContent: "bye", metadata: { a: createdAt } }, { pageContent: "what's this", metadata: { a: createdAt } }, ]); // console.log("added docs"); const results = await store.similaritySearch(createdAt.toString(), 1); expect(results).toHaveLength(1); expect(results).toEqual([ new Document({ metadata: { a: createdAt }, pageContent: createdAt.toString(), }), ]); }); test("similaritySearch with a passed filter", async () => { const embeddings = new OpenAIEmbeddings(); const store = new TurbopufferVectorStore(embeddings, { apiKey: getEnvironmentVariable("TURBOPUFFER_API_KEY"), namespace: "langchain-js-testing", }); expect(store).toBeDefined(); const createdAt = new Date().getTime(); await store.addDocuments([ { pageContent: "hello 0", metadata: { created_at: createdAt.toString() } }, { pageContent: "hello 1", metadata: { created_at: (createdAt + 1).toString() }, }, { pageContent: "hello 2", metadata: { created_at: (createdAt + 2).toString() }, }, { pageContent: "hello 3", metadata: { created_at: (createdAt + 3).toString() }, }, ]); const results = await store.similaritySearch("hello", 1, { created_at: [["Eq", (createdAt + 2).toString()]], }); expect(results).toHaveLength(1); expect(results).toEqual([ new Document({ metadata: { created_at: (createdAt + 2).toString() }, pageContent: "hello 2", }), ]); }); test("Should drop metadata keys from docs with non-string metadata", async () => { const embeddings = new OpenAIEmbeddings(); const store = new TurbopufferVectorStore(embeddings, { apiKey: getEnvironmentVariable("TURBOPUFFER_API_KEY"), namespace: "langchain-js-testing", }); expect(store).toBeDefined(); const createdAt = new Date().getTime(); await store.addDocuments([ { pageContent: "hello 0", metadata: { created_at: { time: createdAt.toString() } }, }, { pageContent: "goodbye", metadata: { created_at: (createdAt + 1).toString() }, }, ]); const results = await store.similaritySearch("hello", 1, { created_at: [["Eq", createdAt.toString()]], }); expect(results).toHaveLength(0); const results2 = await store.similaritySearch("hello", 1); expect(results2).toEqual([ new Document({ metadata: { created_at: null, }, pageContent: "hello 0", }), ]); });
import { Document } from "@langchain/core/documents"; interface Metadata { name: string; date: string; count: number; is_active: boolean; tags: string[]; location: number[]; id: number; height: number | null; happiness: number | null; sadness?: number; } const metadatas: Metadata[] = [ { name: "adam", date: "2021-01-01", count: 1, is_active: true, tags: ["a", "b"], location: [1.0, 2.0], id: 1, height: 10.0, happiness: 0.9, sadness: 0.1, }, { name: "bob", date: "2021-01-02", count: 2, is_active: false, tags: ["b", "c"], location: [2.0, 3.0], id: 2, height: 5.7, happiness: 0.8, sadness: 0.1, }, { name: "jane", date: "2021-01-01", count: 3, is_active: true, tags: ["b", "d"], location: [3.0, 4.0], id: 3, height: 2.4, happiness: null, }, ]; const texts: string[] = metadatas.map((metadata) => `id ${metadata.id} `); export const DOCUMENTS: Document[] = texts.map( (text, index) => new Document({ pageContent: text, metadata: metadatas[index] }) ); interface TestCase { // eslint-disable-next-line @typescript-eslint/no-explicit-any filter: Record<string, any>; expected: number[]; } export const TYPE_1_FILTERING_TEST_CASES: TestCase[] = [ { filter: { id: 1 }, expected: [1] }, { filter: { name: "adam" }, expected: [1] }, { filter: { is_active: true }, expected: [1, 3] }, { filter: { is_active: false }, expected: [2] }, { filter: { id: 1, is_active: true }, expected: [1] }, { filter: { id: 1, is_active: false }, expected: [] }, ]; export const TYPE_2_FILTERING_TEST_CASES: TestCase[] = [ { filter: { id: 1 }, expected: [1] }, { filter: { id: { $ne: 1 } }, expected: [2, 3] }, { filter: { id: { $gt: 1 } }, expected: [2, 3] }, { filter: { id: { $gte: 1 } }, expected: [1, 2, 3] }, { filter: { id: { $lt: 1 } }, expected: [] }, { filter: { id: { $lte: 1 } }, expected: [1] }, { filter: { name: "adam" }, expected: [1] }, { filter: { name: "bob" }, expected: [2] }, { filter: { name: { $eq: "adam" } }, expected: [1] }, { filter: { name: { $ne: "adam" } }, expected: [2, 3] }, { filter: { name: { $gt: "jane" } }, expected: [] }, { filter: { name: { $gte: "jane" } }, expected: [3] }, { filter: { name: { $lt: "jane" } }, expected: [1, 2] }, { filter: { name: { $lte: "jane" } }, expected: [1, 2, 3] }, { filter: { is_active: { $eq: true } }, expected: [1, 3] }, { filter: { is_active: { $ne: true } }, expected: [2] }, { filter: { height: { $gt: 5.0 } }, expected: [1, 2] }, { filter: { height: { $gte: 5.0 } }, expected: [1, 2] }, { filter: { height: { $lt: 5.0 } }, expected: [3] }, { filter: { height: { $lte: 5.8 } }, expected: [2, 3] }, ]; export const TYPE_3_FILTERING_TEST_CASES: TestCase[] = [ { filter: { $or: [{ id: 1 }, { id: 2 }] }, expected: [1, 2] }, { filter: { $or: [{ id: 1 }, { name: "bob" }] }, expected: [1, 2] }, { filter: { $and: [{ id: 1 }, { id: 2 }] }, expected: [] }, { filter: { $or: [{ id: 1 }, { id: 2 }, { id: 3 }] }, expected: [1, 2, 3] }, ]; export const TYPE_4_FILTERING_TEST_CASES: TestCase[] = [ { filter: { id: { $between: [1, 2] } }, expected: [1, 2] }, { filter: { id: { $between: [1, 1] } }, expected: [1] }, { filter: { name: { $in: ["adam", "bob"] } }, expected: [1, 2] }, ]; export const TYPE_5_FILTERING_TEST_CASES: TestCase[] = [ { filter: { name: { $like: "a%" } }, expected: [1] }, { filter: { name: { $like: "%a%" } }, expected: [1, 3] }, ];
test.skip("Test metadata filters", async () => { const url = process.env.NEO4J_URI as string; const username = process.env.NEO4J_USERNAME as string; const password = process.env.NEO4J_PASSWORD as string; expect(url).toBeDefined(); expect(username).toBeDefined(); expect(password).toBeDefined(); const docsearch = await Neo4jVectorStore.fromDocuments( DOCUMENTS, new FakeEmbeddings(), { url, username, password, indexName: "vector", preDeleteCollection: true, } ); const examples = [ ...TYPE_1_FILTERING_TEST_CASES, ...TYPE_2_FILTERING_TEST_CASES, ...TYPE_3_FILTERING_TEST_CASES, ...TYPE_4_FILTERING_TEST_CASES, ]; for (const example of examples) { const { filter, expected } = example; const output = await docsearch.similaritySearch("Foo", 4, { filter }); const adjustedIndices = expected.map((index) => index - 1); const expectedOutput = adjustedIndices.map((index) => DOCUMENTS[index]); // We don't return id properties from similarity search by default // Also remove any key where the value is null for (const doc of expectedOutput) { if ("id" in doc.metadata) { delete doc.metadata.id; } const keysWithNull = Object.keys(doc.metadata).filter( (key) => doc.metadata[key] === null ); for (const key of keysWithNull) { delete doc.metadata[key]; } } // console.log("OUTPUT:", output); // console.log("EXPECTED OUTPUT:", expectedOutput); expect(output.length).toEqual(expectedOutput.length); expect(output).toEqual(expect.arrayContaining(expectedOutput)); } }); });
/* eslint-disable no-process-env */ import { test, expect } from "@jest/globals"; import { Client, ClientOptions } from "@elastic/elasticsearch"; import { OpenAIEmbeddings } from "@langchain/openai"; import { Document } from "@langchain/core/documents"; import { ElasticVectorSearch } from "../elasticsearch.js"; describe("ElasticVectorSearch", () => { let store: ElasticVectorSearch; beforeEach(async () => { if (!process.env.ELASTIC_URL) { throw new Error("ELASTIC_URL not set"); } const config: ClientOptions = { node: process.env.ELASTIC_URL, }; if (process.env.ELASTIC_API_KEY) { config.auth = { apiKey: process.env.ELASTIC_API_KEY, }; } else if (process.env.ELASTIC_USERNAME && process.env.ELASTIC_PASSWORD) { config.auth = { username: process.env.ELASTIC_USERNAME, password: process.env.ELASTIC_PASSWORD, }; } const client = new Client(config); const indexName = "test_index"; const embeddings = new OpenAIEmbeddings(); store = new ElasticVectorSearch(embeddings, { client, indexName }); await store.deleteIfExists(); expect(store).toBeDefined(); }); test.skip("ElasticVectorSearch integration", async () => { const createdAt = new Date().getTime(); const ids = await store.addDocuments([ { pageContent: "hello", metadata: { a: createdAt + 1 } }, { pageContent: "car", metadata: { a: createdAt } }, { pageContent: "adjective", metadata: { a: createdAt } }, { pageContent: "hi", metadata: { a: createdAt } }, ]); const results1 = await store.similaritySearch("hello!", 1); expect(results1).toHaveLength(1); expect(results1).toEqual([ new Document({ metadata: { a: createdAt + 1 }, pageContent: "hello" }), ]); const results2 = await store.similaritySearchWithScore("testing!", 6, { a: createdAt, }); expect(results2).toHaveLength(3); const ids2 = await store.addDocuments( [ { pageContent: "hello upserted", metadata: { a: createdAt + 1 } }, { pageContent: "car upserted", metadata: { a: createdAt } }, { pageContent: "adjective upserted", metadata: { a: createdAt } }, { pageContent: "hi upserted", metadata: { a: createdAt } }, ], { ids } ); expect(ids).toEqual(ids2); const results3 = await store.similaritySearchWithScore("testing!", 6, { a: createdAt, }); expect(results3).toHaveLength(3); // console.log(`Upserted:`, results3); await store.delete({ ids: ids.slice(2) }); const results4 = await store.similaritySearchWithScore("testing!", 3, { a: createdAt, }); expect(results4).toHaveLength(1); }); test.skip("ElasticVectorSearch integration with more than 10 documents", async () => { const createdAt = new Date().getTime(); await store.addDocuments([ { pageContent: "pretty", metadata: { a: createdAt + 1 } }, { pageContent: "intelligent", metadata: { a: createdAt } }, { pageContent: "creative", metadata: { a: createdAt } }, { pageContent: "courageous", metadata: { a: createdAt } }, { pageContent: "energetic", metadata: { a: createdAt } }, { pageContent: "patient", metadata: { a: createdAt } }, { pageContent: "responsible", metadata: { a: createdAt } }, { pageContent: "friendly", metadata: { a: createdAt } }, { pageContent: "confident", metadata: { a: createdAt } }, { pageContent: "generous", metadata: { a: createdAt } }, { pageContent: "compassionate", metadata: { a: createdAt } }, ]); const results = await store.similaritySearch("*", 11); expect(results).toHaveLength(11); const results2 = await store.similaritySearch("*", 11, [ { field: "a", value: createdAt, operator: "exclude", }, ]); expect(results2).toHaveLength(1); const results3 = await store.similaritySearch("*", 11, [ { field: "a", value: [createdAt], operator: 
"exclude", }, ]); expect(results3).toHaveLength(1); }); test.skip("ElasticVectorSearch integration with text splitting metadata", async () => { const createdAt = new Date().getTime(); const documents = [ new Document({ pageContent: "hello", metadata: { a: createdAt, loc: { lines: { from: 1, to: 1 } } }, }), new Document({ pageContent: "car", metadata: { a: createdAt, loc: { lines: { from: 2, to: 2 } } }, }), ]; await store.addDocuments(documents); const results1 = await store.similaritySearch("hello!", 1); expect(results1).toHaveLength(1); expect(results1).toEqual([ new Document({ metadata: { a: createdAt, loc: { lines: { from: 1, to: 1 } } }, pageContent: "hello", }), ]); }); });
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import TextLoader

loader = TextLoader('../../../../../../examples/state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs)

db.save_local("faiss_index")
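// A rough LangChain.js counterpart of the Python snippet above (a sketch, not a verified
// translation): split a text file, embed it into a FAISS index, run a similarity search,
// and persist the index to disk. Import paths, the FaissStore API surface, and file
// locations are assumptions to verify against your installed packages.
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { FaissStore } from "@langchain/community/vectorstores/faiss";

const loader = new TextLoader("./examples/state_of_the_union.txt");
const documents = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 0 });
const docs = await splitter.splitDocuments(documents);

const db = await FaissStore.fromDocuments(docs, new OpenAIEmbeddings());
const results = await db.similaritySearch(
  "What did the president say about Ketanji Brown Jackson"
);
console.log(results);

await db.save("faiss_index");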
import { test, expect } from "@jest/globals"; import * as url from "node:url"; import * as path from "node:path"; import { PDFLoader } from "../fs/pdf.js"; test("Test PDF loader from file", async () => { const filePath = path.resolve( path.dirname(url.fileURLToPath(import.meta.url)), "./example_data/1706.03762.pdf" ); const loader = new PDFLoader(filePath); const docs = await loader.load(); expect(docs.length).toBe(15); expect(docs[0].pageContent).toContain("Attention Is All You Need"); }); test("Test PDF loader from file to single document", async () => { const filePath = path.resolve( path.dirname(url.fileURLToPath(import.meta.url)), "./example_data/1706.03762.pdf" ); const loader = new PDFLoader(filePath, { splitPages: false }); const docs = await loader.load(); expect(docs.length).toBe(1); expect(docs[0].pageContent).toContain("Attention Is All You Need"); }); test("Test PDF loader should not create documents with excessive newlines", async () => { const filePath = path.resolve( path.dirname(url.fileURLToPath(import.meta.url)), "./example_data/Jacob_Lee_Resume_2023.pdf" ); const loader = new PDFLoader(filePath, { splitPages: false }); const docs = await loader.load(); expect(docs.length).toBe(1); expect(docs[0].pageContent.split("\n").length).toBeLessThan(100); });
import { Document } from "@langchain/core/documents"; import { BufferLoader } from "langchain/document_loaders/fs/buffer"; /** * A class that extends the `BufferLoader` class. It represents a document * loader that loads documents from PDF files. * @example * ```typescript * const loader = new PDFLoader("path/to/bitcoin.pdf"); * const docs = await loader.load(); * console.log({ docs }); * ``` */ export class PDFLoader extends BufferLoader { private splitPages: boolean; private pdfjs: typeof PDFLoaderImports; protected parsedItemSeparator: string; constructor( filePathOrBlob: string | Blob, { splitPages = true, pdfjs = PDFLoaderImports, parsedItemSeparator = "", } = {} ) { super(filePathOrBlob); this.splitPages = splitPages; this.pdfjs = pdfjs; this.parsedItemSeparator = parsedItemSeparator; } /** * A method that takes a `raw` buffer and `metadata` as parameters and * returns a promise that resolves to an array of `Document` instances. It * uses the `getDocument` function from the PDF.js library to load the PDF * from the buffer. It then iterates over each page of the PDF, retrieves * the text content using the `getTextContent` method, and joins the text * items to form the page content. It creates a new `Document` instance * for each page with the extracted text content and metadata, and adds it * to the `documents` array. If `splitPages` is `true`, it returns the * array of `Document` instances. Otherwise, if there are no documents, it * returns an empty array. Otherwise, it concatenates the page content of * all documents and creates a single `Document` instance with the * concatenated content. * @param raw The buffer to be parsed. * @param metadata The metadata of the document. * @returns A promise that resolves to an array of `Document` instances. 
*/ public async parse( raw: Buffer, metadata: Document["metadata"] ): Promise<Document[]> { const { getDocument, version } = await this.pdfjs(); const pdf = await getDocument({ data: new Uint8Array(raw.buffer), useWorkerFetch: false, isEvalSupported: false, useSystemFonts: true, }).promise; const meta = await pdf.getMetadata().catch(() => null); const documents: Document[] = []; for (let i = 1; i <= pdf.numPages; i += 1) { const page = await pdf.getPage(i); const content = await page.getTextContent(); if (content.items.length === 0) { continue; } // Eliminate excessive newlines // Source: https://github.com/albertcui/pdf-parse/blob/7086fc1cc9058545cdf41dd0646d6ae5832c7107/lib/pdf-parse.js#L16 let lastY; const textItems = []; for (const item of content.items) { if ("str" in item) { if (lastY === item.transform[5] || !lastY) { textItems.push(item.str); } else { textItems.push(`\n${item.str}`); } // eslint-disable-next-line prefer-destructuring lastY = item.transform[5]; } } const text = textItems.join(this.parsedItemSeparator); documents.push( new Document({ pageContent: text, metadata: { ...metadata, pdf: { version, info: meta?.info, metadata: meta?.metadata, totalPages: pdf.numPages, }, loc: { pageNumber: i, }, }, }) ); } if (this.splitPages) { return documents; } if (documents.length === 0) { return []; } return [ new Document({ pageContent: documents.map((doc) => doc.pageContent).join("\n\n"), metadata: { ...metadata, pdf: { version, info: meta?.info, metadata: meta?.metadata, totalPages: pdf.numPages, }, }, }), ]; } } async function PDFLoaderImports() { try { const { default: mod } = await import( "pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js" ); const { getDocument, version } = mod; return { getDocument, version }; } catch (e) { console.error(e); throw new Error( "Failed to load pdf-parse. Please install it with eg. `npm install pdf-parse`." ); } }
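The constructor options map directly onto the behavior of `parse` above: `splitPages: false` concatenates every page into a single `Document`, and `parsedItemSeparator` is the string used to join individual PDF text items. A hedged usage sketch (the file path is a placeholder):

```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

// Collapse the whole PDF into one Document and join text items with a space,
// which can help with PDFs whose text items are split mid-word.
const loader = new PDFLoader("./example_data/some-report.pdf", {
  splitPages: false,
  parsedItemSeparator: " ",
});

const [doc] = await loader.load();
console.log(doc.metadata.pdf.totalPages);
console.log(doc.pageContent.length);
```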
/* eslint-disable no-process-env */ /* eslint-disable @typescript-eslint/no-non-null-assertion */ import { expect, test } from "@jest/globals"; import { SageMakerEndpoint, SageMakerLLMContentHandler, } from "../sagemaker_endpoint.js"; // yarn test:single /{path_to}/langchain/src/llms/tests/sagemaker.int.test.ts describe.skip("Test SageMaker LLM", () => { test("without streaming", async () => { interface ResponseJsonInterface { generation: { content: string; }; } class LLama213BHandler implements SageMakerLLMContentHandler { contentType = "application/json"; accepts = "application/json"; async transformInput( prompt: string, modelKwargs: Record<string, unknown> ): Promise<Uint8Array> { const payload = { inputs: [[{ role: "user", content: prompt }]], parameters: modelKwargs, }; const input_str = JSON.stringify(payload); return new TextEncoder().encode(input_str); } async transformOutput(output: Uint8Array): Promise<string> { const response_json = JSON.parse( new TextDecoder("utf-8").decode(output) ) as ResponseJsonInterface[]; const content = response_json[0]?.generation.content ?? ""; return content; } } const contentHandler = new LLama213BHandler(); const model = new SageMakerEndpoint({ endpointName: "aws-productbot-ai-dev-llama-2-13b-chat", streaming: false, modelKwargs: { temperature: 0.5, max_new_tokens: 700, top_p: 0.9, }, endpointKwargs: { CustomAttributes: "accept_eula=true", }, contentHandler, clientOptions: { region: "us-east-1", credentials: { accessKeyId: process.env.AWS_ACCESS_KEY_ID!, secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!, }, }, }); const response = await model.invoke( "hello, my name is John Doe, tell me a fun story about llamas." ); expect(response.length).toBeGreaterThan(0); }); test("with streaming", async () => { class LLama213BHandler implements SageMakerLLMContentHandler { contentType = "application/json"; accepts = "application/json"; async transformInput( prompt: string, modelKwargs: Record<string, unknown> ): Promise<Uint8Array> { const payload = { inputs: [[{ role: "user", content: prompt }]], parameters: modelKwargs, }; const input_str = JSON.stringify(payload); return new TextEncoder().encode(input_str); } async transformOutput(output: Uint8Array): Promise<string> { return new TextDecoder("utf-8").decode(output); } } const contentHandler = new LLama213BHandler(); const model = new SageMakerEndpoint({ endpointName: "aws-productbot-ai-dev-llama-2-13b-chat", streaming: true, // specify streaming modelKwargs: { temperature: 0.5, max_new_tokens: 700, top_p: 0.9, }, endpointKwargs: { CustomAttributes: "accept_eula=true", }, contentHandler, clientOptions: { region: "us-east-1", credentials: { accessKeyId: process.env.AWS_ACCESS_KEY_ID!, secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!, }, }, }); const response = await model.invoke( "hello, my name is John Doe, tell me a fun story about llamas in 3 paragraphs" ); const chunks = []; for await (const chunk of response) { chunks.push(chunk); } expect(response.length).toBeGreaterThan(0); }); });
const CACHED_TEXT = `## Components LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix. ### Chat models <span data-heading-keywords="chat model,chat models"></span> Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are generally newer models (older models are generally \`LLMs\`, see below). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages. Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This gives them the same interface as LLMs (and simpler to use). When a string is passed in as input, it will be converted to a \`HumanMessage\` under the hood before being passed to the underlying model. LangChain does not host any Chat Models, rather we rely on third party integrations. We have some standardized parameters when constructing ChatModels: - \`model\`: the name of the model Chat Models also accept other parameters that are specific to that integration. :::important Some chat models have been fine-tuned for **tool calling** and provide a dedicated API for it. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information. ::: For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models). #### Multimodality Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal outputs are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures. In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations. For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal). ### LLMs <span data-heading-keywords="llm,llms"></span> :::caution Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models), even for non-chat use cases. You are probably looking for [the section above instead](/docs/concepts/#chat-models). ::: Language models that takes a string as input and returns a string. These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above). Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as [Chat Models](/docs/concepts/#chat-models). When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model. 
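The string/message interoperability described above is easy to see in code. A minimal sketch, with an assumed model name: a chat model invoked with a bare string behaves as if a \`HumanMessage\` had been passed, and the same model also accepts an explicit array of messages.

\`\`\`typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// A bare string is converted to a HumanMessage under the hood.
const fromString = await model.invoke("What is 2 + 2?");

// The same model also accepts an explicit array of messages.
const fromMessages = await model.invoke([
  new SystemMessage("You are a terse assistant."),
  new HumanMessage("What is 2 + 2?"),
]);

console.log(fromString.content, fromMessages.content);
\`\`\`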
LangChain does not host any LLMs, rather we rely on third party integrations. For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms). ### Message types Some language models take an array of messages as input and return a message. There are a few different types of messages. All messages have a \`role\`, \`content\`, and \`response_metadata\` property. The \`role\` describes WHO is saying the message. LangChain has different message classes for different roles. The \`content\` property describes the content of the message. This can be a few different things: - A string (most models deal this type of content) - A List of objects (this is used for multi-modal input, where the object contains information about that input type and that input location) #### HumanMessage This represents a message from the user. #### AIMessage This represents a message from the model. In addition to the \`content\` property, these messages also have: **\`response_metadata\`** The \`response_metadata\` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored. **\`tool_calls\`** These represent a decision from an language model to call a tool. They are included as part of an \`AIMessage\` output. They can be accessed from there with the \`.tool_calls\` property. This property returns a list of \`ToolCall\`s. A \`ToolCall\` is an object with the following arguments: - \`name\`: The name of the tool that should be called. - \`args\`: The arguments to that tool. - \`id\`: The id of that tool call. #### SystemMessage This represents a system message, which tells the model how to behave. Not every model provider supports this. #### ToolMessage This represents the result of a tool call. In addition to \`role\` and \`content\`, this message has: - a \`tool_call_id\` field which conveys the id of the call to the tool that was called to produce this result. - an \`artifact\` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model. #### (Legacy) FunctionMessage This is a legacy message type, corresponding to OpenAI's legacy function-calling API. \`ToolMessage\` should be used instead to correspond to the updated tool-calling API. This represents the result of a function call. In addition to \`role\` and \`content\`, this message has a \`name\` parameter which conveys the name of the function that was called to produce this result. ### Prompt templates <span data-heading-keywords="prompt,prompttemplate,chatprompttemplate"></span> Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output. Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in. Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages. There are a few different types of prompt templates: #### String PromptTemplates These prompt templates are used to format a single string, and generally are used for simpler inputs. 
For example, a common way to construct and use a PromptTemplate is as follows: \`\`\`typescript import { PromptTemplate } from "@langchain/core/prompts"; const promptTemplate = PromptTemplate.fromTemplate( "Tell me a joke about {topic}" ); await promptTemplate.invoke({ topic: "cats" }); \`\`\` #### ChatPromptTemplates These prompt templates are used to format an array of messages. These "templates" consist of an array of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows: \`\`\`typescript import { ChatPromptTemplate } from "@langchain/core/prompts"; const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["user", "Tell me a joke about {topic}"], ]); await promptTemplate.invoke({ topic: "cats" }); \`\`\` In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the \`topic\` variable the user passes in. #### MessagesPlaceholder <span data-heading-keywords="messagesplaceholder"></span> This prompt template is responsible for adding an array of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in an array of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder. \`\`\`typescript import { ChatPromptTemplate, MessagesPlaceholder, } from "@langchain/core/prompts"; import { HumanMessage } from "@langchain/core/messages"; const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], new MessagesPlaceholder("msgs"), ]); promptTemplate.invoke({ msgs: [new HumanMessage({ content: "hi!" })] }); \`\`\`
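As noted earlier, the PromptValue produced by any of these templates can be passed straight to a chat model. A small sketch (the model name is an assumption) chaining a ChatPromptTemplate into a model with \`.pipe\`:

\`\`\`typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

// prompt.invoke(...) returns a PromptValue; piping the template into a chat
// model runs both steps as a single runnable.
const chain = prompt.pipe(new ChatOpenAI({ model: "gpt-4o-mini" }));

const result = await chain.invoke({ topic: "cats" });
console.log(result.content);
\`\`\`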
# @langchain/openai This package contains the LangChain.js integrations for OpenAI through their SDK. ## Installation ```bash npm2yarn npm install @langchain/openai @langchain/core ``` This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/). If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of @langchain/core. You can do so by adding appropriate fields to your project's `package.json` like this: ```json { "name": "your-project", "version": "0.0.0", "dependencies": { "@langchain/core": "^0.3.0", "@langchain/openai": "^0.0.0" }, "resolutions": { "@langchain/core": "^0.3.0" }, "overrides": { "@langchain/core": "^0.3.0" }, "pnpm": { "overrides": { "@langchain/core": "^0.3.0" } } } ``` The field you need depends on the package manager you're using, but we recommend adding a field for the common `yarn`, `npm`, and `pnpm` to maximize compatibility. ## Chat Models This package contains the `ChatOpenAI` class, which is the recommended way to interface with the OpenAI series of models. To use, install the requirements, and configure your environment. ```bash export OPENAI_API_KEY=your-api-key ``` Then initialize ```typescript import { ChatOpenAI } from "@langchain/openai"; const model = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4-1106-preview", }); const response = await model.invoke(new HumanMessage("Hello world!")); ``` ### Streaming ```typescript import { ChatOpenAI } from "@langchain/openai"; const model = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4-1106-preview", }); const response = await model.stream(new HumanMessage("Hello world!")); ``` ## Embeddings This package also adds support for OpenAI's embeddings model. ```typescript import { OpenAIEmbeddings } from "@langchain/openai"; const embeddings = new OpenAIEmbeddings({ apiKey: process.env.OPENAI_API_KEY, }); const res = await embeddings.embedQuery("Hello world"); ``` ## Development To develop the OpenAI package, you'll need to follow these instructions: ### Install dependencies ```bash yarn install ``` ### Build the package ```bash yarn build ``` Or from the repo root: ```bash yarn build --filter=@langchain/openai ``` ### Run tests Test files should live within a `tests/` file in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should end in `.int.test.ts`: ```bash $ yarn test $ yarn test:int ``` ### Lint & Format Run the linter & formatter to ensure your code is up to standard: ```bash yarn lint && yarn format ``` ### Adding new entrypoints If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `entrypoints` field in the `config` variable located inside `langchain.config.js` and run `yarn build` to generate the new entrypoint.
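Putting the pieces from the Chat Models section above together, a self-contained quickstart (reusing the model name from the snippets above and importing the message class they rely on) looks roughly like this:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-4-1106-preview",
});

// Invoke with an array of messages; a plain string also works.
const response = await model.invoke([new HumanMessage("Hello world!")]);
console.log(response.content);
```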
{ static lc_name() { return "OpenAI"; } get callKeys() { return [...super.callKeys, "options"]; } lc_serializable = true; get lc_secrets(): { [key: string]: string } | undefined { return { openAIApiKey: "OPENAI_API_KEY", apiKey: "OPENAI_API_KEY", azureOpenAIApiKey: "AZURE_OPENAI_API_KEY", organization: "OPENAI_ORGANIZATION", }; } get lc_aliases(): Record<string, string> { return { modelName: "model", openAIApiKey: "openai_api_key", apiKey: "openai_api_key", azureOpenAIApiVersion: "azure_openai_api_version", azureOpenAIApiKey: "azure_openai_api_key", azureOpenAIApiInstanceName: "azure_openai_api_instance_name", azureOpenAIApiDeploymentName: "azure_openai_api_deployment_name", }; } temperature = 0.7; maxTokens = 256; topP = 1; frequencyPenalty = 0; presencePenalty = 0; n = 1; bestOf?: number; logitBias?: Record<string, number>; modelName = "gpt-3.5-turbo-instruct"; model = "gpt-3.5-turbo-instruct"; modelKwargs?: OpenAIInput["modelKwargs"]; batchSize = 20; timeout?: number; stop?: string[]; stopSequences?: string[]; user?: string; streaming = false; openAIApiKey?: string; apiKey?: string; azureOpenAIApiVersion?: string; azureOpenAIApiKey?: string; azureADTokenProvider?: () => Promise<string>; azureOpenAIApiInstanceName?: string; azureOpenAIApiDeploymentName?: string; azureOpenAIBasePath?: string; organization?: string; protected client: OpenAIClient; protected clientConfig: ClientOptions; constructor( fields?: Partial<OpenAIInput> & Partial<AzureOpenAIInput> & BaseLLMParams & { configuration?: ClientOptions & LegacyOpenAIInput; }, /** @deprecated */ configuration?: ClientOptions & LegacyOpenAIInput ) { let model = fields?.model ?? fields?.modelName; if ( (model?.startsWith("gpt-3.5-turbo") || model?.startsWith("gpt-4")) && !model?.includes("-instruct") ) { console.warn( [ `Your chosen OpenAI model, "${model}", is a chat model and not a text-in/text-out LLM.`, `Passing it into the "OpenAI" class is deprecated and only permitted for backwards-compatibility. You may experience odd behavior.`, `Please use the "ChatOpenAI" class instead.`, "", `See this page for more information:`, "|", `└> https://js.langchain.com/docs/integrations/chat/openai`, ].join("\n") ); // eslint-disable-next-line no-constructor-return return new OpenAIChat( fields, configuration ) as unknown as OpenAI<CallOptions>; } super(fields ?? {}); model = model ?? this.model; this.openAIApiKey = fields?.apiKey ?? fields?.openAIApiKey ?? getEnvironmentVariable("OPENAI_API_KEY"); this.apiKey = this.openAIApiKey; this.azureOpenAIApiKey = fields?.azureOpenAIApiKey ?? getEnvironmentVariable("AZURE_OPENAI_API_KEY"); this.azureADTokenProvider = fields?.azureADTokenProvider ?? undefined; if (!this.azureOpenAIApiKey && !this.apiKey && !this.azureADTokenProvider) { throw new Error( "OpenAI or Azure OpenAI API key or Token Provider not found" ); } this.azureOpenAIApiInstanceName = fields?.azureOpenAIApiInstanceName ?? getEnvironmentVariable("AZURE_OPENAI_API_INSTANCE_NAME"); this.azureOpenAIApiDeploymentName = (fields?.azureOpenAIApiCompletionsDeploymentName || fields?.azureOpenAIApiDeploymentName) ?? (getEnvironmentVariable("AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME") || getEnvironmentVariable("AZURE_OPENAI_API_DEPLOYMENT_NAME")); this.azureOpenAIApiVersion = fields?.azureOpenAIApiVersion ?? getEnvironmentVariable("AZURE_OPENAI_API_VERSION"); this.azureOpenAIBasePath = fields?.azureOpenAIBasePath ?? getEnvironmentVariable("AZURE_OPENAI_BASE_PATH"); this.organization = fields?.configuration?.organization ?? 
getEnvironmentVariable("OPENAI_ORGANIZATION"); this.modelName = model; this.model = model; this.modelKwargs = fields?.modelKwargs ?? {}; this.batchSize = fields?.batchSize ?? this.batchSize; this.timeout = fields?.timeout; this.temperature = fields?.temperature ?? this.temperature; this.maxTokens = fields?.maxTokens ?? this.maxTokens; this.topP = fields?.topP ?? this.topP; this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty; this.presencePenalty = fields?.presencePenalty ?? this.presencePenalty; this.n = fields?.n ?? this.n; this.bestOf = fields?.bestOf ?? this.bestOf; this.logitBias = fields?.logitBias; this.stop = fields?.stopSequences ?? fields?.stop; this.stopSequences = fields?.stopSequences; this.user = fields?.user; this.streaming = fields?.streaming ?? false; if (this.streaming && this.bestOf && this.bestOf > 1) { throw new Error("Cannot stream results when bestOf > 1"); } if (this.azureOpenAIApiKey || this.azureADTokenProvider) { if (!this.azureOpenAIApiInstanceName && !this.azureOpenAIBasePath) { throw new Error("Azure OpenAI API instance name not found"); } if (!this.azureOpenAIApiDeploymentName) { throw new Error("Azure OpenAI API deployment name not found"); } if (!this.azureOpenAIApiVersion) { throw new Error("Azure OpenAI API version not found"); } this.apiKey = this.apiKey ?? ""; } this.clientConfig = { apiKey: this.apiKey, organization: this.organization, baseURL: configuration?.basePath ?? fields?.configuration?.basePath, dangerouslyAllowBrowser: true, defaultHeaders: configuration?.baseOptions?.headers ?? fields?.configuration?.baseOptions?.headers, defaultQuery: configuration?.baseOptions?.params ?? fields?.configuration?.baseOptions?.params, ...configuration, ...fields?.configuration, }; } /** * Get the parameters used to invoke the model */ invocationParams( options?: this["ParsedCallOptions"] ): Omit<OpenAIClient.CompletionCreateParams, "prompt"> { return { model: this.model, temperature: this.temperature, max_tokens: this.maxTokens, top_p: this.topP, frequency_penalty: this.frequencyPenalty, presence_penalty: this.presencePenalty, n: this.n, best_of: this.bestOf, logit_bias: this.logitBias, stop: options?.stop ?? this.stopSequences, user: this.user, stream: this.streaming, ...this.modelKwargs, }; } /** @ignore */ _identifyingParams(): Omit<OpenAIClient.CompletionCreateParams, "prompt"> & { model_name: string; } & ClientOptions { return { model_name: this.model, ...this.invocationParams(), ...this.clientConfig, }; } /** * Get the identifying parameters for the model */ identifyingParams(): Omit<OpenAIClient.CompletionCreateParams, "prompt"> & { model_name: string; } & ClientOptions { return this._identifyingParams(); } /** * Call out to OpenAI's endpoint with k unique prompts * * @param [prompts] - The prompts to pass into the model. * @param [options] - Optional list of stop words to use when generating. * @param [runManager] - Optional callback manager to use when generating. * * @returns The full LLM output. * * @example * ```ts * import { OpenAI } from "langchain/llms/openai"; * const openai = new OpenAI(); * const response = await openai.generate(["Tell me a joke."]); * ``` */
export class OpenAIEmbeddings extends Embeddings implements OpenAIEmbeddingsParams, AzureOpenAIInput { modelName = "text-embedding-ada-002"; model = "text-embedding-ada-002"; batchSize = 512; // TODO: Update to `false` on next minor release (see: https://github.com/langchain-ai/langchainjs/pull/3612) stripNewLines = true; /** * The number of dimensions the resulting output embeddings should have. * Only supported in `text-embedding-3` and later models. */ dimensions?: number; timeout?: number; azureOpenAIApiVersion?: string; azureOpenAIApiKey?: string; azureADTokenProvider?: () => Promise<string>; azureOpenAIApiInstanceName?: string; azureOpenAIApiDeploymentName?: string; azureOpenAIBasePath?: string; organization?: string; protected client: OpenAIClient; protected clientConfig: ClientOptions; constructor( fields?: Partial<OpenAIEmbeddingsParams> & Partial<AzureOpenAIInput> & { verbose?: boolean; /** * The OpenAI API key to use. * Alias for `apiKey`. */ openAIApiKey?: string; /** The OpenAI API key to use. */ apiKey?: string; configuration?: ClientOptions; }, configuration?: ClientOptions & LegacyOpenAIInput ) { const fieldsWithDefaults = { maxConcurrency: 2, ...fields }; super(fieldsWithDefaults); let apiKey = fieldsWithDefaults?.apiKey ?? fieldsWithDefaults?.openAIApiKey ?? getEnvironmentVariable("OPENAI_API_KEY"); const azureApiKey = fieldsWithDefaults?.azureOpenAIApiKey ?? getEnvironmentVariable("AZURE_OPENAI_API_KEY"); this.azureADTokenProvider = fields?.azureADTokenProvider ?? undefined; if (!azureApiKey && !apiKey && !this.azureADTokenProvider) { throw new Error( "OpenAI or Azure OpenAI API key or Token Provider not found" ); } const azureApiInstanceName = fieldsWithDefaults?.azureOpenAIApiInstanceName ?? getEnvironmentVariable("AZURE_OPENAI_API_INSTANCE_NAME"); const azureApiDeploymentName = (fieldsWithDefaults?.azureOpenAIApiEmbeddingsDeploymentName || fieldsWithDefaults?.azureOpenAIApiDeploymentName) ?? (getEnvironmentVariable("AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME") || getEnvironmentVariable("AZURE_OPENAI_API_DEPLOYMENT_NAME")); const azureApiVersion = fieldsWithDefaults?.azureOpenAIApiVersion ?? getEnvironmentVariable("AZURE_OPENAI_API_VERSION"); this.azureOpenAIBasePath = fieldsWithDefaults?.azureOpenAIBasePath ?? getEnvironmentVariable("AZURE_OPENAI_BASE_PATH"); this.organization = fieldsWithDefaults?.configuration?.organization ?? getEnvironmentVariable("OPENAI_ORGANIZATION"); this.modelName = fieldsWithDefaults?.model ?? fieldsWithDefaults?.modelName ?? this.model; this.model = this.modelName; this.batchSize = fieldsWithDefaults?.batchSize ?? (azureApiKey ? 1 : this.batchSize); this.stripNewLines = fieldsWithDefaults?.stripNewLines ?? this.stripNewLines; this.timeout = fieldsWithDefaults?.timeout; this.dimensions = fieldsWithDefaults?.dimensions; this.azureOpenAIApiVersion = azureApiVersion; this.azureOpenAIApiKey = azureApiKey; this.azureOpenAIApiInstanceName = azureApiInstanceName; this.azureOpenAIApiDeploymentName = azureApiDeploymentName; if (this.azureOpenAIApiKey || this.azureADTokenProvider) { if (!this.azureOpenAIApiInstanceName && !this.azureOpenAIBasePath) { throw new Error("Azure OpenAI API instance name not found"); } if (!this.azureOpenAIApiDeploymentName) { throw new Error("Azure OpenAI API deployment name not found"); } if (!this.azureOpenAIApiVersion) { throw new Error("Azure OpenAI API version not found"); } apiKey = apiKey ?? 
""; } this.clientConfig = { apiKey, organization: this.organization, baseURL: configuration?.basePath, dangerouslyAllowBrowser: true, defaultHeaders: configuration?.baseOptions?.headers, defaultQuery: configuration?.baseOptions?.params, ...configuration, ...fields?.configuration, }; } /** * Method to generate embeddings for an array of documents. Splits the * documents into batches and makes requests to the OpenAI API to generate * embeddings. * @param texts Array of documents to generate embeddings for. * @returns Promise that resolves to a 2D array of embeddings for each document. */ async embedDocuments(texts: string[]): Promise<number[][]> { const batches = chunkArray( this.stripNewLines ? texts.map((t) => t.replace(/\n/g, " ")) : texts, this.batchSize ); const batchRequests = batches.map((batch) => { const params: OpenAIClient.EmbeddingCreateParams = { model: this.model, input: batch, }; if (this.dimensions) { params.dimensions = this.dimensions; } return this.embeddingWithRetry(params); }); const batchResponses = await Promise.all(batchRequests); const embeddings: number[][] = []; for (let i = 0; i < batchResponses.length; i += 1) { const batch = batches[i]; const { data: batchResponse } = batchResponses[i]; for (let j = 0; j < batch.length; j += 1) { embeddings.push(batchResponse[j].embedding); } } return embeddings; } /** * Method to generate an embedding for a single document. Calls the * embeddingWithRetry method with the document as the input. * @param text Document to generate an embedding for. * @returns Promise that resolves to an embedding for the document. */ async embedQuery(text: string): Promise<number[]> { const params: OpenAIClient.EmbeddingCreateParams = { model: this.model, input: this.stripNewLines ? text.replace(/\n/g, " ") : text, }; if (this.dimensions) { params.dimensions = this.dimensions; } const { data } = await this.embeddingWithRetry(params); return data[0].embedding; } /** * Private method to make a request to the OpenAI API to generate * embeddings. Handles the retry logic and returns the response from the * API. * @param request Request to send to the OpenAI API. * @returns Promise that resolves to the response from the API. */ protected async embeddingWithRetry( request: OpenAIClient.EmbeddingCreateParams ) { if (!this.client) { const openAIEndpointConfig: OpenAIEndpointConfig = { azureOpenAIApiDeploymentName: this.azureOpenAIApiDeploymentName, azureOpenAIApiInstanceName: this.azureOpenAIApiInstanceName, azureOpenAIApiKey: this.azureOpenAIApiKey, azureOpenAIBasePath: this.azureOpenAIBasePath, baseURL: this.clientConfig.baseURL, }; const endpoint = getEndpoint(openAIEndpointConfig); const params = { ...this.clientConfig, baseURL: endpoint, timeout: this.timeout, maxRetries: 0, }; if (!params.baseURL) { delete params.baseURL; } this.client = new OpenAIClient(params); } const requestOptions: OpenAICoreRequestOptions = {}; if (this.azureOpenAIApiKey) { requestOptions.headers = { "api-key": this.azureOpenAIApiKey, ...requestOptions.headers, }; requestOptions.query = { "api-version": this.azureOpenAIApiVersion, ...requestOptions.query, }; } return this.caller.call(async () => { try { const res = await this.client.embeddings.create( request, requestOptions ); return res; } catch (e) { const error = wrapOpenAIClientError(e); throw error; } }); } }
export class OpenAIChat extends LLM<OpenAIChatCallOptions> implements OpenAIChatInput, AzureOpenAIInput { static lc_name() { return "OpenAIChat"; } get callKeys() { return [...super.callKeys, "options", "promptIndex"]; } lc_serializable = true; get lc_secrets(): { [key: string]: string } | undefined { return { openAIApiKey: "OPENAI_API_KEY", azureOpenAIApiKey: "AZURE_OPENAI_API_KEY", organization: "OPENAI_ORGANIZATION", }; } get lc_aliases(): Record<string, string> { return { modelName: "model", openAIApiKey: "openai_api_key", azureOpenAIApiVersion: "azure_openai_api_version", azureOpenAIApiKey: "azure_openai_api_key", azureOpenAIApiInstanceName: "azure_openai_api_instance_name", azureOpenAIApiDeploymentName: "azure_openai_api_deployment_name", }; } temperature = 1; topP = 1; frequencyPenalty = 0; presencePenalty = 0; n = 1; logitBias?: Record<string, number>; maxTokens?: number; modelName = "gpt-3.5-turbo"; model = "gpt-3.5-turbo"; prefixMessages?: OpenAIClient.Chat.ChatCompletionMessageParam[]; modelKwargs?: OpenAIChatInput["modelKwargs"]; timeout?: number; stop?: string[]; user?: string; streaming = false; openAIApiKey?: string; azureOpenAIApiVersion?: string; azureOpenAIApiKey?: string; azureOpenAIApiInstanceName?: string; azureOpenAIApiDeploymentName?: string; azureOpenAIBasePath?: string; organization?: string; private client: OpenAIClient; private clientConfig: ClientOptions; constructor( fields?: Partial<OpenAIChatInput> & Partial<AzureOpenAIInput> & BaseLLMParams & { configuration?: ClientOptions & LegacyOpenAIInput; }, /** @deprecated */ configuration?: ClientOptions & LegacyOpenAIInput ) { super(fields ?? {}); this.openAIApiKey = fields?.apiKey ?? fields?.openAIApiKey ?? getEnvironmentVariable("OPENAI_API_KEY"); this.azureOpenAIApiKey = fields?.azureOpenAIApiKey ?? getEnvironmentVariable("AZURE_OPENAI_API_KEY"); if (!this.azureOpenAIApiKey && !this.openAIApiKey) { throw new Error("OpenAI or Azure OpenAI API key not found"); } this.azureOpenAIApiInstanceName = fields?.azureOpenAIApiInstanceName ?? getEnvironmentVariable("AZURE_OPENAI_API_INSTANCE_NAME"); this.azureOpenAIApiDeploymentName = (fields?.azureOpenAIApiCompletionsDeploymentName || fields?.azureOpenAIApiDeploymentName) ?? (getEnvironmentVariable("AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME") || getEnvironmentVariable("AZURE_OPENAI_API_DEPLOYMENT_NAME")); this.azureOpenAIApiVersion = fields?.azureOpenAIApiVersion ?? getEnvironmentVariable("AZURE_OPENAI_API_VERSION"); this.azureOpenAIBasePath = fields?.azureOpenAIBasePath ?? getEnvironmentVariable("AZURE_OPENAI_BASE_PATH"); this.organization = fields?.configuration?.organization ?? getEnvironmentVariable("OPENAI_ORGANIZATION"); this.modelName = fields?.model ?? fields?.modelName ?? this.modelName; this.prefixMessages = fields?.prefixMessages ?? this.prefixMessages; this.modelKwargs = fields?.modelKwargs ?? {}; this.timeout = fields?.timeout; this.temperature = fields?.temperature ?? this.temperature; this.topP = fields?.topP ?? this.topP; this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty; this.presencePenalty = fields?.presencePenalty ?? this.presencePenalty; this.n = fields?.n ?? this.n; this.logitBias = fields?.logitBias; this.maxTokens = fields?.maxTokens; this.stop = fields?.stop; this.user = fields?.user; this.streaming = fields?.streaming ?? false; if (this.n > 1) { throw new Error( "Cannot use n > 1 in OpenAIChat LLM. Use ChatOpenAI Chat Model instead." 
); } if (this.azureOpenAIApiKey) { if (!this.azureOpenAIApiInstanceName && !this.azureOpenAIBasePath) { throw new Error("Azure OpenAI API instance name not found"); } if (!this.azureOpenAIApiDeploymentName) { throw new Error("Azure OpenAI API deployment name not found"); } if (!this.azureOpenAIApiVersion) { throw new Error("Azure OpenAI API version not found"); } this.openAIApiKey = this.openAIApiKey ?? ""; } this.clientConfig = { apiKey: this.openAIApiKey, organization: this.organization, baseURL: configuration?.basePath ?? fields?.configuration?.basePath, dangerouslyAllowBrowser: true, defaultHeaders: configuration?.baseOptions?.headers ?? fields?.configuration?.baseOptions?.headers, defaultQuery: configuration?.baseOptions?.params ?? fields?.configuration?.baseOptions?.params, ...configuration, ...fields?.configuration, }; } /** * Get the parameters used to invoke the model */ invocationParams( options?: this["ParsedCallOptions"] ): Omit<OpenAIClient.Chat.ChatCompletionCreateParams, "messages"> { return { model: this.modelName, temperature: this.temperature, top_p: this.topP, frequency_penalty: this.frequencyPenalty, presence_penalty: this.presencePenalty, n: this.n, logit_bias: this.logitBias, max_tokens: this.maxTokens === -1 ? undefined : this.maxTokens, stop: options?.stop ?? this.stop, user: this.user, stream: this.streaming, ...this.modelKwargs, }; } /** @ignore */ _identifyingParams(): Omit< OpenAIClient.Chat.ChatCompletionCreateParams, "messages" > & { model_name: string; } & ClientOptions { return { model_name: this.modelName, ...this.invocationParams(), ...this.clientConfig, }; } /** * Get the identifying parameters for the model */ identifyingParams(): Omit< OpenAIClient.Chat.ChatCompletionCreateParams, "messages" > & { model_name: string; } & ClientOptions { return { model_name: this.modelName, ...this.invocationParams(), ...this.clientConfig, }; } /** * Formats the messages for the OpenAI API. * @param prompt The prompt to be formatted. * @returns Array of formatted messages. */ private formatMessages( prompt: string ): OpenAIClient.Chat.ChatCompletionMessageParam[] { const message: OpenAIClient.Chat.ChatCompletionMessageParam = { role: "user", content: prompt, }; return this.prefixMessages ? [...this.prefixMessages, message] : [message]; } async *_streamResponseChunks( prompt: string, options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): AsyncGenerator<GenerationChunk> { const params = { ...this.invocationParams(options), messages: this.formatMessages(prompt), stream: true as const, }; const stream = await this.completionWithRetry(params, options); for await (const data of stream) { const choice = data?.choices[0]; if (!choice) { continue; } const { delta } = choice; const generationChunk = new GenerationChunk({ text: delta.content ?? "", }); yield generationChunk; const newTokenIndices = { prompt: options.promptIndex ?? 0, completion: choice.index ?? 0, }; // eslint-disable-next-line no-void void runManager?.handleLLMNewToken( generationChunk.text ?? "", newTokenIndices ); } if (options.signal?.aborted) { throw new Error("AbortError"); } } /** @ignore */
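`OpenAIChat` is the legacy string-in/string-out wrapper over the chat completions endpoint; `prefixMessages` above is how a fixed system prompt gets prepended to every call. A hedged sketch (the import path and model settings are assumptions):

```typescript
import { OpenAIChat } from "@langchain/openai";

// Legacy LLM-style wrapper around the chat completions API; new code
// generally uses ChatOpenAI instead.
const llm = new OpenAIChat({
  model: "gpt-3.5-turbo",
  temperature: 0.5,
  prefixMessages: [
    { role: "system", content: "You answer in exactly one sentence." },
  ],
});

// The prompt is wrapped in a user message and appended after prefixMessages.
const answer = await llm.invoke("Why is the sky blue?");
console.log(answer);
```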
/** * OpenAI chat model integration. * * Setup: * Install `@langchain/openai` and set an environment variable named `OPENAI_API_KEY`. * * ```bash * npm install @langchain/openai * export OPENAI_API_KEY="your-api-key" * ``` * * ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html#constructor) * * ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html) * * Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`. `.stream`, `.batch`, etc. * They can also be passed via `.bind`, or the second arg in `.bindTools`, like shown in the examples below: * * ```typescript * // When calling `.bind`, call options should be passed via the first argument * const llmWithArgsBound = llm.bind({ * stop: ["\n"], * tools: [...], * }); * * // When calling `.bindTools`, call options should be passed via the second argument * const llmWithTools = llm.bindTools( * [...], * { * tool_choice: "auto", * } * ); * ``` * * ## Examples * * <details open> * <summary><strong>Instantiate</strong></summary> * * ```typescript * import { ChatOpenAI } from '@langchain/openai'; * * const llm = new ChatOpenAI({ * model: "gpt-4o", * temperature: 0, * maxTokens: undefined, * timeout: undefined, * maxRetries: 2, * // apiKey: "...", * // baseUrl: "...", * // organization: "...", * // other params... * }); * ``` * </details> * * <br /> * * <details> * <summary><strong>Invoking</strong></summary> * * ```typescript * const input = `Translate "I love programming" into French.`; * * // Models also accept a list of chat messages or a formatted prompt * const result = await llm.invoke(input); * console.log(result); * ``` * * ```txt * AIMessage { * "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz", * "content": "J'adore la programmation.", * "response_metadata": { * "tokenUsage": { * "completionTokens": 5, * "promptTokens": 28, * "totalTokens": 33 * }, * "finish_reason": "stop", * "system_fingerprint": "fp_3aa7262c27" * }, * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Streaming Chunks</strong></summary> * * ```typescript * for await (const chunk of await llm.stream(input)) { * console.log(chunk); * } * ``` * * ```txt * AIMessageChunk { * "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs", * "content": "" * } * AIMessageChunk { * "content": "J" * } * AIMessageChunk { * "content": "'adore" * } * AIMessageChunk { * "content": " la" * } * AIMessageChunk { * "content": " programmation",, * } * AIMessageChunk { * "content": ".",, * } * AIMessageChunk { * "content": "", * "response_metadata": { * "finish_reason": "stop", * "system_fingerprint": "fp_c9aa9c0491" * }, * } * AIMessageChunk { * "content": "", * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Aggregate Streamed Chunks</strong></summary> * * ```typescript * import { AIMessageChunk } from '@langchain/core/messages'; * import { concat } from '@langchain/core/utils/stream'; * * const stream = await llm.stream(input); * let full: AIMessageChunk | undefined; * for await (const chunk of stream) { * full = !full ? 
chunk : concat(full, chunk); * } * console.log(full); * ``` * * ```txt * AIMessageChunk { * "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu", * "content": "J'adore la programmation.", * "response_metadata": { * "prompt": 0, * "completion": 0, * "finish_reason": "stop", * }, * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Bind tools</strong></summary> * * ```typescript * import { z } from 'zod'; * * const GetWeather = { * name: "GetWeather", * description: "Get the current weather in a given location", * schema: z.object({ * location: z.string().describe("The city and state, e.g. San Francisco, CA") * }), * } * * const GetPopulation = { * name: "GetPopulation", * description: "Get the current population in a given location", * schema: z.object({ * location: z.string().describe("The city and state, e.g. San Francisco, CA") * }), * } * * const llmWithTools = llm.bindTools( * [GetWeather, GetPopulation], * { * // strict: true // enforce tool args schema is respected * } * ); * const aiMsg = await llmWithTools.invoke( * "Which city is hotter today and which is bigger: LA or NY?" * ); * console.log(aiMsg.tool_calls); * ``` * * ```txt * [ * { * name: 'GetWeather', * args: { location: 'Los Angeles, CA' }, * type: 'tool_call', * id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK' * }, * { * name: 'GetWeather', * args: { location: 'New York, NY' }, * type: 'tool_call', * id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX' * }, * { * name: 'GetPopulation', * args: { location: 'Los Angeles, CA' }, * type: 'tool_call', * id: 'call_kL3OXxaq9OjIKqRTpvjaCH14' * }, * { * name: 'GetPopulation', * args: { location: 'New York, NY' }, * type: 'tool_call', * id: 'call_s9KQB1UWj45LLGaEnjz0179q' * } * ] * ``` * </details> * * <br /> * * <details> * <summary><strong>Structured Output</strong></summary> * * ```typescript * import { z } from 'zod'; * * const Joke = z.object({ * setup: z.string().describe("The setup of the joke"), * punchline: z.string().describe("The punchline to the joke"), * rating: z.number().optional().describe("How funny the joke is, from 1 to 10") * }).describe('Joke to tell user.'); * * const structuredLlm = llm.withStructuredOutput(Joke, { * name: "Joke", * strict: true, // Optionally enable OpenAI structured outputs * }); * const jokeResult = await structuredLlm.invoke("Tell me a joke about cats"); * console.log(jokeResult); * ``` * * ```txt * { * setup: 'Why was the cat sitting on the computer?', * punchline: 'Because it wanted to keep an eye on the mouse!', * rating: 7 * } * ``` * </details> * * <br /> * * <details> * <summary><strong>JSON Object Response Format</strong></summary> * * ```typescript * const jsonLlm = llm.bind({ response_format: { type: "json_object" } }); * const jsonLlmAiMsg = await jsonLlm.invoke( * "Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]" * );
export class ChatOpenAI< CallOptions extends ChatOpenAICallOptions = ChatOpenAICallOptions > extends BaseChatModel<CallOptions, AIMessageChunk> implements OpenAIChatInput, AzureOpenAIInput { static lc_name() { return "ChatOpenAI"; } get callKeys() { return [ ...super.callKeys, "options", "function_call", "functions", "tools", "tool_choice", "promptIndex", "response_format", "seed", ]; } lc_serializable = true; get lc_secrets(): { [key: string]: string } | undefined { return { openAIApiKey: "OPENAI_API_KEY", apiKey: "OPENAI_API_KEY", azureOpenAIApiKey: "AZURE_OPENAI_API_KEY", organization: "OPENAI_ORGANIZATION", }; } get lc_aliases(): Record<string, string> { return { modelName: "model", openAIApiKey: "openai_api_key", apiKey: "openai_api_key", azureOpenAIApiVersion: "azure_openai_api_version", azureOpenAIApiKey: "azure_openai_api_key", azureOpenAIApiInstanceName: "azure_openai_api_instance_name", azureOpenAIApiDeploymentName: "azure_openai_api_deployment_name", }; } temperature = 1; topP = 1; frequencyPenalty = 0; presencePenalty = 0; n = 1; logitBias?: Record<string, number>; modelName = "gpt-3.5-turbo"; model = "gpt-3.5-turbo"; modelKwargs?: OpenAIChatInput["modelKwargs"]; stop?: string[]; stopSequences?: string[]; user?: string; timeout?: number; streaming = false; streamUsage = true; maxTokens?: number; logprobs?: boolean; topLogprobs?: number; openAIApiKey?: string; apiKey?: string; azureOpenAIApiVersion?: string; azureOpenAIApiKey?: string; azureADTokenProvider?: () => Promise<string>; azureOpenAIApiInstanceName?: string; azureOpenAIApiDeploymentName?: string; azureOpenAIBasePath?: string; azureOpenAIEndpoint?: string; organization?: string; __includeRawResponse?: boolean; protected client: OpenAIClient; protected clientConfig: ClientOptions; /** * Whether the model supports the `strict` argument when passing in tools. * If `undefined` the `strict` argument will not be passed to OpenAI. */ supportsStrictToolCalling?: boolean; audio?: OpenAIClient.Chat.ChatCompletionAudioParam; modalities?: Array<OpenAIClient.Chat.ChatCompletionModality>; constructor( fields?: ChatOpenAIFields, /** @deprecated */ configuration?: ClientOptions & LegacyOpenAIInput ) { super(fields ?? {}); this.openAIApiKey = fields?.apiKey ?? fields?.openAIApiKey ?? fields?.configuration?.apiKey ?? getEnvironmentVariable("OPENAI_API_KEY"); this.apiKey = this.openAIApiKey; this.azureOpenAIApiKey = fields?.azureOpenAIApiKey ?? getEnvironmentVariable("AZURE_OPENAI_API_KEY"); this.azureADTokenProvider = fields?.azureADTokenProvider ?? undefined; if (!this.azureOpenAIApiKey && !this.apiKey && !this.azureADTokenProvider) { throw new Error( "OpenAI or Azure OpenAI API key or Token Provider not found" ); } this.azureOpenAIApiInstanceName = fields?.azureOpenAIApiInstanceName ?? getEnvironmentVariable("AZURE_OPENAI_API_INSTANCE_NAME"); this.azureOpenAIApiDeploymentName = fields?.azureOpenAIApiDeploymentName ?? getEnvironmentVariable("AZURE_OPENAI_API_DEPLOYMENT_NAME"); this.azureOpenAIApiVersion = fields?.azureOpenAIApiVersion ?? getEnvironmentVariable("AZURE_OPENAI_API_VERSION"); this.azureOpenAIBasePath = fields?.azureOpenAIBasePath ?? getEnvironmentVariable("AZURE_OPENAI_BASE_PATH"); this.organization = fields?.configuration?.organization ?? getEnvironmentVariable("OPENAI_ORGANIZATION"); this.azureOpenAIEndpoint = fields?.azureOpenAIEndpoint ?? getEnvironmentVariable("AZURE_OPENAI_ENDPOINT"); this.modelName = fields?.model ?? fields?.modelName ?? 
this.model; this.model = this.modelName; this.modelKwargs = fields?.modelKwargs ?? {}; this.timeout = fields?.timeout; this.temperature = fields?.temperature ?? this.temperature; this.topP = fields?.topP ?? this.topP; this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty; this.presencePenalty = fields?.presencePenalty ?? this.presencePenalty; this.maxTokens = fields?.maxTokens; this.logprobs = fields?.logprobs; this.topLogprobs = fields?.topLogprobs; this.n = fields?.n ?? this.n; this.logitBias = fields?.logitBias; this.stop = fields?.stopSequences ?? fields?.stop; this.stopSequences = this?.stop; this.user = fields?.user; this.__includeRawResponse = fields?.__includeRawResponse; this.audio = fields?.audio; this.modalities = fields?.modalities; if (this.azureOpenAIApiKey || this.azureADTokenProvider) { if ( !this.azureOpenAIApiInstanceName && !this.azureOpenAIBasePath && !this.azureOpenAIEndpoint ) { throw new Error("Azure OpenAI API instance name not found"); } if (!this.azureOpenAIApiDeploymentName && this.azureOpenAIBasePath) { const parts = this.azureOpenAIBasePath.split("/openai/deployments/"); if (parts.length === 2) { const [, deployment] = parts; this.azureOpenAIApiDeploymentName = deployment; } } if (!this.azureOpenAIApiDeploymentName) { throw new Error("Azure OpenAI API deployment name not found"); } if (!this.azureOpenAIApiVersion) { throw new Error("Azure OpenAI API version not found"); } this.apiKey = this.apiKey ?? ""; // Streaming usage is not supported by Azure deployments, so default to false this.streamUsage = false; } this.streaming = fields?.streaming ?? false; this.streamUsage = fields?.streamUsage ?? this.streamUsage; this.clientConfig = { apiKey: this.apiKey, organization: this.organization, baseURL: configuration?.basePath ?? fields?.configuration?.basePath, dangerouslyAllowBrowser: true, defaultHeaders: configuration?.baseOptions?.headers ?? fields?.configuration?.baseOptions?.headers, defaultQuery: configuration?.baseOptions?.params ?? fields?.configuration?.baseOptions?.params, ...configuration, ...fields?.configuration, }; // If `supportsStrictToolCalling` is explicitly set, use that value. // Else leave undefined so it's not passed to OpenAI. if (fields?.supportsStrictToolCalling !== undefined) { this.supportsStrictToolCalling = fields.supportsStrictToolCalling; } } getLsParams(options: this["ParsedCallOptions"]): LangSmithParams { const params = this.invocationParams(options); return { ls_provider: "openai", ls_model_name: this.model, ls_model_type: "chat", ls_temperature: params.temperature ?? undefined, ls_max_tokens: params.max_tokens ?? undefined, ls_stop: options.stop, }; } override bindTools( tools: ChatOpenAIToolType[], kwargs?: Partial<CallOptions> ): Runnable<BaseLanguageModelInput, AIMessageChunk, CallOptions> { let strict: boolean | undefined; if (kwargs?.strict !== undefined) { strict = kwargs.strict; } else if (this.supportsStrictToolCalling !== undefined) { strict = this.supportsStrictToolCalling; } return this.bind({ tools: tools.map((tool) => _convertChatOpenAIToolTypeToOpenAITool(tool, { strict }) ), ...kwargs, } as Partial<CallOptions>); }
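`bindTools` above threads the `strict` flag through to each converted tool, falling back to `supportsStrictToolCalling` when it is set on the model. A sketch of how that surfaces to callers (the tool definition and model name are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  // If set, this becomes the default for `strict` whenever tools are bound.
  supportsStrictToolCalling: true,
});

const llmWithTools = llm.bindTools(
  [
    {
      name: "GetWeather",
      description: "Get the current weather in a given location",
      schema: z.object({ location: z.string() }),
    },
  ],
  // Per-bind override; omitting it falls back to the model-level flag.
  { strict: true }
);

const aiMsg = await llmWithTools.invoke("What's the weather in Paris?");
console.log(aiMsg.tool_calls);
```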
import { type ClientOptions, AzureOpenAI as AzureOpenAIClient, OpenAI as OpenAIClient, } from "openai"; import { OpenAIEmbeddings, OpenAIEmbeddingsParams } from "../embeddings.js"; import { AzureOpenAIInput, OpenAICoreRequestOptions, LegacyOpenAIInput, } from "../types.js"; import { getEndpoint, OpenAIEndpointConfig } from "../utils/azure.js"; import { wrapOpenAIClientError } from "../utils/openai.js"; export class AzureOpenAIEmbeddings extends OpenAIEmbeddings { constructor( fields?: Partial<OpenAIEmbeddingsParams> & Partial<AzureOpenAIInput> & { verbose?: boolean; /** The OpenAI API key to use. */ apiKey?: string; configuration?: ClientOptions; deploymentName?: string; openAIApiVersion?: string; }, configuration?: ClientOptions & LegacyOpenAIInput ) { const newFields = { ...fields }; if (Object.entries(newFields).length) { // don't rewrite the fields if they are already set newFields.azureOpenAIApiDeploymentName = newFields.azureOpenAIApiDeploymentName ?? newFields.deploymentName; newFields.azureOpenAIApiKey = newFields.azureOpenAIApiKey ?? newFields.apiKey; newFields.azureOpenAIApiVersion = newFields.azureOpenAIApiVersion ?? newFields.openAIApiVersion; } super(newFields, configuration); } protected async embeddingWithRetry( request: OpenAIClient.EmbeddingCreateParams ) { if (!this.client) { const openAIEndpointConfig: OpenAIEndpointConfig = { azureOpenAIApiDeploymentName: this.azureOpenAIApiDeploymentName, azureOpenAIApiInstanceName: this.azureOpenAIApiInstanceName, azureOpenAIApiKey: this.azureOpenAIApiKey, azureOpenAIBasePath: this.azureOpenAIBasePath, azureADTokenProvider: this.azureADTokenProvider, baseURL: this.clientConfig.baseURL, }; const endpoint = getEndpoint(openAIEndpointConfig); const params = { ...this.clientConfig, baseURL: endpoint, timeout: this.timeout, maxRetries: 0, }; if (!this.azureADTokenProvider) { params.apiKey = openAIEndpointConfig.azureOpenAIApiKey; } if (!params.baseURL) { delete params.baseURL; } params.defaultHeaders = { ...params.defaultHeaders, "User-Agent": params.defaultHeaders?.["User-Agent"] ? `${params.defaultHeaders["User-Agent"]}: langchainjs-azure-openai-v2` : `langchainjs-azure-openai-v2`, }; this.client = new AzureOpenAIClient({ apiVersion: this.azureOpenAIApiVersion, azureADTokenProvider: this.azureADTokenProvider, deployment: this.azureOpenAIApiDeploymentName, ...params, }); } const requestOptions: OpenAICoreRequestOptions = {}; if (this.azureOpenAIApiKey) { requestOptions.headers = { "api-key": this.azureOpenAIApiKey, ...requestOptions.headers, }; requestOptions.query = { "api-version": this.azureOpenAIApiVersion, ...requestOptions.query, }; } return this.caller.call(async () => { try { const res = await this.client.embeddings.create( request, requestOptions ); return res; } catch (e) { const error = wrapOpenAIClientError(e); throw error; } }); } }
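The subclass above mainly remaps the Azure-specific constructor aliases (`deploymentName`, `openAIApiVersion`, `apiKey`) onto the base embeddings fields and swaps in the `AzureOpenAIClient`. A sketch of instantiation under those assumptions (the deployment name and API version are placeholders; the environment variable names mirror the ones read by the base class):

```typescript
import { AzureOpenAIEmbeddings } from "@langchain/openai";

const embeddings = new AzureOpenAIEmbeddings({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
  // `deploymentName` is an alias for `azureOpenAIApiDeploymentName` above.
  deploymentName: "my-embeddings-deployment",
  // `openAIApiVersion` is an alias for `azureOpenAIApiVersion` above.
  openAIApiVersion: "2024-02-01",
});

const vector = await embeddings.embedQuery("Hello world");
console.log(vector.length);
```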
/** * Azure OpenAI chat model integration. * * Setup: * Install `@langchain/openai` and set the following environment variables: * * ```bash * npm install @langchain/openai * export AZURE_OPENAI_API_KEY="your-api-key" * export AZURE_OPENAI_API_DEPLOYMENT_NAME="your-deployment-name" * export AZURE_OPENAI_API_VERSION="your-version" * export AZURE_OPENAI_BASE_PATH="your-base-path" * ``` * * ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html#constructor) * * ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html) * * Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`. `.stream`, `.batch`, etc. * They can also be passed via `.bind`, or the second arg in `.bindTools`, like shown in the examples below: * * ```typescript * // When calling `.bind`, call options should be passed via the first argument * const llmWithArgsBound = llm.bind({ * stop: ["\n"], * tools: [...], * }); * * // When calling `.bindTools`, call options should be passed via the second argument * const llmWithTools = llm.bindTools( * [...], * { * tool_choice: "auto", * } * ); * ``` * * ## Examples * * <details open> * <summary><strong>Instantiate</strong></summary> * * ```typescript * import { AzureChatOpenAI } from '@langchain/openai'; * * const llm = new AzureChatOpenAI({ * azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY * azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME * azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME * azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION * temperature: 0, * maxTokens: undefined, * timeout: undefined, * maxRetries: 2, * // apiKey: "...", * // baseUrl: "...", * // other params... 
* }); * ``` * </details> * * <br /> * * <details> * <summary><strong>Invoking</strong></summary> * * ```typescript * const input = `Translate "I love programming" into French.`; * * // Models also accept a list of chat messages or a formatted prompt * const result = await llm.invoke(input); * console.log(result); * ``` * * ```txt * AIMessage { * "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz", * "content": "J'adore la programmation.", * "response_metadata": { * "tokenUsage": { * "completionTokens": 5, * "promptTokens": 28, * "totalTokens": 33 * }, * "finish_reason": "stop", * "system_fingerprint": "fp_3aa7262c27" * }, * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Streaming Chunks</strong></summary> * * ```typescript * for await (const chunk of await llm.stream(input)) { * console.log(chunk); * } * ``` * * ```txt * AIMessageChunk { * "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs", * "content": "" * } * AIMessageChunk { * "content": "J" * } * AIMessageChunk { * "content": "'adore" * } * AIMessageChunk { * "content": " la" * } * AIMessageChunk { * "content": " programmation",, * } * AIMessageChunk { * "content": ".",, * } * AIMessageChunk { * "content": "", * "response_metadata": { * "finish_reason": "stop", * "system_fingerprint": "fp_c9aa9c0491" * }, * } * AIMessageChunk { * "content": "", * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Aggregate Streamed Chunks</strong></summary> * * ```typescript * import { AIMessageChunk } from '@langchain/core/messages'; * import { concat } from '@langchain/core/utils/stream'; * * const stream = await llm.stream(input); * let full: AIMessageChunk | undefined; * for await (const chunk of stream) { * full = !full ? chunk : concat(full, chunk); * } * console.log(full); * ``` * * ```txt * AIMessageChunk { * "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu", * "content": "J'adore la programmation.", * "response_metadata": { * "prompt": 0, * "completion": 0, * "finish_reason": "stop", * }, * "usage_metadata": { * "input_tokens": 28, * "output_tokens": 5, * "total_tokens": 33 * } * } * ``` * </details> * * <br /> * * <details> * <summary><strong>Bind tools</strong></summary> * * ```typescript * import { z } from 'zod'; * * const GetWeather = { * name: "GetWeather", * description: "Get the current weather in a given location", * schema: z.object({ * location: z.string().describe("The city and state, e.g. San Francisco, CA") * }), * } * * const GetPopulation = { * name: "GetPopulation", * description: "Get the current population in a given location", * schema: z.object({ * location: z.string().describe("The city and state, e.g. San Francisco, CA") * }), * } * * const llmWithTools = llm.bindTools([GetWeather, GetPopulation]); * const aiMsg = await llmWithTools.invoke( * "Which city is hotter today and which is bigger: LA or NY?" 
* ); * console.log(aiMsg.tool_calls); * ``` * * ```txt * [ * { * name: 'GetWeather', * args: { location: 'Los Angeles, CA' }, * type: 'tool_call', * id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK' * }, * { * name: 'GetWeather', * args: { location: 'New York, NY' }, * type: 'tool_call', * id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX' * }, * { * name: 'GetPopulation', * args: { location: 'Los Angeles, CA' }, * type: 'tool_call', * id: 'call_kL3OXxaq9OjIKqRTpvjaCH14' * }, * { * name: 'GetPopulation', * args: { location: 'New York, NY' }, * type: 'tool_call', * id: 'call_s9KQB1UWj45LLGaEnjz0179q' * } * ] * ``` * </details> * * <br /> * * <details> * <summary><strong>Structured Output</strong></summary> * * ```typescript * import { z } from 'zod'; * * const Joke = z.object({ * setup: z.string().describe("The setup of the joke"), * punchline: z.string().describe("The punchline to the joke"), * rating: z.number().optional().describe("How funny the joke is, from 1 to 10") * }).describe('Joke to tell user.'); * * const structuredLlm = llm.withStructuredOutput(Joke, { name: "Joke" }); * const jokeResult = await structuredLlm.invoke("Tell me a joke about cats"); * console.log(jokeResult); * ``` * * ```txt * { * setup: 'Why was the cat sitting on the computer?',
/* eslint-disable no-process-env */ /* eslint-disable @typescript-eslint/no-explicit-any */ import { test, jest, expect } from "@jest/globals"; import { AIMessageChunk, BaseMessage, ChatMessage, HumanMessage, SystemMessage, } from "@langchain/core/messages"; import { ChatGeneration, LLMResult } from "@langchain/core/outputs"; import { ChatPromptValue } from "@langchain/core/prompt_values"; import { PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, } from "@langchain/core/prompts"; import { CallbackManager } from "@langchain/core/callbacks/manager"; import { NewTokenIndices } from "@langchain/core/callbacks/base"; import { InMemoryCache } from "@langchain/core/caches"; import { concat } from "@langchain/core/utils/stream"; import { ChatOpenAI } from "../chat_models.js"; // Save the original value of the 'LANGCHAIN_CALLBACKS_BACKGROUND' environment variable const originalBackground = process.env.LANGCHAIN_CALLBACKS_BACKGROUND; test("Test ChatOpenAI Generate", async () => { const chat = new ChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, n: 2, }); const message = new HumanMessage("Hello!"); const res = await chat.generate([[message], [message]]); expect(res.generations.length).toBe(2); for (const generation of res.generations) { expect(generation.length).toBe(2); for (const message of generation) { // console.log(message.text); expect(typeof message.text).toBe("string"); } } // console.log({ res }); }); test("Test ChatOpenAI invoke fails with proper error", async () => { const chat = new ChatOpenAI({ model: "gpt-4o-mini", maxTokens: 10, n: 2, apiKey: "bad", }); const message = new HumanMessage("Hello!"); let authError; try { await chat.invoke([message]); } catch (e) { authError = e; } expect(authError).toBeDefined(); expect((authError as any)?.lc_error_code).toEqual("MODEL_AUTHENTICATION"); }); test("Test ChatOpenAI invoke to unknown model fails with proper error", async () => { const chat = new ChatOpenAI({ model: "badbadbad", maxTokens: 10, n: 2, }); const message = new HumanMessage("Hello!"); let authError; try { await chat.invoke([message]); } catch (e) { authError = e; } expect(authError).toBeDefined(); expect((authError as any)?.lc_error_code).toEqual("MODEL_NOT_FOUND"); }); test("Test ChatOpenAI Generate throws when one of the calls fails", async () => { const chat = new ChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, n: 2, }); const message = new HumanMessage("Hello!"); await expect(() => chat.generate([[message], [message]], { signal: AbortSignal.timeout(10), }) ).rejects.toThrow(); }); test("Test ChatOpenAI tokenUsage", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. 
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let tokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0, }; const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, callbackManager: CallbackManager.fromHandlers({ async handleLLMEnd(output: LLMResult) { tokenUsage = output.llmOutput?.tokenUsage; }, }), }); const message = new HumanMessage("Hello"); await model.invoke([message]); expect(tokenUsage.promptTokens).toBeGreaterThan(0); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }); test("Test ChatOpenAI tokenUsage with a batch", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let tokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0, }; const model = new ChatOpenAI({ temperature: 0, modelName: "gpt-3.5-turbo", callbackManager: CallbackManager.fromHandlers({ async handleLLMEnd(output: LLMResult) { tokenUsage = output.llmOutput?.tokenUsage; }, }), }); await model.generate([ [new HumanMessage("Hello")], [new HumanMessage("Hi")], ]); expect(tokenUsage.promptTokens).toBeGreaterThan(0); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }); test("Test ChatOpenAI in streaming mode", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let nrNewTokens = 0; let streamedCompletion = ""; const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", streaming: true, maxTokens: 10, callbacks: [ { async handleLLMNewToken(token: string) { nrNewTokens += 1; streamedCompletion += token; }, }, ], }); const message = new HumanMessage("Hello!"); const result = await model.invoke([message]); // console.log(result); expect(nrNewTokens > 0).toBe(true); expect(result.content).toBe(streamedCompletion); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }, 10000); test("Test ChatOpenAI in streaming mode with n > 1 and multiple prompts", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. 
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let nrNewTokens = 0; const streamedCompletions = [ ["", ""], ["", ""], ]; const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", streaming: true, maxTokens: 10, n: 2, callbacks: [ { async handleLLMNewToken(token: string, idx: NewTokenIndices) { nrNewTokens += 1; streamedCompletions[idx.prompt][idx.completion] += token; }, }, ], }); const message1 = new HumanMessage("Hello!"); const message2 = new HumanMessage("Bye!"); const result = await model.generate([[message1], [message2]]); // console.log(result.generations); expect(nrNewTokens > 0).toBe(true); expect(result.generations.map((g) => g.map((gg) => gg.text))).toEqual( streamedCompletions ); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }, 10000); test("Test ChatOpenAI prompt value", async () => { const chat = new ChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, n: 2, }); const message = new HumanMessage("Hello!"); const res = await chat.generatePrompt([new ChatPromptValue([message])]); expect(res.generations.length).toBe(1); for (const generation of res.generations) { expect(generation.length).toBe(2); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for (const g of generation) { // console.log(g.text); } } // console.log({ res }); });
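// A sketch, not part of the original suite: several tests above repeat the same
// save/set/restore dance around LANGCHAIN_CALLBACKS_BACKGROUND so that callback handlers
// finish before the test returns. If that pattern keeps spreading, it could be factored
// into a small helper along these lines; the helper name is made up for illustration.
async function withBackgroundCallbacksDisabled<T>(
  fn: () => Promise<T>
): Promise<T> {
  const original = process.env.LANGCHAIN_CALLBACKS_BACKGROUND;
  process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false";
  try {
    return await fn();
  } finally {
    // Restore the previous value, deleting the key if it was unset before.
    if (original === undefined) {
      delete process.env.LANGCHAIN_CALLBACKS_BACKGROUND;
    } else {
      process.env.LANGCHAIN_CALLBACKS_BACKGROUND = original;
    }
  }
}
// Example usage inside a test body:
// await withBackgroundCallbacksDisabled(async () => {
//   const model = new ChatOpenAI({ model: "gpt-3.5-turbo", maxTokens: 10 });
//   await model.invoke("Hello");
// });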
test("OpenAI Chat, docs, prompt templates", async () => { const chat = new ChatOpenAI({ temperature: 0, maxTokens: 10 }); const systemPrompt = PromptTemplate.fromTemplate( "You are a helpful assistant that translates {input_language} to {output_language}." ); const chatPrompt = ChatPromptTemplate.fromMessages([ new SystemMessagePromptTemplate(systemPrompt), HumanMessagePromptTemplate.fromTemplate("{text}"), ]); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const responseA = await chat.generatePrompt([ await chatPrompt.formatPromptValue({ input_language: "English", output_language: "French", text: "I love programming.", }), ]); // console.log(responseA.generations); }, 5000); test("Test OpenAI with stop", async () => { const model = new ChatOpenAI({ maxTokens: 5 }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke([new HumanMessage("Print hello world")], { stop: ["world"], }); // console.log({ res }); }); test("Test OpenAI with stop in object", async () => { const model = new ChatOpenAI({ maxTokens: 5 }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke([new HumanMessage("Print hello world")], { stop: ["world"], }); // console.log({ res }); }); test("Test OpenAI with timeout in call options", async () => { const model = new ChatOpenAI({ maxTokens: 5, maxRetries: 0 }); await expect(() => model.invoke([new HumanMessage("Print hello world")], { options: { timeout: 10 }, }) ).rejects.toThrow(); }, 5000); test("Test OpenAI with timeout in call options and node adapter", async () => { const model = new ChatOpenAI({ maxTokens: 5, maxRetries: 0 }); await expect(() => model.invoke([new HumanMessage("Print hello world")], { options: { timeout: 10 }, }) ).rejects.toThrow(); }, 5000); test("Test OpenAI with signal in call options", async () => { const model = new ChatOpenAI({ maxTokens: 5 }); const controller = new AbortController(); await expect(() => { const ret = model.invoke([new HumanMessage("Print hello world")], { options: { signal: controller.signal }, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); test("Test OpenAI with signal in call options and node adapter", async () => { const model = new ChatOpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); const controller = new AbortController(); await expect(() => { const ret = model.invoke([new HumanMessage("Print hello world")], { options: { signal: controller.signal }, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); function createSystemChatMessage(text: string, name?: string) { const msg = new SystemMessage(text); msg.name = name; return msg; } function createSampleMessages(): BaseMessage[] { // same example as in https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb return [ createSystemChatMessage( "You are a helpful, pattern-following assistant that translates corporate jargon into plain English." 
), createSystemChatMessage( "New synergies will help drive top-line growth.", "example_user" ), createSystemChatMessage( "Things working well together will increase revenue.", "example_assistant" ), createSystemChatMessage( "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.", "example_user" ), createSystemChatMessage( "Let's talk later when we're less busy about how to do better.", "example_assistant" ), new HumanMessage( "This late pivot means we don't have time to boil the ocean for the client deliverable." ), ]; } test("getNumTokensFromMessages gpt-3.5-turbo-0301 model for sample input", async () => { const messages: BaseMessage[] = createSampleMessages(); const chat = new ChatOpenAI({ openAIApiKey: "dummy", modelName: "gpt-3.5-turbo-0301", }); const { totalCount } = await chat.getNumTokensFromMessages(messages); expect(totalCount).toBe(127); }); test("getNumTokensFromMessages gpt-4-0314 model for sample input", async () => { const messages: BaseMessage[] = createSampleMessages(); const chat = new ChatOpenAI({ openAIApiKey: "dummy", modelName: "gpt-4-0314", }); const { totalCount } = await chat.getNumTokensFromMessages(messages); expect(totalCount).toBe(129); }); test("Test OpenAI with specific roles in ChatMessage", async () => { const chat = new ChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10 }); const system_message = new ChatMessage( "You are to chat with a user.", "system" ); const user_message = new ChatMessage("Hello!", "user"); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await chat.invoke([system_message, user_message]); // console.log({ res }); }); test("Test ChatOpenAI stream method", async () => { const model = new ChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo" }); const stream = await model.stream("Print hello world."); const chunks = []; for await (const chunk of stream) { // console.log(chunk); chunks.push(chunk); } expect(chunks.length).toBeGreaterThan(1); }); test("Test ChatOpenAI stream method with abort", async () => { await expect(async () => { const model = new ChatOpenAI({ maxTokens: 100, modelName: "gpt-3.5-turbo", }); const stream = await model.stream( "How is your day going? Be extremely verbose.", { signal: AbortSignal.timeout(500), } ); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); } }).rejects.toThrow(); }); test("Test ChatOpenAI stream method with early break", async () => { const model = new ChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo" }); const stream = await model.stream( "How is your day going? Be extremely verbose." ); let i = 0; // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); i += 1; if (i > 10) { break; } } }); test("Test ChatOpenAI stream method, timeout error thrown from SDK", async () => { await expect(async () => { const model = new ChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo", timeout: 1, maxRetries: 0, }); const stream = await model.stream( "How is your day going? Be extremely verbose." ); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); } }).rejects.toThrow(); });
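// A sketch, not part of the original suite: the getNumTokensFromMessages tests above pin
// exact counts for specific legacy model snapshots, but the same method is also useful
// for budgeting a request before sending it. Assumes this file's ChatOpenAI and
// BaseMessage imports; the 4096 limit is an arbitrary illustrative number, not a real
// model limit.
async function invokeIfWithinBudget(messages: BaseMessage[]) {
  const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });
  const { totalCount } = await model.getNumTokensFromMessages(messages);
  if (totalCount > 4096) {
    throw new Error(`Prompt too long: ${totalCount} tokens`);
  }
  return model.invoke(messages);
}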
test("Test ChatOpenAI token usage reporting for streaming function calls", async () => { const humanMessage = "What a beautiful day!"; const extractionFunctionSchema = { name: "extractor", description: "Extracts fields from the input.", parameters: { type: "object", properties: { tone: { type: "string", enum: ["positive", "negative"], description: "The overall tone of the input", }, word_count: { type: "number", description: "The number of words in the input", }, chat_response: { type: "string", description: "A response to the human's input", }, }, required: ["tone", "word_count", "chat_response"], }, }; const callOptions = { seed: 42, functions: [extractionFunctionSchema], function_call: { name: "extractor" }, }; const constructorArgs = { model: "gpt-3.5-turbo", temperature: 0, }; const streamingModel = new ChatOpenAI({ ...constructorArgs, streaming: true, }).bind(callOptions); const nonStreamingModel = new ChatOpenAI({ ...constructorArgs, streaming: false, }).bind(callOptions); const [nonStreamingResult, streamingResult] = await Promise.all([ nonStreamingModel.invoke([new HumanMessage(humanMessage)]), streamingModel.invoke([new HumanMessage(humanMessage)]), ]); const tokenUsageStreaming = nonStreamingResult.usage_metadata; const tokenUsageNonStreaming = streamingResult.usage_metadata; if (!tokenUsageStreaming || !tokenUsageNonStreaming) { throw new Error(`Token usage not found in response. Streaming: ${JSON.stringify(streamingResult || {})} Non-streaming: ${JSON.stringify(nonStreamingResult || {})}`); } if ( nonStreamingResult.additional_kwargs.function_call?.arguments && streamingResult.additional_kwargs.function_call?.arguments ) { const nonStreamingArguments = JSON.stringify( JSON.parse(nonStreamingResult.additional_kwargs.function_call.arguments) ); const streamingArguments = JSON.stringify( JSON.parse(streamingResult.additional_kwargs.function_call.arguments) ); if (nonStreamingArguments === streamingArguments) { expect(tokenUsageStreaming).toEqual(tokenUsageNonStreaming); } } expect(tokenUsageStreaming.input_tokens).toBeGreaterThan(0); expect(tokenUsageStreaming.output_tokens).toBeGreaterThan(0); expect(tokenUsageStreaming.total_tokens).toBeGreaterThan(0); expect(tokenUsageNonStreaming.input_tokens).toBeGreaterThan(0); expect(tokenUsageNonStreaming.output_tokens).toBeGreaterThan(0); expect(tokenUsageNonStreaming.total_tokens).toBeGreaterThan(0); }); test("Test ChatOpenAI token usage reporting for streaming calls", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. 
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let streamingTokenUsed = -1; let nonStreamingTokenUsed = -1; const systemPrompt = "You are a helpful assistant"; const question = "What is the color of the night sky?"; const streamingModel = new ChatOpenAI({ modelName: "gpt-3.5-turbo", streaming: true, maxRetries: 10, maxConcurrency: 10, temperature: 0, topP: 0, callbacks: [ { handleLLMEnd: async (output) => { streamingTokenUsed = output.llmOutput?.estimatedTokenUsage?.totalTokens; // console.log( // "streaming usage", // output.llmOutput?.estimatedTokenUsage // ); }, handleLLMError: async (_err) => { // console.error(err); }, }, ], }); const nonStreamingModel = new ChatOpenAI({ modelName: "gpt-3.5-turbo", streaming: false, maxRetries: 10, maxConcurrency: 10, temperature: 0, topP: 0, callbacks: [ { handleLLMEnd: async (output) => { nonStreamingTokenUsed = output.llmOutput?.tokenUsage?.totalTokens; // console.log("non-streaming usage", output.llmOutput?.estimated); }, handleLLMError: async (_err) => { // console.error(err); }, }, ], }); const [nonStreamingResult, streamingResult] = await Promise.all([ nonStreamingModel.generate([ [new SystemMessage(systemPrompt), new HumanMessage(question)], ]), streamingModel.generate([ [new SystemMessage(systemPrompt), new HumanMessage(question)], ]), ]); expect(streamingTokenUsed).toBeGreaterThan(-1); if ( nonStreamingResult.generations[0][0].text === streamingResult.generations[0][0].text ) { expect(streamingTokenUsed).toEqual(nonStreamingTokenUsed); } } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }); test("Finish reason is 'stop'", async () => { const model = new ChatOpenAI(); const response = await model.stream("Hello, how are you?"); let finalResult: AIMessageChunk | undefined; for await (const chunk of response) { if (finalResult) { finalResult = finalResult.concat(chunk); } else { finalResult = chunk; } } expect(finalResult).toBeTruthy(); expect(finalResult?.response_metadata?.finish_reason).toBe("stop"); }); test("Streaming tokens can be found in usage_metadata field", async () => { const model = new ChatOpenAI(); const response = await model.stream("Hello, how are you?"); let finalResult: AIMessageChunk | undefined; for await (const chunk of response) { if (finalResult) { finalResult = finalResult.concat(chunk); } else { finalResult = chunk; } } // console.log({ // usage_metadata: finalResult?.usage_metadata, // }); expect(finalResult).toBeTruthy(); expect(finalResult?.usage_metadata).toBeTruthy(); expect(finalResult?.usage_metadata?.input_tokens).toBeGreaterThan(0); expect(finalResult?.usage_metadata?.output_tokens).toBeGreaterThan(0); expect(finalResult?.usage_metadata?.total_tokens).toBeGreaterThan(0); }); test("streaming: true tokens can be found in usage_metadata field", async () => { const model = new ChatOpenAI({ streaming: true, }); const response = await model.invoke("Hello, how are you?", { stream_options: { include_usage: true, }, }); // console.log({ // usage_metadata: response?.usage_metadata, // }); expect(response).toBeTruthy(); expect(response?.usage_metadata).toBeTruthy(); expect(response?.usage_metadata?.input_tokens).toBeGreaterThan(0); expect(response?.usage_metadata?.output_tokens).toBeGreaterThan(0); expect(response?.usage_metadata?.total_tokens).toBeGreaterThan(0); }); test("streaming: streamUsage will not override stream_options", async () => { const model = new ChatOpenAI({ streaming: true, }); const response = await model.invoke("Hello, how are 
you?", { stream_options: { include_usage: false }, }); // console.log({ // usage_metadata: response?.usage_metadata, // }); expect(response).toBeTruthy(); expect(response?.usage_metadata).toBeFalsy(); }); test("streaming: streamUsage default is true", async () => { const model = new ChatOpenAI(); const response = await model.invoke("Hello, how are you?"); // console.log({ // usage_metadata: response?.usage_metadata, // }); expect(response).toBeTruthy(); expect(response?.usage_metadata).toBeTruthy(); expect(response?.usage_metadata?.input_tokens).toBeGreaterThan(0); expect(response?.usage_metadata?.output_tokens).toBeGreaterThan(0); expect(response?.usage_metadata?.total_tokens).toBeGreaterThan(0); });
/* eslint-disable no-process-env */ import { test, expect } from "@jest/globals"; import { LLMResult } from "@langchain/core/outputs"; import { StringPromptValue } from "@langchain/core/prompt_values"; import { CallbackManager } from "@langchain/core/callbacks/manager"; import { NewTokenIndices } from "@langchain/core/callbacks/base"; import { OpenAIChat } from "../legacy.js"; import { OpenAI } from "../llms.js"; // Save the original value of the 'LANGCHAIN_CALLBACKS_BACKGROUND' environment variable const originalBackground = process.env.LANGCHAIN_CALLBACKS_BACKGROUND; test("Test OpenAI", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke("Print hello world"); // console.log({ res }); }); test("Test OpenAI with stop", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.call("Print hello world", ["world"]); // console.log({ res }); }); test("Test OpenAI with stop in object", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke("Print hello world", { stop: ["world"] }); // console.log({ res }); }); test("Test OpenAI with timeout in call options", async () => { const model = new OpenAI({ maxTokens: 5, maxRetries: 0, modelName: "gpt-3.5-turbo-instruct", }); await expect(() => model.invoke("Print hello world", { timeout: 10, }) ).rejects.toThrow(); }, 5000); test("Test OpenAI with timeout in call options and node adapter", async () => { const model = new OpenAI({ maxTokens: 5, maxRetries: 0, modelName: "gpt-3.5-turbo-instruct", }); await expect(() => model.invoke("Print hello world", { timeout: 10, }) ).rejects.toThrow(); }, 5000); test("Test OpenAI with signal in call options", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); const controller = new AbortController(); await expect(() => { const ret = model.invoke("Print hello world", { signal: controller.signal, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); test("Test OpenAI with signal in call options and node adapter", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); const controller = new AbortController(); await expect(() => { const ret = model.invoke("Print hello world", { signal: controller.signal, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); test("Test OpenAI with concurrency == 1", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", maxConcurrency: 1, }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await Promise.all([ model.invoke("Print hello world"), model.invoke("Print hello world"), ]); // console.log({ res }); }); test("Test OpenAI with maxTokens -1", async () => { const model = new OpenAI({ maxTokens: -1, modelName: "gpt-3.5-turbo-instruct", }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.call("Print hello world", ["world"]); // console.log({ res }); }); test("Test OpenAI with chat model returns OpenAIChat", async () => { const 
model = new OpenAI({ modelName: "gpt-3.5-turbo" }); expect(model).toBeInstanceOf(OpenAIChat); const res = await model.invoke("Print hello world"); // console.log({ res }); expect(typeof res).toBe("string"); }); test("Test OpenAI with instruct model returns OpenAI", async () => { const model = new OpenAI({ modelName: "gpt-3.5-turbo-instruct" }); expect(model).toBeInstanceOf(OpenAI); const res = await model.invoke("Print hello world"); // console.log({ res }); expect(typeof res).toBe("string"); }); test("Test OpenAI with versioned instruct model returns OpenAI", async () => { const model = new OpenAI({ modelName: "gpt-3.5-turbo-instruct-0914" }); expect(model).toBeInstanceOf(OpenAI); const res = await model.invoke("Print hello world"); // console.log({ res }); expect(typeof res).toBe("string"); }); test("Test ChatOpenAI tokenUsage", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let tokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0, }; const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", callbackManager: CallbackManager.fromHandlers({ async handleLLMEnd(output: LLMResult) { tokenUsage = output.llmOutput?.tokenUsage; }, }), }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke("Hello"); // console.log({ res }); expect(tokenUsage.promptTokens).toBe(1); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }); test("Test OpenAI in streaming mode", async () => { let nrNewTokens = 0; let streamedCompletion = ""; const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", streaming: true, callbacks: CallbackManager.fromHandlers({ async handleLLMNewToken(token: string) { nrNewTokens += 1; streamedCompletion += token; }, }), }); const res = await model.invoke("Print hello world"); // console.log({ res }); expect(nrNewTokens > 0).toBe(true); expect(res).toBe(streamedCompletion); }); test("Test OpenAI in streaming mode with multiple prompts", async () => { let nrNewTokens = 0; const completions = [ ["", ""], ["", ""], ]; const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", streaming: true, n: 2, callbacks: CallbackManager.fromHandlers({ async handleLLMNewToken(token: string, idx: NewTokenIndices) { nrNewTokens += 1; completions[idx.prompt][idx.completion] += token; }, }), }); const res = await model.generate(["Print hello world", "print hello sea"]); // console.log( // res.generations, // res.generations.map((g) => g[0].generationInfo) // ); expect(nrNewTokens > 0).toBe(true); expect(res.generations.length).toBe(2); expect(res.generations.map((g) => g.map((gg) => gg.text))).toEqual( completions ); });
test("Test OpenAIChat in streaming mode with multiple prompts", async () => { let nrNewTokens = 0; const completions = [[""], [""]]; const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo", streaming: true, n: 1, callbacks: CallbackManager.fromHandlers({ async handleLLMNewToken(token: string, idx: NewTokenIndices) { nrNewTokens += 1; completions[idx.prompt][idx.completion] += token; }, }), }); const res = await model.generate(["Print hello world", "print hello sea"]); // console.log( // res.generations, // res.generations.map((g) => g[0].generationInfo) // ); expect(nrNewTokens > 0).toBe(true); expect(res.generations.length).toBe(2); expect(res.generations.map((g) => g.map((gg) => gg.text))).toEqual( completions ); }); test("Test OpenAI prompt value", async () => { const model = new OpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); const res = await model.generatePrompt([ new StringPromptValue("Print hello world"), ]); expect(res.generations.length).toBe(1); for (const generation of res.generations) { expect(generation.length).toBe(1); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for (const g of generation) { // console.log(g.text); } } // console.log({ res }); }); test("Test OpenAI stream method", async () => { const model = new OpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo-instruct", }); const stream = await model.stream("Print hello world."); const chunks = []; for await (const chunk of stream) { chunks.push(chunk); } expect(chunks.length).toBeGreaterThan(1); }); test("Test OpenAI stream method with abort", async () => { await expect(async () => { const model = new OpenAI({ maxTokens: 250, maxRetries: 0, modelName: "gpt-3.5-turbo-instruct", }); const stream = await model.stream( "How is your day going? Be extremely verbose.", { signal: AbortSignal.timeout(1000), } ); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); } }).rejects.toThrow(); }); test("Test OpenAI stream method with early break", async () => { const model = new OpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo-instruct", }); const stream = await model.stream( "How is your day going? Be extremely verbose." ); let i = 0; // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); i += 1; if (i > 5) { break; } } });
describe("response_format: json_schema", () => { const weatherSchema = z.object({ city: z.string().describe("The city to get the weather for"), state: z.string().describe("The state to get the weather for"), zipCode: z.string().describe("The zip code to get the weather for"), unit: z .enum(["fahrenheit", "celsius"]) .describe("The unit to get the weather in"), }); it("can invoke", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).bind({ response_format: { type: "json_schema", json_schema: { name: "get_current_weather", description: "Get the current weather in a location", schema: zodToJsonSchema(weatherSchema), strict: true, }, }, }); const response = await model.invoke( "What is the weather in San Francisco, 91626 CA?" ); const parsed = JSON.parse(response.content as string); expect(parsed).toHaveProperty("city"); expect(parsed).toHaveProperty("state"); expect(parsed).toHaveProperty("zipCode"); expect(parsed).toHaveProperty("unit"); }); it("can stream", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).bind({ response_format: { type: "json_schema", json_schema: { name: "get_current_weather", description: "Get the current weather in a location", schema: zodToJsonSchema(weatherSchema), strict: true, }, }, }); const stream = await model.stream( "What is the weather in San Francisco, 91626 CA?" ); let full: AIMessageChunk | undefined; for await (const chunk of stream) { full = !full ? chunk : concat(full, chunk); } expect(full).toBeDefined(); if (!full) return; const parsed = JSON.parse(full.content as string); expect(parsed).toHaveProperty("city"); expect(parsed).toHaveProperty("state"); expect(parsed).toHaveProperty("zipCode"); expect(parsed).toHaveProperty("unit"); }); it("can invoke with a zod schema passed in", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).bind({ response_format: { type: "json_schema", json_schema: { name: "get_current_weather", description: "Get the current weather in a location", schema: weatherSchema, strict: true, }, }, }); const response = await model.invoke( "What is the weather in San Francisco, 91626 CA?" ); const parsed = JSON.parse(response.content as string); expect(parsed).toHaveProperty("city"); expect(parsed).toHaveProperty("state"); expect(parsed).toHaveProperty("zipCode"); expect(parsed).toHaveProperty("unit"); }); it("can stream with a zod schema passed in", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).bind({ response_format: { type: "json_schema", json_schema: { name: "get_current_weather", description: "Get the current weather in a location", schema: weatherSchema, strict: true, }, }, }); const stream = await model.stream( "What is the weather in San Francisco, 91626 CA?" ); let full: AIMessageChunk | undefined; for await (const chunk of stream) { full = !full ? chunk : concat(full, chunk); } expect(full).toBeDefined(); if (!full) return; const parsed = JSON.parse(full.content as string); expect(parsed).toHaveProperty("city"); expect(parsed).toHaveProperty("state"); expect(parsed).toHaveProperty("zipCode"); expect(parsed).toHaveProperty("unit"); }); it("can be invoked with WSO", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).withStructuredOutput(weatherSchema, { name: "get_current_weather", method: "jsonSchema", strict: true, }); const response = await model.invoke( "What is the weather in San Francisco, 91626 CA?" 
); expect(response).toHaveProperty("city"); expect(response).toHaveProperty("state"); expect(response).toHaveProperty("zipCode"); expect(response).toHaveProperty("unit"); }); // Flaky test it.skip("can be streamed with WSO", async () => { const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06", }).withStructuredOutput(weatherSchema, { name: "get_current_weather", method: "jsonSchema", strict: true, }); const stream = await model.stream( "What is the weather in San Francisco, 91626 CA?" ); // It should yield a single chunk let full: z.infer<typeof weatherSchema> | undefined; for await (const chunk of stream) { full = chunk; } expect(full).toBeDefined(); if (!full) return; expect(full).toHaveProperty("city"); expect(full).toHaveProperty("state"); expect(full).toHaveProperty("zipCode"); expect(full).toHaveProperty("unit"); }); });
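// A sketch, not part of the original suite: the json_schema tests above only assert that
// the expected keys are present. Because the response format is generated from a Zod
// schema, the same schema can also validate the raw content at runtime. Assumes this
// file's z, zodToJsonSchema, and ChatOpenAI imports; the schema is redeclared here so the
// sketch stands alone.
async function validateJsonSchemaResponse() {
  const schema = z.object({
    city: z.string(),
    state: z.string(),
    zipCode: z.string(),
    unit: z.enum(["fahrenheit", "celsius"]),
  });
  const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06" }).bind({
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "get_current_weather",
        description: "Get the current weather in a location",
        schema: zodToJsonSchema(schema),
        strict: true,
      },
    },
  });
  const response = await model.invoke(
    "What is the weather in San Francisco, 91626 CA?"
  );
  // Throws a ZodError if the model output does not match the schema.
  return schema.parse(JSON.parse(response.content as string));
}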
const CACHED_TEXT = `## Components LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix. ### Chat models <span data-heading-keywords="chat model,chat models"></span> Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are generally newer models (older models are generally \`LLMs\`, see below). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages. Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This gives them the same interface as LLMs (and simpler to use). When a string is passed in as input, it will be converted to a \`HumanMessage\` under the hood before being passed to the underlying model. LangChain does not host any Chat Models, rather we rely on third party integrations. We have some standardized parameters when constructing ChatModels: - \`model\`: the name of the model Chat Models also accept other parameters that are specific to that integration. :::important Some chat models have been fine-tuned for **tool calling** and provide a dedicated API for it. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information. ::: For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models). #### Multimodality Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal outputs are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures. In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations. For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal). ### LLMs <span data-heading-keywords="llm,llms"></span> :::caution Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models), even for non-chat use cases. You are probably looking for [the section above instead](/docs/concepts/#chat-models). ::: Language models that takes a string as input and returns a string. These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above). Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as [Chat Models](/docs/concepts/#chat-models). When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model. 
LangChain does not host any LLMs, rather we rely on third party integrations. For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms). ### Message types Some language models take an array of messages as input and return a message. There are a few different types of messages. All messages have a \`role\`, \`content\`, and \`response_metadata\` property. The \`role\` describes WHO is saying the message. LangChain has different message classes for different roles. The \`content\` property describes the content of the message. This can be a few different things: - A string (most models deal this type of content) - A List of objects (this is used for multi-modal input, where the object contains information about that input type and that input location) #### HumanMessage This represents a message from the user. #### AIMessage This represents a message from the model. In addition to the \`content\` property, these messages also have: **\`response_metadata\`** The \`response_metadata\` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored. **\`tool_calls\`** These represent a decision from an language model to call a tool. They are included as part of an \`AIMessage\` output. They can be accessed from there with the \`.tool_calls\` property. This property returns a list of \`ToolCall\`s. A \`ToolCall\` is an object with the following arguments: - \`name\`: The name of the tool that should be called. - \`args\`: The arguments to that tool. - \`id\`: The id of that tool call. #### SystemMessage This represents a system message, which tells the model how to behave. Not every model provider supports this. #### ToolMessage This represents the result of a tool call. In addition to \`role\` and \`content\`, this message has: - a \`tool_call_id\` field which conveys the id of the call to the tool that was called to produce this result. - an \`artifact\` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model. #### (Legacy) FunctionMessage This is a legacy message type, corresponding to OpenAI's legacy function-calling API. \`ToolMessage\` should be used instead to correspond to the updated tool-calling API. This represents the result of a function call. In addition to \`role\` and \`content\`, this message has a \`name\` parameter which conveys the name of the function that was called to produce this result. ### Prompt templates <span data-heading-keywords="prompt,prompttemplate,chatprompttemplate"></span> Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output. Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in. Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages. There are a few different types of prompt templates: #### String PromptTemplates These prompt templates are used to format a single string, and generally are used for simpler inputs. 
For example, a common way to construct and use a PromptTemplate is as follows: \`\`\`typescript import { PromptTemplate } from "@langchain/core/prompts"; const promptTemplate = PromptTemplate.fromTemplate( "Tell me a joke about {topic}" ); await promptTemplate.invoke({ topic: "cats" }); \`\`\` #### ChatPromptTemplates These prompt templates are used to format an array of messages. These "templates" consist of an array of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows: \`\`\`typescript import { ChatPromptTemplate } from "@langchain/core/prompts"; const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["user", "Tell me a joke about {topic}"], ]); await promptTemplate.invoke({ topic: "cats" }); \`\`\` In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the \`topic\` variable the user passes in. #### MessagesPlaceholder <span data-heading-keywords="messagesplaceholder"></span> This prompt template is responsible for adding an array of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in an array of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder. \`\`\`typescript import { ChatPromptTemplate, MessagesPlaceholder, } from "@langchain/core/prompts"; import { HumanMessage } from "@langchain/core/messages"; const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], new MessagesPlaceholder("msgs"), ]); promptTemplate.invoke({ msgs: [new HumanMessage({ content: "hi!" })] }); \`\`\`
This will produce an array of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting an array of messages be slotted into a particular spot. An alternative way to accomplish the same thing without using the \`MessagesPlaceholder\` class explicitly is: \`\`\`typescript const promptTemplate = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{msgs}"], // <-- This is the changed part ]); \`\`\` For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates). ### Example Selectors One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts. For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors). ### Output parsers <span data-heading-keywords="output parser"></span> :::note The information here refers to parsers that take a text output from a model try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. See documentation for that [here](/docs/concepts/#function-tool-calling). ::: Responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks. Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs. There are two main methods an output parser must implement: - "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted. - "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure. And then one optional one: - "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Output parsers accept a string or \`BaseMessage\` as input and can return an arbitrary type. LangChain has many different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information: **Name**: The name of the output parser **Supports Streaming**: Whether the output parser supports streaming. **Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific arguments. **Output Type**: The output type of the object returned by the parser. **Description**: Our commentary on this output parser and when to use it. 
The current date is ${new Date().toISOString()}`; test.skip("system prompt caching", async () => { const model = new ChatOpenAI({ model: "gpt-4o-mini-2024-07-18", }); const date = new Date().toISOString(); const messages = [ { role: "system", content: `You are a pirate. Always respond in pirate dialect. The current date is ${date}.\nUse the following as context when answering questions: ${CACHED_TEXT}`, }, { role: "user", content: "What types of messages are supported in LangChain?", }, ]; const res = await model.invoke(messages); expect(res.response_metadata?.usage.prompt_tokens_details.cached_tokens).toBe( 0 ); await new Promise((resolve) => setTimeout(resolve, 5000)); const res2 = await model.invoke(messages); expect( res2.response_metadata?.usage.prompt_tokens_details.cached_tokens ).toBeGreaterThan(0); let aggregate; for await (const chunk of await model.stream(messages)) { aggregate = aggregate ? concat(aggregate, chunk) : chunk; } expect( aggregate?.response_metadata?.usage.prompt_tokens_details.cached_tokens ).toBeGreaterThan(0); });
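// A sketch, not part of the original suite: OpenAI only serves prompt tokens from cache
// once the shared prefix is long enough (reportedly on the order of 1,024+ tokens), which
// is why the skipped test above pads the system message with the long concepts document.
// Assumes this file's ChatOpenAI import and the CACHED_TEXT constant; the model name
// mirrors the test above.
async function checkPromptCacheHit() {
  const model = new ChatOpenAI({ model: "gpt-4o-mini-2024-07-18" });
  const messages = [
    {
      role: "system",
      content: `Use the following as context when answering questions: ${CACHED_TEXT}`,
    },
    {
      role: "user",
      content: "What types of messages are supported in LangChain?",
    },
  ];
  await model.invoke(messages); // first call populates the cache
  const second = await model.invoke(messages);
  // Non-zero once the shared prefix is served from the cache.
  return second.response_metadata?.usage?.prompt_tokens_details?.cached_tokens;
}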
test("Test Azure ChatOpenAI in streaming mode with n > 1 and multiple prompts", async () => { // Running LangChain callbacks in the background will sometimes cause the callbackManager to execute // after the test/llm call has already finished & returned. Set that environment variable to false // to prevent that from happening. process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; try { let nrNewTokens = 0; const streamedCompletions = [ ["", ""], ["", ""], ]; const model = new AzureChatOpenAI({ modelName: "gpt-3.5-turbo", streaming: true, maxTokens: 10, n: 2, callbacks: [ { async handleLLMNewToken(token: string, idx: NewTokenIndices) { nrNewTokens += 1; streamedCompletions[idx.prompt][idx.completion] += token; }, }, ], }); const message1 = new HumanMessage("Hello!"); const message2 = new HumanMessage("Bye!"); const result = await model.generate([[message1], [message2]]); expect(nrNewTokens > 0).toBe(true); expect(result.generations.map((g) => g.map((gg) => gg.text))).toEqual( streamedCompletions ); } finally { // Reset the environment variable process.env.LANGCHAIN_CALLBACKS_BACKGROUND = originalBackground; } }, 10000); test("Test Azure ChatOpenAI prompt value", async () => { const chat = new AzureChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, n: 2, }); const message = new HumanMessage("Hello!"); const res = await chat.generatePrompt([new ChatPromptValue([message])]); expect(res.generations.length).toBe(1); for (const generation of res.generations) { expect(generation.length).toBe(2); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for (const g of generation) { // console.log(g.text); } } // console.log({ res }); }); test("Test Azure OpenAI Chat, docs, prompt templates", async () => { const chat = new AzureChatOpenAI({ temperature: 0, maxTokens: 10 }); const systemPrompt = PromptTemplate.fromTemplate( "You are a helpful assistant that translates {input_language} to {output_language}." 
); const chatPrompt = ChatPromptTemplate.fromMessages([ new SystemMessagePromptTemplate(systemPrompt), HumanMessagePromptTemplate.fromTemplate("{text}"), ]); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const responseA = await chat.generatePrompt([ await chatPrompt.formatPromptValue({ input_language: "English", output_language: "French", text: "I love programming.", }), ]); // console.log(responseA.generations); }, 5000); test("Test Azure ChatOpenAI with stop", async () => { const model = new AzureChatOpenAI({ maxTokens: 5 }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.call( [new HumanMessage("Print hello world")], ["world"] ); // console.log({ res }); }); test("Test Azure ChatOpenAI with stop in object", async () => { const model = new AzureChatOpenAI({ maxTokens: 5 }); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await model.invoke([new HumanMessage("Print hello world")], { stop: ["world"], }); // console.log({ res }); }); test("Test Azure ChatOpenAI with timeout in call options", async () => { const model = new AzureChatOpenAI({ maxTokens: 5 }); await expect(() => model.invoke([new HumanMessage("Print hello world")], { timeout: 10 }) ).rejects.toThrow(); }, 5000); test("Test Azure ChatOpenAI with timeout in call options and node adapter", async () => { const model = new AzureChatOpenAI({ maxTokens: 5 }); await expect(() => model.invoke([new HumanMessage("Print hello world")], { timeout: 10 }) ).rejects.toThrow(); }, 5000); test("Test Azure ChatOpenAI with signal in call options", async () => { const model = new AzureChatOpenAI({ maxTokens: 5 }); const controller = new AbortController(); await expect(() => { const ret = model.invoke([new HumanMessage("Print hello world")], { signal: controller.signal, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); test("Test Azure ChatOpenAI with signal in call options and node adapter", async () => { const model = new AzureChatOpenAI({ maxTokens: 5, modelName: "gpt-3.5-turbo-instruct", }); const controller = new AbortController(); await expect(() => { const ret = model.invoke([new HumanMessage("Print hello world")], { signal: controller.signal, }); controller.abort(); return ret; }).rejects.toThrow(); }, 5000); test("Test Azure ChatOpenAI with specific roles in ChatMessage", async () => { const chat = new AzureChatOpenAI({ modelName: "gpt-3.5-turbo", maxTokens: 10, }); const system_message = new ChatMessage( "You are to chat with a user.", "system" ); const user_message = new ChatMessage("Hello!", "user"); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var const res = await chat.call([system_message, user_message]); // console.log({ res }); }); test("Test Azure ChatOpenAI stream method", async () => { const model = new AzureChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo", }); const stream = await model.stream("Print hello world."); const chunks = []; for await (const chunk of stream) { // console.log(chunk); chunks.push(chunk); } expect(chunks.length).toBeGreaterThan(1); }); test("Test Azure ChatOpenAI stream method with abort", async () => { await expect(async () => { const model = new AzureChatOpenAI({ maxTokens: 100, modelName: "gpt-3.5-turbo", }); const stream = await model.stream( "How is your day going? 
Be extremely verbose.", { signal: AbortSignal.timeout(500), } ); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); } }).rejects.toThrow(); }); test("Test Azure ChatOpenAI stream method with early break", async () => { const model = new AzureChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo", }); const stream = await model.stream( "How is your day going? Be extremely verbose." ); let i = 0; // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); i += 1; if (i > 10) { break; } } }); test("Test Azure ChatOpenAI stream method, timeout error thrown from SDK", async () => { await expect(async () => { const model = new AzureChatOpenAI({ maxTokens: 50, modelName: "gpt-3.5-turbo", timeout: 1, maxRetries: 0, }); const stream = await model.stream( "How is your day going? Be extremely verbose." ); // @eslint-disable-next-line/@typescript-eslint/ban-ts-comment // @ts-expect-error unused var for await (const chunk of stream) { // console.log(chunk); } }).rejects.toThrow(); });