id
stringlengths 6
6
| text
stringlengths 20
17.2k
| title
stringclasses 1
value |
|---|---|---|
152852
|
"TextLoader": {"How to retrieve using multiple vectors per document": "https://python.langchain.com/docs/how_to/multi_vector/", "How to do retrieval with contextual compression": "https://python.langchain.com/docs/how_to/contextual_compression/", "How to load documents from a directory": "https://python.langchain.com/docs/how_to/document_loader_directory/", "How to create and query vector stores": "https://python.langchain.com/docs/how_to/vectorstores/", "How to use the Parent Document Retriever": "https://python.langchain.com/docs/how_to/parent_document_retriever/", "How to use a vectorstore as a retriever": "https://python.langchain.com/docs/how_to/vectorstore_retriever/", "Caching": "https://python.langchain.com/docs/how_to/caching_embeddings/", "AzureAISearchRetriever": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/", "Kinetica Vectorstore based Retriever": "https://python.langchain.com/docs/integrations/retrievers/kinetica/", "JaguarDB Vector Database": "https://python.langchain.com/docs/integrations/retrievers/jaguar/", "LLMLingua Document Compressor": "https://python.langchain.com/docs/integrations/retrievers/llmlingua/", "Cohere reranker": "https://python.langchain.com/docs/integrations/retrievers/cohere-reranker/", "SingleStoreDB": "https://python.langchain.com/docs/integrations/retrievers/singlestoredb/", "FlashRank reranker": "https://python.langchain.com/docs/integrations/retrievers/flashrank-reranker/", "Confident": "https://python.langchain.com/docs/integrations/callbacks/confident/", "UpTrain": "https://python.langchain.com/docs/integrations/callbacks/uptrain/", "Upstash Vector": "https://python.langchain.com/docs/integrations/vectorstores/upstash/", "VDMS": "https://python.langchain.com/docs/integrations/providers/vdms/", "Vectara Chat": "https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat/", "LanceDB": "https://python.langchain.com/docs/integrations/vectorstores/lancedb/", "Kinetica Vectorstore API": "https://python.langchain.com/docs/integrations/vectorstores/kinetica/", "SQLite-VSS": "https://python.langchain.com/docs/integrations/vectorstores/sqlitevss/", "Vald": "https://python.langchain.com/docs/integrations/vectorstores/vald/", "Weaviate": "https://python.langchain.com/docs/integrations/vectorstores/weaviate/", "Jaguar Vector Database": "https://python.langchain.com/docs/integrations/vectorstores/jaguar/", "SAP HANA Cloud Vector Engine": "https://python.langchain.com/docs/integrations/vectorstores/sap_hanavector/", "DashVector": "https://python.langchain.com/docs/integrations/vectorstores/dashvector/", "Databricks Vector Search": "https://python.langchain.com/docs/integrations/vectorstores/databricks_vector_search/", "ScaNN": "https://python.langchain.com/docs/integrations/vectorstores/scann/", "Xata": "https://python.langchain.com/docs/integrations/vectorstores/xata/", "Hippo": "https://python.langchain.com/docs/integrations/vectorstores/hippo/", "Vespa": "https://python.langchain.com/docs/integrations/vectorstores/vespa/", "Rockset": "https://python.langchain.com/docs/integrations/vectorstores/rockset/", "DingoDB": "https://python.langchain.com/docs/integrations/vectorstores/dingo/", "Zilliz": "https://python.langchain.com/docs/integrations/vectorstores/zilliz/", "Azure Cosmos DB Mongo vCore": "https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db/", "viking DB": "https://python.langchain.com/docs/integrations/vectorstores/vikingdb/", "Annoy": "https://python.langchain.com/docs/integrations/vectorstores/annoy/", "Couchbase ": "https://python.langchain.com/docs/integrations/vectorstores/couchbase/", "Typesense": "https://python.langchain.com/docs/integrations/vectorstores/typesense/", "Momento Vector Index (MVI)": "https://python.langchain.com/docs/integrations/vectorstores/momento_vector_index/", "TiDB Vector": "https://python.langchain.com/docs/integrations/vectorstores/tidb_vector/", "Relyt": "https://python.langchain.com/docs/integrations/vectorstores/relyt/", "Atlas": "https://python.langchain.com/docs/integrations/vectorstores/atlas/", "Activeloop Deep Lake": "https://python.langchain.com/docs/integrations/vectorstores/activeloop_deeplake/", "vlite": "https://python.langchain.com/docs/integrations/vectorstores/vlite/", "Neo4j Vector Index": "https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/", "Lantern": "https://python.langchain.com/docs/integrations/vectorstores/lantern/", "Tair": "https://python.langchain.com/docs/integrations/vectorstores/tair/", "DuckDB": "https://python.langchain.com/docs/integrations/vectorstores/duckdb/", "Alibaba Cloud OpenSearch": "https://python.langchain.com/docs/integrations/vectorstores/alibabacloud_opensearch/", "Clarifai": "https://python.langchain.com/docs/integrations/vectorstores/clarifai/", "scikit-learn": "https://python.langchain.com/docs/integrations/vectorstores/sklearn/", "Tencent Cloud VectorDB": "https://python.langchain.com/docs/integrations/vectorstores/tencentvectordb/", "DocArray HnswSearch": "https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw/", "MyScale": "https://python.langchain.com/docs/integrations/vectorstores/myscale/", "TileDB": "https://python.langchain.com/docs/integrations/vectorstores/tiledb/", "Google Memorystore for Redis": "https://python.langchain.com/docs/integrations/vectorstores/google_memorystore_redis/", "Tigris": "https://python.langchain.com/docs/integrations/vectorstores/tigris/", "China Mobile ECloud ElasticSearch VectorSearch": "https://python.langchain.com/docs/integrations/vectorstores/ecloud_vector_search/", "Bagel": "https://python.langchain.com/docs/integrations/vectorstores/bagel/", "Baidu Cloud ElasticSearch VectorSearch": "https://python.langchain.com/docs/integrations/vectorstores/baiducloud_vector_search/", "AwaDB": "https://python.langchain.com/docs/integrations/vectorstores/awadb/", "Supabase (Postgres)": "https://python.langchain.com/docs/integrations/vectorstores/supabase/", "SurrealDB": "https://python.langchain.com/docs/integrations/vectorstores/surrealdb/", "OpenSearch": "https://python.langchain.com/docs/integrations/vectorstores/opensearch/", "Faiss (Async)": "https://python.langchain.com/docs/integrations/vectorstores/faiss_async/", "BagelDB": "https://python.langchain.com/docs/integrations/vectorstores/bageldb/", "ManticoreSearch VectorStore": "https://python.langchain.com/docs/integrations/vectorstores/manticore_search/", "Azure AI Search": "https://python.langchain.com/docs/integrations/vectorstores/azuresearch/", "USearch": "https://python.langchain.com/docs/integrations/vectorstores/usearch/", "PGVecto.rs": "https://python.langchain.com/docs/integrations/vectorstores/pgvecto_rs/", "Marqo": "https://python.langchain.com/docs/integrations/vectorstores/marqo/", "DocArray InMemorySearch": "https://python.langchain.com/docs/integrations/vectorstores/docarray_in_memory/", "Postgres Embedding": "https://python.langchain.com/docs/integrations/vectorstores/pgembedding/", "Intel's Visual Data Management System (VDMS)": "https://python.langchain.com/docs/integrations/vectorstores/vdms/", "Timescale Vector (Postgres)": "https://python.langchain.com/docs/integrations/vectorstores/timescalevector/", "Epsilla": "https://python.langchain.com/docs/integrations/vectorstores/epsilla/", "Amazon Document DB": "https://python.langchain.com/docs/integrations/vectorstores/documentdb/", "SemaDB": "https://python.langchain.com/docs/integrations/vectorstores/semadb/", "AnalyticDB": "https://python.langchain.com/docs/integrations/vectorstores/analyticdb/", "Hologres": "https://python.langchain.com/docs/integrations/vectorstores/hologres/", "Baidu VectorDB": "https://python.langchain.com/docs/integrations/vectorstores/baiduvectordb/",
| |
152854
|
"EmbeddingsFilter": {"How to do retrieval with contextual compression": "https://python.langchain.com/docs/how_to/contextual_compression/", "How to get a RAG application to add citations": "https://python.langchain.com/docs/how_to/qa_citations/"}, "DocumentCompressorPipeline": {"How to do retrieval with contextual compression": "https://python.langchain.com/docs/how_to/contextual_compression/"}, "EmbeddingsRedundantFilter": {"How to do retrieval with contextual compression": "https://python.langchain.com/docs/how_to/contextual_compression/", "LOTR (Merger Retriever)": "https://python.langchain.com/docs/integrations/retrievers/merger_retriever/"}, "Comparator": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "Comparison": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "Operation": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "Operator": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "StructuredQuery": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "ChromaTranslator": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/", "How to do \"self-querying\" retrieval": "https://python.langchain.com/docs/how_to/self_query/"}, "ElasticsearchTranslator": {"How to construct filters for query analysis": "https://python.langchain.com/docs/how_to/query_constructing_filters/"}, "WikipediaQueryRun": {"How to use built-in tools and toolkits": "https://python.langchain.com/docs/how_to/tools_builtin/", "Wikipedia": "https://python.langchain.com/docs/integrations/tools/wikipedia/"}, "WikipediaAPIWrapper": {"How to use built-in tools and toolkits": "https://python.langchain.com/docs/how_to/tools_builtin/", "Wikipedia": "https://python.langchain.com/docs/integrations/tools/wikipedia/", "Zep Open Source Memory": "https://python.langchain.com/docs/integrations/memory/zep_memory/", "Zep Cloud Memory": "https://python.langchain.com/docs/integrations/memory/zep_memory_cloud/"}, "CallbackManagerForRetrieverRun": {"How to create a custom Retriever": "https://python.langchain.com/docs/how_to/custom_retriever/", "How to add scores to retriever results": "https://python.langchain.com/docs/how_to/add_scores_retriever/"}, "BaseRetriever": {"How to create a custom Retriever": "https://python.langchain.com/docs/how_to/custom_retriever/"}, "LLMGraphTransformer": {"How to construct knowledge graphs": "https://python.langchain.com/docs/how_to/graph_constructing/"}, "RetryOutputParser": {"How to retry when a parsing error occurs": "https://python.langchain.com/docs/how_to/output_parser_retry/"}, "TimeWeightedVectorStoreRetriever": {"How to use a time-weighted vector store retriever": "https://python.langchain.com/docs/how_to/time_weighted_vectorstore/"}, "InMemoryDocstore": {"How to use a time-weighted vector store retriever": "https://python.langchain.com/docs/how_to/time_weighted_vectorstore/", "Annoy": "https://python.langchain.com/docs/integrations/vectorstores/annoy/", "Faiss": "https://python.langchain.com/docs/integrations/vectorstores/faiss/"}, "mock_now": {"How to use a time-weighted vector store retriever": "https://python.langchain.com/docs/how_to/time_weighted_vectorstore/"}, "RunnableGenerator": {"How to create a custom Output Parser": "https://python.langchain.com/docs/how_to/output_parser_custom/"}, "OutputParserException": {"How to create a custom Output Parser": "https://python.langchain.com/docs/how_to/output_parser_custom/"}, "BaseOutputParser": {"How to create a custom Output Parser": "https://python.langchain.com/docs/how_to/output_parser_custom/", "How to use the MultiQueryRetriever": "https://python.langchain.com/docs/how_to/MultiQueryRetriever/"}, "BaseGenerationOutputParser": {"How to create a custom Output Parser": "https://python.langchain.com/docs/how_to/output_parser_custom/"}, "Generation": {"How to create a custom Output Parser": "https://python.langchain.com/docs/how_to/output_parser_custom/"}, "DirectoryLoader": {"How to load documents from a directory": "https://python.langchain.com/docs/how_to/document_loader_directory/", "AzureAISearchRetriever": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/", "Apache Doris": "https://python.langchain.com/docs/integrations/vectorstores/apache_doris/", "StarRocks": "https://python.langchain.com/docs/integrations/vectorstores/starrocks/"}, "PythonLoader": {"How to load documents from a directory": "https://python.langchain.com/docs/how_to/document_loader_directory/"}, "LanceDB": {"How to create and query vector stores": "https://python.langchain.com/docs/how_to/vectorstores/", "LanceDB": "https://python.langchain.com/docs/integrations/vectorstores/lancedb/"}, "SpacyTextSplitter": {"How to split text by tokens ": "https://python.langchain.com/docs/how_to/split_by_token/", "spaCy": "https://python.langchain.com/docs/integrations/providers/spacy/", "Atlas": "https://python.langchain.com/docs/integrations/vectorstores/atlas/"}, "SentenceTransformersTokenTextSplitter": {"How to split text by tokens ": "https://python.langchain.com/docs/how_to/split_by_token/"}, "NLTKTextSplitter": {"How to split text by tokens ": "https://python.langchain.com/docs/how_to/split_by_token/"}, "KonlpyTextSplitter": {"How to split text by tokens ": "https://python.langchain.com/docs/how_to/split_by_token/"}, "WikipediaRetriever": {"How to get a RAG application to add citations": "https://python.langchain.com/docs/how_to/qa_citations/", "WikipediaRetriever": "https://python.langchain.com/docs/integrations/retrievers/wikipedia/", "Wikipedia": "https://python.langchain.com/docs/integrations/providers/wikipedia/"}, "UnstructuredHTMLLoader": {"How to load HTML": "https://python.langchain.com/docs/how_to/document_loader_html/", "Unstructured": "https://python.langchain.com/docs/integrations/providers/unstructured/"}, "MultiQueryRetriever": {"How to use the MultiQueryRetriever": "https://python.langchain.com/docs/how_to/MultiQueryRetriever/", "UpTrain": "https://python.langchain.com/docs/integrations/callbacks/uptrain/", "Vectara": "https://python.langchain.com/docs/integrations/vectorstores/vectara/"}, "GraphCypherQAChain": {"How to best prompt for Graph-RAG": "https://python.langchain.com/docs/how_to/graph_prompting/", "Neo4j": "https://python.langchain.com/docs/integrations/graphs/neo4j_cypher/", "Memgraph": "https://python.langchain.com/docs/integrations/graphs/memgraph/", "Diffbot": "https://python.langchain.com/docs/integrations/graphs/diffbot/", "Apache AGE": "https://python.langchain.com/docs/integrations/graphs/apache_age/", "Build a Question Answering application over a Graph Database": "https://python.langchain.com/docs/tutorials/graph/"}, "Neo4jVector": {"How to best prompt for Graph-RAG": "https://python.langchain.com/docs/how_to/graph_prompting/", "Neo4j": "https://python.langchain.com/docs/integrations/providers/neo4j/", "Neo4j Vector Index": "https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/"}, "ParentDocumentRetriever": {"How to use the Parent Document Retriever": "https://python.langchain.com/docs/how_to/parent_document_retriever/"}, "InMemoryStore": {"How to use the Parent Document Retriever": "https://python.langchain.com/docs/how_to/parent_document_retriever/", "How to add scores to retriever results": "https://python.langchain.com/docs/how_to/add_scores_retriever/", "Fleet AI Context": "https://python.langchain.com/docs/integrations/retrievers/fleet_context/", "Docugami": "https://python.langchain.com/docs/integrations/document_loaders/docugami/"}, "YamlOutputParser": {"How to parse YAML output": "https://python.langchain.com/docs/how_to/output_parser_yaml/"}, "PipelinePromptTemplate": {"How to compose prompts together": "https://python.langchain.com/docs/how_to/prompts_composition/"},
| |
152858
|
"LaserEmbeddings": {"LASER Language-Agnostic SEntence Representations Embeddings by Meta AI": "https://python.langchain.com/docs/integrations/text_embedding/laser/", "Facebook - Meta": "https://python.langchain.com/docs/integrations/providers/facebook/"}, "OpenCLIPEmbeddings": {"OpenClip": "https://python.langchain.com/docs/integrations/text_embedding/open_clip/", "LanceDB": "https://python.langchain.com/docs/integrations/vectorstores/lancedb/", "SingleStoreDB": "https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/"}, "TitanTakeoffEmbed": {"Titan Takeoff": "https://python.langchain.com/docs/integrations/text_embedding/titan_takeoff/"}, "MistralAIEmbeddings": {"MistralAIEmbeddings": "https://python.langchain.com/docs/integrations/text_embedding/mistralai/", "MistralAI": "https://python.langchain.com/docs/integrations/providers/mistralai/"}, "SpacyEmbeddings": {"SpaCy": "https://python.langchain.com/docs/integrations/text_embedding/spacy_embedding/", "NanoPQ (Product Quantization)": "https://python.langchain.com/docs/integrations/retrievers/nanopq/", "spaCy": "https://python.langchain.com/docs/integrations/providers/spacy/"}, "DatabricksEmbeddings": {"Databricks": "https://python.langchain.com/docs/integrations/text_embedding/databricks/"}, "BaichuanTextEmbeddings": {"Baichuan Text Embeddings": "https://python.langchain.com/docs/integrations/text_embedding/baichuan/", "Baichuan": "https://python.langchain.com/docs/integrations/providers/baichuan/"}, "TogetherEmbeddings": {"TogetherEmbeddings": "https://python.langchain.com/docs/integrations/text_embedding/together/"}, "HuggingFaceInstructEmbeddings": {"Instruct Embeddings on Hugging Face": "https://python.langchain.com/docs/integrations/text_embedding/instruct_embeddings/", "Hugging Face": "https://python.langchain.com/docs/integrations/providers/huggingface/"}, "OracleEmbeddings": {"Oracle AI Vector Search: Generate Embeddings": "https://python.langchain.com/docs/integrations/text_embedding/oracleai/", "OracleAI Vector Search": "https://python.langchain.com/docs/integrations/providers/oracleai/"}, "QianfanEmbeddingsEndpoint": {"Baidu Qianfan": "https://python.langchain.com/docs/integrations/text_embedding/baidu_qianfan_endpoint/", "ERNIE": "https://python.langchain.com/docs/integrations/text_embedding/ernie/", "Baidu": "https://python.langchain.com/docs/integrations/providers/baidu/", "Baidu Cloud ElasticSearch VectorSearch": "https://python.langchain.com/docs/integrations/vectorstores/baiducloud_vector_search/"}, "EdenAiEmbeddings": {"EDEN AI": "https://python.langchain.com/docs/integrations/text_embedding/edenai/", "Eden AI": "https://python.langchain.com/docs/integrations/providers/edenai/"}, "JohnSnowLabsEmbeddings": {"John Snow Labs": "https://python.langchain.com/docs/integrations/text_embedding/johnsnowlabs_embedding/"}, "ErnieEmbeddings": {"ERNIE": "https://python.langchain.com/docs/integrations/text_embedding/ernie/"}, "ClarifaiEmbeddings": {"Clarifai": "https://python.langchain.com/docs/integrations/providers/clarifai/"}, "AzureOpenAIEmbeddings": {"AzureOpenAIEmbeddings": "https://python.langchain.com/docs/integrations/text_embedding/azureopenai/", "AzureAISearchRetriever": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/", "Microsoft": "https://python.langchain.com/docs/integrations/providers/microsoft/", "Azure Cosmos DB No SQL": "https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db_no_sql/", "Azure AI Search": "https://python.langchain.com/docs/integrations/vectorstores/azuresearch/"}, "InfinityEmbeddings": {"Infinity": "https://python.langchain.com/docs/integrations/providers/infinity/"}, "InfinityEmbeddingsLocal": {"Infinity": "https://python.langchain.com/docs/integrations/text_embedding/infinity/"}, "AwaEmbeddings": {"AwaDB": "https://python.langchain.com/docs/integrations/providers/awadb/"}, "VolcanoEmbeddings": {"Volc Engine": "https://python.langchain.com/docs/integrations/text_embedding/volcengine/"}, "MiniMaxEmbeddings": {"MiniMax": "https://python.langchain.com/docs/integrations/text_embedding/minimax/", "Minimax": "https://python.langchain.com/docs/integrations/providers/minimax/"}, "FakeEmbeddings": {"Fake Embeddings": "https://python.langchain.com/docs/integrations/text_embedding/fake/", "DocArray": "https://python.langchain.com/docs/integrations/retrievers/docarray_retriever/", "Relyt": "https://python.langchain.com/docs/integrations/vectorstores/relyt/", "Tair": "https://python.langchain.com/docs/integrations/vectorstores/tair/", "Tencent Cloud VectorDB": "https://python.langchain.com/docs/integrations/vectorstores/tencentvectordb/", "Google Memorystore for Redis": "https://python.langchain.com/docs/integrations/vectorstores/google_memorystore_redis/", "PGVecto.rs": "https://python.langchain.com/docs/integrations/vectorstores/pgvecto_rs/", "Baidu VectorDB": "https://python.langchain.com/docs/integrations/vectorstores/baiduvectordb/"}, "ClovaEmbeddings": {"Clova Embeddings": "https://python.langchain.com/docs/integrations/text_embedding/clova/"}, "NeMoEmbeddings": {"NVIDIA NeMo embeddings": "https://python.langchain.com/docs/integrations/text_embedding/nemo/"}, "SparkLLMTextEmbeddings": {"SparkLLM Text Embeddings": "https://python.langchain.com/docs/integrations/text_embedding/sparkllm/", "iFlytek": "https://python.langchain.com/docs/integrations/providers/iflytek/"}, "PremAIEmbeddings": {"PremAI": "https://python.langchain.com/docs/integrations/text_embedding/premai/"}, "KNNRetriever": {"Voyage AI": "https://python.langchain.com/docs/integrations/text_embedding/voyageai/", "kNN": "https://python.langchain.com/docs/integrations/retrievers/knn/"}, "SelfHostedEmbeddings": {"Self Hosted": "https://python.langchain.com/docs/integrations/text_embedding/self-hosted/"}, "SelfHostedHuggingFaceEmbeddings": {"Self Hosted": "https://python.langchain.com/docs/integrations/text_embedding/self-hosted/"}, "SelfHostedHuggingFaceInstructEmbeddings": {"Self Hosted": "https://python.langchain.com/docs/integrations/text_embedding/self-hosted/"}, "AnyscaleEmbeddings": {"Anyscale": "https://python.langchain.com/docs/integrations/providers/anyscale/"}, "EmbaasEmbeddings": {"Embaas": "https://python.langchain.com/docs/integrations/text_embedding/embaas/"}, "YandexGPTEmbeddings": {"YandexGPT": "https://python.langchain.com/docs/integrations/text_embedding/yandex/"}, "JinaEmbeddings": {"Jina": "https://python.langchain.com/docs/integrations/providers/jina/", "Jina Reranker": "https://python.langchain.com/docs/integrations/document_transformers/jina_rerank/"}, "AlephAlphaAsymmetricSemanticEmbedding": {"Aleph Alpha": "https://python.langchain.com/docs/integrations/providers/aleph_alpha/"}, "AlephAlphaSymmetricSemanticEmbedding": {"Aleph Alpha": "https://python.langchain.com/docs/integrations/providers/aleph_alpha/"}, "CloudflareWorkersAIEmbeddings": {"Cloudflare Workers AI": "https://python.langchain.com/docs/integrations/text_embedding/cloudflare_workersai/", "Cloudflare": "https://python.langchain.com/docs/integrations/providers/cloudflare/"}, "DashScopeEmbeddings": {"DashScope": "https://python.langchain.com/docs/integrations/text_embedding/dashscope/", "DashVector": "https://python.langchain.com/docs/integrations/vectorstores/dashvector/", "DashScope Reranker": "https://python.langchain.com/docs/integrations/document_transformers/dashscope_rerank/"}, "TensorflowHubEmbeddings": {"TensorFlow Hub": "https://python.langchain.com/docs/integrations/text_embedding/tensorflowhub/"}, "LlamafileEmbeddings": {"llamafile": "https://python.langchain.com/docs/integrations/text_embedding/llamafile/"}, "GradientEmbeddings": {"Gradient": "https://python.langchain.com/docs/integrations/providers/gradient/"},
| |
152873
|
"DistanceStrategy": {"Kinetica Vectorstore API": "https://python.langchain.com/docs/integrations/vectorstores/kinetica/", "SAP HANA Cloud Vector Engine": "https://python.langchain.com/docs/integrations/vectorstores/sap_hanavector/", "SingleStoreDB": "https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/", "Oracle AI Vector Search: Vector Store": "https://python.langchain.com/docs/integrations/vectorstores/oracle/", "SemaDB": "https://python.langchain.com/docs/integrations/vectorstores/semadb/"}, "SentenceTransformerEmbeddings": {"SQLite-VSS": "https://python.langchain.com/docs/integrations/vectorstores/sqlitevss/", "Vespa": "https://python.langchain.com/docs/integrations/vectorstores/vespa/"}, "Vald": {"Vald": "https://python.langchain.com/docs/integrations/vectorstores/vald/"}, "RetrievalQAWithSourcesChain": {"Weaviate": "https://python.langchain.com/docs/integrations/vectorstores/weaviate/", "Yellowbrick": "https://python.langchain.com/docs/integrations/vectorstores/yellowbrick/", "Jaguar Vector Database": "https://python.langchain.com/docs/integrations/vectorstores/jaguar/", "Neo4j Vector Index": "https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/", "Marqo": "https://python.langchain.com/docs/integrations/vectorstores/marqo/", "Psychic": "https://python.langchain.com/docs/integrations/document_loaders/psychic/"}, "Yellowbrick": {"Yellowbrick": "https://python.langchain.com/docs/integrations/vectorstores/yellowbrick/"}, "LLMRails": {"LLMRails": "https://python.langchain.com/docs/integrations/vectorstores/llm_rails/"}, "ChatGooglePalm": {"ScaNN": "https://python.langchain.com/docs/integrations/vectorstores/scann/"}, "Hippo": {"Hippo": "https://python.langchain.com/docs/integrations/vectorstores/hippo/"}, "RedisText": {"Redis": "https://python.langchain.com/docs/integrations/vectorstores/redis/"}, "RedisNum": {"Redis": "https://python.langchain.com/docs/integrations/vectorstores/redis/"}, "RedisTag": {"Redis": "https://python.langchain.com/docs/integrations/vectorstores/redis/"}, "RedisFilter": {"Redis": "https://python.langchain.com/docs/integrations/vectorstores/redis/"}, "VespaStore": {"Vespa": "https://python.langchain.com/docs/integrations/vectorstores/vespa/"}, "NeuralDBVectorStore": {"ThirdAI NeuralDB": "https://python.langchain.com/docs/integrations/vectorstores/thirdai_neuraldb/"}, "VikingDB": {"viking DB": "https://python.langchain.com/docs/integrations/vectorstores/vikingdb/"}, "VikingDBConfig": {"viking DB": "https://python.langchain.com/docs/integrations/vectorstores/vikingdb/"}, "ApertureDB": {"ApertureDB": "https://python.langchain.com/docs/integrations/vectorstores/aperturedb/"}, "Relyt": {"Relyt": "https://python.langchain.com/docs/integrations/vectorstores/relyt/"}, "oraclevs": {"Oracle AI Vector Search: Vector Store": "https://python.langchain.com/docs/integrations/vectorstores/oracle/"}, "VLite": {"vlite": "https://python.langchain.com/docs/integrations/vectorstores/vlite/"}, "AzureCosmosDBNoSqlVectorSearch": {"Azure Cosmos DB No SQL": "https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db_no_sql/"}, "DuckDB": {"DuckDB": "https://python.langchain.com/docs/integrations/vectorstores/duckdb/"}, "StarRocksSettings": {"StarRocks": "https://python.langchain.com/docs/integrations/vectorstores/starrocks/"}, "PathwayVectorClient": {"Pathway": "https://python.langchain.com/docs/integrations/vectorstores/pathway/"}, "DocArrayHnswSearch": {"DocArray HnswSearch": "https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw/"}, "TileDB": {"TileDB": "https://python.langchain.com/docs/integrations/vectorstores/tiledb/"}, "EcloudESVectorStore": {"China Mobile ECloud ElasticSearch VectorSearch": "https://python.langchain.com/docs/integrations/vectorstores/ecloud_vector_search/"}, "SurrealDBStore": {"SurrealDB": "https://python.langchain.com/docs/integrations/vectorstores/surrealdb/"}, "ManticoreSearch": {"ManticoreSearch VectorStore": "https://python.langchain.com/docs/integrations/vectorstores/manticore_search/"}, "ManticoreSearchSettings": {"ManticoreSearch VectorStore": "https://python.langchain.com/docs/integrations/vectorstores/manticore_search/"}, "HuggingFaceEmbeddings": {"Aerospike": "https://python.langchain.com/docs/integrations/vectorstores/aerospike/", "self-query-qdrant": "https://python.langchain.com/docs/templates/self-query-qdrant/"}, "Aerospike": {"Aerospike": "https://python.langchain.com/docs/integrations/vectorstores/aerospike/"}, "ElasticVectorSearch": {"Elasticsearch": "https://python.langchain.com/docs/integrations/vectorstores/elasticsearch/"}, "PGVecto_rs": {"PGVecto.rs": "https://python.langchain.com/docs/integrations/vectorstores/pgvecto_rs/"}, "ZepVectorStore": {"Zep": "https://python.langchain.com/docs/integrations/vectorstores/zep/"}, "CollectionConfig": {"Zep": "https://python.langchain.com/docs/integrations/vectorstores/zep/"}, "openai": {"OpenAI Adapter(Old)": "https://python.langchain.com/docs/integrations/adapters/openai-old/", "OpenAI Adapter": "https://python.langchain.com/docs/integrations/adapters/openai/"}, "RankLLMRerank": {"RankLLM Reranker": "https://python.langchain.com/docs/integrations/document_transformers/rankllm-reranker/"}, "AsyncChromiumLoader": {"Beautiful Soup": "https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup/", "Async Chromium": "https://python.langchain.com/docs/integrations/document_loaders/async_chromium/"}, "BeautifulSoupTransformer": {"Beautiful Soup": "https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup/"}, "VolcengineRerank": {"Volcengine Reranker": "https://python.langchain.com/docs/integrations/document_transformers/volcengine_rerank/"}, "OpenVINOReranker": {"OpenVINO Reranker": "https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/"}, "create_metadata_tagger": {"OpenAI metadata tagger": "https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/"}, "DoctranPropertyExtractor": {"Doctran: extract properties": "https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties/"}, "DoctranQATransformer": {"Doctran: interrogate documents": "https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/"}, "CrossEncoderReranker": {"Cross Encoder Reranker": "https://python.langchain.com/docs/integrations/document_transformers/cross_encoder_reranker/"}, "HuggingFaceCrossEncoder": {"Cross Encoder Reranker": "https://python.langchain.com/docs/integrations/document_transformers/cross_encoder_reranker/"}, "JinaRerank": {"Jina Reranker": "https://python.langchain.com/docs/integrations/document_transformers/jina_rerank/"}, "DoctranTextTranslator": {"Doctran: language translation": "https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document/"}, "MarkdownifyTransformer": {"Markdownify": "https://python.langchain.com/docs/integrations/document_transformers/markdownify/"}, "DashScopeRerank": {"DashScope Reranker": "https://python.langchain.com/docs/integrations/document_transformers/dashscope_rerank/"}, "XorbitsLoader": {"Xorbits Pandas DataFrame": "https://python.langchain.com/docs/integrations/document_loaders/xorbits/"}, "OutlookMessageLoader": {"Email": "https://python.langchain.com/docs/integrations/document_loaders/email/"},
| |
153211
|
"/docs/modules/data_connection/document_loaders/html/": {
"canonical": "/docs/how_to/document_loader_html/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_loaders/html/"
]
},
"/docs/modules/data_connection/document_loaders/json/": {
"canonical": "/docs/how_to/document_loader_json/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_loaders/json/"
]
},
"/docs/modules/data_connection/document_loaders/markdown/": {
"canonical": "/docs/how_to/document_loader_markdown/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_loaders/markdown/"
]
},
"/docs/modules/data_connection/document_loaders/office_file/": {
"canonical": "/docs/how_to/document_loader_office_file/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_loaders/office_file/"
]
},
"/docs/modules/data_connection/document_loaders/pdf/": {
"canonical": "/docs/how_to/document_loader_pdf/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_loaders/pdf/"
]
},
"/docs/modules/data_connection/document_transformers/": {
"canonical": "/docs/how_to/#text-splitters",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/"
]
},
"/docs/modules/data_connection/document_transformers/character_text_splitter/": {
"canonical": "/docs/how_to/character_text_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/character_text_splitter/"
]
},
"/docs/modules/data_connection/document_transformers/code_splitter/": {
"canonical": "/docs/how_to/code_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/code_splitter/"
]
},
"/docs/modules/data_connection/document_transformers/HTML_header_metadata/": {
"canonical": "/docs/how_to/HTML_header_metadata_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/HTML_header_metadata/"
]
},
"/docs/modules/data_connection/document_transformers/HTML_section_aware_splitter/": {
"canonical": "/docs/how_to/HTML_section_aware_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/HTML_section_aware_splitter/"
]
},
"/docs/modules/data_connection/document_transformers/markdown_header_metadata/": {
"canonical": "/docs/how_to/markdown_header_metadata_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/markdown_header_metadata/"
]
},
"/docs/modules/data_connection/document_transformers/recursive_json_splitter/": {
"canonical": "/docs/how_to/recursive_json_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/recursive_json_splitter/"
]
},
"/docs/modules/data_connection/document_transformers/recursive_text_splitter/": {
"canonical": "/docs/how_to/recursive_text_splitter/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/recursive_text_splitter/"
]
},
"/docs/modules/data_connection/document_transformers/semantic-chunker/": {
"canonical": "/docs/how_to/semantic-chunker/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/semantic-chunker/"
]
},
"/docs/modules/data_connection/document_transformers/split_by_token/": {
"canonical": "/docs/how_to/split_by_token/",
"alternative": [
"/v0.1/docs/modules/data_connection/document_transformers/split_by_token/"
]
},
"/docs/modules/data_connection/indexing/": {
"canonical": "/docs/how_to/indexing/",
"alternative": [
"/v0.1/docs/modules/data_connection/indexing/"
]
},
"/docs/modules/data_connection/retrievers/": {
"canonical": "/docs/how_to/#retrievers",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/"
]
},
"/docs/modules/data_connection/retrievers/contextual_compression/": {
"canonical": "/docs/how_to/contextual_compression/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/contextual_compression/"
]
},
"/docs/modules/data_connection/retrievers/custom_retriever/": {
"canonical": "/docs/how_to/custom_retriever/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/custom_retriever/"
]
},
"/docs/modules/data_connection/retrievers/ensemble/": {
"canonical": "/docs/how_to/ensemble_retriever/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/ensemble/"
]
},
"/docs/modules/data_connection/retrievers/long_context_reorder/": {
"canonical": "/docs/how_to/long_context_reorder/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/long_context_reorder/"
]
},
"/docs/modules/data_connection/retrievers/multi_vector/": {
"canonical": "/docs/how_to/multi_vector/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/multi_vector/"
]
},
"/docs/modules/data_connection/retrievers/MultiQueryRetriever/": {
"canonical": "/docs/how_to/MultiQueryRetriever/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/MultiQueryRetriever/"
]
},
"/docs/modules/data_connection/retrievers/parent_document_retriever/": {
"canonical": "/docs/how_to/parent_document_retriever/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/parent_document_retriever/"
]
},
"/docs/modules/data_connection/retrievers/self_query/": {
"canonical": "/docs/how_to/self_query/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/self_query/"
]
},
"/docs/modules/data_connection/retrievers/time_weighted_vectorstore/": {
"canonical": "/docs/how_to/time_weighted_vectorstore/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/"
]
},
"/docs/modules/data_connection/retrievers/vectorstore/": {
"canonical": "/docs/how_to/vectorstore_retriever/",
"alternative": [
"/v0.1/docs/modules/data_connection/retrievers/vectorstore/"
]
},
"/docs/modules/data_connection/text_embedding/": {
"canonical": "/docs/how_to/embed_text/",
"alternative": [
"/v0.1/docs/modules/data_connection/text_embedding/"
]
},
"/docs/modules/data_connection/text_embedding/caching_embeddings/": {
"canonical": "/docs/how_to/caching_embeddings/",
"alternative": [
"/v0.1/docs/modules/data_connection/text_embedding/caching_embeddings/"
]
},
"/docs/modules/data_connection/vectorstores/": {
"canonical": "/docs/how_to/#vector-stores",
"alternative": [
"/v0.1/docs/modules/data_connection/vectorstores/"
]
},
"/docs/modules/memory/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/"
]
},
"/docs/modules/memory/adding_memory_chain_multiple_inputs/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/adding_memory_chain_multiple_inputs/"
]
},
"/docs/modules/memory/adding_memory/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/adding_memory/"
]
},
"/docs/modules/memory/agent_with_memory_in_db/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/agent_with_memory_in_db/"
]
},
"/docs/modules/memory/agent_with_memory/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/agent_with_memory/"
]
},
"/docs/modules/memory/chat_messages/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/chat_messages/"
]
},
"/docs/modules/memory/conversational_customization/": {
"canonical": "/docs/how_to/chatbots_memory/",
"alternative": [
"/v0.1/docs/modules/memory/conversational_customization/"
]
},
| |
153217
|
"/docs/use_cases/sql/csv/": {
"canonical": "/docs/tutorials/sql_qa/",
"alternative": [
"/v0.1/docs/use_cases/sql/csv/"
]
},
"/docs/use_cases/sql/large_db/": {
"canonical": "/docs/tutorials/sql_qa/",
"alternative": [
"/v0.1/docs/use_cases/sql/large_db/"
]
},
"/docs/use_cases/sql/prompting/": {
"canonical": "/docs/tutorials/sql_qa/",
"alternative": [
"/v0.1/docs/use_cases/sql/prompting/"
]
},
"/docs/use_cases/sql/query_checking/": {
"canonical": "/docs/tutorials/sql_qa/",
"alternative": [
"/v0.1/docs/use_cases/sql/query_checking/"
]
},
"/docs/use_cases/sql/quickstart/": {
"canonical": "/docs/tutorials/sql_qa/",
"alternative": [
"/v0.1/docs/use_cases/sql/quickstart/"
]
},
"/docs/use_cases/summarization/": {
"canonical": "/docs/tutorials/summarization/",
"alternative": [
"/v0.1/docs/use_cases/summarization/"
]
},
"/docs/use_cases/tagging/": {
"canonical": "/docs/tutorials/classification/",
"alternative": [
"/v0.1/docs/use_cases/tagging/"
]
},
"/docs/use_cases/tool_use/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/"
]
},
"/docs/use_cases/tool_use/agents/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/agents/"
]
},
"/docs/use_cases/tool_use/human_in_the_loop/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/human_in_the_loop/"
]
},
"/docs/use_cases/tool_use/multiple_tools/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/multiple_tools/"
]
},
"/docs/use_cases/tool_use/parallel/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/parallel/"
]
},
"/docs/use_cases/tool_use/prompting/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/prompting/"
]
},
"/docs/use_cases/tool_use/quickstart/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/quickstart/"
]
},
"/docs/use_cases/tool_use/tool_error_handling/": {
"canonical": "/docs/tutorials/agents/",
"alternative": [
"/v0.1/docs/use_cases/tool_use/tool_error_handling/"
]
},
"/docs/use_cases/web_scraping/": {
"canonical": "https://langchain-ai.github.io/langgraph/tutorials/web-navigation/web_voyager/",
"alternative": [
"/v0.1/docs/use_cases/web_scraping/"
]
},
// below are new
"/docs/modules/data_connection/document_transformers/text_splitters/": {"canonical": "/docs/how_to/#text-splitters", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/character_text_splitter/": {"canonical": "/docs/how_to/character_text_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/character_text_splitter/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/code_splitter/": {"canonical": "/docs/how_to/code_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/code_splitter/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/HTML_header_metadata/": {"canonical": "/docs/how_to/HTML_header_metadata_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/HTML_header_metadata/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/HTML_section_aware_splitter/": {"canonical": "/docs/how_to/HTML_section_aware_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/HTML_section_aware_splitter/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/markdown_header_metadata/": {"canonical": "/docs/how_to/markdown_header_metadata_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/markdown_header_metadata/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/recursive_json_splitter/": {"canonical": "/docs/how_to/recursive_json_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/recursive_json_splitter/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter/": {"canonical": "/docs/how_to/recursive_text_splitter/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/recursive_text_splitter/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/semantic-chunker/": {"canonical": "/docs/how_to/semantic-chunker/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/semantic-chunker/"]},
"/docs/modules/data_connection/document_transformers/text_splitters/split_by_token/": {"canonical": "/docs/how_to/split_by_token/", "alternative": ["/v0.1/docs/modules/data_connection/document_transformers/split_by_token/"]},
"/docs/modules/model_io/prompts/prompt_templates/": {"canonical": "/docs/how_to/#prompt-templates", "alternative": ["/v0.1/docs/modules/model_io/prompts/"]},
"/docs/modules/model_io/prompts/prompt_templates/composition/": {"canonical": "/docs/how_to/prompts_composition/", "alternative": ["/v0.1/docs/modules/model_io/prompts/composition/"]},
"/docs/modules/model_io/prompts/prompt_templates/example_selectors/": {"canonical": "/docs/how_to/example_selectors/", "alternative": ["/v0.1/docs/modules/model_io/prompts/example_selectors/"]},
"/docs/modules/model_io/prompts/prompt_templates/example_selectors/length_based/": {"canonical": "/docs/how_to/example_selectors_length_based/", "alternative": ["/v0.1/docs/modules/model_io/prompts/example_selectors/length_based/"]},
"/docs/modules/model_io/prompts/prompt_templates/example_selectors/mmr/": {"canonical": "/docs/how_to/example_selectors_mmr/", "alternative": ["/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/"]},
"/docs/modules/model_io/prompts/prompt_templates/example_selectors/ngram_overlap/": {"canonical": "/docs/how_to/example_selectors_ngram/", "alternative": ["/v0.1/docs/modules/model_io/prompts/example_selectors/ngram_overlap/"]},
"/docs/modules/model_io/prompts/prompt_templates/example_selectors/similarity/": {"canonical": "/docs/how_to/example_selectors_similarity/", "alternative": ["/v0.1/docs/modules/model_io/prompts/example_selectors/similarity/"]},
"/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat/": {"canonical": "/docs/how_to/few_shot_examples_chat/", "alternative": ["/v0.1/docs/modules/model_io/prompts/few_shot_examples_chat/"]},
"/docs/modules/model_io/prompts/prompt_templates/few_shot_examples/": {"canonical": "/docs/how_to/few_shot_examples/", "alternative": ["/v0.1/docs/modules/model_io/prompts/few_shot_examples/"]},
"/docs/modules/model_io/prompts/prompt_templates/partial/": {"canonical": "/docs/how_to/prompts_partial/", "alternative": ["/v0.1/docs/modules/model_io/prompts/partial/"]},
"/docs/modules/model_io/prompts/prompt_templates/quick_start/": {"canonical": "/docs/how_to/#prompt-templates", "alternative": ["/v0.1/docs/modules/model_io/prompts/quick_start/"]},
"/docs/modules/model_io/models/": {"canonical": "/docs/how_to/#chat-models", "alternative": ["/v0.1/docs/modules/model_io/"]},
"/docs/modules/model_io/models/chat/": {"canonical": "/docs/how_to/#chat-models", "alternative": ["/v0.1/docs/modules/model_io/chat/"]},
| |
153229
|
This package has moved!
https://github.com/langchain-ai/langchain-experimental/tree/main/libs/experimental
| |
153319
|
"""Test text splitting functionality."""
import random
import re
import string
from pathlib import Path
from typing import Any, List
import pytest
from langchain_core.documents import Document
from langchain_text_splitters import (
Language,
RecursiveCharacterTextSplitter,
TextSplitter,
Tokenizer,
)
from langchain_text_splitters.base import split_text_on_tokens
from langchain_text_splitters.character import CharacterTextSplitter
from langchain_text_splitters.html import HTMLHeaderTextSplitter, HTMLSectionSplitter
from langchain_text_splitters.json import RecursiveJsonSplitter
from langchain_text_splitters.markdown import (
ExperimentalMarkdownSyntaxTextSplitter,
MarkdownHeaderTextSplitter,
)
from langchain_text_splitters.python import PythonCodeTextSplitter
FAKE_PYTHON_TEXT = """
class Foo:
def bar():
def foo():
def testing_func():
def bar():
"""
def test_character_text_splitter() -> None:
"""Test splitting by character count."""
text = "foo bar baz 123"
splitter = CharacterTextSplitter(separator=" ", chunk_size=7, chunk_overlap=3)
output = splitter.split_text(text)
expected_output = ["foo bar", "bar baz", "baz 123"]
assert output == expected_output
def test_character_text_splitter_empty_doc() -> None:
"""Test splitting by character count doesn't create empty documents."""
text = "foo bar"
splitter = CharacterTextSplitter(separator=" ", chunk_size=2, chunk_overlap=0)
output = splitter.split_text(text)
expected_output = ["foo", "bar"]
assert output == expected_output
def test_character_text_splitter_separtor_empty_doc() -> None:
"""Test edge cases are separators."""
text = "f b"
splitter = CharacterTextSplitter(separator=" ", chunk_size=2, chunk_overlap=0)
output = splitter.split_text(text)
expected_output = ["f", "b"]
assert output == expected_output
def test_character_text_splitter_long() -> None:
"""Test splitting by character count on long words."""
text = "foo bar baz a a"
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["foo", "bar", "baz", "a a"]
assert output == expected_output
def test_character_text_splitter_short_words_first() -> None:
"""Test splitting by character count when shorter words are first."""
text = "a a foo bar baz"
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["a a", "foo", "bar", "baz"]
assert output == expected_output
def test_character_text_splitter_longer_words() -> None:
"""Test splitting by characters when splits not found easily."""
text = "foo bar baz 123"
splitter = CharacterTextSplitter(separator=" ", chunk_size=1, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["foo", "bar", "baz", "123"]
assert output == expected_output
@pytest.mark.parametrize(
"separator, is_separator_regex", [(re.escape("."), True), (".", False)]
)
def test_character_text_splitter_keep_separator_regex(
separator: str, is_separator_regex: bool
) -> None:
"""Test splitting by characters while keeping the separator
that is a regex special character.
"""
text = "foo.bar.baz.123"
splitter = CharacterTextSplitter(
separator=separator,
chunk_size=1,
chunk_overlap=0,
keep_separator=True,
is_separator_regex=is_separator_regex,
)
output = splitter.split_text(text)
expected_output = ["foo", ".bar", ".baz", ".123"]
assert output == expected_output
@pytest.mark.parametrize(
"separator, is_separator_regex", [(re.escape("."), True), (".", False)]
)
def test_character_text_splitter_keep_separator_regex_start(
separator: str, is_separator_regex: bool
) -> None:
"""Test splitting by characters while keeping the separator
that is a regex special character and placing it at the start of each chunk.
"""
text = "foo.bar.baz.123"
splitter = CharacterTextSplitter(
separator=separator,
chunk_size=1,
chunk_overlap=0,
keep_separator="start",
is_separator_regex=is_separator_regex,
)
output = splitter.split_text(text)
expected_output = ["foo", ".bar", ".baz", ".123"]
assert output == expected_output
@pytest.mark.parametrize(
"separator, is_separator_regex", [(re.escape("."), True), (".", False)]
)
def test_character_text_splitter_keep_separator_regex_end(
separator: str, is_separator_regex: bool
) -> None:
"""Test splitting by characters while keeping the separator
that is a regex special character and placing it at the end of each chunk.
"""
text = "foo.bar.baz.123"
splitter = CharacterTextSplitter(
separator=separator,
chunk_size=1,
chunk_overlap=0,
keep_separator="end",
is_separator_regex=is_separator_regex,
)
output = splitter.split_text(text)
expected_output = ["foo.", "bar.", "baz.", "123"]
assert output == expected_output
@pytest.mark.parametrize(
"separator, is_separator_regex", [(re.escape("."), True), (".", False)]
)
def test_character_text_splitter_discard_separator_regex(
separator: str, is_separator_regex: bool
) -> None:
"""Test splitting by characters discarding the separator
that is a regex special character."""
text = "foo.bar.baz.123"
splitter = CharacterTextSplitter(
separator=separator,
chunk_size=1,
chunk_overlap=0,
keep_separator=False,
is_separator_regex=is_separator_regex,
)
output = splitter.split_text(text)
expected_output = ["foo", "bar", "baz", "123"]
assert output == expected_output
def test_recursive_character_text_splitter_keep_separators() -> None:
split_tags = [",", "."]
query = "Apple,banana,orange and tomato."
# start
splitter = RecursiveCharacterTextSplitter(
chunk_size=10,
chunk_overlap=0,
separators=split_tags,
keep_separator="start",
)
result = splitter.split_text(query)
assert result == ["Apple", ",banana", ",orange and tomato", "."]
# end
splitter = RecursiveCharacterTextSplitter(
chunk_size=10,
chunk_overlap=0,
separators=split_tags,
keep_separator="end",
)
result = splitter.split_text(query)
assert result == ["Apple,", "banana,", "orange and tomato."]
def test_character_text_splitting_args() -> None:
"""Test invalid arguments."""
with pytest.raises(ValueError):
CharacterTextSplitter(chunk_size=2, chunk_overlap=4)
def test_merge_splits() -> None:
"""Test merging splits with a given separator."""
splitter = CharacterTextSplitter(separator=" ", chunk_size=9, chunk_overlap=2)
splits = ["foo", "bar", "baz"]
expected_output = ["foo bar", "baz"]
output = splitter._merge_splits(splits, separator=" ")
assert output == expected_output
def test_create_documents() -> None:
"""Test create documents method."""
texts = ["foo bar", "baz"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts)
expected_docs = [
Document(page_content="foo"),
Document(page_content="bar"),
Document(page_content="baz"),
]
assert docs == expected_docs
def test_create_documents_with_metadata() -> None:
"""Test create documents with metadata method."""
texts = ["foo bar", "baz"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts, [{"source": "1"}, {"source": "2"}])
expected_docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "1"}),
Document(page_content="baz", metadata={"source": "2"}),
]
assert docs == expected_docs
| |
153320
|
@pytest.mark.parametrize(
"splitter, text, expected_docs",
[
(
CharacterTextSplitter(
separator=" ", chunk_size=7, chunk_overlap=3, add_start_index=True
),
"foo bar baz 123",
[
Document(page_content="foo bar", metadata={"start_index": 0}),
Document(page_content="bar baz", metadata={"start_index": 4}),
Document(page_content="baz 123", metadata={"start_index": 8}),
],
),
(
RecursiveCharacterTextSplitter(
chunk_size=6,
chunk_overlap=0,
separators=["\n\n", "\n", " ", ""],
add_start_index=True,
),
"w1 w1 w1 w1 w1 w1 w1 w1 w1",
[
Document(page_content="w1 w1", metadata={"start_index": 0}),
Document(page_content="w1 w1", metadata={"start_index": 6}),
Document(page_content="w1 w1", metadata={"start_index": 12}),
Document(page_content="w1 w1", metadata={"start_index": 18}),
Document(page_content="w1", metadata={"start_index": 24}),
],
),
],
)
def test_create_documents_with_start_index(
splitter: TextSplitter, text: str, expected_docs: List[Document]
) -> None:
"""Test create documents method."""
docs = splitter.create_documents([text])
assert docs == expected_docs
for doc in docs:
s_i = doc.metadata["start_index"]
assert text[s_i : s_i + len(doc.page_content)] == doc.page_content
def test_metadata_not_shallow() -> None:
"""Test that metadatas are not shallow."""
texts = ["foo bar"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts, [{"source": "1"}])
expected_docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "1"}),
]
assert docs == expected_docs
docs[0].metadata["foo"] = 1
assert docs[0].metadata == {"source": "1", "foo": 1}
assert docs[1].metadata == {"source": "1"}
def test_iterative_text_splitter_keep_separator() -> None:
chunk_size = 5
output = __test_iterative_text_splitter(chunk_size=chunk_size, keep_separator=True)
assert output == [
"....5",
"X..3",
"Y...4",
"X....5",
"Y...",
]
def test_iterative_text_splitter_discard_separator() -> None:
chunk_size = 5
output = __test_iterative_text_splitter(chunk_size=chunk_size, keep_separator=False)
assert output == [
"....5",
"..3",
"...4",
"....5",
"...",
]
def __test_iterative_text_splitter(chunk_size: int, keep_separator: bool) -> List[str]:
chunk_size += 1 if keep_separator else 0
splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=0,
separators=["X", "Y"],
keep_separator=keep_separator,
)
text = "....5X..3Y...4X....5Y..."
output = splitter.split_text(text)
for chunk in output:
assert len(chunk) <= chunk_size, f"Chunk is larger than {chunk_size}"
return output
def test_iterative_text_splitter() -> None:
"""Test iterative text splitter."""
text = """Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.
This is a weird text to write, but gotta test the splittingggg some how.
Bye!\n\n-H."""
splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = [
"Hi.",
"I'm",
"Harrison.",
"How? Are?",
"You?",
"Okay then",
"f f f f.",
"This is a",
"weird",
"text to",
"write,",
"but gotta",
"test the",
"splitting",
"gggg",
"some how.",
"Bye!",
"-H.",
]
assert output == expected_output
def test_split_documents() -> None:
"""Test split_documents."""
splitter = CharacterTextSplitter(separator="", chunk_size=1, chunk_overlap=0)
docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "2"}),
Document(page_content="baz", metadata={"source": "1"}),
]
expected_output = [
Document(page_content="f", metadata={"source": "1"}),
Document(page_content="o", metadata={"source": "1"}),
Document(page_content="o", metadata={"source": "1"}),
Document(page_content="b", metadata={"source": "2"}),
Document(page_content="a", metadata={"source": "2"}),
Document(page_content="r", metadata={"source": "2"}),
Document(page_content="b", metadata={"source": "1"}),
Document(page_content="a", metadata={"source": "1"}),
Document(page_content="z", metadata={"source": "1"}),
]
assert splitter.split_documents(docs) == expected_output
def test_python_text_splitter() -> None:
splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
splits = splitter.split_text(FAKE_PYTHON_TEXT)
split_0 = """class Foo:\n\n def bar():"""
split_1 = """def foo():"""
split_2 = """def testing_func():"""
split_3 = """def bar():"""
expected_splits = [split_0, split_1, split_2, split_3]
assert splits == expected_splits
CHUNK_SIZE = 16
def test_python_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.PYTHON, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
def hello_world():
print("Hello, World!")
# Call the function
hello_world()
"""
chunks = splitter.split_text(code)
assert chunks == [
"def",
"hello_world():",
'print("Hello,',
'World!")',
"# Call the",
"function",
"hello_world()",
]
def test_golang_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.GO, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
package main
import "fmt"
func helloWorld() {
fmt.Println("Hello, World!")
}
func main() {
helloWorld()
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"package main",
'import "fmt"',
"func",
"helloWorld() {",
'fmt.Println("He',
"llo,",
'World!")',
"}",
"func main() {",
"helloWorld()",
"}",
]
def test_rst_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.RST, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
Sample Document
===============
Section
-------
This is the content of the section.
Lists
-----
- Item 1
- Item 2
- Item 3
Comment
*******
Not a comment
.. This is a comment
"""
chunks = splitter.split_text(code)
assert chunks == [
"Sample Document",
"===============",
"Section",
"-------",
"This is the",
"content of the",
"section.",
"Lists",
"-----",
"- Item 1",
"- Item 2",
"- Item 3",
"Comment",
"*******",
"Not a comment",
".. This is a",
"comment",
]
# Special test for special characters
code = "harry\n***\nbabylon is"
chunks = splitter.split_text(code)
assert chunks == ["harry", "***\nbabylon is"]
| |
153322
|
def test_rust_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.RUST, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
fn main() {
println!("Hello, World!");
}
"""
chunks = splitter.split_text(code)
assert chunks == ["fn main() {", 'println!("Hello', ",", 'World!");', "}"]
def test_markdown_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.MARKDOWN, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
# Sample Document
## Section
This is the content of the section.
## Lists
- Item 1
- Item 2
- Item 3
### Horizontal lines
***********
____________
-------------------
#### Code blocks
```
This is a code block
# sample code
a = 1
b = 2
```
"""
chunks = splitter.split_text(code)
assert chunks == [
"# Sample",
"Document",
"## Section",
"This is the",
"content of the",
"section.",
"## Lists",
"- Item 1",
"- Item 2",
"- Item 3",
"### Horizontal",
"lines",
"***********",
"____________",
"---------------",
"----",
"#### Code",
"blocks",
"```",
"This is a code",
"block",
"# sample code",
"a = 1\nb = 2",
"```",
]
# Special test for special characters
code = "harry\n***\nbabylon is"
chunks = splitter.split_text(code)
assert chunks == ["harry", "***\nbabylon is"]
def test_latex_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.LATEX, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
Hi Harrison!
\\chapter{1}
"""
chunks = splitter.split_text(code)
assert chunks == ["Hi Harrison!", "\\chapter{1}"]
def test_html_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.HTML, chunk_size=60, chunk_overlap=0
)
code = """
<h1>Sample Document</h1>
<h2>Section</h2>
<p id="1234">Reference content.</p>
<h2>Lists</h2>
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<h3>A block</h3>
<div class="amazing">
<p>Some text</p>
<p>Some more text</p>
</div>
"""
chunks = splitter.split_text(code)
assert chunks == [
"<h1>Sample Document</h1>\n <h2>Section</h2>",
'<p id="1234">Reference content.</p>',
"<h2>Lists</h2>\n <ul>",
"<li>Item 1</li>\n <li>Item 2</li>",
"<li>Item 3</li>\n </ul>",
"<h3>A block</h3>",
'<div class="amazing">',
"<p>Some text</p>",
"<p>Some more text</p>\n </div>",
]
def test_md_header_text_splitter_1() -> None:
"""Test markdown splitter by header: Case 1."""
markdown_document = (
"# Foo\n\n"
" ## Bar\n\n"
"Hi this is Jim\n\n"
"Hi this is Joe\n\n"
" ## Baz\n\n"
" Hi this is Molly"
)
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
]
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
)
output = markdown_splitter.split_text(markdown_document)
expected_output = [
Document(
page_content="Hi this is Jim \nHi this is Joe",
metadata={"Header 1": "Foo", "Header 2": "Bar"},
),
Document(
page_content="Hi this is Molly",
metadata={"Header 1": "Foo", "Header 2": "Baz"},
),
]
assert output == expected_output
def test_md_header_text_splitter_2() -> None:
"""Test markdown splitter by header: Case 2."""
markdown_document = (
"# Foo\n\n"
" ## Bar\n\n"
"Hi this is Jim\n\n"
"Hi this is Joe\n\n"
" ### Boo \n\n"
" Hi this is Lance \n\n"
" ## Baz\n\n"
" Hi this is Molly"
)
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
("###", "Header 3"),
]
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
)
output = markdown_splitter.split_text(markdown_document)
expected_output = [
Document(
page_content="Hi this is Jim \nHi this is Joe",
metadata={"Header 1": "Foo", "Header 2": "Bar"},
),
Document(
page_content="Hi this is Lance",
metadata={"Header 1": "Foo", "Header 2": "Bar", "Header 3": "Boo"},
),
Document(
page_content="Hi this is Molly",
metadata={"Header 1": "Foo", "Header 2": "Baz"},
),
]
assert output == expected_output
def test_md_header_text_splitter_3() -> None:
"""Test markdown splitter by header: Case 3."""
markdown_document = (
"# Foo\n\n"
" ## Bar\n\n"
"Hi this is Jim\n\n"
"Hi this is Joe\n\n"
" ### Boo \n\n"
" Hi this is Lance \n\n"
" #### Bim \n\n"
" Hi this is John \n\n"
" ## Baz\n\n"
" Hi this is Molly"
)
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
("###", "Header 3"),
("####", "Header 4"),
]
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
)
output = markdown_splitter.split_text(markdown_document)
expected_output = [
Document(
page_content="Hi this is Jim \nHi this is Joe",
metadata={"Header 1": "Foo", "Header 2": "Bar"},
),
Document(
page_content="Hi this is Lance",
metadata={"Header 1": "Foo", "Header 2": "Bar", "Header 3": "Boo"},
),
Document(
page_content="Hi this is John",
metadata={
"Header 1": "Foo",
"Header 2": "Bar",
"Header 3": "Boo",
"Header 4": "Bim",
},
),
Document(
page_content="Hi this is Molly",
metadata={"Header 1": "Foo", "Header 2": "Baz"},
),
]
assert output == expected_output
def test_md_header_text_splitter_preserve_headers_1() -> None:
"""Test markdown splitter by header: Preserve Headers."""
markdown_document = (
"# Foo\n\n"
" ## Bat\n\n"
"Hi this is Jim\n\n"
"Hi Joe\n\n"
"## Baz\n\n"
"# Bar\n\n"
"This is Alice\n\n"
"This is Bob"
)
headers_to_split_on = [
("#", "Header 1"),
]
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
strip_headers=False,
)
output = markdown_splitter.split_text(markdown_document)
expected_output = [
Document(
page_content="# Foo \n## Bat \nHi this is Jim \nHi Joe \n## Baz",
metadata={"Header 1": "Foo"},
),
Document(
page_content="# Bar \nThis is Alice \nThis is Bob",
metadata={"Header 1": "Bar"},
),
]
assert output == expected_output
| |
153324
|
def test_experimental_markdown_syntax_text_splitter_with_headers() -> None:
"""Test experimental markdown syntax splitter."""
markdown_splitter = ExperimentalMarkdownSyntaxTextSplitter(strip_headers=False)
output = markdown_splitter.split_text(EXPERIMENTAL_MARKDOWN_DOCUMENT)
expected_output = [
Document(
page_content="# My Header 1\nContent for header 1\n",
metadata={"Header 1": "My Header 1"},
),
Document(
page_content="## Header 2\nContent for header 2\n",
metadata={"Header 1": "My Header 1", "Header 2": "Header 2"},
),
Document(
page_content=(
"```python\ndef func_definition():\n "
"print('Keep the whitespace consistent')\n```\n"
),
metadata={
"Code": "python",
"Header 1": "My Header 1",
"Header 2": "Header 2",
},
),
Document(
page_content=(
"# Header 1 again\nWe should also split on the horizontal line\n"
),
metadata={"Header 1": "Header 1 again"},
),
Document(
page_content=(
"This will be a new doc but with the same header metadata\n\n"
"And it includes a new paragraph"
),
metadata={"Header 1": "Header 1 again"},
),
]
assert output == expected_output
def test_experimental_markdown_syntax_text_splitter_split_lines() -> None:
"""Test experimental markdown syntax splitter."""
markdown_splitter = ExperimentalMarkdownSyntaxTextSplitter(return_each_line=True)
output = markdown_splitter.split_text(EXPERIMENTAL_MARKDOWN_DOCUMENT)
expected_output = [
Document(
page_content="Content for header 1", metadata={"Header 1": "My Header 1"}
),
Document(
page_content="Content for header 2",
metadata={"Header 1": "My Header 1", "Header 2": "Header 2"},
),
Document(
page_content="```python",
metadata={
"Code": "python",
"Header 1": "My Header 1",
"Header 2": "Header 2",
},
),
Document(
page_content="def func_definition():",
metadata={
"Code": "python",
"Header 1": "My Header 1",
"Header 2": "Header 2",
},
),
Document(
page_content=" print('Keep the whitespace consistent')",
metadata={
"Code": "python",
"Header 1": "My Header 1",
"Header 2": "Header 2",
},
),
Document(
page_content="```",
metadata={
"Code": "python",
"Header 1": "My Header 1",
"Header 2": "Header 2",
},
),
Document(
page_content="We should also split on the horizontal line",
metadata={"Header 1": "Header 1 again"},
),
Document(
page_content="This will be a new doc but with the same header metadata",
metadata={"Header 1": "Header 1 again"},
),
Document(
page_content="And it includes a new paragraph",
metadata={"Header 1": "Header 1 again"},
),
]
assert output == expected_output
def test_solidity_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.SOL, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """pragma solidity ^0.8.20;
contract HelloWorld {
function add(uint a, uint b) pure public returns(uint) {
return a + b;
}
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"pragma solidity",
"^0.8.20;",
"contract",
"HelloWorld {",
"function",
"add(uint a,",
"uint b) pure",
"public",
"returns(uint) {",
"return a",
"+ b;",
"}\n }",
]
def test_lua_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.LUA, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
local variable = 10
function add(a, b)
return a + b
end
if variable > 5 then
for i=1, variable do
while i < variable do
repeat
print(i)
i = i + 1
until i >= variable
end
end
end
"""
chunks = splitter.split_text(code)
assert chunks == [
"local variable",
"= 10",
"function add(a,",
"b)",
"return a +",
"b",
"end",
"if variable > 5",
"then",
"for i=1,",
"variable do",
"while i",
"< variable do",
"repeat",
"print(i)",
"i = i + 1",
"until i >=",
"variable",
"end",
"end\nend",
]
def test_haskell_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.HASKELL, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
main :: IO ()
main = do
putStrLn "Hello, World!"
-- Some sample functions
add :: Int -> Int -> Int
add x y = x + y
"""
# Adjusted expected chunks to account for indentation and newlines
expected_chunks = [
"main ::",
"IO ()",
"main = do",
"putStrLn",
'"Hello, World!"',
"--",
"Some sample",
"functions",
"add :: Int ->",
"Int -> Int",
"add x y = x",
"+ y",
]
chunks = splitter.split_text(code)
assert chunks == expected_chunks
@pytest.mark.requires("lxml")
def test_html_header_text_splitter(tmp_path: Path) -> None:
splitter = HTMLHeaderTextSplitter(
headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
content = """
<h1>Sample Document</h1>
<h2>Section</h2>
<p id="1234">Reference content.</p>
<h2>Lists</h2>
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<h3>A block</h3>
<div class="amazing">
<p>Some text</p>
<p>Some more text</p>
</div>
"""
docs = splitter.split_text(content)
expected = [
Document(
page_content="Reference content.",
metadata={"Header 1": "Sample Document", "Header 2": "Section"},
),
Document(
page_content="Item 1 Item 2 Item 3 \nSome text \nSome more text",
metadata={"Header 1": "Sample Document", "Header 2": "Lists"},
),
]
assert docs == expected
with open(tmp_path / "doc.html", "w") as tmp:
tmp.write(content)
docs_from_file = splitter.split_text_from_file(tmp_path / "doc.html")
assert docs_from_file == expected
def test_split_text_on_tokens() -> None:
"""Test splitting by tokens per chunk."""
text = "foo bar baz 123"
tokenizer = Tokenizer(
chunk_overlap=3,
tokens_per_chunk=7,
decode=(lambda it: "".join(chr(i) for i in it)),
encode=(lambda it: [ord(c) for c in it]),
)
output = split_text_on_tokens(text=text, tokenizer=tokenizer)
expected_output = ["foo bar", "bar baz", "baz 123"]
assert output == expected_output
| |
153326
|
@pytest.mark.requires("lxml")
@pytest.mark.requires("bs4")
def test_happy_path_splitting_with_duplicate_header_tag() -> None:
# arrange
html_string = """<!DOCTYPE html>
<html>
<body>
<div>
<h1>Foo</h1>
<p>Some intro text about Foo.</p>
<div>
<h2>Bar main section</h2>
<p>Some intro text about Bar.</p>
<h3>Bar subsection 1</h3>
<p>Some text about the first subtopic of Bar.</p>
<h3>Bar subsection 2</h3>
<p>Some text about the second subtopic of Bar.</p>
</div>
<div>
<h2>Foo</h2>
<p>Some text about Baz</p>
</div>
<h1>Foo</h1>
<br>
<p>Some concluding text about Foo</p>
</div>
</body>
</html>"""
sec_splitter = HTMLSectionSplitter(
headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
docs = sec_splitter.split_text(html_string)
assert len(docs) == 4
assert docs[0].page_content == "Foo \n Some intro text about Foo."
assert docs[0].metadata["Header 1"] == "Foo"
assert docs[1].page_content == (
"Bar main section \n Some intro text about Bar. \n "
"Bar subsection 1 \n Some text about the first subtopic of Bar. \n "
"Bar subsection 2 \n Some text about the second subtopic of Bar."
)
assert docs[1].metadata["Header 2"] == "Bar main section"
assert docs[2].page_content == "Foo \n Some text about Baz"
assert docs[2].metadata["Header 2"] == "Foo"
assert docs[3].page_content == "Foo \n \n Some concluding text about Foo"
assert docs[3].metadata["Header 1"] == "Foo"
def test_split_json() -> None:
"""Test json text splitter"""
max_chunk = 800
splitter = RecursiveJsonSplitter(max_chunk_size=max_chunk)
def random_val() -> str:
return "".join(random.choices(string.ascii_letters, k=random.randint(4, 12)))
test_data: Any = {
"val0": random_val(),
"val1": {f"val1{i}": random_val() for i in range(100)},
}
test_data["val1"]["val16"] = {f"val16{i}": random_val() for i in range(100)}
# uses create_docs and split_text
docs = splitter.create_documents(texts=[test_data])
output = [len(doc.page_content) < max_chunk * 1.05 for doc in docs]
expected_output = [True for doc in docs]
assert output == expected_output
def test_split_json_with_lists() -> None:
"""Test json text splitter with list conversion"""
max_chunk = 800
splitter = RecursiveJsonSplitter(max_chunk_size=max_chunk)
def random_val() -> str:
return "".join(random.choices(string.ascii_letters, k=random.randint(4, 12)))
test_data: Any = {
"val0": random_val(),
"val1": {f"val1{i}": random_val() for i in range(100)},
}
test_data["val1"]["val16"] = {f"val16{i}": random_val() for i in range(100)}
test_data_list: Any = {"testPreprocessing": [test_data]}
# test text splitter
texts = splitter.split_text(json_data=test_data)
texts_list = splitter.split_text(json_data=test_data_list, convert_lists=True)
assert len(texts_list) >= len(texts)
def test_split_json_many_calls() -> None:
x = {"a": 1, "b": 2}
y = {"c": 3, "d": 4}
splitter = RecursiveJsonSplitter()
chunk0 = splitter.split_json(x)
assert chunk0 == [{"a": 1, "b": 2}]
chunk1 = splitter.split_json(y)
assert chunk1 == [{"c": 3, "d": 4}]
# chunk0 is now altered by creating chunk1
assert chunk0 == [{"a": 1, "b": 2}]
chunk0_output = [{"a": 1, "b": 2}]
chunk1_output = [{"c": 3, "d": 4}]
assert chunk0 == chunk0_output
assert chunk1 == chunk1_output
def test_powershell_code_splitter_short_code() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.POWERSHELL, chunk_size=60, chunk_overlap=0
)
code = """
# Check if a file exists
$filePath = "C:\\temp\\file.txt"
if (Test-Path $filePath) {
# File exists
} else {
# File does not exist
}
"""
chunks = splitter.split_text(code)
assert chunks == [
'# Check if a file exists\n$filePath = "C:\\temp\\file.txt"',
"if (Test-Path $filePath) {\n # File exists\n} else {",
"# File does not exist\n}",
]
def test_powershell_code_splitter_longer_code() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.POWERSHELL, chunk_size=60, chunk_overlap=0
)
code = """
# Get a list of all processes and export to CSV
$processes = Get-Process
$processes | Export-Csv -Path "C:\\temp\\processes.csv" -NoTypeInformation
# Read the CSV file and display its content
$csvContent = Import-Csv -Path "C:\\temp\\processes.csv"
$csvContent | ForEach-Object {
$_.ProcessName
}
# End of script
"""
chunks = splitter.split_text(code)
assert chunks == [
"# Get a list of all processes and export to CSV",
"$processes = Get-Process",
'$processes | Export-Csv -Path "C:\\temp\\processes.csv"',
"-NoTypeInformation",
"# Read the CSV file and display its content",
'$csvContent = Import-Csv -Path "C:\\temp\\processes.csv"',
"$csvContent | ForEach-Object {\n $_.ProcessName\n}",
"# End of script",
]
| |
153331
|
from __future__ import annotations
import copy
import pathlib
from io import BytesIO, StringIO
from typing import Any, Dict, Iterable, List, Optional, Tuple, TypedDict, cast
import requests
from langchain_core.documents import Document
from langchain_text_splitters.character import RecursiveCharacterTextSplitter
class ElementType(TypedDict):
"""Element type as typed dict."""
url: str
xpath: str
content: str
metadata: Dict[str, str]
class HTMLHeaderTextSplitter:
"""
Splitting HTML files based on specified headers.
Requires lxml package.
"""
def __init__(
self,
headers_to_split_on: List[Tuple[str, str]],
return_each_element: bool = False,
):
"""Create a new HTMLHeaderTextSplitter.
Args:
headers_to_split_on: list of tuples of headers we want to track mapped to
(arbitrary) keys for metadata. Allowed header values: h1, h2, h3, h4,
h5, h6 e.g. [("h1", "Header 1"), ("h2", "Header 2)].
return_each_element: Return each element w/ associated headers.
"""
# Output element-by-element or aggregated into chunks w/ common headers
self.return_each_element = return_each_element
self.headers_to_split_on = sorted(headers_to_split_on)
def aggregate_elements_to_chunks(
self, elements: List[ElementType]
) -> List[Document]:
"""Combine elements with common metadata into chunks
Args:
elements: HTML element content with associated identifying info and metadata
"""
aggregated_chunks: List[ElementType] = []
for element in elements:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == element["metadata"]
):
# If the last element in the aggregated list
# has the same metadata as the current element,
# append the current content to the last element's content
aggregated_chunks[-1]["content"] += " \n" + element["content"]
else:
# Otherwise, append the current element to the aggregated list
aggregated_chunks.append(element)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
def split_text_from_url(self, url: str, **kwargs: Any) -> List[Document]:
"""Split HTML from web URL
Args:
url: web URL
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the fetch url content request.
"""
r = requests.get(url, **kwargs)
return self.split_text_from_file(BytesIO(r.content))
def split_text(self, text: str) -> List[Document]:
"""Split HTML text string
Args:
text: HTML text
"""
return self.split_text_from_file(StringIO(text))
def split_text_from_file(self, file: Any) -> List[Document]:
"""Split HTML file
Args:
file: HTML file
"""
try:
from lxml import etree
except ImportError as e:
raise ImportError(
"Unable to import lxml, please install with `pip install lxml`."
) from e
# use lxml library to parse html document and return xml ElementTree
# Explicitly encoding in utf-8 allows non-English
# html files to be processed without garbled characters
parser = etree.HTMLParser(encoding="utf-8")
tree = etree.parse(file, parser)
# document transformation for "structure-aware" chunking is handled with xsl.
# see comments in html_chunks_with_headers.xslt for more detailed information.
xslt_path = pathlib.Path(__file__).parent / "xsl/html_chunks_with_headers.xslt"
xslt_tree = etree.parse(xslt_path)
transform = etree.XSLT(xslt_tree)
result = transform(tree)
result_dom = etree.fromstring(str(result))
# create filter and mapping for header metadata
header_filter = [header[0] for header in self.headers_to_split_on]
header_mapping = dict(self.headers_to_split_on)
# map xhtml namespace prefix
ns_map = {"h": "http://www.w3.org/1999/xhtml"}
# build list of elements from DOM
elements = []
for element in result_dom.findall("*//*", ns_map):
if element.findall("*[@class='headers']") or element.findall(
"*[@class='chunk']"
):
elements.append(
ElementType(
url=file,
xpath="".join(
[
node.text or ""
for node in element.findall("*[@class='xpath']", ns_map)
]
),
content="".join(
[
node.text or ""
for node in element.findall("*[@class='chunk']", ns_map)
]
),
metadata={
# Add text of specified headers to metadata using header
# mapping.
header_mapping[node.tag]: node.text or ""
for node in filter(
lambda x: x.tag in header_filter,
element.findall("*[@class='headers']/*", ns_map),
)
},
)
)
if not self.return_each_element:
return self.aggregate_elements_to_chunks(elements)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in elements
]
| |
153332
|
class HTMLSectionSplitter:
"""
Splitting HTML files based on specified tag and font sizes.
Requires lxml package.
"""
def __init__(
self,
headers_to_split_on: List[Tuple[str, str]],
xslt_path: Optional[str] = None,
**kwargs: Any,
) -> None:
"""Create a new HTMLSectionSplitter.
Args:
headers_to_split_on: list of tuples of headers we want to track mapped to
(arbitrary) keys for metadata. Allowed header values: h1, h2, h3, h4,
h5, h6 e.g. [("h1", "Header 1"), ("h2", "Header 2"].
xslt_path: path to xslt file for document transformation.
Uses a default if not passed.
Needed for html contents that using different format and layouts.
"""
self.headers_to_split_on = dict(headers_to_split_on)
if xslt_path is None:
self.xslt_path = (
pathlib.Path(__file__).parent / "xsl/converting_to_header.xslt"
).absolute()
else:
self.xslt_path = pathlib.Path(xslt_path).absolute()
self.kwargs = kwargs
def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
results = self.create_documents(texts, metadatas=metadatas)
text_splitter = RecursiveCharacterTextSplitter(**self.kwargs)
return text_splitter.split_documents(results)
def split_text(self, text: str) -> List[Document]:
"""Split HTML text string
Args:
text: HTML text
"""
return self.split_text_from_file(StringIO(text))
def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])
for key in chunk.metadata.keys():
if chunk.metadata[key] == "#TITLE#":
chunk.metadata[key] = metadata["Title"]
metadata = {**metadata, **chunk.metadata}
new_doc = Document(page_content=chunk.page_content, metadata=metadata)
documents.append(new_doc)
return documents
def split_html_by_headers(self, html_doc: str) -> List[Dict[str, Optional[str]]]:
try:
from bs4 import BeautifulSoup, PageElement # type: ignore[import-untyped]
except ImportError as e:
raise ImportError(
"Unable to import BeautifulSoup/PageElement, \
please install with `pip install \
bs4`."
) from e
soup = BeautifulSoup(html_doc, "html.parser")
headers = list(self.headers_to_split_on.keys())
sections: list[dict[str, str | None]] = []
headers = soup.find_all(["body"] + headers)
for i, header in enumerate(headers):
header_element: PageElement = header
if i == 0:
current_header = "#TITLE#"
current_header_tag = "h1"
section_content: List = []
else:
current_header = header_element.text.strip()
current_header_tag = header_element.name
section_content = []
for element in header_element.next_elements:
if i + 1 < len(headers) and element == headers[i + 1]:
break
if isinstance(element, str):
section_content.append(element)
content = " ".join(section_content).strip()
if content != "":
sections.append(
{
"header": current_header,
"content": content,
"tag_name": current_header_tag,
}
)
return sections
def convert_possible_tags_to_header(self, html_content: str) -> str:
if self.xslt_path is None:
return html_content
try:
from lxml import etree
except ImportError as e:
raise ImportError(
"Unable to import lxml, please install with `pip install lxml`."
) from e
# use lxml library to parse html document and return xml ElementTree
parser = etree.HTMLParser()
tree = etree.parse(StringIO(html_content), parser)
xslt_tree = etree.parse(self.xslt_path)
transform = etree.XSLT(xslt_tree)
result = transform(tree)
return str(result)
def split_text_from_file(self, file: Any) -> List[Document]:
"""Split HTML file
Args:
file: HTML file
"""
file_content = file.getvalue()
file_content = self.convert_possible_tags_to_header(file_content)
sections = self.split_html_by_headers(file_content)
return [
Document(
cast(str, section["content"]),
metadata={
self.headers_to_split_on[str(section["tag_name"])]: section[
"header"
]
},
)
for section in sections
]
| |
153335
|
from __future__ import annotations
import re
from typing import Any, Dict, List, Tuple, TypedDict, Union
from langchain_core.documents import Document
from langchain_text_splitters.base import Language
from langchain_text_splitters.character import RecursiveCharacterTextSplitter
class MarkdownTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Markdown-formatted headings."""
def __init__(self, **kwargs: Any) -> None:
"""Initialize a MarkdownTextSplitter."""
separators = self.get_separators_for_language(Language.MARKDOWN)
super().__init__(separators=separators, **kwargs)
class MarkdownHeaderTextSplitter:
"""Splitting markdown files based on specified headers."""
def __init__(
self,
headers_to_split_on: List[Tuple[str, str]],
return_each_line: bool = False,
strip_headers: bool = True,
):
"""Create a new MarkdownHeaderTextSplitter.
Args:
headers_to_split_on: Headers we want to track
return_each_line: Return each line w/ associated headers
strip_headers: Strip split headers from the content of the chunk
"""
# Output line-by-line or aggregated into chunks w/ common headers
self.return_each_line = return_each_line
# Given the headers we want to split on,
# (e.g., "#, ##, etc") order by length
self.headers_to_split_on = sorted(
headers_to_split_on, key=lambda split: len(split[0]), reverse=True
)
# Strip headers split headers from the content of the chunk
self.strip_headers = strip_headers
def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:
"""Combine lines with common metadata into chunks
Args:
lines: Line of text / associated header metadata
"""
aggregated_chunks: List[LineType] = []
for line in lines:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == line["metadata"]
):
# If the last line in the aggregated list
# has the same metadata as the current line,
# append the current content to the last lines's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
elif (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] != line["metadata"]
# may be issues if other metadata is present
and len(aggregated_chunks[-1]["metadata"]) < len(line["metadata"])
and aggregated_chunks[-1]["content"].split("\n")[-1][0] == "#"
and not self.strip_headers
):
# If the last line in the aggregated list
# has different metadata as the current line,
# and has shallower header level than the current line,
# and the last line is a header,
# and we are not stripping headers,
# append the current content to the last line's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
# and update the last line's metadata
aggregated_chunks[-1]["metadata"] = line["metadata"]
else:
# Otherwise, append the current line to the aggregated list
aggregated_chunks.append(line)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
def split_text(self, text: str) -> List[Document]:
"""Split markdown file
Args:
text: Markdown file"""
# Split the input text by newline character ("\n").
lines = text.split("\n")
# Final output
lines_with_metadata: List[LineType] = []
# Content and metadata of the chunk currently being processed
current_content: List[str] = []
current_metadata: Dict[str, str] = {}
# Keep track of the nested header structure
# header_stack: List[Dict[str, Union[int, str]]] = []
header_stack: List[HeaderType] = []
initial_metadata: Dict[str, str] = {}
in_code_block = False
opening_fence = ""
for line in lines:
stripped_line = line.strip()
# Remove all non-printable characters from the string, keeping only visible
# text.
stripped_line = "".join(filter(str.isprintable, stripped_line))
if not in_code_block:
# Exclude inline code spans
if stripped_line.startswith("```") and stripped_line.count("```") == 1:
in_code_block = True
opening_fence = "```"
elif stripped_line.startswith("~~~"):
in_code_block = True
opening_fence = "~~~"
else:
if stripped_line.startswith(opening_fence):
in_code_block = False
opening_fence = ""
if in_code_block:
current_content.append(stripped_line)
continue
# Check each line against each of the header types (e.g., #, ##)
for sep, name in self.headers_to_split_on:
# Check if line starts with a header that we intend to split on
if stripped_line.startswith(sep) and (
# Header with no text OR header is followed by space
# Both are valid conditions that sep is being used a header
len(stripped_line) == len(sep) or stripped_line[len(sep)] == " "
):
# Ensure we are tracking the header as metadata
if name is not None:
# Get the current header level
current_header_level = sep.count("#")
# Pop out headers of lower or same level from the stack
while (
header_stack
and header_stack[-1]["level"] >= current_header_level
):
# We have encountered a new header
# at the same or higher level
popped_header = header_stack.pop()
# Clear the metadata for the
# popped header in initial_metadata
if popped_header["name"] in initial_metadata:
initial_metadata.pop(popped_header["name"])
# Push the current header to the stack
header: HeaderType = {
"level": current_header_level,
"name": name,
"data": stripped_line[len(sep) :].strip(),
}
header_stack.append(header)
# Update initial_metadata with the current header
initial_metadata[name] = header["data"]
# Add the previous line to the lines_with_metadata
# only if current_content is not empty
if current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
if not self.strip_headers:
current_content.append(stripped_line)
break
else:
if stripped_line:
current_content.append(stripped_line)
elif current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
current_metadata = initial_metadata.copy()
if current_content:
lines_with_metadata.append(
{"content": "\n".join(current_content), "metadata": current_metadata}
)
# lines_with_metadata has each line with associated header metadata
# aggregate these into chunks based on common metadata
if not self.return_each_line:
return self.aggregate_lines_to_chunks(lines_with_metadata)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in lines_with_metadata
]
class LineType(TypedDict):
"""Line type as typed dict."""
metadata: Dict[str, str]
content: str
class HeaderType(TypedDict):
"""Header type as typed dict."""
level: int
name: str
data: str
| |
153338
|
from __future__ import annotations
import re
from typing import Any, List, Literal, Optional, Union
from langchain_text_splitters.base import Language, TextSplitter
class CharacterTextSplitter(TextSplitter):
"""Splitting text that looks at characters."""
def __init__(
self, separator: str = "\n\n", is_separator_regex: bool = False, **kwargs: Any
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
self._is_separator_regex = is_separator_regex
def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
separator = (
self._separator if self._is_separator_regex else re.escape(self._separator)
)
splits = _split_text_with_regex(text, separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
def _split_text_with_regex(
text: str, separator: str, keep_separator: Union[bool, Literal["start", "end"]]
) -> List[str]:
# Now that we have the separator, split the text
if separator:
if keep_separator:
# The parentheses in the pattern keep the delimiters in the result.
_splits = re.split(f"({separator})", text)
splits = (
([_splits[i] + _splits[i + 1] for i in range(0, len(_splits) - 1, 2)])
if keep_separator == "end"
else ([_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)])
)
if len(_splits) % 2 == 0:
splits += _splits[-1:]
splits = (
(splits + [_splits[-1]])
if keep_separator == "end"
else ([_splits[0]] + splits)
)
else:
splits = re.split(separator, text)
else:
splits = list(text)
return [s for s in splits if s != ""]
class RecursiveCharacterTextSplitter(TextSplitter):
"""Splitting text by recursively look at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: Union[bool, Literal["start", "end"]] = True,
is_separator_regex: bool = False,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
self._is_separator_regex = is_separator_regex
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = separators[-1]
new_separators = []
for i, _s in enumerate(separators):
_separator = _s if self._is_separator_regex else re.escape(_s)
if _s == "":
separator = _s
break
if re.search(_separator, text):
separator = _s
new_separators = separators[i + 1 :]
break
_separator = separator if self._is_separator_regex else re.escape(separator)
splits = _split_text_with_regex(text, _separator, self._keep_separator)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if not new_separators:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
@classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, is_separator_regex=True, **kwargs)
| |
153341
|
from __future__ import annotations
from typing import Any, List, Optional, cast
from langchain_text_splitters.base import TextSplitter, Tokenizer, split_text_on_tokens
class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Splitting text to tokens using sentence model tokenizer."""
def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
try:
from sentence_transformers import SentenceTransformer
except ImportError:
raise ImportError(
"Could not import sentence_transformer python package. "
"This is needed in order to for SentenceTransformersTokenTextSplitter. "
"Please install it with `pip install sentence-transformers`."
)
self.model_name = model_name
self._model = SentenceTransformer(self.model_name)
self.tokenizer = self._model.tokenizer
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
def _initialize_chunk_configuration(
self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
def count_tokens(self, *, text: str) -> int:
return len(self._encode(text))
_max_length_equal_32_bit_integer: int = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text,
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
| |
153345
|
from __future__ import annotations
import copy
import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Iterable,
List,
Literal,
Optional,
Sequence,
Type,
TypeVar,
Union,
)
from langchain_core.documents import BaseDocumentTransformer, Document
logger = logging.getLogger(__name__)
TS = TypeVar("TS", bound="TextSplitter")
class TextSplitter(BaseDocumentTransformer, ABC):
"""Interface for splitting text into chunks."""
def __init__(
self,
chunk_size: int = 4000,
chunk_overlap: int = 200,
length_function: Callable[[str], int] = len,
keep_separator: Union[bool, Literal["start", "end"]] = False,
add_start_index: bool = False,
strip_whitespace: bool = True,
) -> None:
"""Create a new TextSplitter.
Args:
chunk_size: Maximum size of chunks to return
chunk_overlap: Overlap in characters between chunks
length_function: Function that measures the length of given chunks
keep_separator: Whether to keep the separator and where to place it
in each corresponding chunk (True='start')
add_start_index: If `True`, includes chunk's start index in metadata
strip_whitespace: If `True`, strips whitespace from the start and end of
every document
"""
if chunk_overlap > chunk_size:
raise ValueError(
f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
f"({chunk_size}), should be smaller."
)
self._chunk_size = chunk_size
self._chunk_overlap = chunk_overlap
self._length_function = length_function
self._keep_separator = keep_separator
self._add_start_index = add_start_index
self._strip_whitespace = strip_whitespace
@abstractmethod
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
index = 0
previous_chunk_len = 0
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])
if self._add_start_index:
offset = index + previous_chunk_len - self._chunk_overlap
index = text.find(chunk, max(0, offset))
metadata["start_index"] = index
previous_chunk_len = len(chunk)
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents
def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
return self.create_documents(texts, metadatas=metadatas)
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
text = separator.join(docs)
if self._strip_whitespace:
text = text.strip()
if text == "":
return None
else:
return text
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0:
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep on popping if:
# - we have a larger chunk than in the chunk overlap
# - or if we still have any chunks and the length is long
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
@classmethod
def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
"""Text splitter that uses HuggingFace tokenizer to count length."""
try:
from transformers import PreTrainedTokenizerBase
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise ValueError(
"Tokenizer received was not an instance of PreTrainedTokenizerBase"
)
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text))
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
)
return cls(length_function=_huggingface_tokenizer_length, **kwargs)
@classmethod
def from_tiktoken_encoder(
cls: Type[TS],
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> TS:
"""Text splitter that uses tiktoken encoder to count length."""
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
)
)
if issubclass(cls, TokenTextSplitter):
extra_kwargs = {
"encoding_name": encoding_name,
"model_name": model_name,
"allowed_special": allowed_special,
"disallowed_special": disallowed_special,
}
kwargs = {**kwargs, **extra_kwargs}
return cls(length_function=_tiktoken_encoder, **kwargs)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
| |
153426
|
"""Test the base tool implementation."""
import inspect
import json
import sys
import textwrap
import threading
from datetime import datetime
from enum import Enum
from functools import partial
from typing import (
Annotated,
Any,
Callable,
Generic,
Literal,
Optional,
TypeVar,
Union,
)
import pytest
from pydantic import BaseModel, Field, ValidationError
from pydantic.v1 import BaseModel as BaseModelV1
from pydantic.v1 import ValidationError as ValidationErrorV1
from typing_extensions import TypedDict
from langchain_core import tools
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.messages import ToolMessage
from langchain_core.runnables import (
Runnable,
RunnableConfig,
RunnableLambda,
ensure_config,
)
from langchain_core.tools import (
BaseTool,
StructuredTool,
Tool,
ToolException,
tool,
)
from langchain_core.tools.base import (
InjectedToolArg,
SchemaAnnotationError,
_get_all_basemodel_annotations,
_is_message_content_block,
_is_message_content_type,
)
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_core.utils.pydantic import PYDANTIC_MAJOR_VERSION, _create_subset_model
from tests.unit_tests.fake.callbacks import FakeCallbackHandler
from tests.unit_tests.pydantic_utils import _schema
def test_unnamed_decorator() -> None:
"""Test functionality with unnamed decorator."""
@tool
def search_api(query: str) -> str:
"""Search the API for the query."""
return "API result"
assert isinstance(search_api, BaseTool)
assert search_api.name == "search_api"
assert not search_api.return_direct
assert search_api.invoke("test") == "API result"
class _MockSchema(BaseModel):
"""Return the arguments directly."""
arg1: int
arg2: bool
arg3: Optional[dict] = None
class _MockSchemaV1(BaseModelV1):
"""Return the arguments directly."""
arg1: int
arg2: bool
arg3: Optional[dict] = None
class _MockStructuredTool(BaseTool):
name: str = "structured_api"
args_schema: type[BaseModel] = _MockSchema
description: str = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
raise NotImplementedError
def test_structured_args() -> None:
"""Test functionality with structured arguments."""
structured_api = _MockStructuredTool()
assert isinstance(structured_api, BaseTool)
assert structured_api.name == "structured_api"
expected_result = "1 True {'foo': 'bar'}"
args = {"arg1": 1, "arg2": True, "arg3": {"foo": "bar"}}
assert structured_api.run(args) == expected_result
def test_misannotated_base_tool_raises_error() -> None:
"""Test that a BaseTool with the incorrect typehint raises an exception.""" ""
with pytest.raises(SchemaAnnotationError):
class _MisAnnotatedTool(BaseTool):
name: str = "structured_api"
# This would silently be ignored without the custom metaclass
args_schema: BaseModel = _MockSchema # type: ignore
description: str = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_forward_ref_annotated_base_tool_accepted() -> None:
"""Test that a using forward ref annotation syntax is accepted.""" ""
class _ForwardRefAnnotatedTool(BaseTool):
name: str = "structured_api"
args_schema: "type[BaseModel]" = _MockSchema
description: str = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_subclass_annotated_base_tool_accepted() -> None:
"""Test BaseTool child w/ custom schema isn't overwritten."""
class _ForwardRefAnnotatedTool(BaseTool):
name: str = "structured_api"
args_schema: type[_MockSchema] = _MockSchema
description: str = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
assert issubclass(_ForwardRefAnnotatedTool, BaseTool)
tool = _ForwardRefAnnotatedTool()
assert tool.args_schema == _MockSchema
def test_decorator_with_specified_schema() -> None:
"""Test that manually specified schemata are passed through to the tool."""
@tool(args_schema=_MockSchema)
def tool_func(arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
assert isinstance(tool_func, BaseTool)
assert tool_func.args_schema == _MockSchema
@tool(args_schema=_MockSchemaV1)
def tool_func_v1(arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
assert isinstance(tool_func_v1, BaseTool)
assert tool_func_v1.args_schema == _MockSchemaV1
def test_decorated_function_schema_equivalent() -> None:
"""Test that a BaseTool without a schema meets expectations."""
@tool
def structured_tool_input(
arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
"""Return the arguments directly."""
return f"{arg1} {arg2} {arg3}"
assert isinstance(structured_tool_input, BaseTool)
assert structured_tool_input.args_schema is not None
assert (
_schema(structured_tool_input.args_schema)["properties"]
== _schema(_MockSchema)["properties"]
== structured_tool_input.args
)
def test_args_kwargs_filtered() -> None:
class _SingleArgToolWithKwargs(BaseTool):
name: str = "single_arg_tool"
description: str = "A single arged tool with kwargs"
def _run(
self,
some_arg: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
return "foo"
async def _arun(
self,
some_arg: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
raise NotImplementedError
tool = _SingleArgToolWithKwargs()
assert tool.is_single_input
class _VarArgToolWithKwargs(BaseTool):
name: str = "single_arg_tool"
description: str = "A single arged tool with kwargs"
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
return "foo"
async def _arun(
self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
raise NotImplementedError
tool2 = _VarArgToolWithKwargs()
assert tool2.is_single_input
| |
153427
|
def test_structured_args_decorator_no_infer_schema() -> None:
"""Test functionality with structured arguments parsed as a decorator."""
@tool(infer_schema=False)
def structured_tool_input(
arg1: int, arg2: Union[float, datetime], opt_arg: Optional[dict] = None
) -> str:
"""Return the arguments directly."""
return f"{arg1}, {arg2}, {opt_arg}"
assert isinstance(structured_tool_input, BaseTool)
assert structured_tool_input.name == "structured_tool_input"
args = {"arg1": 1, "arg2": 0.001, "opt_arg": {"foo": "bar"}}
with pytest.raises(ToolException):
assert structured_tool_input.run(args)
def test_structured_single_str_decorator_no_infer_schema() -> None:
"""Test functionality with structured arguments parsed as a decorator."""
@tool(infer_schema=False)
def unstructured_tool_input(tool_input: str) -> str:
"""Return the arguments directly."""
assert isinstance(tool_input, str)
return f"{tool_input}"
assert isinstance(unstructured_tool_input, BaseTool)
assert unstructured_tool_input.args_schema is None
assert unstructured_tool_input.run("foo") == "foo"
def test_structured_tool_types_parsed() -> None:
"""Test the non-primitive types are correctly passed to structured tools."""
class SomeEnum(Enum):
A = "a"
B = "b"
class SomeBaseModel(BaseModel):
foo: str
@tool
def structured_tool(
some_enum: SomeEnum,
some_base_model: SomeBaseModel,
) -> dict:
"""Return the arguments directly."""
return {
"some_enum": some_enum,
"some_base_model": some_base_model,
}
assert isinstance(structured_tool, StructuredTool)
args = {
"some_enum": SomeEnum.A.value,
"some_base_model": SomeBaseModel(foo="bar").model_dump(),
}
result = structured_tool.run(json.loads(json.dumps(args)))
expected = {
"some_enum": SomeEnum.A,
"some_base_model": SomeBaseModel(foo="bar"),
}
assert result == expected
def test_structured_tool_types_parsed_pydantic_v1() -> None:
"""Test the non-primitive types are correctly passed to structured tools."""
class SomeBaseModel(BaseModelV1):
foo: str
class AnotherBaseModel(BaseModelV1):
bar: str
@tool
def structured_tool(some_base_model: SomeBaseModel) -> AnotherBaseModel:
"""Return the arguments directly."""
return AnotherBaseModel(bar=some_base_model.foo)
assert isinstance(structured_tool, StructuredTool)
expected = AnotherBaseModel(bar="baz")
for arg in [
SomeBaseModel(foo="baz"),
SomeBaseModel(foo="baz").dict(),
]:
args = {"some_base_model": arg}
result = structured_tool.run(args)
assert result == expected
def test_structured_tool_types_parsed_pydantic_mixed() -> None:
"""Test handling of tool with mixed Pydantic version arguments."""
class SomeBaseModel(BaseModelV1):
foo: str
class AnotherBaseModel(BaseModel):
bar: str
with pytest.raises(NotImplementedError):
@tool
def structured_tool(
some_base_model: SomeBaseModel, another_base_model: AnotherBaseModel
) -> None:
"""Return the arguments directly."""
def test_base_tool_inheritance_base_schema() -> None:
"""Test schema is correctly inferred when inheriting from BaseTool."""
class _MockSimpleTool(BaseTool):
name: str = "simple_tool"
description: str = "A Simple Tool"
def _run(self, tool_input: str) -> str:
return f"{tool_input}"
async def _arun(self, tool_input: str) -> str:
raise NotImplementedError
simple_tool = _MockSimpleTool()
assert simple_tool.args_schema is None
expected_args = {"tool_input": {"title": "Tool Input", "type": "string"}}
assert simple_tool.args == expected_args
def test_tool_lambda_args_schema() -> None:
"""Test args schema inference when the tool argument is a lambda function."""
tool = Tool(
name="tool",
description="A tool",
func=lambda tool_input: tool_input,
)
assert tool.args_schema is None
expected_args = {"tool_input": {"type": "string"}}
assert tool.args == expected_args
def test_structured_tool_from_function_docstring() -> None:
"""Test that structured tools can be created from functions."""
def foo(bar: int, baz: str) -> str:
"""Docstring
Args:
bar: the bar value
baz: the baz value
"""
raise NotImplementedError
structured_tool = StructuredTool.from_function(foo)
assert structured_tool.name == "foo"
assert structured_tool.args == {
"bar": {"title": "Bar", "type": "integer"},
"baz": {"title": "Baz", "type": "string"},
}
assert _schema(structured_tool.args_schema) == {
"properties": {
"bar": {"title": "Bar", "type": "integer"},
"baz": {"title": "Baz", "type": "string"},
},
"description": inspect.getdoc(foo),
"title": "foo",
"type": "object",
"required": ["bar", "baz"],
}
assert foo.__doc__ is not None
assert structured_tool.description == textwrap.dedent(foo.__doc__.strip())
def test_structured_tool_from_function_docstring_complex_args() -> None:
"""Test that structured tools can be created from functions."""
def foo(bar: int, baz: list[str]) -> str:
"""Docstring
Args:
bar: int
baz: List[str]
"""
raise NotImplementedError
structured_tool = StructuredTool.from_function(foo)
assert structured_tool.name == "foo"
assert structured_tool.args == {
"bar": {"title": "Bar", "type": "integer"},
"baz": {
"title": "Baz",
"type": "array",
"items": {"type": "string"},
},
}
assert _schema(structured_tool.args_schema) == {
"properties": {
"bar": {"title": "Bar", "type": "integer"},
"baz": {
"title": "Baz",
"type": "array",
"items": {"type": "string"},
},
},
"description": inspect.getdoc(foo),
"title": "foo",
"type": "object",
"required": ["bar", "baz"],
}
assert foo.__doc__ is not None
assert structured_tool.description == textwrap.dedent(foo.__doc__).strip()
def test_structured_tool_lambda_multi_args_schema() -> None:
"""Test args schema inference when the tool argument is a lambda function."""
tool = StructuredTool.from_function(
name="tool",
description="A tool",
func=lambda tool_input, other_arg: f"{tool_input}{other_arg}", # type: ignore
)
assert tool.args_schema is not None
expected_args = {
"tool_input": {"title": "Tool Input"},
"other_arg": {"title": "Other Arg"},
}
assert tool.args == expected_args
def test_tool_partial_function_args_schema() -> None:
"""Test args schema inference when the tool argument is a partial function."""
def func(tool_input: str, other_arg: str) -> str:
assert isinstance(tool_input, str)
assert isinstance(other_arg, str)
return tool_input + other_arg
tool = Tool(
name="tool",
description="A tool",
func=partial(func, other_arg="foo"),
)
assert tool.run("bar") == "barfoo"
def test_empty_args_decorator() -> None:
"""Test inferred schema of decorated fn with no args."""
@tool
def empty_tool_input() -> str:
"""Return a constant."""
return "the empty result"
assert isinstance(empty_tool_input, BaseTool)
assert empty_tool_input.name == "empty_tool_input"
assert empty_tool_input.args == {}
assert empty_tool_input.run({}) == "the empty result"
def test_tool_from_function_with_run_manager() -> None:
"""Test run of tool when using run_manager."""
def foo(bar: str, callbacks: Optional[CallbackManagerForToolRun] = None) -> str:
"""Docstring
Args:
bar: str
"""
assert callbacks is not None
return "foo" + bar
handler = FakeCallbackHandler()
tool = Tool.from_function(foo, name="foo", description="Docstring")
assert tool.run(tool_input={"bar": "bar"}, run_manager=[handler]) == "foobar"
assert tool.run("baz", run_manager=[handler]) == "foobaz"
| |
153434
|
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 1, reason="Testing pydantic v1.")
def test__get_all_basemodel_annotations_v1() -> None:
A = TypeVar("A")
class ModelA(BaseModel, Generic[A], extra="allow"):
a: A
class ModelB(ModelA[str]):
b: Annotated[ModelA[dict[str, Any]], "foo"]
class Mixin:
def foo(self) -> str:
return "foo"
class ModelC(Mixin, ModelB):
c: dict
expected = {"a": str, "b": Annotated[ModelA[dict[str, Any]], "foo"], "c": dict}
actual = _get_all_basemodel_annotations(ModelC)
assert actual == expected
expected = {"a": str, "b": Annotated[ModelA[dict[str, Any]], "foo"]}
actual = _get_all_basemodel_annotations(ModelB)
assert actual == expected
expected = {"a": Any}
actual = _get_all_basemodel_annotations(ModelA)
assert actual == expected
expected = {"a": int}
actual = _get_all_basemodel_annotations(ModelA[int])
assert actual == expected
D = TypeVar("D", bound=Union[str, int])
class ModelD(ModelC, Generic[D]):
d: Optional[D]
expected = {
"a": str,
"b": Annotated[ModelA[dict[str, Any]], "foo"],
"c": dict,
"d": Union[str, int, None],
}
actual = _get_all_basemodel_annotations(ModelD)
assert actual == expected
expected = {
"a": str,
"b": Annotated[ModelA[dict[str, Any]], "foo"],
"c": dict,
"d": Union[int, None],
}
actual = _get_all_basemodel_annotations(ModelD[int])
assert actual == expected
def test_tool_annotations_preserved() -> None:
"""Test that annotations are preserved when creating a tool."""
@tool
def my_tool(val: int, other_val: Annotated[dict, "my annotation"]) -> str:
"""Tool docstring."""
return "foo"
schema = my_tool.get_input_schema() # type: ignore[attr-defined]
func = my_tool.func # type: ignore[attr-defined]
expected_type_hints = {
name: hint
for name, hint in func.__annotations__.items()
if name in inspect.signature(func).parameters
}
assert schema.__annotations__ == expected_type_hints
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 2, reason="Testing pydantic v2.")
def test_tool_args_schema_pydantic_v2_with_metadata() -> None:
from pydantic import BaseModel as BaseModelV2
from pydantic import Field as FieldV2
from pydantic import ValidationError as ValidationErrorV2
class Foo(BaseModelV2):
x: list[int] = FieldV2(
description="List of integers", min_length=10, max_length=15
)
@tool(args_schema=Foo)
def foo(x): # type: ignore[no-untyped-def]
"""foo"""
return x
assert foo.tool_call_schema.model_json_schema() == {
"description": "foo",
"properties": {
"x": {
"description": "List of integers",
"items": {"type": "integer"},
"maxItems": 15,
"minItems": 10,
"title": "X",
"type": "array",
}
},
"required": ["x"],
"title": "foo",
"type": "object",
}
assert foo.invoke({"x": [0] * 10})
with pytest.raises(ValidationErrorV2):
foo.invoke({"x": [0] * 9})
def test_imports() -> None:
expected_all = [
"FILTERED_ARGS",
"SchemaAnnotationError",
"create_schema_from_function",
"ToolException",
"BaseTool",
"Tool",
"StructuredTool",
"tool",
"RetrieverInput",
"create_retriever_tool",
"ToolsRenderer",
"render_text_description",
"render_text_description_and_args",
"BaseToolkit",
"convert_runnable_to_tool",
"InjectedToolArg",
]
for module_name in expected_all:
assert hasattr(tools, module_name) and getattr(tools, module_name) is not None
def test_structured_tool_direct_init() -> None:
def foo(bar: str) -> str:
return bar
async def async_foo(bar: str) -> str:
return bar
class FooSchema(BaseModel):
bar: str = Field(..., description="The bar")
tool = StructuredTool(name="foo", args_schema=FooSchema, coroutine=async_foo)
with pytest.raises(NotImplementedError):
assert tool.invoke("hello") == "hello"
def test_injected_arg_with_complex_type() -> None:
"""Test that an injected tool arg can be a complex type."""
class Foo:
def __init__(self) -> None:
self.value = "bar"
@tool
def injected_tool(x: int, foo: Annotated[Foo, InjectedToolArg]) -> str:
"""Tool that has an injected tool arg."""
return foo.value
assert injected_tool.invoke({"x": 5, "foo": Foo()}) == "bar" # type: ignore
| |
153477
|
"""Test for some custom pydantic decorators."""
from typing import Any, Optional
import pytest
from pydantic import ConfigDict
from langchain_core.utils.pydantic import (
PYDANTIC_MAJOR_VERSION,
_create_subset_model_v2,
create_model_v2,
get_fields,
is_basemodel_instance,
is_basemodel_subclass,
pre_init,
)
def test_pre_init_decorator() -> None:
from pydantic import BaseModel
class Foo(BaseModel):
x: int = 5
y: int
@pre_init
def validator(cls, v: dict[str, Any]) -> dict[str, Any]:
v["y"] = v["x"] + 1
return v
# Type ignore initialization b/c y is marked as required
foo = Foo() # type: ignore
assert foo.y == 6
foo = Foo(x=10) # type: ignore
assert foo.y == 11
def test_pre_init_decorator_with_more_defaults() -> None:
from pydantic import BaseModel, Field
class Foo(BaseModel):
a: int = 1
b: Optional[int] = None
c: int = Field(default=2)
d: int = Field(default_factory=lambda: 3)
@pre_init
def validator(cls, v: dict[str, Any]) -> dict[str, Any]:
assert v["a"] == 1
assert v["b"] is None
assert v["c"] == 2
assert v["d"] == 3
return v
# Try to create an instance of Foo
# nothing is required, but mypy can't track the default for `c`
Foo() # type: ignore
def test_with_aliases() -> None:
from pydantic import BaseModel, Field
class Foo(BaseModel):
x: int = Field(default=1, alias="y")
z: int
model_config = ConfigDict(
populate_by_name=True,
)
@pre_init
def validator(cls, v: dict[str, Any]) -> dict[str, Any]:
v["z"] = v["x"]
return v
# Based on defaults
# z is required
foo = Foo() # type: ignore
assert foo.x == 1
assert foo.z == 1
# Based on field name
# z is required
foo = Foo(x=2) # type: ignore
assert foo.x == 2
assert foo.z == 2
# Based on alias
# z is required
foo = Foo(y=2) # type: ignore
assert foo.x == 2
assert foo.z == 2
def test_is_basemodel_subclass() -> None:
"""Test pydantic."""
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic import BaseModel as BaseModelV1Proper
assert is_basemodel_subclass(BaseModelV1Proper)
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1
assert is_basemodel_subclass(BaseModelV2)
assert is_basemodel_subclass(BaseModelV1)
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
def test_is_basemodel_instance() -> None:
"""Test pydantic."""
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic import BaseModel as BaseModelV1Proper
class FooV1(BaseModelV1Proper):
x: int
assert is_basemodel_instance(FooV1(x=5))
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1
class Foo(BaseModelV2):
x: int
assert is_basemodel_instance(Foo(x=5))
class Bar(BaseModelV1):
x: int
assert is_basemodel_instance(Bar(x=5))
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 2, reason="Only tests Pydantic v2")
def test_with_field_metadata() -> None:
"""Test pydantic with field metadata"""
from pydantic import BaseModel as BaseModelV2
from pydantic import Field as FieldV2
class Foo(BaseModelV2):
x: list[int] = FieldV2(
description="List of integers", min_length=10, max_length=15
)
subset_model = _create_subset_model_v2("Foo", Foo, ["x"])
assert subset_model.model_json_schema() == {
"properties": {
"x": {
"description": "List of integers",
"items": {"type": "integer"},
"maxItems": 15,
"minItems": 10,
"title": "X",
"type": "array",
}
},
"required": ["x"],
"title": "Foo",
"type": "object",
}
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 1, reason="Only tests Pydantic v1")
def test_fields_pydantic_v1() -> None:
from pydantic import BaseModel
class Foo(BaseModel):
x: int
fields = get_fields(Foo)
assert fields == {"x": Foo.model_fields["x"]} # type: ignore[index]
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 2, reason="Only tests Pydantic v2")
def test_fields_pydantic_v2_proper() -> None:
from pydantic import BaseModel
class Foo(BaseModel):
x: int
fields = get_fields(Foo)
assert fields == {"x": Foo.model_fields["x"]}
@pytest.mark.skipif(PYDANTIC_MAJOR_VERSION != 2, reason="Only tests Pydantic v2")
def test_fields_pydantic_v1_from_2() -> None:
from pydantic.v1 import BaseModel
class Foo(BaseModel):
x: int
fields = get_fields(Foo)
assert fields == {"x": Foo.__fields__["x"]}
def test_create_model_v2() -> None:
"""Test that create model v2 works as expected."""
with pytest.warns(None) as record: # type: ignore
foo = create_model_v2("Foo", field_definitions={"a": (int, None)})
foo.model_json_schema()
assert list(record) == []
# schema is used by pydantic, but OK to re-use
with pytest.warns(None) as record: # type: ignore
foo = create_model_v2("Foo", field_definitions={"schema": (int, None)})
foo.model_json_schema()
assert list(record) == []
# From protected namespaces, but definitely OK to use.
with pytest.warns(None) as record: # type: ignore
foo = create_model_v2("Foo", field_definitions={"model_id": (int, None)})
foo.model_json_schema()
assert list(record) == []
with pytest.warns(None) as record: # type: ignore
# Verify that we can use non-English characters
field_name = "もしもし"
foo = create_model_v2("Foo", field_definitions={field_name: (int, None)})
foo.model_json_schema()
assert list(record) == []
| |
153506
|
# Add a test for schema of runnable assign
def foo(x: int) -> int:
return x
foo_ = RunnableLambda(foo)
assert foo_.assign(bar=lambda x: "foo").get_output_schema().model_json_schema() == {
"properties": {"bar": {"title": "Bar"}, "root": {"title": "Root"}},
"required": ["root", "bar"],
"title": "RunnableAssignOutput",
"type": "object",
}
def test_passthrough_assign_schema() -> None:
retriever = FakeRetriever() # str -> List[Document]
prompt = PromptTemplate.from_template("{context} {question}")
fake_llm = FakeListLLM(responses=["a"]) # str -> List[List[str]]
seq_w_assign: Runnable = (
RunnablePassthrough.assign(context=itemgetter("question") | retriever)
| prompt
| fake_llm
)
assert seq_w_assign.get_input_jsonschema() == {
"properties": {"question": {"title": "Question", "type": "string"}},
"title": "RunnableSequenceInput",
"type": "object",
"required": ["question"],
}
assert seq_w_assign.get_output_jsonschema() == {
"title": "FakeListLLMOutput",
"type": "string",
}
invalid_seq_w_assign: Runnable = (
RunnablePassthrough.assign(context=itemgetter("question") | retriever)
| fake_llm
)
# fallback to RunnableAssign.input_schema if next runnable doesn't have
# expected dict input_schema
assert invalid_seq_w_assign.get_input_jsonschema() == {
"properties": {"question": {"title": "Question"}},
"title": "RunnableParallel<context>Input",
"type": "object",
"required": ["question"],
}
@pytest.mark.skipif(
sys.version_info < (3, 9), reason="Requires python version >= 3.9 to run."
)
def test_lambda_schemas(snapshot: SnapshotAssertion) -> None:
first_lambda = lambda x: x["hello"] # noqa: E731
assert RunnableLambda(first_lambda).get_input_jsonschema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {"hello": {"title": "Hello"}},
"required": ["hello"],
}
second_lambda = lambda x, y: (x["hello"], x["bye"], y["bah"]) # noqa: E731
assert RunnableLambda(second_lambda).get_input_jsonschema() == { # type: ignore[arg-type]
"title": "RunnableLambdaInput",
"type": "object",
"properties": {"hello": {"title": "Hello"}, "bye": {"title": "Bye"}},
"required": ["bye", "hello"],
}
def get_value(input): # type: ignore[no-untyped-def]
return input["variable_name"]
assert RunnableLambda(get_value).get_input_jsonschema() == {
"title": "get_value_input",
"type": "object",
"properties": {"variable_name": {"title": "Variable Name"}},
"required": ["variable_name"],
}
async def aget_value(input): # type: ignore[no-untyped-def]
return (input["variable_name"], input.get("another"))
assert RunnableLambda(aget_value).get_input_jsonschema() == {
"title": "aget_value_input",
"type": "object",
"properties": {
"another": {"title": "Another"},
"variable_name": {"title": "Variable Name"},
},
"required": ["another", "variable_name"],
}
async def aget_values(input): # type: ignore[no-untyped-def]
return {
"hello": input["variable_name"],
"bye": input["variable_name"],
"byebye": input["yo"],
}
assert RunnableLambda(aget_values).get_input_jsonschema() == {
"title": "aget_values_input",
"type": "object",
"properties": {
"variable_name": {"title": "Variable Name"},
"yo": {"title": "Yo"},
},
"required": ["variable_name", "yo"],
}
class InputType(TypedDict):
variable_name: str
yo: int
class OutputType(TypedDict):
hello: str
bye: str
byebye: int
async def aget_values_typed(input: InputType) -> OutputType:
return {
"hello": input["variable_name"],
"bye": input["variable_name"],
"byebye": input["yo"],
}
assert (
_normalize_schema(
RunnableLambda(
aget_values_typed # type: ignore[arg-type]
).get_input_jsonschema()
)
== _normalize_schema(
{
"$defs": {
"InputType": {
"properties": {
"variable_name": {
"title": "Variable " "Name",
"type": "string",
},
"yo": {"title": "Yo", "type": "integer"},
},
"required": ["variable_name", "yo"],
"title": "InputType",
"type": "object",
}
},
"allOf": [{"$ref": "#/$defs/InputType"}],
"title": "aget_values_typed_input",
}
)
)
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(
RunnableLambda(aget_values_typed).get_output_jsonschema() # type: ignore
) == snapshot(name="schema8")
def test_with_types_with_type_generics() -> None:
"""Verify that with_types works if we use things like List[int]"""
def foo(x: int) -> None:
"""Add one to the input."""
raise NotImplementedError
# Try specifying some
RunnableLambda(foo).with_types(
output_type=list[int], # type: ignore[arg-type]
input_type=list[int], # type: ignore[arg-type]
)
RunnableLambda(foo).with_types(
output_type=Sequence[int], # type: ignore[arg-type]
input_type=Sequence[int], # type: ignore[arg-type]
)
def test_schema_with_itemgetter() -> None:
"""Test runnable with itemgetter."""
foo = RunnableLambda(itemgetter("hello"))
assert _schema(foo.input_schema) == {
"properties": {"hello": {"title": "Hello"}},
"required": ["hello"],
"title": "RunnableLambdaInput",
"type": "object",
}
prompt = ChatPromptTemplate.from_template("what is {language}?")
chain: Runnable = {"language": itemgetter("language")} | prompt
assert _schema(chain.input_schema) == {
"properties": {"language": {"title": "Language"}},
"required": ["language"],
"title": "RunnableParallel<language>Input",
"type": "object",
}
def test_schema_complex_seq() -> None:
prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
prompt2 = ChatPromptTemplate.from_template(
"what country is the city {city} in? respond in {language}"
)
model = FakeListChatModel(responses=[""])
chain1: Runnable = RunnableSequence(
prompt1, model, StrOutputParser(), name="city_chain"
)
assert chain1.name == "city_chain"
chain2: Runnable = (
{"city": chain1, "language": itemgetter("language")}
| prompt2
| model
| StrOutputParser()
)
assert chain2.get_input_jsonschema() == {
"title": "RunnableParallel<city,language>Input",
"type": "object",
"properties": {
"person": {"title": "Person", "type": "string"},
"language": {"title": "Language"},
},
"required": ["person", "language"],
}
assert chain2.get_output_jsonschema() == {
"title": "StrOutputParserOutput",
"type": "string",
}
assert chain2.with_types(input_type=str).get_input_jsonschema() == {
"title": "RunnableSequenceInput",
"type": "string",
}
assert chain2.with_types(input_type=int).get_output_jsonschema() == {
"title": "StrOutputParserOutput",
"type": "string",
}
class InputType(BaseModel):
person: str
assert chain2.with_types(input_type=InputType).get_input_jsonschema() == {
"title": "InputType",
"type": "object",
"properties": {"person": {"title": "Person", "type": "string"}},
"required": ["person"],
}
| |
153548
|
from typing import Any, Callable, NamedTuple, Union
import pytest
from langchain_core.beta.runnables.context import Context
from langchain_core.language_models import FakeListLLM, FakeStreamingListLLM
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.prompt_values import StringPromptValue
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.runnables.base import Runnable, RunnableLambda
from langchain_core.runnables.passthrough import RunnablePassthrough
from langchain_core.runnables.utils import aadd, add
class _TestCase(NamedTuple):
input: Any
output: Any
def seq_naive_rag() -> Runnable:
context = [
"Hi there!",
"How are you?",
"What's your name?",
]
retriever = RunnableLambda(lambda x: context)
prompt = PromptTemplate.from_template("{context} {question}")
llm = FakeListLLM(responses=["hello"])
return (
Context.setter("input")
| {
"context": retriever | Context.setter("context"),
"question": RunnablePassthrough(),
}
| prompt
| llm
| StrOutputParser()
| {
"result": RunnablePassthrough(),
"context": Context.getter("context"),
"input": Context.getter("input"),
}
)
def seq_naive_rag_alt() -> Runnable:
context = [
"Hi there!",
"How are you?",
"What's your name?",
]
retriever = RunnableLambda(lambda x: context)
prompt = PromptTemplate.from_template("{context} {question}")
llm = FakeListLLM(responses=["hello"])
return (
Context.setter("input")
| {
"context": retriever | Context.setter("context"),
"question": RunnablePassthrough(),
}
| prompt
| llm
| StrOutputParser()
| Context.setter("result")
| Context.getter(["context", "input", "result"])
)
def seq_naive_rag_scoped() -> Runnable:
context = [
"Hi there!",
"How are you?",
"What's your name?",
]
retriever = RunnableLambda(lambda x: context)
prompt = PromptTemplate.from_template("{context} {question}")
llm = FakeListLLM(responses=["hello"])
scoped = Context.create_scope("a_scope")
return (
Context.setter("input")
| {
"context": retriever | Context.setter("context"),
"question": RunnablePassthrough(),
"scoped": scoped.setter("context") | scoped.getter("context"),
}
| prompt
| llm
| StrOutputParser()
| Context.setter("result")
| Context.getter(["context", "input", "result"])
)
test_cases = [
(
Context.setter("foo") | Context.getter("foo"),
(
_TestCase("foo", "foo"),
_TestCase("bar", "bar"),
),
),
(
Context.setter("input") | {"bar": Context.getter("input")},
(
_TestCase("foo", {"bar": "foo"}),
_TestCase("bar", {"bar": "bar"}),
),
),
(
{"bar": Context.setter("input")} | Context.getter("input"),
(
_TestCase("foo", "foo"),
_TestCase("bar", "bar"),
),
),
(
(
PromptTemplate.from_template("{foo} {bar}")
| Context.setter("prompt")
| FakeListLLM(responses=["hello"])
| StrOutputParser()
| {
"response": RunnablePassthrough(),
"prompt": Context.getter("prompt"),
}
),
(
_TestCase(
{"foo": "foo", "bar": "bar"},
{"response": "hello", "prompt": StringPromptValue(text="foo bar")},
),
_TestCase(
{"foo": "bar", "bar": "foo"},
{"response": "hello", "prompt": StringPromptValue(text="bar foo")},
),
),
),
(
(
PromptTemplate.from_template("{foo} {bar}")
| Context.setter("prompt", prompt_str=lambda x: x.to_string())
| FakeListLLM(responses=["hello"])
| StrOutputParser()
| {
"response": RunnablePassthrough(),
"prompt": Context.getter("prompt"),
"prompt_str": Context.getter("prompt_str"),
}
),
(
_TestCase(
{"foo": "foo", "bar": "bar"},
{
"response": "hello",
"prompt": StringPromptValue(text="foo bar"),
"prompt_str": "foo bar",
},
),
_TestCase(
{"foo": "bar", "bar": "foo"},
{
"response": "hello",
"prompt": StringPromptValue(text="bar foo"),
"prompt_str": "bar foo",
},
),
),
),
(
(
PromptTemplate.from_template("{foo} {bar}")
| Context.setter(prompt_str=lambda x: x.to_string())
| FakeListLLM(responses=["hello"])
| StrOutputParser()
| {
"response": RunnablePassthrough(),
"prompt_str": Context.getter("prompt_str"),
}
),
(
_TestCase(
{"foo": "foo", "bar": "bar"},
{"response": "hello", "prompt_str": "foo bar"},
),
_TestCase(
{"foo": "bar", "bar": "foo"},
{"response": "hello", "prompt_str": "bar foo"},
),
),
),
(
(
PromptTemplate.from_template("{foo} {bar}")
| Context.setter("prompt_str", lambda x: x.to_string())
| FakeListLLM(responses=["hello"])
| StrOutputParser()
| {
"response": RunnablePassthrough(),
"prompt_str": Context.getter("prompt_str"),
}
),
(
_TestCase(
{"foo": "foo", "bar": "bar"},
{"response": "hello", "prompt_str": "foo bar"},
),
_TestCase(
{"foo": "bar", "bar": "foo"},
{"response": "hello", "prompt_str": "bar foo"},
),
),
),
(
(
PromptTemplate.from_template("{foo} {bar}")
| Context.setter("prompt")
| FakeStreamingListLLM(responses=["hello"])
| StrOutputParser()
| {
"response": RunnablePassthrough(),
"prompt": Context.getter("prompt"),
}
),
(
_TestCase(
{"foo": "foo", "bar": "bar"},
{"response": "hello", "prompt": StringPromptValue(text="foo bar")},
),
_TestCase(
{"foo": "bar", "bar": "foo"},
{"response": "hello", "prompt": StringPromptValue(text="bar foo")},
),
),
),
(
seq_naive_rag,
(
_TestCase(
"What up",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "What up",
},
),
_TestCase(
"Howdy",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "Howdy",
},
),
),
),
(
seq_naive_rag_alt,
(
_TestCase(
"What up",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "What up",
},
),
_TestCase(
"Howdy",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "Howdy",
},
),
),
),
(
seq_naive_rag_scoped,
(
_TestCase(
"What up",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "What up",
},
),
_TestCase(
"Howdy",
{
"result": "hello",
"context": [
"Hi there!",
"How are you?",
"What's your name?",
],
"input": "Howdy",
},
),
),
),
]
| |
153561
|
"langchain_core",
"language_models",
"fake",
"FakeStreamingListLLM"
],
"repr": "FakeStreamingListLLM(responses=['first item, second item, third item'])",
"name": "FakeStreamingListLLM"
},
{
"lc": 1,
"type": "constructor",
"id": [
"tests",
"unit_tests",
"runnables",
"test_runnable",
"FakeSplitIntoListParser"
],
"kwargs": {},
"name": "FakeSplitIntoListParser"
}
],
"last": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"runnable",
"RunnableEach"
],
"kwargs": {
"bound": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"language_models",
"fake",
"FakeStreamingListLLM"
],
"repr": "FakeStreamingListLLM(responses=['this', 'is', 'a', 'test'])",
"name": "FakeStreamingListLLM"
}
},
"name": "RunnableEach<FakeStreamingListLLM>"
}
},
"name": "RunnableSequence"
}
'''
# ---
# name: test_higher_order_lambda_runnable
'''
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"runnable",
"RunnableSequence"
],
"kwargs": {
"first": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"runnable",
"RunnableParallel"
],
"kwargs": {
"steps__": {
"key": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"runnables",
"base",
"RunnableLambda"
],
"repr": "RunnableLambda(lambda x: x['key'])"
},
"input": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"runnable",
"RunnableParallel"
],
"kwargs": {
"steps__": {
"question": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"runnables",
"base",
"RunnableLambda"
],
"repr": "RunnableLambda(lambda x: x['question'])"
}
}
},
"name": "RunnableParallel<question>"
}
}
},
"name": "RunnableParallel<key,input>"
},
"last": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"runnables",
"base",
"RunnableLambda"
],
"repr": "RunnableLambda(router)"
}
},
"name": "RunnableSequence"
}
'''
# ---
# name: test_lambda_schemas[schema8]
dict({
'$defs': dict({
'OutputType': dict({
'properties': dict({
'bye': dict({
'title': 'Bye',
'type': 'string',
}),
'byebye': dict({
'title': 'Byebye',
'type': 'integer',
}),
'hello': dict({
'title': 'Hello',
'type': 'string',
}),
}),
'required': list([
'hello',
'bye',
'byebye',
]),
'title': 'OutputType',
'type': 'object',
}),
}),
'$ref': '#/$defs/OutputType',
'title': 'aget_values_typed_output',
})
# ---
# name: test_prompt_with_chat_model
'''
ChatPromptTemplate(input_variables=['question'], input_types={}, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='You are a nice assistant.'), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], input_types={}, partial_variables={}, template='{question}'), additional_kwargs={})])
| FakeListChatModel(responses=['foo'])
'''
# ---
# name: test_prompt_with_chat_model.1
'''
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"runnable",
"RunnableSequence"
],
"kwargs": {
"first": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptTemplate"
],
"kwargs": {
"input_variables": [
"question"
],
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"SystemMessagePromptTemplate"
],
"kwargs": {
"prompt": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"prompt",
"PromptTemplate"
],
"kwargs": {
"input_variables": [],
"template": "You are a nice assistant.",
"template_format": "f-string"
},
"name": "PromptTemplate"
}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"HumanMessagePromptTemplate"
],
"kwargs": {
"prompt": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"prompt",
"PromptTemplate"
],
"kwargs": {
"input_variables": [
"question"
],
"template": "{question}",
"template_format": "f-string"
},
"name": "PromptTemplate"
}
}
}
]
},
"name": "ChatPromptTemplate"
},
"last": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"language_models",
"fake_chat_models",
"FakeListChatModel"
],
"repr": "FakeListChatModel(responses=['foo'])",
"name": "FakeListChatModel"
}
},
"name": "RunnableSequence"
}
'''
# ---
# name: test_prompt_with_chat_model.2
list([
| |
153565
|
"kwargs": {
"first": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptTemplate"
],
"kwargs": {
"input_variables": [
"question"
],
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"SystemMessagePromptTemplate"
],
"kwargs": {
"prompt": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"prompt",
"PromptTemplate"
],
"kwargs": {
"input_variables": [],
"template": "You are a nice assistant.",
"template_format": "f-string"
},
"name": "PromptTemplate"
}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"HumanMessagePromptTemplate"
],
"kwargs": {
"prompt": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"prompt",
"PromptTemplate"
],
"kwargs": {
"input_variables": [
"question"
],
"template": "{question}",
"template_format": "f-string"
},
"name": "PromptTemplate"
}
}
}
]
},
"name": "ChatPromptTemplate"
},
"last": {
"lc": 1,
"type": "not_implemented",
"id": [
"langchain_core",
"language_models",
"fake",
"FakeListLLM"
],
"repr": "FakeListLLM(responses=['foo', 'bar'])",
"name": "FakeListLLM"
}
},
"name": "RunnableSequence"
}
'''
# ---
# name: test_prompt_with_llm.1
list([
RunTree(id=UUID('00000000-0000-4000-8000-000000000000'), name='RunnableSequence', start_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), run_type='chain', end_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), extra={}, error=None, serialized=None, events=[{'name': 'start', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}, {'name': 'end', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}], inputs={'question': 'What is your name?'}, outputs={'output': 'foo'}, reference_example_id=None, parent_run_id=None, tags=[], attachments={}, child_runs=[RunTree(id=UUID('00000000-0000-4000-8000-000000000001'), name='ChatPromptTemplate', start_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), run_type='prompt', end_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), extra={}, error=None, serialized={'lc': 1, 'type': 'constructor', 'id': ['langchain', 'prompts', 'chat', 'ChatPromptTemplate'], 'kwargs': {'input_variables': ['question'], 'messages': [{'lc': 1, 'type': 'constructor', 'id': ['langchain', 'prompts', 'chat', 'SystemMessagePromptTemplate'], 'kwargs': {'prompt': {'lc': 1, 'type': 'constructor', 'id': ['langchain', 'prompts', 'prompt', 'PromptTemplate'], 'kwargs': {'input_variables': [], 'template': 'You are a nice assistant.', 'template_format': 'f-string'}, 'name': 'PromptTemplate'}}}, {'lc': 1, 'type': 'constructor', 'id': ['langchain', 'prompts', 'chat', 'HumanMessagePromptTemplate'], 'kwargs': {'prompt': {'lc': 1, 'type': 'constructor', 'id': ['langchain', 'prompts', 'prompt', 'PromptTemplate'], 'kwargs': {'input_variables': ['question'], 'template': '{question}', 'template_format': 'f-string'}, 'name': 'PromptTemplate'}}}]}, 'name': 'ChatPromptTemplate'}, events=[{'name': 'start', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}, {'name': 'end', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}], inputs={'question': 'What is your name?'}, outputs={'output': ChatPromptValue(messages=[SystemMessage(content='You are a nice assistant.', additional_kwargs={}, response_metadata={}), HumanMessage(content='What is your name?', additional_kwargs={}, response_metadata={})])}, reference_example_id=None, parent_run_id=UUID('00000000-0000-4000-8000-000000000000'), tags=['seq:step:1'], attachments={}, child_runs=[], session_name='default', session_id=None, dotted_order='20230101T000000000000Z00000000-0000-4000-8000-000000000000.20230101T000000000000Z00000000-0000-4000-8000-000000000001', trace_id=UUID('00000000-0000-4000-8000-000000000000')), RunTree(id=UUID('00000000-0000-4000-8000-000000000002'), name='FakeListLLM', start_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), run_type='llm', end_time=FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), extra={'invocation_params': {'responses': ['foo', 'bar'], '_type': 'fake-list', 'stop': None}, 'options': {'stop': None}, 'batch_size': 1, 'metadata': {'ls_provider': 'fakelist', 'ls_model_type': 'llm'}}, error=None, serialized={'lc': 1, 'type': 'not_implemented', 'id': ['langchain_core', 'language_models', 'fake', 'FakeListLLM'], 'repr': "FakeListLLM(responses=['foo', 'bar'])", 'name': 'FakeListLLM'}, events=[{'name': 'start', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}, {'name': 'end', 'time': FakeDatetime(2023, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)}], inputs={'prompts': ['System: You are a nice assistant.\nHuman: What is your name?']}, outputs={'generations': [[{'text': 'foo', 'generation_info': None, 'type': 'Generation'}]], 'llm_output': None, 'run': None, 'type': 'LLMResult'}, reference_example_id=None, parent_run_id=UUID('00000000-0000-4000-8000-000000000000'), tags=['seq:step:2'], attachments={}, child_runs=[], session_name='default', session_id=None, dotted_order='20230101T000000000000Z00000000-0000-4000-8000-000000000000.20230101T000000000000Z00000000-0000-4000-8000-000000000002', trace_id=UUID('00000000-0000-4000-8000-000000000000'))], session_name='default', session_id=None, dotted_order='20230101T000000000000Z00000000-0000-4000-8000-000000000000', trace_id=UUID('00000000-0000-4000-8000-000000000000')),
])
# ---
# name: test_prompt_with_llm.2
list([
| |
153613
|
"""Set of tests that complement the standard tests for vectorstore.
These tests verify that the base abstraction does appropriate delegation to
the relevant methods.
"""
from __future__ import annotations
import uuid
from collections.abc import Iterable, Sequence
from typing import Any, Optional
import pytest
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings, FakeEmbeddings
from langchain_core.vectorstores import VectorStore
class CustomAddTextsVectorstore(VectorStore):
"""A vectorstore that only implements add texts."""
def __init__(self) -> None:
self.store: dict[str, Document] = {}
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[list[dict]] = None,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> list[str]:
if not isinstance(texts, list):
texts = list(texts)
ids_iter = iter(ids or [])
ids_ = []
metadatas_ = metadatas or [{} for _ in texts]
for text, metadata in zip(texts, metadatas_ or []):
next_id = next(ids_iter, None)
id_ = next_id or str(uuid.uuid4())
self.store[id_] = Document(page_content=text, metadata=metadata, id=id_)
ids_.append(id_)
return ids_
def get_by_ids(self, ids: Sequence[str], /) -> list[Document]:
return [self.store[id] for id in ids if id in self.store]
@classmethod
def from_texts( # type: ignore
cls,
texts: list[str],
embedding: Embeddings,
metadatas: Optional[list[dict]] = None,
**kwargs: Any,
) -> CustomAddTextsVectorstore:
vectorstore = CustomAddTextsVectorstore()
vectorstore.add_texts(texts, metadatas=metadatas, **kwargs)
return vectorstore
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> list[Document]:
raise NotImplementedError
class CustomAddDocumentsVectorstore(VectorStore):
"""A vectorstore that only implements add documents."""
def __init__(self) -> None:
self.store: dict[str, Document] = {}
def add_documents(
self,
documents: list[Document],
*,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> list[str]:
ids_ = []
ids_iter = iter(ids or [])
for document in documents:
id_ = next(ids_iter) if ids else document.id or str(uuid.uuid4())
self.store[id_] = Document(
id=id_, page_content=document.page_content, metadata=document.metadata
)
ids_.append(id_)
return ids_
def get_by_ids(self, ids: Sequence[str], /) -> list[Document]:
return [self.store[id] for id in ids if id in self.store]
@classmethod
def from_texts( # type: ignore
cls,
texts: list[str],
embedding: Embeddings,
metadatas: Optional[list[dict]] = None,
**kwargs: Any,
) -> CustomAddDocumentsVectorstore:
vectorstore = CustomAddDocumentsVectorstore()
vectorstore.add_texts(texts, metadatas=metadatas, **kwargs)
return vectorstore
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> list[Document]:
raise NotImplementedError
@pytest.mark.parametrize(
"vs_class", [CustomAddTextsVectorstore, CustomAddDocumentsVectorstore]
)
def test_default_add_documents(vs_class: type[VectorStore]) -> None:
"""Test that we can implement the upsert method of the CustomVectorStore
class without violating the Liskov Substitution Principle.
"""
store = vs_class()
# Check upsert with id
assert store.add_documents([Document(id="1", page_content="hello")]) == ["1"]
assert store.get_by_ids(["1"]) == [Document(id="1", page_content="hello")]
# Check upsert without id
ids = store.add_documents([Document(page_content="world")])
assert len(ids) == 1
assert store.get_by_ids(ids) == [Document(id=ids[0], page_content="world")]
# Check that add_documents works
assert store.add_documents([Document(id="5", page_content="baz")]) == ["5"]
# Test add documents with id specified in both document and ids
original_document = Document(id="7", page_content="baz")
assert store.add_documents([original_document], ids=["6"]) == ["6"]
assert original_document.id == "7" # original document should not be modified
assert store.get_by_ids(["6"]) == [Document(id="6", page_content="baz")]
@pytest.mark.parametrize(
"vs_class", [CustomAddTextsVectorstore, CustomAddDocumentsVectorstore]
)
def test_default_add_texts(vs_class: type[VectorStore]) -> None:
store = vs_class()
# Check that default implementation of add_texts works
assert store.add_texts(["hello", "world"], ids=["3", "4"]) == ["3", "4"]
assert store.get_by_ids(["3", "4"]) == [
Document(id="3", page_content="hello"),
Document(id="4", page_content="world"),
]
# Add texts without ids
ids_ = store.add_texts(["foo", "bar"])
assert len(ids_) == 2
assert store.get_by_ids(ids_) == [
Document(id=ids_[0], page_content="foo"),
Document(id=ids_[1], page_content="bar"),
]
# Add texts with metadatas
ids_2 = store.add_texts(["foo", "bar"], metadatas=[{"foo": "bar"}] * 2)
assert len(ids_2) == 2
assert store.get_by_ids(ids_2) == [
Document(id=ids_2[0], page_content="foo", metadata={"foo": "bar"}),
Document(id=ids_2[1], page_content="bar", metadata={"foo": "bar"}),
]
@pytest.mark.parametrize(
"vs_class", [CustomAddTextsVectorstore, CustomAddDocumentsVectorstore]
)
async def test_default_aadd_documents(vs_class: type[VectorStore]) -> None:
"""Test delegation to the synchronous method."""
store = vs_class()
# Check upsert with id
assert await store.aadd_documents([Document(id="1", page_content="hello")]) == ["1"]
assert await store.aget_by_ids(["1"]) == [Document(id="1", page_content="hello")]
# Check upsert without id
ids = await store.aadd_documents([Document(page_content="world")])
assert len(ids) == 1
assert await store.aget_by_ids(ids) == [Document(id=ids[0], page_content="world")]
# Check that add_documents works
assert await store.aadd_documents([Document(id="5", page_content="baz")]) == ["5"]
# Test add documents with id specified in both document and ids
original_document = Document(id="7", page_content="baz")
assert await store.aadd_documents([original_document], ids=["6"]) == ["6"]
assert original_document.id == "7" # original document should not be modified
assert await store.aget_by_ids(["6"]) == [Document(id="6", page_content="baz")]
| |
153619
|
from langchain_core.output_parsers import __all__
EXPECTED_ALL = [
"BaseLLMOutputParser",
"BaseGenerationOutputParser",
"BaseOutputParser",
"ListOutputParser",
"CommaSeparatedListOutputParser",
"NumberedListOutputParser",
"MarkdownListOutputParser",
"StrOutputParser",
"BaseTransformOutputParser",
"BaseCumulativeTransformOutputParser",
"SimpleJsonOutputParser",
"XMLOutputParser",
"JsonOutputParser",
"PydanticOutputParser",
"JsonOutputToolsParser",
"JsonOutputKeyToolsParser",
"PydanticToolsParser",
]
def test_all_imports() -> None:
assert set(__all__) == set(EXPECTED_ALL)
| |
153626
|
"""Test PydanticOutputParser"""
from enum import Enum
from typing import Literal, Optional
import pydantic
import pytest
from pydantic import BaseModel, Field
from pydantic.v1 import BaseModel as V1BaseModel
from langchain_core.exceptions import OutputParserException
from langchain_core.language_models import ParrotFakeChatModel
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.output_parsers.json import JsonOutputParser
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.utils.pydantic import TBaseModel
class ForecastV2(pydantic.BaseModel):
temperature: int
f_or_c: Literal["F", "C"]
forecast: str
class ForecastV1(V1BaseModel):
temperature: int
f_or_c: Literal["F", "C"]
forecast: str
@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
def test_pydantic_parser_chaining(
pydantic_object: TBaseModel,
) -> None:
prompt = PromptTemplate(
template="""{{
"temperature": 20,
"f_or_c": "C",
"forecast": "Sunny"
}}""",
input_variables=[],
)
model = ParrotFakeChatModel()
parser = PydanticOutputParser(pydantic_object=pydantic_object) # type: ignore
chain = prompt | model | parser
res = chain.invoke({})
assert type(res) is pydantic_object
assert res.f_or_c == "C"
assert res.temperature == 20
assert res.forecast == "Sunny"
@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
def test_pydantic_parser_validation(pydantic_object: TBaseModel) -> None:
bad_prompt = PromptTemplate(
template="""{{
"temperature": "oof",
"f_or_c": 1,
"forecast": "Sunny"
}}""",
input_variables=[],
)
model = ParrotFakeChatModel()
parser = PydanticOutputParser(pydantic_object=pydantic_object) # type: ignore
chain = bad_prompt | model | parser
with pytest.raises(OutputParserException):
chain.invoke({})
# JSON output parser tests
@pytest.mark.parametrize("pydantic_object", [ForecastV2, ForecastV1])
def test_json_parser_chaining(
pydantic_object: TBaseModel,
) -> None:
prompt = PromptTemplate(
template="""{{
"temperature": 20,
"f_or_c": "C",
"forecast": "Sunny"
}}""",
input_variables=[],
)
model = ParrotFakeChatModel()
parser = JsonOutputParser(pydantic_object=pydantic_object) # type: ignore
chain = prompt | model | parser
res = chain.invoke({})
assert res["f_or_c"] == "C"
assert res["temperature"] == 20
assert res["forecast"] == "Sunny"
class Actions(Enum):
SEARCH = "Search"
CREATE = "Create"
UPDATE = "Update"
DELETE = "Delete"
class TestModel(BaseModel):
action: Actions = Field(description="Action to be performed")
action_input: str = Field(description="Input to be used in the action")
additional_fields: Optional[str] = Field(
description="Additional fields", default=None
)
for_new_lines: str = Field(description="To be used to test newlines")
# Prevent pytest from trying to run tests on TestModel
TestModel.__test__ = False # type: ignore[attr-defined]
DEF_RESULT = """{
"action": "Update",
"action_input": "The PydanticOutputParser class is powerful",
"additional_fields": null,
"for_new_lines": "not_escape_newline:\n escape_newline: \\n"
}"""
# action 'update' with a lowercase 'u' to test schema validation failure.
DEF_RESULT_FAIL = """{
"action": "update",
"action_input": "The PydanticOutputParser class is powerful",
"additional_fields": null
}"""
DEF_EXPECTED_RESULT = TestModel(
action=Actions.UPDATE,
action_input="The PydanticOutputParser class is powerful",
additional_fields=None,
for_new_lines="not_escape_newline:\n escape_newline: \n",
)
def test_pydantic_output_parser() -> None:
"""Test PydanticOutputParser."""
pydantic_parser: PydanticOutputParser = PydanticOutputParser(
pydantic_object=TestModel
)
result = pydantic_parser.parse(DEF_RESULT)
print("parse_result:", result) # noqa: T201
assert result == DEF_EXPECTED_RESULT
assert pydantic_parser.OutputType is TestModel
def test_pydantic_output_parser_fail() -> None:
"""Test PydanticOutputParser where completion result fails schema validation."""
pydantic_parser: PydanticOutputParser = PydanticOutputParser(
pydantic_object=TestModel
)
with pytest.raises(OutputParserException) as e:
pydantic_parser.parse(DEF_RESULT_FAIL)
assert "Failed to parse TestModel from completion" in str(e)
def test_pydantic_output_parser_type_inference() -> None:
"""Test pydantic output parser type inference."""
class SampleModel(BaseModel):
foo: int
bar: str
# Ignoring mypy error that appears in python 3.8, but not 3.11.
# This seems to be functionally correct, so we'll ignore the error.
pydantic_parser = PydanticOutputParser(pydantic_object=SampleModel) # type: ignore
schema = pydantic_parser.get_output_schema().model_json_schema()
assert schema == {
"properties": {
"bar": {"title": "Bar", "type": "string"},
"foo": {"title": "Foo", "type": "integer"},
},
"required": ["foo", "bar"],
"title": "SampleModel",
"type": "object",
}
def test_format_instructions_preserves_language() -> None:
"""Test format instructions does not attempt to encode into ascii."""
from pydantic import BaseModel, Field
description = (
"你好, こんにちは, नमस्ते, Bonjour, Hola, "
"Olá, 안녕하세요, Jambo, Merhaba, Γειά σου"
)
class Foo(BaseModel):
hello: str = Field(
description=(
"你好, こんにちは, नमस्ते, Bonjour, Hola, "
"Olá, 안녕하세요, Jambo, Merhaba, Γειά σου"
)
)
parser = PydanticOutputParser(pydantic_object=Foo) # type: ignore
assert description in parser.get_format_instructions()
| |
153630
|
import json
from typing import Any
import pytest
from pydantic import BaseModel
from langchain_core.exceptions import OutputParserException
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.output_parsers.openai_functions import (
JsonOutputFunctionsParser,
PydanticOutputFunctionsParser,
)
from langchain_core.outputs import ChatGeneration
def test_json_output_function_parser() -> None:
"""Test the JSON output function parser is configured with robust defaults."""
message = AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {
"name": "function_name",
"arguments": '{"arg1": "code\ncode"}',
}
},
)
chat_generation = ChatGeneration(message=message)
# Full output
# Test that the parsers defaults are configured to parse in non-strict mode
parser = JsonOutputFunctionsParser(args_only=False)
result = parser.parse_result([chat_generation])
assert result == {"arguments": {"arg1": "code\ncode"}, "name": "function_name"}
# Args only
parser = JsonOutputFunctionsParser(args_only=True)
result = parser.parse_result([chat_generation])
assert result == {"arg1": "code\ncode"}
# Verify that the original message is not modified
assert message.additional_kwargs == {
"function_call": {
"name": "function_name",
"arguments": '{"arg1": "code\ncode"}',
}
}
@pytest.mark.parametrize(
"config",
[
{
"args_only": False,
"strict": False,
"args": '{"arg1": "value1"}',
"result": {"arguments": {"arg1": "value1"}, "name": "function_name"},
"exception": None,
},
{
"args_only": True,
"strict": False,
"args": '{"arg1": "value1"}',
"result": {"arg1": "value1"},
"exception": None,
},
{
"args_only": True,
"strict": False,
"args": '{"code": "print(2+\n2)"}',
"result": {"code": "print(2+\n2)"},
"exception": None,
},
{
"args_only": True,
"strict": False,
"args": '{"code": "你好)"}',
"result": {"code": "你好)"},
"exception": None,
},
{
"args_only": True,
"strict": True,
"args": '{"code": "print(2+\n2)"}',
"exception": OutputParserException,
},
],
)
def test_json_output_function_parser_strictness(config: dict[str, Any]) -> None:
"""Test parsing with JSON strictness on and off."""
args = config["args"]
message = AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {"name": "function_name", "arguments": args}
},
)
chat_generation = ChatGeneration(message=message)
# Full output
parser = JsonOutputFunctionsParser(
strict=config["strict"], args_only=config["args_only"]
)
if config["exception"] is not None:
with pytest.raises(config["exception"]):
parser.parse_result([chat_generation])
else:
assert parser.parse_result([chat_generation]) == config["result"]
@pytest.mark.parametrize(
"bad_message",
[
# Human message has no function call
HumanMessage(content="This is a test message"),
# AIMessage has no function call information.
AIMessage(content="This is a test message", additional_kwargs={}),
# Bad function call information (arguments should be a string)
AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {"name": "function_name", "arguments": {}}
},
),
# Bad function call information (arguments should be proper json)
AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {"name": "function_name", "arguments": "noqweqwe"}
},
),
],
)
def test_exceptions_raised_while_parsing(bad_message: BaseMessage) -> None:
"""Test exceptions raised correctly while using JSON parser."""
chat_generation = ChatGeneration(message=bad_message)
with pytest.raises(OutputParserException):
JsonOutputFunctionsParser().parse_result([chat_generation])
def test_pydantic_output_functions_parser() -> None:
"""Test pydantic output functions parser."""
message = AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {
"name": "function_name",
"arguments": json.dumps({"name": "value", "age": 10}),
}
},
)
chat_generation = ChatGeneration(message=message)
class Model(BaseModel):
"""Test model."""
name: str
age: int
# Full output
parser = PydanticOutputFunctionsParser(pydantic_schema=Model)
result = parser.parse_result([chat_generation])
assert result == Model(name="value", age=10)
def test_pydantic_output_functions_parser_multiple_schemas() -> None:
"""Test that the parser works if providing multiple pydantic schemas."""
message = AIMessage(
content="This is a test message",
additional_kwargs={
"function_call": {
"name": "cookie",
"arguments": json.dumps({"name": "value", "age": 10}),
}
},
)
chat_generation = ChatGeneration(message=message)
class Cookie(BaseModel):
"""Test model."""
name: str
age: int
class Dog(BaseModel):
"""Test model."""
species: str
# Full output
parser = PydanticOutputFunctionsParser(
pydantic_schema={"cookie": Cookie, "dog": Dog}
)
result = parser.parse_result([chat_generation])
assert result == Cookie(name="value", age=10)
| |
153648
|
"""Test few shot prompt template."""
from collections.abc import Sequence
from typing import Any
import pytest
from langchain_core.example_selectors import BaseExampleSelector
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import (
AIMessagePromptTemplate,
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
from langchain_core.prompts.chat import SystemMessagePromptTemplate
from langchain_core.prompts.few_shot import (
FewShotChatMessagePromptTemplate,
FewShotPromptTemplate,
)
from langchain_core.prompts.prompt import PromptTemplate
EXAMPLE_PROMPT = PromptTemplate(
input_variables=["question", "answer"], template="{question}: {answer}"
)
@pytest.fixture()
@pytest.mark.requires("jinja2")
def example_jinja2_prompt() -> tuple[PromptTemplate, list[dict[str, str]]]:
example_template = "{{ word }}: {{ antonym }}"
examples = [
{"word": "happy", "antonym": "sad"},
{"word": "tall", "antonym": "short"},
]
return (
PromptTemplate(
input_variables=["word", "antonym"],
template=example_template,
template_format="jinja2",
),
examples,
)
def test_suffix_only() -> None:
"""Test prompt works with just a suffix."""
suffix = "This is a {foo} test."
input_variables = ["foo"]
prompt = FewShotPromptTemplate(
input_variables=input_variables,
suffix=suffix,
examples=[],
example_prompt=EXAMPLE_PROMPT,
)
output = prompt.format(foo="bar")
expected_output = "This is a bar test."
assert output == expected_output
def test_auto_infer_input_variables() -> None:
"""Test prompt works with just a suffix."""
suffix = "This is a {foo} test."
prompt = FewShotPromptTemplate(
suffix=suffix,
examples=[],
example_prompt=EXAMPLE_PROMPT,
)
assert prompt.input_variables == ["foo"]
def test_prompt_missing_input_variables() -> None:
"""Test error is raised when input variables are not provided."""
# Test when missing in suffix
template = "This is a {foo} test."
with pytest.raises(ValueError):
FewShotPromptTemplate(
input_variables=[],
suffix=template,
examples=[],
example_prompt=EXAMPLE_PROMPT,
validate_template=True,
)
assert FewShotPromptTemplate(
input_variables=[],
suffix=template,
examples=[],
example_prompt=EXAMPLE_PROMPT,
).input_variables == ["foo"]
# Test when missing in prefix
template = "This is a {foo} test."
with pytest.raises(ValueError):
FewShotPromptTemplate(
input_variables=[],
suffix="foo",
examples=[],
prefix=template,
example_prompt=EXAMPLE_PROMPT,
validate_template=True,
)
assert FewShotPromptTemplate(
input_variables=[],
suffix="foo",
examples=[],
prefix=template,
example_prompt=EXAMPLE_PROMPT,
).input_variables == ["foo"]
async def test_few_shot_functionality() -> None:
"""Test that few shot works with examples."""
prefix = "This is a test about {content}."
suffix = "Now you try to talk about {new_content}."
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
prompt = FewShotPromptTemplate(
suffix=suffix,
prefix=prefix,
input_variables=["content", "new_content"],
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
)
expected_output = (
"This is a test about animals.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
output = prompt.format(content="animals", new_content="party")
assert output == expected_output
output = await prompt.aformat(content="animals", new_content="party")
assert output == expected_output
def test_partial_init_string() -> None:
"""Test prompt can be initialized with partial variables."""
prefix = "This is a test about {content}."
suffix = "Now you try to talk about {new_content}."
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
prompt = FewShotPromptTemplate(
suffix=suffix,
prefix=prefix,
input_variables=["new_content"],
partial_variables={"content": "animals"},
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
)
output = prompt.format(new_content="party")
expected_output = (
"This is a test about animals.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
assert output == expected_output
def test_partial_init_func() -> None:
"""Test prompt can be initialized with partial variables."""
prefix = "This is a test about {content}."
suffix = "Now you try to talk about {new_content}."
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
prompt = FewShotPromptTemplate(
suffix=suffix,
prefix=prefix,
input_variables=["new_content"],
partial_variables={"content": lambda: "animals"},
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
)
output = prompt.format(new_content="party")
expected_output = (
"This is a test about animals.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
assert output == expected_output
def test_partial() -> None:
"""Test prompt can be partialed."""
prefix = "This is a test about {content}."
suffix = "Now you try to talk about {new_content}."
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
prompt = FewShotPromptTemplate(
suffix=suffix,
prefix=prefix,
input_variables=["content", "new_content"],
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
)
new_prompt = prompt.partial(content="foo")
new_output = new_prompt.format(new_content="party")
expected_output = (
"This is a test about foo.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
assert new_output == expected_output
output = prompt.format(new_content="party", content="bar")
expected_output = (
"This is a test about bar.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
assert output == expected_output
@pytest.mark.requires("jinja2")
def test_prompt_jinja2_functionality(
example_jinja2_prompt: tuple[PromptTemplate, list[dict[str, str]]],
) -> None:
prefix = "Starting with {{ foo }}"
suffix = "Ending with {{ bar }}"
prompt = FewShotPromptTemplate(
input_variables=["foo", "bar"],
suffix=suffix,
prefix=prefix,
examples=example_jinja2_prompt[1],
example_prompt=example_jinja2_prompt[0],
template_format="jinja2",
)
output = prompt.format(foo="hello", bar="bye")
expected_output = (
"Starting with hello\n\n" "happy: sad\n\n" "tall: short\n\n" "Ending with bye"
)
assert output == expected_output
| |
153654
|
f test_mustache_prompt_from_template(snapshot: SnapshotAssertion) -> None:
"""Test prompts can be constructed from a template."""
# Single input variable.
template = "This is a {{foo}} test."
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(foo="bar") == "This is a bar test."
assert prompt.input_variables == ["foo"]
assert prompt.get_input_jsonschema() == {
"title": "PromptInput",
"type": "object",
"properties": {"foo": {"title": "Foo", "type": "string", "default": None}},
}
# Multiple input variables.
template = "This {{bar}} is a {{foo}} test."
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(bar="baz", foo="bar") == "This baz is a bar test."
assert prompt.input_variables == ["bar", "foo"]
assert prompt.get_input_jsonschema() == {
"title": "PromptInput",
"type": "object",
"properties": {
"bar": {"title": "Bar", "type": "string", "default": None},
"foo": {"title": "Foo", "type": "string", "default": None},
},
}
# Multiple input variables with repeats.
template = "This {{bar}} is a {{foo}} test {{&foo}}."
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(bar="baz", foo="bar") == "This baz is a bar test bar."
assert prompt.input_variables == ["bar", "foo"]
assert prompt.get_input_jsonschema() == {
"title": "PromptInput",
"type": "object",
"properties": {
"bar": {"title": "Bar", "type": "string", "default": None},
"foo": {"title": "Foo", "type": "string", "default": None},
},
}
# Nested variables.
template = "This {{obj.bar}} is a {{obj.foo}} test {{{foo}}}."
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(obj={"bar": "foo", "foo": "bar"}, foo="baz") == (
"This foo is a bar test baz."
)
assert prompt.input_variables == ["foo", "obj"]
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(prompt.get_input_jsonschema()) == snapshot(
name="schema_0"
)
# . variables
template = "This {{.}} is a test."
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(foo="baz") == ("This {'foo': 'baz'} is a test.")
assert prompt.input_variables == []
assert prompt.get_input_jsonschema() == {
"title": "PromptInput",
"type": "object",
"properties": {},
}
# section/context variables
template = """This{{#foo}}
{{bar}}
{{/foo}}is a test."""
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(foo={"bar": "yo"}) == (
"""This
yo
is a test."""
)
assert prompt.input_variables == ["foo"]
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(prompt.get_input_jsonschema()) == snapshot(
name="schema_2"
)
# more complex nested section/context variables
template = """This{{#foo}}
{{bar}}
{{#baz}}
{{qux}}
{{/baz}}
{{quux}}
{{/foo}}is a test."""
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(
foo={"bar": "yo", "baz": [{"qux": "wassup"}], "quux": "hello"}
) == (
"""This
yo
wassup
hello
is a test."""
)
assert prompt.input_variables == ["foo"]
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(prompt.get_input_jsonschema()) == snapshot(
name="schema_3"
)
# triply nested section/context variables
template = """This{{#foo}}
{{bar}}
{{#baz.qux}}
{{#barfoo}}
{{foobar}}
{{/barfoo}}
{{foobar}}
{{/baz.qux}}
{{quux}}
{{/foo}}is a test."""
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(
foo={
"bar": "yo",
"baz": {
"qux": [
{"foobar": "wassup"},
{"foobar": "yoyo", "barfoo": {"foobar": "hello there"}},
]
},
"quux": "hello",
}
) == (
"""This
yo
wassup
hello there
yoyo
hello
is a test."""
)
assert prompt.input_variables == ["foo"]
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(prompt.get_input_jsonschema()) == snapshot(
name="schema_4"
)
# section/context variables with repeats
template = """This{{#foo}}
{{bar}}
{{/foo}}is a test."""
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format(foo=[{"bar": "yo"}, {"bar": "hello"}]) == (
"""This
yo
hello
is a test.""" # noqa: W293
)
assert prompt.input_variables == ["foo"]
if PYDANTIC_VERSION >= (2, 9):
assert _normalize_schema(prompt.get_input_jsonschema()) == snapshot(
name="schema_5"
)
template = """This{{^foo}}
no foos
{{/foo}}is a test."""
prompt = PromptTemplate.from_template(template, template_format="mustache")
assert prompt.format() == (
"""This
no foos
is a test."""
)
assert prompt.input_variables == ["foo"]
assert prompt.get_input_jsonschema() == {
"properties": {"foo": {"default": None, "title": "Foo", "type": "object"}},
"title": "PromptInput",
"type": "object",
}
def test_prompt_from_template_with_partial_variables() -> None:
"""Test prompts can be constructed from a template with partial variables."""
# given
template = "This is a {foo} test {bar}."
partial_variables = {"bar": "baz"}
# when
prompt = PromptTemplate.from_template(template, partial_variables=partial_variables)
# then
expected_prompt = PromptTemplate(
template=template,
input_variables=["foo"],
partial_variables=partial_variables,
)
assert prompt == expected_prompt
def test_prompt_missing_input_variables() -> None:
"""Test error is raised when input variables are not provided."""
template = "This is a {foo} test."
input_variables: list = []
with pytest.raises(ValueError):
PromptTemplate(
input_variables=input_variables, template=template, validate_template=True
)
assert PromptTemplate(
input_variables=input_variables, template=template
).input_variables == ["foo"]
def test_prompt_empty_input_variable() -> None:
"""Test error is raised when empty string input variable."""
with pytest.raises(ValueError):
PromptTemplate(input_variables=[""], template="{}", validate_template=True)
def test_prompt_wrong_input_variables() -> None:
"""Test error is raised when name of input variable is wrong."""
template = "This is a {foo} test."
input_variables = ["bar"]
with pytest.raises(ValueError):
PromptTemplate(
input_variables=input_variables, template=template, validate_template=True
)
assert PromptTemplate(
input_variables=input_variables, template=template
).input_variables == ["foo"]
| |
153658
|
"""Test few shot prompt template."""
import pytest
from langchain_core.prompts.few_shot_with_templates import FewShotPromptWithTemplates
from langchain_core.prompts.prompt import PromptTemplate
EXAMPLE_PROMPT = PromptTemplate(
input_variables=["question", "answer"], template="{question}: {answer}"
)
async def test_prompttemplate_prefix_suffix() -> None:
"""Test that few shot works when prefix and suffix are PromptTemplates."""
prefix = PromptTemplate(
input_variables=["content"], template="This is a test about {content}."
)
suffix = PromptTemplate(
input_variables=["new_content"],
template="Now you try to talk about {new_content}.",
)
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
prompt = FewShotPromptWithTemplates(
suffix=suffix,
prefix=prefix,
input_variables=["content", "new_content"],
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
)
expected_output = (
"This is a test about animals.\n"
"foo: bar\n"
"baz: foo\n"
"Now you try to talk about party."
)
output = prompt.format(content="animals", new_content="party")
assert output == expected_output
output = await prompt.aformat(content="animals", new_content="party")
assert output == expected_output
def test_prompttemplate_validation() -> None:
"""Test that few shot works when prefix and suffix are PromptTemplates."""
prefix = PromptTemplate(
input_variables=["content"], template="This is a test about {content}."
)
suffix = PromptTemplate(
input_variables=["new_content"],
template="Now you try to talk about {new_content}.",
)
examples = [
{"question": "foo", "answer": "bar"},
{"question": "baz", "answer": "foo"},
]
with pytest.raises(ValueError):
FewShotPromptWithTemplates(
suffix=suffix,
prefix=prefix,
input_variables=[],
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
validate_template=True,
)
assert FewShotPromptWithTemplates(
suffix=suffix,
prefix=prefix,
input_variables=[],
examples=examples,
example_prompt=EXAMPLE_PROMPT,
example_separator="\n",
).input_variables == ["content", "new_content"]
| |
153664
|
import base64
import tempfile
from pathlib import Path
from typing import Any, Union, cast
import pytest
from pydantic import ValidationError
from syrupy import SnapshotAssertion
from langchain_core._api.deprecation import (
LangChainPendingDeprecationWarning,
)
from langchain_core.load import dumpd, load
from langchain_core.messages import (
AIMessage,
BaseMessage,
HumanMessage,
SystemMessage,
get_buffer_string,
)
from langchain_core.prompt_values import ChatPromptValue
from langchain_core.prompts import PromptTemplate
from langchain_core.prompts.chat import (
AIMessagePromptTemplate,
BaseMessagePromptTemplate,
ChatMessage,
ChatMessagePromptTemplate,
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
_convert_to_message,
)
from tests.unit_tests.pydantic_utils import _normalize_schema
@pytest.fixture
def messages() -> list[BaseMessagePromptTemplate]:
"""Create messages."""
system_message_prompt = SystemMessagePromptTemplate(
prompt=PromptTemplate(
template="Here's some context: {context}",
input_variables=["context"],
)
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="Hello {foo}, I'm {bar}. Thanks for the {context}",
input_variables=["foo", "bar", "context"],
)
)
ai_message_prompt = AIMessagePromptTemplate(
prompt=PromptTemplate(
template="I'm an AI. I'm {foo}. I'm {bar}.",
input_variables=["foo", "bar"],
)
)
chat_message_prompt = ChatMessagePromptTemplate(
role="test",
prompt=PromptTemplate(
template="I'm a generic message. I'm {foo}. I'm {bar}.",
input_variables=["foo", "bar"],
),
)
return [
system_message_prompt,
human_message_prompt,
ai_message_prompt,
chat_message_prompt,
]
@pytest.fixture
def chat_prompt_template(
messages: list[BaseMessagePromptTemplate],
) -> ChatPromptTemplate:
"""Create a chat prompt template."""
return ChatPromptTemplate(
input_variables=["foo", "bar", "context"],
messages=messages, # type: ignore[arg-type]
)
def test_create_chat_prompt_template_from_template() -> None:
"""Create a chat prompt template."""
prompt = ChatPromptTemplate.from_template("hi {foo} {bar}")
assert prompt.messages == [
HumanMessagePromptTemplate.from_template("hi {foo} {bar}")
]
def test_create_chat_prompt_template_from_template_partial() -> None:
"""Create a chat prompt template with partials."""
prompt = ChatPromptTemplate.from_template(
"hi {foo} {bar}", partial_variables={"foo": "jim"}
)
expected_prompt = PromptTemplate(
template="hi {foo} {bar}",
input_variables=["bar"],
partial_variables={"foo": "jim"},
)
assert len(prompt.messages) == 1
output_prompt = prompt.messages[0]
assert isinstance(output_prompt, HumanMessagePromptTemplate)
assert output_prompt.prompt == expected_prompt
def test_create_system_message_prompt_template_from_template_partial() -> None:
"""Create a system message prompt template with partials."""
graph_creator_content = """
Your instructions are:
{instructions}
History:
{history}
"""
json_prompt_instructions: dict = {}
graph_analyst_template = SystemMessagePromptTemplate.from_template(
template=graph_creator_content,
input_variables=["history"],
partial_variables={"instructions": json_prompt_instructions},
)
assert graph_analyst_template.format(history="history") == SystemMessage(
content="\n Your instructions are:\n "
" {}\n History:\n "
"history\n "
)
def test_create_system_message_prompt_list_template() -> None:
graph_creator_content1 = """
This is the prompt for the first test:
{variables}
"""
graph_creator_content2 = """
This is the prompt for the second test:
{variables}
"""
graph_analyst_template = SystemMessagePromptTemplate.from_template(
template=[graph_creator_content1, graph_creator_content2],
input_variables=["variables"],
)
assert graph_analyst_template.format(variables="foo") == SystemMessage(
content=[
{
"type": "text",
"text": "\n This is the prompt for the first test:\n foo\n ",
},
{
"type": "text",
"text": "\n This is the prompt for "
"the second test:\n foo\n ",
},
]
)
def test_create_system_message_prompt_list_template_partial_variables_not_null() -> (
None
):
graph_creator_content1 = """
This is the prompt for the first test:
{variables}
"""
graph_creator_content2 = """
This is the prompt for the second test:
{variables}
"""
try:
graph_analyst_template = SystemMessagePromptTemplate.from_template(
template=[graph_creator_content1, graph_creator_content2],
input_variables=["variables"],
partial_variables={"variables": "foo"},
)
graph_analyst_template.format(variables="foo")
except ValueError as e:
assert str(e) == "Partial variables are not supported for list of templates."
def test_message_prompt_template_from_template_file() -> None:
expected = ChatMessagePromptTemplate(
prompt=PromptTemplate(
template="Question: {question}\nAnswer:", input_variables=["question"]
),
role="human",
)
actual = ChatMessagePromptTemplate.from_template_file(
Path(__file__).parent.parent / "data" / "prompt_file.txt",
["question"],
role="human",
)
assert expected == actual
async def test_chat_prompt_template(chat_prompt_template: ChatPromptTemplate) -> None:
"""Test chat prompt template."""
prompt = chat_prompt_template.format_prompt(foo="foo", bar="bar", context="context")
assert isinstance(prompt, ChatPromptValue)
messages = prompt.to_messages()
assert len(messages) == 4
assert messages[0].content == "Here's some context: context"
assert messages[1].content == "Hello foo, I'm bar. Thanks for the context"
assert messages[2].content == "I'm an AI. I'm foo. I'm bar."
assert messages[3].content == "I'm a generic message. I'm foo. I'm bar."
async_prompt = await chat_prompt_template.aformat_prompt(
foo="foo", bar="bar", context="context"
)
assert async_prompt.to_messages() == messages
string = prompt.to_string()
expected = (
"System: Here's some context: context\n"
"Human: Hello foo, I'm bar. Thanks for the context\n"
"AI: I'm an AI. I'm foo. I'm bar.\n"
"test: I'm a generic message. I'm foo. I'm bar."
)
assert string == expected
string = chat_prompt_template.format(foo="foo", bar="bar", context="context")
assert string == expected
string = await chat_prompt_template.aformat(foo="foo", bar="bar", context="context")
assert string == expected
def test_chat_prompt_template_from_messages(
messages: list[BaseMessagePromptTemplate],
) -> None:
"""Test creating a chat prompt template from messages."""
chat_prompt_template = ChatPromptTemplate.from_messages(messages)
assert sorted(chat_prompt_template.input_variables) == sorted(
["context", "foo", "bar"]
)
assert len(chat_prompt_template.messages) == 4
async def test_chat_prompt_template_from_messages_using_role_strings() -> None:
"""Test creating a chat prompt template from role string messages."""
template = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
]
)
expected = [
SystemMessage(
content="You are a helpful AI bot. Your name is Bob.", additional_kwargs={}
),
HumanMessage(
content="Hello, how are you doing?", additional_kwargs={}, example=False
),
AIMessage(
content="I'm doing well, thanks!", additional_kwargs={}, example=False
),
HumanMessage(content="What is your name?", additional_kwargs={}, example=False),
]
messages = template.format_messages(name="Bob", user_input="What is your name?")
assert messages == expected
messages = await template.aformat_messages(
name="Bob", user_input="What is your name?"
)
assert messages == expected
| |
153682
|
from langchain_core.documents import Document
def test_str() -> None:
assert str(Document(page_content="Hello, World!")) == "page_content='Hello, World!'"
assert (
str(Document(page_content="Hello, World!", metadata={"a": 3}))
== "page_content='Hello, World!' metadata={'a': 3}"
)
def test_repr() -> None:
assert (
repr(Document(page_content="Hello, World!"))
== "Document(metadata={}, page_content='Hello, World!')"
)
assert (
repr(Document(page_content="Hello, World!", metadata={"a": 3}))
== "Document(metadata={'a': 3}, page_content='Hello, World!')"
)
| |
153683
|
from langchain_core.documents import Document
def test_init() -> None:
for doc in [
Document(page_content="foo"),
Document(page_content="foo", metadata={"a": 1}),
Document(page_content="foo", id=None),
Document(page_content="foo", id="1"),
Document(page_content="foo", id=1),
]:
assert isinstance(doc, Document)
| |
153723
|
# flake8: noqa
"""Global values and configuration that apply to all of LangChain."""
import warnings
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from langchain_core.caches import BaseCache
# DO NOT USE THESE VALUES DIRECTLY!
# Use them only via `get_<X>()` and `set_<X>()` below,
# or else your code may behave unexpectedly with other uses of these global settings:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
_verbose: bool = False
_debug: bool = False
_llm_cache: Optional["BaseCache"] = None
def set_verbose(value: bool) -> None:
"""Set a new value for the `verbose` global setting.
Args:
value: The new value for the `verbose` global setting.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing verbose from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.verbose` is no longer supported, and once all users
# have migrated to using `set_verbose()` here.
langchain.verbose = value
except ImportError:
pass
global _verbose
_verbose = value
def get_verbose() -> bool:
"""Get the value of the `verbose` global setting.
Returns:
The value of the `verbose` global setting.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
".*Importing verbose from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.verbose` is no longer supported, and once all users
# have migrated to using `set_verbose()` here.
#
# In the meantime, the `verbose` setting is considered True if either the old
# or the new value are True. This accommodates users who haven't migrated
# to using `set_verbose()` yet. Those users are getting deprecation warnings
# directing them to use `set_verbose()` when they import `langchain.verbose`.
old_verbose = langchain.verbose
except ImportError:
old_verbose = False
global _verbose
return _verbose or old_verbose
def set_debug(value: bool) -> None:
"""Set a new value for the `debug` global setting.
Args:
value: The new value for the `debug` global setting.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="Importing debug from langchain root module is no longer supported",
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.debug` is no longer supported, and once all users
# have migrated to using `set_debug()` here.
langchain.debug = value
except ImportError:
pass
global _debug
_debug = value
def get_debug() -> bool:
"""Get the value of the `debug` global setting.
Returns:
The value of the `debug` global setting.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="Importing debug from langchain root module is no longer supported",
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.debug` is no longer supported, and once all users
# have migrated to using `set_debug()` here.
#
# In the meantime, the `debug` setting is considered True if either the old
# or the new value are True. This accommodates users who haven't migrated
# to using `set_debug()` yet. Those users are getting deprecation warnings
# directing them to use `set_debug()` when they import `langchain.debug`.
old_debug = langchain.debug
except ImportError:
old_debug = False
global _debug
return _debug or old_debug
def set_llm_cache(value: Optional["BaseCache"]) -> None:
"""Set a new LLM cache, overwriting the previous value, if any.
Args:
value: The new LLM cache to use. If `None`, the LLM cache is disabled.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing llm_cache from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.llm_cache` is no longer supported, and
# once all users have migrated to using `set_llm_cache()` here.
langchain.llm_cache = value
except ImportError:
pass
global _llm_cache
_llm_cache = value
def get_llm_cache() -> "BaseCache":
"""Get the value of the `llm_cache` global setting.
Returns:
The value of the `llm_cache` global setting.
"""
try:
import langchain # type: ignore[import]
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing llm_cache from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.llm_cache` is no longer supported, and
# once all users have migrated to using `set_llm_cache()` here.
#
# In the meantime, the `llm_cache` setting returns whichever of
# its two backing sources is truthy (not `None` and non-empty),
# or the old value if both are falsy. This accommodates users
# who haven't migrated to using `set_llm_cache()` yet.
# Those users are getting deprecation warnings directing them
# to use `set_llm_cache()` when they import `langchain.llm_cache`.
old_llm_cache = langchain.llm_cache
except ImportError:
old_llm_cache = None
global _llm_cache
return _llm_cache or old_llm_cache
| |
153725
|
"""Abstract base class for a Document retrieval system.
A retrieval system is defined as something that can take string queries and return
the most 'relevant' Documents from some source.
Usage:
A retriever follows the standard Runnable interface, and should be used
via the standard Runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
Implementation:
When implementing a custom retriever, the class should implement
the `_get_relevant_documents` method to define the logic for retrieving documents.
Optionally, an async native implementations can be provided by overriding the
`_aget_relevant_documents` method.
Example: A retriever that returns the first 5 documents from a list of documents
.. code-block:: python
from langchain_core import Document, BaseRetriever
from typing import List
class SimpleRetriever(BaseRetriever):
docs: List[Document]
k: int = 5
def _get_relevant_documents(self, query: str) -> List[Document]:
\"\"\"Return the first k documents from the list of documents\"\"\"
return self.docs[:self.k]
async def _aget_relevant_documents(self, query: str) -> List[Document]:
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
Example: A simple retriever based on a scikit-learn vectorizer
.. code-block:: python
from sklearn.metrics.pairwise import cosine_similarity
class TFIDFRetriever(BaseRetriever, BaseModel):
vectorizer: Any
docs: List[Document]
tfidf_array: Any
k: int = 4
class Config:
arbitrary_types_allowed = True
def _get_relevant_documents(self, query: str) -> List[Document]:
# Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
query_vec = self.vectorizer.transform([query])
# Op -- (n_docs,1) -- Cosine Sim with each doc
results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]
""" # noqa: E501
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
_new_arg_supported: bool = False
_expects_other_args: bool = False
tags: Optional[list[str]] = None
"""Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a retriever with its
use case.
"""
metadata: Optional[dict[str, Any]] = None
"""Optional metadata associated with the retriever. Defaults to None.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a retriever with its
use case.
"""
def __init_subclass__(cls, **kwargs: Any) -> None:
super().__init_subclass__(**kwargs)
# Version upgrade for old retrievers that implemented the public
# methods directly.
if cls.get_relevant_documents != BaseRetriever.get_relevant_documents:
warnings.warn(
"Retrievers must implement abstract `_get_relevant_documents` method"
" instead of `get_relevant_documents`",
DeprecationWarning,
stacklevel=4,
)
swap = cls.get_relevant_documents
cls.get_relevant_documents = ( # type: ignore[assignment]
BaseRetriever.get_relevant_documents
)
cls._get_relevant_documents = swap # type: ignore[assignment]
if (
hasattr(cls, "aget_relevant_documents")
and cls.aget_relevant_documents != BaseRetriever.aget_relevant_documents
):
warnings.warn(
"Retrievers must implement abstract `_aget_relevant_documents` method"
" instead of `aget_relevant_documents`",
DeprecationWarning,
stacklevel=4,
)
aswap = cls.aget_relevant_documents
cls.aget_relevant_documents = ( # type: ignore[assignment]
BaseRetriever.aget_relevant_documents
)
cls._aget_relevant_documents = aswap # type: ignore[assignment]
parameters = signature(cls._get_relevant_documents).parameters
cls._new_arg_supported = parameters.get("run_manager") is not None
# If a V1 retriever broke the interface and expects additional arguments
cls._expects_other_args = (
len(set(parameters.keys()) - {"self", "query", "run_manager"}) > 0
)
def _get_ls_params(self, **kwargs: Any) -> LangSmithRetrieverParams:
"""Get standard params for tracing."""
default_retriever_name = self.get_name()
if default_retriever_name.startswith("Retriever"):
default_retriever_name = default_retriever_name[9:]
elif default_retriever_name.endswith("Retriever"):
default_retriever_name = default_retriever_name[:-9]
default_retriever_name = default_retriever_name.lower()
ls_params = LangSmithRetrieverParams(ls_retriever_name=default_retriever_name)
return ls_params
def invoke(
self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> list[Document]:
"""Invoke the retriever to get relevant documents.
Main entry point for synchronous retriever invocations.
Args:
input: The query string.
config: Configuration for the retriever. Defaults to None.
kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents.
Examples:
.. code-block:: python
retriever.invoke("query")
"""
from langchain_core.callbacks.manager import CallbackManager
config = ensure_config(config)
inheritable_metadata = {
**(config.get("metadata") or {}),
**self._get_ls_params(**kwargs),
}
callback_manager = CallbackManager.configure(
config.get("callbacks"),
None,
verbose=kwargs.get("verbose", False),
inheritable_tags=config.get("tags"),
local_tags=self.tags,
inheritable_metadata=inheritable_metadata,
local_metadata=self.metadata,
)
run_manager = callback_manager.on_retriever_start(
None,
input,
name=config.get("run_name") or self.get_name(),
run_id=kwargs.pop("run_id", None),
)
try:
_kwargs = kwargs if self._expects_other_args else {}
if self._new_arg_supported:
result = self._get_relevant_documents(
input, run_manager=run_manager, **_kwargs
)
else:
result = self._get_relevant_documents(input, **_kwargs)
except Exception as e:
run_manager.on_retriever_error(e)
raise e
else:
run_manager.on_retriever_end(
result,
)
return result
async def ainvoke(
self,
input: str,
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> list[Document]:
"""Asynchronously invoke the retriever to get relevant documents.
Main entry point for asynchronous retriever invocations.
Args:
input: The query string.
config: Configuration for the retriever. Defaults to None.
kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents.
Examples:
.. code-block:: python
await retriever.ainvoke("query")
"""
from langchain_core.callbacks.manager import AsyncCallbackManager
config = ensure_config(config)
inheritable_metadata = {
**(config.get("metadata") or {}),
**self._get_ls_params(**kwargs),
}
callback_manager = AsyncCallbackManager.configure(
config.get("callbacks"),
None,
verbose=kwargs.get("verbose", False),
inheritable_tags=config.get("tags"),
local_tags=self.tags,
inheritable_metadata=inheritable_metadata,
local_metadata=self.metadata,
)
run_manager = await callback_manager.on_retriever_start(
None,
input,
name=config.get("run_name") or self.get_name(),
run_id=kwargs.pop("run_id", None),
)
try:
_kwargs = kwargs if self._expects_other_args else {}
if self._new_arg_supported:
result = await self._aget_relevant_documents(
input, run_manager=run_manager, **_kwargs
)
else:
result = await self._aget_relevant_documents(input, **_kwargs)
except Exception as e:
await run_manager.on_retriever_error(e)
raise e
else:
await run_manager.on_retriever_end(
result,
)
return result
| |
153736
|
"""Custom **exceptions** for LangChain."""
from enum import Enum
from typing import Any, Optional
class LangChainException(Exception): # noqa: N818
"""General LangChain exception."""
class TracerException(LangChainException):
"""Base class for exceptions in tracers module."""
class OutputParserException(ValueError, LangChainException): # noqa: N818
"""Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
Parameters:
error: The error that's being re-raised or an error message.
observation: String explanation of error which can be passed to a
model to try and remediate the issue. Defaults to None.
llm_output: String model output which is error-ing.
Defaults to None.
send_to_llm: Whether to send the observation and llm_output back to an Agent
after an OutputParserException has been raised. This gives the underlying
model driving the agent the context that the previous output was improperly
structured, in the hopes that it will update the output to the correct
format. Defaults to False.
"""
def __init__(
self,
error: Any,
observation: Optional[str] = None,
llm_output: Optional[str] = None,
send_to_llm: bool = False,
):
if isinstance(error, str):
error = create_message(
message=error, error_code=ErrorCode.OUTPUT_PARSING_FAILURE
)
super().__init__(error)
if send_to_llm and (observation is None or llm_output is None):
msg = (
"Arguments 'observation' & 'llm_output'"
" are required if 'send_to_llm' is True"
)
raise ValueError(msg)
self.observation = observation
self.llm_output = llm_output
self.send_to_llm = send_to_llm
class ErrorCode(Enum):
INVALID_PROMPT_INPUT = "INVALID_PROMPT_INPUT"
INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS"
MESSAGE_COERCION_FAILURE = "MESSAGE_COERCION_FAILURE"
MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION"
MODEL_NOT_FOUND = "MODEL_NOT_FOUND"
MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT"
OUTPUT_PARSING_FAILURE = "OUTPUT_PARSING_FAILURE"
def create_message(*, message: str, error_code: ErrorCode) -> str:
return (
f"{message}\n"
"For troubleshooting, visit: https://python.langchain.com/docs/"
f"troubleshooting/errors/{error_code.value}"
)
| |
153743
|
class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
"""Base class for chat models.
Key imperative methods:
Methods that actually call the underlying model.
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| Method | Input | Output | Description |
+===========================+================================================================+=====================================================================+==================================================================================================+
| `invoke` | str | List[dict | tuple | BaseMessage] | PromptValue | BaseMessage | A single chat model call. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `ainvoke` | ''' | BaseMessage | Defaults to running invoke in an async executor. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `stream` | ''' | Iterator[BaseMessageChunk] | Defaults to yielding output of invoke. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `astream` | ''' | AsyncIterator[BaseMessageChunk] | Defaults to yielding output of ainvoke. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `astream_events` | ''' | AsyncIterator[StreamEvent] | Event types: 'on_chat_model_start', 'on_chat_model_stream', 'on_chat_model_end'. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `batch` | List['''] | List[BaseMessage] | Defaults to running invoke in concurrent threads. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `abatch` | List['''] | List[BaseMessage] | Defaults to running ainvoke in concurrent threads. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `batch_as_completed` | List['''] | Iterator[Tuple[int, Union[BaseMessage, Exception]]] | Defaults to running invoke in concurrent threads. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
| `abatch_as_completed` | List['''] | AsyncIterator[Tuple[int, Union[BaseMessage, Exception]]] | Defaults to running ainvoke in concurrent threads. |
+---------------------------+----------------------------------------------------------------+---------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
This table provides a brief overview of the main imperative methods. Please see the base Runnable reference for full documentation.
Key declarative methods:
Methods for creating another Runnable using the ChatModel.
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| Method | Description |
+==================================+===========================================================================================================+
| `bind_tools` | Create ChatModel that can call tools. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| `with_structured_output` | Create wrapper that structures model output using schema. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| `with_retry` | Create wrapper that retries model calls on failure. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| `with_fallbacks` | Create wrapper that falls back to other models on failure. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| `configurable_fields` | Specify init args of the model that can be configured at runtime via the RunnableConfig. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
| `configurable_alternatives` | Specify alternative models which can be swapped in at runtime via the RunnableConfig. |
+----------------------------------+-----------------------------------------------------------------------------------------------------------+
This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
Creating custom chat model:
Custom chat model implementations should inherit from this class.
Please reference the table below for information about which
methods and properties are required or optional for implementations.
+----------------------------------+--------------------------------------------------------------------+-------------------+
| Method/Property | Description | Required/Optional |
+==================================+====================================================================+===================+
| `_generate` | Use to generate a chat result from a prompt | Required |
+----------------------------------+--------------------------------------------------------------------+-------------------+
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
+----------------------------------+--------------------------------------------------------------------+-------------------+
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
+----------------------------------+--------------------------------------------------------------------+-------------------+
| `_stream` | Use to implement streaming | Optional |
+----------------------------------+--------------------------------------------------------------------+-------------------+
| `_agenerate` | Use to implement a native async method | Optional |
+----------------------------------+--------------------------------------------------------------------+-------------------+
| `_astream` | Use to implement async version of `_stream` | Optional |
+----------------------------------+--------------------------------------------------------------------+-------------------+
Follow the guide for more information on how to implement a custom Chat Model:
[Guide](https://python.langchain.com/docs/how_to/custom_chat_model/).
""" # noqa: E501
callback_manager: Optional[BaseCallbackManager] = deprecated(
name="callback_manager", since="0.1.7", removal="1.0", alternative="callbacks"
)(
Field(
default=None,
exclude=True,
description="Callback manager to add to the run trace.",
)
)
rate_limiter: Optional[BaseRateLimiter] = Field(default=None, exclude=True)
"An optional rate limiter to use for limiting the number of requests."
disable_streaming: Union[bool, Literal["tool_calling"]] = False
"""Whether to disable streaming for this model.
If streaming is bypassed, then ``stream()/astream()`` will defer to
``invoke()/ainvoke()``.
- If True, will always bypass streaming case.
- If "tool_calling", will bypass streaming case only when the model is called
with a ``tools`` keyword argument.
- If False (default), will always use streaming case if available.
"""
@model_validator(mode="before")
@classmethod
def raise_deprecation(cls, values: dict) -> Any:
"""Raise deprecation warning if callback_manager is used.
Args:
values (Dict): Values to validate.
Returns:
Dict: Validated values.
Raises:
DeprecationWarning: If callback_manager is used.
"""
if values.get("callback_manager") is not None:
warnings.warn(
"callback_manager is deprecated. Please use callbacks instead.",
DeprecationWarning,
stacklevel=5,
)
values["callbacks"] = values.pop("callback_manager", None)
return values
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@cached_property
def _serialized(self) -> dict[str, Any]:
return dumpd(self)
# --- Runnable methods ---
@property
@override
def OutputType(self) -> Any:
"""Get the output type for this runnable."""
return AnyMessage
def _convert_input(self, input: LanguageModelInput) -> PromptValue:
if isinstance(input, PromptValue):
return input
elif isinstance(input, str):
return StringPromptValue(text=input)
elif isinstance(input, Sequence):
return ChatPromptValue(messages=convert_to_messages(input))
else:
msg = (
f"Invalid input type {type(input)}. "
"Must be a PromptValue, str, or list of BaseMessages."
)
raise ValueError(msg)
def invoke(
self,
input: LanguageModelInput,
config: Optional[RunnableConfig] = None,
*,
stop: Optional[list[str]] = None,
**kwargs: Any,
) -> BaseMessage:
config = ensure_config(config)
return cast(
ChatGeneration,
self.generate_prompt(
[self._convert_input(input)],
stop=stop,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.pop("run_id", None),
**kwargs,
).generations[0][0],
).message
async def ainvoke(
self,
input: LanguageModelInput,
config: Optional[RunnableConfig] = None,
*,
stop: Optional[list[str]] = None,
**kwargs: Any,
) -> BaseMessage:
config = ensure_config(config)
llm_result = await self.agenerate_prompt(
[self._convert_input(input)],
stop=stop,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.pop("run_id", None),
**kwargs,
)
return cast(ChatGeneration, llm_result.generations[0][0]).message
| |
153765
|
import inspect
from typing import Any, Callable, Literal, Optional, Union, get_type_hints
from pydantic import BaseModel, Field, create_model
from langchain_core.callbacks import Callbacks
from langchain_core.runnables import Runnable
from langchain_core.tools.base import BaseTool
from langchain_core.tools.simple import Tool
from langchain_core.tools.structured import StructuredTool
def tool(
*args: Union[str, Callable, Runnable],
return_direct: bool = False,
args_schema: Optional[type] = None,
infer_schema: bool = True,
response_format: Literal["content", "content_and_artifact"] = "content",
parse_docstring: bool = False,
error_on_invalid_docstring: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop. Defaults to False.
args_schema: optional argument schema for user to specify.
Defaults to None.
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Defaults to True.
response_format: The tool response format. If "content" then the output of
the tool is interpreted as the contents of a ToolMessage. If
"content_and_artifact" then the output is expected to be a two-tuple
corresponding to the (content, artifact) of a ToolMessage.
Defaults to "content".
parse_docstring: if ``infer_schema`` and ``parse_docstring``, will attempt to
parse parameter descriptions from Google Style function docstrings.
Defaults to False.
error_on_invalid_docstring: if ``parse_docstring`` is provided, configure
whether to raise ValueError on invalid Google Style docstrings.
Defaults to True.
Returns:
The tool.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool(response_format="content_and_artifact")
def search_api(query: str) -> Tuple[str, dict]:
return "partial json of results", {"full": "object of results"}
.. versionadded:: 0.2.14
Parse Google-style docstrings:
.. code-block:: python
@tool(parse_docstring=True)
def foo(bar: str, baz: int) -> str:
\"\"\"The foo.
Args:
bar: The bar.
baz: The baz.
\"\"\"
return bar
foo.args_schema.model_json_schema()
.. code-block:: python
{
"title": "foo",
"description": "The foo.",
"type": "object",
"properties": {
"bar": {
"title": "Bar",
"description": "The bar.",
"type": "string"
},
"baz": {
"title": "Baz",
"description": "The baz.",
"type": "integer"
}
},
"required": [
"bar",
"baz"
]
}
Note that parsing by default will raise ``ValueError`` if the docstring
is considered invalid. A docstring is considered invalid if it contains
arguments not in the function signature, or is unable to be parsed into
a summary and "Args:" blocks. Examples below:
.. code-block:: python
# No args section
def invalid_docstring_1(bar: str, baz: int) -> str:
\"\"\"The foo.\"\"\"
return bar
# Improper whitespace between summary and args section
def invalid_docstring_2(bar: str, baz: int) -> str:
\"\"\"The foo.
Args:
bar: The bar.
baz: The baz.
\"\"\"
return bar
# Documented args absent from function signature
def invalid_docstring_3(bar: str, baz: int) -> str:
\"\"\"The foo.
Args:
banana: The bar.
monkey: The baz.
\"\"\"
return bar
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(dec_func: Union[Callable, Runnable]) -> BaseTool:
if isinstance(dec_func, Runnable):
runnable = dec_func
if runnable.input_schema.model_json_schema().get("type") != "object":
msg = "Runnable must have an object schema."
raise ValueError(msg)
async def ainvoke_wrapper(
callbacks: Optional[Callbacks] = None, **kwargs: Any
) -> Any:
return await runnable.ainvoke(kwargs, {"callbacks": callbacks})
def invoke_wrapper(
callbacks: Optional[Callbacks] = None, **kwargs: Any
) -> Any:
return runnable.invoke(kwargs, {"callbacks": callbacks})
coroutine = ainvoke_wrapper
func = invoke_wrapper
schema: Optional[type[BaseModel]] = runnable.input_schema
description = repr(runnable)
elif inspect.iscoroutinefunction(dec_func):
coroutine = dec_func
func = None
schema = args_schema
description = None
else:
coroutine = None
func = dec_func
schema = args_schema
description = None
if infer_schema or args_schema is not None:
return StructuredTool.from_function(
func,
coroutine,
name=tool_name,
description=description,
return_direct=return_direct,
args_schema=schema,
infer_schema=infer_schema,
response_format=response_format,
parse_docstring=parse_docstring,
error_on_invalid_docstring=error_on_invalid_docstring,
)
# If someone doesn't want a schema applied, we must treat it as
# a simple string->string function
if dec_func.__doc__ is None:
msg = (
"Function must have a docstring if "
"description not provided and infer_schema is False."
)
raise ValueError(msg)
return Tool(
name=tool_name,
func=func,
description=f"{tool_name} tool",
return_direct=return_direct,
coroutine=coroutine,
response_format=response_format,
)
return _make_tool
if len(args) == 2 and isinstance(args[0], str) and isinstance(args[1], Runnable):
return _make_with_name(args[0])(args[1])
elif len(args) == 1 and isinstance(args[0], str):
# if the argument is a string, then we use the string as the tool name
# Example usage: @tool("search", return_direct=True)
return _make_with_name(args[0])
elif len(args) == 1 and callable(args[0]):
# if the argument is a function, then we use the function name as the tool name
# Example usage: @tool
return _make_with_name(args[0].__name__)(args[0])
elif len(args) == 0:
# if there are no arguments, then we use the function name as the tool name
# Example usage: @tool(return_direct=True)
def _partial(func: Callable[[str], str]) -> BaseTool:
return _make_with_name(func.__name__)(func)
return _partial
else:
msg = "Too many arguments for tool decorator"
raise ValueError(msg)
def _get_description_from_runnable(runnable: Runnable) -> str:
"""Generate a placeholder description of a runnable."""
input_schema = runnable.input_schema.model_json_schema()
return f"Takes {input_schema}."
def _get_schema_from_runnable_and_arg_types(
runnable: Runnable,
name: str,
arg_types: Optional[dict[str, type]] = None,
) -> type[BaseModel]:
"""Infer args_schema for tool."""
if arg_types is None:
try:
arg_types = get_type_hints(runnable.InputType)
except TypeError as e:
msg = (
"Tool input must be str or dict. If dict, dict arguments must be "
"typed. Either annotate types (e.g., with TypedDict) or pass "
f"arg_types into `.as_tool` to specify. {str(e)}"
)
raise TypeError(msg) from e
fields = {key: (key_type, Field(...)) for key, key_type in arg_types.items()}
return create_model(name, **fields) # type: ignore
| |
153767
|
from __future__ import annotations
from functools import partial
from typing import Optional
from pydantic import BaseModel, Field
from langchain_core.callbacks import Callbacks
from langchain_core.prompts import (
BasePromptTemplate,
PromptTemplate,
aformat_document,
format_document,
)
from langchain_core.retrievers import BaseRetriever
from langchain_core.tools.simple import Tool
class RetrieverInput(BaseModel):
"""Input to the retriever."""
query: str = Field(description="query to look up in retriever")
def _get_relevant_documents(
query: str,
retriever: BaseRetriever,
document_prompt: BasePromptTemplate,
document_separator: str,
callbacks: Callbacks = None,
) -> str:
docs = retriever.invoke(query, config={"callbacks": callbacks})
return document_separator.join(
format_document(doc, document_prompt) for doc in docs
)
async def _aget_relevant_documents(
query: str,
retriever: BaseRetriever,
document_prompt: BasePromptTemplate,
document_separator: str,
callbacks: Callbacks = None,
) -> str:
docs = await retriever.ainvoke(query, config={"callbacks": callbacks})
return document_separator.join(
[await aformat_document(doc, document_prompt) for doc in docs]
)
def create_retriever_tool(
retriever: BaseRetriever,
name: str,
description: str,
*,
document_prompt: Optional[BasePromptTemplate] = None,
document_separator: str = "\n\n",
) -> Tool:
"""Create a tool to do retrieval of documents.
Args:
retriever: The retriever to use for the retrieval
name: The name for the tool. This will be passed to the language model,
so should be unique and somewhat descriptive.
description: The description for the tool. This will be passed to the language
model, so should be descriptive.
document_prompt: The prompt to use for the document. Defaults to None.
document_separator: The separator to use between documents. Defaults to "\n\n".
Returns:
Tool class to pass to an agent.
"""
document_prompt = document_prompt or PromptTemplate.from_template("{page_content}")
func = partial(
_get_relevant_documents,
retriever=retriever,
document_prompt=document_prompt,
document_separator=document_separator,
)
afunc = partial(
_aget_relevant_documents,
retriever=retriever,
document_prompt=document_prompt,
document_separator=document_separator,
)
return Tool(
name=name,
description=description,
func=func,
coroutine=afunc,
args_schema=RetrieverInput,
)
| |
153769
|
from __future__ import annotations
import textwrap
from collections.abc import Awaitable
from inspect import signature
from typing import (
Annotated,
Any,
Callable,
Literal,
Optional,
Union,
)
from pydantic import BaseModel, Field, SkipValidation
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.messages import ToolCall
from langchain_core.runnables import RunnableConfig, run_in_executor
from langchain_core.tools.base import (
FILTERED_ARGS,
BaseTool,
_get_runnable_config_param,
create_schema_from_function,
)
from langchain_core.utils.pydantic import TypeBaseModel
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Annotated[TypeBaseModel, SkipValidation()] = Field(
..., description="The tool schema."
)
"""The input arguments' schema."""
func: Optional[Callable[..., Any]] = None
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
# TODO: Is this needed?
async def ainvoke(
self,
input: Union[str, dict, ToolCall],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await run_in_executor(config, self.invoke, input, config, **kwargs)
return await super().ainvoke(input, config, **kwargs)
# --- Tool ---
@property
def args(self) -> dict:
"""The tool's input arguments."""
return self.args_schema.model_json_schema()["properties"]
def _run(
self,
*args: Any,
config: RunnableConfig,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
if self.func:
if run_manager and signature(self.func).parameters.get("callbacks"):
kwargs["callbacks"] = run_manager.get_child()
if config_param := _get_runnable_config_param(self.func):
kwargs[config_param] = config
return self.func(*args, **kwargs)
msg = "StructuredTool does not support sync invocation."
raise NotImplementedError(msg)
async def _arun(
self,
*args: Any,
config: RunnableConfig,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
if run_manager and signature(self.coroutine).parameters.get("callbacks"):
kwargs["callbacks"] = run_manager.get_child()
if config_param := _get_runnable_config_param(self.coroutine):
kwargs[config_param] = config
return await self.coroutine(*args, **kwargs)
# If self.coroutine is None, then this will delegate to the default
# implementation which is expected to delegate to _run on a separate thread.
return await super()._arun(
*args, config=config, run_manager=run_manager, **kwargs
)
@classmethod
def from_function(
cls,
func: Optional[Callable] = None,
coroutine: Optional[Callable[..., Awaitable[Any]]] = None,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
args_schema: Optional[type[BaseModel]] = None,
infer_schema: bool = True,
*,
response_format: Literal["content", "content_and_artifact"] = "content",
parse_docstring: bool = False,
error_on_invalid_docstring: bool = False,
**kwargs: Any,
) -> StructuredTool:
"""Create tool from a given function.
A classmethod that helps to create a tool from a function.
Args:
func: The function from which to create a tool.
coroutine: The async function from which to create a tool.
name: The name of the tool. Defaults to the function name.
description: The description of the tool.
Defaults to the function docstring.
return_direct: Whether to return the result directly or as a callback.
Defaults to False.
args_schema: The schema of the tool's input arguments. Defaults to None.
infer_schema: Whether to infer the schema from the function's signature.
Defaults to True.
response_format: The tool response format. If "content" then the output of
the tool is interpreted as the contents of a ToolMessage. If
"content_and_artifact" then the output is expected to be a two-tuple
corresponding to the (content, artifact) of a ToolMessage.
Defaults to "content".
parse_docstring: if ``infer_schema`` and ``parse_docstring``, will attempt
to parse parameter descriptions from Google Style function docstrings.
Defaults to False.
error_on_invalid_docstring: if ``parse_docstring`` is provided, configure
whether to raise ValueError on invalid Google Style docstrings.
Defaults to False.
kwargs: Additional arguments to pass to the tool
Returns:
The tool.
Raises:
ValueError: If the function is not provided.
Examples:
.. code-block:: python
def add(a: int, b: int) -> int:
\"\"\"Add two numbers\"\"\"
return a + b
tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
"""
if func is not None:
source_function = func
elif coroutine is not None:
source_function = coroutine
else:
msg = "Function and/or coroutine must be provided"
raise ValueError(msg)
name = name or source_function.__name__
if args_schema is None and infer_schema:
# schema name is appended within function
args_schema = create_schema_from_function(
name,
source_function,
parse_docstring=parse_docstring,
error_on_invalid_docstring=error_on_invalid_docstring,
filter_args=_filter_schema_args(source_function),
)
description_ = description
if description is None and not parse_docstring:
description_ = source_function.__doc__ or None
if description_ is None and args_schema:
description_ = args_schema.__doc__ or None
if description_ is None:
msg = "Function must have a docstring if description not provided."
raise ValueError(msg)
if description is None:
# Only apply if using the function's docstring
description_ = textwrap.dedent(description_).strip()
# Description example:
# search_api(query: str) - Searches the API for the query.
description_ = f"{description_.strip()}"
return cls(
name=name,
func=func,
coroutine=coroutine,
args_schema=args_schema, # type: ignore[arg-type]
description=description_,
return_direct=return_direct,
response_format=response_format,
**kwargs,
)
def _filter_schema_args(func: Callable) -> list[str]:
filter_args = list(FILTERED_ARGS)
if config_param := _get_runnable_config_param(func):
filter_args.append(config_param)
# filter_args.extend(_get_non_model_params(type_hints))
return filter_args
| |
153771
|
from __future__ import annotations
import asyncio
import functools
import inspect
import json
import uuid
import warnings
from abc import ABC, abstractmethod
from collections.abc import Sequence
from contextvars import copy_context
from inspect import signature
from typing import (
Annotated,
Any,
Callable,
Literal,
Optional,
TypeVar,
Union,
cast,
get_args,
get_origin,
get_type_hints,
)
from pydantic import (
BaseModel,
ConfigDict,
Field,
PydanticDeprecationWarning,
SkipValidation,
ValidationError,
model_validator,
validate_arguments,
)
from pydantic.v1 import BaseModel as BaseModelV1
from pydantic.v1 import ValidationError as ValidationErrorV1
from pydantic.v1 import validate_arguments as validate_arguments_v1
from langchain_core._api import deprecated
from langchain_core.callbacks import (
AsyncCallbackManager,
BaseCallbackManager,
CallbackManager,
Callbacks,
)
from langchain_core.messages.tool import ToolCall, ToolMessage
from langchain_core.runnables import (
RunnableConfig,
RunnableSerializable,
ensure_config,
patch_config,
run_in_executor,
)
from langchain_core.runnables.config import _set_config_context
from langchain_core.runnables.utils import asyncio_accepts_context
from langchain_core.utils.function_calling import (
_parse_google_docstring,
_py_38_safe_origin,
)
from langchain_core.utils.pydantic import (
TypeBaseModel,
_create_subset_model,
get_fields,
is_basemodel_subclass,
is_pydantic_v1_subclass,
is_pydantic_v2_subclass,
)
FILTERED_ARGS = ("run_manager", "callbacks")
class SchemaAnnotationError(TypeError):
"""Raised when 'args_schema' is missing or has an incorrect type annotation."""
def _is_annotated_type(typ: type[Any]) -> bool:
return get_origin(typ) is Annotated
def _get_annotation_description(arg_type: type) -> str | None:
if _is_annotated_type(arg_type):
annotated_args = get_args(arg_type)
for annotation in annotated_args[1:]:
if isinstance(annotation, str):
return annotation
return None
def _get_filtered_args(
inferred_model: type[BaseModel],
func: Callable,
*,
filter_args: Sequence[str],
include_injected: bool = True,
) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.model_json_schema()["properties"]
valid_keys = signature(func).parameters
return {
k: schema[k]
for i, (k, param) in enumerate(valid_keys.items())
if k not in filter_args
and (i > 0 or param.name not in ("self", "cls"))
and (include_injected or not _is_injected_arg_type(param.annotation))
}
def _parse_python_function_docstring(
function: Callable, annotations: dict, error_on_invalid_docstring: bool = False
) -> tuple[str, dict]:
"""Parse the function and argument descriptions from the docstring of a function.
Assumes the function docstring follows Google Python style guide.
"""
docstring = inspect.getdoc(function)
return _parse_google_docstring(
docstring,
list(annotations),
error_on_invalid_docstring=error_on_invalid_docstring,
)
def _validate_docstring_args_against_annotations(
arg_descriptions: dict, annotations: dict
) -> None:
"""Raise error if docstring arg is not in type annotations."""
for docstring_arg in arg_descriptions:
if docstring_arg not in annotations:
msg = f"Arg {docstring_arg} in docstring not found in function signature."
raise ValueError(msg)
def _infer_arg_descriptions(
fn: Callable,
*,
parse_docstring: bool = False,
error_on_invalid_docstring: bool = False,
) -> tuple[str, dict]:
"""Infer argument descriptions from a function's docstring."""
if hasattr(inspect, "get_annotations"):
# This is for python < 3.10
annotations = inspect.get_annotations(fn) # type: ignore
else:
annotations = getattr(fn, "__annotations__", {})
if parse_docstring:
description, arg_descriptions = _parse_python_function_docstring(
fn, annotations, error_on_invalid_docstring=error_on_invalid_docstring
)
else:
description = inspect.getdoc(fn) or ""
arg_descriptions = {}
if parse_docstring:
_validate_docstring_args_against_annotations(arg_descriptions, annotations)
for arg, arg_type in annotations.items():
if arg in arg_descriptions:
continue
if desc := _get_annotation_description(arg_type):
arg_descriptions[arg] = desc
return description, arg_descriptions
def _is_pydantic_annotation(annotation: Any, pydantic_version: str = "v2") -> bool:
"""Determine if a type annotation is a Pydantic model."""
base_model_class = BaseModelV1 if pydantic_version == "v1" else BaseModel
try:
return issubclass(annotation, base_model_class)
except TypeError:
return False
def _function_annotations_are_pydantic_v1(
signature: inspect.Signature, func: Callable
) -> bool:
"""Determine if all Pydantic annotations in a function signature are from V1."""
any_v1_annotations = any(
_is_pydantic_annotation(parameter.annotation, pydantic_version="v1")
for parameter in signature.parameters.values()
)
any_v2_annotations = any(
_is_pydantic_annotation(parameter.annotation, pydantic_version="v2")
for parameter in signature.parameters.values()
)
if any_v1_annotations and any_v2_annotations:
msg = (
f"Function {func} contains a mix of Pydantic v1 and v2 annotations. "
"Only one version of Pydantic annotations per function is supported."
)
raise NotImplementedError(msg)
return any_v1_annotations and not any_v2_annotations
class _SchemaConfig:
"""Configuration for the pydantic model.
This is used to configure the pydantic model created from
a function's signature.
Parameters:
extra: Whether to allow extra fields in the model.
arbitrary_types_allowed: Whether to allow arbitrary types in the model.
Defaults to True.
"""
extra: str = "forbid"
arbitrary_types_allowed: bool = True
| |
153773
|
class BaseTool(RunnableSerializable[Union[str, dict, ToolCall], Any]):
"""Interface LangChain tools must implement."""
def __init_subclass__(cls, **kwargs: Any) -> None:
"""Create the definition of the new tool class."""
super().__init_subclass__(**kwargs)
args_schema_type = cls.__annotations__.get("args_schema", None)
if args_schema_type is not None and args_schema_type == BaseModel:
# Throw errors for common mis-annotations.
# TODO: Use get_args / get_origin and fully
# specify valid annotations.
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
name = cls.__name__
msg = (
f"Tool definition for {name} must include valid type annotations"
f" for argument 'args_schema' to behave as expected.\n"
f"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
f"Expected class looks like:\n"
f"{typehint_mandate}"
)
raise SchemaAnnotationError(msg)
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
"""
args_schema: Annotated[Optional[TypeBaseModel], SkipValidation()] = Field(
default=None, description="The tool schema."
)
"""Pydantic model class to validate and parse the tool's input arguments.
Args schema should be either:
- A subclass of pydantic.BaseModel.
or
- A subclass of pydantic.v1.BaseModel if accessing v1 namespace in pydantic 2
"""
return_direct: bool = False
"""Whether to return the tool's output directly.
Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to be called during tool execution."""
callback_manager: Optional[BaseCallbackManager] = deprecated(
name="callback_manager", since="0.1.7", removal="1.0", alternative="callbacks"
)(
Field(
default=None,
exclude=True,
description="Callback manager to add to the run trace.",
)
)
tags: Optional[list[str]] = None
"""Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
metadata: Optional[dict[str, Any]] = None
"""Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = (
False
)
"""Handle the content of the ToolException thrown."""
handle_validation_error: Optional[
Union[bool, str, Callable[[Union[ValidationError, ValidationErrorV1]], str]]
] = False
"""Handle the content of the ValidationError thrown."""
response_format: Literal["content", "content_and_artifact"] = "content"
"""The tool response format. Defaults to 'content'.
If "content" then the output of the tool is interpreted as the contents of a
ToolMessage. If "content_and_artifact" then the output is expected to be a
two-tuple corresponding to the (content, artifact) of a ToolMessage.
"""
def __init__(self, **kwargs: Any) -> None:
"""Initialize the tool."""
if (
"args_schema" in kwargs
and kwargs["args_schema"] is not None
and not is_basemodel_subclass(kwargs["args_schema"])
):
msg = (
f"args_schema must be a subclass of pydantic BaseModel. "
f"Got: {kwargs['args_schema']}."
)
raise TypeError(msg)
super().__init__(**kwargs)
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@property
def is_single_input(self) -> bool:
"""Whether the tool only accepts a single input."""
keys = {k for k in self.args if k != "kwargs"}
return len(keys) == 1
@property
def args(self) -> dict:
return self.get_input_schema().model_json_schema()["properties"]
@property
def tool_call_schema(self) -> type[BaseModel]:
full_schema = self.get_input_schema()
fields = []
for name, type_ in _get_all_basemodel_annotations(full_schema).items():
if not _is_injected_arg_type(type_):
fields.append(name)
return _create_subset_model(
self.name, full_schema, fields, fn_description=self.description
)
# --- Runnable ---
def get_input_schema(
self, config: Optional[RunnableConfig] = None
) -> type[BaseModel]:
"""The tool's input schema.
Args:
config: The configuration for the tool.
Returns:
The input schema for the tool.
"""
if self.args_schema is not None:
return self.args_schema
else:
return create_schema_from_function(self.name, self._run)
def invoke(
self,
input: Union[str, dict, ToolCall],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
tool_input, kwargs = _prep_run_args(input, config, **kwargs)
return self.run(tool_input, **kwargs)
async def ainvoke(
self,
input: Union[str, dict, ToolCall],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
tool_input, kwargs = _prep_run_args(input, config, **kwargs)
return await self.arun(tool_input, **kwargs)
# --- Tool ---
def _parse_input(self, tool_input: Union[str, dict]) -> Union[str, dict[str, Any]]:
"""Convert tool input to a pydantic model.
Args:
tool_input: The input to the tool.
"""
input_args = self.args_schema
if isinstance(tool_input, str):
if input_args is not None:
key_ = next(iter(get_fields(input_args).keys()))
if hasattr(input_args, "model_validate"):
input_args.model_validate({key_: tool_input})
else:
input_args.parse_obj({key_: tool_input})
return tool_input
else:
if input_args is not None:
if issubclass(input_args, BaseModel):
result = input_args.model_validate(tool_input)
result_dict = result.model_dump()
elif issubclass(input_args, BaseModelV1):
result = input_args.parse_obj(tool_input)
result_dict = result.dict()
else:
msg = (
"args_schema must be a Pydantic BaseModel, "
f"got {self.args_schema}"
)
raise NotImplementedError(msg)
return {
k: getattr(result, k)
for k, v in result_dict.items()
if k in tool_input
}
return tool_input
@model_validator(mode="before")
@classmethod
def raise_deprecation(cls, values: dict) -> Any:
"""Raise deprecation warning if callback_manager is used.
Args:
values: The values to validate.
Returns:
The validated values.
"""
if values.get("callback_manager") is not None:
warnings.warn(
"callback_manager is deprecated. Please use callbacks instead.",
DeprecationWarning,
stacklevel=6,
)
values["callbacks"] = values.pop("callback_manager", None)
return values
@abstractmethod
def _run(self, *args: Any, **kwargs: Any) -> Any:
"""Use the tool.
Add run_manager: Optional[CallbackManagerForToolRun] = None
to child implementations to enable tracing.
"""
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
"""Use the tool asynchronously.
Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None
to child implementations to enable tracing.
"""
if kwargs.get("run_manager") and signature(self._run).parameters.get(
"run_manager"
):
kwargs["run_manager"] = kwargs["run_manager"].get_sync()
return await run_in_executor(None, self._run, *args, **kwargs)
| |
153798
|
"""**Embeddings** interface."""
from abc import ABC, abstractmethod
from langchain_core.runnables.config import run_in_executor
class Embeddings(ABC):
"""Interface for embedding models.
This is an interface meant for implementing text embedding models.
Text embedding models are used to map text to a vector (a point in n-dimensional
space).
Texts that are similar will usually be mapped to points that are close to each
other in this space. The exact details of what's considered "similar" and how
"distance" is measured in this space are dependent on the specific embedding model.
This abstraction contains a method for embedding a list of documents and a method
for embedding a query text. The embedding of a query text is expected to be a single
vector, while the embedding of a list of documents is expected to be a list of
vectors.
Usually the query embedding is identical to the document embedding, but the
abstraction allows treating them independently.
In addition to the synchronous methods, this interface also provides asynchronous
versions of the methods.
By default, the asynchronous methods are implemented using the synchronous methods;
however, implementations may choose to override the asynchronous methods with
an async native implementation for performance reasons.
"""
@abstractmethod
def embed_documents(self, texts: list[str]) -> list[list[float]]:
"""Embed search docs.
Args:
texts: List of text to embed.
Returns:
List of embeddings.
"""
@abstractmethod
def embed_query(self, text: str) -> list[float]:
"""Embed query text.
Args:
text: Text to embed.
Returns:
Embedding.
"""
async def aembed_documents(self, texts: list[str]) -> list[list[float]]:
"""Asynchronous Embed search docs.
Args:
texts: List of text to embed.
Returns:
List of embeddings.
"""
return await run_in_executor(None, self.embed_documents, texts)
async def aembed_query(self, text: str) -> list[float]:
"""Asynchronous Embed query text.
Args:
text: Text to embed.
Returns:
Embedding.
"""
return await run_in_executor(None, self.embed_query, text)
| |
153810
|
class RunManager(BaseRunManager):
"""Sync Run Manager."""
def on_text(
self,
text: str,
**kwargs: Any,
) -> Any:
"""Run when a text is received.
Args:
text (str): The received text.
**kwargs (Any): Additional keyword arguments.
Returns:
Any: The result of the callback.
"""
handle_event(
self.handlers,
"on_text",
None,
text,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
def on_retry(
self,
retry_state: RetryCallState,
**kwargs: Any,
) -> None:
"""Run when a retry is received.
Args:
retry_state (RetryCallState): The retry state.
**kwargs (Any): Additional keyword arguments.
"""
handle_event(
self.handlers,
"on_retry",
"ignore_retry",
retry_state,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
class ParentRunManager(RunManager):
"""Sync Parent Run Manager."""
def get_child(self, tag: Optional[str] = None) -> CallbackManager:
"""Get a child callback manager.
Args:
tag (str, optional): The tag for the child callback manager.
Defaults to None.
Returns:
CallbackManager: The child callback manager.
"""
manager = CallbackManager(handlers=[], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
manager.add_tags(self.inheritable_tags)
manager.add_metadata(self.inheritable_metadata)
if tag is not None:
manager.add_tags([tag], False)
return manager
class AsyncRunManager(BaseRunManager, ABC):
"""Async Run Manager."""
@abstractmethod
def get_sync(self) -> RunManager:
"""Get the equivalent sync RunManager.
Returns:
RunManager: The sync RunManager.
"""
async def on_text(
self,
text: str,
**kwargs: Any,
) -> Any:
"""Run when a text is received.
Args:
text (str): The received text.
**kwargs (Any): Additional keyword arguments.
Returns:
Any: The result of the callback.
"""
await ahandle_event(
self.handlers,
"on_text",
None,
text,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
async def on_retry(
self,
retry_state: RetryCallState,
**kwargs: Any,
) -> None:
"""Async run when a retry is received.
Args:
retry_state (RetryCallState): The retry state.
**kwargs (Any): Additional keyword arguments.
"""
await ahandle_event(
self.handlers,
"on_retry",
"ignore_retry",
retry_state,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
class AsyncParentRunManager(AsyncRunManager):
"""Async Parent Run Manager."""
def get_child(self, tag: Optional[str] = None) -> AsyncCallbackManager:
"""Get a child callback manager.
Args:
tag (str, optional): The tag for the child callback manager.
Defaults to None.
Returns:
AsyncCallbackManager: The child callback manager.
"""
manager = AsyncCallbackManager(handlers=[], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
manager.add_tags(self.inheritable_tags)
manager.add_metadata(self.inheritable_metadata)
if tag is not None:
manager.add_tags([tag], False)
return manager
class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
"""Callback manager for LLM run."""
def on_llm_new_token(
self,
token: str,
*,
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM generates a new token.
Args:
token (str): The new token.
chunk (Optional[Union[GenerationChunk, ChatGenerationChunk]], optional):
The chunk. Defaults to None.
**kwargs (Any): Additional keyword arguments.
"""
handle_event(
self.handlers,
"on_llm_new_token",
"ignore_llm",
token=token,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
chunk=chunk,
**kwargs,
)
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running.
Args:
response (LLMResult): The LLM result.
**kwargs (Any): Additional keyword arguments.
"""
handle_event(
self.handlers,
"on_llm_end",
"ignore_llm",
response,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
def on_llm_error(
self,
error: BaseException,
**kwargs: Any,
) -> None:
"""Run when LLM errors.
Args:
error (Exception or KeyboardInterrupt): The error.
kwargs (Any): Additional keyword arguments.
- response (LLMResult): The response which was generated before
the error occurred.
"""
handle_event(
self.handlers,
"on_llm_error",
"ignore_llm",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
"""Async callback manager for LLM run."""
def get_sync(self) -> CallbackManagerForLLMRun:
"""Get the equivalent sync RunManager.
Returns:
CallbackManagerForLLMRun: The sync RunManager.
"""
return CallbackManagerForLLMRun(
run_id=self.run_id,
handlers=self.handlers,
inheritable_handlers=self.inheritable_handlers,
parent_run_id=self.parent_run_id,
tags=self.tags,
inheritable_tags=self.inheritable_tags,
metadata=self.metadata,
inheritable_metadata=self.inheritable_metadata,
)
@shielded
async def on_llm_new_token(
self,
token: str,
*,
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM generates a new token.
Args:
token (str): The new token.
chunk (Optional[Union[GenerationChunk, ChatGenerationChunk]], optional):
The chunk. Defaults to None.
**kwargs (Any): Additional keyword arguments.
"""
await ahandle_event(
self.handlers,
"on_llm_new_token",
"ignore_llm",
token,
chunk=chunk,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
@shielded
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running.
Args:
response (LLMResult): The LLM result.
**kwargs (Any): Additional keyword arguments.
"""
await ahandle_event(
self.handlers,
"on_llm_end",
"ignore_llm",
response,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
@shielded
async def on_llm_error(
self,
error: BaseException,
**kwargs: Any,
) -> None:
"""Run when LLM errors.
Args:
error (Exception or KeyboardInterrupt): The error.
kwargs (Any): Additional keyword arguments.
- response (LLMResult): The response which was generated before
the error occurred.
"""
await ahandle_event(
self.handlers,
"on_llm_error",
"ignore_llm",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
tags=self.tags,
**kwargs,
)
| |
153821
|
class AsyncCallbackHandler(BaseCallbackHandler):
"""Async callback handler for LangChain."""
async def on_llm_start(
self,
serialized: dict[str, Any],
prompts: list[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
metadata: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM starts running.
**ATTENTION**: This method is called for non-chat models (regular LLMs). If
you're implementing a handler for a chat model,
you should use on_chat_model_start instead.
Args:
serialized (Dict[str, Any]): The serialized LLM.
prompts (List[str]): The prompts.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
metadata (Optional[Dict[str, Any]]): The metadata.
kwargs (Any): Additional keyword arguments.
"""
async def on_chat_model_start(
self,
serialized: dict[str, Any],
messages: list[list[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
metadata: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when a chat model starts running.
**ATTENTION**: This method is called for chat models. If you're implementing
a handler for a non-chat model, you should use on_llm_start instead.
Args:
serialized (Dict[str, Any]): The serialized chat model.
messages (List[List[BaseMessage]]): The messages.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
metadata (Optional[Dict[str, Any]]): The metadata.
kwargs (Any): Additional keyword arguments.
"""
# NotImplementedError is thrown intentionally
# Callback handler will fall back to on_llm_start if this is exception is thrown
msg = f"{self.__class__.__name__} does not implement `on_chat_model_start`"
raise NotImplementedError(msg)
async def on_llm_new_token(
self,
token: str,
*,
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run on new LLM token. Only available when streaming is enabled.
Args:
token (str): The new token.
chunk (GenerationChunk | ChatGenerationChunk): The new generated chunk,
containing content and other information.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
kwargs (Any): Additional keyword arguments.
"""
async def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM ends running.
Args:
response (LLMResult): The response which was generated.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
kwargs (Any): Additional keyword arguments.
"""
async def on_llm_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM errors.
Args:
error: The error that occurred.
run_id: The run ID. This is the ID of the current run.
parent_run_id: The parent run ID. This is the ID of the parent run.
tags: The tags.
kwargs (Any): Additional keyword arguments.
- response (LLMResult): The response which was generated before
the error occurred.
"""
async def on_chain_start(
self,
serialized: dict[str, Any],
inputs: dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
metadata: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when a chain starts running.
Args:
serialized (Dict[str, Any]): The serialized chain.
inputs (Dict[str, Any]): The inputs.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
metadata (Optional[Dict[str, Any]]): The metadata.
kwargs (Any): Additional keyword arguments.
"""
async def on_chain_end(
self,
outputs: dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run when a chain ends running.
Args:
outputs (Dict[str, Any]): The outputs of the chain.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
kwargs (Any): Additional keyword arguments.
"""
async def on_chain_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run when chain errors.
Args:
error (BaseException): The error that occurred.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
kwargs (Any): Additional keyword arguments.
"""
async def on_tool_start(
self,
serialized: dict[str, Any],
input_str: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
metadata: Optional[dict[str, Any]] = None,
inputs: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when the tool starts running.
Args:
serialized (Dict[str, Any]): The serialized tool.
input_str (str): The input string.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
metadata (Optional[Dict[str, Any]]): The metadata.
inputs (Optional[Dict[str, Any]]): The inputs.
kwargs (Any): Additional keyword arguments.
"""
async def on_tool_end(
self,
output: Any,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[list[str]] = None,
**kwargs: Any,
) -> None:
"""Run when the tool ends running.
Args:
output (Any): The output of the tool.
run_id (UUID): The run ID. This is the ID of the current run.
parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]): The tags.
kwargs (Any): Additional keyword arguments.
"""
| |
153826
|
"""Helper functions for deprecating parts of the LangChain API.
This module was adapted from matplotlibs _api/deprecation.py module:
https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/_api/deprecation.py
.. warning::
This module is for internal use only. Do not use it in your own code.
We may change the API at any time with no warning.
"""
import contextlib
import functools
import inspect
import warnings
from collections.abc import Generator
from typing import (
Any,
Callable,
TypeVar,
Union,
cast,
)
from typing_extensions import ParamSpec
from langchain_core._api.internal import is_caller_internal
class LangChainDeprecationWarning(DeprecationWarning):
"""A class for issuing deprecation warnings for LangChain users."""
class LangChainPendingDeprecationWarning(PendingDeprecationWarning):
"""A class for issuing deprecation warnings for LangChain users."""
# PUBLIC API
# Last Any should be FieldInfoV1 but this leads to circular imports
T = TypeVar("T", bound=Union[type, Callable[..., Any], Any])
def _validate_deprecation_params(
pending: bool,
removal: str,
alternative: str,
alternative_import: str,
) -> None:
"""Validate the deprecation parameters."""
if pending and removal:
msg = "A pending deprecation cannot have a scheduled removal"
raise ValueError(msg)
if alternative and alternative_import:
msg = "Cannot specify both alternative and alternative_import"
raise ValueError(msg)
if alternative_import and "." not in alternative_import:
msg = (
"alternative_import must be a fully qualified module path. Got "
f" {alternative_import}"
)
raise ValueError(msg)
def deprecated(
since: str,
*,
message: str = "",
name: str = "",
alternative: str = "",
alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",
removal: str = "",
package: str = "",
) -> Callable[[T], T]:
"""Decorator to mark a function, a class, or a property as deprecated.
When deprecating a classmethod, a staticmethod, or a property, the
``@deprecated`` decorator should go *under* ``@classmethod`` and
``@staticmethod`` (i.e., `deprecated` should directly decorate the
underlying callable), but *over* ``@property``.
When deprecating a class ``C`` intended to be used as a base class in a
multiple inheritance hierarchy, ``C`` *must* define an ``__init__`` method
(if ``C`` instead inherited its ``__init__`` from its own base class, then
``@deprecated`` would mess up ``__init__`` inheritance when installing its
own (deprecation-emitting) ``C.__init__``).
Parameters are the same as for `warn_deprecated`, except that *obj_type*
defaults to 'class' if decorating a class, 'attribute' if decorating a
property, and 'function' otherwise.
Arguments:
since : str
The release at which this API became deprecated.
message : str, optional
Override the default deprecation message. The %(since)s,
%(name)s, %(alternative)s, %(obj_type)s, %(addendum)s,
and %(removal)s format specifiers will be replaced by the
values of the respective arguments passed to this function.
name : str, optional
The name of the deprecated object.
alternative : str, optional
An alternative API that the user may use in place of the
deprecated API. The deprecation warning will tell the user
about this alternative if provided.
pending : bool, optional
If True, uses a PendingDeprecationWarning instead of a
DeprecationWarning. Cannot be used together with removal.
obj_type : str, optional
The object type being deprecated.
addendum : str, optional
Additional text appended directly to the final message.
removal : str, optional
The expected removal version. With the default (an empty
string), a removal version is automatically computed from
since. Set to other Falsy values to not schedule a removal
date. Cannot be used together with pending.
Examples
--------
.. code-block:: python
@deprecated('1.4.0')
def the_function_to_deprecate():
pass
"""
_validate_deprecation_params(pending, removal, alternative, alternative_import)
| |
153828
|
def warn_deprecated(
since: str,
*,
message: str = "",
name: str = "",
alternative: str = "",
alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",
removal: str = "",
package: str = "",
) -> None:
"""Display a standardized deprecation.
Arguments:
since : str
The release at which this API became deprecated.
message : str, optional
Override the default deprecation message. The %(since)s,
%(name)s, %(alternative)s, %(obj_type)s, %(addendum)s,
and %(removal)s format specifiers will be replaced by the
values of the respective arguments passed to this function.
name : str, optional
The name of the deprecated object.
alternative : str, optional
An alternative API that the user may use in place of the
deprecated API. The deprecation warning will tell the user
about this alternative if provided.
pending : bool, optional
If True, uses a PendingDeprecationWarning instead of a
DeprecationWarning. Cannot be used together with removal.
obj_type : str, optional
The object type being deprecated.
addendum : str, optional
Additional text appended directly to the final message.
removal : str, optional
The expected removal version. With the default (an empty
string), a removal version is automatically computed from
since. Set to other Falsy values to not schedule a removal
date. Cannot be used together with pending.
"""
if not pending:
if not removal:
removal = f"in {removal}" if removal else "within ?? minor releases"
msg = (
f"Need to determine which default deprecation schedule to use. "
f"{removal}"
)
raise NotImplementedError(msg)
else:
removal = f"in {removal}"
if not message:
message = ""
_package = (
package or name.split(".")[0].replace("_", "-")
if "." in name
else "LangChain"
)
if obj_type:
message += f"The {obj_type} `{name}`"
else:
message += f"`{name}`"
if pending:
message += " will be deprecated in a future version"
else:
message += f" was deprecated in {_package} {since}"
if removal:
message += f" and will be removed {removal}"
if alternative_import:
alt_package = alternative_import.split(".")[0].replace("_", "-")
if alt_package == _package:
message += f". Use {alternative_import} instead."
else:
alt_module, alt_name = alternative_import.rsplit(".", 1)
message += (
f". An updated version of the {obj_type} exists in the "
f"{alt_package} package and should be used instead. To use it run "
f"`pip install -U {alt_package}` and import as "
f"`from {alt_module} import {alt_name}`."
)
elif alternative:
message += f". Use {alternative} instead."
if addendum:
message += f" {addendum}"
warning_cls = (
LangChainPendingDeprecationWarning if pending else LangChainDeprecationWarning
)
warning = warning_cls(message)
warnings.warn(warning, category=LangChainDeprecationWarning, stacklevel=4)
def surface_langchain_deprecation_warnings() -> None:
"""Unmute LangChain deprecation warnings."""
warnings.filterwarnings(
"default",
category=LangChainPendingDeprecationWarning,
)
warnings.filterwarnings(
"default",
category=LangChainDeprecationWarning,
)
_P = ParamSpec("_P")
_R = TypeVar("_R")
def rename_parameter(
*,
since: str,
removal: str,
old: str,
new: str,
) -> Callable[[Callable[_P, _R]], Callable[_P, _R]]:
"""Decorator indicating that parameter *old* of *func* is renamed to *new*.
The actual implementation of *func* should use *new*, not *old*. If *old*
is passed to *func*, a DeprecationWarning is emitted, and its value is
used, even if *new* is also passed by keyword.
Example:
.. code-block:: python
@_api.rename_parameter("3.1", "bad_name", "good_name")
def func(good_name): ...
"""
def decorator(f: Callable[_P, _R]) -> Callable[_P, _R]:
@functools.wraps(f)
def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> _R:
if new in kwargs and old in kwargs:
msg = f"{f.__name__}() got multiple values for argument {new!r}"
raise TypeError(msg)
if old in kwargs:
warn_deprecated(
since,
removal=removal,
message=f"The parameter `{old}` of `{f.__name__}` was "
f"deprecated in {since} and will be removed "
f"in {removal} Use `{new}` instead.",
)
kwargs[new] = kwargs.pop(old)
return f(*args, **kwargs)
return wrapper
return decorator
| |
153837
|
"""Utilities for tests."""
from __future__ import annotations
import inspect
import textwrap
import warnings
from contextlib import nullcontext
from functools import lru_cache, wraps
from types import GenericAlias
from typing import (
Any,
Callable,
Optional,
TypeVar,
Union,
cast,
overload,
)
import pydantic
from pydantic import (
BaseModel,
ConfigDict,
PydanticDeprecationWarning,
RootModel,
root_validator,
)
from pydantic import (
create_model as _create_model_base,
)
from pydantic.json_schema import (
DEFAULT_REF_TEMPLATE,
GenerateJsonSchema,
JsonSchemaMode,
JsonSchemaValue,
)
from pydantic_core import core_schema
def get_pydantic_major_version() -> int:
"""Get the major version of Pydantic."""
try:
import pydantic
return int(pydantic.__version__.split(".")[0])
except ImportError:
return 0
PYDANTIC_MAJOR_VERSION = get_pydantic_major_version()
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic.fields import FieldInfo as FieldInfoV1
PydanticBaseModel = pydantic.BaseModel
TypeBaseModel = type[BaseModel]
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic.v1.fields import FieldInfo as FieldInfoV1 # type: ignore[assignment]
# Union type needs to be last assignment to PydanticBaseModel to make mypy happy.
PydanticBaseModel = Union[BaseModel, pydantic.BaseModel] # type: ignore
TypeBaseModel = Union[type[BaseModel], type[pydantic.BaseModel]] # type: ignore
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
TBaseModel = TypeVar("TBaseModel", bound=PydanticBaseModel)
def is_pydantic_v1_subclass(cls: type) -> bool:
"""Check if the installed Pydantic version is 1.x-like."""
if PYDANTIC_MAJOR_VERSION == 1:
return True
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic.v1 import BaseModel as BaseModelV1
if issubclass(cls, BaseModelV1):
return True
return False
def is_pydantic_v2_subclass(cls: type) -> bool:
"""Check if the installed Pydantic version is 1.x-like."""
from pydantic import BaseModel
return PYDANTIC_MAJOR_VERSION == 2 and issubclass(cls, BaseModel)
def is_basemodel_subclass(cls: type) -> bool:
"""Check if the given class is a subclass of Pydantic BaseModel.
Check if the given class is a subclass of any of the following:
* pydantic.BaseModel in Pydantic 1.x
* pydantic.BaseModel in Pydantic 2.x
* pydantic.v1.BaseModel in Pydantic 2.x
"""
# Before we can use issubclass on the cls we need to check if it is a class
if not inspect.isclass(cls) or isinstance(cls, GenericAlias):
return False
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic import BaseModel as BaseModelV1Proper
if issubclass(cls, BaseModelV1Proper):
return True
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1
if issubclass(cls, BaseModelV2):
return True
if issubclass(cls, BaseModelV1):
return True
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
return False
def is_basemodel_instance(obj: Any) -> bool:
"""Check if the given class is an instance of Pydantic BaseModel.
Check if the given class is an instance of any of the following:
* pydantic.BaseModel in Pydantic 1.x
* pydantic.BaseModel in Pydantic 2.x
* pydantic.v1.BaseModel in Pydantic 2.x
"""
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic import BaseModel as BaseModelV1Proper
if isinstance(obj, BaseModelV1Proper):
return True
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic import BaseModel as BaseModelV2
from pydantic.v1 import BaseModel as BaseModelV1
if isinstance(obj, BaseModelV2):
return True
if isinstance(obj, BaseModelV1):
return True
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
return False
# How to type hint this?
def pre_init(func: Callable) -> Any:
"""Decorator to run a function before model initialization.
Args:
func (Callable): The function to run before model initialization.
Returns:
Any: The decorated function.
"""
with warnings.catch_warnings():
warnings.filterwarnings(action="ignore", category=PydanticDeprecationWarning)
@root_validator(pre=True)
@wraps(func)
def wrapper(cls: type[BaseModel], values: dict[str, Any]) -> dict[str, Any]:
"""Decorator to run a function before model initialization.
Args:
cls (Type[BaseModel]): The model class.
values (Dict[str, Any]): The values to initialize the model with.
Returns:
Dict[str, Any]: The values to initialize the model with.
"""
# Insert default values
fields = cls.model_fields
for name, field_info in fields.items():
# Check if allow_population_by_field_name is enabled
# If yes, then set the field name to the alias
if (
hasattr(cls, "Config")
and hasattr(cls.Config, "allow_population_by_field_name")
and cls.Config.allow_population_by_field_name
and field_info.alias in values
):
values[name] = values.pop(field_info.alias)
if (
hasattr(cls, "model_config")
and cls.model_config.get("populate_by_name")
and field_info.alias in values
):
values[name] = values.pop(field_info.alias)
if (
name not in values or values[name] is None
) and not field_info.is_required():
if field_info.default_factory is not None:
values[name] = field_info.default_factory()
else:
values[name] = field_info.default
# Call the decorated function
return func(cls, values)
return wrapper
class _IgnoreUnserializable(GenerateJsonSchema):
"""A JSON schema generator that ignores unknown types.
https://docs.pydantic.dev/latest/concepts/json_schema/#customizing-the-json-schema-generation-process
"""
def handle_invalid_for_json_schema(
self, schema: core_schema.CoreSchema, error_info: str
) -> JsonSchemaValue:
return {}
def _create_subset_model_v1(
name: str,
model: type[BaseModel],
field_names: list,
*,
descriptions: Optional[dict] = None,
fn_description: Optional[str] = None,
) -> type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
if PYDANTIC_MAJOR_VERSION == 1:
from pydantic import create_model
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic.v1 import create_model # type: ignore
else:
msg = f"Unsupported pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise NotImplementedError(msg)
fields = {}
for field_name in field_names:
# Using pydantic v1 so can access __fields__ as a dict.
field = model.__fields__[field_name] # type: ignore
t = (
# this isn't perfect but should work for most functions
field.outer_type_
if field.required and not field.allow_none
else Optional[field.outer_type_]
)
if descriptions and field_name in descriptions:
field.field_info.description = descriptions[field_name]
fields[field_name] = (t, field.field_info)
rtn = create_model(name, **fields) # type: ignore
rtn.__doc__ = textwrap.dedent(fn_description or model.__doc__ or "")
return rtn
| |
153838
|
def _create_subset_model_v2(
name: str,
model: type[pydantic.BaseModel],
field_names: list[str],
*,
descriptions: Optional[dict] = None,
fn_description: Optional[str] = None,
) -> type[pydantic.BaseModel]:
"""Create a pydantic model with a subset of the model fields."""
from pydantic import ConfigDict, create_model
from pydantic.fields import FieldInfo
descriptions_ = descriptions or {}
fields = {}
for field_name in field_names:
field = model.model_fields[field_name] # type: ignore
description = descriptions_.get(field_name, field.description)
field_info = FieldInfo(description=description, default=field.default)
if field.metadata:
field_info.metadata = field.metadata
fields[field_name] = (field.annotation, field_info)
rtn = create_model( # type: ignore
name, **fields, __config__=ConfigDict(arbitrary_types_allowed=True)
)
# TODO(0.3): Determine if there is a more "pydantic" way to preserve annotations.
# This is done to preserve __annotations__ when working with pydantic 2.x
# and using the Annotated type with TypedDict.
# Comment out the following line, to trigger the relevant test case.
selected_annotations = [
(name, annotation)
for name, annotation in model.__annotations__.items()
if name in field_names
]
rtn.__annotations__ = dict(selected_annotations)
rtn.__doc__ = textwrap.dedent(fn_description or model.__doc__ or "")
return rtn
# Private functionality to create a subset model that's compatible across
# different versions of pydantic.
# Handles pydantic versions 1.x and 2.x. including v1 of pydantic in 2.x.
# However, can't find a way to type hint this.
def _create_subset_model(
name: str,
model: TypeBaseModel,
field_names: list[str],
*,
descriptions: Optional[dict] = None,
fn_description: Optional[str] = None,
) -> type[BaseModel]:
"""Create subset model using the same pydantic version as the input model."""
if PYDANTIC_MAJOR_VERSION == 1:
return _create_subset_model_v1(
name,
model,
field_names,
descriptions=descriptions,
fn_description=fn_description,
)
elif PYDANTIC_MAJOR_VERSION == 2:
from pydantic.v1 import BaseModel as BaseModelV1
if issubclass(model, BaseModelV1):
return _create_subset_model_v1(
name,
model,
field_names,
descriptions=descriptions,
fn_description=fn_description,
)
else:
return _create_subset_model_v2(
name,
model,
field_names,
descriptions=descriptions,
fn_description=fn_description,
)
else:
msg = f"Unsupported pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise NotImplementedError(msg)
if PYDANTIC_MAJOR_VERSION == 2:
from pydantic import BaseModel as BaseModelV2
from pydantic.fields import FieldInfo as FieldInfoV2
from pydantic.v1 import BaseModel as BaseModelV1
@overload
def get_fields(model: type[BaseModelV2]) -> dict[str, FieldInfoV2]: ...
@overload
def get_fields(model: BaseModelV2) -> dict[str, FieldInfoV2]: ...
@overload
def get_fields(model: type[BaseModelV1]) -> dict[str, FieldInfoV1]: ...
@overload
def get_fields(model: BaseModelV1) -> dict[str, FieldInfoV1]: ...
def get_fields(
model: Union[
BaseModelV2,
BaseModelV1,
type[BaseModelV2],
type[BaseModelV1],
],
) -> Union[dict[str, FieldInfoV2], dict[str, FieldInfoV1]]:
"""Get the field names of a Pydantic model."""
if hasattr(model, "model_fields"):
return model.model_fields # type: ignore
elif hasattr(model, "__fields__"):
return model.__fields__ # type: ignore
else:
msg = f"Expected a Pydantic model. Got {type(model)}"
raise TypeError(msg)
elif PYDANTIC_MAJOR_VERSION == 1:
from pydantic import BaseModel as BaseModelV1_
def get_fields( # type: ignore[no-redef]
model: Union[type[BaseModelV1_], BaseModelV1_],
) -> dict[str, FieldInfoV1]:
"""Get the field names of a Pydantic model."""
return model.__fields__ # type: ignore
else:
msg = f"Unsupported Pydantic version: {PYDANTIC_MAJOR_VERSION}"
raise ValueError(msg)
_SchemaConfig = ConfigDict(
arbitrary_types_allowed=True, frozen=True, protected_namespaces=()
)
NO_DEFAULT = object()
def _create_root_model(
name: str,
type_: Any,
module_name: Optional[str] = None,
default_: object = NO_DEFAULT,
) -> type[BaseModel]:
"""Create a base class."""
def schema(
cls: type[BaseModel],
by_alias: bool = True,
ref_template: str = DEFAULT_REF_TEMPLATE,
) -> dict[str, Any]:
# Complains about schema not being defined in superclass
schema_ = super(cls, cls).schema( # type: ignore[misc]
by_alias=by_alias, ref_template=ref_template
)
schema_["title"] = name
return schema_
def model_json_schema(
cls: type[BaseModel],
by_alias: bool = True,
ref_template: str = DEFAULT_REF_TEMPLATE,
schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
mode: JsonSchemaMode = "validation",
) -> dict[str, Any]:
# Complains about model_json_schema not being defined in superclass
schema_ = super(cls, cls).model_json_schema( # type: ignore[misc]
by_alias=by_alias,
ref_template=ref_template,
schema_generator=schema_generator,
mode=mode,
)
schema_["title"] = name
return schema_
base_class_attributes = {
"__annotations__": {"root": type_},
"model_config": ConfigDict(arbitrary_types_allowed=True),
"schema": classmethod(schema),
"model_json_schema": classmethod(model_json_schema),
"__module__": module_name or "langchain_core.runnables.utils",
}
if default_ is not NO_DEFAULT:
base_class_attributes["root"] = default_
with warnings.catch_warnings():
try:
if (
isinstance(type_, type)
and not isinstance(type_, GenericAlias)
and issubclass(type_, BaseModelV1)
):
warnings.filterwarnings(
action="ignore", category=PydanticDeprecationWarning
)
except TypeError:
pass
custom_root_type = type(name, (RootModel,), base_class_attributes)
return cast(type[BaseModel], custom_root_type)
@lru_cache(maxsize=256)
def _create_root_model_cached(
model_name: str,
type_: Any,
*,
module_name: Optional[str] = None,
default_: object = NO_DEFAULT,
) -> type[BaseModel]:
return _create_root_model(
model_name, type_, default_=default_, module_name=module_name
)
@lru_cache(maxsize=256)
def _create_model_cached(
__model_name: str,
**field_definitions: Any,
) -> type[BaseModel]:
return _create_model_base(
__model_name,
__config__=_SchemaConfig,
**_remap_field_definitions(field_definitions),
)
def create_model(
__model_name: str,
__module_name: Optional[str] = None,
**field_definitions: Any,
) -> type[BaseModel]:
"""Create a pydantic model with the given field definitions.
Please use create_model_v2 instead of this function.
Args:
__model_name: The name of the model.
__module_name: The name of the module where the model is defined.
This is used by Pydantic to resolve any forward references.
**field_definitions: The field definitions for the model.
Returns:
Type[BaseModel]: The created model.
"""
kwargs = {}
if "__root__" in field_definitions:
kwargs["root"] = field_definitions.pop("__root__")
return create_model_v2(
__model_name,
module_name=__module_name,
field_definitions=field_definitions,
**kwargs,
)
# Reserved names should capture all the `public` names / methods that are
# used by BaseModel internally. This will keep the reserved names up-to-date.
# For reference, the reserved names are:
| |
153891
|
class Runnable(Generic[Input, Output], ABC):
"""A unit of work that can be invoked, batched, streamed, transformed and composed.
Key Methods
===========
- **invoke/ainvoke**: Transforms a single input into an output.
- **batch/abatch**: Efficiently transforms multiple inputs into outputs.
- **stream/astream**: Streams output from a single input as it's produced.
- **astream_log**: Streams output and selected intermediate results from an input.
Built-in optimizations:
- **Batch**: By default, batch runs invoke() in parallel using a thread pool executor.
Override to optimize batching.
- **Async**: Methods with "a" suffix are asynchronous. By default, they execute
the sync counterpart using asyncio's thread pool.
Override for native async.
All methods accept an optional config argument, which can be used to configure
execution, add tags and metadata for tracing and debugging etc.
Runnables expose schematic information about their input, output and config via
the input_schema property, the output_schema property and config_schema method.
LCEL and Composition
====================
The LangChain Expression Language (LCEL) is a declarative way to compose Runnables
into chains. Any chain constructed this way will automatically have sync, async,
batch, and streaming support.
The main composition primitives are RunnableSequence and RunnableParallel.
**RunnableSequence** invokes a series of runnables sequentially, with
one Runnable's output serving as the next's input. Construct using
the `|` operator or by passing a list of runnables to RunnableSequence.
**RunnableParallel** invokes runnables concurrently, providing the same input
to each. Construct it using a dict literal within a sequence or by passing a
dict to RunnableParallel.
For example,
.. code-block:: python
from langchain_core.runnables import RunnableLambda
# A RunnableSequence constructed using the `|` operator
sequence = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
sequence.invoke(1) # 4
sequence.batch([1, 2, 3]) # [4, 6, 8]
# A sequence that contains a RunnableParallel constructed using a dict literal
sequence = RunnableLambda(lambda x: x + 1) | {
'mul_2': RunnableLambda(lambda x: x * 2),
'mul_5': RunnableLambda(lambda x: x * 5)
}
sequence.invoke(1) # {'mul_2': 4, 'mul_5': 10}
Standard Methods
================
All Runnables expose additional methods that can be used to modify their behavior
(e.g., add a retry policy, add lifecycle listeners, make them configurable, etc.).
These methods will work on any Runnable, including Runnable chains constructed
by composing other Runnables. See the individual methods for details.
For example,
.. code-block:: python
from langchain_core.runnables import RunnableLambda
import random
def add_one(x: int) -> int:
return x + 1
def buggy_double(y: int) -> int:
'''Buggy code that will fail 70% of the time'''
if random.random() > 0.3:
print('This code failed, and will probably be retried!') # noqa: T201
raise ValueError('Triggered buggy code')
return y * 2
sequence = (
RunnableLambda(add_one) |
RunnableLambda(buggy_double).with_retry( # Retry on failure
stop_after_attempt=10,
wait_exponential_jitter=False
)
)
print(sequence.input_schema.model_json_schema()) # Show inferred input schema
print(sequence.output_schema.model_json_schema()) # Show inferred output schema
print(sequence.invoke(2)) # invoke the sequence (note the retry above!!)
Debugging and tracing
=====================
As the chains get longer, it can be useful to be able to see intermediate results
to debug and trace the chain.
You can set the global debug flag to True to enable debug output for all chains:
.. code-block:: python
from langchain_core.globals import set_debug
set_debug(True)
Alternatively, you can pass existing or custom callbacks to any given chain:
.. code-block:: python
from langchain_core.tracers import ConsoleCallbackHandler
chain.invoke(
...,
config={'callbacks': [ConsoleCallbackHandler()]}
)
For a UI (and much more) checkout LangSmith: https://docs.smith.langchain.com/
""" # noqa: E501
name: Optional[str]
"""The name of the Runnable. Used for debugging and tracing."""
def get_name(
self, suffix: Optional[str] = None, *, name: Optional[str] = None
) -> str:
"""Get the name of the Runnable."""
if name:
name_ = name
elif hasattr(self, "name") and self.name:
name_ = self.name
else:
# Here we handle a case where the runnable subclass is also a pydantic
# model.
cls = self.__class__
# Then it's a pydantic sub-class, and we have to check
# whether it's a generic, and if so recover the original name.
if (
hasattr(
cls,
"__pydantic_generic_metadata__",
)
and "origin" in cls.__pydantic_generic_metadata__
and cls.__pydantic_generic_metadata__["origin"] is not None
):
name_ = cls.__pydantic_generic_metadata__["origin"].__name__
else:
name_ = cls.__name__
if suffix:
if name_[0].isupper():
return name_ + suffix.title()
else:
return name_ + "_" + suffix.lower()
else:
return name_
@property
def InputType(self) -> type[Input]: # noqa: N802
"""The type of input this Runnable accepts specified as a type annotation."""
# First loop through all parent classes and if any of them is
# a pydantic model, we will pick up the generic parameterization
# from that model via the __pydantic_generic_metadata__ attribute.
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if "args" in metadata and len(metadata["args"]) == 2:
return metadata["args"][0]
# If we didn't find a pydantic model in the parent classes,
# then loop through __orig_bases__. This corresponds to
# Runnables that are not pydantic models.
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == 2:
return type_args[0]
msg = (
f"Runnable {self.get_name()} doesn't have an inferable InputType. "
"Override the InputType property to specify the input type."
)
raise TypeError(msg)
@property
def OutputType(self) -> type[Output]: # noqa: N802
"""The type of output this Runnable produces specified as a type annotation."""
# First loop through bases -- this will help generic
# any pydantic models.
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if "args" in metadata and len(metadata["args"]) == 2:
return metadata["args"][1]
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == 2:
return type_args[1]
msg = (
f"Runnable {self.get_name()} doesn't have an inferable OutputType. "
"Override the OutputType property to specify the output type."
)
raise TypeError(msg)
@property
def input_schema(self) -> type[BaseModel]:
"""The type of input this Runnable accepts specified as a pydantic model."""
return self.get_input_schema()
| |
153908
|
def invoke(
self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> dict[str, Any]:
from langchain_core.callbacks.manager import CallbackManager
# setup callbacks
config = ensure_config(config)
callback_manager = CallbackManager.configure(
inheritable_callbacks=config.get("callbacks"),
local_callbacks=None,
verbose=False,
inheritable_tags=config.get("tags"),
local_tags=None,
inheritable_metadata=config.get("metadata"),
local_metadata=None,
)
# start the root run
run_manager = callback_manager.on_chain_start(
None,
input,
name=config.get("run_name") or self.get_name(),
run_id=config.pop("run_id", None),
)
def _invoke_step(
step: Runnable[Input, Any], input: Input, config: RunnableConfig, key: str
) -> Any:
child_config = patch_config(
config,
# mark each step as a child run
callbacks=run_manager.get_child(f"map:key:{key}"),
)
context = copy_context()
context.run(_set_config_context, child_config)
return context.run(
step.invoke,
input,
child_config,
)
# gather results from all steps
try:
# copy to avoid issues from the caller mutating the steps during invoke()
steps = dict(self.steps__)
with get_executor_for_config(config) as executor:
futures = [
executor.submit(_invoke_step, step, input, config, key)
for key, step in steps.items()
]
output = {key: future.result() for key, future in zip(steps, futures)}
# finish the root run
except BaseException as e:
run_manager.on_chain_error(e)
raise
else:
run_manager.on_chain_end(output)
return output
async def ainvoke(
self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> dict[str, Any]:
# setup callbacks
config = ensure_config(config)
callback_manager = get_async_callback_manager_for_config(config)
# start the root run
run_manager = await callback_manager.on_chain_start(
None,
input,
name=config.get("run_name") or self.get_name(),
run_id=config.pop("run_id", None),
)
async def _ainvoke_step(
step: Runnable[Input, Any], input: Input, config: RunnableConfig, key: str
) -> Any:
child_config = patch_config(
config,
callbacks=run_manager.get_child(f"map:key:{key}"),
)
context = copy_context()
context.run(_set_config_context, child_config)
if asyncio_accepts_context():
return await asyncio.create_task( # type: ignore
step.ainvoke(input, child_config), context=context
)
else:
return await asyncio.create_task(step.ainvoke(input, child_config))
# gather results from all steps
try:
# copy to avoid issues from the caller mutating the steps during invoke()
steps = dict(self.steps__)
results = await asyncio.gather(
*(
_ainvoke_step(
step,
input,
# mark each step as a child run
config,
key,
)
for key, step in steps.items()
)
)
output = dict(zip(steps, results))
# finish the root run
except BaseException as e:
await run_manager.on_chain_error(e)
raise
else:
await run_manager.on_chain_end(output)
return output
def _transform(
self,
input: Iterator[Input],
run_manager: CallbackManagerForChainRun,
config: RunnableConfig,
) -> Iterator[AddableDict]:
# Shallow copy steps to ignore mutations while in progress
steps = dict(self.steps__)
# Each step gets a copy of the input iterator,
# which is consumed in parallel in a separate thread.
input_copies = list(safetee(input, len(steps), lock=threading.Lock()))
with get_executor_for_config(config) as executor:
# Create the transform() generator for each step
named_generators = [
(
name,
step.transform(
input_copies.pop(),
patch_config(
config, callbacks=run_manager.get_child(f"map:key:{name}")
),
),
)
for name, step in steps.items()
]
# Start the first iteration of each generator
futures = {
executor.submit(next, generator): (step_name, generator)
for step_name, generator in named_generators
}
# Yield chunks from each as they become available,
# and start the next iteration of that generator that yielded it.
# When all generators are exhausted, stop.
while futures:
completed_futures, _ = wait(futures, return_when=FIRST_COMPLETED)
for future in completed_futures:
(step_name, generator) = futures.pop(future)
try:
chunk = AddableDict({step_name: future.result()})
yield chunk
futures[executor.submit(next, generator)] = (
step_name,
generator,
)
except StopIteration:
pass
def transform(
self,
input: Iterator[Input],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Iterator[dict[str, Any]]:
yield from self._transform_stream_with_config(
input, self._transform, config, **kwargs
)
def stream(
self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> Iterator[dict[str, Any]]:
yield from self.transform(iter([input]), config)
async def _atransform(
self,
input: AsyncIterator[Input],
run_manager: AsyncCallbackManagerForChainRun,
config: RunnableConfig,
) -> AsyncIterator[AddableDict]:
# Shallow copy steps to ignore mutations while in progress
steps = dict(self.steps__)
# Each step gets a copy of the input iterator,
# which is consumed in parallel in a separate thread.
input_copies = list(atee(input, len(steps), lock=asyncio.Lock()))
# Create the transform() generator for each step
named_generators = [
(
name,
step.atransform(
input_copies.pop(),
patch_config(
config, callbacks=run_manager.get_child(f"map:key:{name}")
),
),
)
for name, step in steps.items()
]
# Wrap in a coroutine to satisfy linter
async def get_next_chunk(generator: AsyncIterator) -> Optional[Output]:
return await py_anext(generator)
# Start the first iteration of each generator
tasks = {
asyncio.create_task(get_next_chunk(generator)): (step_name, generator)
for step_name, generator in named_generators
}
# Yield chunks from each as they become available,
# and start the next iteration of the generator that yielded it.
# When all generators are exhausted, stop.
while tasks:
completed_tasks, _ = await asyncio.wait(
tasks, return_when=asyncio.FIRST_COMPLETED
)
for task in completed_tasks:
(step_name, generator) = tasks.pop(task)
try:
chunk = AddableDict({step_name: task.result()})
yield chunk
new_task = asyncio.create_task(get_next_chunk(generator))
tasks[new_task] = (step_name, generator)
except StopAsyncIteration:
pass
async def atransform(
self,
input: AsyncIterator[Input],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> AsyncIterator[dict[str, Any]]:
async for chunk in self._atransform_stream_with_config(
input, self._atransform, config, **kwargs
):
yield chunk
async def astream(
self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> AsyncIterator[dict[str, Any]]:
async def input_aiter() -> AsyncIterator[Input]:
yield input
async for chunk in self.atransform(input_aiter(), config):
yield chunk
# We support both names
RunnableMap = RunnableParallel
| |
153909
|
class RunnableGenerator(Runnable[Input, Output]):
"""Runnable that runs a generator function.
RunnableGenerators can be instantiated directly or by using a generator within
a sequence.
RunnableGenerators can be used to implement custom behavior, such as custom output
parsers, while preserving streaming capabilities. Given a generator function with
a signature Iterator[A] -> Iterator[B], wrapping it in a RunnableGenerator allows
it to emit output chunks as soon as they are streamed in from the previous step.
Note that if a generator function has a signature A -> Iterator[B], such that it
requires its input from the previous step to be completed before emitting chunks
(e.g., most LLMs need the entire prompt available to start generating), it can
instead be wrapped in a RunnableLambda.
Here is an example to show the basic mechanics of a RunnableGenerator:
.. code-block:: python
from typing import Any, AsyncIterator, Iterator
from langchain_core.runnables import RunnableGenerator
def gen(input: Iterator[Any]) -> Iterator[str]:
for token in ["Have", " a", " nice", " day"]:
yield token
runnable = RunnableGenerator(gen)
runnable.invoke(None) # "Have a nice day"
list(runnable.stream(None)) # ["Have", " a", " nice", " day"]
runnable.batch([None, None]) # ["Have a nice day", "Have a nice day"]
# Async version:
async def agen(input: AsyncIterator[Any]) -> AsyncIterator[str]:
for token in ["Have", " a", " nice", " day"]:
yield token
runnable = RunnableGenerator(agen)
await runnable.ainvoke(None) # "Have a nice day"
[p async for p in runnable.astream(None)] # ["Have", " a", " nice", " day"]
RunnableGenerator makes it easy to implement custom behavior within a streaming
context. Below we show an example:
.. code-block:: python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableGenerator, RunnableLambda
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
model = ChatOpenAI()
chant_chain = (
ChatPromptTemplate.from_template("Give me a 3 word chant about {topic}")
| model
| StrOutputParser()
)
def character_generator(input: Iterator[str]) -> Iterator[str]:
for token in input:
if "," in token or "." in token:
yield "👏" + token
else:
yield token
runnable = chant_chain | character_generator
assert type(runnable.last) is RunnableGenerator
"".join(runnable.stream({"topic": "waste"})) # Reduce👏, Reuse👏, Recycle👏.
# Note that RunnableLambda can be used to delay streaming of one step in a
# sequence until the previous step is finished:
def reverse_generator(input: str) -> Iterator[str]:
# Yield characters of input in reverse order.
for character in input[::-1]:
yield character
runnable = chant_chain | RunnableLambda(reverse_generator)
"".join(runnable.stream({"topic": "waste"})) # ".elcycer ,esuer ,ecudeR"
"""
def __init__(
self,
transform: Union[
Callable[[Iterator[Input]], Iterator[Output]],
Callable[[AsyncIterator[Input]], AsyncIterator[Output]],
],
atransform: Optional[
Callable[[AsyncIterator[Input]], AsyncIterator[Output]]
] = None,
*,
name: Optional[str] = None,
) -> None:
"""Initialize a RunnableGenerator.
Args:
transform: The transform function.
atransform: The async transform function. Defaults to None.
Raises:
TypeError: If the transform is not a generator function.
"""
if atransform is not None:
self._atransform = atransform
func_for_name: Callable = atransform
if is_async_generator(transform):
self._atransform = transform # type: ignore[assignment]
func_for_name = transform
elif inspect.isgeneratorfunction(transform):
self._transform = transform
func_for_name = transform
else:
msg = (
"Expected a generator function type for `transform`."
f"Instead got an unsupported type: {type(transform)}"
)
raise TypeError(msg)
try:
self.name = name or func_for_name.__name__
except AttributeError:
self.name = "RunnableGenerator"
@property
@override
def InputType(self) -> Any:
func = getattr(self, "_transform", None) or self._atransform
try:
params = inspect.signature(func).parameters
first_param = next(iter(params.values()), None)
if first_param and first_param.annotation != inspect.Parameter.empty:
return getattr(first_param.annotation, "__args__", (Any,))[0]
else:
return Any
except ValueError:
return Any
def get_input_schema(
self, config: Optional[RunnableConfig] = None
) -> type[BaseModel]:
# Override the default implementation.
# For a runnable generator, we need to bring to provide the
# module of the underlying function when creating the model.
root_type = self.InputType
func = getattr(self, "_transform", None) or self._atransform
module = getattr(func, "__module__", None)
if (
inspect.isclass(root_type)
and not isinstance(root_type, GenericAlias)
and issubclass(root_type, BaseModel)
):
return root_type
return create_model_v2(
self.get_name("Input"),
root=root_type,
# To create the schema, we need to provide the module
# where the underlying function is defined.
# This allows pydantic to resolve type annotations appropriately.
module_name=module,
)
@property
@override
def OutputType(self) -> Any:
func = getattr(self, "_transform", None) or self._atransform
try:
sig = inspect.signature(func)
return (
getattr(sig.return_annotation, "__args__", (Any,))[0]
if sig.return_annotation != inspect.Signature.empty
else Any
)
except ValueError:
return Any
def get_output_schema(
self, config: Optional[RunnableConfig] = None
) -> type[BaseModel]:
# Override the default implementation.
# For a runnable generator, we need to bring to provide the
# module of the underlying function when creating the model.
root_type = self.OutputType
func = getattr(self, "_transform", None) or self._atransform
module = getattr(func, "__module__", None)
if (
inspect.isclass(root_type)
and not isinstance(root_type, GenericAlias)
and issubclass(root_type, BaseModel)
):
return root_type
return create_model_v2(
self.get_name("Output"),
root=root_type,
# To create the schema, we need to provide the module
# where the underlying function is defined.
# This allows pydantic to resolve type annotations appropriately.
module_name=module,
)
def __eq__(self, other: Any) -> bool:
if isinstance(other, RunnableGenerator):
if hasattr(self, "_transform") and hasattr(other, "_transform"):
return self._transform == other._transform
elif hasattr(self, "_atransform") and hasattr(other, "_atransform"):
return self._atransform == other._atransform
else:
return False
else:
return False
def __repr__(self) -> str:
return f"RunnableGenerator({self.name})"
def transform(
self,
input: Iterator[Input],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Iterator[Output]:
if not hasattr(self, "_transform"):
msg = f"{repr(self)} only supports async methods."
raise NotImplementedError(msg)
return self._transform_stream_with_config(
input,
self._transform, # type: ignore[arg-type]
config,
**kwargs, # type: ignore[arg-type]
)
def stream(
self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Iterator[Output]:
return self.transform(iter([input]), config, **kwargs)
def invoke(
self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> Output:
final: Optional[Output] = None
for output in self.stream(input, config, **kwargs):
final = output if final is None else final + output # type: ignore[operator]
return cast(Output, final)
def at
| |
153920
|
from __future__ import annotations
import inspect
from collections.abc import Sequence
from types import GenericAlias
from typing import (
TYPE_CHECKING,
Any,
Callable,
Optional,
Union,
)
from pydantic import BaseModel
from typing_extensions import override
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.load.load import load
from langchain_core.runnables.base import Runnable, RunnableBindingBase, RunnableLambda
from langchain_core.runnables.passthrough import RunnablePassthrough
from langchain_core.runnables.utils import (
ConfigurableFieldSpec,
Output,
get_unique_config_specs,
)
from langchain_core.utils.pydantic import create_model_v2
if TYPE_CHECKING:
from langchain_core.language_models.base import LanguageModelLike
from langchain_core.messages.base import BaseMessage
from langchain_core.runnables.config import RunnableConfig
from langchain_core.tracers.schemas import Run
MessagesOrDictWithMessages = Union[Sequence["BaseMessage"], dict[str, Any]]
GetSessionHistoryCallable = Callable[..., BaseChatMessageHistory]
class RunnableWithMessageHistory(RunnableBindingBase):
"""Runnable that manages chat message history for another Runnable.
A chat message history is a sequence of messages that represent a conversation.
RunnableWithMessageHistory wraps another Runnable and manages the chat message
history for it; it is responsible for reading and updating the chat message
history.
The formats supported for the inputs and outputs of the wrapped Runnable
are described below.
RunnableWithMessageHistory must always be called with a config that contains
the appropriate parameters for the chat message history factory.
By default, the Runnable is expected to take a single configuration parameter
called `session_id` which is a string. This parameter is used to create a new
or look up an existing chat message history that matches the given session_id.
In this case, the invocation would look like this:
`with_history.invoke(..., config={"configurable": {"session_id": "bar"}})`
; e.g., ``{"configurable": {"session_id": "<SESSION_ID>"}}``.
The configuration can be customized by passing in a list of
``ConfigurableFieldSpec`` objects to the ``history_factory_config`` parameter (see
example below).
In the examples, we will use a chat message history with an in-memory
implementation to make it easy to experiment and see the results.
For production use cases, you will want to use a persistent implementation
of chat message history, such as ``RedisChatMessageHistory``.
Parameters:
get_session_history: Function that returns a new BaseChatMessageHistory.
This function should either take a single positional argument
`session_id` of type string and return a corresponding
chat message history instance.
input_messages_key: Must be specified if the base runnable accepts a dict
as input. The key in the input dict that contains the messages.
output_messages_key: Must be specified if the base Runnable returns a dict
as output. The key in the output dict that contains the messages.
history_messages_key: Must be specified if the base runnable accepts a dict
as input and expects a separate key for historical messages.
history_factory_config: Configure fields that should be passed to the
chat history factory. See ``ConfigurableFieldSpec`` for more details.
Example: Chat message history with an in-memory implementation for testing.
.. code-block:: python
from operator import itemgetter
from typing import List
from langchain_openai.chat_models import ChatOpenAI
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.documents import Document
from langchain_core.messages import BaseMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from pydantic import BaseModel, Field
from langchain_core.runnables import (
RunnableLambda,
ConfigurableFieldSpec,
RunnablePassthrough,
)
from langchain_core.runnables.history import RunnableWithMessageHistory
class InMemoryHistory(BaseChatMessageHistory, BaseModel):
\"\"\"In memory implementation of chat message history.\"\"\"
messages: List[BaseMessage] = Field(default_factory=list)
def add_messages(self, messages: List[BaseMessage]) -> None:
\"\"\"Add a list of messages to the store\"\"\"
self.messages.extend(messages)
def clear(self) -> None:
self.messages = []
# Here we use a global variable to store the chat message history.
# This will make it easier to inspect it to see the underlying results.
store = {}
def get_by_session_id(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = InMemoryHistory()
return store[session_id]
history = get_by_session_id("1")
history.add_message(AIMessage(content="hello"))
print(store) # noqa: T201
Example where the wrapped Runnable takes a dictionary input:
.. code-block:: python
from typing import Optional
from langchain_community.chat_models import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
prompt = ChatPromptTemplate.from_messages([
("system", "You're an assistant who's good at {ability}"),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
])
chain = prompt | ChatAnthropic(model="claude-2")
chain_with_history = RunnableWithMessageHistory(
chain,
# Uses the get_by_session_id function defined in the example
# above.
get_by_session_id,
input_messages_key="question",
history_messages_key="history",
)
print(chain_with_history.invoke( # noqa: T201
{"ability": "math", "question": "What does cosine mean?"},
config={"configurable": {"session_id": "foo"}}
))
# Uses the store defined in the example above.
print(store) # noqa: T201
print(chain_with_history.invoke( # noqa: T201
{"ability": "math", "question": "What's its inverse"},
config={"configurable": {"session_id": "foo"}}
))
print(store) # noqa: T201
Example where the session factory takes two keys, user_id and conversation id):
.. code-block:: python
store = {}
def get_session_history(
user_id: str, conversation_id: str
) -> BaseChatMessageHistory:
if (user_id, conversation_id) not in store:
store[(user_id, conversation_id)] = InMemoryHistory()
return store[(user_id, conversation_id)]
prompt = ChatPromptTemplate.from_messages([
("system", "You're an assistant who's good at {ability}"),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
])
chain = prompt | ChatAnthropic(model="claude-2")
with_message_history = RunnableWithMessageHistory(
chain,
get_session_history=get_session_history,
input_messages_key="question",
history_messages_key="history",
history_factory_config=[
ConfigurableFieldSpec(
id="user_id",
annotation=str,
name="User ID",
description="Unique identifier for the user.",
default="",
is_shared=True,
),
ConfigurableFieldSpec(
id="conversation_id",
annotation=str,
name="Conversation ID",
description="Unique identifier for the conversation.",
default="",
is_shared=True,
),
],
)
with_message_history.invoke(
{"ability": "math", "question": "What does cosine mean?"},
config={"configurable": {"user_id": "123", "conversation_id": "1"}}
)
"""
get_session_history: GetSessionHistoryCallable
input_messages_key: Optional[str] = None
output_messages_key: Optional[str] = None
history_messages_key: Optional[str] = None
history_factory_config: Sequence[ConfigurableFieldSpec]
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "schema", "runnable"]
| |
153926
|
class VectorStore(ABC):
"""Interface for vector store."""
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[list[dict]] = None,
*,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> list[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of IDs associated with the texts.
**kwargs: vectorstore specific parameters.
One of the kwargs should be `ids` which is a list of ids
associated with the texts.
Returns:
List of ids from adding the texts into the vectorstore.
Raises:
ValueError: If the number of metadatas does not match the number of texts.
ValueError: If the number of ids does not match the number of texts.
"""
if type(self).add_documents != VectorStore.add_documents:
# Import document in local scope to avoid circular imports
from langchain_core.documents import Document
# This condition is triggered if the subclass has provided
# an implementation of the upsert method.
# The existing add_texts
texts_: Sequence[str] = (
texts if isinstance(texts, (list, tuple)) else list(texts)
)
if metadatas and len(metadatas) != len(texts_):
msg = (
"The number of metadatas must match the number of texts."
f"Got {len(metadatas)} metadatas and {len(texts_)} texts."
)
raise ValueError(msg)
metadatas_ = iter(metadatas) if metadatas else cycle([{}])
ids_: Iterator[Optional[str]] = iter(ids) if ids else cycle([None])
docs = [
Document(id=id_, page_content=text, metadata=metadata_)
for text, metadata_, id_ in zip(texts, metadatas_, ids_)
]
if ids is not None:
# For backward compatibility
kwargs["ids"] = ids
return self.add_documents(docs, **kwargs)
msg = f"`add_texts` has not been implemented for {self.__class__.__name__} "
raise NotImplementedError(msg)
@property
def embeddings(self) -> Optional[Embeddings]:
"""Access the query embedding object if available."""
logger.debug(
f"The embeddings property has not been "
f"implemented for {self.__class__.__name__}"
)
return None
def delete(self, ids: Optional[list[str]] = None, **kwargs: Any) -> Optional[bool]:
"""Delete by vector ID or other criteria.
Args:
ids: List of ids to delete. If None, delete all. Default is None.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
Optional[bool]: True if deletion is successful,
False otherwise, None if not implemented.
"""
msg = "delete method must be implemented by subclass."
raise NotImplementedError(msg)
def get_by_ids(self, ids: Sequence[str], /) -> list[Document]:
"""Get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the
document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or
if there are duplicated IDs.
Users should not assume that the order of the returned documents matches
the order of the input IDs. Instead, users should rely on the ID field of the
returned documents.
This method should **NOT** raise exceptions if no documents are found for
some IDs.
Args:
ids: List of ids to retrieve.
Returns:
List of Documents.
.. versionadded:: 0.2.11
"""
msg = f"{self.__class__.__name__} does not yet support get_by_ids."
raise NotImplementedError(msg)
# Implementations should override this method to provide an async native version.
async def aget_by_ids(self, ids: Sequence[str], /) -> list[Document]:
"""Async get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the
document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or
if there are duplicated IDs.
Users should not assume that the order of the returned documents matches
the order of the input IDs. Instead, users should rely on the ID field of the
returned documents.
This method should **NOT** raise exceptions if no documents are found for
some IDs.
Args:
ids: List of ids to retrieve.
Returns:
List of Documents.
.. versionadded:: 0.2.11
"""
return await run_in_executor(None, self.get_by_ids, ids)
async def adelete(
self, ids: Optional[list[str]] = None, **kwargs: Any
) -> Optional[bool]:
"""Async delete by vector ID or other criteria.
Args:
ids: List of ids to delete. If None, delete all. Default is None.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
Optional[bool]: True if deletion is successful,
False otherwise, None if not implemented.
"""
return await run_in_executor(None, self.delete, ids, **kwargs)
async def aadd_texts(
self,
texts: Iterable[str],
metadatas: Optional[list[dict]] = None,
*,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> list[str]:
"""Async run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
Default is None.
ids: Optional list
**kwargs: vectorstore specific parameters.
Returns:
List of ids from adding the texts into the vectorstore.
Raises:
ValueError: If the number of metadatas does not match the number of texts.
ValueError: If the number of ids does not match the number of texts.
"""
if ids is not None:
# For backward compatibility
kwargs["ids"] = ids
if type(self).aadd_documents != VectorStore.aadd_documents:
# Import document in local scope to avoid circular imports
from langchain_core.documents import Document
# This condition is triggered if the subclass has provided
# an implementation of the upsert method.
# The existing add_texts
texts_: Sequence[str] = (
texts if isinstance(texts, (list, tuple)) else list(texts)
)
if metadatas and len(metadatas) != len(texts_):
msg = (
"The number of metadatas must match the number of texts."
f"Got {len(metadatas)} metadatas and {len(texts_)} texts."
)
raise ValueError(msg)
metadatas_ = iter(metadatas) if metadatas else cycle([{}])
ids_: Iterator[Optional[str]] = iter(ids) if ids else cycle([None])
docs = [
Document(id=id_, page_content=text, metadata=metadata_)
for text, metadata_, id_ in zip(texts, metadatas_, ids_)
]
return await self.aadd_documents(docs, **kwargs)
return await run_in_executor(None, self.add_texts, texts, metadatas, **kwargs)
def add_documents(self, documents: list[Document], **kwargs: Any) -> list[str]:
"""Add or update documents in the vectorstore.
Args:
documents: Documents to add to the vectorstore.
kwargs: Additional keyword arguments.
if kwargs contains ids and documents contain ids,
the ids in the kwargs will receive precedence.
Returns:
List of IDs of the added texts.
Raises:
ValueError: If the number of ids does not match the number of documents.
"""
if type(self).add_texts != VectorStore.add_texts:
if "ids" not in kwargs:
ids = [doc.id for doc in documents]
# If there's at least one valid ID, we'll assume that IDs
# should be used.
if any(ids):
kwargs["ids"] = ids
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return self.add_texts(texts, metadatas, **kwargs)
msg = (
f"`add_documents` and `add_texts` has not been implemented "
f"for {self.__class__.__name__} "
)
raise NotImplementedError(msg)
| |
153927
|
async def aadd_documents(
self, documents: list[Document], **kwargs: Any
) -> list[str]:
"""Async run more documents through the embeddings and add to
the vectorstore.
Args:
documents: Documents to add to the vectorstore.
kwargs: Additional keyword arguments.
Returns:
List of IDs of the added texts.
Raises:
ValueError: If the number of IDs does not match the number of documents.
"""
# If the async method has been overridden, we'll use that.
if type(self).aadd_texts != VectorStore.aadd_texts:
if "ids" not in kwargs:
ids = [doc.id for doc in documents]
# If there's at least one valid ID, we'll assume that IDs
# should be used.
if any(ids):
kwargs["ids"] = ids
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return await self.aadd_texts(texts, metadatas, **kwargs)
return await run_in_executor(None, self.add_documents, documents, **kwargs)
def search(self, query: str, search_type: str, **kwargs: Any) -> list[Document]:
"""Return docs most similar to query using a specified search type.
Args:
query: Input text
search_type: Type of search to perform. Can be "similarity",
"mmr", or "similarity_score_threshold".
**kwargs: Arguments to pass to the search method.
Returns:
List of Documents most similar to the query.
Raises:
ValueError: If search_type is not one of "similarity",
"mmr", or "similarity_score_threshold".
"""
if search_type == "similarity":
return self.similarity_search(query, **kwargs)
elif search_type == "similarity_score_threshold":
docs_and_similarities = self.similarity_search_with_relevance_scores(
query, **kwargs
)
return [doc for doc, _ in docs_and_similarities]
elif search_type == "mmr":
return self.max_marginal_relevance_search(query, **kwargs)
else:
msg = (
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity', 'similarity_score_threshold'"
" or 'mmr'."
)
raise ValueError(msg)
async def asearch(
self, query: str, search_type: str, **kwargs: Any
) -> list[Document]:
"""Async return docs most similar to query using a specified search type.
Args:
query: Input text.
search_type: Type of search to perform. Can be "similarity",
"mmr", or "similarity_score_threshold".
**kwargs: Arguments to pass to the search method.
Returns:
List of Documents most similar to the query.
Raises:
ValueError: If search_type is not one of "similarity",
"mmr", or "similarity_score_threshold".
"""
if search_type == "similarity":
return await self.asimilarity_search(query, **kwargs)
elif search_type == "similarity_score_threshold":
docs_and_similarities = await self.asimilarity_search_with_relevance_scores(
query, **kwargs
)
return [doc for doc, _ in docs_and_similarities]
elif search_type == "mmr":
return await self.amax_marginal_relevance_search(query, **kwargs)
else:
msg = (
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity', 'similarity_score_threshold' or 'mmr'."
)
raise ValueError(msg)
@abstractmethod
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> list[Document]:
"""Return docs most similar to query.
Args:
query: Input text.
k: Number of Documents to return. Defaults to 4.
**kwargs: Arguments to pass to the search method.
Returns:
List of Documents most similar to the query.
"""
@staticmethod
def _euclidean_relevance_score_fn(distance: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# The 'correct' relevance function
# may differ depending on a few things, including:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit normed. Many
# others are not!)
# - embedding dimensionality
# - etc.
# This function converts the Euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - distance / math.sqrt(2)
@staticmethod
def _cosine_relevance_score_fn(distance: float) -> float:
"""Normalize the distance to a score on a scale [0, 1]."""
return 1.0 - distance
@staticmethod
def _max_inner_product_relevance_score_fn(distance: float) -> float:
"""Normalize the distance to a score on a scale [0, 1]."""
if distance > 0:
return 1.0 - distance
return -1.0 * distance
def _select_relevance_score_fn(self) -> Callable[[float], float]:
"""
The 'correct' relevance function
may differ depending on a few things, including:
- the distance / similarity metric used by the VectorStore
- the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
- embedding dimensionality
- etc.
Vectorstores should define their own selection-based method of relevance.
"""
raise NotImplementedError
def similarity_search_with_score(
self, *args: Any, **kwargs: Any
) -> list[tuple[Document, float]]:
"""Run similarity search with distance.
Args:
*args: Arguments to pass to the search method.
**kwargs: Arguments to pass to the search method.
Returns:
List of Tuples of (doc, similarity_score).
"""
raise NotImplementedError
async def asimilarity_search_with_score(
self, *args: Any, **kwargs: Any
) -> list[tuple[Document, float]]:
"""Async run similarity search with distance.
Args:
*args: Arguments to pass to the search method.
**kwargs: Arguments to pass to the search method.
Returns:
List of Tuples of (doc, similarity_score).
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
# asynchronous in the vector store implementations.
return await run_in_executor(
None, self.similarity_search_with_score, *args, **kwargs
)
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> list[tuple[Document, float]]:
"""
Default similarity search with relevance scores. Modify if necessary
in subclass.
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: Input text.
k: Number of Documents to return. Defaults to 4.
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = self.similarity_search_with_score(query, k, **kwargs)
return [(doc, relevance_score_fn(score)) for doc, score in docs_and_scores]
async def _asimilarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> list[tuple[Document, float]]:
"""
Default similarity search with relevance scores. Modify if necessary
in subclass.
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: Input text.
k: Number of Documents to return. Defaults to 4.
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = await self.asimilarity_search_with_score(query, k, **kwargs)
return [(doc, relevance_score_fn(score)) for doc, score in docs_and_scores]
| |
153929
|
async def amax_marginal_relevance_search_by_vector(
self,
embedding: list[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]:
"""Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Default is 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
**kwargs: Arguments to pass to the search method.
Returns:
List of Documents selected by maximal marginal relevance.
"""
return await run_in_executor(
None,
self.max_marginal_relevance_search_by_vector,
embedding,
k=k,
fetch_k=fetch_k,
lambda_mult=lambda_mult,
**kwargs,
)
@classmethod
def from_documents(
cls: type[VST],
documents: list[Document],
embedding: Embeddings,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from documents and embeddings.
Args:
documents: List of Documents to add to the vectorstore.
embedding: Embedding function to use.
kwargs: Additional keyword arguments.
Returns:
VectorStore: VectorStore initialized from documents and embeddings.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
if "ids" not in kwargs:
ids = [doc.id for doc in documents]
# If there's at least one valid ID, we'll assume that IDs
# should be used.
if any(ids):
kwargs["ids"] = ids
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
@classmethod
async def afrom_documents(
cls: type[VST],
documents: list[Document],
embedding: Embeddings,
**kwargs: Any,
) -> VST:
"""Async return VectorStore initialized from documents and embeddings.
Args:
documents: List of Documents to add to the vectorstore.
embedding: Embedding function to use.
kwargs: Additional keyword arguments.
Returns:
VectorStore: VectorStore initialized from documents and embeddings.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
if "ids" not in kwargs:
ids = [doc.id for doc in documents]
# If there's at least one valid ID, we'll assume that IDs
# should be used.
if any(ids):
kwargs["ids"] = ids
return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)
@classmethod
@abstractmethod
def from_texts(
cls: type[VST],
texts: list[str],
embedding: Embeddings,
metadatas: Optional[list[dict]] = None,
*,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from texts and embeddings.
Args:
texts: Texts to add to the vectorstore.
embedding: Embedding function to use.
metadatas: Optional list of metadatas associated with the texts.
Default is None.
ids: Optional list of IDs associated with the texts.
kwargs: Additional keyword arguments.
Returns:
VectorStore: VectorStore initialized from texts and embeddings.
"""
@classmethod
async def afrom_texts(
cls: type[VST],
texts: list[str],
embedding: Embeddings,
metadatas: Optional[list[dict]] = None,
*,
ids: Optional[list[str]] = None,
**kwargs: Any,
) -> VST:
"""Async return VectorStore initialized from texts and embeddings.
Args:
texts: Texts to add to the vectorstore.
embedding: Embedding function to use.
metadatas: Optional list of metadatas associated with the texts.
Default is None.
ids: Optional list of IDs associated with the texts.
kwargs: Additional keyword arguments.
Returns:
VectorStore: VectorStore initialized from texts and embeddings.
"""
if ids is not None:
kwargs["ids"] = ids
return await run_in_executor(
None, cls.from_texts, texts, embedding, metadatas, **kwargs
)
def _get_retriever_tags(self) -> list[str]:
"""Get tags for retriever."""
tags = [self.__class__.__name__]
if self.embeddings:
tags.append(self.embeddings.__class__.__name__)
return tags
def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:
"""Return VectorStoreRetriever initialized from this VectorStore.
Args:
**kwargs: Keyword arguments to pass to the search function.
Can include:
search_type (Optional[str]): Defines the type of search that
the Retriever should perform.
Can be "similarity" (default), "mmr", or
"similarity_score_threshold".
search_kwargs (Optional[Dict]): Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm
(Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns:
VectorStoreRetriever: Retriever class for VectorStore.
Examples:
.. code-block:: python
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
"""
tags = kwargs.pop("tags", None) or [] + self._get_retriever_tags()
return VectorStoreRetriever(vectorstore=self, tags=tags, **kwargs)
| |
153930
|
class VectorStoreRetriever(BaseRetriever):
"""Base Retriever class for VectorStore."""
vectorstore: VectorStore
"""VectorStore to use for retrieval."""
search_type: str = "similarity"
"""Type of search to perform. Defaults to "similarity"."""
search_kwargs: dict = Field(default_factory=dict)
"""Keyword arguments to pass to the search function."""
allowed_search_types: ClassVar[Collection[str]] = (
"similarity",
"similarity_score_threshold",
"mmr",
)
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@model_validator(mode="before")
@classmethod
def validate_search_type(cls, values: dict) -> Any:
"""Validate search type.
Args:
values: Values to validate.
Returns:
Values: Validated values.
Raises:
ValueError: If search_type is not one of the allowed search types.
ValueError: If score_threshold is not specified with a float value(0~1)
"""
search_type = values.get("search_type", "similarity")
if search_type not in cls.allowed_search_types:
msg = (
f"search_type of {search_type} not allowed. Valid values are: "
f"{cls.allowed_search_types}"
)
raise ValueError(msg)
if search_type == "similarity_score_threshold":
score_threshold = values.get("search_kwargs", {}).get("score_threshold")
if (score_threshold is None) or (not isinstance(score_threshold, float)):
msg = (
"`score_threshold` is not specified with a float value(0~1) "
"in `search_kwargs`."
)
raise ValueError(msg)
return values
def _get_ls_params(self, **kwargs: Any) -> LangSmithRetrieverParams:
"""Get standard params for tracing."""
ls_params = super()._get_ls_params(**kwargs)
ls_params["ls_vector_store_provider"] = self.vectorstore.__class__.__name__
if self.vectorstore.embeddings:
ls_params["ls_embedding_provider"] = (
self.vectorstore.embeddings.__class__.__name__
)
elif hasattr(self.vectorstore, "embedding") and isinstance(
self.vectorstore.embedding, Embeddings
):
ls_params["ls_embedding_provider"] = (
self.vectorstore.embedding.__class__.__name__
)
return ls_params
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> list[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = self.vectorstore.max_marginal_relevance_search(
query, **self.search_kwargs
)
else:
msg = f"search_type of {self.search_type} not allowed."
raise ValueError(msg)
return docs
async def _aget_relevant_documents(
self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
) -> list[Document]:
if self.search_type == "similarity":
docs = await self.vectorstore.asimilarity_search(
query, **self.search_kwargs
)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
await self.vectorstore.asimilarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = await self.vectorstore.amax_marginal_relevance_search(
query, **self.search_kwargs
)
else:
msg = f"search_type of {self.search_type} not allowed."
raise ValueError(msg)
return docs
def add_documents(self, documents: list[Document], **kwargs: Any) -> list[str]:
"""Add documents to the vectorstore.
Args:
documents: Documents to add to the vectorstore.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
List of IDs of the added texts.
"""
return self.vectorstore.add_documents(documents, **kwargs)
async def aadd_documents(
self, documents: list[Document], **kwargs: Any
) -> list[str]:
"""Async add documents to the vectorstore.
Args:
documents: Documents to add to the vectorstore.
**kwargs: Other keyword arguments that subclasses might use.
Returns:
List of IDs of the added texts.
"""
return await self.vectorstore.aadd_documents(documents, **kwargs)
| |
153951
|
"""**OutputParser** classes parse the output of an LLM call.
**Class hierarchy:**
.. code-block::
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
**Main helpers:**
.. code-block::
Serializable, Generation, PromptValue
""" # noqa: E501
from langchain_core.output_parsers.base import (
BaseGenerationOutputParser,
BaseLLMOutputParser,
BaseOutputParser,
)
from langchain_core.output_parsers.json import JsonOutputParser, SimpleJsonOutputParser
from langchain_core.output_parsers.list import (
CommaSeparatedListOutputParser,
ListOutputParser,
MarkdownListOutputParser,
NumberedListOutputParser,
)
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
JsonOutputToolsParser,
PydanticToolsParser,
)
from langchain_core.output_parsers.pydantic import PydanticOutputParser
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.output_parsers.transform import (
BaseCumulativeTransformOutputParser,
BaseTransformOutputParser,
)
from langchain_core.output_parsers.xml import XMLOutputParser
__all__ = [
"BaseLLMOutputParser",
"BaseGenerationOutputParser",
"BaseOutputParser",
"ListOutputParser",
"CommaSeparatedListOutputParser",
"NumberedListOutputParser",
"MarkdownListOutputParser",
"StrOutputParser",
"BaseTransformOutputParser",
"BaseCumulativeTransformOutputParser",
"SimpleJsonOutputParser",
"XMLOutputParser",
"JsonOutputParser",
"PydanticOutputParser",
"JsonOutputToolsParser",
"JsonOutputKeyToolsParser",
"PydanticToolsParser",
]
| |
153952
|
import json
from typing import Annotated, Generic, Optional
import pydantic
from pydantic import SkipValidation
from typing_extensions import override
from langchain_core.exceptions import OutputParserException
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.outputs import Generation
from langchain_core.utils.pydantic import (
PYDANTIC_MAJOR_VERSION,
PydanticBaseModel,
TBaseModel,
)
class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
"""Parse an output using a pydantic model."""
pydantic_object: Annotated[type[TBaseModel], SkipValidation()] # type: ignore
"""The pydantic model to parse."""
def _parse_obj(self, obj: dict) -> TBaseModel:
if PYDANTIC_MAJOR_VERSION == 2:
try:
if issubclass(self.pydantic_object, pydantic.BaseModel):
return self.pydantic_object.model_validate(obj)
elif issubclass(self.pydantic_object, pydantic.v1.BaseModel):
return self.pydantic_object.parse_obj(obj)
else:
msg = f"Unsupported model version for PydanticOutputParser: \
{self.pydantic_object.__class__}"
raise OutputParserException(msg)
except (pydantic.ValidationError, pydantic.v1.ValidationError) as e:
raise self._parser_exception(e, obj) from e
else: # pydantic v1
try:
return self.pydantic_object.parse_obj(obj)
except pydantic.ValidationError as e:
raise self._parser_exception(e, obj) from e
def _parser_exception(
self, e: Exception, json_object: dict
) -> OutputParserException:
json_string = json.dumps(json_object)
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_string}. Got: {e}"
return OutputParserException(msg, llm_output=json_string)
def parse_result(
self, result: list[Generation], *, partial: bool = False
) -> Optional[TBaseModel]:
"""Parse the result of an LLM call to a pydantic object.
Args:
result: The result of the LLM call.
partial: Whether to parse partial JSON objects.
If True, the output will be a JSON object containing
all the keys that have been returned so far.
Defaults to False.
Returns:
The parsed pydantic object.
"""
try:
json_object = super().parse_result(result)
return self._parse_obj(json_object)
except OutputParserException as e:
if partial:
return None
raise e
def parse(self, text: str) -> TBaseModel:
"""Parse the output of an LLM call to a pydantic object.
Args:
text: The output of the LLM call.
Returns:
The parsed pydantic object.
"""
return super().parse(text)
def get_format_instructions(self) -> str:
"""Return the format instructions for the JSON output.
Returns:
The format instructions for the JSON output.
"""
# Copy schema to avoid altering original Pydantic schema.
schema = dict(self.pydantic_object.model_json_schema().items())
# Remove extraneous fields.
reduced_schema = schema
if "title" in reduced_schema:
del reduced_schema["title"]
if "type" in reduced_schema:
del reduced_schema["type"]
# Ensure json in context is well-formed with double quotes.
schema_str = json.dumps(reduced_schema, ensure_ascii=False)
return _PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
@property
def _type(self) -> str:
return "pydantic"
@property
@override
def OutputType(self) -> type[TBaseModel]:
"""Return the pydantic model."""
return self.pydantic_object
PydanticOutputParser.model_rebuild()
_PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
```
{schema}
```""" # noqa: E501
# Re-exporting types for backwards compatibility
__all__ = [
"PydanticBaseModel",
"PydanticOutputParser",
"TBaseModel",
]
| |
153953
|
# flake8: noqa
JSON_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
```
{schema}
```"""
| |
153956
|
from __future__ import annotations
import json
from json import JSONDecodeError
from typing import Annotated, Any, Optional, TypeVar, Union
import jsonpatch # type: ignore[import]
import pydantic
from pydantic import SkipValidation
from langchain_core.exceptions import OutputParserException
from langchain_core.output_parsers.format_instructions import JSON_FORMAT_INSTRUCTIONS
from langchain_core.output_parsers.transform import BaseCumulativeTransformOutputParser
from langchain_core.outputs import Generation
from langchain_core.utils.json import (
parse_and_check_json_markdown,
parse_json_markdown,
parse_partial_json,
)
from langchain_core.utils.pydantic import PYDANTIC_MAJOR_VERSION
if PYDANTIC_MAJOR_VERSION < 2:
PydanticBaseModel = pydantic.BaseModel
else:
from pydantic.v1 import BaseModel
# Union type needs to be last assignment to PydanticBaseModel to make mypy happy.
PydanticBaseModel = Union[BaseModel, pydantic.BaseModel] # type: ignore
TBaseModel = TypeVar("TBaseModel", bound=PydanticBaseModel)
class JsonOutputParser(BaseCumulativeTransformOutputParser[Any]):
"""Parse the output of an LLM call to a JSON object.
When used in streaming mode, it will yield partial JSON objects containing
all the keys that have been returned so far.
In streaming, if `diff` is set to `True`, yields JSONPatch operations
describing the difference between the previous and the current object.
"""
pydantic_object: Annotated[Optional[type[TBaseModel]], SkipValidation()] = None # type: ignore
"""The Pydantic object to use for validation.
If None, no validation is performed."""
def _diff(self, prev: Optional[Any], next: Any) -> Any:
return jsonpatch.make_patch(prev, next).patch
def _get_schema(self, pydantic_object: type[TBaseModel]) -> dict[str, Any]:
if issubclass(pydantic_object, pydantic.BaseModel):
return pydantic_object.model_json_schema()
elif issubclass(pydantic_object, pydantic.v1.BaseModel):
return pydantic_object.schema()
def parse_result(self, result: list[Generation], *, partial: bool = False) -> Any:
"""Parse the result of an LLM call to a JSON object.
Args:
result: The result of the LLM call.
partial: Whether to parse partial JSON objects.
If True, the output will be a JSON object containing
all the keys that have been returned so far.
If False, the output will be the full JSON object.
Default is False.
Returns:
The parsed JSON object.
Raises:
OutputParserException: If the output is not valid JSON.
"""
text = result[0].text
text = text.strip()
if partial:
try:
return parse_json_markdown(text)
except JSONDecodeError:
return None
else:
try:
return parse_json_markdown(text)
except JSONDecodeError as e:
msg = f"Invalid json output: {text}"
raise OutputParserException(msg, llm_output=text) from e
def parse(self, text: str) -> Any:
"""Parse the output of an LLM call to a JSON object.
Args:
text: The output of the LLM call.
Returns:
The parsed JSON object.
"""
return self.parse_result([Generation(text=text)])
def get_format_instructions(self) -> str:
"""Return the format instructions for the JSON output.
Returns:
The format instructions for the JSON output.
"""
if self.pydantic_object is None:
return "Return a JSON object."
else:
# Copy schema to avoid altering original Pydantic schema.
schema = dict(self._get_schema(self.pydantic_object).items())
# Remove extraneous fields.
reduced_schema = schema
if "title" in reduced_schema:
del reduced_schema["title"]
if "type" in reduced_schema:
del reduced_schema["type"]
# Ensure json in context is well-formed with double quotes.
schema_str = json.dumps(reduced_schema)
return JSON_FORMAT_INSTRUCTIONS.format(schema=schema_str)
@property
def _type(self) -> str:
return "simple_json_output_parser"
# For backwards compatibility
SimpleJsonOutputParser = JsonOutputParser
parse_partial_json = parse_partial_json
parse_and_check_json_markdown = parse_and_check_json_markdown
| |
153958
|
class BaseOutputParser(
BaseLLMOutputParser, RunnableSerializable[LanguageModelOutput, T]
):
"""Base class to parse the output of an LLM call.
Output parsers help structure language model responses.
Example:
.. code-block:: python
class BooleanOutputParser(BaseOutputParser[bool]):
true_val: str = "YES"
false_val: str = "NO"
def parse(self, text: str) -> bool:
cleaned_text = text.strip().upper()
if cleaned_text not in (self.true_val.upper(), self.false_val.upper()):
raise OutputParserException(
f"BooleanOutputParser expected output value to either be "
f"{self.true_val} or {self.false_val} (case-insensitive). "
f"Received {cleaned_text}."
)
return cleaned_text == self.true_val.upper()
@property
def _type(self) -> str:
return "boolean_output_parser"
""" # noqa: E501
@property
@override
def InputType(self) -> Any:
"""Return the input type for the parser."""
return Union[str, AnyMessage]
@property
@override
def OutputType(self) -> type[T]:
"""Return the output type for the parser.
This property is inferred from the first type argument of the class.
Raises:
TypeError: If the class doesn't have an inferable OutputType.
"""
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if "args" in metadata and len(metadata["args"]) > 0:
return metadata["args"][0]
msg = (
f"Runnable {self.__class__.__name__} doesn't have an inferable OutputType. "
"Override the OutputType property to specify the output type."
)
raise TypeError(msg)
def invoke(
self,
input: Union[str, BaseMessage],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> T:
if isinstance(input, BaseMessage):
return self._call_with_config(
lambda inner_input: self.parse_result(
[ChatGeneration(message=inner_input)]
),
input,
config,
run_type="parser",
)
else:
return self._call_with_config(
lambda inner_input: self.parse_result([Generation(text=inner_input)]),
input,
config,
run_type="parser",
)
async def ainvoke(
self,
input: Union[str, BaseMessage],
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> T:
if isinstance(input, BaseMessage):
return await self._acall_with_config(
lambda inner_input: self.aparse_result(
[ChatGeneration(message=inner_input)]
),
input,
config,
run_type="parser",
)
else:
return await self._acall_with_config(
lambda inner_input: self.aparse_result([Generation(text=inner_input)]),
input,
config,
run_type="parser",
)
def parse_result(self, result: list[Generation], *, partial: bool = False) -> T:
"""Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which
is assumed to be the highest-likelihood Generation.
Args:
result: A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
for parsers that can parse partial results. Default is False.
Returns:
Structured output.
"""
return self.parse(result[0].text)
@abstractmethod
def parse(self, text: str) -> T:
"""Parse a single string model output into some structure.
Args:
text: String output of a language model.
Returns:
Structured output.
"""
async def aparse_result(
self, result: list[Generation], *, partial: bool = False
) -> T:
"""Async parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which
is assumed to be the highest-likelihood Generation.
Args:
result: A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
partial: Whether to parse the output as a partial result. This is useful
for parsers that can parse partial results. Default is False.
Returns:
Structured output.
"""
return await run_in_executor(None, self.parse_result, result, partial=partial)
async def aparse(self, text: str) -> T:
"""Async parse a single string model output into some structure.
Args:
text: String output of a language model.
Returns:
Structured output.
"""
return await run_in_executor(None, self.parse, text)
# TODO: rename 'completion' -> 'text'.
def parse_with_prompt(self, completion: str, prompt: PromptValue) -> Any:
"""Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Args:
completion: String output of a language model.
prompt: Input PromptValue.
Returns:
Structured output.
"""
return self.parse(completion)
def get_format_instructions(self) -> str:
"""Instructions on how the LLM output should be formatted."""
raise NotImplementedError
@property
def _type(self) -> str:
"""Return the output parser type for serialization."""
msg = (
f"_type property is not implemented in class {self.__class__.__name__}."
" This is required for serialization."
)
raise NotImplementedError(msg)
def dict(self, **kwargs: Any) -> dict:
"""Return dictionary representation of output parser."""
output_parser_dict = super().dict(**kwargs)
with contextlib.suppress(NotImplementedError):
output_parser_dict["_type"] = self._type
return output_parser_dict
| |
153963
|
class FewShotChatMessagePromptTemplate(
BaseChatPromptTemplate, _FewShotPromptTemplateMixin
):
"""Chat prompt template that supports few-shot examples.
The high level structure of produced by this prompt template is a list of messages
consisting of prefix message(s), example message(s), and suffix message(s).
This structure enables creating a conversation with intermediate examples like:
System: You are a helpful AI Assistant
Human: What is 2+2?
AI: 4
Human: What is 2+3?
AI: 5
Human: What is 4+4?
This prompt template can be used to generate a fixed list of examples or else
to dynamically select examples based on the input.
Examples:
Prompt template with a fixed list of examples (matching the sample
conversation above):
.. code-block:: python
from langchain_core.prompts import (
FewShotChatMessagePromptTemplate,
ChatPromptTemplate
)
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
]
example_prompt = ChatPromptTemplate.from_messages(
[('human', '{input}'), ('ai', '{output}')]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
examples=examples,
# This is a prompt template used to format each individual example.
example_prompt=example_prompt,
)
final_prompt = ChatPromptTemplate.from_messages(
[
('system', 'You are a helpful AI Assistant'),
few_shot_prompt,
('human', '{input}'),
]
)
final_prompt.format(input="What is 4+4?")
Prompt template with dynamically selected examples:
.. code-block:: python
from langchain_core.prompts import SemanticSimilarityExampleSelector
from langchain_core.embeddings import OpenAIEmbeddings
from langchain_core.vectorstores import Chroma
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
{"input": "2+4", "output": "6"},
# ...
]
to_vectorize = [
" ".join(example.values())
for example in examples
]
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(
to_vectorize, embeddings, metadatas=examples
)
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore
)
from langchain_core import SystemMessage
from langchain_core.prompts import HumanMessagePromptTemplate
from langchain_core.prompts.few_shot import FewShotChatMessagePromptTemplate
few_shot_prompt = FewShotChatMessagePromptTemplate(
# Which variable(s) will be passed to the example selector.
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=(
HumanMessagePromptTemplate.from_template("{input}")
+ AIMessagePromptTemplate.from_template("{output}")
),
)
# Define the overall prompt.
final_prompt = (
SystemMessagePromptTemplate.from_template(
"You are a helpful AI Assistant"
)
+ few_shot_prompt
+ HumanMessagePromptTemplate.from_template("{input}")
)
# Show the prompt
print(final_prompt.format_messages(input="What's 3+3?")) # noqa: T201
# Use within an LLM
from langchain_core.chat_models import ChatAnthropic
chain = final_prompt | ChatAnthropic(model="claude-3-haiku-20240307")
chain.invoke({"input": "What's 3+3?"})
"""
input_variables: list[str] = Field(default_factory=list)
"""A list of the names of the variables the prompt template will use
to pass to the example_selector, if provided."""
example_prompt: Union[BaseMessagePromptTemplate, BaseChatPromptTemplate]
"""The class to format each example."""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether or not the class is serializable."""
return False
model_config = ConfigDict(
arbitrary_types_allowed=True,
extra="forbid",
)
def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Format kwargs into a list of messages.
Args:
**kwargs: keyword arguments to use for filling in templates in messages.
Returns:
A list of formatted messages with all template variables filled in.
"""
# Get the examples to use.
examples = self._get_examples(**kwargs)
examples = [
{k: e[k] for k in self.example_prompt.input_variables} for e in examples
]
# Format the examples.
messages = [
message
for example in examples
for message in self.example_prompt.format_messages(**example)
]
return messages
async def aformat_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Async format kwargs into a list of messages.
Args:
**kwargs: keyword arguments to use for filling in templates in messages.
Returns:
A list of formatted messages with all template variables filled in.
"""
# Get the examples to use.
examples = await self._aget_examples(**kwargs)
examples = [
{k: e[k] for k in self.example_prompt.input_variables} for e in examples
]
# Format the examples.
messages = [
message
for example in examples
for message in await self.example_prompt.aformat_messages(**example)
]
return messages
def format(self, **kwargs: Any) -> str:
"""Format the prompt with inputs generating a string.
Use this method to generate a string representation of a prompt consisting
of chat messages.
Useful for feeding into a string-based completion language model or debugging.
Args:
**kwargs: keyword arguments to use for formatting.
Returns:
A string representation of the prompt
"""
messages = self.format_messages(**kwargs)
return get_buffer_string(messages)
async def aformat(self, **kwargs: Any) -> str:
"""Async format the prompt with inputs generating a string.
Use this method to generate a string representation of a prompt consisting
of chat messages.
Useful for feeding into a string-based completion language model or debugging.
Args:
**kwargs: keyword arguments to use for formatting.
Returns:
A string representation of the prompt
"""
messages = await self.aformat_messages(**kwargs)
return get_buffer_string(messages)
def pretty_repr(self, html: bool = False) -> str:
"""Return a pretty representation of the prompt template.
Args:
html: Whether or not to return an HTML formatted string.
Returns:
A pretty representation of the prompt template.
"""
raise NotImplementedError
| |
153968
|
class ChatPromptTemplate(BaseChatPromptTemplate):
"""Prompt template for chat models.
Use to create flexible templated prompts for chat models.
Examples:
.. versionchanged:: 0.2.24
You can pass any Message-like formats supported by
``ChatPromptTemplate.from_messages()`` directly to ``ChatPromptTemplate()``
init.
.. code-block:: python
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate([
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
])
prompt_value = template.invoke(
{
"name": "Bob",
"user_input": "What is your name?"
}
)
# Output:
# ChatPromptValue(
# messages=[
# SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
# HumanMessage(content='Hello, how are you doing?'),
# AIMessage(content="I'm doing well, thanks!"),
# HumanMessage(content='What is your name?')
# ]
#)
Messages Placeholder:
.. code-block:: python
# In addition to Human/AI/Tool/Function messages,
# you can initialize the template with a MessagesPlaceholder
# either using the class directly or with the shorthand tuple syntax:
template = ChatPromptTemplate([
("system", "You are a helpful AI bot."),
# Means the template will receive an optional list of messages under
# the "conversation" key
("placeholder", "{conversation}")
# Equivalently:
# MessagesPlaceholder(variable_name="conversation", optional=True)
])
prompt_value = template.invoke(
{
"conversation": [
("human", "Hi!"),
("ai", "How can I assist you today?"),
("human", "Can you make me an ice cream sundae?"),
("ai", "No.")
]
}
)
# Output:
# ChatPromptValue(
# messages=[
# SystemMessage(content='You are a helpful AI bot.'),
# HumanMessage(content='Hi!'),
# AIMessage(content='How can I assist you today?'),
# HumanMessage(content='Can you make me an ice cream sundae?'),
# AIMessage(content='No.'),
# ]
#)
Single-variable template:
If your prompt has only a single input variable (i.e., 1 instance of "{variable_nams}"),
and you invoke the template with a non-dict object, the prompt template will
inject the provided argument into that variable location.
.. code-block:: python
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate([
("system", "You are a helpful AI bot. Your name is Carl."),
("human", "{user_input}"),
])
prompt_value = template.invoke("Hello, there!")
# Equivalent to
# prompt_value = template.invoke({"user_input": "Hello, there!"})
# Output:
# ChatPromptValue(
# messages=[
# SystemMessage(content='You are a helpful AI bot. Your name is Carl.'),
# HumanMessage(content='Hello, there!'),
# ]
# )
""" # noqa: E501
messages: Annotated[list[MessageLike], SkipValidation()]
"""List of messages consisting of either message prompt templates or messages."""
validate_template: bool = False
"""Whether or not to try validating the template."""
def __init__(
self,
messages: Sequence[MessageLikeRepresentation],
*,
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string",
**kwargs: Any,
) -> None:
"""Create a chat prompt template from a variety of message formats.
Args:
messages: sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
template_format: format of the template. Defaults to "f-string".
input_variables: A list of the names of the variables whose values are
required as inputs to the prompt.
optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional. These variables are auto inferred
from the prompt and user need not provide them.
partial_variables: A dictionary of the partial variables the prompt
template carries. Partial variables populate the template so that you
don't need to pass them in every time you call the prompt.
validate_template: Whether to validate the template.
input_types: A dictionary of the types of the variables the prompt template
expects. If not provided, all variables are assumed to be strings.
Returns:
A chat prompt template.
Examples:
Instantiation from a list of message templates:
.. code-block:: python
template = ChatPromptTemplate([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
.. code-block:: python
template = ChatPromptTemplate([
SystemMessage(content="hello"),
("human", "Hello, how are you?"),
])
"""
_messages = [
_convert_to_message(message, template_format) for message in messages
]
# Automatically infer input variables from messages
input_vars: set[str] = set()
optional_variables: set[str] = set()
partial_vars: dict[str, Any] = {}
for _message in _messages:
if isinstance(_message, MessagesPlaceholder) and _message.optional:
partial_vars[_message.variable_name] = []
optional_variables.add(_message.variable_name)
elif isinstance(
_message, (BaseChatPromptTemplate, BaseMessagePromptTemplate)
):
input_vars.update(_message.input_variables)
kwargs = {
"input_variables": sorted(input_vars),
"optional_variables": sorted(optional_variables),
"partial_variables": partial_vars,
**kwargs,
}
cast(type[ChatPromptTemplate], super()).__init__(messages=_messages, **kwargs)
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "prompts", "chat"]
def __add__(self, other: Any) -> ChatPromptTemplate:
"""Combine two prompt templates.
Args:
other: Another prompt template.
Returns:
Combined prompt template.
"""
# Allow for easy combining
if isinstance(other, ChatPromptTemplate):
return ChatPromptTemplate(messages=self.messages + other.messages) # type: ignore[call-arg]
elif isinstance(
other, (BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate)
):
return ChatPromptTemplate(messages=self.messages + [other]) # type: ignore[call-arg]
elif isinstance(other, (list, tuple)):
_other = ChatPromptTemplate.from_messages(other)
return ChatPromptTemplate(messages=self.messages + _other.messages) # type: ignore[call-arg]
elif isinstance(other, str):
prompt = HumanMessagePromptTemplate.from_template(other)
return ChatPromptTemplate(messages=self.messages + [prompt]) # type: ignore[call-arg]
else:
msg = f"Unsupported operand type for +: {type(other)}"
raise NotImplementedError(msg)
| |
153969
|
@model_validator(mode="before")
@classmethod
def validate_input_variables(cls, values: dict) -> Any:
"""Validate input variables.
If input_variables is not set, it will be set to the union of
all input variables in the messages.
Args:
values: values to validate.
Returns:
Validated values.
Raises:
ValueError: If input variables do not match.
"""
messages = values["messages"]
input_vars = set()
optional_variables = set()
input_types: dict[str, Any] = values.get("input_types", {})
for message in messages:
if isinstance(message, (BaseMessagePromptTemplate, BaseChatPromptTemplate)):
input_vars.update(message.input_variables)
if isinstance(message, MessagesPlaceholder):
if "partial_variables" not in values:
values["partial_variables"] = {}
if (
message.optional
and message.variable_name not in values["partial_variables"]
):
values["partial_variables"][message.variable_name] = []
optional_variables.add(message.variable_name)
if message.variable_name not in input_types:
input_types[message.variable_name] = list[AnyMessage]
if "partial_variables" in values:
input_vars = input_vars - set(values["partial_variables"])
if optional_variables:
input_vars = input_vars - optional_variables
if "input_variables" in values and values.get("validate_template"):
if input_vars != set(values["input_variables"]):
msg = (
"Got mismatched input_variables. "
f"Expected: {input_vars}. "
f"Got: {values['input_variables']}"
)
raise ValueError(msg)
else:
values["input_variables"] = sorted(input_vars)
if optional_variables:
values["optional_variables"] = sorted(optional_variables)
values["input_types"] = input_types
return values
@classmethod
def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:
"""Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Args:
template: template string
**kwargs: keyword arguments to pass to the constructor.
Returns:
A new instance of this class.
"""
prompt_template = PromptTemplate.from_template(template, **kwargs)
message = HumanMessagePromptTemplate(prompt=prompt_template)
return cls.from_messages([message])
@classmethod
@deprecated("0.0.1", alternative="from_messages classmethod", pending=True)
def from_role_strings(
cls, string_messages: list[tuple[str, str]]
) -> ChatPromptTemplate:
"""Create a chat prompt template from a list of (role, template) tuples.
Args:
string_messages: list of (role, template) tuples.
Returns:
a chat prompt template.
"""
return cls( # type: ignore[call-arg]
messages=[
ChatMessagePromptTemplate.from_template(template, role=role)
for role, template in string_messages
]
)
@classmethod
@deprecated("0.0.1", alternative="from_messages classmethod", pending=True)
def from_strings(
cls, string_messages: list[tuple[type[BaseMessagePromptTemplate], str]]
) -> ChatPromptTemplate:
"""Create a chat prompt template from a list of (role class, template) tuples.
Args:
string_messages: list of (role class, template) tuples.
Returns:
a chat prompt template.
"""
return cls.from_messages(string_messages)
@classmethod
def from_messages(
cls,
messages: Sequence[MessageLikeRepresentation],
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string",
) -> ChatPromptTemplate:
"""Create a chat prompt template from a variety of message formats.
Examples:
Instantiation from a list of message templates:
.. code-block:: python
template = ChatPromptTemplate.from_messages([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
.. code-block:: python
template = ChatPromptTemplate.from_messages([
SystemMessage(content="hello"),
("human", "Hello, how are you?"),
])
Args:
messages: sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
template_format: format of the template. Defaults to "f-string".
Returns:
a chat prompt template.
"""
return cls(messages, template_format=template_format)
def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Format the chat template into a list of finalized messages.
Args:
**kwargs: keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns:
list of formatted messages.
"""
kwargs = self._merge_partial_and_user_variables(**kwargs)
result = []
for message_template in self.messages:
if isinstance(message_template, BaseMessage):
result.extend([message_template])
elif isinstance(
message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate)
):
message = message_template.format_messages(**kwargs)
result.extend(message)
else:
msg = f"Unexpected input: {message_template}"
raise ValueError(msg)
return result
async def aformat_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Async format the chat template into a list of finalized messages.
Args:
**kwargs: keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns:
list of formatted messages.
Raises:
ValueError: If unexpected input.
"""
kwargs = self._merge_partial_and_user_variables(**kwargs)
result = []
for message_template in self.messages:
if isinstance(message_template, BaseMessage):
result.extend([message_template])
elif isinstance(
message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate)
):
message = await message_template.aformat_messages(**kwargs)
result.extend(message)
else:
msg = f"Unexpected input: {message_template}"
raise ValueError(msg)
return result
def partial(self, **kwargs: Any) -> ChatPromptTemplate:
"""Get a new ChatPromptTemplate with some input variables already filled in.
Args:
**kwargs: keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns:
A new ChatPromptTemplate.
Example:
.. code-block:: python
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
[
("system", "You are an AI assistant named {name}."),
("human", "Hi I'm {user}"),
("ai", "Hi there, {user}, I'm {name}."),
("human", "{input}"),
]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
"""
prompt_dict = self.__dict__.copy()
prompt_dict["input_variables"] = list(
set(self.input_variables).difference(kwargs)
)
prompt_dict["partial_variables"] = {**self.partial_variables, **kwargs}
return type(self)(**prompt_dict)
def append(self, message: MessageLikeRepresentation) -> None:
"""Append a message to the end of the chat template.
Args:
message: representation of a message to append.
"""
self.messages.append(_convert_to_message(message))
def extend(self, messages: Sequence[MessageLikeRepresentation]) -> None:
"""Extend the chat template with a sequence of messages.
Args:
messages: sequence of message representations to append.
"""
self.messages.extend([_convert_to_message(message) for message in messages])
@overload
def __getitem__(self, index: int) -> MessageLike: ...
@overload
def __getitem__(self, index: slice) -> ChatPromptTemplate: ...
def __getitem__(
self, index: Union[int, slice]
) -> Union[MessageLike, ChatPromptTemplate]:
"""Use to index into the chat template."""
if isinstance(index, slice):
start, stop, step = index.indices(len(self.messages))
messages = self.messages[start:stop:step]
return ChatPromptTemplate.from_messages(messages)
else:
return self.messages[index]
def __len__(self) -> int:
"""Get the length of the chat template."""
return len(self.messages)
| |
153973
|
"""Prompt schema definition."""
from __future__ import annotations
import warnings
from pathlib import Path
from typing import Any, Literal, Optional, Union
from pydantic import BaseModel, model_validator
from langchain_core.prompts.string import (
DEFAULT_FORMATTER_MAPPING,
StringPromptTemplate,
check_valid_template,
get_template_variables,
mustache_schema,
)
from langchain_core.runnables.config import RunnableConfig
class PromptTemplate(StringPromptTemplate):
"""Prompt template for a language model.
A prompt template consists of a string template. It accepts a set of parameters
from the user that can be used to generate a prompt for a language model.
The template can be formatted using either f-strings (default) or jinja2 syntax.
*Security warning*:
Prefer using `template_format="f-string"` instead of
`template_format="jinja2"`, or make sure to NEVER accept jinja2 templates
from untrusted sources as they may lead to arbitrary Python code execution.
As of LangChain 0.0.329, Jinja2 templates will be rendered using
Jinja2's SandboxedEnvironment by default. This sand-boxing should
be treated as a best-effort approach rather than a guarantee of security,
as it is an opt-out rather than opt-in approach.
Despite the sand-boxing, we recommend to never use jinja2 templates
from untrusted sources.
Example:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
# Instantiation using from_template (recommended)
prompt = PromptTemplate.from_template("Say {foo}")
prompt.format(foo="bar")
# Instantiation using initializer
prompt = PromptTemplate(template="Say {foo}")
"""
@property
def lc_attributes(self) -> dict[str, Any]:
return {
"template_format": self.template_format,
}
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "prompts", "prompt"]
template: str
"""The prompt template."""
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string"
"""The format of the prompt template.
Options are: 'f-string', 'mustache', 'jinja2'."""
validate_template: bool = False
"""Whether or not to try validating the template."""
@model_validator(mode="before")
@classmethod
def pre_init_validation(cls, values: dict) -> Any:
"""Check that template and input variables are consistent."""
if values.get("template") is None:
# Will let pydantic fail with a ValidationError if template
# is not provided.
return values
# Set some default values based on the field defaults
values.setdefault("template_format", "f-string")
values.setdefault("partial_variables", {})
if values.get("validate_template"):
if values["template_format"] == "mustache":
msg = "Mustache templates cannot be validated."
raise ValueError(msg)
if "input_variables" not in values:
msg = "Input variables must be provided to validate the template."
raise ValueError(msg)
all_inputs = values["input_variables"] + list(values["partial_variables"])
check_valid_template(
values["template"], values["template_format"], all_inputs
)
if values["template_format"]:
values["input_variables"] = [
var
for var in get_template_variables(
values["template"], values["template_format"]
)
if var not in values["partial_variables"]
]
return values
def get_input_schema(self, config: RunnableConfig | None = None) -> type[BaseModel]:
"""Get the input schema for the prompt.
Args:
config: The runnable configuration.
Returns:
The input schema for the prompt.
"""
if self.template_format != "mustache":
return super().get_input_schema(config)
return mustache_schema(self.template)
def __add__(self, other: Any) -> PromptTemplate:
"""Override the + operator to allow for combining prompt templates."""
# Allow for easy combining
if isinstance(other, PromptTemplate):
if self.template_format != "f-string":
msg = "Adding prompt templates only supported for f-strings."
raise ValueError(msg)
if other.template_format != "f-string":
msg = "Adding prompt templates only supported for f-strings."
raise ValueError(msg)
input_variables = list(
set(self.input_variables) | set(other.input_variables)
)
template = self.template + other.template
# If any do not want to validate, then don't
validate_template = self.validate_template and other.validate_template
partial_variables = dict(self.partial_variables.items())
for k, v in other.partial_variables.items():
if k in partial_variables:
msg = "Cannot have same variable partialed twice."
raise ValueError(msg)
else:
partial_variables[k] = v
return PromptTemplate(
template=template,
input_variables=input_variables,
partial_variables=partial_variables,
template_format="f-string",
validate_template=validate_template,
)
elif isinstance(other, str):
prompt = PromptTemplate.from_template(other)
return self + prompt
else:
msg = f"Unsupported operand type for +: {type(other)}"
raise NotImplementedError(msg)
@property
def _prompt_type(self) -> str:
"""Return the prompt type key."""
return "prompt"
def format(self, **kwargs: Any) -> str:
"""Format the prompt with the inputs.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
A formatted string.
"""
kwargs = self._merge_partial_and_user_variables(**kwargs)
return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
@classmethod
def from_examples(
cls,
examples: list[str],
suffix: str,
input_variables: list[str],
example_separator: str = "\n\n",
prefix: str = "",
**kwargs: Any,
) -> PromptTemplate:
"""Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Args:
examples: List of examples to use in the prompt.
suffix: String to go after the list of examples. Should generally
set up the user's input.
input_variables: A list of variable names the final prompt template
will expect.
example_separator: The separator to use in between examples. Defaults
to two new line characters.
prefix: String that should go before any examples. Generally includes
examples. Default to an empty string.
Returns:
The final prompt generated.
"""
template = example_separator.join([prefix, *examples, suffix])
return cls(input_variables=input_variables, template=template, **kwargs)
@classmethod
def from_file(
cls,
template_file: Union[str, Path],
input_variables: Optional[list[str]] = None,
encoding: Optional[str] = None,
**kwargs: Any,
) -> PromptTemplate:
"""Load a prompt from a file.
Args:
template_file: The path to the file containing the prompt template.
input_variables: [DEPRECATED] A list of variable names the final prompt
template will expect. Defaults to None.
encoding: The encoding system for opening the template file.
If not provided, will use the OS default.
input_variables is ignored as from_file now delegates to from_template().
Returns:
The prompt loaded from the file.
"""
with open(str(template_file), encoding=encoding) as f:
template = f.read()
if input_variables:
warnings.warn(
"`input_variables' is deprecated and ignored.",
DeprecationWarning,
stacklevel=2,
)
return cls.from_template(template=template, **kwargs)
| |
153974
|
@classmethod
def from_template(
cls,
template: str,
*,
template_format: str = "f-string",
partial_variables: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> PromptTemplate:
"""Load a prompt template from a template.
*Security warning*:
Prefer using `template_format="f-string"` instead of
`template_format="jinja2"`, or make sure to NEVER accept jinja2 templates
from untrusted sources as they may lead to arbitrary Python code execution.
As of LangChain 0.0.329, Jinja2 templates will be rendered using
Jinja2's SandboxedEnvironment by default. This sand-boxing should
be treated as a best-effort approach rather than a guarantee of security,
as it is an opt-out rather than opt-in approach.
Despite the sand-boxing, we recommend never using jinja2 templates
from untrusted sources.
Args:
template: The template to load.
template_format: The format of the template. Use `jinja2` for jinja2,
and `f-string` or None for f-strings.
Defaults to `f-string`.
partial_variables: A dictionary of variables that can be used to partially
fill in the template. For example, if the template is
`"{variable1} {variable2}"`, and `partial_variables` is
`{"variable1": "foo"}`, then the final prompt will be
`"foo {variable2}"`. Defaults to None.
kwargs: Any other arguments to pass to the prompt template.
Returns:
The prompt template loaded from the template.
"""
input_variables = get_template_variables(template, template_format)
_partial_variables = partial_variables or {}
if _partial_variables:
input_variables = [
var for var in input_variables if var not in _partial_variables
]
return cls(
input_variables=input_variables,
template=template,
template_format=template_format, # type: ignore[arg-type]
partial_variables=_partial_variables,
**kwargs,
)
| |
153979
|
"""Base class for all prompt templates, returning a prompt."""
input_variables: list[str]
"""A list of the names of the variables whose values are required as inputs to the
prompt."""
optional_variables: list[str] = Field(default=[])
"""optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional. These variables are auto inferred
from the prompt and user need not provide them."""
input_types: typing.Dict[str, Any] = Field(default_factory=dict, exclude=True) # noqa: UP006
"""A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings."""
output_parser: Optional[BaseOutputParser] = None
"""How to parse the output of calling an LLM on this formatted prompt."""
partial_variables: Mapping[str, Any] = Field(default_factory=dict)
"""A dictionary of the partial variables the prompt template carries.
Partial variables populate the template so that you don't need to
pass them in every time you call the prompt."""
metadata: Optional[typing.Dict[str, Any]] = None # noqa: UP006
"""Metadata to be used for tracing."""
tags: Optional[list[str]] = None
"""Tags to be used for tracing."""
@model_validator(mode="after")
def validate_variable_names(self) -> Self:
"""Validate variable names do not include restricted names."""
if "stop" in self.input_variables:
msg = (
"Cannot have an input variable named 'stop', as it is used internally,"
" please rename."
)
raise ValueError(
create_message(message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT)
)
if "stop" in self.partial_variables:
msg = (
"Cannot have an partial variable named 'stop', as it is used "
"internally, please rename."
)
raise ValueError(
create_message(message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT)
)
overall = set(self.input_variables).intersection(self.partial_variables)
if overall:
msg = f"Found overlapping input and partial variables: {overall}"
raise ValueError(
create_message(message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT)
)
return self
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object.
Returns ["langchain", "schema", "prompt_template"]."""
return ["langchain", "schema", "prompt_template"]
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether this class is serializable.
Returns True."""
return True
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@cached_property
def _serialized(self) -> dict[str, Any]:
return dumpd(self)
@property
@override
def OutputType(self) -> Any:
"""Return the output type of the prompt."""
return Union[StringPromptValue, ChatPromptValueConcrete]
def get_input_schema(
self, config: Optional[RunnableConfig] = None
) -> type[BaseModel]:
"""Get the input schema for the prompt.
Args:
config: RunnableConfig, configuration for the prompt.
Returns:
Type[BaseModel]: The input schema for the prompt.
"""
# This is correct, but pydantic typings/mypy don't think so.
required_input_variables = {
k: (self.input_types.get(k, str), ...) for k in self.input_variables
}
optional_input_variables = {
k: (self.input_types.get(k, str), None) for k in self.optional_variables
}
return create_model_v2(
"PromptInput",
field_definitions={**required_input_variables, **optional_input_variables},
)
def _validate_input(self, inner_input: Any) -> dict:
if not isinstance(inner_input, dict):
if len(self.input_variables) == 1:
var_name = self.input_variables[0]
inner_input = {var_name: inner_input}
else:
msg = (
f"Expected mapping type as input to {self.__class__.__name__}. "
f"Received {type(inner_input)}."
)
raise TypeError(
create_message(
message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT
)
)
missing = set(self.input_variables).difference(inner_input)
if missing:
msg = (
f"Input to {self.__class__.__name__} is missing variables {missing}. "
f" Expected: {self.input_variables}"
f" Received: {list(inner_input.keys())}"
)
example_key = missing.pop()
msg += (
f"\nNote: if you intended {{{example_key}}} to be part of the string"
" and not a variable, please escape it with double curly braces like: "
f"'{{{{{example_key}}}}}'."
)
raise KeyError(
create_message(message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT)
)
return inner_input
def _format_prompt_with_error_handling(self, inner_input: dict) -> PromptValue:
_inner_input = self._validate_input(inner_input)
return self.format_prompt(**_inner_input)
async def _aformat_prompt_with_error_handling(
self, inner_input: dict
) -> PromptValue:
_inner_input = self._validate_input(inner_input)
return await self.aformat_prompt(**_inner_input)
def invoke(
self, input: dict, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> PromptValue:
"""Invoke the prompt.
Args:
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
Returns:
PromptValue: The output of the prompt.
"""
config = ensure_config(config)
if self.metadata:
config["metadata"] = {**config["metadata"], **self.metadata}
if self.tags:
config["tags"] = config["tags"] + self.tags
return self._call_with_config(
self._format_prompt_with_error_handling,
input,
config,
run_type="prompt",
serialized=self._serialized,
)
async def ainvoke(
self, input: dict, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> PromptValue:
"""Async invoke the prompt.
Args:
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
Returns:
PromptValue: The output of the prompt.
"""
config = ensure_config(config)
if self.metadata:
config["metadata"].update(self.metadata)
if self.tags:
config["tags"].extend(self.tags)
return await self._acall_with_config(
self._aformat_prompt_with_error_handling,
input,
config,
run_type="prompt",
serialized=self._serialized,
)
@abstractmethod
def format_prompt(self, **kwargs: Any) -> PromptValue:
"""Create Prompt Value.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
PromptValue: The output of the prompt.
"""
async def aformat_prompt(self, **kwargs: Any) -> PromptValue:
"""Async create Prompt Value.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
PromptValue: The output of the prompt.
"""
return self.format_prompt(**kwargs)
def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:
"""Return a partial of the prompt template.
Args:
kwargs: Union[str, Callable[[], str], partial variables to set.
Returns:
BasePromptTemplate: A partial of the prompt template.
"""
prompt_dict = self.__dict__.copy()
prompt_dict["input_variables"] = list(
set(self.input_variables).difference(kwargs)
)
prompt_dict["partial_variables"] = {**self.partial_variables, **kwargs}
return type(self)(**prompt_dict)
def _merge_partial_and_user_variables(self, **kwargs: Any) -> dict[str, Any]:
# Get partial params:
partial_kwargs = {
k: v if not callable(v) else v() for k, v in self.partial_variables.items()
}
return {**partial_kwargs, **kwargs}
@abstractmethod
def format(self, **kwargs: Any) -> FormatOutputType:
"""Format the prompt with the inputs.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
A formatted string.
Example:
.. code-block:: python
prompt.format(variable1="foo")
"""
async def aformat(self, **kwargs: Any) -> FormatOutputType:
"""Async format the prompt with the inputs.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
A formatted string.
Example:
.. code-block:: python
await prompt.aformat(variable1="foo")
"""
return self.format(**kwargs)
| |
153980
|
@property
def _prompt_type(self) -> str:
"""Return the prompt type key."""
raise NotImplementedError
def dict(self, **kwargs: Any) -> dict:
"""Return dictionary representation of prompt.
Args:
kwargs: Any additional arguments to pass to the dictionary.
Returns:
Dict: Dictionary representation of the prompt.
Raises:
NotImplementedError: If the prompt type is not implemented.
"""
prompt_dict = super().model_dump(**kwargs)
with contextlib.suppress(NotImplementedError):
prompt_dict["_type"] = self._prompt_type
return prompt_dict
def save(self, file_path: Union[Path, str]) -> None:
"""Save the prompt.
Args:
file_path: Path to directory to save prompt to.
Raises:
ValueError: If the prompt has partial variables.
ValueError: If the file path is not json or yaml.
NotImplementedError: If the prompt type is not implemented.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
"""
if self.partial_variables:
msg = "Cannot save prompt with partial variables."
raise ValueError(msg)
# Fetch dictionary to save
prompt_dict = self.dict()
if "_type" not in prompt_dict:
msg = f"Prompt {self} does not support saving."
raise NotImplementedError(msg)
# Convert file to Path object.
save_path = Path(file_path) if isinstance(file_path, str) else file_path
directory_path = save_path.parent
directory_path.mkdir(parents=True, exist_ok=True)
if save_path.suffix == ".json":
with open(file_path, "w") as f:
json.dump(prompt_dict, f, indent=4)
elif save_path.suffix.endswith((".yaml", ".yml")):
with open(file_path, "w") as f:
yaml.dump(prompt_dict, f, default_flow_style=False)
else:
msg = f"{save_path} must be json or yaml"
raise ValueError(msg)
def _get_document_info(doc: Document, prompt: BasePromptTemplate[str]) -> dict:
base_info = {"page_content": doc.page_content, **doc.metadata}
missing_metadata = set(prompt.input_variables).difference(base_info)
if len(missing_metadata) > 0:
required_metadata = [
iv for iv in prompt.input_variables if iv != "page_content"
]
msg = (
f"Document prompt requires documents to have metadata variables: "
f"{required_metadata}. Received document with missing metadata: "
f"{list(missing_metadata)}."
)
raise ValueError(
create_message(message=msg, error_code=ErrorCode.INVALID_PROMPT_INPUT)
)
return {k: base_info[k] for k in prompt.input_variables}
def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
"""Format a document into a string based on a prompt template.
First, this pulls information from the document from two sources:
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: Document, the page_content and metadata will be used to create
the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
Returns:
string of the document formatted.
Example:
.. code-block:: python
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
doc = Document(page_content="This is a joke", metadata={"page": "1"})
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"
"""
return prompt.format(**_get_document_info(doc, prompt))
async def aformat_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
"""Async format a document into a string based on a prompt template.
First, this pulls information from the document from two sources:
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: Document, the page_content and metadata will be used to create
the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
Returns:
string of the document formatted.
"""
return await prompt.aformat(**_get_document_info(doc, prompt))
| |
153989
|
class Document(BaseMedia):
"""Class for storing a piece of text and associated metadata.
Example:
.. code-block:: python
from langchain_core.documents import Document
document = Document(
page_content="Hello, world!",
metadata={"source": "https://example.com"}
)
"""
page_content: str
"""String text."""
type: Literal["Document"] = "Document"
def __init__(self, page_content: str, **kwargs: Any) -> None:
"""Pass page_content in as positional or named arg."""
# my-py is complaining that page_content is not defined on the base class.
# Here, we're relying on pydantic base class to handle the validation.
super().__init__(page_content=page_content, **kwargs) # type: ignore[call-arg]
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "schema", "document"]
def __str__(self) -> str:
"""Override __str__ to restrict it to page_content and metadata."""
# The format matches pydantic format for __str__.
#
# The purpose of this change is to make sure that user code that
# feeds Document objects directly into prompts remains unchanged
# due to the addition of the id field (or any other fields in the future).
#
# This override will likely be removed in the future in favor of
# a more general solution of formatting content directly inside the prompts.
if self.metadata:
return f"page_content='{self.page_content}' metadata={self.metadata}"
else:
return f"page_content='{self.page_content}'"
| |
153991
|
"""Module contains logic for indexing documents into vector stores."""
from __future__ import annotations
import hashlib
import json
import uuid
from collections.abc import AsyncIterable, AsyncIterator, Iterable, Iterator, Sequence
from itertools import islice
from typing import (
Any,
Callable,
Literal,
Optional,
TypedDict,
TypeVar,
Union,
cast,
)
from pydantic import model_validator
from langchain_core.document_loaders.base import BaseLoader
from langchain_core.documents import Document
from langchain_core.indexing.base import DocumentIndex, RecordManager
from langchain_core.vectorstores import VectorStore
# Magic UUID to use as a namespace for hashing.
# Used to try and generate a unique UUID for each document
# from hashing the document content and metadata.
NAMESPACE_UUID = uuid.UUID(int=1984)
T = TypeVar("T")
def _hash_string_to_uuid(input_string: str) -> uuid.UUID:
"""Hashes a string and returns the corresponding UUID."""
hash_value = hashlib.sha1(input_string.encode("utf-8")).hexdigest()
return uuid.uuid5(NAMESPACE_UUID, hash_value)
def _hash_nested_dict_to_uuid(data: dict[Any, Any]) -> uuid.UUID:
"""Hashes a nested dictionary and returns the corresponding UUID."""
serialized_data = json.dumps(data, sort_keys=True)
hash_value = hashlib.sha1(serialized_data.encode("utf-8")).hexdigest()
return uuid.uuid5(NAMESPACE_UUID, hash_value)
class _HashedDocument(Document):
"""A hashed document with a unique ID."""
uid: str
hash_: str
"""The hash of the document including content and metadata."""
content_hash: str
"""The hash of the document content."""
metadata_hash: str
"""The hash of the document metadata."""
@classmethod
def is_lc_serializable(cls) -> bool:
return False
@model_validator(mode="before")
@classmethod
def calculate_hashes(cls, values: dict[str, Any]) -> Any:
"""Root validator to calculate content and metadata hash."""
content = values.get("page_content", "")
metadata = values.get("metadata", {})
forbidden_keys = ("hash_", "content_hash", "metadata_hash")
for key in forbidden_keys:
if key in metadata:
msg = (
f"Metadata cannot contain key {key} as it "
f"is reserved for internal use."
)
raise ValueError(msg)
content_hash = str(_hash_string_to_uuid(content))
try:
metadata_hash = str(_hash_nested_dict_to_uuid(metadata))
except Exception as e:
msg = (
f"Failed to hash metadata: {e}. "
f"Please use a dict that can be serialized using json."
)
raise ValueError(msg) from e
values["content_hash"] = content_hash
values["metadata_hash"] = metadata_hash
values["hash_"] = str(_hash_string_to_uuid(content_hash + metadata_hash))
_uid = values.get("uid")
if _uid is None:
values["uid"] = values["hash_"]
return values
def to_document(self) -> Document:
"""Return a Document object."""
return Document(
id=self.uid,
page_content=self.page_content,
metadata=self.metadata,
)
@classmethod
def from_document(
cls, document: Document, *, uid: Optional[str] = None
) -> _HashedDocument:
"""Create a HashedDocument from a Document."""
return cls( # type: ignore[call-arg]
uid=uid, # type: ignore[arg-type]
page_content=document.page_content,
metadata=document.metadata,
)
def _batch(size: int, iterable: Iterable[T]) -> Iterator[list[T]]:
"""Utility batching function."""
it = iter(iterable)
while True:
chunk = list(islice(it, size))
if not chunk:
return
yield chunk
async def _abatch(size: int, iterable: AsyncIterable[T]) -> AsyncIterator[list[T]]:
"""Utility batching function."""
batch: list[T] = []
async for element in iterable:
if len(batch) < size:
batch.append(element)
if len(batch) >= size:
yield batch
batch = []
if batch:
yield batch
def _get_source_id_assigner(
source_id_key: Union[str, Callable[[Document], str], None],
) -> Callable[[Document], Union[str, None]]:
"""Get the source id from the document."""
if source_id_key is None:
return lambda doc: None
elif isinstance(source_id_key, str):
return lambda doc: doc.metadata[source_id_key]
elif callable(source_id_key):
return source_id_key
else:
msg = (
f"source_id_key should be either None, a string or a callable. "
f"Got {source_id_key} of type {type(source_id_key)}."
)
raise ValueError(msg)
def _deduplicate_in_order(
hashed_documents: Iterable[_HashedDocument],
) -> Iterator[_HashedDocument]:
"""Deduplicate a list of hashed documents while preserving order."""
seen: set[str] = set()
for hashed_doc in hashed_documents:
if hashed_doc.hash_ not in seen:
seen.add(hashed_doc.hash_)
yield hashed_doc
# PUBLIC API
class IndexingResult(TypedDict):
"""Return a detailed a breakdown of the result of the indexing operation."""
num_added: int
"""Number of added documents."""
num_updated: int
"""Number of updated documents because they were not up to date."""
num_deleted: int
"""Number of deleted documents."""
num_skipped: int
"""Number of skipped documents because they were already up to date."""
def index(
docs_source: Union[BaseLoader, Iterable[Document]],
record_manager: RecordManager,
vector_store: Union[VectorStore, DocumentIndex],
*,
batch_size: int = 100,
cleanup: Literal["incremental", "full", None] = None,
source_id_key: Union[str, Callable[[Document], str], None] = None,
cleanup_batch_size: int = 1_000,
force_update: bool = False,
upsert_kwargs: Optional[dict[str, Any]] = None,
) -> IndexingResult:
| |
154000
|
from importlib import metadata
from langchain_core._api.deprecation import warn_deprecated
## Create namespaces for pydantic v1 and v2.
# This code must stay at the top of the file before other modules may
# attempt to import pydantic since it adds pydantic_v1 and pydantic_v2 to sys.modules.
#
# This hack is done for the following reasons:
# * Langchain will attempt to remain compatible with both pydantic v1 and v2 since
# both dependencies and dependents may be stuck on either version of v1 or v2.
# * Creating namespaces for pydantic v1 and v2 should allow us to write code that
# unambiguously uses either v1 or v2 API.
# * This change is easier to roll out and roll back.
try:
from pydantic.v1 import * # noqa: F403
except ImportError:
from pydantic import * # type: ignore # noqa: F403
try:
_PYDANTIC_MAJOR_VERSION: int = int(metadata.version("pydantic").split(".")[0])
except metadata.PackageNotFoundError:
_PYDANTIC_MAJOR_VERSION = 0
warn_deprecated(
"0.3.0",
removal="1.0.0",
alternative="pydantic.v1 or pydantic",
message=(
"As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
"The langchain_core.pydantic_v1 module was a "
"compatibility shim for pydantic v1, and should no longer be used. "
"Please update the code to import from Pydantic directly.\n\n"
"For example, replace imports like: "
"`from langchain_core.pydantic_v1 import BaseModel`\n"
"with: `from pydantic import BaseModel`\n"
"or the v1 compatibility namespace if you are working in a code base "
"that has not been fully upgraded to pydantic 2 yet. "
"\tfrom pydantic.v1 import BaseModel\n"
),
)
| |
154001
|
from langchain_core._api import warn_deprecated
try:
from pydantic.v1.dataclasses import * # noqa: F403
except ImportError:
from pydantic.dataclasses import * # type: ignore # noqa: F403
warn_deprecated(
"0.3.0",
removal="1.0.0",
alternative="pydantic.v1 or pydantic",
message=(
"As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
"The langchain_core.pydantic_v1 module was a "
"compatibility shim for pydantic v1, and should no longer be used. "
"Please update the code to import from Pydantic directly.\n\n"
"For example, replace imports like: "
"`from langchain_core.pydantic_v1 import BaseModel`\n"
"with: `from pydantic import BaseModel`\n"
"or the v1 compatibility namespace if you are working in a code base "
"that has not been fully upgraded to pydantic 2 yet. "
"\tfrom pydantic.v1 import BaseModel\n"
),
)
| |
154002
|
from langchain_core._api import warn_deprecated
try:
from pydantic.v1.main import * # noqa: F403
except ImportError:
from pydantic.main import * # type: ignore # noqa: F403
warn_deprecated(
"0.3.0",
removal="1.0.0",
alternative="pydantic.v1 or pydantic",
message=(
"As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. "
"The langchain_core.pydantic_v1 module was a "
"compatibility shim for pydantic v1, and should no longer be used. "
"Please update the code to import from Pydantic directly.\n\n"
"For example, replace imports like: "
"`from langchain_core.pydantic_v1 import BaseModel`\n"
"with: `from pydantic import BaseModel`\n"
"or the v1 compatibility namespace if you are working in a code base "
"that has not been fully upgraded to pydantic 2 yet. "
"\tfrom pydantic.v1 import BaseModel\n"
),
)
| |
154043
|
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability ⚡
[](https://github.com/langchain-ai/langchain/releases)
[](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml)
[](https://github.com/langchain-ai/langchain/actions/workflows/test.yml)
[](https://pepy.tech/project/langchain)
[](https://opensource.org/licenses/MIT)
[](https://twitter.com/langchainai)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[](https://codespaces.new/langchain-ai/langchain)
[](https://star-history.com/#langchain-ai/langchain)
[](https://libraries.io/github/langchain-ai/langchain)
[](https://github.com/langchain-ai/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
`pip install langchain`
or
`pip install langsmith && conda install langchain -c conda-forge`
## 🤔 What is this?
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
This library aims to assist in the development of those types of applications. Common examples of these applications include:
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
- Reference (full API docs)
- Resources (high-level explanation of core concepts)
## 🚀 What can this help with?
There are five main areas that LangChain is designed to help with.
These are, in increasing order of complexity:
**📃 Models and Prompts:**
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with chat models and LLMs.
**🔗 Chains:**
Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
**📚 Retrieval Augmented Generation:**
Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
**🤖 Agents:**
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
**🧐 Evaluation:**
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
| |
154050
|
"""Global values and configuration that apply to all of LangChain."""
import warnings
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from langchain_core.caches import BaseCache
# DO NOT USE THESE VALUES DIRECTLY!
# Use them only via `get_<X>()` and `set_<X>()` below,
# or else your code may behave unexpectedly with other uses of these global settings:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
_verbose: bool = False
_debug: bool = False
_llm_cache: Optional["BaseCache"] = None
def set_verbose(value: bool) -> None:
"""Set a new value for the `verbose` global setting."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing verbose from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.verbose` is no longer supported, and once all users
# have migrated to using `set_verbose()` here.
langchain.verbose = value
global _verbose
_verbose = value
def get_verbose() -> bool:
"""Get the value of the `verbose` global setting."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing verbose from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.verbose` is no longer supported, and once all users
# have migrated to using `set_verbose()` here.
#
# In the meantime, the `verbose` setting is considered True if either the old
# or the new value are True. This accommodates users who haven't migrated
# to using `set_verbose()` yet. Those users are getting deprecation warnings
# directing them to use `set_verbose()` when they import `langhchain.verbose`.
old_verbose = langchain.verbose
global _verbose
return _verbose or old_verbose
def set_debug(value: bool) -> None:
"""Set a new value for the `debug` global setting."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="Importing debug from langchain root module is no longer supported",
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.debug` is no longer supported, and once all users
# have migrated to using `set_debug()` here.
langchain.debug = value
global _debug
_debug = value
def get_debug() -> bool:
"""Get the value of the `debug` global setting."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="Importing debug from langchain root module is no longer supported",
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.debug` is no longer supported, and once all users
# have migrated to using `set_debug()` here.
#
# In the meantime, the `debug` setting is considered True if either the old
# or the new value are True. This accommodates users who haven't migrated
# to using `set_debug()` yet. Those users are getting deprecation warnings
# directing them to use `set_debug()` when they import `langhchain.debug`.
old_debug = langchain.debug
global _debug
return _debug or old_debug
def set_llm_cache(value: Optional["BaseCache"]) -> None:
"""Set a new LLM cache, overwriting the previous value, if any."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing llm_cache from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.llm_cache` is no longer supported, and
# once all users have migrated to using `set_llm_cache()` here.
langchain.llm_cache = value
global _llm_cache
_llm_cache = value
def get_llm_cache() -> "BaseCache":
"""Get the value of the `llm_cache` global setting."""
import langchain
# We're about to run some deprecated code, don't report warnings from it.
# The user called the correct (non-deprecated) code path and shouldn't get warnings.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message=(
"Importing llm_cache from langchain root module is no longer supported"
),
)
# N.B.: This is a workaround for an unfortunate quirk of Python's
# module-level `__getattr__()` implementation:
# https://github.com/langchain-ai/langchain/pull/11311#issuecomment-1743780004
#
# Remove it once `langchain.llm_cache` is no longer supported, and
# once all users have migrated to using `set_llm_cache()` here.
#
# In the meantime, the `llm_cache` setting returns whichever of
# its two backing sources is truthy (not `None` and non-empty),
# or the old value if both are falsy. This accommodates users
# who haven't migrated to using `set_llm_cache()` yet.
# Those users are getting deprecation warnings directing them
# to use `set_llm_cache()` when they import `langhchain.llm_cache`.
old_llm_cache = langchain.llm_cache
global _llm_cache
return _llm_cache or old_llm_cache
| |
154053
|
def __getattr__(name: str) -> Any:
if name == "MRKLChain":
from langchain.agents import MRKLChain
_warn_on_import(name, replacement="langchain.agents.MRKLChain")
return MRKLChain
elif name == "ReActChain":
from langchain.agents import ReActChain
_warn_on_import(name, replacement="langchain.agents.ReActChain")
return ReActChain
elif name == "SelfAskWithSearchChain":
from langchain.agents import SelfAskWithSearchChain
_warn_on_import(name, replacement="langchain.agents.SelfAskWithSearchChain")
return SelfAskWithSearchChain
elif name == "ConversationChain":
from langchain.chains import ConversationChain
_warn_on_import(name, replacement="langchain.chains.ConversationChain")
return ConversationChain
elif name == "LLMBashChain":
raise ImportError(
"This module has been moved to langchain-experimental. "
"For more details: "
"https://github.com/langchain-ai/langchain/discussions/11352."
"To access this code, install it with `pip install langchain-experimental`."
"`from langchain_experimental.llm_bash.base "
"import LLMBashChain`"
)
elif name == "LLMChain":
from langchain.chains import LLMChain
_warn_on_import(name, replacement="langchain.chains.LLMChain")
return LLMChain
elif name == "LLMCheckerChain":
from langchain.chains import LLMCheckerChain
_warn_on_import(name, replacement="langchain.chains.LLMCheckerChain")
return LLMCheckerChain
elif name == "LLMMathChain":
from langchain.chains import LLMMathChain
_warn_on_import(name, replacement="langchain.chains.LLMMathChain")
return LLMMathChain
elif name == "QAWithSourcesChain":
from langchain.chains import QAWithSourcesChain
_warn_on_import(name, replacement="langchain.chains.QAWithSourcesChain")
return QAWithSourcesChain
elif name == "VectorDBQA":
from langchain.chains import VectorDBQA
_warn_on_import(name, replacement="langchain.chains.VectorDBQA")
return VectorDBQA
elif name == "VectorDBQAWithSourcesChain":
from langchain.chains import VectorDBQAWithSourcesChain
_warn_on_import(name, replacement="langchain.chains.VectorDBQAWithSourcesChain")
return VectorDBQAWithSourcesChain
elif name == "InMemoryDocstore":
from langchain_community.docstore import InMemoryDocstore
_warn_on_import(name, replacement="langchain.docstore.InMemoryDocstore")
return InMemoryDocstore
elif name == "Wikipedia":
from langchain_community.docstore import Wikipedia
_warn_on_import(name, replacement="langchain.docstore.Wikipedia")
return Wikipedia
elif name == "Anthropic":
from langchain_community.llms import Anthropic
_warn_on_import(name, replacement="langchain_community.llms.Anthropic")
return Anthropic
elif name == "Banana":
from langchain_community.llms import Banana
_warn_on_import(name, replacement="langchain_community.llms.Banana")
return Banana
elif name == "CerebriumAI":
from langchain_community.llms import CerebriumAI
_warn_on_import(name, replacement="langchain_community.llms.CerebriumAI")
return CerebriumAI
elif name == "Cohere":
from langchain_community.llms import Cohere
_warn_on_import(name, replacement="langchain_community.llms.Cohere")
return Cohere
elif name == "ForefrontAI":
from langchain_community.llms import ForefrontAI
_warn_on_import(name, replacement="langchain_community.llms.ForefrontAI")
return ForefrontAI
elif name == "GooseAI":
from langchain_community.llms import GooseAI
_warn_on_import(name, replacement="langchain_community.llms.GooseAI")
return GooseAI
elif name == "HuggingFaceHub":
from langchain_community.llms import HuggingFaceHub
_warn_on_import(name, replacement="langchain_community.llms.HuggingFaceHub")
return HuggingFaceHub
elif name == "HuggingFaceTextGenInference":
from langchain_community.llms import HuggingFaceTextGenInference
_warn_on_import(
name, replacement="langchain_community.llms.HuggingFaceTextGenInference"
)
return HuggingFaceTextGenInference
elif name == "LlamaCpp":
from langchain_community.llms import LlamaCpp
_warn_on_import(name, replacement="langchain_community.llms.LlamaCpp")
return LlamaCpp
elif name == "Modal":
from langchain_community.llms import Modal
_warn_on_import(name, replacement="langchain_community.llms.Modal")
return Modal
elif name == "OpenAI":
from langchain_community.llms import OpenAI
_warn_on_import(name, replacement="langchain_community.llms.OpenAI")
return OpenAI
elif name == "Petals":
from langchain_community.llms import Petals
_warn_on_import(name, replacement="langchain_community.llms.Petals")
return Petals
elif name == "PipelineAI":
from langchain_community.llms import PipelineAI
_warn_on_import(name, replacement="langchain_community.llms.PipelineAI")
return PipelineAI
elif name == "SagemakerEndpoint":
from langchain_community.llms import SagemakerEndpoint
_warn_on_import(name, replacement="langchain_community.llms.SagemakerEndpoint")
return SagemakerEndpoint
elif name == "StochasticAI":
from langchain_community.llms import StochasticAI
_warn_on_import(name, replacement="langchain_community.llms.StochasticAI")
return StochasticAI
elif name == "Writer":
from langchain_community.llms import Writer
_warn_on_import(name, replacement="langchain_community.llms.Writer")
return Writer
elif name == "HuggingFacePipeline":
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
_warn_on_import(
name,
replacement="langchain_community.llms.huggingface_pipeline.HuggingFacePipeline",
)
return HuggingFacePipeline
elif name == "FewShotPromptTemplate":
from langchain_core.prompts import FewShotPromptTemplate
_warn_on_import(
name, replacement="langchain_core.prompts.FewShotPromptTemplate"
)
return FewShotPromptTemplate
elif name == "Prompt":
from langchain_core.prompts import PromptTemplate
_warn_on_import(name, replacement="langchain_core.prompts.PromptTemplate")
# it's renamed as prompt template anyways
# this is just for backwards compat
return PromptTemplate
elif name == "PromptTemplate":
from langchain_core.prompts import PromptTemplate
_warn_on_import(name, replacement="langchain_core.prompts.PromptTemplate")
return PromptTemplate
elif name == "BasePromptTemplate":
from langchain_core.prompts import BasePromptTemplate
_warn_on_import(name, replacement="langchain_core.prompts.BasePromptTemplate")
return BasePromptTemplate
elif name == "ArxivAPIWrapper":
from langchain_community.utilities import ArxivAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.ArxivAPIWrapper"
)
return ArxivAPIWrapper
elif name == "GoldenQueryAPIWrapper":
from langchain_community.utilities import GoldenQueryAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.GoldenQueryAPIWrapper"
)
return GoldenQueryAPIWrapper
elif name == "GoogleSearchAPIWrapper":
from langchain_community.utilities import GoogleSearchAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.GoogleSearchAPIWrapper"
)
return GoogleSearchAPIWrapper
elif name == "GoogleSerperAPIWrapper":
from langchain_community.utilities import GoogleSerperAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.GoogleSerperAPIWrapper"
)
return GoogleSerperAPIWrapper
elif name == "PowerBIDataset":
from langchain_community.utilities import PowerBIDataset
_warn_on_import(
name, replacement="langchain_community.utilities.PowerBIDataset"
)
return PowerBIDataset
elif name == "SearxSearchWrapper":
from langchain_community.utilities import SearxSearchWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.SearxSearchWrapper"
)
return SearxSearchWrapper
| |
154054
|
elif name == "WikipediaAPIWrapper":
from langchain_community.utilities import WikipediaAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.WikipediaAPIWrapper"
)
return WikipediaAPIWrapper
elif name == "WolframAlphaAPIWrapper":
from langchain_community.utilities import WolframAlphaAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.WolframAlphaAPIWrapper"
)
return WolframAlphaAPIWrapper
elif name == "SQLDatabase":
from langchain_community.utilities import SQLDatabase
_warn_on_import(name, replacement="langchain_community.utilities.SQLDatabase")
return SQLDatabase
elif name == "FAISS":
from langchain_community.vectorstores import FAISS
_warn_on_import(name, replacement="langchain_community.vectorstores.FAISS")
return FAISS
elif name == "ElasticVectorSearch":
from langchain_community.vectorstores import ElasticVectorSearch
_warn_on_import(
name, replacement="langchain_community.vectorstores.ElasticVectorSearch"
)
return ElasticVectorSearch
# For backwards compatibility
elif name == "SerpAPIChain" or name == "SerpAPIWrapper":
from langchain_community.utilities import SerpAPIWrapper
_warn_on_import(
name, replacement="langchain_community.utilities.SerpAPIWrapper"
)
return SerpAPIWrapper
elif name == "verbose":
from langchain.globals import _verbose
_warn_on_import(
name,
replacement=(
"langchain.globals.set_verbose() / langchain.globals.get_verbose()"
),
)
return _verbose
elif name == "debug":
from langchain.globals import _debug
_warn_on_import(
name,
replacement=(
"langchain.globals.set_debug() / langchain.globals.get_debug()"
),
)
return _debug
elif name == "llm_cache":
from langchain.globals import _llm_cache
_warn_on_import(
name,
replacement=(
"langchain.globals.set_llm_cache() / langchain.globals.get_llm_cache()"
),
)
return _llm_cache
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"LLMChain",
"LLMCheckerChain",
"LLMMathChain",
"ArxivAPIWrapper",
"GoldenQueryAPIWrapper",
"SelfAskWithSearchChain",
"SerpAPIWrapper",
"SerpAPIChain",
"SearxSearchWrapper",
"GoogleSearchAPIWrapper",
"GoogleSerperAPIWrapper",
"WolframAlphaAPIWrapper",
"WikipediaAPIWrapper",
"Anthropic",
"Banana",
"CerebriumAI",
"Cohere",
"ForefrontAI",
"GooseAI",
"Modal",
"OpenAI",
"Petals",
"PipelineAI",
"StochasticAI",
"Writer",
"BasePromptTemplate",
"Prompt",
"FewShotPromptTemplate",
"PromptTemplate",
"ReActChain",
"Wikipedia",
"HuggingFaceHub",
"SagemakerEndpoint",
"HuggingFacePipeline",
"SQLDatabase",
"PowerBIDataset",
"FAISS",
"MRKLChain",
"VectorDBQA",
"ElasticVectorSearch",
"InMemoryDocstore",
"ConversationChain",
"VectorDBQAWithSourcesChain",
"QAWithSourcesChain",
"LlamaCpp",
"HuggingFaceTextGenInference",
]
| |
154081
|
from typing import Any, List
from langchain_core.callbacks import (
AsyncCallbackManagerForRetrieverRun,
CallbackManagerForRetrieverRun,
)
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever, RetrieverLike
from pydantic import ConfigDict
from langchain.retrievers.document_compressors.base import (
BaseDocumentCompressor,
)
class ContextualCompressionRetriever(BaseRetriever):
"""Retriever that wraps a base retriever and compresses the results."""
base_compressor: BaseDocumentCompressor
"""Compressor for compressing retrieved documents."""
base_retriever: RetrieverLike
"""Base Retriever to use for getting relevant documents."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
**kwargs: Any,
) -> List[Document]:
"""Get documents relevant for a query.
Args:
query: string to find relevant documents for
Returns:
Sequence of relevant documents
"""
docs = self.base_retriever.invoke(
query, config={"callbacks": run_manager.get_child()}, **kwargs
)
if docs:
compressed_docs = self.base_compressor.compress_documents(
docs, query, callbacks=run_manager.get_child()
)
return list(compressed_docs)
else:
return []
async def _aget_relevant_documents(
self,
query: str,
*,
run_manager: AsyncCallbackManagerForRetrieverRun,
**kwargs: Any,
) -> List[Document]:
"""Get documents relevant for a query.
Args:
query: string to find relevant documents for
Returns:
List of relevant documents
"""
docs = await self.base_retriever.ainvoke(
query, config={"callbacks": run_manager.get_child()}, **kwargs
)
if docs:
compressed_docs = await self.base_compressor.acompress_documents(
docs, query, callbacks=run_manager.get_child()
)
return list(compressed_docs)
else:
return []
| |
154101
|
class EnsembleRetriever(BaseRetriever):
"""Retriever that ensembles the multiple retrievers.
It uses a rank fusion.
Args:
retrievers: A list of retrievers to ensemble.
weights: A list of weights corresponding to the retrievers. Defaults to equal
weighting for all retrievers.
c: A constant added to the rank, controlling the balance between the importance
of high-ranked items and the consideration given to lower-ranked items.
Default is 60.
id_key: The key in the document's metadata used to determine unique documents.
If not specified, page_content is used.
"""
retrievers: List[RetrieverLike]
weights: List[float]
c: int = 60
id_key: Optional[str] = None
@property
def config_specs(self) -> List[ConfigurableFieldSpec]:
"""List configurable fields for this runnable."""
return get_unique_config_specs(
spec for retriever in self.retrievers for spec in retriever.config_specs
)
@model_validator(mode="before")
@classmethod
def set_weights(cls, values: Dict[str, Any]) -> Any:
if not values.get("weights"):
n_retrievers = len(values["retrievers"])
values["weights"] = [1 / n_retrievers] * n_retrievers
return values
def invoke(
self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> List[Document]:
from langchain_core.callbacks import CallbackManager
config = ensure_config(config)
callback_manager = CallbackManager.configure(
config.get("callbacks"),
None,
verbose=kwargs.get("verbose", False),
inheritable_tags=config.get("tags", []),
local_tags=self.tags,
inheritable_metadata=config.get("metadata", {}),
local_metadata=self.metadata,
)
run_manager = callback_manager.on_retriever_start(
None,
input,
name=config.get("run_name") or self.get_name(),
**kwargs,
)
try:
result = self.rank_fusion(input, run_manager=run_manager, config=config)
except Exception as e:
run_manager.on_retriever_error(e)
raise e
else:
run_manager.on_retriever_end(
result,
**kwargs,
)
return result
async def ainvoke(
self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any
) -> List[Document]:
from langchain_core.callbacks import AsyncCallbackManager
config = ensure_config(config)
callback_manager = AsyncCallbackManager.configure(
config.get("callbacks"),
None,
verbose=kwargs.get("verbose", False),
inheritable_tags=config.get("tags", []),
local_tags=self.tags,
inheritable_metadata=config.get("metadata", {}),
local_metadata=self.metadata,
)
run_manager = await callback_manager.on_retriever_start(
None,
input,
name=config.get("run_name") or self.get_name(),
**kwargs,
)
try:
result = await self.arank_fusion(
input, run_manager=run_manager, config=config
)
except Exception as e:
await run_manager.on_retriever_error(e)
raise e
else:
await run_manager.on_retriever_end(
result,
**kwargs,
)
return result
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
) -> List[Document]:
"""
Get the relevant documents for a given query.
Args:
query: The query to search for.
Returns:
A list of reranked documents.
"""
# Get fused result of the retrievers.
fused_documents = self.rank_fusion(query, run_manager)
return fused_documents
async def _aget_relevant_documents(
self,
query: str,
*,
run_manager: AsyncCallbackManagerForRetrieverRun,
) -> List[Document]:
"""
Asynchronously get the relevant documents for a given query.
Args:
query: The query to search for.
Returns:
A list of reranked documents.
"""
# Get fused result of the retrievers.
fused_documents = await self.arank_fusion(query, run_manager)
return fused_documents
def rank_fusion(
self,
query: str,
run_manager: CallbackManagerForRetrieverRun,
*,
config: Optional[RunnableConfig] = None,
) -> List[Document]:
"""
Retrieve the results of the retrievers and use rank_fusion_func to get
the final result.
Args:
query: The query to search for.
Returns:
A list of reranked documents.
"""
# Get the results of all retrievers.
retriever_docs = [
retriever.invoke(
query,
patch_config(
config, callbacks=run_manager.get_child(tag=f"retriever_{i+1}")
),
)
for i, retriever in enumerate(self.retrievers)
]
# Enforce that retrieved docs are Documents for each list in retriever_docs
for i in range(len(retriever_docs)):
retriever_docs[i] = [
Document(page_content=cast(str, doc)) if isinstance(doc, str) else doc
for doc in retriever_docs[i]
]
# apply rank fusion
fused_documents = self.weighted_reciprocal_rank(retriever_docs)
return fused_documents
async def arank_fusion(
self,
query: str,
run_manager: AsyncCallbackManagerForRetrieverRun,
*,
config: Optional[RunnableConfig] = None,
) -> List[Document]:
"""
Asynchronously retrieve the results of the retrievers
and use rank_fusion_func to get the final result.
Args:
query: The query to search for.
Returns:
A list of reranked documents.
"""
# Get the results of all retrievers.
retriever_docs = await asyncio.gather(
*[
retriever.ainvoke(
query,
patch_config(
config, callbacks=run_manager.get_child(tag=f"retriever_{i+1}")
),
)
for i, retriever in enumerate(self.retrievers)
]
)
# Enforce that retrieved docs are Documents for each list in retriever_docs
for i in range(len(retriever_docs)):
retriever_docs[i] = [
Document(page_content=doc) if not isinstance(doc, Document) else doc # type: ignore[arg-type]
for doc in retriever_docs[i]
]
# apply rank fusion
fused_documents = self.weighted_reciprocal_rank(retriever_docs)
return fused_documents
def weighted_reciprocal_rank(
self, doc_lists: List[List[Document]]
) -> List[Document]:
"""
Perform weighted Reciprocal Rank Fusion on multiple rank lists.
You can find more details about RRF here:
https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf
Args:
doc_lists: A list of rank lists, where each rank list contains unique items.
Returns:
list: The final aggregated list of items sorted by their weighted RRF
scores in descending order.
"""
if len(doc_lists) != len(self.weights):
raise ValueError(
"Number of rank lists must be equal to the number of weights."
)
# Associate each doc's content with its RRF score for later sorting by it
# Duplicated contents across retrievers are collapsed & scored cumulatively
rrf_score: Dict[str, float] = defaultdict(float)
for doc_list, weight in zip(doc_lists, self.weights):
for rank, doc in enumerate(doc_list, start=1):
rrf_score[
(
doc.page_content
if self.id_key is None
else doc.metadata[self.id_key]
)
] += weight / (rank + self.c)
# Docs are deduplicated by their contents then sorted by their scores
all_docs = chain.from_iterable(doc_lists)
sorted_docs = sorted(
unique_by_key(
all_docs,
lambda doc: (
doc.page_content
if self.id_key is None
else doc.metadata[self.id_key]
),
),
reverse=True,
key=lambda doc: rrf_score[
doc.page_content if self.id_key is None else doc.metadata[self.id_key]
],
)
return sorted_docs
| |
154106
|
import datetime
from copy import deepcopy
from typing import Any, Dict, List, Optional, Tuple
from langchain_core.callbacks import (
AsyncCallbackManagerForRetrieverRun,
CallbackManagerForRetrieverRun,
)
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_core.vectorstores import VectorStore
from pydantic import ConfigDict, Field
def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:
"""Get the hours passed between two datetimes."""
return (time - ref_time).total_seconds() / 3600
class TimeWeightedVectorStoreRetriever(BaseRetriever):
"""Retriever that combines embedding similarity with
recency in retrieving values."""
vectorstore: VectorStore
"""The vectorstore to store documents and determine salience."""
search_kwargs: dict = Field(default_factory=lambda: dict(k=100))
"""Keyword arguments to pass to the vectorstore similarity search."""
# TODO: abstract as a queue
memory_stream: List[Document] = Field(default_factory=list)
"""The memory_stream of documents to search through."""
decay_rate: float = Field(default=0.01)
"""The exponential decay factor used as (1.0-decay_rate)**(hrs_passed)."""
k: int = 4
"""The maximum number of documents to retrieve in a given call."""
other_score_keys: List[str] = []
"""Other keys in the metadata to factor into the score, e.g. 'importance'."""
default_salience: Optional[float] = None
"""The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
"""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
def _document_get_date(self, field: str, document: Document) -> datetime.datetime:
"""Return the value of the date field of a document."""
if field in document.metadata:
if isinstance(document.metadata[field], float):
return datetime.datetime.fromtimestamp(document.metadata[field])
return document.metadata[field]
return datetime.datetime.now()
def _get_combined_score(
self,
document: Document,
vector_relevance: Optional[float],
current_time: datetime.datetime,
) -> float:
"""Return the combined score for a document."""
hours_passed = _get_hours_passed(
current_time,
self._document_get_date("last_accessed_at", document),
)
score = (1.0 - self.decay_rate) ** hours_passed
for key in self.other_score_keys:
if key in document.metadata:
score += document.metadata[key]
if vector_relevance is not None:
score += vector_relevance
return score
def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:
"""Return documents that are salient to the query."""
docs_and_scores: List[Tuple[Document, float]]
docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
results = {}
for fetched_doc, relevance in docs_and_scores:
if "buffer_idx" in fetched_doc.metadata:
buffer_idx = fetched_doc.metadata["buffer_idx"]
doc = self.memory_stream[buffer_idx]
results[buffer_idx] = (doc, relevance)
return results
async def aget_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:
"""Return documents that are salient to the query."""
docs_and_scores: List[Tuple[Document, float]]
docs_and_scores = (
await self.vectorstore.asimilarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
results = {}
for fetched_doc, relevance in docs_and_scores:
if "buffer_idx" in fetched_doc.metadata:
buffer_idx = fetched_doc.metadata["buffer_idx"]
doc = self.memory_stream[buffer_idx]
results[buffer_idx] = (doc, relevance)
return results
def _get_rescored_docs(
self, docs_and_scores: Dict[Any, Tuple[Document, Optional[float]]]
) -> List[Document]:
current_time = datetime.datetime.now()
rescored_docs = [
(doc, self._get_combined_score(doc, relevance, current_time))
for doc, relevance in docs_and_scores.values()
]
rescored_docs.sort(key=lambda x: x[1], reverse=True)
result = []
# Ensure frequently accessed memories aren't forgotten
for doc, _ in rescored_docs[: self.k]:
# TODO: Update vector store doc once `update` method is exposed.
buffered_doc = self.memory_stream[doc.metadata["buffer_idx"]]
buffered_doc.metadata["last_accessed_at"] = current_time
result.append(buffered_doc)
return result
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
docs_and_scores = {
doc.metadata["buffer_idx"]: (doc, self.default_salience)
for doc in self.memory_stream[-self.k :]
}
# If a doc is considered salient, update the salience score
docs_and_scores.update(self.get_salient_docs(query))
return self._get_rescored_docs(docs_and_scores)
async def _aget_relevant_documents(
self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
) -> List[Document]:
docs_and_scores = {
doc.metadata["buffer_idx"]: (doc, self.default_salience)
for doc in self.memory_stream[-self.k :]
}
# If a doc is considered salient, update the salience score
docs_and_scores.update(await self.aget_salient_docs(query))
return self._get_rescored_docs(docs_and_scores)
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time")
if current_time is None:
current_time = datetime.datetime.now()
# Avoid mutating input documents
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return self.vectorstore.add_documents(dup_docs, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time")
if current_time is None:
current_time = datetime.datetime.now()
# Avoid mutating input documents
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return await self.vectorstore.aadd_documents(dup_docs, **kwargs)
| |
154109
|
import asyncio
import logging
from typing import List, Optional, Sequence
from langchain_core.callbacks import (
AsyncCallbackManagerForRetrieverRun,
CallbackManagerForRetrieverRun,
)
from langchain_core.documents import Document
from langchain_core.language_models import BaseLanguageModel
from langchain_core.output_parsers import BaseOutputParser
from langchain_core.prompts import BasePromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.retrievers import BaseRetriever
from langchain_core.runnables import Runnable
from langchain.chains.llm import LLMChain
logger = logging.getLogger(__name__)
class LineListOutputParser(BaseOutputParser[List[str]]):
"""Output parser for a list of lines."""
def parse(self, text: str) -> List[str]:
lines = text.strip().split("\n")
return list(filter(None, lines)) # Remove empty lines
# Default prompt
DEFAULT_QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is
to generate 3 different versions of the given user
question to retrieve relevant documents from a vector database.
By generating multiple perspectives on the user question,
your goal is to help the user overcome some of the limitations
of distance-based similarity search. Provide these alternative
questions separated by newlines. Original question: {question}""",
)
def _unique_documents(documents: Sequence[Document]) -> List[Document]:
return [doc for i, doc in enumerate(documents) if doc not in documents[:i]]
class MultiQueryRetriever(BaseRetriever):
"""Given a query, use an LLM to write a set of queries.
Retrieve docs for each query. Return the unique union of all retrieved docs.
"""
retriever: BaseRetriever
llm_chain: Runnable
verbose: bool = True
parser_key: str = "lines"
"""DEPRECATED. parser_key is no longer used and should not be specified."""
include_original: bool = False
"""Whether to include the original query in the list of generated queries."""
@classmethod
def from_llm(
cls,
retriever: BaseRetriever,
llm: BaseLanguageModel,
prompt: BasePromptTemplate = DEFAULT_QUERY_PROMPT,
parser_key: Optional[str] = None,
include_original: bool = False,
) -> "MultiQueryRetriever":
"""Initialize from llm using default template.
Args:
retriever: retriever to query documents from
llm: llm for query generation using DEFAULT_QUERY_PROMPT
prompt: The prompt which aims to generate several different versions
of the given user query
include_original: Whether to include the original query in the list of
generated queries.
Returns:
MultiQueryRetriever
"""
output_parser = LineListOutputParser()
llm_chain = prompt | llm | output_parser
return cls(
retriever=retriever,
llm_chain=llm_chain,
include_original=include_original,
)
async def _aget_relevant_documents(
self,
query: str,
*,
run_manager: AsyncCallbackManagerForRetrieverRun,
) -> List[Document]:
"""Get relevant documents given a user query.
Args:
query: user query
Returns:
Unique union of relevant documents from all generated queries
"""
queries = await self.agenerate_queries(query, run_manager)
if self.include_original:
queries.append(query)
documents = await self.aretrieve_documents(queries, run_manager)
return self.unique_union(documents)
async def agenerate_queries(
self, question: str, run_manager: AsyncCallbackManagerForRetrieverRun
) -> List[str]:
"""Generate queries based upon user input.
Args:
question: user query
Returns:
List of LLM generated queries that are similar to the user input
"""
response = await self.llm_chain.ainvoke(
{"question": question}, config={"callbacks": run_manager.get_child()}
)
if isinstance(self.llm_chain, LLMChain):
lines = response["text"]
else:
lines = response
if self.verbose:
logger.info(f"Generated queries: {lines}")
return lines
async def aretrieve_documents(
self, queries: List[str], run_manager: AsyncCallbackManagerForRetrieverRun
) -> List[Document]:
"""Run all LLM generated queries.
Args:
queries: query list
Returns:
List of retrieved Documents
"""
document_lists = await asyncio.gather(
*(
self.retriever.ainvoke(
query, config={"callbacks": run_manager.get_child()}
)
for query in queries
)
)
return [doc for docs in document_lists for doc in docs]
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
) -> List[Document]:
"""Get relevant documents given a user query.
Args:
query: user query
Returns:
Unique union of relevant documents from all generated queries
"""
queries = self.generate_queries(query, run_manager)
if self.include_original:
queries.append(query)
documents = self.retrieve_documents(queries, run_manager)
return self.unique_union(documents)
def generate_queries(
self, question: str, run_manager: CallbackManagerForRetrieverRun
) -> List[str]:
"""Generate queries based upon user input.
Args:
question: user query
Returns:
List of LLM generated queries that are similar to the user input
"""
response = self.llm_chain.invoke(
{"question": question}, config={"callbacks": run_manager.get_child()}
)
if isinstance(self.llm_chain, LLMChain):
lines = response["text"]
else:
lines = response
if self.verbose:
logger.info(f"Generated queries: {lines}")
return lines
def retrieve_documents(
self, queries: List[str], run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
"""Run all LLM generated queries.
Args:
queries: query list
Returns:
List of retrieved Documents
"""
documents = []
for query in queries:
docs = self.retriever.invoke(
query, config={"callbacks": run_manager.get_child()}
)
documents.extend(docs)
return documents
def unique_union(self, documents: List[Document]) -> List[Document]:
"""Get unique Documents.
Args:
documents: List of retrieved Documents
Returns:
List of unique retrieved Documents
"""
return _unique_documents(documents)
| |
154126
|
from typing import Callable, Dict, Optional, Sequence
import numpy as np
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.utils import pre_init
from pydantic import ConfigDict, Field
from langchain.retrievers.document_compressors.base import (
BaseDocumentCompressor,
)
def _get_similarity_function() -> Callable:
try:
from langchain_community.utils.math import cosine_similarity
except ImportError:
raise ImportError(
"To use please install langchain-community "
"with `pip install langchain-community`."
)
return cosine_similarity
class EmbeddingsFilter(BaseDocumentCompressor):
"""Document compressor that uses embeddings to drop documents
unrelated to the query."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents and queries."""
similarity_fn: Callable = Field(default_factory=_get_similarity_function)
"""Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity."""
k: Optional[int] = 20
"""The number of relevant documents to return. Can be set to None, in which case
`similarity_threshold` must be specified. Defaults to 20."""
similarity_threshold: Optional[float] = None
"""Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if `k` is set
to None."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@pre_init
def validate_params(cls, values: Dict) -> Dict:
"""Validate similarity parameters."""
if values["k"] is None and values["similarity_threshold"] is None:
raise ValueError("Must specify one of `k` or `similarity_threshold`.")
return values
def compress_documents(
self,
documents: Sequence[Document],
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""Filter documents based on similarity of their embeddings to the query."""
try:
from langchain_community.document_transformers.embeddings_redundant_filter import ( # noqa: E501
_get_embeddings_from_stateful_docs,
get_stateful_documents,
)
except ImportError:
raise ImportError(
"To use please install langchain-community "
"with `pip install langchain-community`."
)
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
embedded_query = self.embeddings.embed_query(query)
similarity = self.similarity_fn([embedded_query], embedded_documents)[0]
included_idxs = np.arange(len(embedded_documents))
if self.k is not None:
included_idxs = np.argsort(similarity)[::-1][: self.k]
if self.similarity_threshold is not None:
similar_enough = np.where(
similarity[included_idxs] > self.similarity_threshold
)
included_idxs = included_idxs[similar_enough]
for i in included_idxs:
stateful_documents[i].state["query_similarity_score"] = similarity[i]
return [stateful_documents[i] for i in included_idxs]
async def acompress_documents(
self,
documents: Sequence[Document],
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""Filter documents based on similarity of their embeddings to the query."""
try:
from langchain_community.document_transformers.embeddings_redundant_filter import ( # noqa: E501
_aget_embeddings_from_stateful_docs,
get_stateful_documents,
)
except ImportError:
raise ImportError(
"To use please install langchain-community "
"with `pip install langchain-community`."
)
stateful_documents = get_stateful_documents(documents)
embedded_documents = await _aget_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
embedded_query = await self.embeddings.aembed_query(query)
similarity = self.similarity_fn([embedded_query], embedded_documents)[0]
included_idxs = np.arange(len(embedded_documents))
if self.k is not None:
included_idxs = np.argsort(similarity)[::-1][: self.k]
if self.similarity_threshold is not None:
similar_enough = np.where(
similarity[included_idxs] > self.similarity_threshold
)
included_idxs = included_idxs[similar_enough]
for i in included_idxs:
stateful_documents[i].state["query_similarity_score"] = similarity[i]
return [stateful_documents[i] for i in included_idxs]
| |
154155
|
"""**Tools** are classes that an Agent uses to interact with the world.
Each tool has a **description**. Agent uses the description to choose the right
tool for the job.
**Class hierarchy:**
.. code-block::
ToolMetaclass --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
**Main helpers:**
.. code-block::
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
"""
import warnings
from typing import Any
from langchain_core._api import LangChainDeprecationWarning
from langchain_core.tools import (
BaseTool as BaseTool,
)
from langchain_core.tools import (
StructuredTool as StructuredTool,
)
from langchain_core.tools import (
Tool as Tool,
)
from langchain_core.tools.convert import tool as tool
from langchain._api.interactive_env import is_interactive_env
# Used for internal purposes
_DEPRECATED_TOOLS = {"PythonAstREPLTool", "PythonREPLTool"}
def _import_python_tool_PythonAstREPLTool() -> Any:
raise ImportError(
"This tool has been moved to langchain experiment. "
"This tool has access to a python REPL. "
"For best practices make sure to sandbox this tool. "
"Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md "
"To keep using this code as is, install langchain experimental and "
"update relevant imports replacing 'langchain' with 'langchain_experimental'"
)
def _import_python_tool_PythonREPLTool() -> Any:
raise ImportError(
"This tool has been moved to langchain experiment. "
"This tool has access to a python REPL. "
"For best practices make sure to sandbox this tool. "
"Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md "
"To keep using this code as is, install langchain experimental and "
"update relevant imports replacing 'langchain' with 'langchain_experimental'"
)
def __getattr__(name: str) -> Any:
if name == "PythonAstREPLTool":
return _import_python_tool_PythonAstREPLTool()
elif name == "PythonREPLTool":
return _import_python_tool_PythonREPLTool()
else:
from langchain_community import tools
# If not in interactive env, raise warning.
if not is_interactive_env():
warnings.warn(
"Importing tools from langchain is deprecated. Importing from "
"langchain will no longer be supported as of langchain==0.2.0. "
"Please import from langchain-community instead:\n\n"
f"`from langchain_community.tools import {name}`.\n\n"
"To install langchain-community run "
"`pip install -U langchain-community`.",
category=LangChainDeprecationWarning,
)
return getattr(tools, name)
__all__ = [
"StructuredTool",
"BaseTool",
"tool",
"Tool",
"AINAppOps",
"AINOwnerOps",
"AINRuleOps",
"AINTransfer",
"AINValueOps",
"AIPluginTool",
"APIOperation",
"ArxivQueryRun",
"AzureCogsFormRecognizerTool",
"AzureCogsImageAnalysisTool",
"AzureCogsSpeech2TextTool",
"AzureCogsText2SpeechTool",
"AzureCogsTextAnalyticsHealthTool",
"BaseGraphQLTool",
"BaseRequestsTool",
"BaseSQLDatabaseTool",
"BaseSparkSQLTool",
"BearlyInterpreterTool",
"BingSearchResults",
"BingSearchRun",
"BraveSearch",
"ClickTool",
"CopyFileTool",
"CurrentWebPageTool",
"DeleteFileTool",
"DuckDuckGoSearchResults",
"DuckDuckGoSearchRun",
"E2BDataAnalysisTool",
"EdenAiExplicitImageTool",
"EdenAiObjectDetectionTool",
"EdenAiParsingIDTool",
"EdenAiParsingInvoiceTool",
"EdenAiSpeechToTextTool",
"EdenAiTextModerationTool",
"EdenAiTextToSpeechTool",
"EdenaiTool",
"ElevenLabsText2SpeechTool",
"ExtractHyperlinksTool",
"ExtractTextTool",
"FileSearchTool",
"GetElementsTool",
"GmailCreateDraft",
"GmailGetMessage",
"GmailGetThread",
"GmailSearch",
"GmailSendMessage",
"GoogleCloudTextToSpeechTool",
"GooglePlacesTool",
"GoogleSearchResults",
"GoogleSearchRun",
"GoogleSerperResults",
"GoogleSerperRun",
"SearchAPIResults",
"SearchAPIRun",
"HumanInputRun",
"IFTTTWebhook",
"InfoPowerBITool",
"InfoSQLDatabaseTool",
"InfoSparkSQLTool",
"JiraAction",
"JsonGetValueTool",
"JsonListKeysTool",
"ListDirectoryTool",
"ListPowerBITool",
"ListSQLDatabaseTool",
"ListSparkSQLTool",
"MerriamWebsterQueryRun",
"MetaphorSearchResults",
"MoveFileTool",
"NasaAction",
"NavigateBackTool",
"NavigateTool",
"O365CreateDraftMessage",
"O365SearchEmails",
"O365SearchEvents",
"O365SendEvent",
"O365SendMessage",
"OpenAPISpec",
"OpenWeatherMapQueryRun",
"PubmedQueryRun",
"RedditSearchRun",
"QueryCheckerTool",
"QueryPowerBITool",
"QuerySQLCheckerTool",
"QuerySQLDataBaseTool",
"QuerySparkSQLTool",
"ReadFileTool",
"RequestsDeleteTool",
"RequestsGetTool",
"RequestsPatchTool",
"RequestsPostTool",
"RequestsPutTool",
"SteamWebAPIQueryRun",
"SceneXplainTool",
"SearxSearchResults",
"SearxSearchRun",
"ShellTool",
"SlackGetChannel",
"SlackGetMessage",
"SlackScheduleMessage",
"SlackSendMessage",
"SleepTool",
"StdInInquireTool",
"StackExchangeTool",
"SteamshipImageGenerationTool",
"VectorStoreQATool",
"VectorStoreQAWithSourcesTool",
"WikipediaQueryRun",
"WolframAlphaQueryRun",
"WriteFileTool",
"YahooFinanceNewsTool",
"YouTubeSearchTool",
"ZapierNLAListActions",
"ZapierNLARunAction",
"format_tool_to_openai_function",
]
| |
154185
|
from typing import Any
def __getattr__(name: str = "") -> Any:
raise AttributeError(
"This tool has been moved to langchain experiment. "
"This tool has access to a python REPL. "
"For best practices make sure to sandbox this tool. "
"Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md "
"To keep using this code as is, install langchain experimental and "
"update relevant imports replacing 'langchain' with 'langchain_experimental'"
)
| |
154344
|
"""**Embedding models** are wrappers around embedding models
from different APIs and services.
**Embedding models** can be LLMs or not.
**Class hierarchy:**
.. code-block::
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
"""
import logging
from typing import TYPE_CHECKING, Any
from langchain._api import create_importer
from langchain.embeddings.cache import CacheBackedEmbeddings
if TYPE_CHECKING:
from langchain_community.embeddings import (
AlephAlphaAsymmetricSemanticEmbedding,
AlephAlphaSymmetricSemanticEmbedding,
AwaEmbeddings,
AzureOpenAIEmbeddings,
BedrockEmbeddings,
BookendEmbeddings,
ClarifaiEmbeddings,
CohereEmbeddings,
DashScopeEmbeddings,
DatabricksEmbeddings,
DeepInfraEmbeddings,
DeterministicFakeEmbedding,
EdenAiEmbeddings,
ElasticsearchEmbeddings,
EmbaasEmbeddings,
ErnieEmbeddings,
FakeEmbeddings,
FastEmbedEmbeddings,
GooglePalmEmbeddings,
GPT4AllEmbeddings,
GradientEmbeddings,
HuggingFaceBgeEmbeddings,
HuggingFaceEmbeddings,
HuggingFaceHubEmbeddings,
HuggingFaceInferenceAPIEmbeddings,
HuggingFaceInstructEmbeddings,
InfinityEmbeddings,
JavelinAIGatewayEmbeddings,
JinaEmbeddings,
JohnSnowLabsEmbeddings,
LlamaCppEmbeddings,
LocalAIEmbeddings,
MiniMaxEmbeddings,
MlflowAIGatewayEmbeddings,
MlflowEmbeddings,
ModelScopeEmbeddings,
MosaicMLInstructorEmbeddings,
NLPCloudEmbeddings,
OctoAIEmbeddings,
OllamaEmbeddings,
OpenAIEmbeddings,
OpenVINOEmbeddings,
QianfanEmbeddingsEndpoint,
SagemakerEndpointEmbeddings,
SelfHostedEmbeddings,
SelfHostedHuggingFaceEmbeddings,
SelfHostedHuggingFaceInstructEmbeddings,
SentenceTransformerEmbeddings,
SpacyEmbeddings,
TensorflowHubEmbeddings,
VertexAIEmbeddings,
VoyageEmbeddings,
XinferenceEmbeddings,
)
logger = logging.getLogger(__name__)
# TODO: this is in here to maintain backwards compatibility
class HypotheticalDocumentEmbedder:
def __init__(self, *args: Any, **kwargs: Any):
logger.warning(
"Using a deprecated class. Please use "
"`from langchain.chains import HypotheticalDocumentEmbedder` instead"
)
from langchain.chains.hyde.base import HypotheticalDocumentEmbedder as H
return H(*args, **kwargs) # type: ignore
@classmethod
def from_llm(cls, *args: Any, **kwargs: Any) -> Any:
logger.warning(
"Using a deprecated class. Please use "
"`from langchain.chains import HypotheticalDocumentEmbedder` instead"
)
from langchain.chains.hyde.base import HypotheticalDocumentEmbedder as H
return H.from_llm(*args, **kwargs)
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
DEPRECATED_LOOKUP = {
"AlephAlphaAsymmetricSemanticEmbedding": "langchain_community.embeddings",
"AlephAlphaSymmetricSemanticEmbedding": "langchain_community.embeddings",
"AwaEmbeddings": "langchain_community.embeddings",
"AzureOpenAIEmbeddings": "langchain_community.embeddings",
"BedrockEmbeddings": "langchain_community.embeddings",
"BookendEmbeddings": "langchain_community.embeddings",
"ClarifaiEmbeddings": "langchain_community.embeddings",
"CohereEmbeddings": "langchain_community.embeddings",
"DashScopeEmbeddings": "langchain_community.embeddings",
"DatabricksEmbeddings": "langchain_community.embeddings",
"DeepInfraEmbeddings": "langchain_community.embeddings",
"DeterministicFakeEmbedding": "langchain_community.embeddings",
"EdenAiEmbeddings": "langchain_community.embeddings",
"ElasticsearchEmbeddings": "langchain_community.embeddings",
"EmbaasEmbeddings": "langchain_community.embeddings",
"ErnieEmbeddings": "langchain_community.embeddings",
"FakeEmbeddings": "langchain_community.embeddings",
"FastEmbedEmbeddings": "langchain_community.embeddings",
"GooglePalmEmbeddings": "langchain_community.embeddings",
"GPT4AllEmbeddings": "langchain_community.embeddings",
"GradientEmbeddings": "langchain_community.embeddings",
"HuggingFaceBgeEmbeddings": "langchain_community.embeddings",
"HuggingFaceEmbeddings": "langchain_community.embeddings",
"HuggingFaceHubEmbeddings": "langchain_community.embeddings",
"HuggingFaceInferenceAPIEmbeddings": "langchain_community.embeddings",
"HuggingFaceInstructEmbeddings": "langchain_community.embeddings",
"InfinityEmbeddings": "langchain_community.embeddings",
"JavelinAIGatewayEmbeddings": "langchain_community.embeddings",
"JinaEmbeddings": "langchain_community.embeddings",
"JohnSnowLabsEmbeddings": "langchain_community.embeddings",
"LlamaCppEmbeddings": "langchain_community.embeddings",
"LocalAIEmbeddings": "langchain_community.embeddings",
"MiniMaxEmbeddings": "langchain_community.embeddings",
"MlflowAIGatewayEmbeddings": "langchain_community.embeddings",
"MlflowEmbeddings": "langchain_community.embeddings",
"ModelScopeEmbeddings": "langchain_community.embeddings",
"MosaicMLInstructorEmbeddings": "langchain_community.embeddings",
"NLPCloudEmbeddings": "langchain_community.embeddings",
"OctoAIEmbeddings": "langchain_community.embeddings",
"OllamaEmbeddings": "langchain_community.embeddings",
"OpenAIEmbeddings": "langchain_community.embeddings",
"OpenVINOEmbeddings": "langchain_community.embeddings",
"QianfanEmbeddingsEndpoint": "langchain_community.embeddings",
"SagemakerEndpointEmbeddings": "langchain_community.embeddings",
"SelfHostedEmbeddings": "langchain_community.embeddings",
"SelfHostedHuggingFaceEmbeddings": "langchain_community.embeddings",
"SelfHostedHuggingFaceInstructEmbeddings": "langchain_community.embeddings",
"SentenceTransformerEmbeddings": "langchain_community.embeddings",
"SpacyEmbeddings": "langchain_community.embeddings",
"TensorflowHubEmbeddings": "langchain_community.embeddings",
"VertexAIEmbeddings": "langchain_community.embeddings",
"VoyageEmbeddings": "langchain_community.embeddings",
"XinferenceEmbeddings": "langchain_community.embeddings",
}
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
def __getattr__(name: str) -> Any:
"""Look up attributes dynamically."""
return _import_attribute(name)
| |
154387
|
from __future__ import annotations
from typing import Any, Dict, List, Type
from langchain_core._api import deprecated
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import BaseMessage, SystemMessage, get_buffer_string
from langchain_core.prompts import BasePromptTemplate
from langchain_core.utils import pre_init
from pydantic import BaseModel
from langchain.chains.llm import LLMChain
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.prompt import SUMMARY_PROMPT
@deprecated(
since="0.2.12",
removal="1.0",
message=(
"Refer here for how to incorporate summaries of conversation history: "
"https://langchain-ai.github.io/langgraph/how-tos/memory/add-summary-conversation-history/" # noqa: E501
),
)
class SummarizerMixin(BaseModel):
"""Mixin for summarizer."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
llm: BaseLanguageModel
prompt: BasePromptTemplate = SUMMARY_PROMPT
summary_message_cls: Type[BaseMessage] = SystemMessage
def predict_new_summary(
self, messages: List[BaseMessage], existing_summary: str
) -> str:
new_lines = get_buffer_string(
messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
chain = LLMChain(llm=self.llm, prompt=self.prompt)
return chain.predict(summary=existing_summary, new_lines=new_lines)
async def apredict_new_summary(
self, messages: List[BaseMessage], existing_summary: str
) -> str:
new_lines = get_buffer_string(
messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
chain = LLMChain(llm=self.llm, prompt=self.prompt)
return await chain.apredict(summary=existing_summary, new_lines=new_lines)
@deprecated(
since="0.3.1",
removal="1.0.0",
message=(
"Please see the migration guide at: "
"https://python.langchain.com/docs/versions/migrating_memory/"
),
)
class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):
"""Continually summarizes the conversation history.
The summary is updated after each conversation turn.
The implementations returns a summary of the conversation history which
can be used to provide context to the model.
"""
buffer: str = ""
memory_key: str = "history" #: :meta private:
@classmethod
def from_messages(
cls,
llm: BaseLanguageModel,
chat_memory: BaseChatMessageHistory,
*,
summarize_step: int = 2,
**kwargs: Any,
) -> ConversationSummaryMemory:
obj = cls(llm=llm, chat_memory=chat_memory, **kwargs)
for i in range(0, len(obj.chat_memory.messages), summarize_step):
obj.buffer = obj.predict_new_summary(
obj.chat_memory.messages[i : i + summarize_step], obj.buffer
)
return obj
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
if self.return_messages:
buffer: Any = [self.summary_message_cls(content=self.buffer)]
else:
buffer = self.buffer
return {self.memory_key: buffer}
@pre_init
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
prompt_variables = values["prompt"].input_variables
expected_keys = {"summary", "new_lines"}
if expected_keys != set(prompt_variables):
raise ValueError(
"Got unexpected prompt input variables. The prompt expects "
f"{prompt_variables}, but it should have {expected_keys}."
)
return values
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
super().save_context(inputs, outputs)
self.buffer = self.predict_new_summary(
self.chat_memory.messages[-2:], self.buffer
)
def clear(self) -> None:
"""Clear memory contents."""
super().clear()
self.buffer = ""
| |
154462
|
from langchain_core.tracers.stdout import (
ConsoleCallbackHandler,
FunctionCallbackHandler,
)
__all__ = ["FunctionCallbackHandler", "ConsoleCallbackHandler"]
| |
154477
|
"""LangSmith evaluation utilities.
This module provides utilities for evaluating Chains and other language model
applications using LangChain evaluators and LangSmith.
For more information on the LangSmith API, see the `LangSmith API documentation <https://docs.smith.langchain.com/docs/>`_.
**Example**
.. code-block:: python
from langsmith import Client
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import EvaluatorType, RunEvalConfig, run_on_dataset
def construct_chain():
llm = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
llm,
"What's the answer to {your_input_key}"
)
return chain
evaluation_config = RunEvalConfig(
evaluators=[
EvaluatorType.QA, # "Correctness" against a reference answer
EvaluatorType.EMBEDDING_DISTANCE,
RunEvalConfig.Criteria("helpfulness"),
RunEvalConfig.Criteria({
"fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
}),
]
)
client = Client()
run_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config
)
**Attributes**
- ``arun_on_dataset``: Asynchronous function to evaluate a chain or other LangChain component over a dataset.
- ``run_on_dataset``: Function to evaluate a chain or other LangChain component over a dataset.
- ``RunEvalConfig``: Class representing the configuration for running evaluation.
- ``StringRunEvaluatorChain``: Class representing a string run evaluator chain.
- ``InputFormatError``: Exception raised when the input format is incorrect.
""" # noqa: E501
from langchain.smith.evaluation.config import RunEvalConfig
from langchain.smith.evaluation.runner_utils import (
InputFormatError,
arun_on_dataset,
run_on_dataset,
)
from langchain.smith.evaluation.string_run_evaluator import StringRunEvaluatorChain
__all__ = [
"InputFormatError",
"arun_on_dataset",
"run_on_dataset",
"StringRunEvaluatorChain",
"RunEvalConfig",
]
| |
154496
|
"""**Chat Models** are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a "text in, text out" API, they expose
an interface where "chat messages" are the inputs and outputs.
**Class hierarchy:**
.. code-block::
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
**Main helpers:**
.. code-block::
AIMessage, BaseMessage, HumanMessage
""" # noqa: E501
import warnings
from langchain_core._api import LangChainDeprecationWarning
from langchain._api.interactive_env import is_interactive_env
from langchain.chat_models.base import init_chat_model
def __getattr__(name: str) -> None:
from langchain_community import chat_models
# If not in interactive env, raise warning.
if not is_interactive_env():
warnings.warn(
"Importing chat models from langchain is deprecated. Importing from "
"langchain will no longer be supported as of langchain==0.2.0. "
"Please import from langchain-community instead:\n\n"
f"`from langchain_community.chat_models import {name}`.\n\n"
"To install langchain-community run `pip install -U langchain-community`.",
category=LangChainDeprecationWarning,
)
return getattr(chat_models, name)
__all__ = [
"init_chat_model",
"ChatOpenAI",
"BedrockChat",
"AzureChatOpenAI",
"FakeListChatModel",
"PromptLayerChatOpenAI",
"ChatDatabricks",
"ChatEverlyAI",
"ChatAnthropic",
"ChatCohere",
"ChatGooglePalm",
"ChatMlflow",
"ChatMLflowAIGateway",
"ChatOllama",
"ChatVertexAI",
"JinaChat",
"HumanInputChatModel",
"MiniMaxChat",
"ChatAnyscale",
"ChatLiteLLM",
"ErnieBotChat",
"ChatJavelinAIGateway",
"ChatKonko",
"PaiEasChatEndpoint",
"QianfanChatEndpoint",
"ChatFireworks",
"ChatYandexGPT",
"ChatBaichuan",
"ChatHunyuan",
"GigaChat",
"VolcEngineMaasChat",
]
| |
154500
|
from typing import TYPE_CHECKING, Any
from langchain._api import create_importer
if TYPE_CHECKING:
from langchain_community.chat_models.openai import ChatOpenAI
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
DEPRECATED_LOOKUP = {"ChatOpenAI": "langchain_community.chat_models.openai"}
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
def __getattr__(name: str) -> Any:
"""Look up attributes dynamically."""
return _import_attribute(name)
__all__ = [
"ChatOpenAI",
]
| |
154539
|
class AgentExecutor(Chain):
"""Agent that is using tools."""
agent: Union[BaseSingleActionAgent, BaseMultiActionAgent, Runnable]
"""The agent to run for creating a plan and determining actions
to take at each step of the execution loop."""
tools: Sequence[BaseTool]
"""The valid tools the agent can call."""
return_intermediate_steps: bool = False
"""Whether to return the agent's trajectory of intermediate steps
at the end in addition to the final output."""
max_iterations: Optional[int] = 15
"""The maximum number of steps to take before ending the execution
loop.
Setting to 'None' could lead to an infinite loop."""
max_execution_time: Optional[float] = None
"""The maximum amount of wall clock time to spend in the execution
loop.
"""
early_stopping_method: str = "force"
"""The method to use for early stopping if the agent never
returns `AgentFinish`. Either 'force' or 'generate'.
`"force"` returns a string saying that it stopped because it met a
time or iteration limit.
`"generate"` calls the agent's LLM Chain one final time to generate
a final answer based on the previous steps.
"""
handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = (
False
)
"""How to handle errors raised by the agent's output parser.
Defaults to `False`, which raises the error.
If `true`, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent
as an observation.
"""
trim_intermediate_steps: Union[
int, Callable[[List[Tuple[AgentAction, str]]], List[Tuple[AgentAction, str]]]
] = -1
"""How to trim the intermediate steps before returning them.
Defaults to -1, which means no trimming.
"""
@classmethod
def from_agent_and_tools(
cls,
agent: Union[BaseSingleActionAgent, BaseMultiActionAgent, Runnable],
tools: Sequence[BaseTool],
callbacks: Callbacks = None,
**kwargs: Any,
) -> AgentExecutor:
"""Create from agent and tools.
Args:
agent: Agent to use.
tools: Tools to use.
callbacks: Callbacks to use.
kwargs: Additional arguments.
Returns:
AgentExecutor: Agent executor object.
"""
return cls(
agent=agent,
tools=tools,
callbacks=callbacks,
**kwargs,
)
@model_validator(mode="after")
def validate_tools(self) -> Self:
"""Validate that tools are compatible with agent.
Args:
values: Values to validate.
Returns:
Dict: Validated values.
Raises:
ValueError: If allowed tools are different than provided tools.
"""
agent = self.agent
tools = self.tools
allowed_tools = agent.get_allowed_tools() # type: ignore
if allowed_tools is not None:
if set(allowed_tools) != set([tool.name for tool in tools]):
raise ValueError(
f"Allowed tools ({allowed_tools}) different than "
f"provided tools ({[tool.name for tool in tools]})"
)
return self
@model_validator(mode="before")
@classmethod
def validate_runnable_agent(cls, values: Dict) -> Any:
"""Convert runnable to agent if passed in.
Args:
values: Values to validate.
Returns:
Dict: Validated values.
"""
agent = values.get("agent")
if agent and isinstance(agent, Runnable):
try:
output_type = agent.OutputType
except Exception as _:
multi_action = False
else:
multi_action = output_type == Union[List[AgentAction], AgentFinish]
stream_runnable = values.pop("stream_runnable", True)
if multi_action:
values["agent"] = RunnableMultiActionAgent(
runnable=agent, stream_runnable=stream_runnable
)
else:
values["agent"] = RunnableAgent(
runnable=agent, stream_runnable=stream_runnable
)
return values
@property
def _action_agent(self) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:
"""Type cast self.agent.
The .agent attribute type includes Runnable, but is converted to one of
RunnableAgentType in the validate_runnable_agent root_validator.
To support instantiating with a Runnable, here we explicitly cast the type
to reflect the changes made in the root_validator.
"""
if isinstance(self.agent, Runnable):
return cast(RunnableAgentType, self.agent)
else:
return self.agent
def save(self, file_path: Union[Path, str]) -> None:
"""Raise error - saving not supported for Agent Executors.
Args:
file_path: Path to save to.
Raises:
ValueError: Saving not supported for agent executors.
"""
raise ValueError(
"Saving not supported for agent executors. "
"If you are trying to save the agent, please use the "
"`.save_agent(...)`"
)
def save_agent(self, file_path: Union[Path, str]) -> None:
"""Save the underlying agent.
Args:
file_path: Path to save to.
"""
return self._action_agent.save(file_path)
def iter(
self,
inputs: Any,
callbacks: Callbacks = None,
*,
include_run_info: bool = False,
async_: bool = False, # arg kept for backwards compat, but ignored
) -> AgentExecutorIterator:
"""Enables iteration over steps taken to reach final output.
Args:
inputs: Inputs to the agent.
callbacks: Callbacks to run.
include_run_info: Whether to include run info.
async_: Whether to run async. (Ignored)
Returns:
AgentExecutorIterator: Agent executor iterator object.
"""
return AgentExecutorIterator(
self,
inputs,
callbacks,
tags=self.tags,
include_run_info=include_run_info,
)
@property
def input_keys(self) -> List[str]:
"""Return the input keys.
:meta private:
"""
return self._action_agent.input_keys
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if self.return_intermediate_steps:
return self._action_agent.return_values + ["intermediate_steps"]
else:
return self._action_agent.return_values
def lookup_tool(self, name: str) -> BaseTool:
"""Lookup tool by name.
Args:
name: Name of tool.
Returns:
BaseTool: Tool object.
"""
return {tool.name: tool for tool in self.tools}[name]
def _should_continue(self, iterations: int, time_elapsed: float) -> bool:
if self.max_iterations is not None and iterations >= self.max_iterations:
return False
if (
self.max_execution_time is not None
and time_elapsed >= self.max_execution_time
):
return False
return True
def _return(
self,
output: AgentFinish,
intermediate_steps: list,
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if run_manager:
run_manager.on_agent_finish(output, color="green", verbose=self.verbose)
final_output = output.return_values
if self.return_intermediate_steps:
final_output["intermediate_steps"] = intermediate_steps
return final_output
async def _areturn(
self,
output: AgentFinish,
intermediate_steps: list,
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if run_manager:
await run_manager.on_agent_finish(
output, color="green", verbose=self.verbose
)
final_output = output.return_values
if self.return_intermediate_steps:
final_output["intermediate_steps"] = intermediate_steps
return final_output
def _consume_next_step(
self, values: NextStepOutput
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
if isinstance(values[-1], AgentFinish):
assert len(values) == 1
return values[-1]
else:
return [
(a.action, a.observation) for a in values if isinstance(a, AgentStep)
]
| |
154540
|
def _take_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
return self._consume_next_step(
[
a
for a in self._iter_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager,
)
]
)
def _iter_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Iterator[Union[AgentFinish, AgentAction, AgentStep]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self._action_agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise ValueError(
"An output parsing error occurred. "
"In order to pass this error back to the agent and have it try "
"again, pass `handle_parsing_errors=True` to the AgentExecutor. "
f"This is the error: {str(e)}"
)
text = str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = str(e.observation)
text = str(e.llm_output)
else:
observation = "Invalid or incomplete response"
elif isinstance(self.handle_parsing_errors, str):
observation = self.handle_parsing_errors
elif callable(self.handle_parsing_errors):
observation = self.handle_parsing_errors(e)
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, text)
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
yield AgentStep(action=output, observation=observation)
return
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
yield output
return
actions: List[AgentAction]
if isinstance(output, AgentAction):
actions = [output]
else:
actions = output
for agent_action in actions:
yield agent_action
for agent_action in actions:
yield self._perform_agent_action(
name_to_tool_map, color_mapping, agent_action, run_manager
)
def _perform_agent_action(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
agent_action: AgentAction,
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> AgentStep:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
# Otherwise we lookup the tool
if agent_action.tool in name_to_tool_map:
tool = name_to_tool_map[agent_action.tool]
return_direct = tool.return_direct
color = color_mapping[agent_action.tool]
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
if return_direct:
tool_run_kwargs["llm_prefix"] = ""
# We then call the tool on the tool input to get an observation
observation = tool.run(
agent_action.tool_input,
verbose=self.verbose,
color=color,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
else:
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
observation = InvalidTool().run(
{
"requested_tool_name": agent_action.tool,
"available_tool_names": list(name_to_tool_map.keys()),
},
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
return AgentStep(action=agent_action, observation=observation)
async def _atake_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
return self._consume_next_step(
[
a
async for a in self._aiter_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager,
)
]
)
async def _aiter_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> AsyncIterator[Union[AgentFinish, AgentAction, AgentStep]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = await self._action_agent.aplan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise ValueError(
"An output parsing error occurred. "
"In order to pass this error back to the agent and have it try "
"again, pass `handle_parsing_errors=True` to the AgentExecutor. "
f"This is the error: {str(e)}"
)
text = str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = str(e.observation)
text = str(e.llm_output)
else:
observation = "Invalid or incomplete response"
elif isinstance(self.handle_parsing_errors, str):
observation = self.handle_parsing_errors
elif callable(self.handle_parsing_errors):
observation = self.handle_parsing_errors(e)
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, text)
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
observation = await ExceptionTool().arun(
output.tool_input,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
yield AgentStep(action=output, observation=observation)
return
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
yield output
return
actions: List[AgentAction]
if isinstance(output, AgentAction):
actions = [output]
else:
actions = output
for agent_action in actions:
yield agent_action
# Use asyncio.gather to run multiple tool.arun() calls concurrently
result = await asyncio.gather(
*[
self._aperform_agent_action(
name_to_tool_map, color_mapping, agent_action, run_manager
)
for agent_action in actions
],
)
# TODO This could yield each result as it becomes available
for chunk in result:
yield chunk
| |
154541
|
async def _aperform_agent_action(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
agent_action: AgentAction,
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> AgentStep:
if run_manager:
await run_manager.on_agent_action(
agent_action, verbose=self.verbose, color="green"
)
# Otherwise we lookup the tool
if agent_action.tool in name_to_tool_map:
tool = name_to_tool_map[agent_action.tool]
return_direct = tool.return_direct
color = color_mapping[agent_action.tool]
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
if return_direct:
tool_run_kwargs["llm_prefix"] = ""
# We then call the tool on the tool input to get an observation
observation = await tool.arun(
agent_action.tool_input,
verbose=self.verbose,
color=color,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
else:
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs()
observation = await InvalidTool().arun(
{
"requested_tool_name": agent_action.tool,
"available_tool_names": list(name_to_tool_map.keys()),
},
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
return AgentStep(action=agent_action, observation=observation)
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run text through and get agent response."""
# Construct a mapping of tool name to tool for easy lookup
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green", "red"]
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Let's start tracking the number of iterations and time elapsed
iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
while self._should_continue(iterations, time_elapsed):
next_step_output = self._take_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if isinstance(next_step_output, AgentFinish):
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
iterations += 1
time_elapsed = time.time() - start_time
output = self._action_agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
async def _acall(
self,
inputs: Dict[str, str],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, str]:
"""Async run text through and get agent response."""
# Construct a mapping of tool name to tool for easy lookup
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green"]
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Let's start tracking the number of iterations and time elapsed
iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
try:
async with asyncio_timeout(self.max_execution_time):
while self._should_continue(iterations, time_elapsed):
next_step_output = await self._atake_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if isinstance(next_step_output, AgentFinish):
return await self._areturn(
next_step_output,
intermediate_steps,
run_manager=run_manager,
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return await self._areturn(
tool_return, intermediate_steps, run_manager=run_manager
)
iterations += 1
time_elapsed = time.time() - start_time
output = self._action_agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return await self._areturn(
output, intermediate_steps, run_manager=run_manager
)
except (TimeoutError, asyncio.TimeoutError):
# stop early when interrupted by the async timeout
output = self._action_agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return await self._areturn(
output, intermediate_steps, run_manager=run_manager
)
def _get_tool_return(
self, next_step_output: Tuple[AgentAction, str]
) -> Optional[AgentFinish]:
"""Check if the tool is a returning tool."""
agent_action, observation = next_step_output
name_to_tool_map = {tool.name: tool for tool in self.tools}
return_value_key = "output"
if len(self._action_agent.return_values) > 0:
return_value_key = self._action_agent.return_values[0]
# Invalid tools won't be in the map, so we return False.
if agent_action.tool in name_to_tool_map:
if name_to_tool_map[agent_action.tool].return_direct:
return AgentFinish(
{return_value_key: observation},
"",
)
return None
def _prepare_intermediate_steps(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> List[Tuple[AgentAction, str]]:
if (
isinstance(self.trim_intermediate_steps, int)
and self.trim_intermediate_steps > 0
):
return intermediate_steps[-self.trim_intermediate_steps :]
elif callable(self.trim_intermediate_steps):
return self.trim_intermediate_steps(intermediate_steps)
else:
return intermediate_steps
def stream(
self,
input: Union[Dict[str, Any], Any],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Iterator[AddableDict]:
"""Enables streaming over steps taken to reach final output.
Args:
input: Input to the agent.
config: Config to use.
kwargs: Additional arguments.
Yields:
AddableDict: Addable dictionary.
"""
config = ensure_config(config)
iterator = AgentExecutorIterator(
self,
input,
config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.get("run_id"),
yield_actions=True,
**kwargs,
)
for step in iterator:
yield step
async def astream(
self,
input: Union[Dict[str, Any], Any],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> AsyncIterator[AddableDict]:
"""Async enables streaming over steps taken to reach final output.
Args:
input: Input to the agent.
config: Config to use.
kwargs: Additional arguments.
Yields:
AddableDict: Addable dictionary.
"""
config = ensure_config(config)
iterator = AgentExecutorIterator(
self,
input,
config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.get("run_id"),
yield_actions=True,
**kwargs,
)
async for step in iterator:
yield step
| |
154544
|
"""Module definitions of agent types together with corresponding agents."""
from enum import Enum
from langchain_core._api import deprecated
@deprecated(
"0.1.0",
message=(
"Use new agent constructor methods like create_react_agent, create_json_agent, "
"create_structured_chat_agent, etc."
),
removal="1.0",
)
class AgentType(str, Enum):
"""An enum for agent types.
See documentation: https://python.langchain.com/docs/modules/agents/agent_types/
"""
ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"
"""A zero shot agent that does a reasoning step before acting."""
REACT_DOCSTORE = "react-docstore"
"""A zero shot agent that does a reasoning step before acting.
This agent has access to a document store that allows it to look up
relevant information to answering the question.
"""
SELF_ASK_WITH_SEARCH = "self-ask-with-search"
"""An agent that breaks down a complex question into a series of simpler questions.
This agent uses a search tool to look up answers to the simpler questions
in order to answer the original complex question.
"""
CONVERSATIONAL_REACT_DESCRIPTION = "conversational-react-description"
CHAT_ZERO_SHOT_REACT_DESCRIPTION = "chat-zero-shot-react-description"
"""A zero shot agent that does a reasoning step before acting.
This agent is designed to be used in conjunction
"""
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = "chat-conversational-react-description"
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = (
"structured-chat-zero-shot-react-description"
)
"""An zero-shot react agent optimized for chat models.
This agent is capable of invoking tools that have multiple inputs.
"""
OPENAI_FUNCTIONS = "openai-functions"
"""An agent optimized for using open AI functions."""
OPENAI_MULTI_FUNCTIONS = "openai-multi-functions"
| |
154550
|
"""Chain that does self-ask with search."""
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Sequence, Union
from langchain_core._api import deprecated
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import BasePromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool, Tool
from pydantic import Field
from langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser
from langchain.agents.agent_types import AgentType
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser
from langchain.agents.self_ask_with_search.prompt import PROMPT
from langchain.agents.utils import validate_tools_single_input
if TYPE_CHECKING:
from langchain_community.utilities.google_serper import GoogleSerperAPIWrapper
from langchain_community.utilities.searchapi import SearchApiAPIWrapper
from langchain_community.utilities.serpapi import SerpAPIWrapper
@deprecated("0.1.0", alternative="create_self_ask_with_search", removal="1.0")
class SelfAskWithSearchAgent(Agent):
"""Agent for the self-ask-with-search paper."""
output_parser: AgentOutputParser = Field(default_factory=SelfAskOutputParser)
@classmethod
def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
return SelfAskOutputParser()
@property
def _agent_type(self) -> str:
"""Return Identifier of an agent type."""
return AgentType.SELF_ASK_WITH_SEARCH
@classmethod
def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
"""Prompt does not depend on tools."""
return PROMPT
@classmethod
def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
validate_tools_single_input(cls.__name__, tools)
super()._validate_tools(tools)
if len(tools) != 1:
raise ValueError(f"Exactly one tool must be specified, but got {tools}")
tool_names = {tool.name for tool in tools}
if tool_names != {"Intermediate Answer"}:
raise ValueError(
f"Tool name should be Intermediate Answer, got {tool_names}"
)
@property
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
return "Intermediate answer: "
@property
def llm_prefix(self) -> str:
"""Prefix to append the LLM call with."""
return ""
@deprecated("0.1.0", removal="1.0")
class SelfAskWithSearchChain(AgentExecutor):
"""[Deprecated] Chain that does self-ask with search."""
def __init__(
self,
llm: BaseLanguageModel,
search_chain: Union[
GoogleSerperAPIWrapper, SearchApiAPIWrapper, SerpAPIWrapper
],
**kwargs: Any,
):
"""Initialize only with an LLM and a search chain."""
search_tool = Tool(
name="Intermediate Answer",
func=search_chain.run,
coroutine=search_chain.arun,
description="Search",
)
agent = SelfAskWithSearchAgent.from_llm_and_tools(llm, [search_tool])
super().__init__(agent=agent, tools=[search_tool], **kwargs)
def create_self_ask_with_search_agent(
llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate
) -> Runnable:
"""Create an agent that uses self-ask with search prompting.
Args:
llm: LLM to use as the agent.
tools: List of tools. Should just be of length 1, with that tool having
name `Intermediate Answer`
prompt: The prompt to use, must have input key `agent_scratchpad` which will
contain agent actions and tool outputs.
Returns:
A Runnable sequence representing an agent. It takes as input all the same input
variables as the prompt passed in does. It returns as output either an
AgentAction or AgentFinish.
Examples:
.. code-block:: python
from langchain import hub
from langchain_community.chat_models import ChatAnthropic
from langchain.agents import (
AgentExecutor, create_self_ask_with_search_agent
)
prompt = hub.pull("hwchase17/self-ask-with-search")
model = ChatAnthropic(model="claude-3-haiku-20240307")
tools = [...] # Should just be one tool with name `Intermediate Answer`
agent = create_self_ask_with_search_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "hi"})
Prompt:
The prompt must have input key `agent_scratchpad` which will
contain agent actions and tool outputs as a string.
Here's an example:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
template = '''Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Question: When was the founder of craigslist born?
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate answer: New Zealand.
So the final answer is: No
Question: {input}
Are followup questions needed here:{agent_scratchpad}'''
prompt = PromptTemplate.from_template(template)
""" # noqa: E501
missing_vars = {"agent_scratchpad"}.difference(
prompt.input_variables + list(prompt.partial_variables)
)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
if len(tools) != 1:
raise ValueError("This agent expects exactly one tool")
tool = list(tools)[0]
if tool.name != "Intermediate Answer":
raise ValueError(
"This agent expects the tool to be named `Intermediate Answer`"
)
llm_with_stop = llm.bind(stop=["\nIntermediate answer:"])
agent = (
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_log_to_str(
x["intermediate_steps"],
observation_prefix="\nIntermediate answer: ",
llm_prefix="",
),
# Give it a default
chat_history=lambda x: x.get("chat_history", ""),
)
| prompt
| llm_with_stop
| SelfAskOutputParser()
)
return agent
| |
154575
|
from typing import List, Sequence, Union
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.tools.render import ToolsRenderer, render_text_description
from langchain.agents.format_scratchpad import format_log_to_messages
from langchain.agents.json_chat.prompt import TEMPLATE_TOOL_RESPONSE
from langchain.agents.output_parsers import JSONAgentOutputParser
def create_json_chat_agent(
llm: BaseLanguageModel,
tools: Sequence[BaseTool],
prompt: ChatPromptTemplate,
stop_sequence: Union[bool, List[str]] = True,
tools_renderer: ToolsRenderer = render_text_description,
template_tool_response: str = TEMPLATE_TOOL_RESPONSE,
) -> Runnable:
"""Create an agent that uses JSON to format its logic, build for Chat Models.
Args:
llm: LLM to use as the agent.
tools: Tools this agent has access to.
prompt: The prompt to use. See Prompt section below for more.
stop_sequence: bool or list of str.
If True, adds a stop token of "Observation:" to avoid hallucinates.
If False, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
Default is True. You may to set this to False if the LLM you are using
does not support stop sequences.
tools_renderer: This controls how the tools are converted into a string and
then passed into the LLM. Default is `render_text_description`.
template_tool_response: Template prompt that uses the tool response (observation)
to make the LLM generate the next action to take.
Default is TEMPLATE_TOOL_RESPONSE.
Returns:
A Runnable sequence representing an agent. It takes as input all the same input
variables as the prompt passed in does. It returns as output either an
AgentAction or AgentFinish.
Raises:
ValueError: If the prompt is missing required variables.
ValueError: If the template_tool_response is missing
the required variable 'observation'.
Example:
.. code-block:: python
from langchain import hub
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_json_chat_agent
prompt = hub.pull("hwchase17/react-chat-json")
model = ChatOpenAI()
tools = ...
agent = create_json_chat_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "hi"})
# Using with chat history
from langchain_core.messages import AIMessage, HumanMessage
agent_executor.invoke(
{
"input": "what's my name?",
"chat_history": [
HumanMessage(content="hi! my name is bob"),
AIMessage(content="Hello Bob! How can I assist you today?"),
],
}
)
Prompt:
The prompt must have input keys:
* `tools`: contains descriptions and arguments for each tool.
* `tool_names`: contains all tool names.
* `agent_scratchpad`: must be a MessagesPlaceholder. Contains previous agent actions and tool outputs as messages.
Here's an example:
.. code-block:: python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
system = '''Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering \
simple questions to providing in-depth explanations and discussions on a wide range of \
topics. As a language model, Assistant is able to generate human-like text based on \
the input it receives, allowing it to engage in natural-sounding conversations and \
provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly \
evolving. It is able to process and understand large amounts of text, and can use this \
knowledge to provide accurate and informative responses to a wide range of questions. \
Additionally, Assistant is able to generate its own text based on the input it \
receives, allowing it to engage in discussions and provide explanations and \
descriptions on a wide range of topics.
Overall, Assistant is a powerful system that can help with a wide range of tasks \
and provide valuable insights and information on a wide range of topics. Whether \
you need help with a specific question or just want to have a conversation about \
a particular topic, Assistant is here to assist.'''
human = '''TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in \
answering the users original question. The tools the human can use are:
{tools}
RESPONSE FORMAT INSTRUCTIONS
----------------------------
When responding to me, please output a response in one of two formats:
**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:
```json
{{
"action": string, \\ The action to take. Must be one of {tool_names}
"action_input": string \\ The input to the action
}}
```
**Option #2:**
Use this if you want to respond directly to the human. Markdown code snippet formatted \
in the following schema:
```json
{{
"action": "Final Answer",
"action_input": string \\ You should put what you want to return to use here
}}
```
USER'S INPUT
--------------------
Here is the user's input (remember to respond with a markdown code snippet of a json \
blob with a single action, and NOTHING else):
{input}'''
prompt = ChatPromptTemplate.from_messages(
[
("system", system),
MessagesPlaceholder("chat_history", optional=True),
("human", human),
MessagesPlaceholder("agent_scratchpad"),
]
)
""" # noqa: E501
missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
prompt.input_variables + list(prompt.partial_variables)
)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
if "{observation}" not in template_tool_response:
raise ValueError(
"Template tool response missing required variable 'observation'"
)
prompt = prompt.partial(
tools=tools_renderer(list(tools)),
tool_names=", ".join([t.name for t in tools]),
)
if stop_sequence:
stop = ["\nObservation"] if stop_sequence is True else stop_sequence
llm_to_use = llm.bind(stop=stop)
else:
llm_to_use = llm
agent = (
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_log_to_messages(
x["intermediate_steps"], template_tool_response=template_tool_response
)
)
| prompt
| llm_to_use
| JSONAgentOutputParser()
)
return agent
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.