167579
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Components Of LlamaIndex\n", "\n", "In this notebook we will demonstrate building RAG application and customize it using different components of LlamaIndex.\n", "\n", "1. Question Answering\n", "2. Summarization.\n", "3. ChatEngine.\n", "4. Customizing QA System.\n", "5. Index as Retriever.\n", "\n", "[ChatEngine Documentation](https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern/ )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Installation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install llama-index\n", "# !pip install llama-index-llms-openai\n", "# !pip install llama-index-embeddings-openai" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup API Key" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2024-04-28 04:34:09-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 75042 (73K) [text/plain]\n", "Saving to: ‘paul_graham_essay.txt’\n", "\n", "\r", "paul_graham_essay.t 0%[ ] 0 --.-KB/s \r", "paul_graham_essay.t 100%[===================>] 73.28K --.-KB/s in 0.01s \n", "\n", "2024-04-28 04:34:09 (5.45 MB/s) - ‘paul_graham_essay.txt’ saved [75042/75042]\n", "\n", "--2024-04-28 04:34:09-- http://paul_graham_essay.txt/\n", "Resolving paul_graham_essay.txt (paul_graham_essay.txt)... 
failed: Name or service not known.\n", "wget: unable to resolve host address ‘paul_graham_essay.txt’\n", "FINISHED --2024-04-28 04:34:09--\n", "Total wall clock time: 0.2s\n", "Downloaded: 1 files, 73K in 0.01s (5.45 MB/s)\n" ] } ], "source": [ "!wget \"https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\" \"paul_graham_essay.txt\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n", "\n", "documents = SimpleDirectoryReader(\n", " input_files=[\"paul_graham_essay.txt\"]\n", ").load_data()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set LLM and Embedding Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from llama_index.embeddings.openai import OpenAIEmbedding\n", "from llama_index.llms.openai import OpenAI\n", "from llama_index.core import Settings\n", "\n", "llm = OpenAI(model=\"gpt-3.5-turbo\", temperature=0.2)\n", "embed_model = OpenAIEmbedding()\n", "\n", "Settings.llm = llm\n", "Settings.embed_model = embed_model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create Nodes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from llama_index.core.node_parser import TokenTextSplitter\n", "\n", "splitter = TokenTextSplitter(chunk_size=1024, chunk_overlap=20)\n", "nodes = splitter.get_nodes_from_documents(documents)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [
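A minimal sketch of how the `nodes` produced by the `TokenTextSplitter` above are typically wired into the components listed in the introduction (question answering, summarization, chat engine, and index-as-retriever); it assumes the LLM and embedding model already registered via `Settings`:

```python
from llama_index.core import VectorStoreIndex, SummaryIndex

# Question answering over a vector index built from the nodes.
vector_index = VectorStoreIndex(nodes)
query_engine = vector_index.as_query_engine(similarity_top_k=2)
print(query_engine.query("What did Paul Graham do growing up?"))

# Summarization with a summary index over the same nodes.
summary_index = SummaryIndex(nodes)
print(summary_index.as_query_engine().query("Summarize the essay."))

# Chat engine that keeps conversational context across turns.
chat_engine = vector_index.as_chat_engine()
print(chat_engine.chat("What did the author work on before college?"))

# Index as retriever: fetch raw nodes without synthesizing an answer.
retriever = vector_index.as_retriever(similarity_top_k=3)
for node_with_score in retriever.retrieve("What is the essay about?"):
    print(node_with_score.score, node_with_score.node.get_content()[:80])
```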
167611
"with a Spearman rank correlation of 0.55 at the model level. This provides an\n", "additional data point suggesting that LLM-based automated evals could be a\n", "cost-effective and reasonable alternative to human evals.\n", "\n", "### How to apply evals?\n", "\n", "**Building solid evals should be the starting point** for any LLM-based system\n", "or product (as well as conventional machine learning systems).\n", "\n", "Unfortunately, classical metrics such as BLEU and ROUGE don’t make sense for\n", "more complex tasks such as abstractive summarization or dialogue. Furthermore,\n", "we’ve seen that benchmarks like MMLU (and metrics like ROUGE) are sensitive to\n", "how they’re implemented and measured. And to be candid, unless your LLM system\n", "is studying for a school exam, using MMLU as an eval [doesn’t quite make\n", "sense](https://twitter.com/Tim_Dettmers/status/1680782418335367169).\n", "\n", "Thus, instead of using off-the-shelf benchmarks, we can **start by collecting\n", "a set of task-specific evals** (i.e., prompt, context, expected outputs as\n", "references). These evals will then guide prompt engineering, model selection,\n", "fine-tuning, and so on. And as we update our systems, we can run these evals\n", "to quickly measure improvements or regressions. Think of it as Eval Driven\n", "Development (EDD).\n", "\n", "In addition to the evaluation dataset, we **also need useful metrics**. They\n", "help us distill performance changes into a single number that’s comparable\n", "across eval runs. And if we can simplify the problem, we can choose metrics\n", "that are easier to compute and interpret.\n", "\n", "The simplest task is probably classification: If we’re using an LLM for\n", "classification-like tasks (e.g., toxicity detection, document categorization)\n", "or extractive QA without dialogue, we can rely on standard classification\n", "metrics such as recall, precision, PRAUC, etc. If our task has no correct\n", "answer but we have references (e.g., machine translation, extractive\n", "summarization), we can rely on reference metrics based on matching (BLEU,\n", "ROUGE) or semantic similarity (BERTScore, MoverScore).\n", "\n", "However, these metrics may not work for more open-ended tasks such as\n", "abstractive summarization, dialogue, and others. But collecting human\n", "judgments can be slow and expensive. Thus, we may opt to lean on **automated\n", "evaluations via a strong LLM**.\n", "\n", "Relative to human judgments which are typically noisy (due to differing biases\n", "among annotators), LLM judgments tend to be less noisy (as the bias is more\n", "systematic) but more biased. Nonetheless, since we’re aware of these biases,\n", "we can mitigate them accordingly:\n", "\n", " * Position bias: LLMs tend to favor the response in the first position. To mitigate this, we can evaluate the same pair of responses twice while swapping their order. If the same response is preferred in both orders, we mark it as a win; else, it’s a tie.\n", " * Verbosity bias: LLMs tend to favor longer, wordier responses over more concise ones, even if the latter is clearer and of higher quality. A possible solution is to ensure that comparison responses are similar in length.\n", " * Self-enhancement bias: LLMs have a slight bias towards their own answers. 
[GPT-4 favors itself with a 10% higher win rate while Claude-v1 favors itself with a 25% higher win rate.](https://arxiv.org/abs/2306.05685) To counter this, don’t use the same LLM for evaluation tasks.\n", "\n", "Another tip: Rather than asking an LLM for a direct evaluation (via giving a\n", "score), try giving it a reference and asking for a comparison. This helps with\n", "reducing noise.\n", "\n", "Finally, sometimes the best eval is human eval aka vibe check. (Not to be\n", "confused with the poorly named code evaluation benchmark\n", "[HumanEval](https://arxiv.org/abs/2107.03374).) As mentioned in the [Latent\n", "Space podcast with MosaicML](https://www.latent.space/p/mosaic-mpt-7b#details)\n", "(34th minute):\n", "\n", "> The vibe-based eval cannot be underrated. … One of our evals was just having\n", "> a bunch of prompts and watching the answers as the models trained and see if\n", "> they change. Honestly, I don’t really believe that any of these eval metrics\n", "> capture what we care about. One of our prompts was “suggest games for a\n", "> 3-year-old and a 7-year-old to play” and that was a lot more valuable to see\n", "> how the answer changed during the course of training. — Jonathan Frankle\n", "\n", "Also see this [deep dive into evals](/writing/abstractive/) for abstractive\n", "summarization. It covers reference, context, and preference-based metrics, and\n", "also discusses hallucination detection.\n", "\n", "## Retrieval-Augmented Generation: To add knowledge\n", "\n", "Retrieval-Augmented Generation (RAG) fetches relevant data from outside the\n", "foundation model and enhances the input with this data, providing richer\n", "context to improve output.\n", "\n", "### Why RAG?\n", "\n", "RAG helps reduce hallucination by grounding the model on the retrieved\n", "context, thus increasing factuality. In addition, it’s cheaper to keep\n", "retrieval indices up-to-date than to continuously pre-train an LLM. This cost\n", "efficiency makes it easier to provide LLMs with access to recent data via RAG.\n", "Finally, if we need to update or remove data such as biased or toxic\n", "documents, it’s more straightforward to update the retrieval index (compared\n", "to fine-tuning or prompting an LLM not to generate toxic outputs).\n", "\n", "In short, RAG applies mature and simpler ideas from the field of information\n", "retrieval to support LLM generation. In a [recent Sequoia\n", "survey](https://www.sequoiacap.com/article/llm-stack-perspective/), 88% of\n", "respondents believe that retrieval will be a key component of their stack.\n", "\n", "### More about RAG\n", "\n", "Before diving into RAG, it helps to have a basic understanding of text\n", "embeddings. (Feel free to skip this section if you’re familiar with the\n", "subject.)\n", "\n", "A text embedding is a **compressed, abstract representation of text data**\n", "where text of arbitrary length can be represented as a fixed-size vector of\n", "numbers. It’s usually learned from a corpus of text such as Wikipedia. Think\n", "of them as a universal encoding for text, where **similar items are close to\n", "each other while dissimilar items are farther apart**.\n", "\n", "A good embedding is one that does well on a downstream task, such as\n", "retrieving similar items. 
Huggingface’s [Massive Text Embedding Benchmark\n", "(MTEB)](https://huggingface.co/spaces/mteb/leaderboard) scores various models\n", "on diverse tasks such as classification, clustering, retrieval, summarization,\n", "etc.\n", "\n", "Quick note: While we mainly discuss text embeddings here, embeddings can take\n", "many modalities. For example, [CLIP](https://arxiv.org/abs/2103.00020) is\n", "multimodal and embeds images and text in the same space, allowing us to find\n", "images most similar to an input text. We can also [embed products based on\n", "user behavior](/writing/search-query-matching/#supervised-techniques-improves-\n", "modeling-of-our-desired-event) (e.g., clicks, purchases) or [graph\n", "relationships](/writing/search-query-matching/#self-supervised-techniques-no-\n", "need-for-labels).\n", "\n",
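To make the "similar items are close to each other" property concrete, here is a minimal sketch using the sentence-transformers library discussed later in this post; the model name and example sentences are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

# Model choice is illustrative; any sentence-embedding model works the same way.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How do I reset my password?",
    "I forgot my login credentials.",
    "What is the capital of France?",
]
embeddings = model.encode(sentences)  # one fixed-size vector per sentence

# Cosine similarity: the two related questions score high, the unrelated one low.
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))
```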
167612
"**RAG has its roots in open-domain Q &A.** An early [Meta\n", "paper](https://arxiv.org/abs/2005.04611) showed that retrieving relevant\n", "documents via TF-IDF and providing them as context to a language model (BERT)\n", "improved performance on an open-domain QA task. They converted each task into\n", "a cloze statement and queried the language model for the missing token.\n", "\n", "Following that, **[Dense Passage Retrieval\n", "(DPR)](https://arxiv.org/abs/2004.04906)** showed that using dense embeddings\n", "(instead of a sparse vector space such as TF-IDF) for document retrieval can\n", "outperform strong baselines like Lucene BM25 (65.2% vs. 42.9% for top-5\n", "accuracy.) They also showed that higher retrieval precision translates to\n", "higher end-to-end QA accuracy, highlighting the importance of upstream\n", "retrieval.\n", "\n", "To learn the DPR embedding, they fine-tuned two independent BERT-based\n", "encoders on existing question-answer pairs. The passage encoder (\\\\(E_p\\\\))\n", "embeds text passages into vectors while the query encoder (\\\\(E_q\\\\)) embeds\n", "questions into vectors. The query embedding is then used to retrieve \\\\(k\\\\)\n", "passages that are most similar to the question.\n", "\n", "They trained the encoders so that the dot-product similarity makes a good\n", "ranking function, and optimized the loss function as the negative log-\n", "likelihood of the positive passage. The DPR embeddings are optimized for\n", "maximum inner product between the question and relevant passage vectors. The\n", "goal is to learn a vector space such that pairs of questions and their\n", "relevant passages are close together.\n", "\n", "For inference, they embed all passages (via \\\\(E_p\\\\)) and index them in FAISS\n", "offline. Then, given a question at query time, they compute the question\n", "embedding (via \\\\(E_q\\\\)), retrieve the top \\\\(k\\\\) passages via approximate\n", "nearest neighbors, and provide it to the language model (BERT) that outputs\n", "the answer to the question.\n", "\n", "**[Retrieval Augmented Generation (RAG)](https://arxiv.org/abs/2005.11401)** ,\n", "from which this pattern gets its name, highlighted the downsides of pre-\n", "trained LLMs. These include not being able to expand or revise memory, not\n", "providing insights into generated output, and hallucinations.\n", "\n", "To address these downsides, they introduced RAG (aka semi-parametric models).\n", "Dense vector retrieval serves as the non-parametric component while a pre-\n", "trained LLM acts as the parametric component. They reused the DPR encoders to\n", "initialize the retriever and build the document index. For the LLM, they used\n", "BART, a 400M parameter seq2seq model.\n", "\n", "![Overview of Retrieval Augmented Generation](/assets/rag.jpg)\n", "\n", "Overview of Retrieval Augmented Generation\n", "([source](https://arxiv.org/abs/2005.11401))\n", "\n", "During inference, they concatenate the input with the retrieved document.\n", "Then, the LLM generates \\\\(\\text{token}_i\\\\) based on the original input, the\n", "retrieved document, and the previous \\\\(i-1\\\\) tokens. For generation, they\n", "proposed two approaches that vary in how the retrieved passages are used to\n", "generate output.\n", "\n", "In the first approach, RAG-Sequence, the model uses the same document to\n", "generate the complete sequence. Thus, for \\\\(k\\\\) retrieved documents, the\n", "generator produces an output for each document. 
Then, the probability of each\n", "output sequence is marginalized (sum the probability of each output sequence\n", "in \\\\(k\\\\) and weigh it by the probability of each document being retrieved).\n", "Finally, the output sequence with the highest probability is selected.\n", "\n", "On the other hand, RAG-Token can generate each token based on a _different_\n", "document. Given \\\\(k\\\\) retrieved documents, the generator produces a\n", "distribution for the next output token for each document before marginalizing\n", "(aggregating all the individual token distributions.). The process is then\n", "repeated for the next token. This means that, for each token generation, it\n", "can retrieve a different set of \\\\(k\\\\) relevant documents based on the\n", "original input _and_ previously generated tokens. Thus, documents can have\n", "different retrieval probabilities and contribute differently to the next\n", "generated token.\n", "\n", "[**Fusion-in-Decoder (FiD)**](https://arxiv.org/abs/2007.01282) also uses\n", "retrieval with generative models for open-domain QA. It supports two methods\n", "for retrieval, BM25 (Lucene with default parameters) and DPR. FiD is named for\n", "how it performs fusion on the retrieved documents in the decoder only.\n", "\n", "![Overview of Fusion-in-Decoder](/assets/fid.jpg)\n", "\n", "Overview of Fusion-in-Decoder ([source](https://arxiv.org/abs/2007.01282))\n", "\n", "For each retrieved passage, the title and passage are concatenated with the\n", "question. These pairs are processed independently in the encoder. They also\n", "add special tokens such as `question:`, `title:`, and `context:` before their\n", "corresponding sections. The decoder attends over the concatenation of these\n", "retrieved passages.\n", "\n", "Because it processes passages independently in the encoder, it can scale to a\n", "large number of passages as it only needs to do self-attention over one\n", "context at a time. Thus, compute grows linearly (instead of quadratically)\n", "with the number of retrieved passages, making it more scalable than\n", "alternatives such as RAG-Token. Then, during decoding, the decoder processes\n", "the encoded passages jointly, allowing it to better aggregate context across\n", "multiple retrieved passages.\n", "\n", "[**Retrieval-Enhanced Transformer (RETRO)**](https://arxiv.org/abs/2112.04426)\n", "adopts a similar pattern where it combines a frozen BERT retriever, a\n", "differentiable encoder, and chunked cross-attention to generate output. What’s\n", "different is that RETRO does retrieval throughout the entire pre-training\n", "stage, and not just during inference. Furthermore, they fetch relevant\n", "documents based on chunks of the input. This allows for finer-grained,\n", "repeated retrieval during generation instead of only retrieving once per\n", "query.\n", "\n", "For each input chunk (\\\\(C_u\\\\)), the \\\\(k\\\\) retrieved chunks \\\\(RET(C_u)\\\\)\n", "are fed into an encoder. The output is the encoded neighbors \\\\(E^{j}_{u}\\\\)\n", "where \\\\(E^{j}_{u} = \\text{Encoder}(\\text{RET}(C_{u})^{j}, H_{u}) \\in\n", "\\mathbb{R}^{r \\times d_{0}}\\\\). Here, each chunk encoding is conditioned on\n", "\\\\(H_u\\\\) (the intermediate activations) and the activations of chunk\n", "\\\\(C_u\\\\) through cross-attention layers. 
In short, the encoding of the\n", "retrieved chunks depends on the attended activation of the input chunk.\n", "\\\\(E^{j}_{u}\\\\) is then used to condition the generation of the next chunk.\n", "\n", "![Overview of RETRO](/assets/retro.jpg)\n", "\n", "Overview of RETRO ([source](https://arxiv.org/abs/2112.04426))\n", "\n", "During retrieval, RETRO splits the input sequence into chunks of 64 tokens.\n", "Then, it finds text similar to the _previous_ chunk to provide context to the\n", "_current_ chunk. The retrieval index consists of two contiguous chunks of\n",
167614
"embedding models. It comes with pre-trained embeddings for 157 languages and\n", "is extremely fast, even without a GPU. It’s my go-to for early-stage proof of\n", "concepts.\n", "\n", "Another good baseline is [sentence-\n", "transformers](https://github.com/UKPLab/sentence-transformers). It makes it\n", "simple to compute embeddings for sentences, paragraphs, and even images. It’s\n", "based on workhorse transformers such as BERT and RoBERTa and is available in\n", "more than 100 languages.\n", "\n", "More recently, instructor models have shown SOTA performance. During training,\n", "these models prepend the task description to the text. Then, when embedding\n", "new text, we simply have to describe the task to get task-specific embeddings.\n", "(Not that different from instruction tuning for embedding models IMHO.)\n", "\n", "An example is the [E5](https://arxiv.org/abs/2212.03533) family of models. For\n", "open QA and information retrieval, we simply prepend documents in the index\n", "with `passage:`, and prepend queries with `query:`. If the task is symmetric\n", "(e.g., semantic similarity, paraphrase retrieval) or if we want to use\n", "embeddings as features (e.g., classification, clustering), we just use the\n", "`query:` prefix.\n", "\n", "The [Instructor](https://arxiv.org/abs/2212.09741) model takes it a step\n", "further, allowing users to customize the prepended prompt: “Represent the\n", "`domain` `task_type` for the `task_objective`:” For example, “Represent the\n", "Wikipedia document for retrieval:”. (The domain and task objective are\n", "optional). This brings the concept of prompt tuning into the field of text\n", "embedding.\n", "\n", "Finally, as of Aug 1st, the top embedding model on the [MTEB\n", "Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) is the\n", "[GTE](https://huggingface.co/thenlper/gte-large) family of models by Alibaba\n", "DAMO Academy. The top performing model’s size is half of the next best model\n", "`e5-large-v2` (0.67GB vs 1.34GB). In 2nd position is `gte-base` with a model\n", "size of only 0.22GB and embedding dimension of 768. (H/T\n", "[Nirant](https://twitter.com/NirantK).)\n", "\n", "To retrieve documents with low latency at scale, we use approximate nearest\n", "neighbors (ANN). It optimizes for retrieval speed and returns the approximate\n", "(instead of exact) top \\\\(k\\\\) most similar neighbors, trading off a little\n", "accuracy loss for a large speed up.\n", "\n", "ANN embedding indices are data structures that let us do ANN searches\n", "efficiently. At a high level, they build partitions over the embedding space\n", "so we can quickly zoom in on the specific space where the query vector is.\n", "Some popular techniques include:\n", "\n", " * [Locality Sensitive Hashing](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) (LSH): The core idea is to create hash functions so that similar items are likely to end up in the same hash bucket. By only needing to check the relevant buckets, we can perform ANN queries efficiently.\n", " * [Facebook AI Similarity Search](https://github.com/facebookresearch/faiss) (FAISS): It uses a combination of quantization and indexing for efficient retrieval, supports both CPU and GPU, and can handle billions of vectors due to its efficient use of memory.\n", " * [Hierarchical Navigable Small Worlds](https://github.com/nmslib/hnswlib) (HNSW): Inspired by “six degrees of separation”, it builds a hierarchical graph structure that embodies the small world phenomenon. 
Here, most nodes can be reached from any other node via a minimum number of hops. This structure allows HNSW to initiate queries from broader, coarser approximations and progressively narrow the search at lower levels.\n", " * [Scalable Nearest Neighbors](https://github.com/google-research/google-research/tree/master/scann) (ScaNN): It has a two-step process. First, coarse quantization reduces the search space. Then, fine-grained search is done within the reduced set. Best recall/latency trade-off I’ve seen.\n", "\n", "When evaluating an ANN index, some factors to consider include:\n", "\n", " * Recall: How does it fare against exact nearest neighbors?\n", " * Latency/throughput: How many queries can it handle per second?\n", " * Memory footprint: How much RAM is required to serve an index?\n", " * Ease of adding new items: Can new items be added without having to reindex all documents (LSH) or does the index need to be rebuilt (ScaNN)?\n", "\n", "No single framework is better than all others in every aspect. Thus, start by\n", "defining your functional and non-functional requirements before benchmarking.\n", "Personally, I’ve found ScaNN to be outstanding in the recall-latency trade-off\n", "(see benchmark graph [here](/writing/real-time-recommendations/#how-to-design-\n", "and-implement-an-mvp)).\n", "\n", "## Fine-tuning: To get better at specific tasks\n", "\n", "Fine-tuning is the process of taking a pre-trained model (that has already\n", "been trained with a vast amount of data) and further refining it on a specific\n", "task. The intent is to harness the knowledge that the model has already\n", "acquired during its pre-training and apply it to a specific task, usually\n", "involving a smaller, task-specific, dataset.\n", "\n", "The term “fine-tuning” is used loosely and can refer to several concepts such\n", "as:\n", "\n", " * **Continued pre-training** : With domain-specific data, apply the same pre-training regime (next token prediction, masked language modeling) on the base model.\n", " * **Instruction fine-tuning** : The pre-trained (base) model is fine-tuned on examples of instruction-output pairs to follow instructions, answer questions, be waifu, etc.\n", " * **Single-task fine-tuning** : The pre-trained model is honed for a narrow and specific task such as toxicity detection or summarization, similar to BERT and T5.\n", " * **Reinforcement learning with human feedback (RLHF)** : This combines instruction fine-tuning with reinforcement learning. It requires collecting human preferences (e.g., pairwise comparisons) which are then used to train a reward model. The reward model is then used to further fine-tune the instructed LLM via RL techniques such as proximal policy optimization (PPO).\n", "\n", "We’ll mainly focus on single-task and instruction fine-tuning here.\n", "\n", "### Why fine-tuning?\n", "\n", "Fine-tuning an open LLM is becoming an increasingly viable alternative to\n", "using a 3rd-party, cloud-based LLM for several reasons.\n", "\n", "**Performance & control:** Fine-tuning can improve the performance of an off-\n", "the-shelf base model, and may even surpass a 3rd-party LLM. It also provides\n", "greater control over LLM behavior, resulting in a more robust system or\n", "product. Overall, fine-tuning enables us to build products that are\n", "differentiated from simply using 3rd-party or open LLMs.\n", "\n", "**Modularization:** Single-task fine-tuning lets us to use an army of smaller\n", "models that each specialize on their own tasks. 
Via this setup, a system can\n", "be modularized into individual models for tasks like content moderation,\n", "extraction, summarization, etc. Also, given that each model only has to focus\n", "on a narrow set of tasks, we can get around the alignment tax, where fine-\n", "tuning a model on one task reduces performance on other tasks.\n", "\n", "**Reduced dependencies:** By fine-tuning and hosting our own models, we can\n", "reduce legal concerns about proprietary data (e.g., PII, internal documents\n", "and code) being exposed to external APIs. It also gets around constraints that\n",
167618
"tuning with the HHH prompt led to better performance compared to fine-tuning\n", "with RLHF.\n", "\n", "![Example of HHH prompt](/assets/hhh.jpg)\n", "\n", "Example of HHH prompt ([source](https://arxiv.org/abs/2204.05862))\n", "\n", "**A more common approach is to validate the output.** An example is the\n", "[Guardrails package](https://github.com/ShreyaR/guardrails). It allows users\n", "to add structural, type, and quality requirements on LLM outputs via Pydantic-\n", "style validation. And if the check fails, it can trigger corrective action\n", "such as filtering on the offending output or regenerating another response.\n", "\n", "Most of the validation logic is in\n", "[`validators.py`](https://github.com/ShreyaR/guardrails/blob/main/guardrails/validators.py).\n", "It’s interesting to see how they’re implemented. Broadly speaking, its\n", "validators fall into the following categories:\n", "\n", " * Single output value validation: This includes ensuring that the output (i) is one of the predefined choices, (ii) has a length within a certain range, (iii) if numeric, falls within an expected range, and (iv) is a complete sentence.\n", " * Syntactic checks: This includes ensuring that generated URLs are valid and reachable, and that Python and SQL code is bug-free.\n", " * Semantic checks: This verifies that the output is aligned with the reference document, or that the extractive summary closely matches the source document. These checks can be done via cosine similarity or fuzzy matching techniques.\n", " * Safety checks: This ensures that the generated output is free of inappropriate language or that the quality of translated text is high.\n", "\n", "Nvidia’s [NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) follows\n", "a similar principle but is designed to guide LLM-based conversational systems.\n", "Rather than focusing on syntactic guardrails, it emphasizes semantic ones.\n", "This includes ensuring that the assistant steers clear of politically charged\n", "topics, provides factually correct information, and can detect jailbreaking\n", "attempts.\n", "\n", "Thus, NeMo’s approach is somewhat different: Instead of using more\n", "deterministic checks like verifying if a value exists in a list or inspecting\n", "code for syntax errors, NeMo leans heavily on using another LLM to validate\n", "outputs (inspired by [SelfCheckGPT](https://arxiv.org/abs/2303.08896)).\n", "\n", "In their example for fact-checking and preventing hallucination, they ask the\n", "LLM itself to check whether the most recent output is consistent with the\n", "given context. To fact-check, the LLM is queried if the response is true based\n", "on the documents retrieved from the knowledge base. To prevent hallucinations,\n", "since there isn’t a knowledge base available, they get the LLM to generate\n", "multiple alternative completions which serve as the context. The underlying\n", "assumption is that if the LLM produces multiple completions that disagree with\n", "one another, the original completion is likely a hallucination.\n", "\n", "The moderation example follows a similar approach: The response is screened\n", "for harmful and unethical content via an LLM. Given the nuance of ethics and\n", "harmful content, heuristics and conventional machine learning techniques fall\n", "short. 
Thus, an LLM is required for a deeper understanding of the intent and\n", "structure of dialogue.\n", "\n", "Apart from using guardrails to verify the output of LLMs, we can also\n", "**directly steer the output to adhere to a specific grammar.** An example of\n", "this is Microsoft’s [Guidance](https://github.com/microsoft/guidance). Unlike\n", "Guardrails which [imposes JSON schema via a\n", "prompt](https://github.com/ShreyaR/guardrails/blob/main/guardrails/constants.xml#L14),\n", "Guidance enforces the schema by injecting tokens that make up the structure.\n", "\n", "We can think of Guidance as a domain-specific language for LLM interactions\n", "and output. It draws inspiration from [Handlebars](https://handlebarsjs.com),\n", "a popular templating language used in web applications that empowers users to\n", "perform variable interpolation and logical control.\n", "\n", "However, Guidance sets itself apart from regular templating languages by\n", "executing linearly. This means it maintains the order of tokens generated.\n", "Thus, by inserting tokens that are part of the structure—instead of relying on\n", "the LLM to generate them correctly—Guidance can dictate the specific output\n", "format. In their examples, they show how to [generate JSON that’s always\n", "valid](https://github.com/microsoft/guidance#guaranteeing-valid-syntax-json-\n", "example-notebook), [generate complex output\n", "formats](https://github.com/microsoft/guidance#rich-output-structure-example-\n", "notebook) with multiple keys, ensure that LLMs [play the right\n", "roles](https://github.com/microsoft/guidance#role-based-chat-model-example-\n", "notebook), and have [agents interact with each\n", "other](https://github.com/microsoft/guidance#agents-notebook).\n", "\n", "They also introduced a concept called [token\n", "healing](https://github.com/microsoft/guidance#token-healing-notebook), a\n", "useful feature that helps avoid subtle bugs that occur due to tokenization. In\n", "simple terms, it rewinds the generation by one token before the end of the\n", "prompt and then restricts the first generated token to have a prefix matching\n", "the last token in the prompt. This eliminates the need to fret about token\n", "boundaries when crafting prompts.\n", "\n", "### How to apply guardrails?\n", "\n", "Though the concept of guardrails for LLMs in industry is still nascent, there\n", "are a handful of immediately useful and practical strategies we can consider.\n", "\n", "**Structural guidance:** Apply guidance whenever possible. It provides direct\n", "control over outputs and offers a more precise method to ensure that output\n", "conforms to a specific structure or format.\n", "\n", "**Syntactic guardrails:** These include checking if categorical output is\n", "within a set of acceptable choices, or if numeric output is within an expected\n", "range. Also, if we generate SQL, these can verify its free from syntax errors\n", "and also ensure that all columns in the query match the schema. Ditto for\n", "generating code (e.g., Python, JavaScript).\n", "\n", "**Content safety guardrails:** These verify that the output has no harmful or\n", "inappropriate content. It can be as simple as checking against the [List of\n", "Dirty, Naughty, Obscene, and Otherwise Bad\n", "Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-\n", "Bad-Words) or using [profanity detection](https://pypi.org/project/profanity-\n", "check/) models. 
(It’s [common to run moderation classifiers on\n", "output](https://twitter.com/goodside/status/1685023251532320768).) More\n", "complex and nuanced output can rely on an LLM evaluator.\n", "\n", "**Semantic/factuality guardrails:** These confirm that the output is\n", "semantically relevant to the input. Say we’re generating a two-sentence\n", "summary of a movie based on its synopsis. We can validate if the produced\n", "summary is semantically similar to the input, or have (another) LLM ascertain\n", "if the summary accurately represents the provided synopsis.\n", "\n", "**Input guardrails:** These limit the types of input the model will respond\n", "to, helping to mitigate the risk of the model responding to inappropriate or\n", "adversarial prompts which would lead to generating harmful content. For\n", "example, you’ll get an error if you ask Midjourney to generate NSFW content.\n", "This can be as straightforward as comparing against a list of strings or using\n", "a moderation classifier.\n", "\n",
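As a concrete illustration of the simpler guardrails above, here is a minimal sketch of a syntactic check on categorical output, with regeneration as the corrective action; `call_llm`, the label set, and the retry budget are placeholders rather than any particular library's API:

```python
# Minimal sketch of a syntactic guardrail: the model's answer must be one of a
# fixed set of labels, and a failed check triggers regeneration with a sharper
# instruction. `call_llm` is a placeholder for whatever LLM client you use.
ALLOWED_LABELS = {"positive", "negative", "neutral"}


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def classify_with_guardrail(text: str, max_retries: int = 2) -> str:
    prompt = (
        "Classify the sentiment of the following text as "
        f"positive, negative, or neutral:\n\n{text}"
    )
    for _ in range(max_retries + 1):
        label = call_llm(prompt).strip().lower().rstrip(".")
        if label in ALLOWED_LABELS:  # syntactic check: categorical output
            return label
        # Corrective action: regenerate with a stricter instruction.
        prompt += "\n\nAnswer with exactly one word: positive, negative, or neutral."
    return "neutral"  # safe fallback if the model never complies
```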
167622
"\n", "Grusky, Max. [“Rogue Scores.”](https://aclanthology.org/2023.acl-long.107/)\n", "Proceedings of the 61st Annual Meeting of the Association for Computational\n", "Linguistics (Volume 1: Long Papers). 2023.\n", "\n", "Liu, Yang, et al. [“Gpteval: Nlg evaluation using gpt-4 with better human\n", "alignment.”](https://arxiv.org/abs/2303.16634) arXiv preprint arXiv:2303.16634\n", "(2023).\n", "\n", "Fourrier, Clémentine, et al. [“What’s going on with the Open LLM\n", "Leaderboard?”](https://huggingface.co/blog/evaluating-mmlu-leaderboard#whats-\n", "going-on-with-the-open-llm-leaderboard) (2023).\n", "\n", "Zheng, Lianmin, et al. [“Judging LLM-as-a-judge with MT-Bench and Chatbot\n", "Arena.”](https://arxiv.org/abs/2306.05685) arXiv preprint arXiv:2306.05685\n", "(2023).\n", "\n", "Dettmers, Tim, et al. [“Qlora: Efficient finetuning of quantized\n", "llms.”](https://arxiv.org/abs/2305.14314) arXiv preprint arXiv:2305.14314\n", "(2023).\n", "\n", "Swyx et al. [MPT-7B and The Beginning of\n", "Context=Infinity](https://www.latent.space/p/mosaic-mpt-7b#details) (2023).\n", "\n", "Fradin, Michelle, Reeder, Lauren [“The New Language Model\n", "Stack”](https://www.sequoiacap.com/article/llm-stack-perspective/) (2023).\n", "\n", "Radford, Alec, et al. [“Learning transferable visual models from natural\n", "language supervision.”](https://arxiv.org/abs/2103.00020) International\n", "conference on machine learning. PMLR, 2021.\n", "\n", "Yan, Ziyou. [“Search: Query Matching via Lexical, Graph, and Embedding\n", "Methods.”](https://eugeneyan.com/writing/search-query-matching/)\n", "eugeneyan.com, (2021).\n", "\n", "Petroni, Fabio, et al. [“How context affects language models’ factual\n", "predictions.”](https://arxiv.org/abs/2005.04611) arXiv preprint\n", "arXiv:2005.04611 (2020).\n", "\n", "Karpukhin, Vladimir, et al. [“Dense passage retrieval for open-domain question\n", "answering.”](https://arxiv.org/abs/2004.04906) arXiv preprint arXiv:2004.04906\n", "(2020).\n", "\n", "Lewis, Patrick, et al. [“Retrieval-augmented generation for knowledge-\n", "intensive nlp tasks.”](https://arxiv.org/abs/2005.11401) Advances in Neural\n", "Information Processing Systems 33 (2020): 9459-9474.\n", "\n", "Izacard, Gautier, and Edouard Grave. [“Leveraging passage retrieval with\n", "generative models for open domain question\n", "answering.”](https://arxiv.org/abs/2007.01282) arXiv preprint arXiv:2007.01282\n", "(2020).\n", "\n", "Borgeaud, Sebastian, et al. [“Improving language models by retrieving from\n", "trillions of tokens.”](https://arxiv.org/abs/2112.04426) International\n", "conference on machine learning. PMLR, (2022).\n", "\n", "Lazaridou, Angeliki, et al. [“Internet-augmented language models through few-\n", "shot prompting for open-domain question\n", "answering.”](https://arxiv.org/abs/2203.05115) arXiv preprint arXiv:2203.05115\n", "(2022).\n", "\n", "Wang, Yue, et al. [“Codet5+: Open code large language models for code\n", "understanding and generation.”](https://arxiv.org/abs/2305.07922) arXiv\n", "preprint arXiv:2305.07922 (2023).\n", "\n", "Gao, Luyu, et al. [“Precise zero-shot dense retrieval without relevance\n", "labels.”](https://arxiv.org/abs/2212.10496) arXiv preprint arXiv:2212.10496\n", "(2022).\n", "\n", "Yan, Ziyou. [“Obsidian-Copilot: An Assistant for Writing &\n", "Reflecting.”](https://eugeneyan.com/writing/obsidian-copilot/) eugeneyan.com,\n", "(2023).\n", "\n", "Bojanowski, Piotr, et al. 
[“Enriching word vectors with subword\n", "information.”](https://arxiv.org/abs/1607.04606) Transactions of the\n", "association for computational linguistics 5 (2017): 135-146.\n", "\n", "Reimers, Nils, and Iryna Gurevych. [“Making Monolingual Sentence Embeddings\n", "Multilingual Using Knowledge Distillation.”](https://arxiv.org/abs/2004.09813)\n", "Proceedings of the 2020 Conference on Empirical Methods in Natural Language\n", "Processing, Association for Computational Linguistics, (2020).\n", "\n", "Wang, Liang, et al. [“Text embeddings by weakly-supervised contrastive pre-\n", "training.”](https://arxiv.org/abs/2212.03533) arXiv preprint arXiv:2212.03533\n", "(2022).\n", "\n", "Su, Hongjin, et al. [“One embedder, any task: Instruction-finetuned text\n", "embeddings.”](https://arxiv.org/abs/2212.09741) arXiv preprint\n", "arXiv:2212.09741 (2022).\n", "\n", "Johnson, Jeff, et al. [“Billion-Scale Similarity Search with\n", "GPUs.”](https://arxiv.org/abs/1702.08734) IEEE Transactions on Big Data, vol.\n", "7, no. 3, IEEE, 2019, pp. 535–47.\n", "\n", "Malkov, Yu A., and Dmitry A. Yashunin. [“Efficient and Robust Approximate\n", "Nearest Neighbor Search Using Hierarchical Navigable Small World\n", "Graphs.”](https://arxiv.org/abs/1603.09320) IEEE Transactions on Pattern\n", "Analysis and Machine Intelligence, vol. 42, no. 4, IEEE, 2018, pp. 824–36.\n", "\n", "Guo, Ruiqi, et al. [“Accelerating Large-Scale Inference with Anisotropic\n", "Vector Quantization.”](https://arxiv.org/abs/1908.10396.) International\n", "Conference on Machine Learning, (2020)\n", "\n", "Ouyang, Long, et al. [“Training language models to follow instructions with\n", "human feedback.”](https://arxiv.org/abs/2203.02155) Advances in Neural\n", "Information Processing Systems 35 (2022): 27730-27744.\n", "\n", "Howard, Jeremy, and Sebastian Ruder. [“Universal language model fine-tuning\n", "for text classification.”](https://arxiv.org/abs/1801.06146) arXiv preprint\n", "arXiv:1801.06146 (2018).\n", "\n", "Devlin, Jacob, et al. [“Bert: Pre-training of deep bidirectional transformers\n",
167693
"Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets->FlagEmbedding==1.2.11) (3.10.5)\n", "Collecting scikit-learn (from sentence_transformers->FlagEmbedding==1.2.11)\n", " Downloading scikit_learn-1.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)\n", "Collecting scipy (from sentence_transformers->FlagEmbedding==1.2.11)\n", " Downloading scipy-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.8/60.8 kB\u001b[0m \u001b[31m29.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from sentence_transformers->FlagEmbedding==1.2.11) (9.3.0)\n", "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (2.4.0)\n", "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (1.3.1)\n", "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (23.1.0)\n", "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (1.4.1)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (6.0.5)\n", "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (1.9.11)\n", "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->FlagEmbedding==1.2.11) (4.0.3)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.33.0->FlagEmbedding==1.2.11) (2.1.1)\n", "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.33.0->FlagEmbedding==1.2.11) (3.4)\n", "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.33.0->FlagEmbedding==1.2.11) (1.26.13)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.33.0->FlagEmbedding==1.2.11) (2022.12.7)\n", "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->FlagEmbedding==1.2.11) (2.1.2)\n", "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->FlagEmbedding==1.2.11) (2.8.2)\n", "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->FlagEmbedding==1.2.11) (2024.1)\n", "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->FlagEmbedding==1.2.11) (2024.1)\n", "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->sentence_transformers->FlagEmbedding==1.2.11) (1.4.2)\n", "Collecting threadpoolctl>=3.1.0 (from scikit-learn->sentence_transformers->FlagEmbedding==1.2.11)\n", " Downloading threadpoolctl-3.5.0-py3-none-any.whl.metadata (13 kB)\n", "Requirement 
already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.6.0->FlagEmbedding==1.2.11) (1.3.0)\n", "Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.8.2->pandas->datasets->FlagEmbedding==1.2.11) (1.16.0)\n", "Downloading accelerate-0.34.0-py3-none-any.whl (324 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m324.3/324.3 kB\u001b[0m \u001b[31m47.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hDownloading transformers-4.44.2-py3-none-any.whl (9.5 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m9.5/9.5 MB\u001b[0m \u001b[31m98.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0mta \u001b[36m0:00:01\u001b[0m\n", "\u001b[?25hDownloading datasets-2.21.0-py3-none-any.whl (527 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m527.3/527.3 kB\u001b[0m \u001b[31m93.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hDownloading peft-0.12.0-py3-none-any.whl (296 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m296.4/296.4 kB\u001b[0m \u001b[31m57.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hDownloading sentence_transformers-3.0.1-py3-none-any.whl (227 kB)\n",
167695
"Installing collected packages: xxhash, threadpoolctl, scipy, safetensors, requests, pyarrow, fsspec, dill, scikit-learn, multiprocess, huggingface-hub, tokenizers, accelerate, transformers, datasets, sentence_transformers, peft, FlagEmbedding\n", " Attempting uninstall: requests\n", " Found existing installation: requests 2.31.0\n", " Uninstalling requests-2.31.0:\n", " Successfully uninstalled requests-2.31.0\n", " Attempting uninstall: fsspec\n", " Found existing installation: fsspec 2024.9.0\n", " Uninstalling fsspec-2024.9.0:\n", " Successfully uninstalled fsspec-2024.9.0\n", "Successfully installed FlagEmbedding-1.2.11 accelerate-0.34.0 datasets-2.21.0 dill-0.3.8 fsspec-2024.6.1 huggingface-hub-0.24.6 multiprocess-0.70.16 peft-0.12.0 pyarrow-17.0.0 requests-2.32.3 safetensors-0.4.4 scikit-learn-1.5.1 scipy-1.14.1 sentence_transformers-3.0.1 threadpoolctl-3.5.0 tokenizers-0.19.1 transformers-4.44.2 xxhash-3.5.0\n", "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n", "\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n", "Requirement already satisfied: llama-parse in /usr/local/lib/python3.10/dist-packages (0.5.2)\n", "Requirement already satisfied: llama-index-core>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from llama-parse) (0.11.5)\n", "Requirement already satisfied: PyYAML>=6.0.1 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (6.0.1)\n", "Requirement already satisfied: SQLAlchemy>=1.4.49 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy[asyncio]>=1.4.49->llama-index-core>=0.11.0->llama-parse) (2.0.34)\n", "Requirement already satisfied: aiohttp<4.0.0,>=3.8.6 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (3.10.5)\n", "Requirement already satisfied: dataclasses-json in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (0.6.7)\n", "Requirement already satisfied: deprecated>=1.2.9.3 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (1.2.14)\n", "Requirement already satisfied: dirtyjson<2.0.0,>=1.0.8 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (1.0.8)\n", "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (2024.6.1)\n", "Requirement already satisfied: httpx in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (0.27.2)\n", "Requirement already satisfied: nest-asyncio<2.0.0,>=1.5.8 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (1.5.8)\n", "Requirement already satisfied: networkx>=3.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (3.0)\n", "Requirement already satisfied: nltk>3.8.1 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (3.9.1)\n", "Requirement already satisfied: numpy<2.0.0 in 
/usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (1.24.1)\n", "Requirement already satisfied: pillow>=9.0.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (9.3.0)\n", "Requirement already satisfied: pydantic<3.0.0,>=2.7.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (2.8.2)\n", "Requirement already satisfied: requests>=2.31.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (2.32.3)\n", "Requirement already satisfied: tenacity!=8.4.0,<9.0.0,>=8.2.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (8.5.0)\n", "Requirement already satisfied: tiktoken>=0.3.3 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (0.7.0)\n", "Requirement already satisfied: tqdm<5.0.0,>=4.66.1 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (4.66.5)\n", "Requirement already satisfied: typing-extensions>=4.5.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (4.12.2)\n", "Requirement already satisfied: typing-inspect>=0.8.0 in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (0.9.0)\n", "Requirement already satisfied: wrapt in /usr/local/lib/python3.10/dist-packages (from llama-index-core>=0.11.0->llama-parse) (1.16.0)\n", "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.6->llama-index-core>=0.11.0->llama-parse) (2.4.0)\n", "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.6->llama-index-core>=0.11.0->llama-parse) (1.3.1)\n", "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.6->llama-index-core>=0.11.0->llama-parse) (23.1.0)\n",
167752
# Basic Strategies

There are many easy things to try when you need to quickly squeeze out extra performance and optimize your RAG workflow.

## Prompt Engineering

If you're encountering failures related to the LLM, like hallucinations or poorly formatted outputs, then this should be one of the first things you try. Some tasks are listed below, from simple to advanced.

1. Try inspecting the prompts used in your RAG workflow (e.g. the question-answering prompt) and customizing them.

   - [Customizing Prompts](../../examples/prompts/prompt_mixin.ipynb)
   - [Advanced Prompts](../../examples/prompts/advanced_prompts.ipynb)

2. Try adding **prompt functions**, allowing you to dynamically inject few-shot examples or process the injected inputs.

   - [Advanced Prompts](../../examples/prompts/advanced_prompts.ipynb)
   - [RAG Prompts](../../examples/prompts/prompts_rag.ipynb)

## Embeddings

Choosing the right embedding model plays a large role in overall performance.

- Maybe you need something better than the default `text-embedding-ada-002` model from OpenAI?
- Maybe you want to scale to a local server?
- Maybe you need an embedding model that works well for a specific language?

Beyond OpenAI, there are many options for embedding APIs, running your own embedding model locally, or even hosting your own server.

A great resource to check on the current best overall embedding models is the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard), which ranks embedding models on over 50 datasets and tasks.

**NOTE:** Unlike an LLM (which you can change at any time), if you change your embedding model, you must re-index your data. Furthermore, you should ensure the same embedding model is used for both indexing and querying.

We have a list of [all supported embedding model integrations](../../module_guides/models/embeddings.md).

## Chunk Sizes

Depending on the type of data you are indexing, or the results from your retrieval, you may want to customize the chunk size or chunk overlap.

When documents are ingested into an index, they are split into chunks with a certain amount of overlap. The default chunk size is 1024, while the default chunk overlap is 20.

Changing either of these parameters will change the embeddings that are calculated. A smaller chunk size means the embeddings are more precise, while a larger chunk size means that the embeddings may be more general, but can miss fine-grained details.

We have done our own [initial evaluation on chunk sizes here](https://blog.llamaindex.ai/evaluating-the-ideal-chunk-size-for-a-rag-system-using-llamaindex-6207e5d3fec5).

Furthermore, when changing the chunk size for a vector index, you may also want to increase the `similarity_top_k` parameter to better represent the amount of data to retrieve for each query. Here is a full example:

```
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core import Settings

documents = SimpleDirectoryReader("./data").load_data()

Settings.chunk_size = 512
Settings.chunk_overlap = 50

index = VectorStoreIndex.from_documents(
    documents,
)

query_engine = index.as_query_engine(similarity_top_k=4)
```

Since we halved the default chunk size, the example also doubles the `similarity_top_k` from the default of 2 to 4.

## Hybrid Search

Hybrid search is a common term for retrieval that involves combining results from both semantic search (i.e. embedding similarity) and keyword search. Embeddings are not perfect, and may fail to return text chunks with matching keywords in the retrieval step. The solution to this issue is often hybrid search. In LlamaIndex, there are two main ways to achieve this:

1. Use a vector database that has a hybrid search functionality (see [our complete list of supported vector stores](../../module_guides/storing/vector_stores.md)).
2. Set up a local hybrid search mechanism with BM25.

Relevant guides with both approaches can be found below:

- [BM25 Retriever](../../examples/retrievers/bm25_retriever.ipynb)
- [Reciprocal Rerank Query Fusion](../../examples/retrievers/reciprocal_rerank_fusion.ipynb)
- [Weaviate Hybrid Search](../../examples/vector_stores/WeaviateIndexDemo-Hybrid.ipynb)
- [Pinecone Hybrid Search](../../examples/vector_stores/PineconeIndexDemo-Hybrid.ipynb)
- [Milvus Hybrid Search](../../examples/vector_stores/MilvusHybridIndexDemo.ipynb)

## Metadata Filters

Before throwing your documents into a vector index, it can be useful to attach metadata to them. While this metadata can be used later on to help track the sources of answers from the `response` object, it can also be used at query time to filter data before performing the top-k similarity search.

Metadata filters can be set manually, so that only nodes with the matching metadata are returned:

```python
from llama_index.core import VectorStoreIndex, Document
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter

documents = [
    Document(text="text", metadata={"author": "LlamaIndex"}),
    Document(text="text", metadata={"author": "John Doe"}),
]

filters = MetadataFilters(
    filters=[ExactMatchFilter(key="author", value="John Doe")]
)

index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(filters=filters)
```

If you are using an advanced LLM like GPT-4, and your [vector database supports filtering](../../module_guides/storing/vector_stores.md), you can get the LLM to write filters automatically at query time, using an `AutoVectorRetriever`.

- [Vector Store Guide](../../module_guides/indexing/vector_store_guide.ipynb)

## Document/Node Usage

Take a look at our in-depth guides for more details on how to use Documents/Nodes.

- [Documents Usage](../../module_guides/loading/documents_and_nodes/usage_documents.md)
- [Nodes Usage](../../module_guides/loading/documents_and_nodes/usage_nodes.md)
- [Metadata Extraction](../../module_guides/loading/documents_and_nodes/usage_metadata_extractor.md)

## Multi-Tenancy RAG

Multi-tenancy in RAG systems is crucial for ensuring data security. It enables users to access exclusively their own indexed documents, thereby preventing unauthorized sharing and safeguarding data privacy. Search operations are confined to the user's own data, protecting sensitive information. Implementation can be achieved with `VectorStoreIndex` and `VectorDB` providers through metadata filters. Refer to the guides below for more details.

- [Multi Tenancy RAG](../../examples/multi_tenancy/multi_tenancy_rag.ipynb)

For detailed guidance on implementing Multi-Tenancy RAG with LlamaIndex and Qdrant, refer to the [blog post](https://qdrant.tech/documentation/tutorials/llama-index-multitenancy/) released by Qdrant.
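As a concrete illustration of the metadata-filter approach to multi-tenancy, here is a minimal sketch that tags each document with its owner and restricts queries to that owner's documents; the `user` key and the user ids are illustrative:

```python
from llama_index.core import VectorStoreIndex, Document
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter

# Each document is tagged with the tenant that owns it.
documents = [
    Document(text="Alice's quarterly report", metadata={"user": "alice"}),
    Document(text="Bob's design notes", metadata={"user": "bob"}),
]

index = VectorStoreIndex.from_documents(documents)


def query_engine_for(user_id: str):
    """Return a query engine that only sees `user_id`'s documents."""
    filters = MetadataFilters(
        filters=[ExactMatchFilter(key="user", value=user_id)]
    )
    return index.as_query_engine(filters=filters)


print(query_engine_for("alice").query("What is in my report?"))
```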
167761
# SimpleDirectoryReader `SimpleDirectoryReader` is the simplest way to load data from local files into LlamaIndex. For production use cases it's more likely that you'll want to use one of the many Readers available on [LlamaHub](https://llamahub.ai/), but `SimpleDirectoryReader` is a great way to get started. ## Supported file types By default `SimpleDirectoryReader` will try to read any files it finds, treating them all as text. In addition to plain text, it explicitly supports the following file types, which are automatically detected based on file extension: - .csv - comma-separated values - .docx - Microsoft Word - .epub - EPUB ebook format - .hwp - Hangul Word Processor - .ipynb - Jupyter Notebook - .jpeg, .jpg - JPEG image - .mbox - MBOX email archive - .md - Markdown - .mp3, .mp4 - audio and video - .pdf - Portable Document Format - .png - Portable Network Graphics - .ppt, .pptm, .pptx - Microsoft PowerPoint One file type you may be expecting to find here is JSON; for that we recommend you use our [JSON Loader](https://llamahub.ai/l/readers/llama-index-readers-json). ## Usage The most basic usage is to pass an `input_dir` and it will load all supported files in that directory: ```python from llama_index.core import SimpleDirectoryReader reader = SimpleDirectoryReader(input_dir="path/to/directory") documents = reader.load_data() ``` Documents can also be loaded with parallel processing if loading many files from a directory. Note that there are differences when using `multiprocessing` with Windows and Linux/MacOS machines, which is explained throughout the `multiprocessing` docs (e.g. see [here](https://docs.python.org/3/library/multiprocessing.html?highlight=process#the-spawn-and-forkserver-start-methods)). Ultimately, Windows users may see less or no performance gains whereas Linux/MacOS users would see these gains when loading the exact same set of files. ```python ... documents = reader.load_data(num_workers=4) ``` ### Reading from subdirectories By default, `SimpleDirectoryReader` will only read files in the top level of the directory. 
To read from subdirectories, set `recursive=True`: ```python SimpleDirectoryReader(input_dir="path/to/directory", recursive=True) ``` ### Iterating over files as they load You can also use the `iter_data()` method to iterate over and process files as they load ```python reader = SimpleDirectoryReader(input_dir="path/to/directory", recursive=True) all_docs = [] for docs in reader.iter_data(): # <do something with the documents per file> all_docs.extend(docs) ``` ### Restricting the files loaded Instead of all files you can pass a list of file paths: ```python SimpleDirectoryReader(input_files=["path/to/file1", "path/to/file2"]) ``` or you can pass a list of file paths to **exclude** using `exclude`: ```python SimpleDirectoryReader( input_dir="path/to/directory", exclude=["path/to/file1", "path/to/file2"] ) ``` You can also set `required_exts` to a list of file extensions to only load files with those extensions: ```python SimpleDirectoryReader( input_dir="path/to/directory", required_exts=[".pdf", ".docx"] ) ``` And you can set a maximum number of files to be loaded with `num_files_limit`: ```python SimpleDirectoryReader(input_dir="path/to/directory", num_files_limit=100) ``` ### Specifying file encoding `SimpleDirectoryReader` expects files to be `utf-8` encoded but you can override this using the `encoding` parameter: ```python SimpleDirectoryReader(input_dir="path/to/directory", encoding="latin-1") ``` ### Extracting metadata You can specify a function that will read each file and extract metadata that gets attached to the resulting `Document` object for each file by passing the function as `file_metadata`: ```python def get_meta(file_path): return {"foo": "bar", "file_path": file_path} SimpleDirectoryReader(input_dir="path/to/directory", file_metadata=get_meta) ``` The function should take a single argument, the file path, and return a dictionary of metadata. ### Extending to other file types You can extend `SimpleDirectoryReader` to read other file types by passing a dictionary of file extensions to instances of `BaseReader` as `file_extractor`. A BaseReader should read the file and return a list of Documents. For example, to add custom support for `.myfile` files : ```python from llama_index.core import SimpleDirectoryReader from llama_index.core.readers.base import BaseReader from llama_index.core import Document class MyFileReader(BaseReader): def load_data(self, file, extra_info=None): with open(file, "r") as f: text = f.read() # load_data returns a list of Document objects return [Document(text=text + "Foobar", extra_info=extra_info or {})] reader = SimpleDirectoryReader( input_dir="./data", file_extractor={".myfile": MyFileReader()} ) documents = reader.load_data() print(documents) ``` Note that this mapping will override the default file extractors for the file types you specify, so you'll need to add them back in if you want to support them. ### Support for External FileSystems As with other modules, the `SimpleDirectoryReader` takes an optional `fs` parameter that can be used to traverse remote filesystems. This can be any filesystem object that is implemented by the [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/) protocol. The `fsspec` protocol has open-source implementations for a variety of remote filesystems including [AWS S3](https://github.com/fsspec/s3fs), [Azure Blob & DataLake](https://github.com/fsspec/adlfs), [Google Drive](https://github.com/fsspec/gdrivefs), [SFTP](https://github.com/fsspec/sshfs), and [many others](https://github.com/fsspec/). 
Here's an example that connects to S3: ```python from s3fs import S3FileSystem s3_fs = S3FileSystem(key="...", secret="...") bucket_name = "my-document-bucket" reader = SimpleDirectoryReader( input_dir=bucket_name, fs=s3_fs, recursive=True, # recursively searches all subdirectories ) documents = reader.load_data() print(documents) ``` A full example notebook can be found [here](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/data_connectors/simple_directory_reader_remote_fs.ipynb).
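As a wrap-up of the options covered on this page, here is a minimal sketch that combines several of them; the directory path and the metadata returned are placeholders:

```python
from llama_index.core import SimpleDirectoryReader

def file_meta(file_path: str) -> dict:
    # attach the source path so responses can be traced back to a file
    return {"file_path": file_path}

reader = SimpleDirectoryReader(
    input_dir="path/to/directory",    # placeholder directory
    recursive=True,                   # descend into subdirectories
    required_exts=[".pdf", ".docx"],  # only load PDF and Word files
    file_metadata=file_meta,
    num_files_limit=100,
)
documents = reader.load_data()
```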
167765
# Defining and Customizing Documents ## Defining Documents Documents can either be created automatically via data loaders, or constructed manually. By default, all of our [data loaders](../connector/index.md) (including those offered on LlamaHub) return `Document` objects through the `load_data` function. ```python from llama_index.core import SimpleDirectoryReader documents = SimpleDirectoryReader("./data").load_data() ``` You can also choose to construct documents manually. LlamaIndex exposes the `Document` struct. ```python from llama_index.core import Document text_list = [text1, text2, ...] documents = [Document(text=t) for t in text_list] ``` To speed up prototyping and development, you can also quickly create a document using some default text: ```python document = Document.example() ``` ## Customizing Documents This section covers various ways to customize `Document` objects. Since the `Document` object is a subclass of our `TextNode` object, all these settings and details apply to the `TextNode` object class as well. ### Metadata Documents also offer the chance to include useful metadata. Using the `metadata` dictionary on each document, additional information can be included to help inform responses and track down sources for query responses. This information can be anything, such as filenames or categories. If you are integrating with a vector database, keep in mind that some vector databases require that the keys must be strings, and the values must be flat (either `str`, `float`, or `int`). Any information set in the `metadata` dictionary of each document will show up in the `metadata` of each source node created from the document. Additionally, this information is included in the nodes, enabling the index to utilize it on queries and responses. By default, the metadata is injected into the text for both embedding and LLM model calls. There are a few ways to set up this dictionary: 1. In the document constructor: ```python document = Document( text="text", metadata={"filename": "<doc_file_name>", "category": "<category>"}, ) ``` 2. After the document is created: ```python document.metadata = {"filename": "<doc_file_name>"} ``` 3. Set the filename automatically using the `SimpleDirectoryReader` and `file_metadata` hook. This will automatically run the hook on each document to set the `metadata` field: ```python from llama_index.core import SimpleDirectoryReader filename_fn = lambda filename: {"file_name": filename} # automatically sets the metadata of each document according to filename_fn documents = SimpleDirectoryReader( "./data", file_metadata=filename_fn ).load_data() ``` ### Customizing the id As detailed in the section [Document Management](../../indexing/document_management.md), the `doc_id` is used to enable efficient refreshing of documents in the index. When using the `SimpleDirectoryReader`, you can automatically set the doc `doc_id` to be the full path to each document: ```python from llama_index.core import SimpleDirectoryReader documents = SimpleDirectoryReader("./data", filename_as_id=True).load_data() print([x.doc_id for x in documents]) ``` You can also set the `doc_id` of any `Document` directly! ```python document.doc_id = "My new document id!" ``` Note: the ID can also be set through the `node_id` or `id_` property on a Document object, similar to a `TextNode` object. ### Advanced - Metadata Customization A key detail mentioned above is that by default, any metadata you set is included in the embeddings generation and LLM. 
#### Customizing LLM Metadata Text Typically, a document might have many metadata keys, but you might not want all of them visible to the LLM during response synthesis. In the above examples, we may not want the LLM to read the `file_name` of our document. However, the `file_name` might include information that will help generate better embeddings. A key advantage of doing this is to bias the embeddings for retrieval without changing what the LLM ends up reading. We can exclude it like so: ```python document.excluded_llm_metadata_keys = ["file_name"] ``` Then, we can test what the LLM will actually end up reading using the `get_content()` function and specifying `MetadataMode.LLM`: ```python from llama_index.core.schema import MetadataMode print(document.get_content(metadata_mode=MetadataMode.LLM)) ``` #### Customizing Embedding Metadata Text Similar to customizing the metadata visible to the LLM, we can also customize the metadata visible to embeddings. In this case, you can specifically exclude metadata visible to the embedding model, in case you DON'T want particular text to bias the embeddings. ```python document.excluded_embed_metadata_keys = ["file_name"] ``` Then, we can test what the embedding model will actually end up reading using the `get_content()` function and specifying `MetadataMode.EMBED`: ```python from llama_index.core.schema import MetadataMode print(document.get_content(metadata_mode=MetadataMode.EMBED)) ``` #### Customizing Metadata Format As you know by now, metadata is injected into the actual text of each document/node when sent to the LLM or embedding model. By default, the format of this metadata is controlled by three attributes: 1. `Document.metadata_seperator` -> default = `"\n"` When concatenating all key/value fields of your metadata, this field controls the separator between each key/value pair. 2. `Document.metadata_template` -> default = `"{key}: {value}"` This attribute controls how each key/value pair in your metadata is formatted. The two string keys `key` and `value` are required. 3. `Document.text_template` -> default = `{metadata_str}\n\n{content}` Once your metadata is converted into a string using `metadata_seperator` and `metadata_template`, this template controls what that metadata looks like when joined with the text content of your document/node. The `metadata_str` and `content` string keys are required. ### Summary Knowing all this, let's create a short example using all this power: ```python from llama_index.core import Document from llama_index.core.schema import MetadataMode document = Document( text="This is a super-customized document", metadata={ "file_name": "super_secret_document.txt", "category": "finance", "author": "LlamaIndex", }, excluded_llm_metadata_keys=["file_name"], metadata_seperator="::", metadata_template="{key}=>{value}", text_template="Metadata: {metadata_str}\n-----\nContent: {content}", ) print( "The LLM sees this: \n", document.get_content(metadata_mode=MetadataMode.LLM), ) print( "The Embedding model sees this: \n", document.get_content(metadata_mode=MetadataMode.EMBED), ) ``` ### Advanced - Automatic Metadata Extraction We have [initial examples](./usage_metadata_extractor.md) of using LLMs themselves to perform metadata extraction.
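As a rough sketch of what that looks like in practice, the snippet below assumes the `TitleExtractor` transformation and the `IngestionPipeline` from `llama-index-core` (both covered in the linked guide, not on this page): documents are split into nodes and an LLM adds a title entry to each node's metadata.

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.extractors import TitleExtractor
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter

documents = SimpleDirectoryReader("./data").load_data()

# split into nodes, then let an LLM generate a document title for each one
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512),
        TitleExtractor(),
    ]
)
nodes = pipeline.run(documents=documents)

# the extracted title shows up in the node metadata alongside any existing keys
print(nodes[0].metadata)
```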
167785
# Using LLMs ## Concept Picking the proper Large Language Model (LLM) is one of the first steps you need to consider when building any LLM application over your data. LLMs are a core component of LlamaIndex. They can be used as standalone modules or plugged into other core LlamaIndex modules (indices, retrievers, query engines). They are always used during the response synthesis step (e.g. after retrieval). Depending on the type of index being used, LLMs may also be used during index construction, insertion, and query traversal. LlamaIndex provides a unified interface for defining LLM modules, whether it's from OpenAI, Hugging Face, or LangChain, so that you don't have to write the boilerplate code of defining the LLM interface yourself. This interface consists of the following (more details below): - Support for **text completion** and **chat** endpoints (details below) - Support for **streaming** and **non-streaming** endpoints - Support for **synchronous** and **asynchronous** endpoints ## Usage Pattern The following code snippet shows how you can get started using LLMs. If you don't already have it, install your LLM: ``` pip install llama-index-llms-openai ``` Then: ```python from llama_index.llms.openai import OpenAI # non-streaming resp = OpenAI().complete("Paul Graham is ") print(resp) ``` Find more details on [standalone usage](./llms/usage_standalone.md) or [custom usage](./llms/usage_custom.md). ## A Note on Tokenization By default, LlamaIndex uses a global tokenizer for all token counting. This defaults to `cl100k` from tiktoken, which is the tokenizer to match the default LLM `gpt-3.5-turbo`. If you change the LLM, you may need to update this tokenizer to ensure accurate token counts, chunking, and prompting. The single requirement for a tokenizer is that it is a callable function, that takes a string, and returns a list. You can set a global tokenizer like so: ```python from llama_index.core import Settings # tiktoken import tiktoken Settings.tokenizer = tiktoken.encoding_for_model("gpt-3.5-turbo").encode # huggingface from transformers import AutoTokenizer Settings.tokenizer = AutoTokenizer.from_pretrained( "HuggingFaceH4/zephyr-7b-beta" ) ``` ## LLM Compatibility Tracking While LLMs are powerful, not every LLM is easy to set up. Furthermore, even with proper setup, some LLMs have trouble performing tasks that require strict instruction following. LlamaIndex offers integrations with nearly every LLM, but it can be often unclear if the LLM will work well out of the box, or if further customization is needed. The tables below attempt to validate the **initial** experience with various LlamaIndex features for various LLMs. These notebooks serve as a best attempt to gauge performance, as well as how much effort and tweaking is needed to get things to function properly. Generally, paid APIs such as OpenAI or Anthropic are viewed as more reliable. However, local open-source models have been gaining popularity due to their customizability and approach to transparency. **Contributing:** Anyone is welcome to contribute new LLMs to the documentation. Simply copy an existing notebook, setup and test your LLM, and open a PR with your results. If you have ways to improve the setup for existing notebooks, contributions to change this are welcome! 
**Legend** - ✅ = should work fine - ⚠️ = sometimes unreliable, may need prompt engineering to improve - 🛑 = usually unreliable, would need prompt engineering/fine-tuning to improve ### Paid LLM APIs | Model Name | Basic Query Engines | Router Query Engine | Sub Question Query Engine | Text2SQL | Pydantic Programs | Data Agents | <div style="width:290px">Notes</div> | | ------------------------------------------------------------------------------------------------------------------------ | ------------------- | ------------------- | ------------------------- | -------- | ----------------- | ----------- | --------------------------------------- | | [gpt-3.5-turbo](https://colab.research.google.com/drive/1vvdcf7VYNQA67NOxBHCyQvgb2Pu7iY_5?usp=sharing) (openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | [gpt-3.5-turbo-instruct](https://colab.research.google.com/drive/1Ne-VmMNYGOKUeECvkjurdKqMDpfqJQHE?usp=sharing) (openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Tool usage in data-agents seems flakey. | | [gpt-4](https://colab.research.google.com/drive/1QUNyCVt8q5G32XHNztGw4YJ2EmEkeUe8?usp=sharing) (openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | [claude-3 opus](https://colab.research.google.com/drive/1xeFgAmSLpY_9w7bcGPvIcE8UuFSI3xjF?usp=sharing) | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | | | [claude-3 sonnet](https://colab.research.google.com/drive/1xeFgAmSLpY_9w7bcGPvIcE8UuFSI3xjF?usp=sharing) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Prone to hallucinating tool inputs. | | [claude-2](https://colab.research.google.com/drive/1IuHRN67MYOaLx2_AgJ9gWVtlK7bIvS1f?usp=sharing) (anthropic) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Prone to hallucinating tool inputs. | | [claude-instant-1.2](https://colab.research.google.com/drive/1ahq-2kXwCVCA_3xyC5UMWHyfAcjoG8Gp?usp=sharing) (anthropic) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Prone to hallucinating tool inputs. | ### Open Source LLMs Since open source LLMs require large amounts of resources, the quantization is re
167787
# Embeddings ## Concept Embeddings are used in LlamaIndex to represent your documents using a sophisticated numerical representation. Embedding models take text as input, and return a long list of numbers used to capture the semantics of the text. These embedding models have been trained to represent text this way, and help enable many applications, including search! At a high level, if a user asks a question about dogs, then the embedding for that question will be highly similar to text that talks about dogs. When calculating the similarity between embeddings, there are many methods to use (dot product, cosine similarity, etc.). By default, LlamaIndex uses cosine similarity when comparing embeddings. There are many embedding models to pick from. By default, LlamaIndex uses `text-embedding-ada-002` from OpenAI. We also support any embedding model offered by Langchain [here](https://python.langchain.com/docs/modules/data_connection/text_embedding/), as well as providing an easy to extend base class for implementing your own embeddings. ## Usage Pattern Most commonly in LlamaIndex, embedding models will be specified in the `Settings` object, and then used in a vector index. The embedding model will be used to embed the documents used during index construction, as well as embedding any queries you make using the query engine later on. You can also specify embedding models per-index. If you don't already have your embeddings installed: ``` pip install llama-index-embeddings-openai ``` Then: ```python from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.core import VectorStoreIndex from llama_index.core import Settings # global Settings.embed_model = OpenAIEmbedding() # per-index index = VectorStoreIndex.from_documents(documents, embed_model=embed_model) ``` To save costs, you may want to use a local model. ``` pip install llama-index-embeddings-huggingface ``` ```python from llama_index.embeddings.huggingface import HuggingFaceEmbedding from llama_index.core import Settings Settings.embed_model = HuggingFaceEmbedding( model_name="BAAI/bge-small-en-v1.5" ) ``` This will use a well-performing and fast default from Hugging Face. You can find more usage details and available customization options below. ## Getting Started The most common usage for an embedding model will be setting it in the global `Settings` object, and then using it to construct an index and query. The input documents will be broken into nodes, and the embedding model will generate an embedding for each node. By default, LlamaIndex will use `text-embedding-ada-002`, which is what the example below manually sets up for you. ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.core import Settings # global default Settings.embed_model = OpenAIEmbedding() documents = SimpleDirectoryReader("./data").load_data() index = VectorStoreIndex.from_documents(documents) ``` Then, at query time, the embedding model will be used again to embed the query text. ```python query_engine = index.as_query_engine() response = query_engine.query("query string") ``` ## Customization ### Batch Size By default, embeddings requests are sent to OpenAI in batches of 10. For some users, this may (rarely) incur a rate limit. For other users embedding many documents, this batch size may be too small. 
```python # set the batch size to 42 embed_model = OpenAIEmbedding(embed_batch_size=42) ``` ### Local Embedding Models The easiest way to use a local model is: ```python from llama_index.embeddings.huggingface import HuggingFaceEmbedding from llama_index.core import Settings Settings.embed_model = HuggingFaceEmbedding( model_name="BAAI/bge-small-en-v1.5" ) ``` ### HuggingFace Optimum ONNX Embeddings LlamaIndex also supports creating and using ONNX embeddings using the Optimum library from HuggingFace. Simply create and save the ONNX embeddings, and use them. Some prerequisites: ``` pip install transformers optimum[exporters] pip install llama-index-embeddings-huggingface-optimum ``` Creation with specifying the model and output path: ```python from llama_index.embeddings.huggingface_optimum import OptimumEmbedding OptimumEmbedding.create_and_save_optimum_model( "BAAI/bge-small-en-v1.5", "./bge_onnx" ) ``` And then usage: ```python Settings.embed_model = OptimumEmbedding(folder_name="./bge_onnx") ``` ### LangChain Integrations We also support any embeddings offered by Langchain [here](https://python.langchain.com/docs/modules/data_connection/text_embedding/). The example below loads a model from Hugging Face, using Langchain's embedding class. ``` pip install llama-index-embeddings-langchain ``` ```python from langchain.embeddings.huggingface import HuggingFaceBgeEmbeddings from llama_index.core import Settings Settings.embed_model = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-base-en") ``` ### Custom Embedding Model If you want to use embeddings not offered by LlamaIndex or Langchain, you can also extend our base embeddings class and implement your own! The example below uses Instructor Embeddings ([install/setup details here](https://huggingface.co/hkunlp/instructor-large)), and implements a custom embeddings class. Instructor embeddings work by providing text, as well as "instructions" on the domain of the text to embed. This is helpful when embedding text from a very specific and specialized topic. ```python from typing import Any, List from InstructorEmbedding import INSTRUCTOR from llama_index.core.embeddings import BaseEmbedding class InstructorEmbeddings(BaseEmbedding): def __init__( self, instructor_model_name: str = "hkunlp/instructor-large", instruction: str = "Represent the Computer Science documentation or question:", **kwargs: Any, ) -> None: super().__init__(**kwargs) self._model = INSTRUCTOR(instructor_model_name) self._instruction = instruction def _get_query_embedding(self, query: str) -> List[float]: embeddings = self._model.encode([[self._instruction, query]]) return embeddings[0] def _get_text_embedding(self, text: str) -> List[float]: embeddings = self._model.encode([[self._instruction, text]]) return embeddings[0] def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]: embeddings = self._model.encode( [[self._instruction, text] for text in texts] ) return embeddings async def _aget_query_embedding(self, query: str) -> List[float]: return self._get_query_embedding(query) async def _aget_text_embedding(self, text: str) -> List[float]: return self._get_text_embedding(text) ``` ## Standalone Usage You can also use embeddings as a standalone module for your project, existing application, or general testing and exploration. ```python embeddings = embed_model.get_text_embedding( "It is raining cats and dogs here!" ) ``` ## List of supported embeddings We support integrations with OpenAI, Azure, and anything LangChain offers.
- [Azure OpenAI](../../examples/customization/llms/AzureOpenAI.ipynb) - [Clarifai](../../examples/embeddings/clarifai.ipynb) - [Cohere](../../examples/embeddings/cohereai.ipynb) - [Custom](../../examples/embeddings/custom_embeddings.ipynb) - [Dashscope](../../examples/embeddings/dashscope_embeddings.ipynb) - [ElasticSearch](../../examples/embeddings/elasticsearch.ipynb) - [FastEmbed](../../examples/embeddings/fastembed.ipynb) - [Google Palm](../../examples/embeddings/google_palm.ipynb) - [Gradient](../../examples/embeddings/gradient.ipynb) - [Anyscale](../../examples/embeddings/Anyscale.ipynb) - [Huggingface](../../examples/embeddings/huggingface.ipynb) - [JinaAI](../../examples/embeddings/jinaai_embeddings.ipynb) - [Langchain](../../examples/embeddings/Langchain.ipynb) - [LLM Rails](../../examples/embeddings/llm_rails.ipynb) - [MistralAI](../../examples/embeddings/mistralai.ipynb) - [OpenAI](../../examples/embeddings/OpenAI.ipynb) - [Sagemaker](../../examples/embeddings/sagemaker_embedding_endpoint.ipynb) - [Text Embedding Inference](../../examples/embeddings/text_embedding_inference.ipynb) - [TogetherAI](../../examples/embeddings/together.ipynb) - [Upstage](../../examples/embeddings/upstage.ipynb) - [VoyageAI](../../examples/embeddings/voyageai.ipynb) - [Nomic](../../examples/embeddings/nomic.ipynb) - [Fireworks AI](../../examples/embeddings/fireworks.ipynb)
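Tying this back to the standalone usage above, here is a small sketch (using NumPy for the arithmetic; the example sentences are made up) that compares two pieces of text the same way the index does by default, via cosine similarity of their embeddings:

```python
import numpy as np
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

emb_query = embed_model.get_text_embedding("How should I take care of my dog?")
emb_doc = embed_model.get_text_embedding("Dogs need regular walks and a balanced diet.")

def cosine_similarity(a, b):
    # cosine similarity, the default metric LlamaIndex uses to compare embeddings
    a_arr, b_arr = np.array(a), np.array(b)
    return float(np.dot(a_arr, b_arr) / (np.linalg.norm(a_arr) * np.linalg.norm(b_arr)))

print(cosine_similarity(emb_query, emb_doc))
```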
167791
## Usage Pattern ### Defining a custom prompt Defining a custom prompt is as simple as creating a format string ```python from llama_index.core import PromptTemplate template = ( "We have provided context information below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given this information, please answer the question: {query_str}\n" ) qa_template = PromptTemplate(template) # you can create text prompt (for completion API) prompt = qa_template.format(context_str=..., query_str=...) # or easily convert to message prompts (for chat API) messages = qa_template.format_messages(context_str=..., query_str=...) ``` > Note: you may see references to legacy prompt subclasses such as `QuestionAnswerPrompt`, `RefinePrompt`. These have been deprecated (and now are type aliases of `PromptTemplate`). Now you can directly specify `PromptTemplate(template)` to construct custom prompts. But you still have to make sure the template string contains the expected parameters (e.g. `{context_str}` and `{query_str}`) when replacing a default question answer prompt. You can also define a template from chat messages ```python from llama_index.core import ChatPromptTemplate from llama_index.core.llms import ChatMessage, MessageRole message_templates = [ ChatMessage(content="You are an expert system.", role=MessageRole.SYSTEM), ChatMessage( content="Generate a short story about {topic}", role=MessageRole.USER, ), ] chat_template = ChatPromptTemplate(message_templates=message_templates) # you can create message prompts (for chat API) messages = chat_template.format_messages(topic=...) # or easily convert to text prompt (for completion API) prompt = chat_template.format(topic=...) ``` ### Getting and Setting Custom Prompts Since LlamaIndex is a multi-step pipeline, it's important to identify the operation that you want to modify and pass in the custom prompt at the right place. For instance, prompts are used in response synthesizer, retrievers, index construction, etc; some of these modules are nested in other modules (synthesizer is nested in query engine). See [this guide](../../../examples/prompts/prompt_mixin.ipynb) for full details on accessing/customizing prompts. #### Commonly Used Prompts The most commonly used prompts will be the `text_qa_template` and the `refine_template`. - `text_qa_template` - used to get an initial answer to a query using retrieved nodes - `refine_template` - used when the retrieved text does not fit into a single LLM call with `response_mode="compact"` (the default), or when more than one node is retrieved using `response_mode="refine"`. The answer from the first query is inserted as an `existing_answer`, and the LLM must update or repeat the existing answer based on the new context. #### Accessing Prompts You can call `get_prompts` on many modules in LlamaIndex to get a flat list of prompts used within the module and nested submodules. For instance, take a look at the following snippet. ```python query_engine = index.as_query_engine(response_mode="compact") prompts_dict = query_engine.get_prompts() print(list(prompts_dict.keys())) ``` You might get back the following keys: ``` ['response_synthesizer:text_qa_template', 'response_synthesizer:refine_template'] ``` Note that prompts are prefixed by their sub-modules as "namespaces". #### Updating Prompts You can customize prompts on any module that implements `get_prompts` with the `update_prompts` function. 
Just pass in argument values with the keys equal to the keys you see in the prompt dictionary obtained through `get_prompts`. e.g., regarding the example above, we might do the following: ```python # shakespeare! qa_prompt_tmpl_str = ( "Context information is below.\n" "---------------------\n" "{context_str}\n" "---------------------\n" "Given the context information and not prior knowledge, " "answer the query in the style of a Shakespeare play.\n" "Query: {query_str}\n" "Answer: " ) qa_prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str) query_engine.update_prompts( {"response_synthesizer:text_qa_template": qa_prompt_tmpl} ) ``` #### Modify prompts used in query engine For query engines, you can also pass in custom prompts directly during query-time (i.e. for executing a query against an index and synthesizing the final response). There are also two equivalent ways to override the prompts: 1. via the high-level API ```python query_engine = index.as_query_engine( text_qa_template=custom_qa_prompt, refine_template=custom_refine_prompt ) ``` 2. via the low-level composition API ```python from llama_index.core import get_response_synthesizer from llama_index.core.query_engine import RetrieverQueryEngine retriever = index.as_retriever() synth = get_response_synthesizer( text_qa_template=custom_qa_prompt, refine_template=custom_refine_prompt ) query_engine = RetrieverQueryEngine(retriever, synth) ``` The two approaches above are equivalent, where 1 is essentially syntactic sugar for 2 and hides away the underlying complexity. You might want to use 1 to quickly modify some common parameters, and use 2 to have more granular control. For more details on which classes use which prompts, please visit [Query class references](../../../api_reference/response_synthesizers/index.md). Check out the [reference documentation](../../../api_reference/prompts/index.md) for a full set of all prompts. #### Modify prompts used in index construction Some indices use different types of prompts during construction (**NOTE**: the most common ones, `VectorStoreIndex` and `SummaryIndex`, don't use any). For instance, `TreeIndex` uses a summary prompt to hierarchically summarize the nodes, and `KeywordTableIndex` uses a keyword extract prompt to extract keywords. There are two equivalent ways to override the prompts: 1. via the default nodes constructor ```python index = TreeIndex(nodes, summary_template=custom_prompt) ``` 2. via the documents constructor. ```python index = TreeIndex.from_documents(docs, summary_template=custom_prompt) ``` For more details on which index uses which prompts, please visit [Index class references](../../../api_reference/indices/index.md). ### [Advanced] Advanced Prompt Capabilities In this section we show some advanced prompt capabilities in LlamaIndex. Related Guides: - [Advanced Prompts](../../../examples/prompts/advanced_prompts.ipynb) - [Prompt Engineering for RAG](../../../examples/prompts/prompts_rag.ipynb) #### Partial Formatting Partially format a prompt, filling in some variables while leaving others to be filled in later. ```python from llama_index.core import PromptTemplate prompt_tmpl_str = "{foo} {bar}" prompt_tmpl = PromptTemplate(prompt_tmpl_str) partial_prompt_tmpl = prompt_tmpl.partial_format(foo="abc") fmt_str = partial_prompt_tmpl.format(bar="def") ``` #### Template Variable Mappings LlamaIndex prompt abstractions generally expect certain keys. E.g. our `text_qa_prompt` expects `context_str` for context and `query_str` for the user query. But if you're trying to adapt a string template for use with LlamaIndex, it can be annoying to change out the template variables.
Instead, define `template_var_mappings`: ```python template_var_mappings = {"context_str": "my_context", "query_str": "my_query"} prompt_tmpl = PromptTemplate( qa_prompt_tmpl_str, template_var_mappings=template_var_mappings ) ``` #### Function Mappings Pass in functions as template variables instead of fixed values. This is quite advanced and powerful; allows you to do dynamic few-shot prompting, etc. Here's an example of reformatting the `context_str`. ```python def format_context_fn(**kwargs): # format context with bullet points context_list = kwargs["context_str"].split("\n\n") fmtted_context = "\n\n".join([f"- {c}" for c in context_list]) return fmtted_context prompt_tmpl = PromptTemplate( qa_prompt_tmpl_str, function_mappings={"context_str": format_context_fn} ) prompt_tmpl.format(context_str="context", query_str="query") ```
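Since the section above mentions dynamic few-shot prompting, here is a hedged sketch of that idea; the example pool and the selection rule are toy assumptions (in practice the examples might be retrieved by similarity), not part of the LlamaIndex API itself:

```python
from llama_index.core import PromptTemplate

# toy pool of few-shot examples keyed by topic
few_shot_examples = {
    "math": "Q: What is 2 + 2?\nA: 4",
    "geography": "Q: What is the capital of France?\nA: Paris",
}

def few_shot_fn(**kwargs):
    # pick examples dynamically based on the query passed in at format time
    topic = "math" if any(ch.isdigit() for ch in kwargs["query_str"]) else "geography"
    return few_shot_examples[topic]

qa_prompt_tmpl_str = (
    "Here are some examples:\n"
    "{few_shot_str}\n\n"
    "Now answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)
prompt_tmpl = PromptTemplate(
    qa_prompt_tmpl_str, function_mappings={"few_shot_str": few_shot_fn}
)
print(prompt_tmpl.format(query_str="What is 7 * 6?"))
```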
167792
# Prompts ## Concept Prompting is the fundamental input that gives LLMs their expressive power. LlamaIndex uses prompts to build the index, do insertion, perform traversal during querying, and to synthesize the final answer. LlamaIndex uses a set of [default prompt templates](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/prompts/default_prompts.py) that work well out of the box. In addition, there are some prompts written and used specifically for chat models like `gpt-3.5-turbo` [here](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/prompts/chat_prompts.py). Users may also provide their own prompt templates to further customize the behavior of the framework. The best method for customizing is copying the default prompt from the link above, and using that as the base for any modifications. ## Usage Pattern Using prompts is simple. ```python from llama_index.core import PromptTemplate template = ( "We have provided context information below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given this information, please answer the question: {query_str}\n" ) qa_template = PromptTemplate(template) # you can create text prompt (for completion API) prompt = qa_template.format(context_str=..., query_str=...) # or easily convert to message prompts (for chat API) messages = qa_template.format_messages(context_str=..., query_str=...) ``` See our [Usage Pattern Guide](./usage_pattern.md) for more details. ## Example Guides Simple Customization Examples - [Completion prompts](../../../examples/customization/prompts/completion_prompts.ipynb) - [Chat prompts](../../../examples/customization/prompts/chat_prompts.ipynb) - [Prompt Mixin](../../../examples/prompts/prompt_mixin.ipynb) Prompt Engineering Guides - [Advanced Prompts](../../../examples/prompts/advanced_prompts.ipynb) - [RAG Prompts](../../../examples/prompts/prompts_rag.ipynb) Experimental - [Prompt Optimization](../../../examples/prompts/prompt_optimization.ipynb) - [Emotion Prompting](../../../examples/prompts/emotion_prompt.ipynb)
167795
# Customizing LLMs within LlamaIndex Abstractions You can plug these LLM abstractions into our other modules in LlamaIndex (indexes, retrievers, query engines, agents), which allows you to build advanced workflows over your data. By default, we use OpenAI's `gpt-3.5-turbo` model. But you may choose to customize the underlying LLM being used. Below we show a few examples of LLM customization. This includes - changing the underlying LLM - changing the number of output tokens (for OpenAI, Cohere, or AI21) - having more fine-grained control over all parameters for any LLM, from context window to chunk overlap ## Example: Changing the underlying LLM An example snippet of customizing the LLM being used is shown below. In this example, we use `gpt-4` instead of `gpt-3.5-turbo`. Available models include `gpt-3.5-turbo`, `gpt-3.5-turbo-instruct`, `gpt-3.5-turbo-16k`, `gpt-4`, `gpt-4-32k`, `text-davinci-003`, and `text-davinci-002`. Note that you may also plug in any LLM shown on Langchain's [LLM](https://python.langchain.com/docs/integrations/llms/) page. ```python from llama_index.core import KeywordTableIndex, SimpleDirectoryReader from llama_index.llms.openai import OpenAI # alternatively # from langchain.llms import ... documents = SimpleDirectoryReader("data").load_data() # define LLM llm = OpenAI(temperature=0.1, model="gpt-4") # build index index = KeywordTableIndex.from_documents(documents, llm=llm) # get response from query query_engine = index.as_query_engine() response = query_engine.query( "What did the author do after his time at Y Combinator?" ) ``` ## Example: Changing the number of output tokens (for OpenAI, Cohere, AI21) The number of output tokens is usually set to some low number by default (for instance, with OpenAI the default is 256). For OpenAI, Cohere, and AI21, you just need to set the `max_tokens` parameter (or maxTokens for AI21). We will handle text chunking/calculations under the hood. ```python from llama_index.core import KeywordTableIndex, SimpleDirectoryReader from llama_index.llms.openai import OpenAI from llama_index.core import Settings documents = SimpleDirectoryReader("data").load_data() # define global LLM Settings.llm = OpenAI(temperature=0, model="gpt-3.5-turbo", max_tokens=512) ``` ## Example: Explicitly configure `context_window` and `num_output` If you are using other LLM classes from langchain, you may need to explicitly configure the `context_window` and `num_output` via the `Settings` since the information is not available by default. ```python from llama_index.core import KeywordTableIndex, SimpleDirectoryReader from llama_index.llms.openai import OpenAI from llama_index.core import Settings documents = SimpleDirectoryReader("data").load_data() # set context window Settings.context_window = 4096 # set number of output tokens num_output = 256 Settings.num_output = num_output # define LLM Settings.llm = OpenAI( temperature=0, model="gpt-3.5-turbo", max_tokens=num_output, ) ``` ## Example: Using a HuggingFace LLM LlamaIndex supports using LLMs from HuggingFace directly. Note that for a completely private experience, also set up a [local embeddings model](../embeddings.md). Many open-source models from HuggingFace require some preamble before each prompt, which is a `system_prompt`. Additionally, queries themselves may need an additional wrapper around the `query_str` itself. All this information is usually available from the HuggingFace model card for the model you are using.
Below, this example uses both the `system_prompt` and `query_wrapper_prompt`, using specific prompts from the model card found [here](https://huggingface.co/stabilityai/stablelm-tuned-alpha-3b). ```python from llama_index.core import PromptTemplate # Transform a string into input zephyr-specific input def completion_to_prompt(completion): return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n" # Transform a list of chat messages into zephyr-specific input def messages_to_prompt(messages): prompt = "" for message in messages: if message.role == "system": prompt += f"<|system|>\n{message.content}</s>\n" elif message.role == "user": prompt += f"<|user|>\n{message.content}</s>\n" elif message.role == "assistant": prompt += f"<|assistant|>\n{message.content}</s>\n" # ensure we start with a system prompt, insert blank if needed if not prompt.startswith("<|system|>\n"): prompt = "<|system|>\n</s>\n" + prompt # add final assistant prompt prompt = prompt + "<|assistant|>\n" return prompt import torch from llama_index.llms.huggingface import HuggingFaceLLM from llama_index.core import Settings Settings.llm = HuggingFaceLLM( model_name="HuggingFaceH4/zephyr-7b-beta", tokenizer_name="HuggingFaceH4/zephyr-7b-beta", context_window=3900, max_new_tokens=256, generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95}, messages_to_prompt=messages_to_prompt, completion_to_prompt=completion_to_prompt, device_map="auto", ) ``` Some models will raise errors if all the keys from the tokenizer are passed to the model. A common tokenizer output that causes issues is `token_type_ids`. Below is an example of configuring the predictor to remove this before passing the inputs to the model: ```python HuggingFaceLLM( # ... tokenizer_outputs_to_remove=["token_type_ids"] ) ``` A full API reference can be found [here](../../../api_reference/llms/huggingface.md). Several example notebooks are also listed below: - [StableLM](../../../examples/customization/llms/SimpleIndexDemo-Huggingface_stablelm.ipynb) - [Camel](../../../examples/customization/llms/SimpleIndexDemo-Huggingface_camel.ipynb)
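Once `Settings.llm` points at the local model as above, the rest of the workflow is unchanged. A short usage sketch, assuming a local embedding model is also configured (so no hosted API is called), the `llama-index-embeddings-huggingface` package is installed, and `./data` is a placeholder directory:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# pair the local LLM configured above with a local embedding model
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```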
167799
# Customizing Storage By default, LlamaIndex hides away the complexities and lets you query your data in under 5 lines of code: ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine() response = query_engine.query("Summarize the documents.") ``` Under the hood, LlamaIndex also supports a swappable **storage layer** that allows you to customize where ingested documents (i.e., `Node` objects), embedding vectors, and index metadata are stored. ![](../../_static/storage/storage.png) ### Low-Level API To do this, instead of the high-level API, ```python index = VectorStoreIndex.from_documents(documents) ``` we use a lower-level API that gives more granular control: ```python from llama_index.core import ( StorageContext, load_index_from_storage, load_indices_from_storage, ) from llama_index.core.storage.docstore import SimpleDocumentStore from llama_index.core.storage.index_store import SimpleIndexStore from llama_index.core.vector_stores import SimpleVectorStore from llama_index.core.node_parser import SentenceSplitter # create parser and parse document into nodes parser = SentenceSplitter() nodes = parser.get_nodes_from_documents(documents) # create storage context using default stores storage_context = StorageContext.from_defaults( docstore=SimpleDocumentStore(), vector_store=SimpleVectorStore(), index_store=SimpleIndexStore(), ) # create (or load) docstore and add nodes storage_context.docstore.add_documents(nodes) # build index index = VectorStoreIndex(nodes, storage_context=storage_context) # save index index.storage_context.persist(persist_dir="<persist_dir>") # can also set index_id to save multiple indexes to the same folder index.set_index_id("<index_id>") index.storage_context.persist(persist_dir="<persist_dir>") # to load the index later, make sure you set up the storage context # this will load the persisted stores from persist_dir storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>") # then load the index object loaded_index = load_index_from_storage(storage_context) # if loading an index from a persist_dir containing multiple indexes loaded_index = load_index_from_storage(storage_context, index_id="<index_id>") # if loading multiple indexes from a persist dir loaded_indices = load_indices_from_storage( storage_context, index_ids=["<index_id>", ...] ) ``` You can customize the underlying storage with a one-line change to instantiate different document stores, index stores, and vector stores. See [Document Stores](./docstores.md), [Vector Stores](./vector_stores.md), [Index Stores](./index_stores.md) guides for more details. ### Vector Store Integrations and Storage Most of our vector store integrations store the entire index (vectors + text) in the vector store itself. This comes with the major benefit of not having to explicitly persist the index as shown above, since the vector store is already hosted and persisting the data in our index.
The vector stores that support this practice are: - AzureAISearchVectorStore - ChatGPTRetrievalPluginClient - CassandraVectorStore - ChromaVectorStore - EpsillaVectorStore - DocArrayHnswVectorStore - DocArrayInMemoryVectorStore - JaguarVectorStore - LanceDBVectorStore - MetalVectorStore - MilvusVectorStore - MyScaleVectorStore - OpensearchVectorStore - PineconeVectorStore - QdrantVectorStore - RedisVectorStore - UpstashVectorStore - WeaviateVectorStore A small example using Pinecone is below: ```python import pinecone from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_index.vector_stores.pinecone import PineconeVectorStore # Creating a Pinecone index api_key = "api_key" pinecone.init(api_key=api_key, environment="us-west1-gcp") pinecone.create_index( "quickstart", dimension=1536, metric="euclidean", pod_type="p1" ) index = pinecone.Index("quickstart") # construct vector store vector_store = PineconeVectorStore(pinecone_index=index) # create storage context storage_context = StorageContext.from_defaults(vector_store=vector_store) # load documents documents = SimpleDirectoryReader("./data").load_data() # create index, which will insert documents/vectors to pinecone index = VectorStoreIndex.from_documents( documents, storage_context=storage_context ) ``` If you have an existing vector store with data already loaded in, you can connect to it and directly create a `VectorStoreIndex` as follows: ```python index = pinecone.Index("quickstart") vector_store = PineconeVectorStore(pinecone_index=index) loaded_index = VectorStoreIndex.from_vector_store(vector_store=vector_store) ```
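Whichever way the index was built or re-connected, querying it looks the same; a short sketch continuing from the snippet above (the question is a placeholder):

```python
# query the index backed by the hosted vector store
query_engine = loaded_index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What do the documents say about pricing?")
print(response)

# inspect which stored chunks were retrieved to produce the answer
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])
```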
167840
# Retriever ## Concept Retrievers are responsible for fetching the most relevant context given a user query (or chat message). It can be built on top of [indexes](../../indexing/index.md), but can also be defined independently. It is used as a key building block in [query engines](../../deploying/query_engine/index.md) (and [Chat Engines](../../deploying/chat_engines/index.md)) for retrieving relevant context. !!! tip Confused about where retriever fits in the RAG workflow? Read about [high-level concepts](../../../getting_started/concepts.md) ## Usage Pattern Get started with: ```python retriever = index.as_retriever() nodes = retriever.retrieve("Who is Paul Graham?") ``` ## Get Started Get a retriever from index: ```python retriever = index.as_retriever() ``` Retrieve relevant context for a question: ```python nodes = retriever.retrieve("Who is Paul Graham?") ``` > Note: To learn how to build an index, see [Indexing](../../indexing/index.md) ## High-Level API ### Selecting a Retriever You can select the index-specific retriever class via `retriever_mode`. For example, with a `SummaryIndex`: ```python retriever = summary_index.as_retriever( retriever_mode="llm", ) ``` This creates a [SummaryIndexLLMRetriever](../../../api_reference/retrievers/summary.md) on top of the summary index. See [**Retriever Modes**](retriever_modes.md) for a full list of (index-specific) retriever modes and the retriever classes they map to. ### Configuring a Retriever In the same way, you can pass kwargs to configure the selected retriever. > Note: take a look at the API reference for the selected retriever class' constructor parameters for a list of valid kwargs. For example, if we selected the "llm" retriever mode, we might do the following: ```python retriever = summary_index.as_retriever( retriever_mode="llm", choice_batch_size=5, ) ``` ## Low-Level Composition API You can use the low-level composition API if you need more granular control. To achieve the same outcome as above, you can directly import and construct the desired retriever class: ```python from llama_index.core.retrievers import SummaryIndexLLMRetriever retriever = SummaryIndexLLMRetriever( index=summary_index, choice_batch_size=5, ) ``` ## Examples See more examples in the [retrievers guide](./retrievers.md).
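For completeness, here is a sketch of working with the retrieved results and of constructing a vector retriever directly; it assumes a `VectorStoreIndex` named `index` already exists, and that `VectorIndexRetriever` is the retriever class behind `index.as_retriever()`:

```python
from llama_index.core.retrievers import VectorIndexRetriever

# low-level construction of a vector retriever with a custom top-k
retriever = VectorIndexRetriever(index=index, similarity_top_k=5)

# retrieve() returns NodeWithScore objects: a similarity score plus the node
for node_with_score in retriever.retrieve("Who is Paul Graham?"):
    print(node_with_score.score, node_with_score.node.get_content()[:80])
```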
167846
# Output Parsing Modules LlamaIndex supports integrations with output parsing modules offered by other frameworks. These output parsing modules can be used in the following ways: - To provide formatting instructions for any prompt / query (through `output_parser.format`) - To provide "parsing" for LLM outputs (through `output_parser.parse`) ### Guardrails Guardrails is an open-source Python package for specification/validation/correction of output schemas. See below for a code example. ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_index.output_parsers.guardrails import GuardrailsOutputParser from llama_index.llms.openai import OpenAI # load documents, build index documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data() index = VectorStoreIndex(documents, chunk_size=512) # define query / output spec rail_spec = """ <rail version="0.1"> <output> <list name="points" description="Bullet points regarding events in the author's life."> <object> <string name="explanation" format="one-line" on-fail-one-line="noop" /> <string name="explanation2" format="one-line" on-fail-one-line="noop" /> <string name="explanation3" format="one-line" on-fail-one-line="noop" /> </object> </list> </output> <prompt> Query string here. @xml_prefix_prompt {output_schema} @json_suffix_prompt_v2_wo_none </prompt> </rail> """ # define output parser output_parser = GuardrailsOutputParser.from_rail_string( rail_spec, llm=OpenAI() ) # Attach output parser to LLM llm = OpenAI(output_parser=output_parser) # obtain a structured response query_engine = index.as_query_engine(llm=llm) response = query_engine.query( "What are the three items the author did growing up?", ) print(response) ``` Output: ``` {'points': [{'explanation': 'Writing short stories', 'explanation2': 'Programming on an IBM 1401', 'explanation3': 'Using microcomputers'}]} ``` ### Langchain Langchain also offers output parsing modules that you can use within LlamaIndex. 
```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_index.core.output_parsers import LangchainOutputParser from llama_index.llms.openai import OpenAI from langchain.output_parsers import StructuredOutputParser, ResponseSchema # load documents, build index documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data() index = VectorStoreIndex.from_documents(documents) # define output schema response_schemas = [ ResponseSchema( name="Education", description="Describes the author's educational experience/background.", ), ResponseSchema( name="Work", description="Describes the author's work experience/background.", ), ] # define output parser lc_output_parser = StructuredOutputParser.from_response_schemas( response_schemas ) output_parser = LangchainOutputParser(lc_output_parser) # Attach output parser to LLM llm = OpenAI(output_parser=output_parser) # obtain a structured response query_engine = index.as_query_engine(llm=llm) response = query_engine.query( "What are a few things the author did growing up?", ) print(str(response)) ``` Output: ``` {'Education': 'Before college, the author wrote short stories and experimented with programming on an IBM 1401.', 'Work': 'The author worked on writing and programming outside of school.'} ``` ### Guides More examples: - [Guardrails](../../../examples/output_parsing/GuardrailsDemo.ipynb) - [Langchain](../../../examples/output_parsing/LangchainOutputParserDemo.ipynb) - [Guidance Pydantic Program](../../../examples/output_parsing/guidance_pydantic_program.ipynb) - [Guidance Sub-Question](../../../examples/output_parsing/guidance_sub_question.ipynb) - [Openai Pydantic Program](../../../examples/output_parsing/openai_pydantic_program.ipynb)
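The intro above also mentions using output parsers directly through `format` and `parse`. Here is a hedged sketch of that standalone usage; the raw LLM output string is fabricated purely for illustration:

```python
from llama_index.core.output_parsers import LangchainOutputParser
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

response_schemas = [
    ResponseSchema(name="Education", description="The author's education."),
    ResponseSchema(name="Work", description="The author's work history."),
]
lc_output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
output_parser = LangchainOutputParser(lc_output_parser)

# append the schema's formatting instructions to an arbitrary prompt
print(output_parser.format("Summarize the author's background."))

# parse a made-up raw LLM output back into structured data
raw_output = '```json\n{"Education": "Studied at Cornell.", "Work": "Worked at Viaweb."}\n```'
print(output_parser.parse(raw_output))
```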
167854
# Usage Pattern ## Getting Started An agent is initialized from a set of Tools. Here's an example of instantiating a ReAct agent from a set of Tools. ```python from llama_index.core.tools import FunctionTool from llama_index.llms.openai import OpenAI from llama_index.core.agent import ReActAgent # define sample Tool def multiply(a: int, b: int) -> int: """Multiply two integers and return the result integer""" return a * b multiply_tool = FunctionTool.from_defaults(fn=multiply) # initialize llm llm = OpenAI(model="gpt-3.5-turbo-0613") # initialize ReAct agent agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True) ``` An agent supports both `chat` and `query` endpoints, inheriting from our `ChatEngine` and `QueryEngine` respectively. Example usage: ```python agent.chat("What is 2123 * 215123") ``` To automatically pick the best agent depending on the LLM, you can use the `from_llm` method to generate an agent. ```python from llama_index.core.agent import AgentRunner agent = AgentRunner.from_llm([multiply_tool], llm=llm, verbose=True) ``` ## Defining Tools ### Query Engine Tools It is easy to wrap query engines as tools for an agent as well. Simply do the following: ```python from llama_index.core.agent import ReActAgent from llama_index.core.tools import QueryEngineTool, ToolMetadata # NOTE: lyft_index and uber_index are both VectorStoreIndex instances lyft_engine = lyft_index.as_query_engine(similarity_top_k=3) uber_engine = uber_index.as_query_engine(similarity_top_k=3) query_engine_tools = [ QueryEngineTool( query_engine=lyft_engine, metadata=ToolMetadata( name="lyft_10k", description="Provides information about Lyft financials for year 2021. " "Use a detailed plain text question as input to the tool.", ), return_direct=False, ), QueryEngineTool( query_engine=uber_engine, metadata=ToolMetadata( name="uber_10k", description="Provides information about Uber financials for year 2021. " "Use a detailed plain text question as input to the tool.", ), return_direct=False, ), ] # initialize ReAct agent agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True) ``` ### Use other agents as Tools A nifty feature of our agents is that since they inherit from `BaseQueryEngine`, you can easily define other agents as tools through our `QueryEngineTool`. ```python from llama_index.core.tools import QueryEngineTool, ToolMetadata query_engine_tools = [ QueryEngineTool( query_engine=sql_agent, metadata=ToolMetadata( name="sql_agent", description="Agent that can execute SQL queries." ), ), QueryEngineTool( query_engine=gmail_agent, metadata=ToolMetadata( name="gmail_agent", description="Tool that can send emails on Gmail.", ), ), ] outer_agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True) ``` ## Agent With Planning Breaking down an initial task into easier-to-digest sub-tasks is a powerful pattern. LlamaIndex provides an agent planning module that does just this: ```python from llama_index.agent.openai import OpenAIAgentWorker from llama_index.core.agent import ( StructuredPlannerAgent, FunctionCallingAgentWorker, ) worker = FunctionCallingAgentWorker.from_tools(tools, llm=llm) agent = StructuredPlannerAgent(worker) ``` In general, this agent may take longer to respond compared to the basic `AgentRunner` class, but the outputs will often be more complete. Another tradeoff to consider is that planning often requires a very capable LLM (for context, `gpt-3.5-turbo` is sometimes flakey for planning, while `gpt-4-turbo` does much better.)
See more in the [complete guide](../../../examples/agent/structured_planner.ipynb) ## Lower-Level API The OpenAIAgent and ReActAgent are simple wrappers on top of an `AgentRunner` interacting with an `AgentWorker`. _All_ agents can be defined in this manner. For example, for the OpenAIAgent: ```python from llama_index.core.agent import AgentRunner from llama_index.agent.openai import OpenAIAgentWorker # construct OpenAIAgent from tools openai_step_engine = OpenAIAgentWorker.from_tools(tools, llm=llm, verbose=True) agent = AgentRunner(openai_step_engine) ``` This is also the preferred format for custom agents. Check out the [lower-level agent guide](agent_runner.md) for more details. ## Customizing your Agent If you wish to define a custom agent, the easiest way to do so is to just define a stateful function and wrap it with a `FnAgentWorker`. The `state` variable passed in and out of the function can contain anything you want it to, whether it's tools or arbitrary variables. It also contains task and output objects. ```python from typing import Any, Dict, Tuple from llama_index.core.agent import FnAgentWorker ## This is an example showing a trivial function that multiplies an input number by 2 each time. ## Pass this into an agent def multiply_agent_fn(state: dict) -> Tuple[Dict[str, Any], bool]: """Mock agent input function.""" if "max_count" not in state: raise ValueError("max_count must be specified.") # __output__ is a special key indicating the final output of the agent # __task__ is a special key representing the Task object passed by the agent to the function. # `task.input` is the input string passed if "__output__" not in state: state["__output__"] = int(state["__task__"].input) state["count"] = 0 else: state["__output__"] = state["__output__"] * 2 state["count"] += 1 is_done = state["count"] >= state["max_count"] # the output of this function should be a tuple of the state variable and is_done return state, is_done agent = FnAgentWorker( fn=multiply_agent_fn, initial_state={"max_count": 5} ).as_agent() agent.query("5") ``` Check out our [Custom Agent Notebook Guide](../../../examples/agent/custom_agent.ipynb) for more details.
167859
# Tools

## Concept

Having proper tool abstractions is at the core of building [data agents](./index.md). Defining a set of Tools is similar to defining any API interface, with the exception that these Tools are meant for agent rather than human use.

We allow users to define both a **Tool** and a **ToolSpec** containing a series of functions under the hood.

When using an agent or LLM with function calling, the tool selected (and the arguments written for that tool) rely strongly on the **tool name** and **description** of the tool's purpose and arguments. Spending time tuning these parameters can result in large changes in how the LLM calls these tools.

A Tool implements a very generic interface - simply define `__call__` and also return some basic metadata (name, description, function schema).

We offer a few different types of Tools:

- `FunctionTool`: A function tool allows users to easily convert any user-defined function into a Tool. It can also auto-infer the function schema.
- `QueryEngineTool`: A tool that wraps an existing [query engine](../query_engine/index.md). Note: since our agent abstractions inherit from `BaseQueryEngine`, these tools can also wrap other agents.
- Community contributed `ToolSpecs` that define one or more tools around a single service (like Gmail)
- Utility tools for wrapping other tools to handle returning large amounts of data from a tool

## FunctionTool

A function tool is a simple wrapper around any existing function (both sync and async are supported!).

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool


def get_weather(location: str) -> str:
    """Useful for getting the weather for a given location."""
    ...


tool = FunctionTool.from_defaults(
    get_weather,
    # async_fn=aget_weather,  # optional!
)

agent = ReActAgent.from_tools([tool], llm=llm, verbose=True)
```

For a better function definition, you can also leverage pydantic for the function arguments.

```python
from pydantic import Field


def get_weather(
    location: str = Field(
        description="A city name and state, formatted like '<name>, <state>'"
    ),
) -> str:
    """Useful for getting the weather for a given location."""
    ...


tool = FunctionTool.from_defaults(get_weather)
```

By default, the tool name will be the function name, and the docstring will be the tool description. But you can also override this.

```python
tool = FunctionTool.from_defaults(get_weather, name="...", description="...")
```

## QueryEngineTool

Any query engine can be turned into a tool, using `QueryEngineTool`:

```python
from llama_index.core.tools import QueryEngineTool

tool = QueryEngineTool.from_defaults(
    query_engine, name="...", description="..."
)
```

## Tool Specs

We also offer a rich set of Tools and Tool Specs through [LlamaHub](https://llamahub.ai/) 🦙.

You can think of tool specs like bundles of tools meant to be used together. Usually these cover useful tools across a single interface/service, like Gmail.

To use with an agent, you can install the specific tool spec integration:

```bash
pip install llama-index-tools-google
```

And then use it:

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.tools.google import GmailToolSpec

tool_spec = GmailToolSpec()
agent = OpenAIAgent.from_tools(tool_spec.to_tool_list(), verbose=True)
```

See [LlamaHub](https://llamahub.ai) for a full list of community contributed tool specs.
## Utility Tools

Oftentimes, directly querying an API can return a massive volume of data, which on its own may overflow the context window of the LLM (or at the very least unnecessarily increase the number of tokens that you are using).

To tackle this, we've provided an initial set of "utility tools" in LlamaHub Tools - utility tools are not conceptually tied to a given service (e.g. Gmail, Notion), but rather can augment the capabilities of existing Tools. In this particular case, utility tools help to abstract away common patterns of needing to cache/index and query data that's returned from any API request.

Let's walk through our two main utility tools below.

### OnDemandLoaderTool

This tool turns any existing LlamaIndex data loader (`BaseReader` class) into a tool that an agent can use. The tool can be called with all the parameters needed to trigger `load_data` from the data loader, along with a natural language query string. During execution, we first load data from the data loader, index it (for instance with a vector store), and then query it "on-demand". All three of these steps happen in a single tool call.

Oftentimes this can be preferable to figuring out how to load and index API data yourself. While this may allow for data reusability, oftentimes users just need an ad-hoc index to abstract away prompt window limitations for any API call.

A usage example is given below:

```python
from llama_index.readers.wikipedia import WikipediaReader
from llama_index.core.tools.ondemand_loader_tool import OnDemandLoaderTool

reader = WikipediaReader()

tool = OnDemandLoaderTool.from_defaults(
    reader,
    name="Wikipedia Tool",
    description="A tool for loading data and querying articles from Wikipedia",
)
```

### LoadAndSearchToolSpec

The LoadAndSearchToolSpec takes in any existing Tool as input. As a tool spec, it implements `to_tool_list`, and when that function is called, two tools are returned: a `load` tool and then a `search` tool.

The `load` Tool execution would call the underlying Tool, and then index the output (by default with a vector index). The `search` Tool execution would take in a query string as input and call the underlying index.

This is helpful for any API endpoint that will by default return large volumes of data - for instance our WikipediaToolSpec will by default return entire Wikipedia pages, which will easily overflow most LLM context windows.

Example usage is shown below:

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.tools.wikipedia import WikipediaToolSpec
from llama_index.core.tools.tool_spec.load_and_search import (
    LoadAndSearchToolSpec,
)

wiki_spec = WikipediaToolSpec()
# Get the search wikipedia tool
tool = wiki_spec.to_tool_list()[1]

# Create the Agent with load/search tools
agent = OpenAIAgent.from_tools(
    LoadAndSearchToolSpec.from_defaults(tool).to_tool_list(), verbose=True
)
```

### Return Direct

You'll notice the option `return_direct` in the tool class constructor. If this is set to `True`, the response from the tool is returned directly, without being interpreted and rewritten by the agent. This can be helpful for decreasing runtime, or designing/specifying tools that will end the agent reasoning loop.
For example, say you specify a tool:

```python
tool = QueryEngineTool.from_defaults(
    query_engine,
    name="<name>",
    description="<description>",
    return_direct=True,
)

agent = OpenAIAgent.from_tools([tool])

response = agent.chat("<question that invokes tool>")
```

In the above example, the query engine tool would be invoked, the response from that tool would be returned directly as the final response, and the execution loop would end.

If `return_direct=False` were used, then the agent would rewrite the response using the context of the chat history, or even make another tool call.

We have also provided an [example notebook](../../../examples/agent/return_direct_agent.ipynb) of using `return_direct`.

## Debugging Tools

Often, it can be useful to debug exactly what tool definition is being sent to APIs.

You can get a good peek at this by using the underlying function to get the current tool schema, which is leveraged in APIs like OpenAI and Anthropic.

```python
schema = tool.metadata.get_parameters_dict()
print(schema)
```
167866
# Chatbots Chatbots are another extremely popular use case for LLMs. Instead of single-shot question-answering, a chatbot can handle multiple back-and-forth queries and answers, getting clarification or answering follow-up questions. LlamaIndex gives you the tools to build knowledge-augmented chatbots and agents. This use case builds upon the [QA](q_and_a/index.md) use case, make sure to check that out first! ## Resources The central module guide you'll want to check out is our [Chat Engines](../module_guides/deploying/chat_engines/index.md). Here are some additional relevant resources to build full-stack chatbot apps: - [Building a chatbot](../understanding/putting_it_all_together/chatbots/building_a_chatbot.md) tutorial - [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you - [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings - [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit - Our [OpenAI agents](../module_guides/deploying/agents/modules.md) are all chat bots in nature ## External sources - [Building a chatbot with Streamlit](https://blog.streamlit.io/build-a-chatbot-with-custom-data-sources-powered-by-llamaindex/)
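To ground the resources above, here is a minimal sketch of a knowledge-augmented chatbot built with a chat engine. It assumes a local `data` folder (as in the starter example) and the default OpenAI models; the chat mode shown is just one of several available modes.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# build an index over your own documents
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# turn it into a chat engine that keeps track of the conversation history
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")

print(chat_engine.chat("What did the author do growing up?"))
print(chat_engine.chat("Oh interesting, tell me more."))
```

Unlike a query engine, the chat engine carries the conversation history forward, so the follow-up question is answered in the context of the first one.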
167907
# Large Language Models

##### FAQ

1. [How to define a custom LLM?](#1-how-to-define-a-custom-llm)
2. [How to use a different OpenAI model?](#2-how-to-use-a-different-openai-model)
3. [How can I customize my prompt?](#3-how-can-i-customize-my-prompt)
4. [Is it required to fine-tune my model?](#4-is-it-required-to-fine-tune-my-model)
5. [I want the LLM to answer in Chinese/Italian/French but it only answers in English, how to proceed?](#5-i-want-the-llm-to-answer-in-chineseitalianfrench-but-it-only-answers-in-english-how-to-proceed)
6. [Is LlamaIndex GPU accelerated?](#6-is-llamaindex-gpu-accelerated)

---

##### 1. How to define a custom LLM?

You can access [Usage Custom](../../module_guides/models/llms/usage_custom.md#example-using-a-custom-llm-model---advanced) to define a custom LLM.

---

##### 2. How to use a different OpenAI model?

To use a different OpenAI model you can access [Configure Model](../../examples/llm/openai.ipynb) to set your own custom model.

---

##### 3. How can I customize my prompt?

You can access [Prompts](../../module_guides/models/prompts/index.md) to learn how to customize your prompts.

---

##### 4. Is it required to fine-tune my model?

No. There are isolated modules which might provide better results, but fine-tuning isn't required; you can use LlamaIndex without fine-tuning the model.

---

##### 5. I want the LLM to answer in Chinese/Italian/French but it only answers in English, how to proceed?

To get the LLM to answer in another language more reliably, you can update the prompts to enforce the output language.

```py
response = query_engine.query("Rest of your query... \nRespond in Italian")
```

Alternatively:

```py
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

llm = OpenAI(system_prompt="Always respond in Italian.")

# set a global llm
Settings.llm = llm

query_engine = load_index_from_storage(
    storage_context,
).as_query_engine()
```

---

##### 6. Is LlamaIndex GPU accelerated?

Yes, you can run a language model (LLM) on a GPU when running it locally. You can find an example of setting up LLMs with GPU support in the [llama2 setup](../../examples/vector_stores/SimpleIndexDemoLlama-Local.ipynb) documentation.

---
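As a quick sketch for question 2, swapping the OpenAI model looks like this; the model name is only an example, and the per-engine override assumes you already have an `index` built.

```py
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# set a different OpenAI model globally
Settings.llm = OpenAI(model="gpt-4", temperature=0.1)

# or override it for a single query engine only
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-4"))
```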
167908
# Documents and Nodes

##### FAQ

1. [What is the default `chunk_size` of a Node object?](#1-what-is-the-default-chunk_size-of-a-node-object)
2. [How to add information like name, url in a `Document` object?](#2-how-to-add-information-like-name-url-in-a-document-object)
3. [How to update existing document in an Index?](#3-how-to-update-existing-document-in-an-index)

---

##### 1. What is the default `chunk_size` of a Node object?

It's 1024 by default. If you want to customize the `chunk_size`, you can follow [Customizing Node](../../module_guides/loading/node_parsers/index.md#customization).

---

##### 2. How to add information like name, url in a `Document` object?

You can customize the Document object and add extra info in the form of metadata. To know more on this, follow [Customize Document](../../module_guides/loading/documents_and_nodes/usage_documents.md#customizing-documents).

---

##### 3. How to update existing document in an Index?

You can update/delete an existing document in an Index with the help of its `doc_id`. You can also add new documents to an existing Index. To know more, check [Document Management](../../module_guides/indexing/document_management.md).

---
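As a sketch of questions 2 and 3 together, assuming an in-memory `VectorStoreIndex`; the name, url, and `doc_id` values below are placeholders:

```python
from llama_index.core import Document, VectorStoreIndex

# question 2: attach extra info such as a name and url as metadata
doc = Document(
    text="...",
    metadata={"name": "My Page", "url": "https://example.com/page"},
    doc_id="my-page",  # a stable doc_id makes later updates/deletes easy
)
index = VectorStoreIndex.from_documents([doc])

# question 3: update or remove the document later via its doc_id
index.update_ref_doc(Document(text="new text", doc_id="my-page"))
index.delete_ref_doc("my-page", delete_from_docstore=True)
```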
167940
# Frequently Asked Questions (FAQ) !!! tip If you haven't already, [install LlamaIndex](installation.md) and complete the [starter tutorial](starter_example.md). If you run into terms you don't recognize, check out the [high-level concepts](concepts.md). In this section, we start with the code you wrote for the [starter example](starter_example.md) and show you the most common ways you might want to customize it for your use case: ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine() response = query_engine.query("What did the author do growing up?") print(response) ``` --- ## **"I want to parse my documents into smaller chunks"** ```python # Global settings from llama_index.core import Settings Settings.chunk_size = 512 # Local settings from llama_index.core.node_parser import SentenceSplitter index = VectorStoreIndex.from_documents( documents, transformations=[SentenceSplitter(chunk_size=512)] ) ``` --- ## **"I want to use a different vector store"** First, you can install the vector store you want to use. For example, to use Chroma as the vector store, you can install it using pip: ```bash pip install llama-index-vector-stores-chroma ``` To learn more about all integrations available, check out [LlamaHub](https://llamahub.ai). Then, you can use it in your code: ```python import chromadb from llama_index.vector_stores.chroma import ChromaVectorStore from llama_index.core import StorageContext chroma_client = chromadb.PersistentClient() chroma_collection = chroma_client.create_collection("quickstart") vector_store = ChromaVectorStore(chroma_collection=chroma_collection) storage_context = StorageContext.from_defaults(vector_store=vector_store) ``` `StorageContext` defines the storage backend for where the documents, embeddings, and indexes are stored. You can learn more about [storage](../module_guides/storing/index.md) and [how to customize it](../module_guides/storing/customization.md). ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents( documents, storage_context=storage_context ) query_engine = index.as_query_engine() response = query_engine.query("What did the author do growing up?") print(response) ``` --- ## **"I want to retrieve more context when I query"** ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine(similarity_top_k=5) response = query_engine.query("What did the author do growing up?") print(response) ``` `as_query_engine` builds a default `retriever` and `query engine` on top of the index. You can configure the retriever and query engine by passing in keyword arguments. Here, we configure the retriever to return the top 5 most similar documents (instead of the default of 2). You can learn more about [retrievers](../module_guides/querying/retriever/retrievers.md) and [query engines](../module_guides/querying/retriever/index.md). 
--- ## **"I want to use a different LLM"** ```python # Global settings from llama_index.core import Settings from llama_index.llms.ollama import Ollama Settings.llm = Ollama(model="mistral", request_timeout=60.0) # Local settings index.as_query_engine(llm=Ollama(model="mistral", request_timeout=60.0)) ``` You can learn more about [customizing LLMs](../module_guides/models/llms.md). --- ## **"I want to use a different response mode"** ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine(response_mode="tree_summarize") response = query_engine.query("What did the author do growing up?") print(response) ``` You can learn more about [query engines](../module_guides/querying/index.md) and [response modes](../module_guides/deploying/query_engine/response_modes.md). --- ## **"I want to stream the response back"** ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine(streaming=True) response = query_engine.query("What did the author do growing up?") response.print_response_stream() ``` You can learn more about [streaming responses](../module_guides/deploying/query_engine/streaming.md). --- ## **"I want a chatbot instead of Q&A"** ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_chat_engine() response = query_engine.chat("What did the author do growing up?") print(response) response = query_engine.chat("Oh interesting, tell me more.") print(response) ``` Learn more about the [chat engine](../module_guides/deploying/chat_engines/usage_pattern.md). --- ## Next Steps - Want a thorough walkthrough of (almost) everything you can configure? Get started with [Understanding LlamaIndex](../understanding/index.md). - Want more in-depth understanding of specific modules? Check out the [component guides](../module_guides/index.md).
167943
# Starter Tutorial (OpenAI) This is our famous "5 lines of code" starter example using OpenAI. !!! tip Make sure you've followed the [installation](installation.md) steps first. !!! tip Want to use local models? If you want to do our starter tutorial using only local models, [check out this tutorial instead](starter_example_local.md). ## Download data This example uses the text of Paul Graham's essay, ["What I Worked On"](http://paulgraham.com/worked.html). This and many other examples can be found in the `examples` folder of our repo. The easiest way to get it is to [download it via this link](https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt) and save it in a folder called `data`. ## Set your OpenAI API key LlamaIndex uses OpenAI's `gpt-3.5-turbo` by default. Make sure your API key is available to your code by setting it as an environment variable. In MacOS and Linux, this is the command: ``` export OPENAI_API_KEY=XXXXX ``` and on Windows it is ``` set OPENAI_API_KEY=XXXXX ``` ## Load data and build an index In the same folder where you created the `data` folder, create a file called `starter.py` file with the following: ```python from llama_index.core import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) ``` This builds an index over the documents in the `data` folder (which in this case just consists of the essay text, but could contain many documents). Your directory structure should look like this: <pre> ├── starter.py └── data    └── paul_graham_essay.txt </pre> ## Query your data Add the following lines to `starter.py` ```python query_engine = index.as_query_engine() response = query_engine.query("What did the author do growing up?") print(response) ``` This creates an engine for Q&A over your index and asks a simple question. You should get back a response similar to the following: `The author wrote short stories and tried to program on an IBM 1401.` ## Viewing Queries and Events Using Logging Want to see what's happening under the hood? Let's add some logging. Add these lines to the top of `starter.py`: ```python import logging import sys logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) ``` You can set the level to `DEBUG` for verbose output, or use `level=logging.INFO` for less. ## Storing your index By default, the data you just loaded is stored in memory as a series of vector embeddings. You can save time (and requests to OpenAI) by saving the embeddings to disk. That can be done with this line: ```python index.storage_context.persist() ``` By default, this will save the data to the directory `storage`, but you can change that by passing a `persist_dir` parameter. Of course, you don't get the benefits of persisting unless you load the data. 
So let's modify `starter.py` to generate and store the index if it doesn't exist, but load it if it does: ```python import os.path from llama_index.core import ( VectorStoreIndex, SimpleDirectoryReader, StorageContext, load_index_from_storage, ) # check if storage already exists PERSIST_DIR = "./storage" if not os.path.exists(PERSIST_DIR): # load the documents and create the index documents = SimpleDirectoryReader("data").load_data() index = VectorStoreIndex.from_documents(documents) # store it for later index.storage_context.persist(persist_dir=PERSIST_DIR) else: # load the existing index storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR) index = load_index_from_storage(storage_context) # Either way we can now query the index query_engine = index.as_query_engine() response = query_engine.query("What did the author do growing up?") print(response) ``` Now you can efficiently query to your heart's content! But this is just the beginning of what you can do with LlamaIndex. !!! tip - learn more about the [high-level concepts](./concepts.md). - tell me how to [customize things](./customization.md). - curious about a specific module? check out the [component guides](../module_guides/index.md).
167949
# Starter Tools

We have created a variety of open-source tools to help you bootstrap your generative AI projects.

## create-llama: Full-stack web application generator

The `create-llama` tool is a CLI tool that helps you create a full-stack web application with your choice of frontend and backend that indexes your documents and allows you to chat with them. Running it is as simple as:

```shell
npx create-llama@latest
```

For full documentation, check out the [create-llama README on npm](https://www.npmjs.com/package/create-llama).

## SEC Insights: advanced query techniques

Indexing and querying financial filings is a very common use-case for generative AI. To help you get started, we have created and open-sourced a full-stack application that lets you select filings from public companies across multiple years and summarize and compare them. It uses advanced querying and retrieval techniques to achieve high quality results.

You can use the app yourself at [SECinsights.ai](https://www.secinsights.ai/) or check out the code on [GitHub](https://github.com/run-llama/sec-insights).

![SEC Insights](secinsights.png)

## Chat LlamaIndex: Full-stack chat application

Chat LlamaIndex is another full-stack, open-source application that has a variety of interaction modes including streaming chat and multi-modal querying over images. It's a great way to see advanced chat application techniques. You can use it at [chat.llamaindex.ai](https://chat.llamaindex.ai/) or check out the code on [GitHub](https://github.com/run-llama/chat-llamaindex).

![Chat LlamaIndex](chatllamaindex.png)

## LlamaBot: Slack and Discord apps

LlamaBot is another open-source application, this time for building a Slack bot that listens to messages within your organization and answers questions about what's going on. You can check out the [full tutorial and code on GitHub](https://github.com/run-llama/llamabot). If you prefer Discord, there is a [Discord version contributed by the community](https://twitter.com/clusteredbytes/status/1754220009885163957).

![LlamaBot](llamabot.png)

## RAG CLI: quick command-line chat with any document

We provide a command-line tool that quickly lets you chat with documents. Learn more in the [RAG CLI documentation](rag_cli.md).
168097
class DuckDBVectorStore(BasePydanticVectorStore): """DuckDB vector store. In this vector store, embeddings are stored within a DuckDB database. During query time, the index uses DuckDB to query for the top k most similar nodes. Examples: `pip install llama-index-vector-stores-duckdb` ```python from llama_index.vector_stores.duckdb import DuckDBVectorStore # in-memory vector_store = DuckDBVectorStore() # persist to disk vector_store = DuckDBVectorStore("pg.duckdb", persist_dir="./persist/") ``` """ stores_text: bool = True flat_metadata: bool = True database_name: Optional[str] table_name: Optional[str] # schema_name: Optional[str] # TODO: support schema name embed_dim: Optional[int] # hybrid_search: Optional[bool] # TODO: support hybrid search text_search_config: Optional[dict] persist_dir: Optional[str] _conn: Any = PrivateAttr() _is_initialized: bool = PrivateAttr(default=False) _database_path: Optional[str] = PrivateAttr() def __init__( self, database_name: Optional[str] = ":memory:", table_name: Optional[str] = "documents", # schema_name: Optional[str] = "main", embed_dim: Optional[int] = None, # hybrid_search: Optional[bool] = False, # https://duckdb.org/docs/extensions/full_text_search text_search_config: Optional[dict] = { "stemmer": "english", "stopwords": "english", "ignore": "(\\.|[^a-z])+", "strip_accents": True, "lower": True, "overwrite": False, }, persist_dir: Optional[str] = "./storage", **kwargs: Any, ) -> None: """Init params.""" try: import duckdb except ImportError: raise ImportError(import_err_msg) database_path = None if database_name == ":memory:": _home_dir = os.path.expanduser("~") conn = duckdb.connect(database_name) conn.execute(f"SET home_directory='{_home_dir}';") conn.install_extension("json") conn.load_extension("json") conn.install_extension("fts") conn.load_extension("fts") else: # check if persist dir exists if not os.path.exists(persist_dir): os.makedirs(persist_dir) database_path = os.path.join(persist_dir, database_name) with DuckDBLocalContext(database_path) as _conn: pass conn = None super().__init__( database_name=database_name, table_name=table_name, # schema_name=schema_name, embed_dim=embed_dim, # hybrid_search=hybrid_search, text_search_config=text_search_config, persist_dir=persist_dir, ) self._is_initialized = False self._conn = conn self._database_path = database_path @classmethod def from_local( cls, database_path: str, table_name: Optional[str] = "documents", # schema_name: Optional[str] = "main", embed_dim: Optional[int] = None, # hybrid_search: Optional[bool] = False, text_search_config: Optional[dict] = { "stemmer": "english", "stopwords": "english", "ignore": "(\\.|[^a-z])+", "strip_accents": True, "lower": True, "overwrite": False, }, **kwargs: Any, ) -> "DuckDBVectorStore": """Load a DuckDB vector store from a local file.""" with DuckDBLocalContext(database_path) as _conn: try: _table_info = _conn.execute(f"SHOW {table_name};").fetchall() except Exception as e: raise ValueError(f"Index table {table_name} not found in the database.") # Not testing for the column type similarity only testing for the column names. _std = {"text", "node_id", "embedding", "metadata_"} _ti = {_i[0] for _i in _table_info} if _std != _ti: raise ValueError( f"Index table {table_name} does not have the correct schema." 
) _cls = cls( database_name=os.path.basename(database_path), table_name=table_name, embed_dim=embed_dim, text_search_config=text_search_config, persist_dir=os.path.dirname(database_path), **kwargs, ) _cls._is_initialized = True return _cls @classmethod def from_params( cls, database_name: Optional[str] = ":memory:", table_name: Optional[str] = "documents", # schema_name: Optional[str] = "main", embed_dim: Optional[int] = None, # hybrid_search: Optional[bool] = False, text_search_config: Optional[dict] = { "stemmer": "english", "stopwords": "english", "ignore": "(\\.|[^a-z])+", "strip_accents": True, "lower": True, "overwrite": False, }, persist_dir: Optional[str] = "./storage", **kwargs: Any, ) -> "DuckDBVectorStore": return cls( database_name=database_name, table_name=table_name, # schema_name=schema_name, embed_dim=embed_dim, # hybrid_search=hybrid_search, text_search_config=text_search_config, persist_dir=persist_dir, **kwargs, ) @classmethod def class_name(cls) -> str: return "DuckDBVectorStore" @property def client(self) -> Any: """Return client.""" return self._conn def _initialize(self) -> None: if not self._is_initialized: # TODO: schema.table also. # Check if table and type is present # if not, create table if self.embed_dim is None: _query = f""" CREATE TABLE {self.table_name} ( node_id VARCHAR, text TEXT, embedding FLOAT[], metadata_ JSON ); """ else: _query = f""" CREATE TABLE {self.table_name} ( node_id VARCHAR, text TEXT, embedding FLOAT[{self.embed_dim}], metadata_ JSON ); """ if self.database_name == ":memory:": self._conn.execute(_query) else: with DuckDBLocalContext(self._database_path) as _conn: _conn.execute(_query) self._is_initialized = True def _node_to_table_row(self, node: BaseNode) -> Any: return ( node.node_id, node.get_content(metadata_mode=MetadataMode.NONE), node.get_embedding(), node_to_metadata_dict( node, remove_text=True, flat_metadata=self.flat_metadata, ), ) def _table_row_to_node(self, row: Any) -> BaseNode: return metadata_dict_to_node(json.loads(row[3]), row[1]) def add(self, nodes: List[BaseNode], **add_kwargs: Any) -> List[str]: """Add nodes to index. Args: nodes: List[BaseNode]: list of nodes with embeddings """ self._initialize() ids = [] if self.database_name == ":memory:": _table = self._conn.table(self.table_name) for node in nodes: ids.append(node.node_id) _row = self._node_to_table_row(node) _table.insert(_row) else: with DuckDBLocalContext(self._database_path) as _conn: _table = _conn.table(self.table_name) for node in nodes: ids.append(node.node_id) _row = self._node_to_table_row(node) _table.insert(_row) return ids def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. """ _ddb_query = f""" DELETE FROM {self.table_name} WHERE json_extract_string(metadata_, '$.ref_doc_id') = '{ref_doc_id}'; """ if self.database_name == ":memory:": self._conn.execute(_ddb_query) else: with DuckDBLocalContext(self._database_path) as _conn: _conn.execute(_ddb_query)
168102
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# DuckDB\n", "\n", ">[DuckDB](https://duckdb.org/docs/api/python/overview) is a fast in-process analytical database. DuckDB is under an MIT license.\n", "\n", "In this notebook we are going to show how to use DuckDB as a Vector store to be used in LlamaIndex.\n", "\n", "Install DuckDB with:\n", "\n", "```sh\n", "pip install duckdb\n", "```\n", "\n", "Make sure to use the latest DuckDB version (>= 0.10.0).\n", "\n", "You can run DuckDB in different modes depending on persistence:\n", "- `in-memory` is the default mode, where the database is created in memory, you can force this to be use by setting `database_name = \":memory:\"` when initializing the vector store.\n", "- `persistence` is set by using a name for a database and setting a persistence directory `database_name = \"my_vector_store.duckdb\"` where the database is persisted in the default `persist_dir` or to the one you set it to.\n", "\n", "With the vector store created, you can:\n", "- `.add` \n", "- `.get` \n", "- `.update`\n", "- `.upsert`\n", "- `.delete`\n", "- `.peek`\n", "- `.query` to run a search. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic example\n", "\n", "In this basic example, we take the Paul Graham essay, split it into chunks, embed it using an open-source embedding model, load it into `DuckDBVectorStore`, and then query it.\n", "\n", "For the embedding model we will use OpenAI. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install llama-index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating a DuckDB Index" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install duckdb\n", "!pip install llama-index-vector-stores-duckdb" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n", "from llama_index.vector_stores.duckdb import DuckDBVectorStore\n", "from llama_index.core import StorageContext\n", "\n", "from IPython.display import Markdown, display" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Setup OpenAI API\n", "import os\n", "import openai\n", "\n", "openai.api_key = os.environ[\"OPENAI_API_KEY\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Download and prepare the sample dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2024-02-16 19:38:34-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 75042 (73K) [text/plain]\n", "Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n", "\n", "data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.06s \n", "\n", "2024-02-16 19:38:34 (1.24 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n", "\n" ] } ], "source": [ "!mkdir -p 'data/paul_graham/'\n", "!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "documents = SimpleDirectoryReader(\"data/paul_graham/\").load_data()\n", "\n", "vector_store = DuckDBVectorStore()\n", "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n", "\n", "index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "<b>The author mentions that before college, they worked on two main things outside of school: writing and programming. They wrote short stories and also tried writing programs on an IBM 1401 computer. They later got a microcomputer and started programming more extensively.</b>" ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "query_engine = index.as_query_engine()\n", "response = query_engine.query(\"What did the author do growing up?\")\n", "display(Markdown(f\"<b>{response}</b>\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Persisting to disk example\n", "\n", "Extending the previous example, if you want to save to disk, simply initialize the DuckDBVectorStore by specifying a database name and persist directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Save to disk\n", "documents = SimpleDirectoryReader(\"data/paul_graham/\").load_data()\n", "\n", "vector_store = DuckDBVectorStore(\"pg.duckdb\", persist_dir=\"./persist/\")\n", "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n", "\n", "index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load from disk\n", "vector_store = DuckDBVectorStore.from_local(\"./persist/pg.duckdb\")\n", "index = VectorStoreIndex.from_vector_store(vector_store)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "<b>The author mentions that before college, they worked on two main things outside of school: writing and programming. They wrote short stories and also tried writing programs on an IBM 1401 computer. They later got a microcomputer and started programming more extensively.</b>" ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Query Data\n", "query_engine = index.as_query_engine()\n",
168298
class MilvusVectorStore(BasePydanticVectorStore): """The Milvus Vector Store. In this vector store we store the text, its embedding and a its metadata in a Milvus collection. This implementation allows the use of an already existing collection. It also supports creating a new one if the collection doesn't exist or if `overwrite` is set to True. Args: uri (str, optional): The URI to connect to, comes in the form of "https://address:port" for Milvus or Zilliz Cloud service, or "path/to/local/milvus.db" for the lite local Milvus. Defaults to "./milvus_llamaindex.db". token (str, optional): The token for log in. Empty if not using rbac, if using rbac it will most likely be "username:password". collection_name (str, optional): The name of the collection where data will be stored. Defaults to "llamalection". dim (int, optional): The dimension of the embedding vectors for the collection. Required if creating a new collection. embedding_field (str, optional): The name of the embedding field for the collection, defaults to DEFAULT_EMBEDDING_KEY. doc_id_field (str, optional): The name of the doc_id field for the collection, defaults to DEFAULT_DOC_ID_KEY. similarity_metric (str, optional): The similarity metric to use, currently supports IP, COSINE and L2. consistency_level (str, optional): Which consistency level to use for a newly created collection. Defaults to "Session". overwrite (bool, optional): Whether to overwrite existing collection with same name. Defaults to False. text_key (str, optional): What key text is stored in in the passed collection. Used when bringing your own collection. Defaults to None. index_config (dict, optional): The configuration used for building the Milvus index. Defaults to None. search_config (dict, optional): The configuration used for searching the Milvus index. Note that this must be compatible with the index type specified by `index_config`. Defaults to None. collection_properties (dict, optional): The collection properties such as TTL (Time-To-Live) and MMAP (memory mapping). Defaults to None. It could include: - 'collection.ttl.seconds' (int): Once this property is set, data in the current collection expires in the specified time. Expired data in the collection will be cleaned up and will not be involved in searches or queries. - 'mmap.enabled' (bool): Whether to enable memory-mapped storage at the collection level. batch_size (int): Configures the number of documents processed in one batch when inserting data into Milvus. Defaults to DEFAULT_BATCH_SIZE. enable_sparse (bool): A boolean flag indicating whether to enable support for sparse embeddings for hybrid retrieval. Defaults to False. sparse_embedding_function (BaseSparseEmbeddingFunction, optional): If enable_sparse is True, this object should be provided to convert text to a sparse embedding. hybrid_ranker (str): Specifies the type of ranker used in hybrid search queries. Currently only supports ['RRFRanker','WeightedRanker']. Defaults to "RRFRanker". hybrid_ranker_params (dict, optional): Configuration parameters for the hybrid ranker. The structure of this dictionary depends on the specific ranker being used: - For "RRFRanker", it should include: - 'k' (int): A parameter used in Reciprocal Rank Fusion (RRF). This value is used to calculate the rank scores as part of the RRF algorithm, which combines multiple ranking strategies into a single score to improve search relevance. - For "WeightedRanker", it expects: - 'weights' (list of float): A list of exactly two weights: 1. 
The weight for the dense embedding component. 2. The weight for the sparse embedding component. These weights are used to adjust the importance of the dense and sparse components of the embeddings in the hybrid retrieval process. Defaults to an empty dictionary, implying that the ranker will operate with its predefined default settings. index_management (IndexManagement): Specifies the index management strategy to use. Defaults to "create_if_not_exists". scalar_field_names (list): The names of the extra scalar fields to be included in the collection schema. scalar_field_types (list): The types of the extra scalar fields. Raises: ImportError: Unable to import `pymilvus`. MilvusException: Error communicating with Milvus, more can be found in logging under Debug. Returns: MilvusVectorstore: Vectorstore that supports add, delete, and query. Examples: `pip install llama-index-vector-stores-milvus` ```python from llama_index.vector_stores.milvus import MilvusVectorStore # Setup MilvusVectorStore vector_store = MilvusVectorStore( dim=1536, collection_name="your_collection_name", uri="http://milvus_address:port", token="your_milvus_token_here", overwrite=True ) ``` """ stores_text: bool = True stores_node: bool = True uri: str = "./milvus_llamaindex.db" token: str = "" collection_name: str = "llamacollection" dim: Optional[int] embedding_field: str = DEFAULT_EMBEDDING_KEY doc_id_field: str = DEFAULT_DOC_ID_KEY similarity_metric: str = "IP" consistency_level: str = "Session" overwrite: bool = False text_key: Optional[str] output_fields: List[str] = Field(default_factory=list) index_config: Optional[dict] search_config: Optional[dict] collection_properties: Optional[dict] batch_size: int = DEFAULT_BATCH_SIZE enable_sparse: bool = False sparse_embedding_field: str = "sparse_embedding" sparse_embedding_function: Any hybrid_ranker: str hybrid_ranker_params: dict = {} index_management: IndexManagement = IndexManagement.CREATE_IF_NOT_EXISTS scalar_field_names: Optional[List[str]] scalar_field_types: Optional[List[DataType]] _milvusclient: MilvusClient = PrivateAttr() _collection: Any = PrivateAttr()
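A sketch of how the hybrid-search parameters documented above fit together; the collection name, dimension, and weights are illustrative, and sparse/hybrid retrieval requires Milvus 2.4.0 or later:

```python
from llama_index.vector_stores.milvus import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="./milvus_llamaindex.db",
    collection_name="hybrid_demo",
    dim=1536,  # dimension of the dense embeddings
    overwrite=True,
    enable_sparse=True,  # falls back to the default sparse embedding function
    hybrid_ranker="WeightedRanker",
    hybrid_ranker_params={"weights": [0.7, 0.3]},  # [dense weight, sparse weight]
)
```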
168299
def __init__( self, uri: str = "./milvus_llamaindex.db", token: str = "", collection_name: str = "llamacollection", dim: Optional[int] = None, embedding_field: str = DEFAULT_EMBEDDING_KEY, doc_id_field: str = DEFAULT_DOC_ID_KEY, similarity_metric: str = "IP", consistency_level: str = "Session", overwrite: bool = False, text_key: Optional[str] = None, output_fields: Optional[List[str]] = None, index_config: Optional[dict] = None, search_config: Optional[dict] = None, collection_properties: Optional[dict] = None, batch_size: int = DEFAULT_BATCH_SIZE, enable_sparse: bool = False, sparse_embedding_function: Optional[BaseSparseEmbeddingFunction] = None, hybrid_ranker: str = "RRFRanker", hybrid_ranker_params: dict = {}, index_management: IndexManagement = IndexManagement.CREATE_IF_NOT_EXISTS, scalar_field_names: Optional[List[str]] = None, scalar_field_types: Optional[List[DataType]] = None, **kwargs: Any, ) -> None: """Init params.""" super().__init__( collection_name=collection_name, dim=dim, embedding_field=embedding_field, doc_id_field=doc_id_field, consistency_level=consistency_level, overwrite=overwrite, text_key=text_key, output_fields=output_fields or [], index_config=index_config if index_config else {}, search_config=search_config if search_config else {}, collection_properties=collection_properties, batch_size=batch_size, enable_sparse=enable_sparse, sparse_embedding_function=sparse_embedding_function, hybrid_ranker=hybrid_ranker, hybrid_ranker_params=hybrid_ranker_params, index_management=index_management, scalar_field_names=scalar_field_names, scalar_field_types=scalar_field_types, ) # Select the similarity metric similarity_metrics_map = { "ip": "IP", "l2": "L2", "euclidean": "L2", "cosine": "COSINE", } self.similarity_metric = similarity_metrics_map.get( similarity_metric.lower(), "L2" ) # Connect to Milvus instance self._milvusclient = MilvusClient( uri=uri, token=token, **kwargs, # pass additional arguments such as server_pem_path ) # Delete previous collection if overwriting if overwrite and collection_name in self.client.list_collections(): self._milvusclient.drop_collection(collection_name) # Create the collection if it does not exist if collection_name not in self.client.list_collections(): if dim is None: raise ValueError("Dim argument required for collection creation.") if self.enable_sparse is False: # Check if custom index should be created if ( index_config is not None and self.index_management is not IndexManagement.NO_VALIDATION ): try: # Prepare index index_params = self.client.prepare_index_params() index_type = index_config["index_type"] index_params.add_index( field_name=embedding_field, index_type=index_type, metric_type=self.similarity_metric, ) # Create a schema according to LlamaIndex Schema. 
schema = self._create_schema() schema.verify() # Using private method exposed by pymilvus client, in order to avoid creating indexes twice # Reason: create_collection in pymilvus only checks schema and ignores index_config setup # https://github.com/milvus-io/pymilvus/issues/2265 self.client._create_collection_with_schema( collection_name=collection_name, schema=schema, index_params=index_params, dimemsion=dim, primary_field=MILVUS_ID_FIELD, vector_field=embedding_field, id_type="string", max_length=65_535, consistency_level=consistency_level, ) self._collection = Collection( collection_name, using=self._milvusclient._using ) except Exception as e: logger.error("Error creating collection with index_config") raise NotImplementedError( "Error creating collection with index_config" ) from e else: self._milvusclient.create_collection( collection_name=collection_name, dimension=dim, primary_field_name=MILVUS_ID_FIELD, vector_field_name=embedding_field, id_type="string", metric_type=self.similarity_metric, max_length=65_535, consistency_level=consistency_level, ) self._collection = Collection( collection_name, using=self._milvusclient._using ) # Check if we have to create an index here to avoid duplicity of indexes self._create_index_if_required() else: try: _ = DataType.SPARSE_FLOAT_VECTOR except Exception as e: logger.error( "Hybrid retrieval is only supported in Milvus 2.4.0 or later." ) raise NotImplementedError( "Hybrid retrieval requires Milvus 2.4.0 or later." ) from e self._create_hybrid_index(collection_name) else: self._collection = Collection( collection_name, using=self._milvusclient._using ) # Set properties if collection_properties: if self._milvusclient.get_load_state(collection_name) == LoadState.Loaded: self._collection.release() self._collection.set_properties(properties=collection_properties) self._collection.load() else: self._collection.set_properties(properties=collection_properties) self.enable_sparse = enable_sparse if self.enable_sparse is True and sparse_embedding_function is None: logger.warning("Sparse embedding function is not provided, using default.") self.sparse_embedding_function = get_default_sparse_embedding_function() elif self.enable_sparse is True and sparse_embedding_function is not None: self.sparse_embedding_function = sparse_embedding_function else: pass logger.debug(f"Successfully created a new collection: {self.collection_name}") @property def client(self) -> Any: """Get client.""" return self._milvusclient def add(self, nodes: List[BaseNode], **add_kwargs: Any) -> List[str]: """Add the embeddings and their nodes into Milvus. Args: nodes (List[BaseNode]): List of nodes with embeddings to insert. Raises: MilvusException: Failed to insert data. Returns: List[str]: List of ids inserted. """ insert_list = [] insert_ids = [] if self.enable_sparse is True and self.sparse_embedding_function is None: logger.fatal( "sparse_embedding_function is None when enable_sparse is True." 
) # Process that data we are going to insert for node in nodes: entry = node_to_metadata_dict(node) entry[MILVUS_ID_FIELD] = node.node_id entry[self.embedding_field] = node.embedding if self.enable_sparse is True: entry[ self.sparse_embedding_field ] = self.sparse_embedding_function.encode_documents([node.text])[0] insert_ids.append(node.node_id) insert_list.append(entry) # Insert the data into milvus for insert_batch in iter_batch(insert_list, self.batch_size): self._collection.insert(insert_batch) if add_kwargs.get("force_flush", False): self._collection.flush() logger.debug( f"Successfully inserted embeddings into: {self.collection_name} " f"Num Inserted: {len(insert_list)}" ) return insert_ids def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. Raises: MilvusException: Failed to delete the doc. """ # Adds ability for multiple doc delete in future. doc_ids: List[str] if isinstance(ref_doc_id, list): doc_ids = ref_doc_id # type: ignore else: doc_ids = [ref_doc_id] # Begin by querying for the primary keys to delete doc_ids = ['"' + entry + '"' for entry in doc_ids] entries = self._milvusclient.query( collection_name=self.collection_name, filter=f"{self.doc_id_field} in [{','.join(doc_ids)}]", ) if len(entries) > 0: ids = [entry["id"] for entry in entries] self._milvusclient.delete(collection_name=self.collection_name, pks=ids) logger.debug(f"Successfully deleted embedding with doc_id: {doc_ids}")
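A sketch of wiring the store into an index so that the `add()` and `delete()` methods above are invoked for you; it assumes a local `data` folder and the default 1536-dimensional OpenAI embeddings:

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.vector_stores.milvus import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="./milvus_llamaindex.db", dim=1536, overwrite=True
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# deleting by ref_doc_id calls MilvusVectorStore.delete() under the hood
index.delete_ref_doc(documents[0].doc_id, delete_from_docstore=True)
```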
168345
"""DeepLake vector store index. An index that is built within DeepLake. """ import logging from typing import Any, List, Optional, cast from llama_index.core.bridge.pydantic import PrivateAttr from llama_index.core.schema import BaseNode, MetadataMode, TextNode from llama_index.core.vector_stores.types import ( BasePydanticVectorStore, VectorStoreQuery, VectorStoreQueryResult, MetadataFilters, FilterCondition, FilterOperator, ) from llama_index.core.vector_stores.utils import ( metadata_dict_to_node, node_to_metadata_dict, ) from deeplake.core.vectorstore.deeplake_vectorstore import VectorStore logger = logging.getLogger(__name__)
168346
class DeepLakeVectorStore(BasePydanticVectorStore): """The DeepLake Vector Store. In this vector store we store the text, its embedding and a few pieces of its metadata in a deeplake dataset. This implementation allows the use of an already existing deeplake dataset if it is one that was created this vector store. It also supports creating a new one if the dataset doesn't exist or if `overwrite` is set to True. Examples: `pip install llama-index-vector-stores-deeplake` ```python from llama_index.vector_stores.deeplake import DeepLakeVectorStore # Create an instance of DeepLakeVectorStore vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True) ``` """ stores_text: bool = True flat_metadata: bool = True ingestion_batch_size: int num_workers: int token: Optional[str] read_only: Optional[bool] dataset_path: str _embedding_dimension: int = PrivateAttr() _ttl_seconds: Optional[int] = PrivateAttr() _deeplake_db: Any = PrivateAttr() _deeplake_db_collection: Any = PrivateAttr() _vectorstore: "VectorStore" = PrivateAttr() _id_tensor_name: str = PrivateAttr() def __init__( self, dataset_path: str = "llama_index", token: Optional[str] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, ingestion_num_workers: int = 4, overwrite: bool = False, exec_option: Optional[str] = None, verbose: bool = True, **kwargs: Any, ) -> None: """ Args: dataset_path (str): The full path for storing to the Deep Lake Vector Store. It can be: - a Deep Lake cloud path of the form ``hub://org_id/dataset_name``. Requires registration with Deep Lake. - an s3 path of the form ``s3://bucketname/path/to/dataset``. Credentials are required in either the environment or passed to the creds argument. - a local file system path of the form ``./path/to/dataset`` or ``~/path/to/dataset`` or ``path/to/dataset``. - a memory path of the form ``mem://path/to/dataset`` which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. Defaults to "llama_index". overwrite (bool, optional): If set to True this overwrites the Vector Store if it already exists. Defaults to False. token (str, optional): Activeloop token, used for fetching user credentials. This is Optional, tokens are normally autogenerated. Defaults to None. read_only (bool, optional): Opens dataset in read-only mode if True. Defaults to False. ingestion_batch_size (int): During data ingestion, data is divided into batches. Batch size is the size of each batch. Defaults to 1024. ingestion_num_workers (int): number of workers to use during data ingestion. Defaults to 4. exec_option (str): Default method for search execution. It could be either ``"auto"``, ``"python"``, ``"compute_engine"`` or ``"tensor_db"``. Defaults to ``"auto"``. If None, it's set to "auto". - ``auto``- Selects the best execution method based on the storage location of the Vector Store. It is the default option. - ``python`` - Pure-python implementation that runs on the client and can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged because it can lead to memory issues. - ``compute_engine`` - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. - ``tensor_db`` - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. 
Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {"tensor_db": True} during dataset creation. Raises: ImportError: Unable to import `deeplake`. """ super().__init__( dataset_path=dataset_path, token=token, read_only=read_only, ingestion_batch_size=ingestion_batch_size, num_workers=ingestion_num_workers, ) self._vectorstore = VectorStore( path=dataset_path, ingestion_batch_size=ingestion_batch_size, num_workers=ingestion_num_workers, token=token, read_only=read_only, exec_option=exec_option, overwrite=overwrite, verbose=verbose, **kwargs, ) self._id_tensor_name = "ids" if "ids" in self._vectorstore.tensors() else "id" @property def client(self) -> Any: """Get client. Returns: Any: DeepLake vectorstore dataset. """ return self._vectorstore.dataset def summary(self): self._vectorstore.summary() def get_nodes( self, node_ids: Optional[List[str]] = None, filters: Optional[MetadataFilters] = None, ) -> List[BaseNode]: """Get nodes from vector store.""" if node_ids: data = self._vectorstore.search(filter={"id": node_ids}) else: data = self._vectorstore.search(filter={}) nodes = [] for metadata in data["metadata"]: nodes.append(metadata_dict_to_node(metadata)) def filter_func(doc): if not filters: return True found_one = False for f in filters.filters: value = doc.metadata[f.key] if f.operator == FilterOperator.EQ: result = value == f.value elif f.operator == FilterOperator.GT: result = value > f.value elif f.operator == FilterOperator.GTE: result = value >= f.value elif f.operator == FilterOperator.LT: result = value < f.value elif f.operator == FilterOperator.LTE: result = value <= f.value elif f.operator == FilterOperator.NE: result = value != f.value elif f.operator == FilterOperator.IN: result = value in f.value elif f.operator == FilterOperator.NOT_IN: result = value not in f.value elif f.operator == FilterOperator.TEXT_MATCH: result = f.value in value else: raise ValueError(f"Unsupported filter operator: {f.operator}") if result: found_one = True if filters.condition == FilterCondition.OR: return True else: if filters.condition == FilterCondition.AND: return False return found_one if filters: return [x for x in nodes if filter_func(x)] else: return nodes def delete_nodes( self, node_ids: Optional[List[str]] = None, filters: Optional[MetadataFilters] = None, **delete_kwargs: Any, ) -> None: if filters: self._vectorstore.delete( ids=[ x.node_id for x in self.get_nodes(node_ids=node_ids, filters=filters) ] ) else: self._vectorstore.delete(ids=node_ids) def clear(self) -> None: """Clear the vector store.""" self._vectorstore.delete(filter=lambda x: True) def add(self, nodes: List[BaseNode], **add_kwargs: Any) -> List[str]: """Add the embeddings and their nodes into DeepLake. Args: nodes (List[BaseNode]): List of nodes with embeddings to insert. Returns: List[str]: List of ids inserted. """ embedding = [] metadata = [] id_ = [] text = [] for node in nodes: embedding.append(node.get_embedding()) metadata.append( node_to_metadata_dict( node, remove_text=False, flat_metadata=self.flat_metadata ) ) id_.append(node.node_id) text.append(node.get_content(metadata_mode=MetadataMode.NONE)) kwargs = { "embedding": embedding, "metadata": metadata, self._id_tensor_name: id_, "text": text, } return self._vectorstore.add( return_ids=True, **kwargs, ) def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. 
""" self._vectorstore.delete(filter={"metadata": {"doc_id": ref_doc_id}})
168348
import pytest import jwt # noqa from llama_index.core import Document from llama_index.core.vector_stores.types import ( BasePydanticVectorStore, MetadataFilter, MetadataFilters, FilterCondition, FilterOperator, ) from llama_index.vector_stores.deeplake import DeepLakeVectorStore def test_class(): names_of_base_classes = [b.__name__ for b in DeepLakeVectorStore.__mro__] assert BasePydanticVectorStore.__name__ in names_of_base_classes def test_e2e(): vs = DeepLakeVectorStore(dataset_path="mem://test", overwrite=True) ids = vs.add( nodes=[ Document(text="Doc 1", embedding=[1, 2, 1], metadata={"a": "1", "b": 10}), Document(text="Doc 2", embedding=[1, 2, 2], metadata={"a": "2", "b": 11}), Document(text="Doc 3", embedding=[1, 2, 3], metadata={"a": "3", "b": 12}), ] ) nodes = vs.get_nodes(node_ids=[ids[0], ids[2]]) assert [x.text for x in nodes] == ["Doc 1", "Doc 3"] nodes = vs.get_nodes(node_ids=["a"]) assert len(nodes) == 0 assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( filters=[ MetadataFilter(key="a", value="2"), ] ) ) ] == ["Doc 2"] assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( filters=[ MetadataFilter(key="a", value="2"), MetadataFilter(key="a", value="3"), ] ) ) ] == [] assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( condition=FilterCondition.OR, filters=[ MetadataFilter(key="a", value="2"), MetadataFilter(key="a", value="3"), ], ) ) ] == ["Doc 2", "Doc 3"] assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( condition=FilterCondition.OR, filters=[ MetadataFilter(key="a", value="2"), MetadataFilter(key="a", value="3"), ], ) ) ] == ["Doc 2", "Doc 3"] assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( filters=[ MetadataFilter(key="b", value=10, operator=FilterOperator.GT), ] ) ) ] == ["Doc 2", "Doc 3"] assert [ x.text for x in vs.get_nodes( filters=MetadataFilters( filters=[ MetadataFilter(key="b", value=11, operator=FilterOperator.LTE), ] ) ) ] == ["Doc 1", "Doc 2"] vs.delete_nodes(node_ids=[ids[0], ids[2]]) assert [x.text for x in vs.get_nodes()] == ["Doc 2"] vs.add( nodes=[ Document(text="Doc 4", embedding=[1, 2, 1], metadata={"a": "4", "b": 14}), Document(text="Doc 5", embedding=[1, 2, 2], metadata={"a": "5", "b": 15}), Document(text="Doc 6", embedding=[1, 2, 3], metadata={"a": "6", "b": 16}), ] ) vs.delete_nodes( filters=MetadataFilters( filters=[ MetadataFilter(key="b", value=14, operator=FilterOperator.GT), ] ) ) assert [x.text for x in vs.get_nodes()] == ["Doc 2", "Doc 4"] vs.clear() with pytest.raises(ValueError) as e: vs.get_nodes() assert str(e.value) == "specified dataset is empty"
168427
"""Azure AI Search vector store.""" import enum import json import logging from enum import auto from typing import Any, Callable, Dict, List, Optional, Tuple, Union, cast from azure.search.documents import SearchClient from azure.search.documents.aio import SearchClient as AsyncSearchClient from azure.search.documents.indexes import SearchIndexClient from azure.search.documents.indexes.aio import ( SearchIndexClient as AsyncSearchIndexClient, ) from llama_index.core.bridge.pydantic import PrivateAttr from llama_index.core.schema import BaseNode, MetadataMode, TextNode from llama_index.core.vector_stores.types import ( BasePydanticVectorStore, FilterCondition, FilterOperator, MetadataFilters, VectorStoreQuery, VectorStoreQueryMode, VectorStoreQueryResult, ) from llama_index.core.vector_stores.utils import ( legacy_metadata_dict_to_node, metadata_dict_to_node, node_to_metadata_dict, ) logger = logging.getLogger(__name__) class MetadataIndexFieldType(int, enum.Enum): """ Enumeration representing the supported types for metadata fields in an Azure AI Search Index, corresponds with types supported in a flat metadata dictionary. """ STRING = auto() BOOLEAN = auto() INT32 = auto() INT64 = auto() DOUBLE = auto() COLLECTION = auto() class IndexManagement(int, enum.Enum): """Enumeration representing the supported index management operations.""" NO_VALIDATION = auto() VALIDATE_INDEX = auto() CREATE_IF_NOT_EXISTS = auto() DEFAULT_MAX_BATCH_SIZE = 700 DEFAULT_MAX_MB_SIZE = 14 * 1024 * 1024 # 14MB in bytes class AzureAISearchVectorStore(BasePydanticVectorStore): """ Azure AI Search vector store. Examples: `pip install llama-index-vector-stores-azureaisearch` ```python from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from azure.search.documents.indexes import SearchIndexClient from llama_index.vector_stores.azureaisearch import AzureAISearchVectorStore from llama_index.vector_stores.azureaisearch import IndexManagement, MetadataIndexFieldType # Azure AI Search setup search_service_api_key = "YOUR-AZURE-SEARCH-SERVICE-ADMIN-KEY" search_service_endpoint = "YOUR-AZURE-SEARCH-SERVICE-ENDPOINT" search_service_api_version = "2024-07-01" credential = AzureKeyCredential(search_service_api_key) # Index name to use index_name = "llamaindex-vector-demo" # Use index client to demonstrate creating an index index_client = SearchIndexClient( endpoint=search_service_endpoint, credential=credential, ) metadata_fields = { "author": "author", "theme": ("topic", MetadataIndexFieldType.STRING), "director": "director", } # Creating an Azure AI Search Vector Store vector_store = AzureAISearchVectorStore( search_or_index_client=index_client, filterable_metadata_field_keys=metadata_fields, index_name=index_name, index_management=IndexManagement.CREATE_IF_NOT_EXISTS, id_field_key="id", chunk_field_key="chunk", embedding_field_key="embedding", embedding_dimensionality=1536, metadata_string_field_key="metadata", doc_id_field_key="doc_id", language_analyzer="en.lucene", vector_algorithm_type="exhaustiveKnn", ) ``` """ stores_text: bool = True flat_metadata: bool = False _index_client: SearchIndexClient = PrivateAttr() _index_name: Optional[str] = PrivateAttr() _async_index_client: AsyncSearchIndexClient = PrivateAttr() _search_client: SearchClient = PrivateAttr() _async_search_client: AsyncSearchClient = PrivateAttr() _embedding_dimensionality: int = PrivateAttr() _language_analyzer: str = PrivateAttr() _field_mapping: Dict[str, str] = PrivateAttr() _index_management: 
IndexManagement = PrivateAttr() _index_mapping: Callable[ [Dict[str, str], Dict[str, Any]], Dict[str, str] ] = PrivateAttr() _metadata_to_index_field_map: Dict[ str, Tuple[str, MetadataIndexFieldType] ] = PrivateAttr() _vector_profile_name: str = PrivateAttr() _compression_type: str = PrivateAttr() def _normalise_metadata_to_index_fields( self, filterable_metadata_field_keys: Union[ List[str], Dict[str, str], Dict[str, Tuple[str, MetadataIndexFieldType]], None, ] = [], ) -> Dict[str, Tuple[str, MetadataIndexFieldType]]: index_field_spec: Dict[str, Tuple[str, MetadataIndexFieldType]] = {} if isinstance(filterable_metadata_field_keys, List): for field in filterable_metadata_field_keys: # Index field name and the metadata field name are the same # Use String as the default index field type index_field_spec[field] = (field, MetadataIndexFieldType.STRING) elif isinstance(filterable_metadata_field_keys, dict): for k, v in filterable_metadata_field_keys.items(): if isinstance(v, tuple): # Index field name and metadata field name may differ # The index field type used is as supplied index_field_spec[k] = v elif isinstance(v, list): # Handle list types as COLLECTION index_field_spec[k] = (k, MetadataIndexFieldType.COLLECTION) elif isinstance(v, bool): index_field_spec[k] = (k, MetadataIndexFieldType.BOOLEAN) elif isinstance(v, int): index_field_spec[k] = (k, MetadataIndexFieldType.INT32) elif isinstance(v, float): index_field_spec[k] = (k, MetadataIndexFieldType.DOUBLE) elif isinstance(v, str): index_field_spec[k] = (k, MetadataIndexFieldType.STRING) else: # Index field name and metadata field name may differ # Use String as the default index field type index_field_spec[k] = (v, MetadataIndexFieldType.STRING) return index_field_spec def _index_exists(self, index_name: str) -> bool: return index_name in self._index_client.list_index_names() async def _aindex_exists(self, index_name: str) -> bool: return index_name in [ name async for name in self._async_index_client.list_index_names() ] def _create_index_if_not_exists(self, index_name: str) -> None: if not self._index_exists(index_name): logger.info( f"Index {index_name} does not exist in Azure AI Search, creating index" ) self._create_index(index_name) async def _acreate_index_if_not_exists(self, index_name: str) -> None: if not await self._aindex_exists(index_name): logger.info( f"Index {index_name} does not exist in Azure AI Search, creating index" ) await self._acreate_index(index_name) def _create_metadata_index_fields(self) -> List[Any]: """Create a list of index fields for storing metadata values.""" from azure.search.documents.indexes.models import SimpleField index_fields = [] # create search fields for v in self._metadata_to_index_field_map.values(): field_name, field_type = v # Skip if the field is already mapped if field_name in self._field_mapping.values(): continue if field_type == MetadataIndexFieldType.STRING: index_field_type = "Edm.String" elif field_type == MetadataIndexFieldType.INT32: index_field_type = "Edm.Int32" elif field_type == MetadataIndexFieldType.INT64: index_field_type = "Edm.Int64" elif field_type == MetadataIndexFieldType.DOUBLE: index_field_type = "Edm.Double" elif field_type == MetadataIndexFieldType.BOOLEAN: index_field_type = "Edm.Boolean" elif field_type == MetadataIndexFieldType.COLLECTION: index_field_type = "Collection(Edm.String)" field = SimpleField(name=field_name, type=index_field_type, filterable=True) index_fields.append(field) return index_fields def _get_compressions(self) -> List[Any]: """Get the 
compressions for the vector search.""" from azure.search.documents.indexes.models import ( BinaryQuantizationCompression, ScalarQuantizationCompression, ) compressions = [] if self._compression_type == "binary": compressions.append( BinaryQuantizationCompression(compression_name="myBinaryCompression") ) elif self._compression_type == "scalar": compressions.append( ScalarQuantizationCompression(compression_name="myScalarCompression") ) return compressions
168428
def _create_index(self, index_name: Optional[str]) -> None: """ Creates a default index based on the supplied index name, key field names and metadata filtering keys. """ from azure.search.documents.indexes.models import ( ExhaustiveKnnAlgorithmConfiguration, ExhaustiveKnnParameters, HnswAlgorithmConfiguration, HnswParameters, SearchableField, SearchField, SearchFieldDataType, SearchIndex, SemanticConfiguration, SemanticField, SemanticPrioritizedFields, SemanticSearch, SimpleField, VectorSearch, VectorSearchAlgorithmKind, VectorSearchAlgorithmMetric, VectorSearchProfile, ) logger.info(f"Configuring {index_name} fields for Azure AI Search") fields = [ SimpleField(name=self._field_mapping["id"], type="Edm.String", key=True), SearchableField( name=self._field_mapping["chunk"], type="Edm.String", analyzer_name=self._language_analyzer, ), SearchField( name=self._field_mapping["embedding"], type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=self._embedding_dimensionality, vector_search_profile_name=self._vector_profile_name, ), SimpleField(name=self._field_mapping["metadata"], type="Edm.String"), SimpleField( name=self._field_mapping["doc_id"], type="Edm.String", filterable=True ), ] logger.info(f"Configuring {index_name} metadata fields") metadata_index_fields = self._create_metadata_index_fields() fields.extend(metadata_index_fields) logger.info(f"Configuring {index_name} vector search") # Determine the compression type compressions = self._get_compressions() logger.info( f"Configuring {index_name} vector search with {self._compression_type} compression" ) # Configure the vector search algorithms and profiles vector_search = VectorSearch( algorithms=[ HnswAlgorithmConfiguration( name="myHnsw", kind=VectorSearchAlgorithmKind.HNSW, parameters=HnswParameters( m=4, ef_construction=400, ef_search=500, metric=VectorSearchAlgorithmMetric.COSINE, ), ), ExhaustiveKnnAlgorithmConfiguration( name="myExhaustiveKnn", kind=VectorSearchAlgorithmKind.EXHAUSTIVE_KNN, parameters=ExhaustiveKnnParameters( metric=VectorSearchAlgorithmMetric.COSINE, ), ), ], compressions=compressions, profiles=[ VectorSearchProfile( name="myHnswProfile", algorithm_configuration_name="myHnsw", compression_name=( compressions[0].compression_name if compressions else None ), ), VectorSearchProfile( name="myExhaustiveKnnProfile", algorithm_configuration_name="myExhaustiveKnn", compression_name=None, # Exhaustive KNN doesn't support compression at the moment ), ], ) logger.info(f"Configuring {index_name} semantic search") semantic_config = SemanticConfiguration( name="mySemanticConfig", prioritized_fields=SemanticPrioritizedFields( content_fields=[SemanticField(field_name=self._field_mapping["chunk"])], ), ) semantic_search = SemanticSearch(configurations=[semantic_config]) index = SearchIndex( name=index_name, fields=fields, vector_search=vector_search, semantic_search=semantic_search, ) logger.debug(f"Creating {index_name} search index") self._index_client.create_index(index) async def _acreate_index(self, index_name: Optional[str]) -> None: """ Asynchronous version of index creation with optional compression. Creates a default index based on the supplied index name, key field names, and metadata filtering keys. 
""" from azure.search.documents.indexes.models import ( ExhaustiveKnnAlgorithmConfiguration, ExhaustiveKnnParameters, HnswAlgorithmConfiguration, HnswParameters, SearchField, SearchFieldDataType, SearchIndex, SearchableField, SemanticConfiguration, SemanticField, SemanticPrioritizedFields, SemanticSearch, SimpleField, VectorSearch, VectorSearchAlgorithmKind, VectorSearchAlgorithmMetric, VectorSearchProfile, ) logger.info(f"Configuring {index_name} fields for Azure AI Search") fields = [ SimpleField(name=self._field_mapping["id"], type="Edm.String", key=True), SearchableField( name=self._field_mapping["chunk"], type="Edm.String", analyzer_name=self._language_analyzer, ), SearchField( name=self._field_mapping["embedding"], type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=self._embedding_dimensionality, vector_search_profile_name=self._vector_profile_name, ), SimpleField(name=self._field_mapping["metadata"], type="Edm.String"), SimpleField( name=self._field_mapping["doc_id"], type="Edm.String", filterable=True ), ] logger.info(f"Configuring {index_name} metadata fields") metadata_index_fields = self._create_metadata_index_fields() fields.extend(metadata_index_fields) # Determine the compression type compressions = self._get_compressions() logger.info( f"Configuring {index_name} vector search with {self._compression_type} compression" ) # Configure the vector search algorithms and profiles vector_search = VectorSearch( algorithms=[ HnswAlgorithmConfiguration( name="myHnsw", kind=VectorSearchAlgorithmKind.HNSW, # For more information on HNSw parameters, visit https://learn.microsoft.com//azure/search/vector-search-ranking#creating-the-hnsw-graph parameters=HnswParameters( m=4, ef_construction=400, ef_search=500, metric=VectorSearchAlgorithmMetric.COSINE, ), ), ExhaustiveKnnAlgorithmConfiguration( name="myExhaustiveKnn", kind=VectorSearchAlgorithmKind.EXHAUSTIVE_KNN, parameters=ExhaustiveKnnParameters( metric=VectorSearchAlgorithmMetric.COSINE, ), ), ], compressions=compressions, profiles=[ VectorSearchProfile( name="myHnswProfile", algorithm_configuration_name="myHnsw", compression_name=( compressions[0].compression_name if compressions else None ), ), VectorSearchProfile( name="myExhaustiveKnnProfile", algorithm_configuration_name="myExhaustiveKnn", compression_name=None, # Exhaustive KNN doesn't support compression at the moment ), ], ) logger.info(f"Configuring {index_name} semantic search") semantic_config = SemanticConfiguration( name="mySemanticConfig", prioritized_fields=SemanticPrioritizedFields( content_fields=[SemanticField(field_name=self._field_mapping["chunk"])], ), ) semantic_search = SemanticSearch(configurations=[semantic_config]) index = SearchIndex( name=index_name, fields=fields, vector_search=vector_search, semantic_search=semantic_search, ) logger.debug(f"Creating {index_name} search index") await self._async_index_client.create_index(index) def _validate_index(self, index_name: Optional[str]) -> None: if self._index_client and index_name and not self._index_exists(index_name): raise ValueError(f"Validation failed, index {index_name} does not exist.") async def _avalidate_index(self, index_name: Optional[str]) -> None: if ( self._async_index_client and index_name and not await self._aindex_exists(index_name) ): raise ValueError(f"Validation failed, index {index_name} does not exist.")
168438
# LlamaIndex Vector_Stores Integration: MongoDB

## Setting up MongoDB Atlas as the Datastore Provider

MongoDB Atlas is a multi-cloud database service made by the same people that build MongoDB. Atlas simplifies deploying and managing your databases while offering the versatility you need to build resilient and performant global applications on the cloud providers of your choice.

You can perform semantic search on data in your Atlas cluster running MongoDB v6.0.11, v7.0.2, or later using Atlas Vector Search. You can store vector embeddings for any kind of data along with other data in your collection on the Atlas cluster.

In this section, we provide detailed instructions to run the tests.

### Deploy a Cluster

Follow the [Getting-Started](https://www.mongodb.com/basics/mongodb-atlas-tutorial) documentation to create an account, deploy an Atlas cluster, and connect to a database.

### Retrieve the URI used by Python to connect to the Cluster

When you deploy, this will be stored as the environment variable `MONGODB_URI`. It will look something like the following. The username and password, if not provided, can be configured in _Database Access_ under Security in the left panel.

```
export MONGODB_URI="mongodb+srv://<username>:<password>@cluster0.foo.mongodb.net/?retryWrites=true&w=majority"
```

There are a number of ways to navigate the Atlas UI. Keep your eye out for "Connect" and "driver". On the left panel, navigate to and click 'Database' under DEPLOYMENT. Click the Connect button that appears, then Drivers. Select Python. (Have no concern for the version. This is the PyMongo, not Python, version.) Once you have the Connect window open, you will see an instruction to `pip install pymongo`. You will also see a **connection string**. This is the `uri` that a `pymongo.MongoClient` uses to connect to the database.

### Test the connection

Atlas provides a simple check. Once you have your `uri` and `pymongo` installed, try the following in a Python console.

```python
from pymongo.mongo_client import MongoClient

client = MongoClient(uri)  # Create a new client and connect to the server
try:
    client.admin.command("ping")  # Send a ping to confirm a successful connection
    print("Pinged your deployment. You successfully connected to MongoDB!")
except Exception as e:
    print(e)
```

**Troubleshooting**

- You can edit a Database's users and passwords on the 'Database Access' page, under Security.
- Remember to add your IP address. (Try `curl -4 ifconfig.co`)

### Create a Database and Collection

As mentioned, vector databases provide two functions. In addition to being the data store, they provide very efficient search based on natural-language queries. With Vector Search, one can index and query data with a powerful vector search algorithm that uses Hierarchical Navigable Small World (HNSW) graphs to find vector similarity. The indexing runs beside the data as a separate, asynchronous service. The Search index monitors changes to the Collection that it applies to. Consequently, one need not upload the data first. We will create an empty collection now, which will simplify setup in the example notebook.

Back in the UI, navigate to the Database Deployments page by clicking Database on the left panel. Click the "Browse Collections" and then "+ Create Database" buttons. This will open a window where you choose Database and Collection names. (No additional preferences.) Remember these values as they will be used as the environment variables `MONGODB_DATABASE` and `MONGODB_COLLECTION`.
### Set Datastore Environment Variables

To establish a connection to the MongoDB Cluster, Database, and Collection, plus create a Vector Search Index, define the following environment variables. You can confirm that the required ones have been set like this: `assert "MONGODB_URI" in os.environ`

**IMPORTANT** It is crucial that these choices are consistent between the Atlas setup and your Python environment(s).

| Name                 | Description       | Example                                                             |
| -------------------- | ----------------- | ------------------------------------------------------------------- |
| `MONGODB_URI`        | Connection String | mongodb+srv://`<user>`:`<password>`@llama-index.zeatahb.mongodb.net |
| `MONGODB_DATABASE`   | Database name     | llama_index_test_db                                                  |
| `MONGODB_COLLECTION` | Collection name   | llama_index_test_vectorstore                                         |
| `MONGODB_INDEX`      | Search index name | vector_index                                                         |

The following will be required to authenticate with OpenAI.

| Name             | Description                                                   |
| ---------------- | ------------------------------------------------------------- |
| `OPENAI_API_KEY` | OpenAI token created at https://platform.openai.com/api-keys  |

### Create an Atlas Vector Search Index

The final step to configure MongoDB as the Datastore is to create a Vector Search Index. The procedure is described [here](https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/#procedure).

Under Services on the left panel, choose Atlas Search > Create Search Index > Atlas Vector Search JSON Editor.

The Plugin expects an index definition like the following. To begin, choose `numDimensions: 1536` along with the suggested EMBEDDING variables above. You can experiment with these later.

```json
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```

### Running MongoDB Integration Tests

In addition to the Jupyter Notebook in `examples/`, a suite of integration tests is available to verify the MongoDB integration. The test suite needs the cluster up and running and the environment variables defined above.
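As a hedged, concrete companion to the tables above, the snippet below checks that the required variables are present before the suite is run; the exact variable set and the use of `pytest` from the package root are assumptions about the test setup rather than instructions taken from this README.

```python
import os

# Assumed set of required variables, taken from the tables above; adjust to your setup.
required = [
    "MONGODB_URI",
    "MONGODB_DATABASE",
    "MONGODB_COLLECTION",
    "MONGODB_INDEX",
    "OPENAI_API_KEY",
]
missing = [name for name in required if name not in os.environ]
assert not missing, f"Missing environment variables: {missing}"
```

With the environment verified, the suite can then typically be run with `pytest` from the integration package root.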
168444
"""MongoDB Vector store index. An index that is built on top of an existing vector store. """ import logging import os from importlib.metadata import version from typing import Any, Dict, List, Optional, cast from llama_index.core.bridge.pydantic import PrivateAttr from llama_index.core.schema import BaseNode, MetadataMode, TextNode from llama_index.core.vector_stores.types import ( BasePydanticVectorStore, VectorStoreQuery, VectorStoreQueryResult, VectorStoreQueryMode, ) from llama_index.core.vector_stores.utils import ( legacy_metadata_dict_to_node, metadata_dict_to_node, node_to_metadata_dict, ) from llama_index.vector_stores.mongodb.pipelines import ( fulltext_search_stage, vector_search_stage, combine_pipelines, reciprocal_rank_stage, final_hybrid_stage, filters_to_mql, ) from pymongo import MongoClient from pymongo.driver_info import DriverInfo from pymongo.collection import Collection logger = logging.getLogger(__name__) class MongoDBAtlasVectorSearch(BasePydanticVectorStore): """MongoDB Atlas Vector Store. To use, you should have both: - the ``pymongo`` python package installed - a connection string associated with a MongoDB Atlas Cluster that has an Atlas Vector Search index To get started head over to the [Atlas quick start](https://www.mongodb.com/docs/atlas/getting-started/). Once your store is created, be sure to enable indexing in the Atlas GUI. Please refer to the [documentation](https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/) to get more details on how to define an Atlas Vector Search index. You can name the index {ATLAS_VECTOR_SEARCH_INDEX_NAME} and create the index on the namespace {DB_NAME}.{COLLECTION_NAME}. Finally, write the following definition in the JSON editor on MongoDB Atlas: ``` { "name": "vector_index", "type": "vectorSearch", "fields":[ { "type": "vector", "path": "embedding", "numDimensions": 1536, "similarity": "cosine" } ] } ``` Examples: `pip install llama-index-vector-stores-mongodb` ```python import pymongo from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch # Ensure you have the MongoDB URI with appropriate credentials mongo_uri = "mongodb+srv://<username>:<password>@<host>?retryWrites=true&w=majority" mongodb_client = pymongo.MongoClient(mongo_uri) # Create an instance of MongoDBAtlasVectorSearch vector_store = MongoDBAtlasVectorSearch(mongodb_client) ``` """ stores_text: bool = True flat_metadata: bool = False _mongodb_client: Any = PrivateAttr() _collection: Any = PrivateAttr() _vector_index_name: str = PrivateAttr() _embedding_key: str = PrivateAttr() _id_key: str = PrivateAttr() _text_key: str = PrivateAttr() _metadata_key: str = PrivateAttr() _fulltext_index_name: str = PrivateAttr() _insert_kwargs: Dict = PrivateAttr() _index_name: str = PrivateAttr() # DEPRECATED _oversampling_factor: int = PrivateAttr() def __init__( self, mongodb_client: Optional[Any] = None, db_name: str = "default_db", collection_name: str = "default_collection", vector_index_name: str = "vector_index", id_key: str = "_id", embedding_key: str = "embedding", text_key: str = "text", metadata_key: str = "metadata", fulltext_index_name: str = "fulltext_index", index_name: str = None, insert_kwargs: Optional[Dict] = None, oversampling_factor: int = 10, **kwargs: Any, ) -> None: """Initialize the vector store. Args: mongodb_client: A MongoDB client. db_name: A MongoDB database name. collection_name: A MongoDB collection name. vector_index_name: A MongoDB Atlas *Vector* Search index name. 
($vectorSearch) id_key: The data field to use as the id. embedding_key: A MongoDB field that will contain the embedding for each document. text_key: A MongoDB field that will contain the text for each document. metadata_key: A MongoDB field that will contain the metadata for each document. insert_kwargs: The kwargs used during `insert`. fulltext_index_name: A MongoDB Atlas *full-text* Search index name. ($search) oversampling_factor: This times n_results is 'ef' in the HNSW algorithm. 'ef' determines the number of nearest neighbor candidates to consider during the search phase. A higher value leads to more accuracy, but is slower. Default = 10 index_name: DEPRECATED: Please use vector_index_name. """ super().__init__() if mongodb_client is not None: self._mongodb_client = cast(MongoClient, mongodb_client) else: if "MONGODB_URI" not in os.environ: raise ValueError( "Must specify MONGODB_URI via env variable " "if not directly passing in client." ) self._mongodb_client = MongoClient( os.environ["MONGODB_URI"], driver=DriverInfo(name="llama-index", version=version("llama-index")), ) if index_name is not None: logger.warning("index_name is deprecated. Please use vector_index_name") if vector_index_name is None: vector_index_name = index_name else: logger.warning( "vector_index_name and index_name both specified. Will use vector_index_name" ) self._collection: Collection = self._mongodb_client[db_name][collection_name] self._vector_index_name = vector_index_name self._embedding_key = embedding_key self._id_key = id_key self._text_key = text_key self._metadata_key = metadata_key self._fulltext_index_name = fulltext_index_name self._insert_kwargs = insert_kwargs or {} self._oversampling_factor = oversampling_factor def add( self, nodes: List[BaseNode], **add_kwargs: Any, ) -> List[str]: """Add nodes to index. Args: nodes: List[BaseNode]: list of nodes with embeddings Returns: A List of ids for successfully added nodes. """ ids = [] data_to_insert = [] for node in nodes: metadata = node_to_metadata_dict( node, remove_text=True, flat_metadata=self.flat_metadata ) entry = { self._id_key: node.node_id, self._embedding_key: node.get_embedding(), self._text_key: node.get_content(metadata_mode=MetadataMode.NONE) or "", self._metadata_key: metadata, } data_to_insert.append(entry) ids.append(node.node_id) logger.debug("Inserting data into MongoDB: %s", data_to_insert) insert_result = self._collection.insert_many( data_to_insert, **self._insert_kwargs ) logger.debug("Result of insert: %s", insert_result) return ids def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. """ # delete by filtering on the doc_id metadata self._collection.delete_many( filter={self._metadata_key + ".ref_doc_id": ref_doc_id}, **delete_kwargs ) @property def client(self) -> Any: """Return MongoDB client.""" return self._mongodb_client
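Putting the constructor arguments above together, here is a hedged end-to-end sketch: create a `MongoClient` from `MONGODB_URI`, point the store at a database, collection, and Atlas Vector Search index, and build a LlamaIndex index over it. The database, collection, and index names are illustrative and must match what was created in Atlas.

```python
import os

import pymongo
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

client = pymongo.MongoClient(os.environ["MONGODB_URI"])

vector_store = MongoDBAtlasVectorSearch(
    mongodb_client=client,
    db_name="llama_index_test_db",  # illustrative names; match your Atlas setup
    collection_name="llama_index_test_vectorstore",
    vector_index_name="vector_index",
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Atlas Vector Search indexes embeddings next to the documents.")],
    storage_context=storage_context,
)
print(index.as_query_engine().query("Where are the embeddings stored?"))
```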
168452
# Astra DB Vector Store

A LlamaIndex vector store using Astra DB as the backend.

## Usage

Pre-requisite:

```bash
pip install llama-index-vector-stores-astra-db
```

A minimal example:

```python
from llama_index.vector_stores.astra_db import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    token="AstraCS:xY3b...",  # Your Astra DB token
    api_endpoint="https://012...abc-us-east1.apps.astra.datastax.com",  # Your Astra DB API endpoint
    collection_name="astra_v_table",  # Table name of your choice
    embedding_dimension=1536,  # Embedding dimension of the embeddings model used
)
```

## More examples and references

A more detailed usage guide can be found [at this demo notebook](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AstraDBIndexDemo.html) in the LlamaIndex docs.

> **Note**: Please see the AstraDB documentation [here](https://docs.datastax.com/en/astra/astra-db-vector/clients/python.html).
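Beyond constructing the store, a common next step is to wire it into an index. The sketch below reuses the placeholder token, endpoint, and collection name from the minimal example above and adds the standard LlamaIndex plumbing; the document text and query are illustrative.

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.astra_db import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    token="AstraCS:xY3b...",  # placeholder Astra DB token
    api_endpoint="https://012...abc-us-east1.apps.astra.datastax.com",  # placeholder endpoint
    collection_name="astra_v_table",
    embedding_dimension=1536,
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Astra DB backs this LlamaIndex vector store.")],
    storage_context=storage_context,
)
print(index.as_query_engine().query("What backs this vector store?"))
```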
168470
class ChromaVectorStore(BasePydanticVectorStore): """Chroma vector store. In this vector store, embeddings are stored within a ChromaDB collection. During query time, the index uses ChromaDB to query for the top k most similar nodes. Args: chroma_collection (chromadb.api.models.Collection.Collection): ChromaDB collection instance Examples: `pip install llama-index-vector-stores-chroma` ```python import chromadb from llama_index.vector_stores.chroma import ChromaVectorStore # Create a Chroma client and collection chroma_client = chromadb.EphemeralClient() chroma_collection = chroma_client.create_collection("example_collection") # Set up the ChromaVectorStore and StorageContext vector_store = ChromaVectorStore(chroma_collection=chroma_collection) ``` """ stores_text: bool = True flat_metadata: bool = True collection_name: Optional[str] host: Optional[str] port: Optional[str] ssl: bool headers: Optional[Dict[str, str]] persist_dir: Optional[str] collection_kwargs: Dict[str, Any] = Field(default_factory=dict) _collection: Collection = PrivateAttr() def __init__( self, chroma_collection: Optional[Any] = None, collection_name: Optional[str] = None, host: Optional[str] = None, port: Optional[str] = None, ssl: bool = False, headers: Optional[Dict[str, str]] = None, persist_dir: Optional[str] = None, collection_kwargs: Optional[dict] = None, **kwargs: Any, ) -> None: """Init params.""" collection_kwargs = collection_kwargs or {} super().__init__( host=host, port=port, ssl=ssl, headers=headers, collection_name=collection_name, persist_dir=persist_dir, collection_kwargs=collection_kwargs or {}, ) if chroma_collection is None: client = chromadb.HttpClient(host=host, port=port, ssl=ssl, headers=headers) self._collection = client.get_or_create_collection( name=collection_name, **collection_kwargs ) else: self._collection = cast(Collection, chroma_collection) @classmethod def from_collection(cls, collection: Any) -> "ChromaVectorStore": try: from chromadb import Collection except ImportError: raise ImportError(import_err_msg) if not isinstance(collection, Collection): raise Exception("argument is not chromadb collection instance") return cls(chroma_collection=collection) @classmethod def from_params( cls, collection_name: str, host: Optional[str] = None, port: Optional[str] = None, ssl: bool = False, headers: Optional[Dict[str, str]] = None, persist_dir: Optional[str] = None, collection_kwargs: dict = {}, **kwargs: Any, ) -> "ChromaVectorStore": if persist_dir: client = chromadb.PersistentClient(path=persist_dir) collection = client.get_or_create_collection( name=collection_name, **collection_kwargs ) elif host and port: client = chromadb.HttpClient(host=host, port=port, ssl=ssl, headers=headers) collection = client.get_or_create_collection( name=collection_name, **collection_kwargs ) else: raise ValueError( "Either `persist_dir` or (`host`,`port`) must be specified" ) return cls( chroma_collection=collection, host=host, port=port, ssl=ssl, headers=headers, persist_dir=persist_dir, collection_kwargs=collection_kwargs, **kwargs, ) @classmethod def class_name(cls) -> str: return "ChromaVectorStore" def get_nodes( self, node_ids: Optional[List[str]], filters: Optional[List[MetadataFilters]] = None, ) -> List[BaseNode]: """Get nodes from index. 
Args: node_ids (List[str]): list of node ids filters (List[MetadataFilters]): list of metadata filters """ if not self._collection: raise ValueError("Collection not initialized") node_ids = node_ids or [] if filters: where = _to_chroma_filter(filters) else: where = {} result = self._get(None, where=where, ids=node_ids) return result.nodes def add(self, nodes: List[BaseNode], **add_kwargs: Any) -> List[str]: """Add nodes to index. Args: nodes: List[BaseNode]: list of nodes with embeddings """ if not self._collection: raise ValueError("Collection not initialized") max_chunk_size = MAX_CHUNK_SIZE node_chunks = chunk_list(nodes, max_chunk_size) all_ids = [] for node_chunk in node_chunks: embeddings = [] metadatas = [] ids = [] documents = [] for node in node_chunk: embeddings.append(node.get_embedding()) metadata_dict = node_to_metadata_dict( node, remove_text=True, flat_metadata=self.flat_metadata ) for key in metadata_dict: if metadata_dict[key] is None: metadata_dict[key] = "" metadatas.append(metadata_dict) ids.append(node.node_id) documents.append(node.get_content(metadata_mode=MetadataMode.NONE)) self._collection.add( embeddings=embeddings, ids=ids, metadatas=metadatas, documents=documents, ) all_ids.extend(ids) return all_ids def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. """ self._collection.delete(where={"document_id": ref_doc_id}) def delete_nodes( self, node_ids: Optional[List[str]] = None, filters: Optional[List[MetadataFilters]] = None, ) -> None: """Delete nodes from index. Args: node_ids (List[str]): list of node ids filters (List[MetadataFilters]): list of metadata filters """ if not self._collection: raise ValueError("Collection not initialized") node_ids = node_ids or [] if filters: where = _to_chroma_filter(filters) else: where = {} self._collection.delete(ids=node_ids, where=where) def clear(self) -> None: """Clear the collection.""" ids = self._collection.get()["ids"] self._collection.delete(ids=ids) @property def client(self) -> Any: """Return client.""" return self._collection def query(self, query: VectorStoreQuery, **kwargs: Any) -> VectorStoreQueryResult: """Query index for top k most similar nodes. Args: query_embedding (List[float]): query embedding similarity_top_k (int): top k most similar nodes """ if query.filters is not None: if "where" in kwargs: raise ValueError( "Cannot specify metadata filters via both query and kwargs. " "Use kwargs only for chroma specific items that are " "not supported via the generic query interface." ) where = _to_chroma_filter(query.filters) else: where = kwargs.pop("where", {}) if not query.query_embedding: return self._get(limit=query.similarity_top_k, where=where, **kwargs) return self._query( query_embeddings=query.query_embedding, n_results=query.similarity_top_k, where=where, **kwargs, )
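Since the `query` method above translates `MetadataFilters` into a Chroma `where` clause, a hedged end-to-end sketch is useful: build an index over an ephemeral collection, then retrieve with a metadata filter. The collection name, document texts, and `topic` metadata key are illustrative assumptions.

```python
import chromadb
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.core.vector_stores.types import MetadataFilter, MetadataFilters
from llama_index.vector_stores.chroma import ChromaVectorStore

chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("example_collection")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [
        Document(text="Chroma stores embeddings in a collection.", metadata={"topic": "storage"}),
        Document(text="Queries return the top k most similar nodes.", metadata={"topic": "search"}),
    ],
    storage_context=storage_context,
)

# The filter is converted to a Chroma `where` clause by the query path shown above
retriever = index.as_retriever(
    similarity_top_k=1,
    filters=MetadataFilters(filters=[MetadataFilter(key="topic", value="search")]),
)
print(retriever.retrieve("How are results ranked?"))
```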
168652
class AzureCosmosDBMongoDBVectorSearch(BasePydanticVectorStore): """Azure CosmosDB MongoDB vCore Vector Store. To use, you should have both: - the ``pymongo`` python package installed - a connection string associated with an Azure Cosmodb MongoDB vCore Cluster Examples: `pip install llama-index-vector-stores-azurecosmosmongo` ```python import pymongo from llama_index.vector_stores.azurecosmosmongo import AzureCosmosDBMongoDBVectorSearch # Set up the connection string with your Azure CosmosDB MongoDB URI connection_string = "YOUR_AZURE_COSMOSDB_MONGODB_URI" mongodb_client = pymongo.MongoClient(connection_string) # Create an instance of AzureCosmosDBMongoDBVectorSearch vector_store = AzureCosmosDBMongoDBVectorSearch( mongodb_client=mongodb_client, db_name="demo_vectordb", collection_name="paul_graham_essay", ) ``` """ stores_text: bool = True flat_metadata: bool = True _collection: Any = PrivateAttr() _index_name: str = PrivateAttr() _embedding_key: str = PrivateAttr() _id_key: str = PrivateAttr() _text_key: str = PrivateAttr() _metadata_key: str = PrivateAttr() _insert_kwargs: dict = PrivateAttr() _db_name: str = PrivateAttr() _collection_name: str = PrivateAttr() _cosmos_search_kwargs: dict = PrivateAttr() _mongodb_client: Any = PrivateAttr() def __init__( self, mongodb_client: Optional[Any] = None, db_name: str = "default_db", collection_name: str = "default_collection", index_name: str = "default_vector_search_index", id_key: str = "id", embedding_key: str = "content_vector", text_key: str = "text", metadata_key: str = "metadata", cosmos_search_kwargs: Optional[Dict] = None, insert_kwargs: Optional[Dict] = None, **kwargs: Any, ) -> None: """Initialize the vector store. Args: mongodb_client: An Azure CosmoDB MongoDB client (type: MongoClient, shown any for lazy import). db_name: An Azure CosmosDB MongoDB database name. collection_name: An Azure CosmosDB collection name. index_name: An Azure CosmosDB MongoDB vCore Vector Search index name. id_key: The data field to use as the id. embedding_key: An Azure CosmosDB MongoDB field that will contain the embedding for each document. text_key: An Azure CosmosDB MongoDB field that will contain the text for each document. metadata_key: An Azure CosmosDB MongoDB field that will contain the metadata for each document. cosmos_search_kwargs: An Azure CosmosDB MongoDB field that will contain search options, such as kind, numLists, similarity, and dimensions. insert_kwargs: The kwargs used during `insert`. """ super().__init__() if mongodb_client is not None: self._mongodb_client = cast(pymongo.MongoClient, mongodb_client) else: if "AZURE_COSMOSDB_MONGODB_URI" not in os.environ: raise ValueError( "Must specify Azure cosmodb 'AZURE_COSMOSDB_MONGODB_URI' via env variable " "if not directly passing in client." 
) self._mongodb_client = pymongo.MongoClient( os.environ["AZURE_COSMOSDB_MONGODB_URI"], appname="LlamaIndex-CDBMongoVCore-VectorStore-Python", ) self._collection = self._mongodb_client[db_name][collection_name] self._index_name = index_name self._embedding_key = embedding_key self._id_key = id_key self._text_key = text_key self._metadata_key = metadata_key self._insert_kwargs = insert_kwargs or {} self._db_name = db_name self._collection_name = collection_name self._cosmos_search_kwargs = cosmos_search_kwargs or {} self._create_vector_search_index() def _create_vector_search_index(self) -> None: db = self._mongodb_client[self._db_name] create_index_commands = {} kind = self._cosmos_search_kwargs.get("kind", "vector-hnsw") if kind == "vector-ivf": create_index_commands = self._get_vector_index_ivf(kind) elif kind == "vector-hnsw": create_index_commands = self._get_vector_index_hnsw(kind) db.command(create_index_commands) def _get_vector_index_ivf( self, kind: str, ) -> Dict[str, Any]: return { "createIndexes": self._collection_name, "indexes": [ { "name": self._index_name, "key": {self._embedding_key: "cosmosSearch"}, "cosmosSearchOptions": { "kind": kind, "numLists": self._cosmos_search_kwargs.get("numLists", 1), "similarity": self._cosmos_search_kwargs.get( "similarity", "COS" ), "dimensions": self._cosmos_search_kwargs.get( "dimensions", 1536 ), }, } ], } def _get_vector_index_hnsw( self, kind: str, ) -> Dict[str, Any]: return { "createIndexes": self._collection_name, "indexes": [ { "name": self._index_name, "key": {self._embedding_key: "cosmosSearch"}, "cosmosSearchOptions": { "kind": kind, "m": self._cosmos_search_kwargs.get("m", 2), "efConstruction": self._cosmos_search_kwargs.get( "efConstruction", 64 ), "similarity": self._cosmos_search_kwargs.get( "similarity", "COS" ), "dimensions": self._cosmos_search_kwargs.get( "dimensions", 1536 ), }, } ], } def create_filter_index( self, property_to_filter: str, index_name: str, ) -> dict[str, Any]: db = self._mongodb_client[self._db_name] command = { "createIndexes": self._collection.name, "indexes": [ { "key": {property_to_filter: 1}, "name": index_name, } ], } create_index_responses: dict[str, Any] = db.command(command) return create_index_responses def add( self, nodes: List[BaseNode], **add_kwargs: Any, ) -> List[str]: """Add nodes to index. Args: nodes: List[BaseNode]: list of nodes with embeddings Returns: A List of ids for successfully added nodes. """ ids = [] data_to_insert = [] for node in nodes: metadata = node_to_metadata_dict( node, remove_text=True, flat_metadata=self.flat_metadata ) entry = { self._id_key: node.node_id, self._embedding_key: node.get_embedding(), self._text_key: node.get_content(metadata_mode=MetadataMode.NONE) or "", self._metadata_key: metadata, "timeStamp": date.today(), } data_to_insert.append(entry) ids.append(node.node_id) logger.debug("Inserting data into MongoDB: %s", data_to_insert) insert_result = self._collection.insert_many( data_to_insert, **self._insert_kwargs ) logger.debug("Result of insert: %s", insert_result) return ids def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. """ # delete by filtering on the doc_id metadata self._collection.delete_one( filter={self._metadata_key + ".ref_doc_id": ref_doc_id}, **delete_kwargs ) @property def client(self) -> Any: """Return MongoDB client.""" return self._mongodb_client
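The `cosmos_search_kwargs` options consumed by `_get_vector_index_ivf` and `_get_vector_index_hnsw` above are easiest to see in context. The sketch below passes an explicit HNSW configuration; the connection string is a placeholder and the option values simply restate the defaults visible in those helpers.

```python
import pymongo
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.azurecosmosmongo import AzureCosmosDBMongoDBVectorSearch

mongodb_client = pymongo.MongoClient("YOUR_AZURE_COSMOSDB_MONGODB_URI")  # placeholder URI

vector_store = AzureCosmosDBMongoDBVectorSearch(
    mongodb_client=mongodb_client,
    db_name="demo_vectordb",
    collection_name="paul_graham_essay",
    # Options read by _get_vector_index_hnsw above
    cosmos_search_kwargs={
        "kind": "vector-hnsw",
        "m": 2,
        "efConstruction": 64,
        "similarity": "COS",
        "dimensions": 1536,
    },
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="CosmosDB for MongoDB vCore supports IVF and HNSW vector indexes.")],
    storage_context=storage_context,
)
```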
168721
from typing import Any, List, Literal, Optional import fsspec from llama_index.vector_stores.docarray.base import DocArrayVectorStore class DocArrayInMemoryVectorStore(DocArrayVectorStore): """Class representing a DocArray In-Memory vector store. This class is a document index provided by Docarray that stores documents in memory. Examples: `pip install llama-index-vector-stores-docarray` ```python from llama_index.vector_stores.docarray import DocArrayInMemoryVectorStore # Create an instance of DocArrayInMemoryVectorStore vector_store = DocArrayInMemoryVectorStore() ``` """ def __init__( self, index_path: Optional[str] = None, metric: Literal[ "cosine_sim", "euclidian_dist", "sgeuclidean_dist" ] = "cosine_sim", ): """Initializes the DocArrayInMemoryVectorStore. Args: index_path (Optional[str]): The path to the index file. metric (Literal["cosine_sim", "euclidian_dist", "sgeuclidean_dist"]): The distance metric to use. Default is "cosine_sim". """ import_err_msg = """ `docarray` package not found. Install the package via pip: `pip install docarray` """ try: import docarray # noqa except ImportError: raise ImportError(import_err_msg) self._ref_docs = None # type: ignore[assignment] self._index_file_path = index_path self._index, self._schema = self._init_index(metric=metric) def _init_index(self, **kwargs: Any): # type: ignore[no-untyped-def] """Initializes the in-memory exact nearest neighbour index. Args: **kwargs: Variable length argument list. Returns: tuple: The in-memory exact nearest neighbour index and its schema. """ from docarray.index import InMemoryExactNNIndex schema = self._get_schema(**kwargs) index = InMemoryExactNNIndex[schema] # type: ignore[valid-type] params = {"index_file_path": self._index_file_path} return index(**params), schema # type: ignore[arg-type] def _find_docs_to_be_removed(self, doc_id: str) -> List[str]: """Finds the documents to be removed from the vector store. Args: doc_id (str): Reference document ID that should be removed. Returns: List[str]: List of document IDs to be removed. """ query = {"metadata__doc_id": {"$eq": doc_id}} docs = self._index.filter(query) return [doc.id for doc in docs] def persist( self, persist_path: str, fs: Optional[fsspec.AbstractFileSystem] = None ) -> None: """Persists the in-memory vector store to a file. Args: persist_path (str): The path to persist the index. fs (fsspec.AbstractFileSystem, optional): Filesystem to persist to. (doesn't apply) """ index_path = persist_path or self._index_file_path self._index.persist(index_path)
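Because this store keeps everything in memory, the `persist` method above is the only way to keep the index between runs. The sketch below builds an index, queries it, and persists it; the persist path and document text are illustrative assumptions.

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.docarray import DocArrayInMemoryVectorStore

vector_store = DocArrayInMemoryVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    [Document(text="DocArray keeps this index entirely in memory.")],
    storage_context=storage_context,
)
print(index.as_query_engine().query("Where is the index kept?"))

# Persist the exact-NN index to disk (path is illustrative)
vector_store.persist("./docarray_index.bin")
```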
168769
class RedisVectorStore(BasePydanticVectorStore): """RedisVectorStore. The RedisVectorStore takes a user-defined schema object and a Redis connection client or URL string. The schema is optional, but useful for: - Defining a custom index name, key prefix, and key separator. - Defining *additional* metadata fields to use as query filters. - Setting custom specifications on fields to improve search quality, e.g which vector index algorithm to use. Other Notes: - All embeddings and docs are stored in Redis. During query time, the index uses Redis to query for the top k most similar nodes. - Redis & LlamaIndex expect at least 4 *required* fields for any schema, default or custom, `id`, `doc_id`, `text`, `vector`. Args: schema (IndexSchema, optional): Redis index schema object. redis_client (Redis, optional): Redis client connection. redis_url (str, optional): Redis server URL. Defaults to "redis://localhost:6379". overwrite (bool, optional): Whether to overwrite the index if it already exists. Defaults to False. Raises: ValueError: If your Redis server does not have search or JSON enabled. ValueError: If a Redis connection failed to be established. ValueError: If an invalid schema is provided. Example: from redisvl.schema import IndexSchema from llama_index.vector_stores.redis import RedisVectorStore # Use default schema rds = RedisVectorStore(redis_url="redis://localhost:6379") # Use custom schema from dict schema = IndexSchema.from_dict({ "index": {"name": "my-index", "prefix": "docs"}, "fields": [ {"name": "id", "type": "tag"}, {"name": "doc_id", "type": "tag}, {"name": "text", "type": "text"}, {"name": "vector", "type": "vector", "attrs": {"dims": 1536, "algorithm": "flat"}} ] }) vector_store = RedisVectorStore( schema=schema, redis_url="redis://localhost:6379" ) """ stores_text: bool = True stores_node: bool = True flat_metadata: bool = False _index: SearchIndex = PrivateAttr() _overwrite: bool = PrivateAttr() _return_fields: List[str] = PrivateAttr() def __init__( self, schema: Optional[IndexSchema] = None, redis_client: Optional[Redis] = None, redis_url: Optional[str] = None, overwrite: bool = False, return_fields: Optional[List[str]] = None, **kwargs: Any, ) -> None: super().__init__() # check for indicators of old schema self._flag_old_kwargs(**kwargs) # Setup schema if not schema: logger.info("Using default RedisVectorStore schema.") schema = RedisVectorStoreSchema() self._validate_schema(schema) self._return_fields = return_fields or [ NODE_ID_FIELD_NAME, DOC_ID_FIELD_NAME, TEXT_FIELD_NAME, NODE_CONTENT_FIELD_NAME, ] self._index = SearchIndex(schema=schema) self._overwrite = overwrite # Establish redis connection if redis_client: self._index.set_client(redis_client) elif redis_url: self._index.connect(redis_url) else: raise ValueError( "Failed to connect to Redis. Must provide a valid redis client or url" ) # Create index self.create_index() def _flag_old_kwargs(self, **kwargs): old_kwargs = [ "index_name", "index_prefix", "prefix_ending", "index_args", "metadata_fields", ] for kwarg in old_kwargs: if kwarg in kwargs: raise ValueError( f"Deprecated kwarg, {kwarg}, found upon initialization. " "RedisVectorStore now requires an IndexSchema object. 
" "See the documentation for a complete example: https://docs.llamaindex.ai/en/stable/examples/vector_stores/RedisIndexDemo/" ) def _validate_schema(self, schema: IndexSchema) -> str: base_schema = RedisVectorStoreSchema() for name, field in base_schema.fields.items(): if (name not in schema.fields) or ( not schema.fields[name].type == field.type ): raise ValueError( f"Required field {name} must be present in the index " f"and of type {schema.fields[name].type}" ) @property def client(self) -> "Redis": """Return the redis client instance.""" return self._index.client @property def index_name(self) -> str: """Return the name of the index based on the schema.""" return self._index.name @property def schema(self) -> IndexSchema: """Return the index schema.""" return self._index.schema def set_return_fields(self, return_fields: List[str]) -> None: """Update the return fields for the query response.""" self._return_fields = return_fields def index_exists(self) -> bool: """Check whether the index exists in Redis. Returns: bool: True or False. """ return self._index.exists() def create_index(self, overwrite: Optional[bool] = None) -> None: """Create an index in Redis.""" if overwrite is None: overwrite = self._overwrite # Create index honoring overwrite policy if overwrite: self._index.create(overwrite=True, drop=True) else: self._index.create() def add(self, nodes: List[BaseNode], **add_kwargs: Any) -> List[str]: """Add nodes to the index. Args: nodes (List[BaseNode]): List of nodes with embeddings Returns: List[str]: List of ids of the documents added to the index. Raises: ValueError: If the index already exists and overwrite is False. """ # Check to see if empty document list was passed if len(nodes) == 0: return [] # Now check for the scenario where user is trying to index embeddings that don't align with schema embedding_len = len(nodes[0].get_embedding()) expected_dims = self._index.schema.fields[VECTOR_FIELD_NAME].attrs.dims if expected_dims != embedding_len: raise ValueError( f"Attempting to index embeddings of dim {embedding_len} " f"which doesn't match the index schema expectation of {expected_dims}. " "Please review the Redis integration example to learn how to customize schema. " "" ) data: List[Dict[str, Any]] = [] for node in nodes: embedding = node.get_embedding() record = { NODE_ID_FIELD_NAME: node.node_id, DOC_ID_FIELD_NAME: node.ref_doc_id, TEXT_FIELD_NAME: node.get_content(metadata_mode=MetadataMode.NONE), VECTOR_FIELD_NAME: array_to_buffer(embedding, dtype="FLOAT32"), } # parse and append metadata additional_metadata = node_to_metadata_dict( node, remove_text=True, flat_metadata=self.flat_metadata ) data.append({**record, **additional_metadata}) # Load nodes to Redis keys = self._index.load(data, id_field=NODE_ID_FIELD_NAME, **add_kwargs) logger.info(f"Added {len(keys)} documents to index {self._index.name}") return [ key.strip(self._index.prefix + self._index.key_separator) for key in keys ] def delete(self, ref_doc_id: str, **delete_kwargs: Any) -> None: """ Delete nodes using with ref_doc_id. Args: ref_doc_id (str): The doc_id of the document to delete. 
""" # build a filter to target specific docs by doc ID doc_filter = Tag(DOC_ID_FIELD_NAME) == ref_doc_id total = self._index.query(CountQuery(doc_filter)) delete_query = FilterQuery( return_fields=[NODE_ID_FIELD_NAME], filter_expression=doc_filter, num_results=total, ) # fetch docs to delete and flush them docs_to_delete = self._index.search(delete_query.query, delete_query.params) with self._index.client.pipeline(transaction=False) as pipe: for doc in docs_to_delete.docs: pipe.delete(doc.id) res = pipe.execute() logger.info( f"Deleted {len(docs_to_delete.docs)} documents from index {self._index.name}" ) def delete_index(self) -> None: """Delete the index and all documents.""" logger.info(f"Deleting index {self._index.name}") self._index.delete(drop=True)
168825
"""Pathway Retriever.""" import json from typing import List, Optional import requests from llama_index.core.base.base_retriever import BaseRetriever from llama_index.core.callbacks.base import CallbackManager from llama_index.core.constants import DEFAULT_SIMILARITY_TOP_K from llama_index.core.schema import ( NodeWithScore, QueryBundle, TextNode, ) # Copied from https://github.com/pathwaycom/pathway/blob/main/python/pathway/xpacks/llm/vector_store.py # to remove dependency on Pathway library when only the client is used. class _VectorStoreClient: def __init__( self, host: Optional[str] = None, port: Optional[int] = None, url: Optional[str] = None, ): """ A client you can use to query :py:class:`VectorStoreServer`. Please provide either the `url`, or `host` and `port`. Args: - host: host on which `:py:class:`VectorStoreServer` listens - port: port on which `:py:class:`VectorStoreServer` listens - url: url at which `:py:class:`VectorStoreServer` listens """ err = "Either (`host` and `port`) or `url` must be provided, but not both." if url is not None: if host or port: raise ValueError(err) self.url = url else: if host is None: raise ValueError(err) port = port or 80 self.url = f"http://{host}:{port}" def query( self, query: str, k: int = 3, metadata_filter: Optional[str] = None ) -> List[dict]: """ Perform a query to the vector store and fetch results. Args: - query: - k: number of documents to be returned - metadata_filter: optional string representing the metadata filtering query in the JMESPath format. The search will happen only for documents satisfying this filtering. """ data = {"query": query, "k": k} if metadata_filter is not None: data["metadata_filter"] = metadata_filter url = self.url + "/v1/retrieve" response = requests.post( url, data=json.dumps(data), headers={"Content-Type": "application/json"}, timeout=3, ) responses = response.json() return sorted(responses, key=lambda x: x["dist"]) # Make an alias __call__ = query def get_vectorstore_statistics(self) -> dict: """Fetch basic statistics about the vector store.""" url = self.url + "/v1/statistics" response = requests.post( url, json={}, headers={"Content-Type": "application/json"}, ) return response.json() def get_input_files( self, metadata_filter: Optional[str] = None, filepath_globpattern: Optional[str] = None, ) -> list: """ Fetch information on documents in the vector store. Args: metadata_filter: optional string representing the metadata filtering query in the JMESPath format. The search will happen only for documents satisfying this filtering. filepath_globpattern: optional glob pattern specifying which documents will be searched for this query. """ url = self.url + "/v1/inputs" response = requests.post( url, json={ "metadata_filter": metadata_filter, "filepath_globpattern": filepath_globpattern, }, headers={"Content-Type": "application/json"}, ) return response.json() class PathwayRetriever(BaseRetriever): """Pathway retriever. Pathway is an open data processing framework. It allows you to easily develop data transformation pipelines that work with live data sources and changing data. This is the client that implements Retriever API for PathwayVectorServer. 
""" def __init__( self, host: Optional[str] = None, port: Optional[int] = None, url: Optional[str] = None, similarity_top_k: int = DEFAULT_SIMILARITY_TOP_K, callback_manager: Optional[CallbackManager] = None, ) -> None: """Initializing the Pathway retriever client.""" self.client = _VectorStoreClient(host, port, url) self.similarity_top_k = similarity_top_k super().__init__(callback_manager) def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]: """Retrieve.""" rets = self.client(query=query_bundle.query_str, k=self.similarity_top_k) items = [ NodeWithScore( node=TextNode(text=ret["text"], extra_info=ret["metadata"]), # Transform cosine distance into a similairty score # (higher is more similar) score=1 - ret["dist"], ) for ret in rets ] return sorted(items, key=lambda x: x.score or 0.0, reverse=True)
168867
class VertexAISearchRetriever(BaseRetriever): """`Vertex AI Search` retrieval. For a detailed explanation of the Vertex AI Search concepts and configuration parameters, refer to the product documentation. https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction Args: project_id: str #Google Cloud Project ID data_store_id: str #Vertex AI Search data store ID. location_id: str = "global" #Vertex AI Search data store location. serving_config_id: str = "default_config" #Vertex AI Search serving config ID credentials: Any = None The default custom credentials (google.auth.credentials.Credentials) to use when making API calls. If not provided, credentials will be ascertained from the environment engine_data_type: int = 0 Defines the Vertex AI Search data type 0 - Unstructured data 1 - Structured data 2 - Website data Example: retriever = VertexAISearchRetriever( project_id=PROJECT_ID, data_store_id=DATA_STORE_ID, location_id=LOCATION_ID, engine_data_type=0 ) """ """ The following parameter explanation can be found here: https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#contentsearchspec """ filter: Optional[str] = None """Filter expression.""" get_extractive_answers: bool = False """If True return Extractive Answers, otherwise return Extractive Segments or Snippets.""" max_documents: int = 5 """The maximum number of documents to return.""" max_extractive_answer_count: int = 1 """The maximum number of extractive answers returned in each search result. At most 5 answers will be returned for each SearchResult. """ max_extractive_segment_count: int = 1 """The maximum number of extractive segments returned in each search result. Currently one segment will be returned for each SearchResult. """ query_expansion_condition: int = 1 """Specification to determine under which conditions query expansion should occur. 0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled 1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero. 2 - Automatic query expansion built by the Search API. """ spell_correction_mode: int = 1 """Specification to determine under which conditions query expansion should occur. 0 - Unspecified spell correction mode. In this case, server behavior defaults to auto. 1 - Suggestion only. Search API will try to find a spell suggestion if there is any and put in the `SearchResponse.corrected_query`. The spell suggestion will not be used as the search query. 2 - Automatic spell correction built by the Search API. Search will be based on the corrected query if found. """ boost_spec: Optional[Dict[Any, Any]] = None """BoostSpec for boosting search results. A protobuf should be provided. https://cloud.google.com/generative-ai-app-builder/docs/boost-search-results https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1beta/BoostSpec """ return_extractive_segment_score: bool = True """ Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. 
""" _client: SearchServiceClient _serving_config: str def __init__( self, project_id: str, data_store_id: str, location_id: str = "global", serving_config_id: str = "default_config", credentials: Any = None, engine_data_type: int = 0, max_documents: int = 5, user_agent: Optional[str] = None, **kwargs: Any, ) -> None: """Initializes private fields.""" self.project_id = project_id self.location_id = location_id self.data_store_id = data_store_id self.serving_config_id = serving_config_id self.engine_data_type = engine_data_type self.credentials = credentials self.max_documents = max_documents self._user_agent = user_agent or "llama-index/0.0.0" self.client_options = ClientOptions( api_endpoint=( f"{self.location_id}-discoveryengine.googleapis.com" if self.location_id != "global" else None ) ) try: from google.cloud.discoveryengine_v1beta import SearchServiceClient except ImportError as exc: raise ImportError( "Could not import google-cloud-discoveryengine python package. " "Please, install vertexaisearch dependency group: " ) from exc try: super().__init__(**kwargs) except ValueError as e: print(f"Error initializing GoogleVertexAISearchRetriever: {e!s}") raise # For more information, refer to: # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store self._client = SearchServiceClient( credentials=self.credentials, client_options=self.client_options, client_info=get_client_info(module="vertex-ai-search"), ) self._serving_config = self._client.serving_config_path( project=self.project_id, location=self.location_id, data_store=self.data_store_id, serving_config=self.serving_config_id, ) def _get_content_spec_kwargs(self) -> Optional[Dict[str, Any]]: """Prepares a ContentSpec object.""" from google.cloud.discoveryengine_v1beta import SearchRequest if self.engine_data_type == 0: if self.get_extractive_answers: extractive_content_spec = SearchRequest.ContentSearchSpec.ExtractiveContentSpec( max_extractive_answer_count=self.max_extractive_answer_count, return_extractive_segment_score=self.return_extractive_segment_score, ) else: extractive_content_spec = SearchRequest.ContentSearchSpec.ExtractiveContentSpec( max_extractive_segment_count=self.max_extractive_segment_count, return_extractive_segment_score=self.return_extractive_segment_score, ) content_search_spec = {"extractive_content_spec": extractive_content_spec} elif self.engine_data_type == 1: content_search_spec = None elif self.engine_data_type == 2: content_search_spec = { "extractive_content_spec": SearchRequest.ContentSearchSpec.ExtractiveContentSpec( max_extractive_segment_count=self.max_extractive_segment_count, max_extractive_answer_count=self.max_extractive_answer_count, return_extractive_segment_score=self.return_extractive_segment_score, ), "snippet_spec": SearchRequest.ContentSearchSpec.SnippetSpec( return_snippet=True ), } else: raise NotImplementedError( "Only data store type 0 (Unstructured), 1 (Structured)," "or 2 (Website) are supported currently." 
+ f" Got {self.engine_data_type}" ) return content_search_spec def _create_search_request(self, query: str) -> SearchRequest: """Prepares a SearchRequest object.""" from google.cloud.discoveryengine_v1beta import SearchRequest query_expansion_spec = SearchRequest.QueryExpansionSpec( condition=self.query_expansion_condition, ) spell_correction_spec = SearchRequest.SpellCorrectionSpec( mode=self.spell_correction_mode ) content_search_spec_kwargs = self._get_content_spec_kwargs() if content_search_spec_kwargs is not None: content_search_spec = SearchRequest.ContentSearchSpec( **content_search_spec_kwargs ) else: content_search_spec = None return SearchRequest( query=query, filter=self.filter, serving_config=self._serving_config, page_size=self.max_documents, content_search_spec=content_search_spec, query_expansion_spec=query_expansion_spec, spell_correction_spec=spell_correction_spec, boost_spec=SearchRequest.BoostSpec(**self.boost_spec) if self.boost_spec else None, )
168931
from typing import Any, Dict, List, Optional from llama_index.core.base.base_retriever import BaseRetriever from llama_index.core.callbacks.base import CallbackManager from llama_index.core.constants import DEFAULT_SIMILARITY_TOP_K from llama_index.core.schema import NodeWithScore, QueryBundle from llama_index.core.settings import Settings from llama_index.core.vector_stores.types import MetadataFilters from llama_index.indices.managed.bge_m3.base import BGEM3Index class BGEM3Retriever(BaseRetriever): """Vector index retriever. Args: index (BGEM3Index): BGEM3 index. similarity_top_k (int): number of top k results to return. filters (Optional[MetadataFilters]): metadata filters, defaults to None doc_ids (Optional[List[str]]): list of documents to constrain search. bge_m3_kwargs (dict): Additional bge_m3 specific kwargs to pass through to the bge_m3 index at query time. """ def __init__( self, index: BGEM3Index, similarity_top_k: int = DEFAULT_SIMILARITY_TOP_K, filters: Optional[MetadataFilters] = None, node_ids: Optional[List[str]] = None, doc_ids: Optional[List[str]] = None, callback_manager: Optional[CallbackManager] = None, object_map: Optional[dict] = None, verbose: bool = False, **kwargs: Any, ) -> None: """Initialize params.""" self._index = index self._docstore = self._index.docstore self._similarity_top_k = similarity_top_k self._node_ids = node_ids self._doc_ids = doc_ids self._filters = filters self._kwargs: Dict[str, Any] = kwargs.get("bge_m3_kwargs", {}) self._model = self._index.model self._batch_size = self._index.batch_size self._query_maxlen = self._index.query_maxlen self._weights_for_different_modes = self._index.weights_for_different_modes super().__init__( callback_manager=callback_manager or Settings.callback_manager, object_map=object_map, verbose=verbose, ) def _retrieve( self, query_bundle: QueryBundle, ) -> List[NodeWithScore]: return self._index.query( query_str=query_bundle.query_str, top_k=self._similarity_top_k, **self._kwargs, )
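A short, hedged usage sketch for the retriever above: it assumes a `BGEM3Index` has been built from local documents (the `data/` directory is a placeholder, and `from_documents` is assumed to be available on the managed index), and it uses the `BGEM3Retriever` class defined in this module.

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.indices.managed.bge_m3.base import BGEM3Index

# "data/" is a placeholder directory of your own files.
documents = SimpleDirectoryReader("data").load_data()
index = BGEM3Index.from_documents(documents)  # assumes the standard from_documents constructor

# BGEM3Retriever is the class defined above (importable from the same package).
retriever = BGEM3Retriever(index=index, similarity_top_k=3)
nodes = retriever.retrieve("What does the corpus say about embeddings?")
for n in nodes:
    print(n.score, n.node.get_content()[:80])
```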
168950
""" Vectara index. An index that is built on top of Vectara. """ import json import logging from typing import Any, List, Optional, Tuple, Dict from enum import Enum import urllib.parse from llama_index.core.base.base_retriever import BaseRetriever from llama_index.core.callbacks.base import CallbackManager from llama_index.core.indices.vector_store.retrievers.auto_retriever.auto_retriever import ( VectorIndexAutoRetriever, ) from llama_index.core.schema import NodeWithScore, QueryBundle, TextNode from llama_index.core.types import TokenGen from llama_index.core.llms import ( CompletionResponse, ) from llama_index.core.vector_stores.types import ( FilterCondition, MetadataFilters, VectorStoreInfo, VectorStoreQuerySpec, ) from llama_index.indices.managed.vectara.base import VectaraIndex from llama_index.indices.managed.vectara.prompts import ( DEFAULT_VECTARA_QUERY_PROMPT_TMPL, ) _logger = logging.getLogger(__name__) MMR_RERANKER_ID = 272725718 SLINGSHOT_RERANKER_ID = 272725719 UDF_RERANKER_ID = 272725722 class VectaraReranker(str, Enum): NONE = "none" MMR = "mmr" SLINGSHOT_ALT_NAME = "slingshot" SLINGSHOT = "multilingual_reranker_v1" UDF = "udf" class VectaraRetriever(BaseRetriever): """ Vectara Retriever. Args: index (VectaraIndex): the Vectara Index similarity_top_k (int): number of top k results to return, defaults to 5. reranker (str): reranker to use: none, mmr, multilingual_reranker_v1, or udf. Note that "multilingual_reranker_v1" is a Vectara Scale feature only. lambda_val (float): for hybrid search. 0 = neural search only. 1 = keyword match only. In between values are a linear interpolation n_sentences_before (int): number of sentences before the matched sentence to return in the node n_sentences_after (int): number of sentences after the matched sentence to return in the node filter: metadata filter (if specified) rerank_k: number of results to fetch for Reranking, defaults to 50. mmr_diversity_bias: number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to minimum diversity and 1 to maximum diversity. Defaults to 0.3. udf_expression: the user defined expression for reranking results. See (https://docs.vectara.com/docs/learn/user-defined-function-reranker) for more details about syntax for udf reranker expressions. summary_enabled: whether to generate summaries or not. Defaults to False. summary_response_lang: language to use for summary generation. summary_num_results: number of results to use for summary generation. summary_prompt_name: name of the prompt to use for summary generation. citations_style: The style of the citations in the summary generation, either "numeric", "html", "markdown", or "none". This is a Vectara Scale only feature. Defaults to None. citations_url_pattern: URL pattern for html and markdown citations. If non-empty, specifies the URL pattern to use for citations; e.g. "{doc.url}". See (https://docs.vectara.com/docs/api-reference/search-apis/search #citation-format-in-summary) for more details. This is a Vectara Scale only feature. Defaults to None. citations_text_pattern: The displayed text for citations. If not specified, numeric citations are displayed for text. 
""" def __init__( self, index: VectaraIndex, similarity_top_k: int = 10, lambda_val: float = 0.005, n_sentences_before: int = 2, n_sentences_after: int = 2, filter: str = "", reranker: VectaraReranker = VectaraReranker.NONE, rerank_k: int = 50, mmr_diversity_bias: float = 0.3, udf_expression: str = None, summary_enabled: bool = False, summary_response_lang: str = "eng", summary_num_results: int = 7, summary_prompt_name: str = "vectara-summary-ext-24-05-sml", citations_style: Optional[str] = None, citations_url_pattern: Optional[str] = None, citations_text_pattern: Optional[str] = None, callback_manager: Optional[CallbackManager] = None, x_source_str: str = "llama_index", **kwargs: Any, ) -> None: """Initialize params.""" self._index = index self._similarity_top_k = similarity_top_k self._lambda_val = lambda_val self._n_sentences_before = n_sentences_before self._n_sentences_after = n_sentences_after self._filter = filter self._citations_style = citations_style.upper() if citations_style else None self._citations_url_pattern = citations_url_pattern self._citations_text_pattern = citations_text_pattern self._x_source_str = x_source_str if reranker == VectaraReranker.MMR: self._rerank = True self._rerank_k = rerank_k self._mmr_diversity_bias = mmr_diversity_bias self._reranker_id = MMR_RERANKER_ID elif ( reranker == VectaraReranker.SLINGSHOT or reranker == VectaraReranker.SLINGSHOT_ALT_NAME ): self._rerank = True self._rerank_k = rerank_k self._reranker_id = SLINGSHOT_RERANKER_ID elif reranker == VectaraReranker.UDF and udf_expression is not None: self._rerank = True self._rerank_k = rerank_k self._udf_expression = udf_expression self._reranker_id = UDF_RERANKER_ID else: self._rerank = False if summary_enabled: self._summary_enabled = True self._summary_response_lang = summary_response_lang self._summary_num_results = summary_num_results self._summary_prompt_name = summary_prompt_name else: self._summary_enabled = False super().__init__(callback_manager) def _get_post_headers(self) -> dict: """Returns headers that should be attached to each post request.""" return { "x-api-key": self._index._vectara_api_key, "customer-id": self._index._vectara_customer_id, "Content-Type": "application/json", "X-Source": self._x_source_str, } @property def similarity_top_k(self) -> int: """Return similarity top k.""" return self._similarity_top_k @similarity_top_k.setter def similarity_top_k(self, similarity_top_k: int) -> None: """Set similarity top k.""" self._similarity_top_k = similarity_top_k def _retrieve( self, query_bundle: QueryBundle, **kwargs: Any, ) -> List[NodeWithScore]: """ Retrieve top k most similar nodes. Args: query: Query Bundle """ return self._vectara_query(query_bundle, **kwargs)[0] # return top_nodes only
169733
import logging import os from typing import Any, Callable, Optional, Tuple, Union from llama_index.core.base.llms.generic_utils import get_from_param_or_env from tenacity import ( before_sleep_log, retry, retry_if_exception_type, stop_after_attempt, stop_after_delay, wait_exponential, wait_random_exponential, ) from tenacity.stop import stop_base import openai from openai.types.chat import ChatCompletionMessageToolCall from openai.types.chat.chat_completion_chunk import ChoiceDeltaToolCall DEFAULT_OPENAI_API_BASE = "https://api.openai.com/v1" DEFAULT_OPENAI_API_VERSION = "" MISSING_API_KEY_ERROR_MESSAGE = """No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or \ openai.api_key prior to initialization. API keys can be found or created at \ https://platform.openai.com/account/api-keys """ logger = logging.getLogger(__name__) OpenAIToolCall = Union[ChatCompletionMessageToolCall, ChoiceDeltaToolCall] def create_retry_decorator( max_retries: int, random_exponential: bool = False, stop_after_delay_seconds: Optional[float] = None, min_seconds: float = 4, max_seconds: float = 10, ) -> Callable[[Any], Any]: wait_strategy = ( wait_random_exponential(min=min_seconds, max=max_seconds) if random_exponential else wait_exponential(multiplier=1, min=min_seconds, max=max_seconds) ) stop_strategy: stop_base = stop_after_attempt(max_retries) if stop_after_delay_seconds is not None: stop_strategy = stop_strategy | stop_after_delay(stop_after_delay_seconds) return retry( reraise=True, stop=stop_strategy, wait=wait_strategy, retry=( retry_if_exception_type( ( openai.APIConnectionError, openai.APITimeoutError, openai.RateLimitError, openai.InternalServerError, ) ) ), before_sleep=before_sleep_log(logger, logging.WARNING), ) def resolve_openai_credentials( api_key: Optional[str] = None, api_base: Optional[str] = None, api_version: Optional[str] = None, ) -> Tuple[Optional[str], str, str]: """ "Resolve OpenAI credentials. The order of precedence is: 1. param 2. env 3. openai module 4. default """ # resolve from param or env api_key = get_from_param_or_env("api_key", api_key, "OPENAI_API_KEY", "") api_base = get_from_param_or_env("api_base", api_base, "OPENAI_API_BASE", "") api_version = get_from_param_or_env( "api_version", api_version, "OPENAI_API_VERSION", "" ) # resolve from openai module or default final_api_key = api_key or openai.api_key or "" final_api_base = api_base or openai.base_url or DEFAULT_OPENAI_API_BASE final_api_version = api_version or openai.api_version or DEFAULT_OPENAI_API_VERSION return final_api_key, str(final_api_base), final_api_version def validate_openai_api_key(api_key: Optional[str] = None) -> None: openai_api_key = api_key or os.environ.get("OPENAI_API_KEY", "") if not openai_api_key: raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
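A small sketch of the helpers above: credentials are resolved with the documented precedence (parameter, then environment, then the `openai` module, then defaults), validated, and a retry decorator is built for wrapping API calls. The `sk-...` key is a placeholder.

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key

api_key, api_base, api_version = resolve_openai_credentials()
print(api_base)  # "https://api.openai.com/v1" unless overridden

validate_openai_api_key(api_key)  # raises ValueError if no key could be found

# Wrap any OpenAI call with retries on transient connection/rate-limit errors.
retry_decorator = create_retry_decorator(max_retries=3, random_exponential=True)
```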
169735
"""OpenAI embeddings file.""" from enum import Enum from typing import Any, Dict, List, Optional, Tuple import httpx from llama_index.core.base.embeddings.base import BaseEmbedding from llama_index.core.bridge.pydantic import Field, PrivateAttr from llama_index.core.callbacks.base import CallbackManager from llama_index.embeddings.openai.utils import ( DEFAULT_OPENAI_API_BASE, DEFAULT_OPENAI_API_VERSION, create_retry_decorator, resolve_openai_credentials, ) from openai import AsyncOpenAI, OpenAI embedding_retry_decorator = create_retry_decorator( max_retries=6, random_exponential=True, stop_after_delay_seconds=60, min_seconds=1, max_seconds=20, ) class OpenAIEmbeddingMode(str, Enum): """OpenAI embedding mode.""" SIMILARITY_MODE = "similarity" TEXT_SEARCH_MODE = "text_search" class OpenAIEmbeddingModelType(str, Enum): """OpenAI embedding model type.""" DAVINCI = "davinci" CURIE = "curie" BABBAGE = "babbage" ADA = "ada" TEXT_EMBED_ADA_002 = "text-embedding-ada-002" TEXT_EMBED_3_LARGE = "text-embedding-3-large" TEXT_EMBED_3_SMALL = "text-embedding-3-small" class OpenAIEmbeddingModeModel(str, Enum): """OpenAI embedding mode model.""" # davinci TEXT_SIMILARITY_DAVINCI = "text-similarity-davinci-001" TEXT_SEARCH_DAVINCI_QUERY = "text-search-davinci-query-001" TEXT_SEARCH_DAVINCI_DOC = "text-search-davinci-doc-001" # curie TEXT_SIMILARITY_CURIE = "text-similarity-curie-001" TEXT_SEARCH_CURIE_QUERY = "text-search-curie-query-001" TEXT_SEARCH_CURIE_DOC = "text-search-curie-doc-001" # babbage TEXT_SIMILARITY_BABBAGE = "text-similarity-babbage-001" TEXT_SEARCH_BABBAGE_QUERY = "text-search-babbage-query-001" TEXT_SEARCH_BABBAGE_DOC = "text-search-babbage-doc-001" # ada TEXT_SIMILARITY_ADA = "text-similarity-ada-001" TEXT_SEARCH_ADA_QUERY = "text-search-ada-query-001" TEXT_SEARCH_ADA_DOC = "text-search-ada-doc-001" # text-embedding-ada-002 TEXT_EMBED_ADA_002 = "text-embedding-ada-002" # text-embedding-3-large TEXT_EMBED_3_LARGE = "text-embedding-3-large" # text-embedding-3-small TEXT_EMBED_3_SMALL = "text-embedding-3-small" # convenient shorthand OAEM = OpenAIEmbeddingMode OAEMT = OpenAIEmbeddingModelType OAEMM = OpenAIEmbeddingModeModel EMBED_MAX_TOKEN_LIMIT = 2048 _QUERY_MODE_MODEL_DICT = { (OAEM.SIMILARITY_MODE, "davinci"): OAEMM.TEXT_SIMILARITY_DAVINCI, (OAEM.SIMILARITY_MODE, "curie"): OAEMM.TEXT_SIMILARITY_CURIE, (OAEM.SIMILARITY_MODE, "babbage"): OAEMM.TEXT_SIMILARITY_BABBAGE, (OAEM.SIMILARITY_MODE, "ada"): OAEMM.TEXT_SIMILARITY_ADA, (OAEM.SIMILARITY_MODE, "text-embedding-ada-002"): OAEMM.TEXT_EMBED_ADA_002, (OAEM.SIMILARITY_MODE, "text-embedding-3-small"): OAEMM.TEXT_EMBED_3_SMALL, (OAEM.SIMILARITY_MODE, "text-embedding-3-large"): OAEMM.TEXT_EMBED_3_LARGE, (OAEM.TEXT_SEARCH_MODE, "davinci"): OAEMM.TEXT_SEARCH_DAVINCI_QUERY, (OAEM.TEXT_SEARCH_MODE, "curie"): OAEMM.TEXT_SEARCH_CURIE_QUERY, (OAEM.TEXT_SEARCH_MODE, "babbage"): OAEMM.TEXT_SEARCH_BABBAGE_QUERY, (OAEM.TEXT_SEARCH_MODE, "ada"): OAEMM.TEXT_SEARCH_ADA_QUERY, (OAEM.TEXT_SEARCH_MODE, "text-embedding-ada-002"): OAEMM.TEXT_EMBED_ADA_002, (OAEM.TEXT_SEARCH_MODE, "text-embedding-3-large"): OAEMM.TEXT_EMBED_3_LARGE, (OAEM.TEXT_SEARCH_MODE, "text-embedding-3-small"): OAEMM.TEXT_EMBED_3_SMALL, } _TEXT_MODE_MODEL_DICT = { (OAEM.SIMILARITY_MODE, "davinci"): OAEMM.TEXT_SIMILARITY_DAVINCI, (OAEM.SIMILARITY_MODE, "curie"): OAEMM.TEXT_SIMILARITY_CURIE, (OAEM.SIMILARITY_MODE, "babbage"): OAEMM.TEXT_SIMILARITY_BABBAGE, (OAEM.SIMILARITY_MODE, "ada"): OAEMM.TEXT_SIMILARITY_ADA, (OAEM.SIMILARITY_MODE, "text-embedding-ada-002"): OAEMM.TEXT_EMBED_ADA_002, 
(OAEM.SIMILARITY_MODE, "text-embedding-3-small"): OAEMM.TEXT_EMBED_3_SMALL, (OAEM.SIMILARITY_MODE, "text-embedding-3-large"): OAEMM.TEXT_EMBED_3_LARGE, (OAEM.TEXT_SEARCH_MODE, "davinci"): OAEMM.TEXT_SEARCH_DAVINCI_DOC, (OAEM.TEXT_SEARCH_MODE, "curie"): OAEMM.TEXT_SEARCH_CURIE_DOC, (OAEM.TEXT_SEARCH_MODE, "babbage"): OAEMM.TEXT_SEARCH_BABBAGE_DOC, (OAEM.TEXT_SEARCH_MODE, "ada"): OAEMM.TEXT_SEARCH_ADA_DOC, (OAEM.TEXT_SEARCH_MODE, "text-embedding-ada-002"): OAEMM.TEXT_EMBED_ADA_002, (OAEM.TEXT_SEARCH_MODE, "text-embedding-3-large"): OAEMM.TEXT_EMBED_3_LARGE, (OAEM.TEXT_SEARCH_MODE, "text-embedding-3-small"): OAEMM.TEXT_EMBED_3_SMALL, } @embedding_retry_decorator def get_embedding(client: OpenAI, text: str, engine: str, **kwargs: Any) -> List[float]: """Get embedding. NOTE: Copied from OpenAI's embedding utils: https://github.com/openai/openai-python/blob/main/openai/embeddings_utils.py Copied here to avoid importing unnecessary dependencies like matplotlib, plotly, scipy, sklearn. """ text = text.replace("\n", " ") return ( client.embeddings.create(input=[text], model=engine, **kwargs).data[0].embedding ) @embedding_retry_decorator async def aget_embedding( aclient: AsyncOpenAI, text: str, engine: str, **kwargs: Any ) -> List[float]: """Asynchronously get embedding. NOTE: Copied from OpenAI's embedding utils: https://github.com/openai/openai-python/blob/main/openai/embeddings_utils.py Copied here to avoid importing unnecessary dependencies like matplotlib, plotly, scipy, sklearn. """ text = text.replace("\n", " ") return ( (await aclient.embeddings.create(input=[text], model=engine, **kwargs)) .data[0] .embedding ) @embedding_retry_decorator def get_embeddings( client: OpenAI, list_of_text: List[str], engine: str, **kwargs: Any ) -> List[List[float]]: """Get embeddings. NOTE: Copied from OpenAI's embedding utils: https://github.com/openai/openai-python/blob/main/openai/embeddings_utils.py Copied here to avoid importing unnecessary dependencies like matplotlib, plotly, scipy, sklearn. """ assert len(list_of_text) <= 2048, "The batch size should not be larger than 2048." list_of_text = [text.replace("\n", " ") for text in list_of_text] data = client.embeddings.create(input=list_of_text, model=engine, **kwargs).data return [d.embedding for d in data]
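A brief sketch of calling the helpers above directly with an OpenAI client; `text-embedding-3-small` is just one of the supported embedding model names.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vector = get_embedding(client, "hello world", engine="text-embedding-3-small")
vectors = get_embeddings(
    client, ["first text", "second text"], engine="text-embedding-3-small"
)
print(len(vector), len(vectors))  # embedding dimension, number of embeddings
```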
169823
[build-system] build-backend = "poetry.core.masonry.api" requires = ["poetry-core"] [tool.codespell] check-filenames = true check-hidden = true skip = "*.csv,*.html,*.json,*.jsonl,*.pdf,*.txt,*.ipynb" [tool.llamahub] contains_example = false import_path = "llama_index.embeddings.instructor" [tool.llamahub.class_authors] InstructorEmbedding = "llama-index" [tool.mypy] disallow_untyped_defs = true exclude = ["_static", "build", "examples", "notebooks", "venv"] ignore_missing_imports = true python_version = "3.8" [tool.poetry] authors = ["Your Name <you@example.com>"] description = "llama-index embeddings instructor integration" exclude = ["**/BUILD"] license = "MIT" name = "llama-index-embeddings-instructor" readme = "README.md" version = "0.2.1" [tool.poetry.dependencies] python = ">=3.8.1,<4.0" instructorembedding = "^1.0.1" torch = "^2.1.2" sentence-transformers = "^2.2.2" llama-index-core = "^0.11.0" [tool.poetry.group.dev.dependencies] ipython = "8.10.0" jupyter = "^1.0.0" mypy = "0.991" pre-commit = "3.2.0" pylint = "2.15.10" pytest = "7.2.1" pytest-mock = "3.11.1" ruff = "0.0.292" tree-sitter-languages = "^1.8.0" types-Deprecated = ">=0.1.0" types-PyYAML = "^6.0.12.12" types-protobuf = "^4.24.0.4" types-redis = "4.5.5.0" types-requests = "2.28.11.8" types-setuptools = "67.1.0.0" [tool.poetry.group.dev.dependencies.black] extras = ["jupyter"] version = "<=23.9.1,>=23.7.0" [tool.poetry.group.dev.dependencies.codespell] extras = ["toml"] version = ">=v2.2.6" [[tool.poetry.packages]] include = "llama_index/"
169857
[build-system] build-backend = "poetry.core.masonry.api" requires = ["poetry-core"] [tool.codespell] check-filenames = true check-hidden = true skip = "*.csv,*.html,*.json,*.jsonl,*.pdf,*.txt,*.ipynb" [tool.llamahub] contains_example = false import_path = "llama_index.embeddings.langchain" [tool.llamahub.class_authors] LangchainEmbedding = "llama-index" [tool.mypy] disallow_untyped_defs = true exclude = ["_static", "build", "examples", "notebooks", "venv"] ignore_missing_imports = true python_version = "3.8" [tool.poetry] authors = ["Your Name <you@example.com>"] description = "llama-index embeddings langchain integration" exclude = ["**/BUILD"] license = "MIT" name = "llama-index-embeddings-langchain" readme = "README.md" version = "0.2.1" [tool.poetry.dependencies] python = ">=3.8.1,<4.0" llama-index-core = "^0.11.0" [tool.poetry.group.dev.dependencies] ipython = "8.10.0" jupyter = "^1.0.0" mypy = "0.991" pre-commit = "3.2.0" pylint = "2.15.10" pytest = "7.2.1" pytest-mock = "3.11.1" ruff = "0.0.292" tree-sitter-languages = "^1.8.0" types-Deprecated = ">=0.1.0" types-PyYAML = "^6.0.12.12" types-protobuf = "^4.24.0.4" types-redis = "4.5.5.0" types-requests = "2.28.11.8" types-setuptools = "67.1.0.0" [tool.poetry.group.dev.dependencies.black] extras = ["jupyter"] version = "<=23.9.1,>=23.7.0" [tool.poetry.group.dev.dependencies.codespell] extras = ["toml"] version = ">=v2.2.6" [[tool.poetry.packages]] include = "llama_index/"
169860
from llama_index.embeddings.langchain.base import LangchainEmbedding __all__ = ["LangchainEmbedding"]
169862
"""Langchain Embedding Wrapper Module.""" from typing import TYPE_CHECKING, List, Optional from llama_index.core.base.embeddings.base import ( DEFAULT_EMBED_BATCH_SIZE, BaseEmbedding, ) from llama_index.core.bridge.pydantic import PrivateAttr from llama_index.core.callbacks import CallbackManager if TYPE_CHECKING: from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings class LangchainEmbedding(BaseEmbedding): """External embeddings (taken from Langchain). Args: langchain_embedding (langchain.embeddings.Embeddings): Langchain embeddings class. """ _langchain_embedding: "LCEmbeddings" = PrivateAttr() _async_not_implemented_warned: bool = PrivateAttr(default=False) def __init__( self, langchain_embeddings: "LCEmbeddings", model_name: Optional[str] = None, embed_batch_size: int = DEFAULT_EMBED_BATCH_SIZE, callback_manager: Optional[CallbackManager] = None, ): # attempt to get a useful model name if model_name is not None: model_name = model_name elif hasattr(langchain_embeddings, "model_name"): model_name = langchain_embeddings.model_name elif hasattr(langchain_embeddings, "model"): model_name = langchain_embeddings.model else: model_name = type(langchain_embeddings).__name__ super().__init__( embed_batch_size=embed_batch_size, callback_manager=callback_manager, model_name=model_name, ) self._langchain_embedding = langchain_embeddings @classmethod def class_name(cls) -> str: return "LangchainEmbedding" def _async_not_implemented_warn_once(self) -> None: if not self._async_not_implemented_warned: print("Async embedding not available, falling back to sync method.") self._async_not_implemented_warned = True def _get_query_embedding(self, query: str) -> List[float]: """Get query embedding.""" return self._langchain_embedding.embed_query(query) async def _aget_query_embedding(self, query: str) -> List[float]: try: return await self._langchain_embedding.aembed_query(query) except NotImplementedError: # Warn the user that sync is being used self._async_not_implemented_warn_once() return self._get_query_embedding(query) async def _aget_text_embedding(self, text: str) -> List[float]: try: embeds = await self._langchain_embedding.aembed_documents([text]) return embeds[0] except NotImplementedError: # Warn the user that sync is being used self._async_not_implemented_warn_once() return self._get_text_embedding(text) def _get_text_embedding(self, text: str) -> List[float]: """Get text embedding.""" return self._langchain_embedding.embed_documents([text])[0] def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]: """Get text embeddings.""" return self._langchain_embedding.embed_documents(texts)
169863
from llama_index.core.base.embeddings.base import BaseEmbedding from llama_index.embeddings.langchain import LangchainEmbedding def test_langchain_embedding_class(): names_of_base_classes = [b.__name__ for b in LangchainEmbedding.__mro__] assert BaseEmbedding.__name__ in names_of_base_classes
170048
class WandbCallbackHandler(BaseCallbackHandler): """Callback handler that logs events to wandb. NOTE: this is a beta feature. The usage within our codebase, and the interface may change. Use the `WandbCallbackHandler` to log trace events to wandb. This handler is useful for debugging and visualizing the trace events. It captures the payload of the events and logs them to wandb. The handler also tracks the start and end of events. This is particularly useful for debugging your LLM calls. The `WandbCallbackHandler` can also be used to log the indices and graphs to wandb using the `persist_index` method. This will save the indexes as artifacts in wandb. The `load_storage_context` method can be used to load the indexes from wandb artifacts. This method will return a `StorageContext` object that can be used to build the index, using `load_index_from_storage`, `load_indices_from_storage` or `load_graph_from_storage` functions. Args: event_starts_to_ignore (Optional[List[CBEventType]]): list of event types to ignore when tracking event starts. event_ends_to_ignore (Optional[List[CBEventType]]): list of event types to ignore when tracking event ends. """ def __init__( self, run_args: Optional[WandbRunArgs] = None, tokenizer: Optional[Callable[[str], List]] = None, event_starts_to_ignore: Optional[List[CBEventType]] = None, event_ends_to_ignore: Optional[List[CBEventType]] = None, ) -> None: try: import wandb from wandb.sdk.data_types import trace_tree self._wandb = wandb self._trace_tree = trace_tree except ImportError: raise ImportError( "WandbCallbackHandler requires wandb. " "Please install it with `pip install wandb`." ) from llama_index.core.indices import ( ComposableGraph, GPTEmptyIndex, GPTKeywordTableIndex, GPTRAKEKeywordTableIndex, GPTSimpleKeywordTableIndex, GPTSQLStructStoreIndex, GPTTreeIndex, GPTVectorStoreIndex, SummaryIndex, ) self._IndexType = ( ComposableGraph, GPTKeywordTableIndex, GPTSimpleKeywordTableIndex, GPTRAKEKeywordTableIndex, SummaryIndex, GPTEmptyIndex, GPTTreeIndex, GPTVectorStoreIndex, GPTSQLStructStoreIndex, ) self._run_args = run_args # Check if a W&B run is already initialized; if not, initialize one self._ensure_run(should_print_url=(self._wandb.run is None)) # type: ignore[attr-defined] self._event_pairs_by_id: Dict[str, List[CBEvent]] = defaultdict(list) self._cur_trace_id: Optional[str] = None self._trace_map: Dict[str, List[str]] = defaultdict(list) self.tokenizer = tokenizer or get_tokenizer() self._token_counter = TokenCounter(tokenizer=self.tokenizer) event_starts_to_ignore = ( event_starts_to_ignore if event_starts_to_ignore else [] ) event_ends_to_ignore = event_ends_to_ignore if event_ends_to_ignore else [] super().__init__( event_starts_to_ignore=event_starts_to_ignore, event_ends_to_ignore=event_ends_to_ignore, ) def on_event_start( self, event_type: CBEventType, payload: Optional[Dict[str, Any]] = None, event_id: str = "", parent_id: str = "", **kwargs: Any, ) -> str: """Store event start data by event type. Args: event_type (CBEventType): event type to store. payload (Optional[Dict[str, Any]]): payload to store. event_id (str): event id to store. parent_id (str): parent event id. """ event = CBEvent(event_type, payload=payload, id_=event_id) self._event_pairs_by_id[event.id_].append(event) return event.id_ def on_event_end( self, event_type: CBEventType, payload: Optional[Dict[str, Any]] = None, event_id: str = "", **kwargs: Any, ) -> None: """Store event end data by event type. Args: event_type (CBEventType): event type to store. 
payload (Optional[Dict[str, Any]]): payload to store. event_id (str): event id to store. """ event = CBEvent(event_type, payload=payload, id_=event_id) self._event_pairs_by_id[event.id_].append(event) self._trace_map = defaultdict(list) def start_trace(self, trace_id: Optional[str] = None) -> None: """Launch a trace.""" self._trace_map = defaultdict(list) self._cur_trace_id = trace_id self._start_time = datetime.now() def end_trace( self, trace_id: Optional[str] = None, trace_map: Optional[Dict[str, List[str]]] = None, ) -> None: # Ensure W&B run is initialized self._ensure_run() self._trace_map = trace_map or defaultdict(list) self._end_time = datetime.now() # Log the trace map to wandb # We can control what trace ids we want to log here. self.log_trace_tree() # TODO (ayulockin): Log the LLM token counts to wandb when weave is ready def log_trace_tree(self) -> None: """Log the trace tree to wandb.""" try: child_nodes = self._trace_map["root"] root_span = self._convert_event_pair_to_wb_span( self._event_pairs_by_id[child_nodes[0]], trace_id=self._cur_trace_id if len(child_nodes) > 1 else None, ) if len(child_nodes) == 1: child_nodes = self._trace_map[child_nodes[0]] root_span = self._build_trace_tree(child_nodes, root_span) else: root_span = self._build_trace_tree(child_nodes, root_span) if root_span: root_trace = self._trace_tree.WBTraceTree(root_span) if self._wandb.run: # type: ignore[attr-defined] self._wandb.run.log({"trace": root_trace}) # type: ignore[attr-defined] self._wandb.termlog("Logged trace tree to W&B.") # type: ignore[attr-defined] except Exception as e: print(f"Failed to log trace tree to W&B: {e}") # ignore errors to not break user code def persist_index( self, index: "IndexType", index_name: str, persist_dir: Union[str, None] = None ) -> None: """Upload an index to wandb as an artifact. You can learn more about W&B artifacts here: https://docs.wandb.ai/guides/artifacts. For the `ComposableGraph` index, the root id is stored as artifact metadata. Args: index (IndexType): index to upload. index_name (str): name of the index. This will be used as the artifact name. persist_dir (Union[str, None]): directory to persist the index. If None, a temporary directory will be created and used. """ if persist_dir is None: persist_dir = f"{self._wandb.run.dir}/storage" # type: ignore _default_persist_dir = True if not os.path.exists(persist_dir): os.makedirs(persist_dir) if isinstance(index, self._IndexType): try: index.storage_context.persist(persist_dir) # type: ignore metadata = None # For the `ComposableGraph` index, store the root id as metadata if isinstance(index, self._IndexType[0]): root_id = index.root_id metadata = {"root_id": root_id} self._upload_index_as_wb_artifact(persist_dir, index_name, metadata) except Exception as e: # Silently ignore errors to not break user code self._print_upload_index_fail_message(e) # clear the default storage dir if _default_persist_dir: shutil.rmtree(persist_dir, ignore_errors=True)
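A hedged sketch of registering the handler globally so that trace events are logged to Weights & Biases. The import path follows the usual packaging of this callback for LlamaIndex (an assumption), and the project name is a placeholder.

```python
from llama_index.callbacks.wandb import WandbCallbackHandler  # assumed import path
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

wandb_handler = WandbCallbackHandler(run_args={"project": "llamaindex-traces"})
Settings.callback_manager = CallbackManager([wandb_handler])

# Queries made through engines built after this point emit trace trees to W&B.
# An index can also be saved as a W&B artifact:
# wandb_handler.persist_index(index, index_name="my_vector_index")
```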
170343
def get_triplets( self, entity_names: Optional[List[str]] = None, relation_names: Optional[List[str]] = None, properties: Optional[dict] = None, ids: Optional[List[str]] = None, ) -> List[Triplet]: # TODO: handle ids of chunk nodes cypher_statement = "MATCH (e:`__Entity__`) " params = {} if entity_names or properties or ids: cypher_statement += "WHERE " if entity_names: cypher_statement += "e.name in $entity_names " params["entity_names"] = entity_names if ids: cypher_statement += "e.id in $ids " params["ids"] = ids if properties: prop_list = [] for i, prop in enumerate(properties): prop_list.append(f"e.`{prop}` = $property_{i}") params[f"property_{i}"] = properties[prop] cypher_statement += " AND ".join(prop_list) return_statement = f""" WITH e CALL {{ WITH e MATCH (e)-[r{':`' + '`|`'.join(relation_names) + '`' if relation_names else ''}]->(t:__Entity__) RETURN e.name AS source_id, [l in labels(e) WHERE l <> '__Entity__' | l][0] AS source_type, e{{.* , embedding: Null, name: Null}} AS source_properties, type(r) AS type, t.name AS target_id, [l in labels(t) WHERE l <> '__Entity__' | l][0] AS target_type, t{{.* , embedding: Null, name: Null}} AS target_properties UNION ALL WITH e MATCH (e)<-[r{':`' + '`|`'.join(relation_names) + '`' if relation_names else ''}]-(t:__Entity__) RETURN t.name AS source_id, [l in labels(t) WHERE l <> '__Entity__' | l][0] AS source_type, e{{.* , embedding: Null, name: Null}} AS source_properties, type(r) AS type, e.name AS target_id, [l in labels(e) WHERE l <> '__Entity__' | l][0] AS target_type, t{{.* , embedding: Null, name: Null}} AS target_properties }} RETURN source_id, source_type, type, target_id, target_type, source_properties, target_properties""" cypher_statement += return_statement data = self.structured_query(cypher_statement, param_map=params) data = data if data else [] triples = [] for record in data: source = EntityNode( name=record["source_id"], label=record["source_type"], properties=remove_empty_values(record["source_properties"]), ) target = EntityNode( name=record["target_id"], label=record["target_type"], properties=remove_empty_values(record["target_properties"]), ) rel = Relation( source_id=record["source_id"], target_id=record["target_id"], label=record["type"], ) triples.append([source, rel, target]) return triples def get_rel_map( self, graph_nodes: List[LabelledNode], depth: int = 2, limit: int = 30, ignore_rels: Optional[List[str]] = None, ) -> List[Triplet]: """Get depth-aware rel map.""" triples = [] ids = [node.id for node in graph_nodes] # Needs some optimization response = self.structured_query( f""" WITH $ids AS id_list UNWIND range(0, size(id_list) - 1) AS idx MATCH (e:`__Entity__`) WHERE e.id = id_list[idx] MATCH p=(e)-[r*1..{depth}]-(other) WHERE ALL(rel in relationships(p) WHERE type(rel) <> 'MENTIONS') UNWIND relationships(p) AS rel WITH distinct rel, idx WITH startNode(rel) AS source, type(rel) AS type, endNode(rel) AS endNode, idx LIMIT $limit RETURN source.id AS source_id, [l in labels(source) WHERE l <> '__Entity__' | l][0] AS source_type, source{{.* , embedding: Null, id: Null}} AS source_properties, type, endNode.id AS target_id, [l in labels(endNode) WHERE l <> '__Entity__' | l][0] AS target_type, endNode{{.* , embedding: Null, id: Null}} AS target_properties, idx ORDER BY idx LIMIT $limit """, param_map={"ids": ids, "limit": limit}, ) response = response if response else [] ignore_rels = ignore_rels or [] for record in response: if record["type"] in ignore_rels: continue source = EntityNode( 
name=record["source_id"], label=record["source_type"], properties=remove_empty_values(record["source_properties"]), ) target = EntityNode( name=record["target_id"], label=record["target_type"], properties=remove_empty_values(record["target_properties"]), ) rel = Relation( source_id=record["source_id"], target_id=record["target_id"], label=record["type"], ) triples.append([source, rel, target]) return triples def structured_query( self, query: str, param_map: Optional[Dict[str, Any]] = None ) -> Any: param_map = param_map or {} result = self._driver.query(query, param_map) full_result = [ {h[1]: d[i] for i, h in enumerate(result.header)} for d in result.result_set ] if self.sanitize_query_output: return [value_sanitize(el) for el in full_result] return full_result def vector_query( self, query: VectorStoreQuery, **kwargs: Any ) -> Tuple[List[LabelledNode], List[float]]: """Query the graph store with a vector store query.""" conditions = None if query.filters: conditions = [ f"e.{filter.key} {filter.operator.value} {filter.value}" for filter in query.filters.filters ] filters = ( f" {query.filters.condition.value} ".join(conditions).replace("==", "=") if conditions is not None else "1 = 1" ) data = self.structured_query( f"""MATCH (e:`__Entity__`) WHERE e.embedding IS NOT NULL AND ({filters}) WITH e, vec.euclideanDistance(e.embedding, vecf32($embedding)) AS score ORDER BY score LIMIT $limit RETURN e.id AS name, [l in labels(e) WHERE l <> '__Entity__' | l][0] AS type, e{{.* , embedding: Null, name: Null, id: Null}} AS properties, score""", param_map={ "embedding": query.query_embedding, "dimension": len(query.query_embedding), "limit": query.similarity_top_k, }, ) data = data if data else [] nodes = [] scores = [] for record in data: node = EntityNode( name=record["name"], label=record["type"], properties=remove_empty_values(record["properties"]), ) nodes.append(node) scores.append(record["score"]) return (nodes, scores)
170845
# LlamaIndex Output_Parsers Integration: Langchain
170933
# GCS File or Directory Loader

This loader parses any file stored on Google Cloud Storage (GCS), or the entire Bucket (with an optional prefix filter) if no particular file is specified. It now supports more advanced operations through the implementation of ResourcesReaderMixin and FileSystemReaderMixin.

## Features

- Parse single files or entire buckets from GCS
- List resources in GCS buckets
- Retrieve detailed information about GCS objects
- Load specific resources from GCS
- Read file content directly
- Supports various authentication methods
- Comprehensive logging for easier debugging
- Robust error handling for improved reliability

## Authentication

When initializing `GCSReader`, you may pass in your [GCP Service Account Key](https://cloud.google.com/iam/docs/keys-create-delete) in several ways:

1. As a file path (`service_account_key_path`)
2. As a JSON string (`service_account_key_json`)
3. As a dictionary (`service_account_key`)

If no credentials are provided, the loader will attempt to use default credentials.

## Usage

To use this loader, you need to pass in the name of your GCS Bucket. You can then either parse a single file by passing its key, or parse multiple files using a prefix.

```python
from llama_index import GCSReader
import logging

# Set up logging (optional, but recommended)
logging.basicConfig(level=logging.INFO)

# Initialize the reader
reader = GCSReader(
    bucket="scrabble-dictionary",
    key="dictionary.txt",  # Optional: specify a single file
    # prefix="subdirectory/",  # Optional: specify a prefix to filter files
    service_account_key_json="[SERVICE_ACCOUNT_KEY_JSON]",
)

# Load data
documents = reader.load_data()

# List resources in the bucket
resources = reader.list_resources()

# Get information about a specific resource
resource_info = reader.get_resource_info("dictionary.txt")

# Load a specific resource
specific_doc = reader.load_resource("dictionary.txt")

# Read file content directly
file_content = reader.read_file_content("dictionary.txt")

print(f"Loaded {len(documents)} documents")
print(f"Found {len(resources)} resources")
print(f"Resource info: {resource_info}")
print(f"Specific document: {specific_doc}")
print(f"File content length: {len(file_content)} bytes")
```

Note: If the file is nested in a subdirectory, the key should contain that, e.g., `subdirectory/input.txt`.

## Advanced Usage

All files are parsed with `SimpleDirectoryReader`. You may specify a custom `file_extractor`, relying on any of the loaders in the LlamaIndex library (or your own)!

```python
from llama_index import GCSReader, SimpleMongoReader

reader = GCSReader(
    bucket="my-bucket",
    file_extractor={
        ".mongo": SimpleMongoReader(),
        # Add more custom extractors as needed
    },
)
```

## Error Handling

The GCSReader now includes comprehensive error handling. You can catch exceptions to handle specific error cases:

```python
from google.auth.exceptions import DefaultCredentialsError

try:
    reader = GCSReader(bucket="your-bucket-name")
    documents = reader.load_data()
except DefaultCredentialsError:
    print("Authentication failed. Please check your credentials.")
except Exception as e:
    print(f"An error occurred: {str(e)}")
```

## Logging

To get insights into the GCSReader's operations, configure logging in your application:

```python
import logging

logging.basicConfig(level=logging.INFO)
```

This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/).
For more advanced usage, including custom file extractors, metadata extraction, and working with specific file types, please refer to the [LlamaIndex documentation](https://docs.llamaindex.ai/).
170938
class GCSReader(BasePydanticReader, ResourcesReaderMixin, FileSystemReaderMixin): """ A reader for Google Cloud Storage (GCS) files and directories. This class allows reading files from GCS, listing resources, and retrieving resource information. It supports authentication via service account keys and implements various reader mixins. Attributes: bucket (str): The name of the GCS bucket. key (Optional[str]): The specific file key to read. If None, the entire bucket is parsed. prefix (Optional[str]): The prefix to filter by when iterating through the bucket. recursive (bool): Whether to recursively search in subdirectories. file_extractor (Optional[Dict[str, Union[str, BaseReader]]]): Custom file extractors. required_exts (Optional[List[str]]): List of required file extensions. filename_as_id (bool): Whether to use the filename as the document ID. num_files_limit (Optional[int]): Maximum number of files to read. file_metadata (Optional[Callable[[str], Dict]]): Function to extract metadata from filenames. service_account_key (Optional[Dict[str, str]]): Service account key as a dictionary. service_account_key_json (Optional[str]): Service account key as a JSON string. service_account_key_path (Optional[str]): Path to the service account key file. """ is_remote: bool = True bucket: str key: Optional[str] = None prefix: Optional[str] = "" recursive: bool = True file_extractor: Optional[Dict[str, Union[str, BaseReader]]] = Field( default=None, exclude=True ) required_exts: Optional[List[str]] = None filename_as_id: bool = True num_files_limit: Optional[int] = None file_metadata: Optional[FileMetadataCallable] = Field(default=None, exclude=True) service_account_key: Optional[Dict[str, str]] = None service_account_key_json: Optional[str] = None service_account_key_path: Optional[str] = None @classmethod def class_name(cls) -> str: """Return the name of the class.""" return "GCSReader" def _get_gcsfs(self): """ Create and return a GCSFileSystem object. This method handles authentication using the provided service account credentials. Returns: GCSFileSystem: An authenticated GCSFileSystem object. Raises: ValueError: If no valid authentication method is provided. DefaultCredentialsError: If there's an issue with the provided credentials. """ from gcsfs import GCSFileSystem try: if self.service_account_key is not None: creds = service_account.Credentials.from_service_account_info( self.service_account_key, scopes=SCOPES ) elif self.service_account_key_json is not None: creds = service_account.Credentials.from_service_account_info( json.loads(self.service_account_key_json), scopes=SCOPES ) elif self.service_account_key_path is not None: creds = service_account.Credentials.from_service_account_file( self.service_account_key_path, scopes=SCOPES ) else: logger.warning( "No explicit credentials provided. Falling back to default credentials." ) creds = None # This will use default credentials return GCSFileSystem(token=creds) except DefaultCredentialsError as e: logger.error(f"Failed to authenticate with GCS: {e!s}") raise def _get_simple_directory_reader(self) -> SimpleDirectoryReader: """ Create and return a SimpleDirectoryReader for GCS. This method sets up a SimpleDirectoryReader with the appropriate GCS filesystem and other configured parameters. Returns: SimpleDirectoryReader: A configured SimpleDirectoryReader for GCS. 
""" gcsfs = self._get_gcsfs() input_dir = self.bucket input_files = None if self.key: input_files = [f"{self.bucket}/{self.key}"] elif self.prefix: input_dir = f"{input_dir}/{self.prefix}" return SimpleDirectoryReader( input_dir=input_dir, input_files=input_files, recursive=self.recursive, file_extractor=self.file_extractor, required_exts=self.required_exts, filename_as_id=self.filename_as_id, num_files_limit=self.num_files_limit, file_metadata=self.file_metadata, fs=gcsfs, ) def load_data(self) -> List[Document]: """ Load data from the specified GCS bucket or file. Returns: List[Document]: A list of loaded documents. Raises: Exception: If there's an error loading the data. """ try: logger.info(f"Loading data from GCS bucket: {self.bucket}") return self._get_simple_directory_reader().load_data() except Exception as e: logger.error(f"Error loading data from GCS: {e!s}") raise def list_resources(self, **kwargs) -> List[str]: """ List resources in the specified GCS bucket or directory. Args: **kwargs: Additional arguments to pass to the underlying list_resources method. Returns: List[str]: A list of resource identifiers. Raises: Exception: If there's an error listing the resources. """ try: logger.info(f"Listing resources in GCS bucket: {self.bucket}") return self._get_simple_directory_reader().list_resources(**kwargs) except Exception as e: logger.error(f"Error listing resources in GCS: {e!s}") raise def get_resource_info(self, resource_id: str, **kwargs) -> Dict: """ Get information about a specific GCS resource. Args: resource_id (str): The identifier of the resource. **kwargs: Additional arguments to pass to the underlying info method. Returns: Dict: A dictionary containing resource information. Raises: Exception: If there's an error retrieving the resource information. """ try: logger.info(f"Getting info for resource: {resource_id}") gcsfs = self._get_gcsfs() info_result = gcsfs.info(resource_id) info_dict = { "file_path": info_result.get("name"), "file_size": info_result.get("size"), "last_modified_date": info_result.get("updated"), "content_hash": info_result.get("md5Hash"), "content_type": info_result.get("contentType"), "storage_class": info_result.get("storageClass"), "etag": info_result.get("etag"), "generation": info_result.get("generation"), "created_date": info_result.get("timeCreated"), } # Convert datetime objects to ISO format strings for key in ["last_modified_date", "created_date"]: if isinstance(info_dict.get(key), datetime): info_dict[key] = info_dict[key].isoformat() return {k: v for k, v in info_dict.items() if v is not None} except Exception as e: logger.error(f"Error getting resource info from GCS: {e!s}") raise def load_resource(self, resource_id: str, **kwargs) -> List[Document]: """ Load a specific resource from GCS. Args: resource_id (str): The identifier of the resource to load. **kwargs: Additional arguments to pass to the underlying load_resource method. Returns: List[Document]: A list containing the loaded document. Raises: Exception: If there's an error loading the resource. """ try: logger.info(f"Loading resource: {resource_id}") return self._get_simple_directory_reader().load_resource( resource_id, **kwargs ) except Exception as e: logger.error(f"Error loading resource from GCS: {e!s}") raise def read_file_content(self, input_file: Path, **kwargs) -> bytes: """ Read the content of a specific file from GCS. Args: input_file (Path): The path to the file to read. **kwargs: Additional arguments to pass to the underlying read_file_content method. 
Returns: bytes: The content of the file. Raises: Exception: If there's an error reading the file content. """ try: logger.info(f"Reading file content: {input_file}") return self._get_simple_directory_reader().read_file_content( input_file, **kwargs ) except Exception as e: logger.error(f"Error reading file content from GCS: {e!s}") raise
171195
""" Azure Storage Blob file and directory reader. A loader that fetches a file or iterates through a directory from Azure Storage Blob. """ import logging import math import os from pathlib import Path import tempfile import time from typing import Any, Dict, List, Optional, Union from azure.storage.blob import ContainerClient from llama_index.core.bridge.pydantic import Field from llama_index.core.readers import SimpleDirectoryReader from llama_index.core.readers.base import BaseReader, BasePydanticReader from llama_index.core.schema import Document from llama_index.core.readers import SimpleDirectoryReader, FileSystemReaderMixin from llama_index.core.readers.base import ( BaseReader, BasePydanticReader, ResourcesReaderMixin, ) logger = logging.getLogger(__name__) class AzStorageBlobReader( BasePydanticReader, ResourcesReaderMixin, FileSystemReaderMixin ): """ General reader for any Azure Storage Blob file or directory. Args: container_name (str): name of the container for the blob. blob (Optional[str]): name of the file to download. If none specified this loader will iterate through list of blobs in the container. name_starts_with (Optional[str]): filter the list of blobs to download to only those whose names begin with the specified string. include: (Union[str, List[str], None]): Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'. file_extractor (Optional[Dict[str, Union[str, BaseReader]]]): A mapping of file extension to a BaseReader class that specifies how to convert that file to text. See `SimpleDirectoryReader` for more details, or call this path ```llama_index.readers.file.base.DEFAULT_FILE_READER_CLS```. connection_string (str): A connection string which can be found under a storage account's "Access keys" security tab. This parameter can be used in place of both the account URL and credential. account_url (str): URI to the storage account, may include SAS token. credential (Union[str, Dict[str, str], AzureNamedKeyCredential, AzureSasCredential, TokenCredential, None] = None): The credentials with which to authenticate. This is optional if the account URL already has a SAS token. """ container_name: str prefix: Optional[str] = "" blob: Optional[str] = None name_starts_with: Optional[str] = None include: Optional[Any] = None file_extractor: Optional[Dict[str, Union[str, BaseReader]]] = Field( default=None, exclude=True ) connection_string: Optional[str] = None account_url: Optional[str] = None credential: Optional[Any] = None is_remote: bool = True # Not in use. As part of the TODO below. Is part of the kwargs. 
# self.preloaded_data_path = kwargs.get('preloaded_data_path', None) @classmethod def class_name(cls) -> str: return "AzStorageBlobReader" def _get_container_client(self): if self.connection_string: return ContainerClient.from_connection_string( conn_str=self.connection_string, container_name=self.container_name, ) return ContainerClient( self.account_url, self.container_name, credential=self.credential ) def _download_files_and_extract_metadata(self, temp_dir: str) -> Dict[str, Any]: """Download files from Azure Storage Blob and extract metadata.""" container_client = self._get_container_client() blob_meta = {} if self.blob: blobs_list = [self.blob] else: blobs_list = container_client.list_blobs( self.name_starts_with, self.include ) for obj in blobs_list: sanitized_file_name = obj.name.replace("/", "-") if not self.blob else obj download_file_path = os.path.join(temp_dir, sanitized_file_name) logger.info(f"Start download of {sanitized_file_name}") start_time = time.time() blob_client = container_client.get_blob_client(obj) stream = blob_client.download_blob() with open(file=download_file_path, mode="wb") as download_file: stream.readinto(download_file) blob_meta[sanitized_file_name] = blob_client.get_blob_properties() end_time = time.time() logger.debug( f"{sanitized_file_name} downloaded in {end_time - start_time} seconds." ) return blob_meta def _extract_blob_metadata(self, file_metadata: Dict[str, Any]) -> Dict[str, Any]: meta: dict = file_metadata creation_time = meta.get("creation_time") creation_time = creation_time.strftime("%Y-%m-%d") if creation_time else None last_modified = meta.get("last_modified") last_modified = last_modified.strftime("%Y-%m-%d") if last_modified else None last_accessed_on = meta.get("last_accessed_on") last_accessed_on = ( last_accessed_on.strftime("%Y-%m-%d") if last_accessed_on else None ) extracted_meta = { "file_name": meta.get("name"), "file_type": meta.get("content_settings", {}).get("content_type"), "file_size": meta.get("size"), "creation_date": creation_time, "last_modified_date": last_modified, "last_accessed_date": last_accessed_on, "container": meta.get("container"), } extracted_meta.update(meta.get("metadata") or {}) extracted_meta.update(meta.get("tags") or {}) return extracted_meta def _load_documents_with_metadata( self, files_metadata: Dict[str, Any], temp_dir: str ) -> List[Document]: """Load documents from a directory and extract metadata.""" def get_metadata(file_name: str) -> Dict[str, Any]: return files_metadata.get(file_name, {}) loader = SimpleDirectoryReader( temp_dir, file_extractor=self.file_extractor, file_metadata=get_metadata ) return loader.load_data() def list_resources(self, *args: Any, **kwargs: Any) -> List[str]: """List all the blobs in the container.""" blobs_list = self._get_container_client().list_blobs( name_starts_with=self.name_starts_with, include=self.include ) return [blob.name for blob in blobs_list] def get_resource_info(self, resource_id: str, **kwargs: Any) -> Dict: """Get metadata for a specific blob.""" container_client = self._get_container_client() blob_client = container_client.get_blob_client(resource_id) blob_meta = blob_client.get_blob_properties() info_dict = { **self._extract_blob_metadata(blob_meta), "file_path": str(resource_id).replace(":", "/"), } return { meta_key: meta_value for meta_key, meta_value in info_dict.items() if meta_value is not None } def load_resource(self, resource_id: str, **kwargs: Any) -> List[Document]: try: container_client = self._get_container_client() blob_client = 
container_client.get_blob_client(resource_id) stream = blob_client.download_blob() with tempfile.TemporaryDirectory() as temp_dir: download_file_path = os.path.join( temp_dir, resource_id.replace("/", "-") ) with open(file=download_file_path, mode="wb") as download_file: stream.readinto(download_file) return self._load_documents_with_metadata( {resource_id: blob_client.get_blob_properties()}, temp_dir ) except Exception as e: logger.error( f"Error loading resource {resource_id} from AzStorageBlob: {e}" ) raise def read_file_content(self, input_file: Path, **kwargs) -> bytes: """Read the content of a file from Azure Storage Blob.""" container_client = self._get_container_client() blob_client = container_client.get_blob_client(input_file) stream = blob_client.download_blob() return stream.readall() def load_data(self) -> List[Document]: """Load file(s) from Azure Storage Blob.""" total_download_start_time = time.time() with tempfile.TemporaryDirectory() as temp_dir: files_metadata = self._download_files_and_extract_metadata(temp_dir) total_download_end_time = time.time() total_elapsed_time = math.ceil( total_download_end_time - total_download_start_time ) logger.info( f"Downloading completed in approximately {total_elapsed_time // 60}min" f" {total_elapsed_time % 60}s." ) logger.info("Document creation starting") return self._load_documents_with_metadata(files_metadata, temp_dir)
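A short sketch of using the reader above end to end. The import path follows the usual packaging of the Azure Storage Blob reader for LlamaIndex (an assumption); the container name and connection string are placeholders.

```python
from llama_index.readers.azstorage_blob import AzStorageBlobReader  # assumed import path

reader = AzStorageBlobReader(
    container_name="my-container",
    connection_string="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;",
    # blob="reports/2024-01.pdf",      # optional: load a single blob
    # name_starts_with="reports/",     # optional: only blobs with this prefix
)

documents = reader.load_data()
print(f"Loaded {len(documents)} documents")

# The resource-oriented methods defined above are also available:
blob_names = reader.list_resources()
if blob_names:
    print(reader.get_resource_info(blob_names[0]))
```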
171340
# LlamaIndex Readers Integration: Milvus

## Overview

Milvus Reader is designed to load data from a Milvus vector store, which provides search functionality based on query vectors. It retrieves documents from the specified Milvus collection using the provided connection parameters.

### Installation

You can install Milvus Reader via pip:

```bash
pip install llama-index-readers-milvus
```

### Usage

```python
from llama_index.readers.milvus import MilvusReader

# Initialize MilvusReader
reader = MilvusReader(
    host="<Milvus Host>",  # Milvus host address (default: "localhost")
    port=19530,  # Milvus port (default: 19530)
    user="",  # Milvus user (default: "")
    password="",  # Milvus password (default: "")
    use_secure=False,  # Use secure connection (default: False)
)

# Load data from Milvus
documents = reader.load_data(
    query_vector=[0.1, 0.2, 0.3],  # Query vector
    collection_name="<Collection Name>",  # Name of the Milvus collection
    limit=10,  # Number of results to return
    search_params=None,  # Search parameters (optional)
)
```

A worked example of the Milvus reader can be found [here](https://docs.llamaindex.ai/en/stable/examples/data_connectors/MilvusReaderDemo).

This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/tree/main/llama_index) and/or subsequently used as a Tool in a [LangChain](https://github.com/hwchase17/langchain) Agent.
171394
# Confluence Loader

```bash
pip install llama-index-readers-confluence
```

This loader loads pages from a given Confluence cloud instance. The user needs to specify the base URL for a Confluence instance to initialize the ConfluenceReader - the base URL needs to end with `/wiki`. The user can optionally specify OAuth 2.0 credentials to authenticate with the Confluence instance. If no credentials are specified, the loader will look for the `CONFLUENCE_API_TOKEN` or `CONFLUENCE_USERNAME`/`CONFLUENCE_PASSWORD` environment variables to proceed with basic authentication.

> [!NOTE]
> Keep in mind `CONFLUENCE_PASSWORD` is not your actual password, but an API Token obtained here: https://id.atlassian.com/manage-profile/security/api-tokens.

The following order is used for checking authentication credentials:

1. `oauth2`
2. `api_token`
3. `user_name` and `password`
4. Environment variable `CONFLUENCE_API_TOKEN`
5. Environment variables `CONFLUENCE_USERNAME` and `CONFLUENCE_PASSWORD`

For more on authenticating using OAuth 2.0, check out:

- https://atlassian-python-api.readthedocs.io/index.html
- https://developer.atlassian.com/cloud/confluence/oauth-2-3lo-apps/

Confluence pages are obtained through one of four mutually exclusive ways:

1. `page_ids`: Load all pages from a list of page ids
2. `space_key`: Load all pages from a space
3. `label`: Load all pages with a given label
4. `cql`: Load all pages that match a given CQL query (Confluence Query Language: https://developer.atlassian.com/cloud/confluence/advanced-searching-using-cql/)

When `page_ids` is specified, `include_children` will cause the loader to also load all descendant pages. When `space_key` is specified, `page_status` further specifies the status of pages to load: None, 'current', 'archived', or 'draft'.

The following parameters control paging:

- `limit` (int): Deprecated, use `max_num_results` instead.
- `max_num_results` (int): Maximum number of results to return. If None, return all results. Requests are made in batches to achieve the desired number of results.
- `start` (int): Which offset we should jump to when getting pages; only works with `space_key`.
- `cursor` (str): An alternative to `start` for CQL queries; the cursor is a pointer to the next "page" when searching Atlassian products. The current one after a search can be found with `get_next_cursor()`.

The user can also specify a boolean `include_attachments` to include attachments; this is set to `False` by default. If set to `True`, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.

Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>

## Usage

Here's an example usage of the ConfluenceReader.
```python
# Example that reads the pages with the `page_ids`
from llama_index.readers.confluence import ConfluenceReader

token = {"access_token": "<access_token>", "token_type": "<token_type>"}
oauth2_dict = {"client_id": "<client_id>", "token": token}

base_url = "https://yoursite.atlassian.com/wiki"

page_ids = ["<page_id_1>", "<page_id_2>", "<page_id_3>"]
space_key = "<space_key>"

reader = ConfluenceReader(base_url=base_url, oauth2=oauth2_dict)
documents = reader.load_data(
    space_key=space_key, include_attachments=True, page_status="current"
)
documents.extend(
    reader.load_data(
        page_ids=page_ids, include_children=True, include_attachments=True
    )
)
```

```python
# Example that fetches the first 5, then the next 5 pages from a space
from llama_index.readers.confluence import ConfluenceReader

token = {"access_token": "<access_token>", "token_type": "<token_type>"}
oauth2_dict = {"client_id": "<client_id>", "token": token}

base_url = "https://yoursite.atlassian.com/wiki"

space_key = "<space_key>"

reader = ConfluenceReader(base_url=base_url, oauth2=oauth2_dict)
documents = reader.load_data(
    space_key=space_key,
    include_attachments=True,
    page_status="current",
    start=0,
    max_num_results=5,
)
documents.extend(
    reader.load_data(
        space_key=space_key,
        include_attachments=True,
        start=5,
        max_num_results=5,
    )
)
```

```python
# Example that fetches the first 5 results from a cql query, then uses the cursor to pick up the next 5
from llama_index.readers.confluence import ConfluenceReader

token = {"access_token": "<access_token>", "token_type": "<token_type>"}
oauth2_dict = {"client_id": "<client_id>", "token": token}

base_url = "https://yoursite.atlassian.com/wiki"

cql = 'type="page" AND label="devops"'

reader = ConfluenceReader(base_url=base_url, oauth2=oauth2_dict)
documents = reader.load_data(cql=cql, max_num_results=5)
cursor = reader.get_next_cursor()
documents.extend(reader.load_data(cql=cql, cursor=cursor, max_num_results=5))
```

This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/).
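In addition to the OAuth 2.0 examples above, the reader also supports token-based authentication (see the credential order above). A minimal sketch, where the token value, space key, and result count are placeholders:

```python
import os

from llama_index.readers.confluence import ConfluenceReader

# Either pass api_token=... to the constructor, or rely on the environment
# variable fallback described above.
os.environ["CONFLUENCE_API_TOKEN"] = "<api_token>"

reader = ConfluenceReader(base_url="https://yoursite.atlassian.com/wiki")
documents = reader.load_data(space_key="<space_key>", max_num_results=10)
```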
171399
class ConfluenceReader(BaseReader): """Confluence reader. Reads a set of confluence pages given a space key and optionally a list of page ids For more on OAuth login, checkout: - https://atlassian-python-api.readthedocs.io/index.html - https://developer.atlassian.com/cloud/confluence/oauth-2-3lo-apps/ Args: oauth2 (dict): Atlassian OAuth 2.0, minimum fields are `client_id` and `token`, where `token` is a dict and must at least contain "access_token" and "token_type". base_url (str): 'base_url' for confluence cloud instance, this is suffixed with '/wiki', eg 'https://yoursite.atlassian.com/wiki' cloud (bool): connecting to Confluence Cloud or self-hosted instance api_token (str): Confluence API token, see https://confluence.atlassian.com/cloud/api-tokens-938839638.html user_name (str): Confluence username, used for basic auth. Must be used with `password`. password (str): Confluence password, used for basic auth. Must be used with `user_name`. """ def __init__( self, base_url: str = None, oauth2: Optional[Dict] = None, cloud: bool = True, api_token: Optional[str] = None, user_name: Optional[str] = None, password: Optional[str] = None, ) -> None: if base_url is None: raise ValueError("Must provide `base_url`") self.base_url = base_url try: from atlassian import Confluence except ImportError: raise ImportError( "`atlassian` package not found, please run `pip install" " atlassian-python-api`" ) self.confluence: Confluence = None if oauth2: self.confluence = Confluence(url=base_url, oauth2=oauth2, cloud=cloud) else: if api_token is not None: self.confluence = Confluence(url=base_url, token=api_token, cloud=cloud) elif user_name is not None and password is not None: self.confluence = Confluence( url=base_url, username=user_name, password=password, cloud=cloud ) else: api_token = os.getenv(CONFLUENCE_API_TOKEN) if api_token is not None: self.confluence = Confluence( url=base_url, token=api_token, cloud=cloud ) else: user_name = os.getenv(CONFLUENCE_USERNAME) password = os.getenv(CONFLUENCE_PASSWORD) if user_name is not None and password is not None: self.confluence = Confluence( url=base_url, username=user_name, password=password, cloud=cloud, ) else: raise ValueError( "Must set one of environment variables `CONFLUENCE_API_KEY`, or" " `CONFLUENCE_USERNAME` and `CONFLUENCE_PASSWORD`, if oauth2, or" " api_token, or user_name and password parameters are not provided" ) self._next_cursor = None def load_data( self, space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, page_status: Optional[str] = None, label: Optional[str] = None, cql: Optional[str] = None, include_attachments=False, include_children=False, start: Optional[int] = None, cursor: Optional[str] = None, limit: Optional[int] = None, max_num_results: Optional[int] = None, ) -> List[Document]: """Load Confluence pages from Confluence, specifying by one of four mutually exclusive methods: `space_key`, `page_ids`, `label`, or `cql` (Confluence Query Language https://developer.atlassian.com/cloud/confluence/advanced-searching-using-cql/ ). Args: space_key (str): Confluence space key, eg 'DS' page_ids (list): List of page ids, eg ['123456', '123457'] page_status (str): Page status, one of None (all statuses), 'current', 'draft', 'archived'. Only compatible with space_key. label (str): Confluence label, eg 'my-label' cql (str): Confluence Query Language query, eg 'label="my-label"' include_attachments (bool): If True, include attachments. include_children (bool): If True, do a DFS of the descendants of each page_id in `page_ids`. 
Only compatible with `page_ids`. start (int): Skips over the first n elements. Used only with space_key cursor (str): Skips to the cursor. Used with cql and label, set when the max limit has been hit for cql based search limit (int): Deprecated, use `max_num_results` instead. max_num_results (int): Maximum number of results to return. If None, return all results. Requests are made in batches to achieve the desired number of results. """ num_space_key_parameter = 1 if space_key else 0 num_page_ids_parameter = 1 if page_ids is not None else 0 num_label_parameter = 1 if label else 0 num_cql_parameter = 1 if cql else 0 if ( num_space_key_parameter + num_page_ids_parameter + num_label_parameter + num_cql_parameter != 1 ): raise ValueError( "Must specify exactly one among `space_key`, `page_ids`, `label`, `cql`" " parameters." ) if cursor and start: raise ValueError("Must not specify `start` when `cursor` is specified") if space_key and cursor: raise ValueError("Must not specify `cursor` when `space_key` is specified") if page_status and not space_key: raise ValueError( "Must specify `space_key` when `page_status` is specified." ) if include_children and not page_ids: raise ValueError( "Must specify `page_ids` when `include_children` is specified." ) if limit is not None: max_num_results = limit logger.warning( "`limit` is deprecated and no longer relates to the Confluence server's" " API limits. If you wish to limit the number of returned results" " please use `max_num_results` instead." ) try: import html2text # type: ignore except ImportError: raise ImportError( "`html2text` package not found, please run `pip install html2text`" ) text_maker = html2text.HTML2Text() text_maker.ignore_links = True text_maker.ignore_images = True if not start: start = 0 pages: List = [] if space_key: pages.extend( self._get_data_with_paging( self.confluence.get_all_pages_from_space, start=start, max_num_results=max_num_results, space=space_key, status=page_status, expand="body.export_view.value", content_type="page", ) ) elif label: pages.extend( self._get_cql_data_with_paging( start=start, cursor=cursor, cql=f'type="page" AND label="{label}"', max_num_results=max_num_results, expand="body.export_view.value", ) ) elif cql: pages.extend( self._get_cql_data_with_paging( start=start, cursor=cursor, cql=cql, max_num_results=max_num_results, expand="body.export_view.value", ) ) elif page_ids: if include_children: dfs_page_ids = [] max_num_remaining = max_num_results for page_id in page_ids: current_dfs_page_ids = self._dfs_page_ids( page_id, max_num_remaining ) dfs_page_ids.extend(current_dfs_page_ids) if max_num_results is not None: max_num_remaining -= len(current_dfs_page_ids) if max_num_remaining <= 0: break page_ids = dfs_page_ids for page_id in ( page_ids[:max_num_results] if max_num_results is not None else page_ids ): pages.append( self._get_data_with_retry( self.confluence.get_page_by_id, page_id=page_id, expand="body.export_view.value", ) ) docs = [] for page in pages: doc = self.process_page(page, include_attachments, text_maker) docs.append(doc) return docs
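A short sketch exercising the `label` branch of `load_data()` above; the base URL, token, and label are placeholders.

```python
reader = ConfluenceReader(
    base_url="https://yoursite.atlassian.com/wiki",
    api_token="<api_token>",
)

# Internally this issues the CQL query: type="page" AND label="devops"
documents = reader.load_data(label="devops", max_num_results=20)
```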
171605
"""Qdrant reader.""" from typing import Dict, List, Optional, cast from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document class QdrantReader(BaseReader): """Qdrant reader. Retrieve documents from existing Qdrant collections. Args: location: If `:memory:` - use in-memory Qdrant instance. If `str` - use it as a `url` parameter. If `None` - use default values for `host` and `port`. url: either host or str of "Optional[scheme], host, Optional[port], Optional[prefix]". Default: `None` port: Port of the REST API interface. Default: 6333 grpc_port: Port of the gRPC interface. Default: 6334 prefer_grpc: If `true` - use gPRC interface whenever possible in custom methods. https: If `true` - use HTTPS(SSL) protocol. Default: `false` api_key: API key for authentication in Qdrant Cloud. Default: `None` prefix: If not `None` - add `prefix` to the REST URL path. Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API. Default: `None` timeout: Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC host: Host name of Qdrant service. If url and host are None, set to 'localhost'. Default: `None` """ def __init__( self, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, ): """Initialize with parameters.""" import_err_msg = ( "`qdrant-client` package not found, please run `pip install qdrant-client`" ) try: import qdrant_client except ImportError: raise ImportError(import_err_msg) self._client = qdrant_client.QdrantClient( location=location, url=url, port=port, grpc_port=grpc_port, prefer_grpc=prefer_grpc, https=https, api_key=api_key, prefix=prefix, timeout=timeout, host=host, path=path, ) def load_data( self, collection_name: str, query_vector: List[float], should_search_mapping: Optional[Dict[str, str]] = None, must_search_mapping: Optional[Dict[str, str]] = None, must_not_search_mapping: Optional[Dict[str, str]] = None, rang_search_mapping: Optional[Dict[str, Dict[str, float]]] = None, limit: int = 10, ) -> List[Document]: """Load data from Qdrant. Args: collection_name (str): Name of the Qdrant collection. query_vector (List[float]): Query vector. should_search_mapping (Optional[Dict[str, str]]): Mapping from field name to query string. must_search_mapping (Optional[Dict[str, str]]): Mapping from field name to query string. must_not_search_mapping (Optional[Dict[str, str]]): Mapping from field name to query string. rang_search_mapping (Optional[Dict[str, Dict[str, float]]]): Mapping from field name to range query. limit (int): Number of results to return. Example: reader = QdrantReader() reader.load_data( collection_name="test_collection", query_vector=[0.1, 0.2, 0.3], should_search_mapping={"text_field": "text"}, must_search_mapping={"text_field": "text"}, must_not_search_mapping={"text_field": "text"}, # gte, lte, gt, lt supported rang_search_mapping={"text_field": {"gte": 0.1, "lte": 0.2}}, limit=10 ) Returns: List[Document]: A list of documents. 
""" from qdrant_client.http.models import ( FieldCondition, Filter, MatchText, MatchValue, Range, ) from qdrant_client.http.models.models import Payload should_search_mapping = should_search_mapping or {} must_search_mapping = must_search_mapping or {} must_not_search_mapping = must_not_search_mapping or {} rang_search_mapping = rang_search_mapping or {} should_search_conditions = [ FieldCondition(key=key, match=MatchText(text=value)) for key, value in should_search_mapping.items() if should_search_mapping ] must_search_conditions = [ FieldCondition(key=key, match=MatchValue(value=value)) for key, value in must_search_mapping.items() if must_search_mapping ] must_not_search_conditions = [ FieldCondition(key=key, match=MatchValue(value=value)) for key, value in must_not_search_mapping.items() if must_not_search_mapping ] rang_search_conditions = [ FieldCondition( key=key, range=Range( gte=value.get("gte"), lte=value.get("lte"), gt=value.get("gt"), lt=value.get("lt"), ), ) for key, value in rang_search_mapping.items() if rang_search_mapping ] should_search_conditions.extend(rang_search_conditions) response = self._client.search( collection_name=collection_name, query_vector=query_vector, query_filter=Filter( must=must_search_conditions, must_not=must_not_search_conditions, should=should_search_conditions, ), with_vectors=True, with_payload=True, limit=limit, ) documents = [] for point in response: payload = cast(Payload, point.payload) try: vector = cast(List[float], point.vector) except ValueError as e: raise ValueError("Could not cast vector to List[float].") from e document = Document( id_=payload.get("doc_id"), text=payload.get("text"), metadata=payload.get("metadata"), embedding=vector, ) documents.append(document) return documents
171893
class MultipartMixedResponse(StreamingResponse): CRLF = b"\r\n" def __init__(self, *args, content_type: str = None, **kwargs): super().__init__(*args, **kwargs) self.content_type = content_type def init_headers(self, headers: Optional[Mapping[str, str]] = None) -> None: super().init_headers(headers) self.boundary_value = secrets.token_hex(16) content_type = f'multipart/mixed; boundary="{self.boundary_value}"' self.raw_headers.append((b"content-type", content_type.encode("latin-1"))) @property def boundary(self): return b"--" + self.boundary_value.encode() def _build_part_headers(self, headers: dict) -> bytes: header_bytes = b"" for header, value in headers.items(): header_bytes += f"{header}: {value}".encode() + self.CRLF return header_bytes def build_part(self, chunk: bytes) -> bytes: part = self.boundary + self.CRLF part_headers = { "Content-Length": len(chunk), "Content-Transfer-Encoding": "base64", } if self.content_type is not None: part_headers["Content-Type"] = self.content_type part += self._build_part_headers(part_headers) part += self.CRLF + chunk + self.CRLF return part async def stream_response(self, send: Send) -> None: await send( { "type": "http.response.start", "status": self.status_code, "headers": self.raw_headers, } ) async for chunk in self.body_iterator: if not isinstance(chunk, bytes): chunk = chunk.encode(self.charset) chunk = b64encode(chunk) await send( { "type": "http.response.body", "body": self.build_part(chunk), "more_body": True, } ) await send({"type": "http.response.body", "body": b"", "more_body": False}) def ungz_file(file: UploadFile, gz_uncompressed_content_type=None) -> UploadFile: def return_content_type(filename): if gz_uncompressed_content_type: return gz_uncompressed_content_type else: return str(mimetypes.guess_type(filename)[0]) filename = str(file.filename) if file.filename else "" if filename.endswith(".gz"): filename = filename[:-3] gzip_file = gzip.open(file.file).read() return UploadFile( file=io.BytesIO(gzip_file), size=len(gzip_file), filename=filename, headers=Headers({"content-type": return_content_type(filename)}), ) @router.post("/sec-filings/v0/section") @router.post("/sec-filings/v0.2.1/section") def pipeline_1( request: Request, gz_uncompressed_content_type: Optional[str] = Form(default=None), text_files: Union[List[UploadFile], None] = File(default=None), output_format: Union[str, None] = Form(default=None), output_schema: str = Form(default=None), section: List[str] = Form(default=[]), section_regex: List[str] = Form(default=[]), ): if text_files: for file_index in range(len(text_files)): if text_files[file_index].content_type == "application/gzip": text_files[file_index] = ungz_file(text_files[file_index]) content_type = request.headers.get("Accept") default_response_type = output_format or "application/json" if not content_type or content_type == "*/*" or content_type == "multipart/mixed": media_type = default_response_type else: media_type = content_type default_response_schema = output_schema or "isd" if isinstance(text_files, list) and len(text_files): if len(text_files) > 1: if content_type and content_type not in [ "*/*", "multipart/mixed", "application/json", ]: raise HTTPException( detail=( f"Conflict in media type {content_type}" ' with response type "multipart/mixed".\n' ), status_code=status.HTTP_406_NOT_ACCEPTABLE, ) def response_generator(is_multipart): for file in text_files: get_validated_mimetype(file) text = file.file.read().decode("utf-8") response = pipeline_api( text, m_section=section, 
m_section_regex=section_regex, response_type=media_type, response_schema=default_response_schema, ) if is_expected_response_type(media_type, type(response)): raise HTTPException( detail=( f"Conflict in media type {media_type}" f" with response type {type(response)}.\n" ), status_code=status.HTTP_406_NOT_ACCEPTABLE, ) valid_response_types = [ "application/json", "text/csv", "*/*", "multipart/mixed", ] if media_type in valid_response_types: if is_multipart: if type(response) not in [str, bytes]: response = json.dumps(response) yield response else: raise HTTPException( detail=f"Unsupported media type {media_type}.\n", status_code=status.HTTP_406_NOT_ACCEPTABLE, ) if content_type == "multipart/mixed": return MultipartMixedResponse( response_generator(is_multipart=True), content_type=media_type ) else: return ( next(iter(response_generator(is_multipart=False))) if len(text_files) == 1 else response_generator(is_multipart=False) ) else: raise HTTPException( detail='Request parameter "text_files" is required.\n', status_code=status.HTTP_400_BAD_REQUEST, ) app.include_router(router)
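For context, a client-side sketch of calling the endpoint above with `requests`; the host/port, file name, and section identifier are placeholders and not prescribed by the code above.

```python
import requests

with open("example_10k_filing.html", "rb") as f:  # hypothetical local filing
    resp = requests.post(
        "http://localhost:8000/sec-filings/v0/section",
        files={"text_files": ("example_10k_filing.html", f, "text/html")},
        data={"section": ["RISK_FACTORS"], "output_schema": "isd"},  # placeholder section id
        headers={"Accept": "application/json"},
    )

resp.raise_for_status()
print(resp.json())
```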
172031
"""Azure Cognitive Search reader. A loader that fetches documents from specific index. """ from typing import List, Optional from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document class AzCognitiveSearchReader(BaseReader): """General reader for any Azure Cognitive Search index reader. Args: service_name (str): the name of azure cognitive search service. search_key (str): provide azure search access key directly. index (str): index name """ def __init__(self, service_name: str, searck_key: str, index: str) -> None: """Initialize Azure cognitive search service using the search key.""" import logging logger = logging.getLogger("azure.core.pipeline.policies.http_logging_policy") logger.setLevel(logging.WARNING) azure_credential = AzureKeyCredential(searck_key) self.search_client = SearchClient( endpoint=f"https://{service_name}.search.windows.net", index_name=index, credential=azure_credential, ) def load_data( self, query: str, content_field: str, filter: Optional[str] = None ) -> List[Document]: """Read data from azure cognitive search index. Args: query (str): search term in Azure Search index content_field (str): field name of the document content. filter (str): Filter expression. For example : 'sourcepage eq 'employee_handbook-3.pdf' and sourcefile eq 'employee_handbook.pdf'' Returns: List[Document]: A list of documents. """ search_result = self.search_client.search(query, filter=filter) return [ Document( text=result[content_field], extra_info={"id": result["id"], "score": result["@search.score"]}, ) for result in search_result ]
172111
import logging from typing import List from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document logger = logging.getLogger(__file__) class UnstructuredURLLoader(BaseReader): """Loader that uses unstructured to load HTML files.""" def __init__( self, urls: List[str], continue_on_failure: bool = True, headers: dict = {} ): """Initialize with file path.""" try: import unstructured # noqa:F401 from unstructured.__version__ import __version__ as __unstructured_version__ self.__version = __unstructured_version__ except ImportError: raise ValueError( "unstructured package not found, please install it with " "`pip install unstructured`" ) if not self.__is_headers_available() and len(headers.keys()) != 0: logger.warning( "You are using old version of unstructured. " "The headers parameter is ignored" ) self.urls = urls self.continue_on_failure = continue_on_failure self.headers = headers def __is_headers_available(self) -> bool: _unstructured_version = self.__version.split("-")[0] unstructured_version = tuple([int(x) for x in _unstructured_version.split(".")]) return unstructured_version >= (0, 5, 7) def load_data(self) -> List[Document]: """Load file.""" from unstructured.partition.html import partition_html docs: List[Document] = [] for url in self.urls: try: if self.__is_headers_available(): elements = partition_html(url=url, headers=self.headers) else: elements = partition_html(url=url) text = "\n\n".join([str(el) for el in elements]) metadata = {"source": url} docs.append(Document(text=text, extra_info=metadata)) except Exception as e: if self.continue_on_failure: logger.error(f"Error fetching or processing {url}, exception: {e}") else: raise e # noqa: TRY201 return docs
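A minimal sketch of the URL loader above; the URLs are placeholders and `unstructured` must be installed.

```python
loader = UnstructuredURLLoader(
    urls=[
        "https://example.com/report.html",
        "https://example.com/press-release.html",
    ],
    continue_on_failure=True,  # log and skip URLs that fail instead of raising
)

documents = loader.load_data()
```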
172254
# Unstructured.io File Loader ```bash pip install llama-index-readers-file ``` This loader extracts the text from a variety of unstructured text files using [Unstructured.io](https://github.com/Unstructured-IO/unstructured). Currently, the file extensions that are supported are `.csv`, `.tsv`, `.doc`, `.docx`, `.odt`, `.epub`, `.org`, `.rst`, `.rtf`, `.md`, `.msg`, `.pdf`, `.heic`, `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.ppt`, `.pptx`, `.xlsx`, `.eml`, `.html`, `.xml`, `.txt` and `.json` documents. A single local file is passed in each time you call `load_data`. Check out their documentation to see more details, but notably, this enables you to parse the unstructured data of many use-cases. For example, you can download the 10-K SEC filings of public companies (e.g. [Coinbase](https://www.sec.gov/ix?doc=/Archives/edgar/data/0001679788/000167978822000031/coin-20211231.htm)), and feed it directly into this loader without worrying about cleaning up the formatting or HTML tags. ## Usage To use this loader, you need to pass in any desired keyword arguments to unstructured via the `unstructured_kwargs` parameter. For example a `Path` to a local file or even a stream. Optionally, you may specify `split_documents` if you want each `element` generated by Unstructured.io to be placed in a separate document. This will guarantee that those elements will be split when an index is created in LlamaIndex, which, depending on your use-case, could be a smarter form of text-splitting. By default this is `False`. ```python from pathlib import Path from llama_index.readers.file import UnstructuredReader loader = UnstructuredReader() documents = loader.load_data( unstructured_kwargs={"filename": "./10k_filing.html"} ) ``` You can also easily use this loader in conjunction with `SimpleDirectoryReader` if you want to parse certain files throughout a directory with Unstructured.io. ```python from pathlib import Path from llama_index.core import SimpleDirectoryReader from llama_index.readers.file import UnstructuredReader dir_reader = SimpleDirectoryReader( "./data", file_extractor={ ".pdf": UnstructuredReader(), ".html": UnstructuredReader(), ".eml": UnstructuredReader(), }, ) documents = dir_reader.load_data() ``` ```python # Example using a filestream input, taking advantage of HI_RES partitioning and # native unstructured chunking by_title. documents = UnstructuredReader().load_data( unstructured_kwargs={ "file": filestream, "content_type": file.content_type, "url": None, "strategy": PartitionStrategy.HI_RES, "chunking_strategy": "by_title", }, split_documents=True, # We can generate deterministic ids for each document, or for the whole # document when not splitting, to support document lifecycle operations # (upserts, etc.). deterministic_ids=True, ) ``` This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/). ## Troubleshooting **"failed to find libmagic" error**: Try `pip install python-magic-bin==0.4.14`. Solution documented [here](https://github.com/Yelp/elastalert/issues/1927#issuecomment-425040424). On MacOS, you may also try `brew install libmagic`.
172256
""" Unstructured file reader. A parser for unstructured text files using Unstructured.io. Supports .csv, .tsv, .doc, .docx, .odt, .epub, .org, .rst, .rtf, .md, .msg, .pdf, .heic, .png, .jpg, .jpeg, .tiff, .bmp, .ppt, .pptx, .xlsx, .eml, .html, .xml, .txt, .json documents. """ import json from pathlib import Path from typing import Any, Dict, List, Optional, Set, Tuple from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document, NodeRelationship, TextNode try: from unstructured.documents.elements import Element except ImportError: Element = None class UnstructuredReader(BaseReader): """General unstructured text reader for a variety of files.""" def __init__( self, *args: Any, api_key: str = None, url: str = None, allowed_metadata_types: Optional[Tuple] = None, excluded_metadata_keys: Optional[Set] = None, ) -> None: """ Initialize UnstructuredReader. Args: *args (Any): Additional arguments passed to the BaseReader. api_key (str, optional): API key for accessing the Unstructured.io API. If provided, the reader will use the API for parsing files. Defaults to None. url (str, optional): URL for the Unstructured.io API. If not provided and an api_key is given, defaults to "http://localhost:8000". Ignored if api_key is not provided. Defaults to None. allowed_metadata_types (Optional[Tuple], optional): Tuple of types that are allowed in the metadata. Defaults to (str, int, float, type(None)). excluded_metadata_keys (Optional[Set], optional): Set of metadata keys to exclude from the final document. Defaults to {"orig_elements"}. Attributes: api_key (str or None): Stores the API key. use_api (bool): Indicates whether to use the API for parsing files, based on the presence of the api_key. url (str or None): URL for the Unstructured.io API if using the API. allowed_metadata_types (Tuple): Tuple of types that are allowed in the metadata. excluded_metadata_keys (Set): Set of metadata keys to exclude from the final document. """ super().__init__(*args) # not passing kwargs to parent bc it cannot accept it if Element is None: raise ImportError( "Unstructured is not installed. Please install it using 'pip install -U unstructured'." ) self.api_key = api_key self.use_api = bool(api_key) self.url = url or "http://localhost:8000" if self.use_api else None self.allowed_metadata_types = allowed_metadata_types or ( str, int, float, type(None), ) self.excluded_metadata_keys = excluded_metadata_keys or {"orig_elements"} @classmethod def from_api(cls, api_key: str, url: str = None): """Set the server url and api key.""" return cls(api_key, url) def load_data( self, file: Optional[Path] = None, unstructured_kwargs: Optional[Dict] = None, document_kwargs: Optional[Dict] = None, extra_info: Optional[Dict] = None, split_documents: Optional[bool] = False, excluded_metadata_keys: Optional[List[str]] = None, ) -> List[Document]: """ Load data using Unstructured.io. Depending on the configuration, if url is set or use_api is True, it'll parse the file using an API call, otherwise it parses it locally. extra_info is extended by the returned metadata if split_documents is True. Args: file (Optional[Path]): Path to the file to be loaded. unstructured_kwargs (Optional[Dict]): Additional arguments for unstructured partitioning. document_kwargs (Optional[Dict]): Additional arguments for document creation. extra_info (Optional[Dict]): Extra information to add to the document metadata. split_documents (Optional[bool]): Whether to split the documents. 
excluded_metadata_keys (Optional[List[str]]): Keys to exclude from the metadata. Returns: List[Document]: List of parsed documents. """ unstructured_kwargs = unstructured_kwargs.copy() if unstructured_kwargs else {} if ( unstructured_kwargs.get("file") is not None and unstructured_kwargs.get("metadata_filename") is None ): raise ValueError( "Please provide a 'metadata_filename' as part of the 'unstructured_kwargs' when loading a file stream." ) elements: List[Element] = self._partition_elements(unstructured_kwargs, file) return self._create_documents( elements, document_kwargs, extra_info, split_documents, excluded_metadata_keys, ) def _partition_elements( self, unstructured_kwargs: Dict, file: Optional[Path] = None ) -> List[Element]: """ Partition the elements from the file or via API. Args: file (Optional[Path]): Path to the file to be loaded. unstructured_kwargs (Dict): Additional arguments for unstructured partitioning. Returns: List[Element]: List of partitioned elements. """ if file: unstructured_kwargs["filename"] = str(file) if self.use_api: from unstructured.partition.api import partition_via_api return partition_via_api( api_key=self.api_key, api_url=self.url + "/general/v0/general", **unstructured_kwargs, ) else: from unstructured.partition.auto import partition return partition(**unstructured_kwargs) def _create_documents( self, elements: List[Element], document_kwargs: Optional[Dict], extra_info: Optional[Dict], split_documents: Optional[bool], excluded_metadata_keys: Optional[List[str]], ) -> List[Document]: """ Create documents from partitioned elements. Args: elements (List): List of partitioned elements. document_kwargs (Optional[Dict]): Additional arguments for document creation. extra_info (Optional[Dict]): Extra information to add to the document metadata. split_documents (Optional[bool]): Whether to split the documents. excluded_metadata_keys (Optional[List[str]]): Keys to exclude from the metadata. Returns: List[Document]: List of parsed documents. """ doc_kwargs = document_kwargs or {} doc_extras = extra_info or {} excluded_keys = set(excluded_metadata_keys or self.excluded_metadata_keys) docs: List[Document] = [] def _merge_metadata( element: Element, sequence_number: Optional[int] = None ) -> Dict[str, Any]: candidate_metadata = {**element.metadata.to_dict(), **doc_extras} metadata = { key: ( value if isinstance(value, self.allowed_metadata_types) else json.dumps(value) ) for key, value in candidate_metadata.items() if key not in excluded_keys } if sequence_number is not None: metadata["sequence_number"] = sequence_number return metadata if len(elements) == 0: return [] text_chunks = [" ".join(str(el).split()) for el in elements] metadata = _merge_metadata(elements[0]) filename = metadata["filename"] source = Document( text="\n\n".join(text_chunks), extra_info=metadata, doc_id=filename, id_=filename, **doc_kwargs, ) if split_documents: docs = [] for sequence_number, element in enumerate(elements): hash_id = element.id_to_hash(sequence_number) node = TextNode( text=element.text, metadata=_merge_metadata(element, sequence_number), doc_id=hash_id, id_=hash_id, **doc_kwargs, ) node.relationships[ NodeRelationship.SOURCE ] = source.as_related_node_info() docs.append(node) else: docs = [source] return docs
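A sketch of the API-backed path of the reader above; the key, URL, and file name are placeholders. Constructing the reader with `api_key` routes partitioning through `partition_via_api`, as shown in `_partition_elements`.

```python
from pathlib import Path

reader = UnstructuredReader(
    api_key="<unstructured-api-key>",
    url="http://localhost:8000",  # self-hosted or hosted Unstructured API endpoint
)

documents = reader.load_data(
    file=Path("./10k_filing.html"),
    split_documents=True,  # one node per element, linked back to the source document
)
```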
172258
# Paged CSV Loader

```bash
pip install llama-index-readers-file
```

This loader extracts the text from a local .csv file by formatting each row in an LLM-friendly way and inserting it into a separate Document. A single local file is passed in each time you call `load_data`. For example, a Document might look like:

```
First Name: Bruce
Last Name: Wayne
Age: 28
Occupation: Unknown
```

## Usage

To use this loader, you need to pass in a `Path` to a local file.

```python
from pathlib import Path

from llama_index.readers.file import PagedCSVReader

loader = PagedCSVReader(encoding="utf-8")
documents = loader.load_data(file=Path("./transactions.csv"))
```

This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/jerryjliu/llama_index) and/or subsequently used as a Tool in a [LangChain](https://github.com/hwchase17/langchain) Agent.
172260
"""Paged CSV reader. A parser for tabular data files. """ from pathlib import Path from typing import Any, Dict, List, Optional from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document class PagedCSVReader(BaseReader): """Paged CSV parser. Displayed each row in an LLM-friendly format on a separate document. Args: encoding (str): Encoding used to open the file. utf-8 by default. """ def __init__(self, *args: Any, encoding: str = "utf-8", **kwargs: Any) -> None: """Init params.""" super().__init__(*args, **kwargs) self._encoding = encoding def load_data( self, file: Path, extra_info: Optional[Dict] = None, delimiter: str = ",", quotechar: str = '"', ) -> List[Document]: """Parse file.""" import csv docs = [] with open(file, encoding=self._encoding) as fp: csv_reader = csv.DictReader(f=fp, delimiter=delimiter, quotechar=quotechar) # type: ignore for row in csv_reader: docs.append( Document( text="\n".join( f"{k.strip()}: {v.strip()}" for k, v in row.items() ), extra_info=extra_info or {}, ) ) return docs
172395
"""Weaviate reader.""" from typing import Any, List, Optional from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document class WeaviateReader(BaseReader): """Weaviate reader. Retrieves documents from Weaviate through vector lookup. Allows option to concatenate retrieved documents into one Document, or to return separate Document objects per document. Args: host (str): host. auth_client_secret (Optional[weaviate.auth.AuthCredentials]): auth_client_secret. """ def __init__( self, host: str, auth_client_secret: Optional[Any] = None, ) -> None: """Initialize with parameters.""" try: import weaviate # noqa from weaviate import Client from weaviate.auth import AuthCredentials # noqa except ImportError: raise ImportError( "`weaviate` package not found, please run `pip install weaviate-client`" ) self.client: Client = Client(host, auth_client_secret=auth_client_secret) def load_data( self, class_name: Optional[str] = None, properties: Optional[List[str]] = None, graphql_query: Optional[str] = None, separate_documents: Optional[bool] = True, ) -> List[Document]: """Load data from Weaviate. If `graphql_query` is not found in load_kwargs, we assume that `class_name` and `properties` are provided. Args: class_name (Optional[str]): class_name to retrieve documents from. properties (Optional[List[str]]): properties to retrieve from documents. graphql_query (Optional[str]): Raw GraphQL Query. We assume that the query is a Get query. separate_documents (Optional[bool]): Whether to return separate documents. Defaults to True. Returns: List[Document]: A list of documents. """ if class_name is not None and properties is not None: props_txt = "\n".join(properties) graphql_query = f""" {{ Get {{ {class_name} {{ {props_txt} }} }} }} """ elif graphql_query is not None: pass else: raise ValueError( "Either `class_name` and `properties` must be specified, " "or `graphql_query` must be specified." ) response = self.client.query.raw(graphql_query) if "errors" in response: raise ValueError("Invalid query, got errors: {}".format(response["errors"])) data_response = response["data"] if "Get" not in data_response: raise ValueError("Invalid query response, must be a Get query.") if class_name is None: # infer class_name if only graphql_query was provided class_name = next(iter(data_response["Get"].keys())) entries = data_response["Get"][class_name] documents = [] for entry in entries: embedding: Optional[List[float]] = None # for each entry, join properties into <property>:<value> # separated by newlines text_list = [] for k, v in entry.items(): if k == "_additional": if "vector" in v: embedding = v["vector"] continue text_list.append(f"{k}: {v}") text = "\n".join(text_list) documents.append(Document(text=text, embedding=embedding)) if not separate_documents: # join all documents into one text_list = [doc.get_content() for doc in documents] text = "\n\n".join(text_list) documents = [Document(text=text)] return documents
172404
"""DeepLake reader.""" from typing import List, Optional, Union import numpy as np from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document distance_metric_map = { "l2": lambda a, b: np.linalg.norm(a - b, axis=1, ord=2), "l1": lambda a, b: np.linalg.norm(a - b, axis=1, ord=1), "max": lambda a, b: np.linalg.norm(a - b, axis=1, ord=np.inf), "cos": lambda a, b: np.dot(a, b.T) / (np.linalg.norm(a) * np.linalg.norm(b, axis=1)), "dot": lambda a, b: np.dot(a, b.T), } def vector_search( query_vector: Union[List, np.ndarray], data_vectors: np.ndarray, distance_metric: str = "l2", limit: Optional[int] = 4, ) -> List: """Naive search for nearest neighbors args: query_vector: Union[List, np.ndarray] data_vectors: np.ndarray limit (int): number of nearest neighbors distance_metric: distance function 'L2' for Euclidean, 'L1' for Nuclear, 'Max' l-infinity distance, 'cos' for cosine similarity, 'dot' for dot product returns: nearest_indices: List, indices of nearest neighbors. """ # Calculate the distance between the query_vector and all data_vectors if isinstance(query_vector, list): query_vector = np.array(query_vector) query_vector = query_vector.reshape(1, -1) distances = distance_metric_map[distance_metric](query_vector, data_vectors) nearest_indices = np.argsort(distances) nearest_indices = ( nearest_indices[::-1][:limit] if distance_metric in ["cos"] else nearest_indices[:limit] ) return nearest_indices.tolist() class DeepLakeReader(BaseReader): """DeepLake reader. Retrieve documents from existing DeepLake datasets. Args: dataset_name: Name of the deeplake dataset. """ def __init__( self, token: Optional[str] = None, ): """Initializing the deepLake reader.""" import_err_msg = ( "`deeplake` package not found, please run `pip install deeplake`" ) try: import deeplake # noqa except ImportError: raise ImportError(import_err_msg) self.token = token def load_data( self, query_vector: List[float], dataset_path: str, limit: int = 4, distance_metric: str = "l2", ) -> List[Document]: """Load data from DeepLake. Args: dataset_name (str): Name of the DeepLake dataset. query_vector (List[float]): Query vector. limit (int): Number of results to return. Returns: List[Document]: A list of documents. """ import deeplake from deeplake.util.exceptions import TensorDoesNotExistError dataset = deeplake.load(dataset_path, token=self.token) try: embeddings = dataset.embedding.numpy(fetch_chunks=True) except Exception: raise TensorDoesNotExistError("embedding") indices = vector_search( query_vector, embeddings, distance_metric=distance_metric, limit=limit ) documents = [] for idx in indices: document = Document( text=str(dataset[idx].text.numpy().tolist()[0]), id_=dataset[idx].ids.numpy().tolist()[0], ) documents.append(document) return documents
172572
# LlamaIndex Readers Integration: Chroma ## Overview Chroma Reader is a tool designed to retrieve documents from existing persisted Chroma collections. Chroma is a framework for managing document collections and their associated embeddings efficiently. ### Installation You can install Chroma Reader via pip: ```bash pip install llama-index-readers-chroma ``` ## Usage ```python from llama_index.core.schema import Document from llama_index.readers.chroma import ChromaReader # Initialize ChromaReader with the collection name and optional parameters reader = ChromaReader( collection_name="<Your Collection Name>", persist_directory="<Directory Path>", # Optional: Directory where the collection is persisted chroma_api_impl="rest", # Optional: Chroma API implementation (default: "rest") chroma_db_impl=None, # Optional: Chroma DB implementation (default: None) host="localhost", # Optional: Host for Chroma DB (default: "localhost") port=8000, # Optional: Port for Chroma DB (default: 8000) ) # Load data from Chroma collection documents = reader.load_data( query_embedding=None, # Provide query embedding if searching by embeddings limit=10, # Number of results to retrieve where=None, # Filter condition for metadata where_document=None, # Filter condition for document query=["search term"], # Provide query text if searching by text ) ``` This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/tree/main/llama_index) and/or subsequently used as a Tool in a [LangChain](https://github.com/hwchase17/langchain) Agent.
172576
"""Chroma Reader.""" from typing import Any, List, Optional, Union from llama_index.core.readers.base import BaseReader from llama_index.core.schema import Document class ChromaReader(BaseReader): """Chroma reader. Retrieve documents from existing persisted Chroma collections. Args: collection_name: Name of the persisted collection. persist_directory: Directory where the collection is persisted. """ def __init__( self, collection_name: str, persist_directory: Optional[str] = None, chroma_api_impl: str = "rest", chroma_db_impl: Optional[str] = None, host: str = "localhost", port: int = 8000, ) -> None: """Initialize with parameters.""" import_err_msg = ( "`chromadb` package not found, please run `pip install chromadb`" ) try: import chromadb except ImportError: raise ImportError(import_err_msg) if collection_name is None: raise ValueError("Please provide a collection name.") # from chromadb.config import Settings if persist_directory is not None: self._client = chromadb.PersistentClient( path=persist_directory if persist_directory else "./chroma", ) elif (host is not None) or (port is not None): self._client = chromadb.HttpClient( host=host, port=port, ) self._collection = self._client.get_collection(collection_name) def create_documents(self, results: Any) -> List[Document]: """Create documents from the results. Args: results: Results from the query. Returns: List of documents. """ documents = [] for result in zip( results["ids"][0], results["documents"][0], results["embeddings"][0], results["metadatas"][0], ): document = Document( id_=result[0], text=result[1], embedding=result[2], metadata=result[3], ) documents.append(document) return documents def load_data( self, query_embedding: Optional[List[float]] = None, limit: int = 10, where: Optional[dict] = None, where_document: Optional[dict] = None, query: Optional[Union[str, List[str]]] = None, ) -> Any: """Load data from the collection. Args: limit: Number of results to return. where: Filter results by metadata. {"metadata_field": "is_equal_to_this"} where_document: Filter results by document. {"$contains":"search_string"} Returns: List of documents. """ where = where or {} where_document = where_document or {} if query_embedding is not None: results = self._collection.search( query_embedding=query_embedding, n_results=limit, where=where, where_document=where_document, include=["metadatas", "documents", "distances", "embeddings"], ) return self.create_documents(results) elif query is not None: query = query if isinstance(query, list) else [query] results = self._collection.query( query_texts=query, n_results=limit, where=where, where_document=where_document, include=["metadatas", "documents", "distances", "embeddings"], ) return self.create_documents(results) else: raise ValueError("Please provide either query embedding or query.")
172744
from typing import List, Optional, Sequence from llama_index.core.base.llms.types import ChatMessage, MessageRole BOS, EOS = "<s>", "</s>" B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" DEFAULT_SYSTEM_PROMPT = """\ You are a helpful, respectful and honest assistant. \ Always answer as helpfully as possible and follow ALL given instructions. \ Do not speculate or make up information. \ Do not reference any given instructions or context. \ """ def messages_to_prompt( messages: Sequence[ChatMessage], system_prompt: Optional[str] = None ) -> str: string_messages: List[str] = [] if messages[0].role == MessageRole.SYSTEM: # pull out the system message (if it exists in messages) system_message_str = messages[0].content or "" messages = messages[1:] else: system_message_str = system_prompt or DEFAULT_SYSTEM_PROMPT system_message_str = f"{B_SYS} {system_message_str.strip()} {E_SYS}" for i in range(0, len(messages), 2): # first message should always be a user user_message = messages[i] assert user_message.role == MessageRole.USER if i == 0: # make sure system prompt is included at the start str_message = f"{BOS} {B_INST} {system_message_str} " else: # end previous user-assistant interaction string_messages[-1] += f" {EOS}" # no need to include system prompt str_message = f"{BOS} {B_INST} " # include user message content str_message += f"{user_message.content} {E_INST}" if len(messages) > (i + 1): # if assistant message exists, add to str_message assistant_message = messages[i + 1] assert assistant_message.role == MessageRole.ASSISTANT str_message += f" {assistant_message.content}" string_messages.append(str_message) return "".join(string_messages) def completion_to_prompt(completion: str, system_prompt: Optional[str] = None) -> str: system_prompt_str = system_prompt or DEFAULT_SYSTEM_PROMPT return ( f"{BOS} {B_INST} {B_SYS} {system_prompt_str.strip()} {E_SYS} " f"{completion.strip()} {E_INST}" )
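A small sketch of how the helpers above assemble a Llama 2 style prompt; the chat content is made up for illustration, and the rendered string in the comment is approximate.

```python
from llama_index.core.base.llms.types import ChatMessage, MessageRole

messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="You answer tersely."),
    ChatMessage(role=MessageRole.USER, content="What is 2 + 2?"),
    ChatMessage(role=MessageRole.ASSISTANT, content="4"),
    ChatMessage(role=MessageRole.USER, content="And doubled?"),
]

prompt = messages_to_prompt(messages)
# Roughly:
# <s> [INST] <<SYS>>
#  You answer tersely.
# <</SYS>>
#
#  What is 2 + 2? [/INST] 4 </s><s> [INST] And doubled? [/INST]

# Single-turn variant using the default system prompt defined above.
single_turn = completion_to_prompt("Summarize the plot of Hamlet.")
```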
172753
# LlamaIndex Llms Integration: Huggingface ## Installation 1. Install the required Python packages: ```bash %pip install llama-index-llms-huggingface %pip install llama-index-llms-huggingface-api !pip install "transformers[torch]" "huggingface_hub[inference]" !pip install llama-index ``` 2. Set the Hugging Face API token as an environment variable: ```bash export HUGGING_FACE_TOKEN=your_token_here ``` ## Usage ### Import Required Libraries ```python import os from typing import List, Optional from llama_index.llms.huggingface import HuggingFaceLLM from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI ``` ### Run a Model Locally To run the model locally on your machine: ```python locally_run = HuggingFaceLLM(model_name="HuggingFaceH4/zephyr-7b-alpha") ``` ### Run a Model Remotely To run the model remotely using Hugging Face's Inference API: ```python HF_TOKEN: Optional[str] = os.getenv("HUGGING_FACE_TOKEN") remotely_run = HuggingFaceInferenceAPI( model_name="HuggingFaceH4/zephyr-7b-alpha", token=HF_TOKEN ) ``` ### Anonymous Remote Execution You can also use the Inference API anonymously without providing a token: ```python remotely_run_anon = HuggingFaceInferenceAPI( model_name="HuggingFaceH4/zephyr-7b-alpha" ) ``` ### Use Recommended Model If you do not provide a model name, Hugging Face's recommended model is used: ```python remotely_run_recommended = HuggingFaceInferenceAPI(token=HF_TOKEN) ``` ### Generate Text Completion To generate a text completion using the remote model: ```python completion_response = remotely_run_recommended.complete("To infinity, and") print(completion_response) ``` ### Set Global Tokenizer If you modify the LLM, ensure you change the global tokenizer to match: ```python from llama_index.core import set_global_tokenizer from transformers import AutoTokenizer set_global_tokenizer( AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha").encode ) ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/huggingface/
172769
# LlamaIndex Llms Integration: Azure Openai ### Installation ```bash %pip install llama-index-llms-azure-openai !pip install llama-index ``` ### Prerequisites Follow this to setup your Azure account: [Setup Azure account](https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/#prerequisites) ### Set the environment variables ```py OPENAI_API_VERSION = "2023-07-01-preview" AZURE_OPENAI_ENDPOINT = "https://YOUR_RESOURCE_NAME.openai.azure.com/" OPENAI_API_KEY = "<your-api-key>" import os os.environ["OPENAI_API_KEY"] = "<your-api-key>" os.environ[ "AZURE_OPENAI_ENDPOINT" ] = "https://<your-resource-name>.openai.azure.com/" os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview" # Use your LLM from llama_index.llms.azure_openai import AzureOpenAI # Unlike normal OpenAI, you need to pass an engine argument in addition to model. # The engine is the name of your model deployment you selected in Azure OpenAI Studio. llm = AzureOpenAI( engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0 ) # Alternatively, you can also skip setting environment variables, and pass the parameters in directly via constructor. llm = AzureOpenAI( engine="my-custom-llm", model="gpt-35-turbo-16k", temperature=0.0, azure_endpoint="https://<your-resource-name>.openai.azure.com/", api_key="<your-api-key>", api_version="2023-07-01-preview", ) # Use the complete endpoint for text completion response = llm.complete("The sky is a beautiful blue and") print(response) # Expected Output: # the sun is shining brightly. Fluffy white clouds float lazily across the sky, # creating a picturesque scene. The vibrant blue color of the sky brings a sense # of calm and tranquility... ``` ### Streaming completion ```py response = llm.stream_complete("The sky is a beautiful blue and") for r in response: print(r.delta, end="") # Expected Output (Stream): # the sun is shining brightly. Fluffy white clouds float lazily across the sky, # creating a picturesque scene. The vibrant blue color of the sky brings a sense # of calm and tranquility... # Use the chat endpoint for conversation from llama_index.core.llms import ChatMessage messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="Hello"), ] response = llm.chat(messages) print(response) # Expected Output: # assistant: Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger, # the most colorful pirate ye ever did lay eyes on! What brings ye to me ship? ``` ### Streaming chat ```py response = llm.stream_chat(messages) for r in response: print(r.delta, end="") # Expected Output (Stream): # Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger, # the most colorful pirate ye ever did lay eyes on! What brings ye to me ship? # Rather than adding the same parameters to each chat or completion call, # you can set them at a per-instance level with additional_kwargs. llm = AzureOpenAI( engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0, additional_kwargs={"user": "your_user_id"}, ) ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/
172844
# LlamaIndex Llms Integration: Llama Api ## Prerequisites 1. **API Key**: Obtain an API key from [Llama API](https://www.llama-api.com/). 2. **Python 3.x**: Ensure you have Python installed on your system. ## Installation 1. Install the required Python packages: ```bash %pip install llama-index-program-openai %pip install llama-index-llms-llama-api !pip install llama-index ``` ## Basic Usage ### Import Required Libraries ```python from llama_index.llms.llama_api import LlamaAPI from llama_index.core.llms import ChatMessage ``` ### Initialize LlamaAPI Set up the API key: ```python api_key = "LL-your-key" llm = LlamaAPI(api_key=api_key) ``` ### Complete with a Prompt Generate a response using a prompt: ```python resp = llm.complete("Paul Graham is ") print(resp) ``` ### Chat with a List of Messages Interact with the model using a chat interface: ```python messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality" ), ChatMessage(role="user", content="What is your name"), ] resp = llm.chat(messages) print(resp) ``` ### Function Calling Define a function using Pydantic and call it through LlamaAPI: ```python from pydantic import BaseModel from llama_index.core.llms.openai_utils import to_openai_function class Song(BaseModel): """A song with name and artist""" name: str artist: str song_fn = to_openai_function(Song) response = llm.complete("Generate a song", functions=[song_fn]) function_call = response.additional_kwargs["function_call"] print(function_call) ``` ### Structured Data Extraction Define schemas for structured output using Pydantic: ```python from pydantic import BaseModel from typing import List class Song(BaseModel): """Data model for a song.""" title: str length_mins: int class Album(BaseModel): """Data model for an album.""" name: str artist: str songs: List[Song] ``` Define the prompt template for extracting structured data: ```python from llama_index.program.openai import OpenAIPydanticProgram prompt_template_str = """\ Extract album and songs from the text provided. For each song, make sure to specify the title and the length_mins. {text} """ llm = LlamaAPI(api_key=api_key, temperature=0.0) program = OpenAIPydanticProgram.from_defaults( output_cls=Album, llm=llm, prompt_template_str=prompt_template_str, verbose=True, ) ``` ### Run Program to Get Structured Output Execute the program to extract structured data from the provided text: ```python output = program( text=""" "Echoes of Eternity" is a compelling and thought-provoking album, skillfully crafted by the renowned artist, Seraphina Rivers. \ This captivating musical collection takes listeners on an introspective journey, delving into the depths of the human experience \ and the vastness of the universe. With her mesmerizing vocals and poignant songwriting, Seraphina Rivers infuses each track with \ raw emotion and a sense of cosmic wonder. The album features several standout songs, including the hauntingly beautiful "Stardust \ Serenade," a celestial ballad that lasts for six minutes, carrying listeners through a celestial dreamscape. "Eclipse of the Soul" \ captivates with its enchanting melodies and spans over eight minutes, inviting introspection and contemplation. Another gem, "Infinity \ Embrace," unfolds like a cosmic odyssey, lasting nearly ten minutes, drawing listeners deeper into its ethereal atmosphere. 
"Echoes of Eternity" \ is a masterful testament to Seraphina Rivers' artistic prowess, leaving an enduring impact on all who embark on this musical voyage through \ time and space. """ ) ``` ### Output Example You can print the structured output like this: ```python print(output) ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/llama_api/
172907
def _chat(self, messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponse: url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "messages": [ message.dict(exclude={"additional_kwargs"}) for message in messages ], **self._get_all_kwargs(**kwargs), } response = requests.post(url, json=payload, headers=self.headers) response.raise_for_status() data = response.json() message = ChatMessage( role="assistant", content=data["choices"][0]["message"]["content"] ) return ChatResponse(message=message, raw=data) @llm_chat_callback() def chat(self, messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponse: return self._chat(messages, **kwargs) async def _acomplete(self, prompt: str, **kwargs: Any) -> CompletionResponse: url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "prompt": prompt, **self._get_all_kwargs(**kwargs), } async with httpx.AsyncClient() as client: response = await client.post(url, json=payload, headers=self.headers) response.raise_for_status() data = response.json() return CompletionResponse(text=data["choices"][0]["text"], raw=data) @llm_completion_callback() async def acomplete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponse: if self._is_chat_model(): raise ValueError("The complete method is not supported for chat models.") return await self._acomplete(prompt, **kwargs) async def _achat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponse: url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "messages": [ message.dict(exclude={"additional_kwargs"}) for message in messages ], **self._get_all_kwargs(**kwargs), } async with httpx.AsyncClient() as client: response = await client.post(url, json=payload, headers=self.headers) response.raise_for_status() data = response.json() message = ChatMessage( role="assistant", content=data["choices"][0]["message"]["content"] ) return ChatResponse(message=message, raw=data) @llm_chat_callback() async def achat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponse: return await self._achat(messages, **kwargs) def _stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen: url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "prompt": prompt, "stream": True, **self._get_all_kwargs(**kwargs), } def gen() -> CompletionResponseGen: with requests.Session() as session: with session.post( url, json=payload, headers=self.headers, stream=True ) as response: response.raise_for_status() text = "" for line in response.iter_lines( decode_unicode=True ): # decode lines to Unicode if line.startswith("data:"): data = json.loads(line[5:]) delta = data["choices"][0]["text"] text += delta yield CompletionResponse(delta=delta, text=text, raw=data) return gen() @llm_completion_callback() def stream_complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponseGen: if self._is_chat_model(): raise ValueError("The complete method is not supported for chat models.") stream_complete_fn = self._stream_complete return stream_complete_fn(prompt, **kwargs) async def _astream_complete( self, prompt: str, **kwargs: Any ) -> CompletionResponseAsyncGen: import aiohttp url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "prompt": prompt, "stream": True, **self._get_all_kwargs(**kwargs), } async def gen() -> CompletionResponseAsyncGen: async with aiohttp.ClientSession() as session: async with session.post( url, json=payload, headers=self.headers ) as 
response: response.raise_for_status() text = "" async for line in response.content: line_text = line.decode("utf-8").strip() if line_text.startswith("data:"): data = json.loads(line_text[5:]) delta = data["choices"][0]["text"] text += delta yield CompletionResponse(delta=delta, text=text, raw=data) return gen() @llm_completion_callback() async def astream_complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponseAsyncGen: if self._is_chat_model(): raise ValueError("The complete method is not supported for chat models.") return await self._astream_complete(prompt, **kwargs) def _stream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseGen: url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "messages": [ message.dict(exclude={"additional_kwargs"}) for message in messages ], "stream": True, **self._get_all_kwargs(**kwargs), } def gen() -> ChatResponseGen: content = "" with requests.Session() as session: with session.post( url, json=payload, headers=self.headers, stream=True ) as response: response.raise_for_status() for line in response.iter_lines( decode_unicode=True ): # decode lines to Unicode if line.startswith("data:"): data = json.loads(line[5:]) delta = data["choices"][0]["delta"]["content"] content += delta message = ChatMessage( role="assistant", content=content, raw=data ) yield ChatResponse(message=message, delta=delta, raw=data) return gen() @llm_chat_callback() def stream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseGen: return self._stream_chat(messages, **kwargs) async def _astream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseAsyncGen: import aiohttp url = f"{self.api_base}/chat/completions" payload = { "model": self.model, "messages": [ message.dict(exclude={"additional_kwargs"}) for message in messages ], "stream": True, **self._get_all_kwargs(**kwargs), } async def gen() -> ChatResponseAsyncGen: async with aiohttp.ClientSession() as session: async with session.post( url, json=payload, headers=self.headers ) as response: response.raise_for_status() content = "" async for line in response.content: line_text = line.decode("utf-8").strip() if line_text.startswith("data:"): data = json.loads(line_text[5:]) delta = data["choices"][0]["delta"]["content"] content += delta message = ChatMessage( role="assistant", content=content, raw=data ) yield ChatResponse(message=message, delta=delta, raw=data) return gen() @llm_chat_callback() async def astream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseAsyncGen: return await self._astream_chat(messages, **kwargs)
173201
@llm_chat_callback() async def astream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseAsyncGen: try: import httpx # Prepare the data payload for the Maritalk API formatted_messages = self.parse_messages_for_model(messages) data = { "model": self.model, "messages": formatted_messages, "do_sample": self.do_sample, "max_tokens": self.max_tokens, "temperature": self.temperature, "top_p": self.top_p, "stream": True, **kwargs, } headers = {"authorization": f"Key {self.api_key}"} async def gen() -> ChatResponseAsyncGen: async with httpx.AsyncClient() as client: async with client.stream( "POST", self._endpoint, data=json.dumps(data), headers=headers, timeout=None, ) as response: if response.status_code == 200: content = "" async for line in response.aiter_lines(): if line.startswith("data: "): response_data = line.replace("data: ", "") if response_data: parsed_data = json.loads(response_data) if "text" in parsed_data: content_delta = parsed_data["text"] content += content_delta yield ChatResponse( message=ChatMessage( role=MessageRole.ASSISTANT, content=content, ), delta=content_delta, raw=parsed_data, ) else: raise MaritalkHTTPError(response) return gen() except ImportError: raise ImportError( "Could not import httpx python package. " "Please install it with `pip install httpx`." ) @llm_completion_callback() async def astream_complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponseAsyncGen: astream_complete_fn = astream_chat_to_completion_decorator(self.astream_chat) return await astream_complete_fn(prompt, **kwargs)
173266
# LlamaIndex Llms Integration: Huggingface API Integration with Hugging Face's Inference API for generating text. For more information on Hugging Face's Inference API, visit [Hugging Face's Inference API documentation](https://huggingface.co/docs/api-inference/quicktour). ## Installation ```shell pip install llama-index-llms-huggingface-api ``` ## Usage ```python from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI llm = HuggingFaceInferenceAPI( model_name="openai-community/gpt2", temperature=0.7, max_tokens=100, token="<your-token>", # Optional ) response = llm.complete("Hello, how are you?") ```
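Since `HuggingFaceInferenceAPI` implements the shared LlamaIndex LLM interface, chat-style calls are also available. A minimal sketch, assuming a conversational model is reachable through the Inference API with your token (the `HuggingFaceH4/zephyr-7b-beta` name below is only an example):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(
    model_name="HuggingFaceH4/zephyr-7b-beta",  # example model; use any hosted chat model
    token="<your-token>",
)

messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="What is the capital of France?"),
]

# chat() comes from the same LLM interface as complete()
response = llm.chat(messages)
print(response)
```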
173399
# LlamaIndex Llms Integration: Ollama ## Installation To install the required package, run: ```bash %pip install llama-index-llms-ollama ``` ## Setup 1. Follow the [Ollama README](https://ollama.com) to set up and run a local Ollama instance. 2. When the Ollama app is running on your local machine, it will serve all of your local models on `localhost:11434`. 3. Select your model when creating the `Ollama` instance by specifying `model="<model family>:<version>"`. 4. You can increase the default timeout (30 seconds) by setting `Ollama(..., request_timeout=300.0)`. 5. If you set `llm = Ollama(..., model="<model family>")` without a version, it will automatically look for the latest version. ## Usage ### Initialize Ollama ```python from llama_index.llms.ollama import Ollama llm = Ollama(model="llama3.1:latest", request_timeout=120.0) ``` ### Generate Completions To generate a text completion for a prompt, use the `complete` method: ```python resp = llm.complete("Who is Paul Graham?") print(resp) ``` ### Chat Responses To send a chat message and receive a response, create a list of `ChatMessage` instances and use the `chat` method: ```python from llama_index.core.llms import ChatMessage messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="What is your name?"), ] resp = llm.chat(messages) print(resp) ``` ### Streaming Responses #### Stream Complete To stream responses for a prompt, use the `stream_complete` method: ```python response = llm.stream_complete("Who is Paul Graham?") for r in response: print(r.delta, end="") ``` #### Stream Chat To stream chat responses, use the `stream_chat` method: ```python messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="What is your name?"), ] resp = llm.stream_chat(messages) for r in resp: print(r.delta, end="") ``` ### JSON Mode Ollama supports a JSON mode to ensure all responses are valid JSON, which is useful for tools that need to parse structured outputs: ```python llm = Ollama(model="llama3.1:latest", request_timeout=120.0, json_mode=True) response = llm.complete( "Who is Paul Graham? Output as a structured JSON object." ) print(str(response)) ``` ### Structured Outputs You can attach a Pydantic class to the LLM to ensure structured outputs: ```python from llama_index.core.bridge.pydantic import BaseModel from llama_index.core.tools import FunctionTool class Song(BaseModel): """A song with name and artist.""" name: str artist: str llm = Ollama(model="llama3.1:latest", request_timeout=120.0) sllm = llm.as_structured_llm(Song) response = sllm.chat([ChatMessage(role="user", content="Name a random song!")]) print( response.message.content ) # e.g., {"name": "Yesterday", "artist": "The Beatles"} ``` ### Asynchronous Chat You can also use asynchronous chat: ```python response = await sllm.achat( [ChatMessage(role="user", content="Name a random song!")] ) print(response.message.content) ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/ollama/
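Because `Ollama` is a standard LlamaIndex LLM, it can also be set as the global default and paired with a local embedding model for a fully local RAG pipeline. A minimal sketch, assuming the separate `llama-index-embeddings-ollama` package is installed, that `llama3.1` and `nomic-embed-text` have been pulled with `ollama pull`, and that a `data/` folder with documents exists:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Route both generation and embeddings through the local Ollama server
Settings.llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is this collection of documents about?"))
```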
173405
@llm_chat_callback() def stream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseGen: ollama_messages = self._convert_to_ollama_messages(messages) tools = kwargs.pop("tools", None) def gen() -> ChatResponseGen: response = self.client.chat( model=self.model, messages=ollama_messages, stream=True, format="json" if self.json_mode else "", tools=tools, options=self._model_kwargs, keep_alive=self.keep_alive, ) response_txt = "" for r in response: if r["message"]["content"] is None: continue response_txt += r["message"]["content"] tool_calls = r["message"].get("tool_calls", []) token_counts = self._get_response_token_counts(r) if token_counts: r["usage"] = token_counts yield ChatResponse( message=ChatMessage( content=response_txt, role=r["message"]["role"], additional_kwargs={"tool_calls": tool_calls}, ), delta=r["message"]["content"], raw=r, ) return gen() @llm_chat_callback() async def astream_chat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseAsyncGen: ollama_messages = self._convert_to_ollama_messages(messages) tools = kwargs.pop("tools", None) async def gen() -> ChatResponseAsyncGen: response = await self.async_client.chat( model=self.model, messages=ollama_messages, stream=True, format="json" if self.json_mode else "", tools=tools, options=self._model_kwargs, keep_alive=self.keep_alive, ) response_txt = "" async for r in response: if r["message"]["content"] is None: continue response_txt += r["message"]["content"] tool_calls = r["message"].get("tool_calls", []) token_counts = self._get_response_token_counts(r) if token_counts: r["usage"] = token_counts yield ChatResponse( message=ChatMessage( content=response_txt, role=r["message"]["role"], additional_kwargs={"tool_calls": tool_calls}, ), delta=r["message"]["content"], raw=r, ) return gen() @llm_chat_callback() async def achat( self, messages: Sequence[ChatMessage], **kwargs: Any ) -> ChatResponseAsyncGen: ollama_messages = self._convert_to_ollama_messages(messages) tools = kwargs.pop("tools", None) response = await self.async_client.chat( model=self.model, messages=ollama_messages, stream=False, format="json" if self.json_mode else "", tools=tools, options=self._model_kwargs, keep_alive=self.keep_alive, ) tool_calls = response["message"].get("tool_calls", []) token_counts = self._get_response_token_counts(response) if token_counts: response["usage"] = token_counts return ChatResponse( message=ChatMessage( content=response["message"]["content"], role=response["message"]["role"], additional_kwargs={"tool_calls": tool_calls}, ), raw=response, ) @llm_completion_callback() def complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponse: return chat_to_completion_decorator(self.chat)(prompt, **kwargs) @llm_completion_callback() async def acomplete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponse: return await achat_to_completion_decorator(self.achat)(prompt, **kwargs) @llm_completion_callback() def stream_complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponseGen: return stream_chat_to_completion_decorator(self.stream_chat)(prompt, **kwargs) @llm_completion_callback() async def astream_complete( self, prompt: str, formatted: bool = False, **kwargs: Any ) -> CompletionResponseAsyncGen: return await astream_chat_to_completion_decorator(self.astream_chat)( prompt, **kwargs )
173423
# LlamaIndex Llms Integration: Langchain ## Installation 1. Install the required Python packages: ```bash %pip install llama-index-llms-langchain ``` ## Usage ### Import Required Libraries ```python from langchain.llms import OpenAI from llama_index.llms.langchain import LangChainLLM ``` ### Initialize LangChain LLM To create an instance of `LangChainLLM` with OpenAI: ```python llm = LangChainLLM(llm=OpenAI()) ``` ### Generate Streaming Response To generate a streaming response, use the following code: ```python response_gen = llm.stream_complete("Hi this is") for delta in response_gen: print(delta.delta, end="") ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/langchain/
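The wrapper also exposes the rest of the standard LlamaIndex LLM interface, so the wrapped LangChain model can be used for plain completions or set as the global default. A minimal sketch (it reuses the `from langchain.llms import OpenAI` path shown above; newer LangChain releases may relocate that import):

```python
from langchain.llms import OpenAI
from llama_index.core import Settings
from llama_index.llms.langchain import LangChainLLM

llm = LangChainLLM(llm=OpenAI())

# Non-streaming completion through the standard LlamaIndex interface
print(llm.complete("Hi this is"))

# The wrapped model can be used anywhere LlamaIndex expects an LLM,
# for example as the global default for query engines and agents.
Settings.llm = llm
```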
173485
# LlamaIndex Llms Integration: Text Generation Inference Integration with [Text Generation Inference](https://huggingface.co/docs/text-generation-inference) from Hugging Face to generate text. ## Installation ```shell pip install llama-index-llms-text-generation-inference ``` ## Usage ```python from llama_index.llms.text_generation_inference import TextGenerationInference llm = TextGenerationInference( model_name="openai-community/gpt2", temperature=0.7, max_tokens=100, token="<your-token>", # Optional ) response = llm.complete("Hello, how are you?") ```
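The class implements the shared LlamaIndex LLM interface, so chat-style calls work the same way as `complete`. A minimal sketch that simply reuses the constructor arguments shown above (point them at your own Text Generation Inference deployment; a chat-tuned model will give more sensible replies than `gpt2`):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.text_generation_inference import TextGenerationInference

llm = TextGenerationInference(
    model_name="openai-community/gpt2",  # reused from the snippet above; swap in your deployed model
    temperature=0.7,
    max_tokens=100,
    token="<your-token>",  # Optional
)

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Hello, how are you?"),
]

# chat() and stream_chat() come from the same interface as complete()
print(llm.chat(messages))
```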
173496
# LlamaIndex Llms Integration: Openai ## Installation To install the required package, run: ```bash %pip install llama-index-llms-openai ``` ## Setup 1. Set your OpenAI API key as an environment variable. You can replace `"sk-..."` with your actual API key: ```python import os os.environ["OPENAI_API_KEY"] = "sk-..." ``` ## Basic Usage ### Generate Completions To generate a completion for a prompt, use the `complete` method: ```python from llama_index.llms.openai import OpenAI resp = OpenAI().complete("Paul Graham is ") print(resp) ``` ### Chat Responses To send a chat message and receive a response, create a list of `ChatMessage` instances and use the `chat` method: ```python from llama_index.core.llms import ChatMessage messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="What is your name?"), ] resp = OpenAI().chat(messages) print(resp) ``` ## Streaming Responses ### Stream Complete To stream responses for a prompt, use the `stream_complete` method: ```python from llama_index.llms.openai import OpenAI llm = OpenAI() resp = llm.stream_complete("Paul Graham is ") for r in resp: print(r.delta, end="") ``` ### Stream Chat To stream chat responses, use the `stream_chat` method: ```python from llama_index.llms.openai import OpenAI from llama_index.core.llms import ChatMessage llm = OpenAI() messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="What is your name?"), ] resp = llm.stream_chat(messages) for r in resp: print(r.delta, end="") ``` ## Configure Model You can specify a particular model when creating the `OpenAI` instance: ```python llm = OpenAI(model="gpt-3.5-turbo") resp = llm.complete("Paul Graham is ") print(resp) messages = [ ChatMessage( role="system", content="You are a pirate with a colorful personality." ), ChatMessage(role="user", content="What is your name?"), ] resp = llm.chat(messages) print(resp) ``` ## Asynchronous Usage You can also use asynchronous methods for completion: ```python from llama_index.llms.openai import OpenAI llm = OpenAI(model="gpt-3.5-turbo") resp = await llm.acomplete("Paul Graham is ") print(resp) ``` ## Set API Key at a Per-Instance Level If desired, you can have separate LLM instances use different API keys: ```python from llama_index.llms.openai import OpenAI llm = OpenAI(model="gpt-3.5-turbo", api_key="BAD_KEY") resp = OpenAI().complete("Paul Graham is ") print(resp) ``` ### LLM Implementation example https://docs.llamaindex.ai/en/stable/examples/llm/openai/
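Streaming also has async variants (`astream_complete`, `astream_chat`) that yield deltas from an async generator. A minimal sketch wrapped in `asyncio.run` so it can run outside a notebook:

```python
import asyncio

from llama_index.llms.openai import OpenAI


async def main() -> None:
    llm = OpenAI(model="gpt-3.5-turbo")

    # astream_complete returns an async generator of incremental deltas
    gen = await llm.astream_complete("Paul Graham is ")
    async for r in gen:
        print(r.delta, end="")
    print()


asyncio.run(main())
```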
173722
def _handle_upserts( self, nodes: Sequence[BaseNode], store_doc_text: bool = True, ) -> Sequence[BaseNode]: """Handle docstore upserts by checking hashes and ids.""" assert self.docstore is not None doc_ids_from_nodes = set() deduped_nodes_to_run = {} for node in nodes: ref_doc_id = node.ref_doc_id if node.ref_doc_id else node.id_ doc_ids_from_nodes.add(ref_doc_id) existing_hash = self.docstore.get_document_hash(ref_doc_id) if not existing_hash: # document doesn't exist, so add it deduped_nodes_to_run[ref_doc_id] = node elif existing_hash and existing_hash != node.hash: self.docstore.delete_ref_doc(ref_doc_id, raise_error=False) if self.vector_store is not None: self.vector_store.delete(ref_doc_id) deduped_nodes_to_run[ref_doc_id] = node else: continue # document exists and is unchanged, so skip it if self.docstore_strategy == DocstoreStrategy.UPSERTS_AND_DELETE: # Identify missing docs and delete them from docstore and vector store existing_doc_ids_before = set( self.docstore.get_all_document_hashes().values() ) doc_ids_to_delete = existing_doc_ids_before - doc_ids_from_nodes for ref_doc_id in doc_ids_to_delete: self.docstore.delete_document(ref_doc_id) if self.vector_store is not None: self.vector_store.delete(ref_doc_id) nodes_to_run = list(deduped_nodes_to_run.values()) self.docstore.set_document_hashes({n.id_: n.hash for n in nodes_to_run}) self.docstore.add_documents(nodes_to_run, store_text=store_doc_text) return nodes_to_run @staticmethod def _node_batcher( num_batches: int, nodes: Union[Sequence[BaseNode], List[Document]] ) -> Generator[Union[Sequence[BaseNode], List[Document]], Any, Any]: """Yield successive n-sized chunks from lst.""" batch_size = max(1, int(len(nodes) / num_batches)) for i in range(0, len(nodes), batch_size): yield nodes[i : i + batch_size] @dispatcher.span def run( self, show_progress: bool = False, documents: Optional[List[Document]] = None, nodes: Optional[Sequence[BaseNode]] = None, cache_collection: Optional[str] = None, in_place: bool = True, store_doc_text: bool = True, num_workers: Optional[int] = None, **kwargs: Any, ) -> Sequence[BaseNode]: """ Run a series of transformations on a set of nodes. If a vector store is provided, nodes with embeddings will be added to the vector store. If a vector store + docstore are provided, the docstore will be used to de-duplicate documents. Args: show_progress (bool, optional): Shows execution progress bar(s). Defaults to False. documents (Optional[List[Document]], optional): Set of documents to be transformed. Defaults to None. nodes (Optional[Sequence[BaseNode]], optional): Set of nodes to be transformed. Defaults to None. cache_collection (Optional[str], optional): Cache for transformations. Defaults to None. in_place (bool, optional): Whether transformations creates a new list for transformed nodes or modifies the array passed to `run_transformations`. Defaults to True. num_workers (Optional[int], optional): The number of parallel processes to use. If set to None, then sequential compute is used. Defaults to None. 
Returns: Sequence[BaseNode]: The set of transformed Nodes/Documents """ input_nodes = self._prepare_inputs(documents, nodes) # check if we need to dedup if self.docstore is not None and self.vector_store is not None: if self.docstore_strategy in ( DocstoreStrategy.UPSERTS, DocstoreStrategy.UPSERTS_AND_DELETE, ): nodes_to_run = self._handle_upserts( input_nodes, store_doc_text=store_doc_text ) elif self.docstore_strategy == DocstoreStrategy.DUPLICATES_ONLY: nodes_to_run = self._handle_duplicates( input_nodes, store_doc_text=store_doc_text ) else: raise ValueError(f"Invalid docstore strategy: {self.docstore_strategy}") elif self.docstore is not None and self.vector_store is None: if self.docstore_strategy == DocstoreStrategy.UPSERTS: print( "Docstore strategy set to upserts, but no vector store. " "Switching to duplicates_only strategy." ) self.docstore_strategy = DocstoreStrategy.DUPLICATES_ONLY elif self.docstore_strategy == DocstoreStrategy.UPSERTS_AND_DELETE: print( "Docstore strategy set to upserts and delete, but no vector store. " "Switching to duplicates_only strategy." ) self.docstore_strategy = DocstoreStrategy.DUPLICATES_ONLY nodes_to_run = self._handle_duplicates( input_nodes, store_doc_text=store_doc_text ) else: nodes_to_run = input_nodes if num_workers and num_workers > 1: if num_workers > multiprocessing.cpu_count(): warnings.warn( "Specified num_workers exceed number of CPUs in the system. " "Setting `num_workers` down to the maximum CPU count." ) with multiprocessing.get_context("spawn").Pool(num_workers) as p: node_batches = self._node_batcher( num_batches=num_workers, nodes=nodes_to_run ) nodes_parallel = p.starmap( run_transformations, zip( node_batches, repeat(self.transformations), repeat(in_place), repeat(self.cache if not self.disable_cache else None), repeat(cache_collection), ), ) nodes = reduce(lambda x, y: x + y, nodes_parallel, []) # type: ignore else: nodes = run_transformations( nodes_to_run, self.transformations, show_progress=show_progress, cache=self.cache if not self.disable_cache else None, cache_collection=cache_collection, in_place=in_place, **kwargs, ) nodes = nodes or [] if self.vector_store is not None: nodes_with_embeddings = [n for n in nodes if n.embedding is not None] if nodes_with_embeddings: self.vector_store.add(nodes_with_embeddings) return nodes # ------ async methods ------ async def _ahandle_duplicates( self, nodes: Sequence[BaseNode], store_doc_text: bool = True, ) -> Sequence[BaseNode]: """Handle docstore duplicates by checking all hashes.""" assert self.docstore is not None existing_hashes = await self.docstore.aget_all_document_hashes() current_hashes = [] nodes_to_run = [] for node in nodes: if node.hash not in existing_hashes and node.hash not in current_hashes: await self.docstore.aset_document_hash(node.id_, node.hash) nodes_to_run.append(node) current_hashes.append(node.hash) await self.docstore.async_add_documents(nodes_to_run, store_text=store_doc_text) return nodes_to_run
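The upsert logic above only kicks in when the pipeline is given both a docstore and a vector store. A minimal sketch of that wiring, using the in-memory `SimpleDocumentStore`/`SimpleVectorStore` and a `MockEmbedding` as a stand-in for a real embedding model (all illustrative choices, not requirements):

```python
from llama_index.core import Document
from llama_index.core.embeddings.mock_embed_model import MockEmbedding
from llama_index.core.ingestion import DocstoreStrategy, IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore
from llama_index.core.vector_stores import SimpleVectorStore

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(chunk_size=512), MockEmbedding(embed_dim=8)],
    docstore=SimpleDocumentStore(),
    vector_store=SimpleVectorStore(),
    docstore_strategy=DocstoreStrategy.UPSERTS,
)

docs = [Document(text="hello world", doc_id="doc-1")]

# First run ingests the document and records its hash in the docstore.
print(len(pipeline.run(documents=docs)))  # >= 1 node

# Re-running with the same, unchanged document is skipped by the hash
# check in _handle_upserts, so no new nodes are produced.
print(len(pipeline.run(documents=docs)))  # 0
```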
173732
"""LlamaIndex data structures.""" # indices from llama_index.core.indices.composability.graph import ComposableGraph from llama_index.core.indices.document_summary import ( DocumentSummaryIndex, GPTDocumentSummaryIndex, ) from llama_index.core.indices.document_summary.base import DocumentSummaryIndex from llama_index.core.indices.empty.base import EmptyIndex, GPTEmptyIndex from llama_index.core.indices.keyword_table.base import ( GPTKeywordTableIndex, KeywordTableIndex, ) from llama_index.core.indices.keyword_table.rake_base import ( GPTRAKEKeywordTableIndex, RAKEKeywordTableIndex, ) from llama_index.core.indices.keyword_table.simple_base import ( GPTSimpleKeywordTableIndex, SimpleKeywordTableIndex, ) from llama_index.core.indices.knowledge_graph import ( KnowledgeGraphIndex, ) from llama_index.core.indices.list import GPTListIndex, ListIndex, SummaryIndex from llama_index.core.indices.list.base import ( GPTListIndex, ListIndex, SummaryIndex, ) from llama_index.core.indices.loading import ( load_graph_from_storage, load_index_from_storage, load_indices_from_storage, ) from llama_index.core.indices.multi_modal import MultiModalVectorStoreIndex from llama_index.core.indices.struct_store.pandas import ( GPTPandasIndex, PandasIndex, ) from llama_index.core.indices.struct_store.sql import ( GPTSQLStructStoreIndex, SQLStructStoreIndex, ) from llama_index.core.indices.tree.base import GPTTreeIndex, TreeIndex from llama_index.core.indices.vector_store import ( GPTVectorStoreIndex, VectorStoreIndex, ) from llama_index.core.indices.property_graph.base import ( PropertyGraphIndex, ) __all__ = [ "load_graph_from_storage", "load_index_from_storage", "load_indices_from_storage", "KeywordTableIndex", "SimpleKeywordTableIndex", "RAKEKeywordTableIndex", "SummaryIndex", "TreeIndex", "DocumentSummaryIndex", "KnowledgeGraphIndex", "PandasIndex", "VectorStoreIndex", "SQLStructStoreIndex", "MultiModalVectorStoreIndex", "EmptyIndex", "ComposableGraph", "PropertyGraphIndex", # legacy "GPTKnowledgeGraphIndex", "GPTKeywordTableIndex", "GPTSimpleKeywordTableIndex", "GPTRAKEKeywordTableIndex", "GPTDocumentSummaryIndex", "GPTListIndex", "GPTTreeIndex", "GPTPandasIndex", "ListIndex", "GPTVectorStoreIndex", "GPTSQLStructStoreIndex", "GPTEmptyIndex", ]
173737
class PromptHelper(BaseComponent): """Prompt helper. General prompt helper that can help deal with LLM context window token limitations. At its core, it calculates available context size by starting with the context window size of an LLM and reserve token space for the prompt template, and the output. It provides utility for "repacking" text chunks (retrieved from index) to maximally make use of the available context window (and thereby reducing the number of LLM calls needed), or truncating them so that they fit in a single LLM call. Args: context_window (int): Context window for the LLM. num_output (int): Number of outputs for the LLM. chunk_overlap_ratio (float): Chunk overlap as a ratio of chunk size chunk_size_limit (Optional[int]): Maximum chunk size to use. tokenizer (Optional[Callable[[str], List]]): Tokenizer to use. separator (str): Separator for text splitter """ context_window: int = Field( default=DEFAULT_CONTEXT_WINDOW, description="The maximum context size that will get sent to the LLM.", ) num_output: int = Field( default=DEFAULT_NUM_OUTPUTS, description="The amount of token-space to leave in input for generation.", ) chunk_overlap_ratio: float = Field( default=DEFAULT_CHUNK_OVERLAP_RATIO, description="The percentage token amount that each chunk should overlap.", ) chunk_size_limit: Optional[int] = Field(description="The maximum size of a chunk.") separator: str = Field( default=" ", description="The separator when chunking tokens." ) _token_counter: TokenCounter = PrivateAttr() def __init__( self, context_window: int = DEFAULT_CONTEXT_WINDOW, num_output: int = DEFAULT_NUM_OUTPUTS, chunk_overlap_ratio: float = DEFAULT_CHUNK_OVERLAP_RATIO, chunk_size_limit: Optional[int] = None, tokenizer: Optional[Callable[[str], List]] = None, separator: str = " ", ) -> None: """Init params.""" if chunk_overlap_ratio > 1.0 or chunk_overlap_ratio < 0.0: raise ValueError("chunk_overlap_ratio must be a float between 0. and 1.") super().__init__( context_window=context_window, num_output=num_output, chunk_overlap_ratio=chunk_overlap_ratio, chunk_size_limit=chunk_size_limit, separator=separator, ) # TODO: make configurable self._token_counter = TokenCounter(tokenizer=tokenizer) @classmethod def from_llm_metadata( cls, llm_metadata: LLMMetadata, chunk_overlap_ratio: float = DEFAULT_CHUNK_OVERLAP_RATIO, chunk_size_limit: Optional[int] = None, tokenizer: Optional[Callable[[str], List]] = None, separator: str = " ", ) -> "PromptHelper": """Create from llm predictor. This will autofill values like context_window and num_output. """ context_window = llm_metadata.context_window if llm_metadata.num_output == -1: num_output = DEFAULT_NUM_OUTPUTS else: num_output = llm_metadata.num_output return cls( context_window=context_window, num_output=num_output, chunk_overlap_ratio=chunk_overlap_ratio, chunk_size_limit=chunk_size_limit, tokenizer=tokenizer, separator=separator, ) @classmethod def class_name(cls) -> str: return "PromptHelper" def _get_available_context_size(self, num_prompt_tokens: int) -> int: """Get available context size. This is calculated as: available context window = total context window - input (partially filled prompt) - output (room reserved for response) Notes: - Available context size is further clamped to be non-negative. """ context_size_tokens = self.context_window - num_prompt_tokens - self.num_output if context_size_tokens < 0: raise ValueError( f"Calculated available context size {context_size_tokens} was" " not non-negative." 
) return context_size_tokens def _get_tools_from_llm( self, llm: Optional[LLM] = None, tools: Optional[List["BaseTool"]] = None ) -> List["BaseTool"]: from llama_index.core.program.function_program import get_function_tool tools = tools or [] if isinstance(llm, StructuredLLM): tools.append(get_function_tool(llm.output_cls)) return tools def _get_available_chunk_size( self, prompt: BasePromptTemplate, num_chunks: int = 1, padding: int = 5, llm: Optional[LLM] = None, tools: Optional[List["BaseTool"]] = None, ) -> int: """Get available chunk size. This is calculated as: available chunk size = available context window // number_chunks - padding Notes: - By default, we use padding of 5 (to save space for formatting needs). - Available chunk size is further clamped to chunk_size_limit if specified. """ tools = self._get_tools_from_llm(llm=llm, tools=tools) if isinstance(prompt, SelectorPromptTemplate): prompt = prompt.select(llm=llm) if isinstance(prompt, ChatPromptTemplate): messages: List[ChatMessage] = prompt.message_templates # account for partial formatting partial_messages = [] for message in messages: partial_message = deepcopy(message) prompt_kwargs = prompt.kwargs or {} partial_message.content = format_string( partial_message.content or "", **prompt_kwargs ) # add to list of partial messages partial_messages.append(partial_message) num_prompt_tokens = self._token_counter.estimate_tokens_in_messages( partial_messages ) else: prompt_str = get_empty_prompt_txt(prompt) num_prompt_tokens = self._token_counter.get_string_tokens(prompt_str) num_prompt_tokens += self._token_counter.estimate_tokens_in_tools( [x.metadata.to_openai_tool() for x in tools] ) # structured llms cannot have system prompts currently -- check the underlying llm if isinstance(llm, StructuredLLM): num_prompt_tokens += self._token_counter.get_string_tokens( llm.llm.system_prompt or "" ) elif llm is not None: num_prompt_tokens += self._token_counter.get_string_tokens( llm.system_prompt or "" ) available_context_size = self._get_available_context_size(num_prompt_tokens) result = available_context_size // num_chunks - padding if self.chunk_size_limit is not None: result = min(result, self.chunk_size_limit) return result def get_text_splitter_given_prompt( self, prompt: BasePromptTemplate, num_chunks: int = 1, padding: int = DEFAULT_PADDING, llm: Optional[LLM] = None, tools: Optional[List["BaseTool"]] = None, ) -> TokenTextSplitter: """Get text splitter configured to maximally pack available context window, taking into account of given prompt, and desired number of chunks. 
""" chunk_size = self._get_available_chunk_size( prompt, num_chunks, padding=padding, llm=llm, tools=tools ) if chunk_size <= 0: raise ValueError(f"Chunk size {chunk_size} is not positive.") chunk_overlap = int(self.chunk_overlap_ratio * chunk_size) return TokenTextSplitter( separator=self.separator, chunk_size=chunk_size, chunk_overlap=chunk_overlap, tokenizer=self._token_counter.tokenizer, ) def truncate( self, prompt: BasePromptTemplate, text_chunks: Sequence[str], padding: int = DEFAULT_PADDING, llm: Optional[LLM] = None, tools: Optional[List["BaseTool"]] = None, ) -> List[str]: """Truncate text chunks to fit available context window.""" text_splitter = self.get_text_splitter_given_prompt( prompt, num_chunks=len(text_chunks), padding=padding, llm=llm, tools=tools, ) return [truncate_text(chunk, text_splitter) for chunk in text_chunks] def repack( self, prompt: BasePromptTemplate, text_chunks: Sequence[str], padding: int = DEFAULT_PADDING, llm: Optional[LLM] = None, tools: Optional[List["BaseTool"]] = None, ) -> List[str]: """Repack text chunks to fit available context window. This will combine text chunks into consolidated chunks that more fully "pack" the prompt template given the context_window. """ text_splitter = self.get_text_splitter_given_prompt( prompt, padding=padding, llm=llm, tools=tools ) combined_str = "\n\n".join([c.strip() for c in text_chunks if c.strip()]) return text_splitter.split_text(combined_str)
173747
## 🌲 Tree Index Currently the tree index refers to the `TreeIndex` class. It organizes external data into a tree structure that can be queried. ### Index Construction The `TreeIndex` first takes in a set of text documents as input. It then builds up a tree-index in a bottom-up fashion; each parent node is able to summarize the children nodes using a general **summarization prompt**; each intermediate node contains text summarizing the components below. Once the index is built, it can be saved to disk as a JSON and loaded for future use. ### Query There are two query modes: `default` and `retrieve`. **Default (GPTTreeIndexLeafQuery)** Using a **query prompt template**, the TreeIndex will be able to recursively perform tree traversal in a top-down fashion in order to answer a question. For example, in the very beginning GPT-3 is tasked with selecting between _n_ top-level nodes which best answers a provided query, by outputting a number as a multiple-choice problem. The TreeIndex then uses the number to select the corresponding node, and the process repeats recursively among the children nodes until a leaf node is reached. **Retrieve (GPTTreeIndexRetQuery)** Simply use the root nodes as context to synthesize an answer to the query. This is especially effective if the tree is preseeded with a `query_str`. ### Usage ```python from llama_index.core import TreeIndex, SimpleDirectoryReader # build index documents = SimpleDirectoryReader("data").load_data() index = TreeIndex.from_documents(documents) # query query_engine = index.as_query_engine() response = query_engine.query("<question text>") ``` ### FAQ **Why build a tree? Why not just incrementally go through each chunk?** Algorithmically speaking, $O(\log N)$ is better than $O(N)$. More broadly, building a tree helps us to test GPT's capabilities in modeling information in a hierarchy. It seems to me that our brains organize information in a similar way (citation needed). We can use this design to test how GPT can use its own hierarchy to answer questions. Practically speaking, it is much cheaper to do so and I want to limit my monthly spending (see below for costs). **How much does this cost to run?** We currently use the Davinci model for good results. Unfortunately Davinci is quite expensive. The cost of building the tree is roughly $cN\log(N)\frac{p}{1000}$, where $p=4096$ is the prompt limit and $c$ is the cost per 1000 tokens ($0.02 as mentioned on the [pricing page](https://openai.com/api/pricing/)). The cost of querying the tree is roughly $c\log(N)\frac{p}{1000}$. For the NYC example, this equates to \$~0.40 per query.
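The two query modes described above map onto the `retriever_mode` argument of the tree index. A minimal sketch, assuming an LLM is configured (e.g. the OpenAI defaults) and that the current mode names (`"select_leaf"` by default, `"root"` for the retrieve-style query) still apply:

```python
from llama_index.core import SimpleDirectoryReader, TreeIndex

documents = SimpleDirectoryReader("data").load_data()
index = TreeIndex.from_documents(documents)

# Default mode: recursively select child nodes top-down
leaf_engine = index.as_query_engine()
print(leaf_engine.query("<question text>"))

# Retrieve mode: answer directly from the root summaries
root_engine = index.as_query_engine(retriever_mode="root")
print(root_engine.query("<question text>"))
```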
173779
def upsert_triplet( self, triplet: Tuple[str, str, str], include_embeddings: bool = False ) -> None: """Insert triplets and optionally embeddings. Used for manual insertion of KG triplets (in the form of (subject, relationship, object)). Args: triplet (tuple): Knowledge triplet embedding (Any, optional): Embedding option for the triplet. Defaults to None. """ self._graph_store.upsert_triplet(*triplet) triplet_str = str(triplet) if include_embeddings: set_embedding = self._embed_model.get_text_embedding(triplet_str) self._index_struct.add_to_embedding_dict(str(triplet), set_embedding) self._storage_context.index_store.add_index_struct(self._index_struct) def add_node(self, keywords: List[str], node: BaseNode) -> None: """Add node. Used for manual insertion of nodes (keyed by keywords). Args: keywords (List[str]): Keywords to index the node. node (Node): Node to be indexed. """ self._index_struct.add_node(keywords, node) self._docstore.add_documents([node], allow_update=True) def upsert_triplet_and_node( self, triplet: Tuple[str, str, str], node: BaseNode, include_embeddings: bool = False, ) -> None: """Upsert KG triplet and node. Calls both upsert_triplet and add_node. Behavior is idempotent; if Node already exists, only triplet will be added. Args: keywords (List[str]): Keywords to index the node. node (Node): Node to be indexed. include_embeddings (bool): Option to add embeddings for triplets. Defaults to False """ subj, _, obj = triplet self.upsert_triplet(triplet) self.add_node([subj, obj], node) triplet_str = str(triplet) if include_embeddings: set_embedding = self._embed_model.get_text_embedding(triplet_str) self._index_struct.add_to_embedding_dict(str(triplet), set_embedding) self._storage_context.index_store.add_index_struct(self._index_struct) def _delete_node(self, node_id: str, **delete_kwargs: Any) -> None: """Delete a node.""" raise NotImplementedError("Delete is not supported for KG index yet.") @property def ref_doc_info(self) -> Dict[str, RefDocInfo]: """Retrieve a dict mapping of ingested documents and their nodes+metadata.""" node_doc_ids_sets = list(self._index_struct.table.values()) node_doc_ids = list(set().union(*node_doc_ids_sets)) nodes = self.docstore.get_nodes(node_doc_ids) all_ref_doc_info = {} for node in nodes: ref_node = node.source_node if not ref_node: continue ref_doc_info = self.docstore.get_ref_doc_info(ref_node.node_id) if not ref_doc_info: continue all_ref_doc_info[ref_node.node_id] = ref_doc_info return all_ref_doc_info def get_networkx_graph(self, limit: int = 100) -> Any: """Get networkx representation of the graph structure. Args: limit (int): Number of starting nodes to be included in the graph. NOTE: This function requires networkx to be installed. NOTE: This is a beta feature. """ try: import networkx as nx except ImportError: raise ImportError( "Please install networkx to visualize the graph: `pip install networkx`" ) g = nx.Graph() subjs = list(self.index_struct.table.keys()) # add edges rel_map = self._graph_store.get_rel_map(subjs=subjs, depth=1, limit=limit) added_nodes = set() for keyword in rel_map: for path in rel_map[keyword]: subj = keyword for i in range(0, len(path), 2): if i + 2 >= len(path): break if subj not in added_nodes: g.add_node(subj) added_nodes.add(subj) rel = path[i + 1] obj = path[i + 2] g.add_edge(subj, obj, label=rel, title=rel) subj = obj return g @property def query_context(self) -> Dict[str, Any]: return {GRAPH_STORE_KEY: self._graph_store}
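A short sketch of how these manual-insertion hooks are typically driven from user code, starting from an empty index (this assumes an LLM/embedding model is configured, e.g. the OpenAI defaults, and that `networkx` is installed for the last step):

```python
from llama_index.core import KnowledgeGraphIndex
from llama_index.core.schema import TextNode

# Start from an empty index and insert triplets/nodes by hand
index = KnowledgeGraphIndex([])

node = TextNode(text="Alice has known Bob since university.")
index.upsert_triplet_and_node(("Alice", "knows", "Bob"), node)
index.upsert_triplet(("Bob", "works_at", "Acme"))

# Inspect the resulting graph structure
g = index.get_networkx_graph()
print(list(g.edges(data=True)))
```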
173820
def insert_nodes(self, nodes: Sequence[BaseNode], **insert_kwargs: Any) -> None: """ Insert nodes. NOTE: overrides BaseIndex.insert_nodes. VectorStoreIndex only stores nodes in document store if vector store does not store text """ for node in nodes: if isinstance(node, IndexNode): try: node.dict() except ValueError: self._object_map[node.index_id] = node.obj node.obj = None with self._callback_manager.as_trace("insert_nodes"): self._insert(nodes, **insert_kwargs) self._storage_context.index_store.add_index_struct(self._index_struct) def _delete_node(self, node_id: str, **delete_kwargs: Any) -> None: pass def delete_nodes( self, node_ids: List[str], delete_from_docstore: bool = False, **delete_kwargs: Any, ) -> None: """ Delete a list of nodes from the index. Args: node_ids (List[str]): A list of node_ids from the nodes to delete """ # delete nodes from vector store self._vector_store.delete_nodes(node_ids, **delete_kwargs) # delete from docstore only if needed if ( not self._vector_store.stores_text or self._store_nodes_override ) and delete_from_docstore: for node_id in node_ids: self._docstore.delete_document(node_id, raise_error=False) def _delete_from_index_struct(self, ref_doc_id: str) -> None: # delete from index_struct only if needed if not self._vector_store.stores_text or self._store_nodes_override: ref_doc_info = self._docstore.get_ref_doc_info(ref_doc_id) if ref_doc_info is not None: for node_id in ref_doc_info.node_ids: self._index_struct.delete(node_id) self._vector_store.delete(node_id) def _delete_from_docstore(self, ref_doc_id: str) -> None: # delete from docstore only if needed if not self._vector_store.stores_text or self._store_nodes_override: self._docstore.delete_ref_doc(ref_doc_id, raise_error=False) def delete_ref_doc( self, ref_doc_id: str, delete_from_docstore: bool = False, **delete_kwargs: Any ) -> None: """Delete a document and it's nodes by using ref_doc_id.""" self._vector_store.delete(ref_doc_id, **delete_kwargs) self._delete_from_index_struct(ref_doc_id) if delete_from_docstore: self._delete_from_docstore(ref_doc_id) self._storage_context.index_store.add_index_struct(self._index_struct) async def _adelete_from_index_struct(self, ref_doc_id: str) -> None: """Delete from index_struct only if needed.""" if not self._vector_store.stores_text or self._store_nodes_override: ref_doc_info = await self._docstore.aget_ref_doc_info(ref_doc_id) if ref_doc_info is not None: for node_id in ref_doc_info.node_ids: self._index_struct.delete(node_id) self._vector_store.delete(node_id) async def _adelete_from_docstore(self, ref_doc_id: str) -> None: """Delete from docstore only if needed.""" if not self._vector_store.stores_text or self._store_nodes_override: await self._docstore.adelete_ref_doc(ref_doc_id, raise_error=False) async def adelete_ref_doc( self, ref_doc_id: str, delete_from_docstore: bool = False, **delete_kwargs: Any ) -> None: """Delete a document and it's nodes by using ref_doc_id.""" tasks = [ self._vector_store.adelete(ref_doc_id, **delete_kwargs), self._adelete_from_index_struct(ref_doc_id), ] if delete_from_docstore: tasks.append(self._adelete_from_docstore(ref_doc_id)) await asyncio.gather(*tasks) self._storage_context.index_store.add_index_struct(self._index_struct) @property def ref_doc_info(self) -> Dict[str, RefDocInfo]: """Retrieve a dict mapping of ingested documents and their nodes+metadata.""" if not self._vector_store.stores_text or self._store_nodes_override: node_doc_ids = list(self.index_struct.nodes_dict.values()) nodes = 
self.docstore.get_nodes(node_doc_ids) all_ref_doc_info = {} for node in nodes: ref_node = node.source_node if not ref_node: continue ref_doc_info = self.docstore.get_ref_doc_info(ref_node.node_id) if not ref_doc_info: continue all_ref_doc_info[ref_node.node_id] = ref_doc_info return all_ref_doc_info else: raise NotImplementedError( "Vector store integrations that store text in the vector store are " "not supported by ref_doc_info yet." ) GPTVectorStoreIndex = VectorStoreIndex
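From user code, the deletion paths above are usually reached through `delete_ref_doc`. A minimal sketch with the default in-memory vector store (which does not store text, so the docstore/index-struct branches above are exercised); an embedding model such as the OpenAI default is assumed:

```python
from llama_index.core import Document, VectorStoreIndex

index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex is a data framework for LLM applications.", doc_id="doc-1")]
)

# Insert another source document after construction
index.insert(Document(text="It supports many vector stores.", doc_id="doc-2"))

# Remove everything derived from doc-1, including its docstore entries
index.delete_ref_doc("doc-1", delete_from_docstore=True)

print(list(index.ref_doc_info.keys()))  # only doc-2 remains
```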
173881
"""Retriever tool.""" from typing import TYPE_CHECKING, Any, List, Optional from llama_index.core.base.base_retriever import BaseRetriever if TYPE_CHECKING: from llama_index.core.langchain_helpers.agents.tools import LlamaIndexTool from llama_index.core.schema import MetadataMode, NodeWithScore, QueryBundle from llama_index.core.tools.types import AsyncBaseTool, ToolMetadata, ToolOutput from llama_index.core.postprocessor.types import BaseNodePostprocessor DEFAULT_NAME = "retriever_tool" DEFAULT_DESCRIPTION = """Useful for running a natural language query against a knowledge base and retrieving a set of relevant documents. """ class RetrieverTool(AsyncBaseTool): """Retriever tool. A tool making use of a retriever. Args: retriever (BaseRetriever): A retriever. metadata (ToolMetadata): The associated metadata of the query engine. node_postprocessors (Optional[List[BaseNodePostprocessor]]): A list of node postprocessors. """ def __init__( self, retriever: BaseRetriever, metadata: ToolMetadata, node_postprocessors: Optional[List[BaseNodePostprocessor]] = None, ) -> None: self._retriever = retriever self._metadata = metadata self._node_postprocessors = node_postprocessors or [] @classmethod def from_defaults( cls, retriever: BaseRetriever, node_postprocessors: Optional[List[BaseNodePostprocessor]] = None, name: Optional[str] = None, description: Optional[str] = None, ) -> "RetrieverTool": name = name or DEFAULT_NAME description = description or DEFAULT_DESCRIPTION metadata = ToolMetadata(name=name, description=description) return cls( retriever=retriever, metadata=metadata, node_postprocessors=node_postprocessors, ) @property def retriever(self) -> BaseRetriever: return self._retriever @property def metadata(self) -> ToolMetadata: return self._metadata def call(self, *args: Any, **kwargs: Any) -> ToolOutput: query_str = "" if args is not None: query_str += ", ".join([str(arg) for arg in args]) + "\n" if kwargs is not None: query_str += ( ", ".join([f"{k!s} is {v!s}" for k, v in kwargs.items()]) + "\n" ) if query_str == "": raise ValueError("Cannot call query engine without inputs") docs = self._retriever.retrieve(query_str) docs = self._apply_node_postprocessors(docs, QueryBundle(query_str)) content = "" for doc in docs: node_copy = doc.node.model_copy() node_copy.text_template = "{metadata_str}\n{content}" node_copy.metadata_template = "{key} = {value}" content += node_copy.get_content(MetadataMode.LLM) + "\n\n" return ToolOutput( content=content, tool_name=self.metadata.name, raw_input={"input": query_str}, raw_output=docs, ) async def acall(self, *args: Any, **kwargs: Any) -> ToolOutput: query_str = "" if args is not None: query_str += ", ".join([str(arg) for arg in args]) + "\n" if kwargs is not None: query_str += ( ", ".join([f"{k!s} is {v!s}" for k, v in kwargs.items()]) + "\n" ) if query_str == "": raise ValueError("Cannot call query engine without inputs") docs = await self._retriever.aretrieve(query_str) content = "" docs = self._apply_node_postprocessors(docs, QueryBundle(query_str)) for doc in docs: node_copy = doc.node.model_copy() node_copy.text_template = "{metadata_str}\n{content}" node_copy.metadata_template = "{key} = {value}" content += node_copy.get_content(MetadataMode.LLM) + "\n\n" return ToolOutput( content=content, tool_name=self.metadata.name, raw_input={"input": query_str}, raw_output=docs, ) def as_langchain_tool(self) -> "LlamaIndexTool": raise NotImplementedError("`as_langchain_tool` not implemented here.") def _apply_node_postprocessors( self, nodes: 
List[NodeWithScore], query_bundle: QueryBundle ) -> List[NodeWithScore]: for node_postprocessor in self._node_postprocessors: nodes = node_postprocessor.postprocess_nodes( nodes, query_bundle=query_bundle ) return nodes
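A short usage sketch: wrap any retriever (here a `VectorStoreIndex` retriever) in a `RetrieverTool` so an agent can call it; the tool name and description below are made-up examples:

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.tools import RetrieverTool

index = VectorStoreIndex.from_documents(
    [Document(text="Paul Graham co-founded Y Combinator in 2005.")]
)

tool = RetrieverTool.from_defaults(
    retriever=index.as_retriever(similarity_top_k=2),
    name="essay_retriever",  # example name
    description="Retrieves passages about Paul Graham.",  # example description
)

# call() joins its positional/keyword arguments into the query string
output = tool.call("Who founded Y Combinator?")
print(output.content)
```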
173894
"""Embedding utils for LlamaIndex.""" import os from typing import TYPE_CHECKING, List, Optional, Union if TYPE_CHECKING: from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings from llama_index.core.base.embeddings.base import BaseEmbedding from llama_index.core.callbacks import CallbackManager from llama_index.core.embeddings.mock_embed_model import MockEmbedding from llama_index.core.utils import get_cache_dir EmbedType = Union[BaseEmbedding, "LCEmbeddings", str] def save_embedding(embedding: List[float], file_path: str) -> None: """Save embedding to file.""" with open(file_path, "w") as f: f.write(",".join([str(x) for x in embedding])) def load_embedding(file_path: str) -> List[float]: """Load embedding from file. Will only return first embedding in file.""" with open(file_path) as f: for line in f: embedding = [float(x) for x in line.strip().split(",")] break return embedding def resolve_embed_model( embed_model: Optional[EmbedType] = None, callback_manager: Optional[CallbackManager] = None, ) -> BaseEmbedding: """Resolve embed model.""" from llama_index.core.settings import Settings try: from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings except ImportError: LCEmbeddings = None # type: ignore if embed_model == "default": if os.getenv("IS_TESTING"): embed_model = MockEmbedding(embed_dim=8) embed_model.callback_manager = callback_manager or Settings.callback_manager return embed_model try: from llama_index.embeddings.openai import ( OpenAIEmbedding, ) # pants: no-infer-dep from llama_index.embeddings.openai.utils import ( validate_openai_api_key, ) # pants: no-infer-dep embed_model = OpenAIEmbedding() validate_openai_api_key(embed_model.api_key) # type: ignore except ImportError: raise ImportError( "`llama-index-embeddings-openai` package not found, " "please run `pip install llama-index-embeddings-openai`" ) except ValueError as e: raise ValueError( "\n******\n" "Could not load OpenAI embedding model. 
" "If you intended to use OpenAI, please check your OPENAI_API_KEY.\n" "Original error:\n" f"{e!s}" "\nConsider using embed_model='local'.\n" "Visit our documentation for more embedding options: " "https://docs.llamaindex.ai/en/stable/module_guides/models/" "embeddings.html#modules" "\n******" ) # for image multi-modal embeddings elif isinstance(embed_model, str) and embed_model.startswith("clip"): try: from llama_index.embeddings.clip import ClipEmbedding # pants: no-infer-dep clip_model_name = ( embed_model.split(":")[1] if ":" in embed_model else "ViT-B/32" ) embed_model = ClipEmbedding(model_name=clip_model_name) except ImportError as e: raise ImportError( "`llama-index-embeddings-clip` package not found, " "please run `pip install llama-index-embeddings-clip` and `pip install git+https://github.com/openai/CLIP.git`" ) if isinstance(embed_model, str): try: from llama_index.embeddings.huggingface import ( HuggingFaceEmbedding, ) # pants: no-infer-dep splits = embed_model.split(":", 1) is_local = splits[0] model_name = splits[1] if len(splits) > 1 else None if is_local != "local": raise ValueError( "embed_model must start with str 'local' or of type BaseEmbedding" ) cache_folder = os.path.join(get_cache_dir(), "models") os.makedirs(cache_folder, exist_ok=True) embed_model = HuggingFaceEmbedding( model_name=model_name, cache_folder=cache_folder ) except ImportError: raise ImportError( "`llama-index-embeddings-huggingface` package not found, " "please run `pip install llama-index-embeddings-huggingface`" ) if LCEmbeddings is not None and isinstance(embed_model, LCEmbeddings): try: from llama_index.embeddings.langchain import ( LangchainEmbedding, ) # pants: no-infer-dep embed_model = LangchainEmbedding(embed_model) except ImportError as e: raise ImportError( "`llama-index-embeddings-langchain` package not found, " "please run `pip install llama-index-embeddings-langchain`" ) if embed_model is None: print("Embeddings have been explicitly disabled. Using MockEmbedding.") embed_model = MockEmbedding(embed_dim=1) assert isinstance(embed_model, BaseEmbedding) embed_model.callback_manager = callback_manager or Settings.callback_manager return embed_model
173924
elf, splits: List[_Split], chunk_size: int) -> List[str]: """Merge splits into chunks.""" chunks: List[str] = [] cur_chunk: List[Tuple[str, int]] = [] # list of (text, length) last_chunk: List[Tuple[str, int]] = [] cur_chunk_len = 0 new_chunk = True def close_chunk() -> None: nonlocal chunks, cur_chunk, last_chunk, cur_chunk_len, new_chunk chunks.append("".join([text for text, length in cur_chunk])) last_chunk = cur_chunk cur_chunk = [] cur_chunk_len = 0 new_chunk = True # add overlap to the next chunk using the last one first # there is a small issue with this logic. If the chunk directly after # the overlap is really big, then we could go over the chunk_size, and # in theory the correct thing to do would be to remove some/all of the # overlap. However, it would complicate the logic further without # much real world benefit, so it's not implemented now. if len(last_chunk) > 0: last_index = len(last_chunk) - 1 while ( last_index >= 0 and cur_chunk_len + last_chunk[last_index][1] <= self.chunk_overlap ): text, length = last_chunk[last_index] cur_chunk_len += length cur_chunk.insert(0, (text, length)) last_index -= 1 while len(splits) > 0: cur_split = splits[0] if cur_split.token_size > chunk_size: raise ValueError("Single token exceeded chunk size") if cur_chunk_len + cur_split.token_size > chunk_size and not new_chunk: # if adding split to current chunk exceeds chunk size: close out chunk close_chunk() else: if ( cur_split.is_sentence or cur_chunk_len + cur_split.token_size <= chunk_size or new_chunk # new chunk, always add at least one split ): # add split to chunk cur_chunk_len += cur_split.token_size cur_chunk.append((cur_split.text, cur_split.token_size)) splits.pop(0) new_chunk = False else: # close out chunk close_chunk() # handle the last chunk if not new_chunk: chunk = "".join([text for text, length in cur_chunk]) chunks.append(chunk) # run postprocessing to remove blank spaces return self._postprocess_chunks(chunks) def _postprocess_chunks(self, chunks: List[str]) -> List[str]: """Post-process chunks. Remove whitespace only chunks and remove leading and trailing whitespace. """ new_chunks = [] for chunk in chunks: stripped_chunk = chunk.strip() if stripped_chunk == "": continue new_chunks.append(stripped_chunk) return new_chunks def _token_size(self, text: str) -> int: return len(self._tokenizer(text)) def _get_splits_by_fns(self, text: str) -> Tuple[List[str], bool]: for split_fn in self._split_fns: splits = split_fn(text) if len(splits) > 1: return splits, True for split_fn in self._sub_sentence_split_fns: splits = split_fn(text) if len(splits) > 1: break return splits, False
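Judging by the `_split_fns`/`_sub_sentence_split_fns` attributes, this merge logic appears to belong to the sentence-aware splitter; from the outside it is driven entirely by `chunk_size` and `chunk_overlap`. A minimal sketch using `SentenceSplitter` under that assumption:

```python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter(chunk_size=256, chunk_overlap=32)

text = "LlamaIndex splits text into sentence-aware chunks. " * 50
chunks = splitter.split_text(text)
nodes = splitter.get_nodes_from_documents([Document(text=text)])

print(len(chunks), "chunks; first chunk has", len(chunks[0]), "characters")
print(len(nodes), "nodes")
```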
173936
"""Vector memory. Memory backed by a vector database. """ import uuid from typing import Any, Dict, List, Optional, Union from llama_index.core.bridge.pydantic import field_validator from llama_index.core.schema import TextNode from llama_index.core.vector_stores.types import BasePydanticVectorStore from llama_index.core.base.llms.types import ChatMessage, MessageRole from llama_index.core.bridge.pydantic import Field from llama_index.core.memory.types import BaseMemory from llama_index.core.embeddings.utils import EmbedType def _stringify_obj(d: Any) -> Union[str, list, dict]: """Utility function to convert all keys in a dictionary to strings.""" if isinstance(d, list): return [_stringify_obj(v) for v in d] elif isinstance(d, Dict): return {str(k): _stringify_obj(v) for k, v in d.items()} else: return str(d) def _stringify_chat_message(msg: ChatMessage) -> Dict: """Utility function to convert chatmessage to serializable dict.""" msg_dict = msg.dict() msg_dict["additional_kwargs"] = _stringify_obj(msg_dict["additional_kwargs"]) return msg_dict def _get_starter_node_for_new_batch() -> TextNode: """Generates a new starter node for a new batch or group of messages.""" return TextNode( id_=str(uuid.uuid4()), text="", metadata={"sub_dicts": []}, excluded_embed_metadata_keys=["sub_dicts"], excluded_llm_metadata_keys=["sub_dicts"], ) class VectorMemory(BaseMemory): """Memory backed by a vector index. NOTE: This class requires the `delete_nodes` method to be implemented by the vector store underlying the vector index. At time of writing (May 2024), Chroma, Qdrant and SimpleVectorStore all support delete_nodes. """ vector_index: Any retriever_kwargs: Dict[str, Any] = Field(default_factory=dict) # Whether to combine a user message with all subsequent messages # until the next user message into a single message # This is on by default, ensuring that we always fetch contiguous blocks of user/response pairs. # Turning this off may lead to errors in the function calling API of the LLM. # If this is on, then any message that's not a user message will be combined with the last user message # in the vector store. batch_by_user_message: bool = True cur_batch_textnode: TextNode = Field( default_factory=_get_starter_node_for_new_batch, description="The super node for the current active user-message batch.", ) @field_validator("vector_index") @classmethod def validate_vector_index(cls, value: Any) -> Any: """Validate vector index.""" # NOTE: we can't import VectorStoreIndex directly due to circular imports, # which is why the type is Any from llama_index.core.indices.vector_store import VectorStoreIndex if not isinstance(value, VectorStoreIndex): raise ValueError( f"Expected 'vector_index' to be an instance of VectorStoreIndex, got {type(value)}" ) return value @classmethod def class_name(cls) -> str: """Get class name.""" return "VectorMemory" @classmethod def from_defaults( cls, vector_store: Optional[BasePydanticVectorStore] = None, embed_model: Optional[EmbedType] = None, index_kwargs: Optional[Dict] = None, retriever_kwargs: Optional[Dict] = None, **kwargs: Any, ) -> "VectorMemory": """Create vector memory. Args: vector_store (Optional[BasePydanticVectorStore]): vector store (note: delete_nodes must be implemented. At time of writing (May 2024), Chroma, Qdrant and SimpleVectorStore all support delete_nodes. 
            embed_model (Optional[EmbedType]): embedding model
            index_kwargs (Optional[Dict]): kwargs for initializing the index
            retriever_kwargs (Optional[Dict]): kwargs for initializing the retriever

        """
        from llama_index.core.indices.vector_store import VectorStoreIndex

        if kwargs:
            raise ValueError(f"Unexpected kwargs: {kwargs}")

        index_kwargs = index_kwargs or {}
        retriever_kwargs = retriever_kwargs or {}

        if vector_store is None:
            # initialize a blank in-memory vector store
            # NOTE: can't easily do that from `from_vector_store` at the moment.
            index = VectorStoreIndex.from_documents(
                [], embed_model=embed_model, **index_kwargs
            )
        else:
            index = VectorStoreIndex.from_vector_store(
                vector_store, embed_model=embed_model, **index_kwargs
            )
        return cls(vector_index=index, retriever_kwargs=retriever_kwargs)

    def get(
        self, input: Optional[str] = None, initial_token_count: int = 0, **kwargs: Any
    ) -> List[ChatMessage]:
        """Get chat history."""
        if input is None:
            return []

        # retrieve from index
        retriever = self.vector_index.as_retriever(**self.retriever_kwargs)
        nodes = retriever.retrieve(input or "")

        # retrieve underlying messages
        return [
            ChatMessage.model_validate(sub_dict)
            for node in nodes
            for sub_dict in node.metadata["sub_dicts"]
        ]

    def get_all(self) -> List[ChatMessage]:
        """Get all chat history."""
        # TODO: while we could implement get_all, would be hacky through metadata filtering
        # since vector stores don't easily support get()
        raise ValueError(
            "Vector memory does not support get_all method, can only retrieve based on input."
        )

    def _commit_node(self, override_last: bool = False) -> None:
        """Commit new node to vector store."""
        if self.cur_batch_textnode.text == "":
            return

        if override_last:
            # delete the last node
            # This is needed since we're updating the last node in the vector
            # index as it's being updated. When a new user-message batch starts
            # we already will have the last user message group committed to the
            # vector store index and so we don't need to override_last (i.e. see
            # logic in self.put().)
            self.vector_index.delete_nodes([self.cur_batch_textnode.id_])

        self.vector_index.insert_nodes([self.cur_batch_textnode])

    def put(self, message: ChatMessage) -> None:
        """Put chat history."""
        if not self.batch_by_user_message or message.role in [
            MessageRole.USER,
            MessageRole.SYSTEM,
        ]:
            # if not batching by user message, commit to vector store immediately after adding
            self.cur_batch_textnode = _get_starter_node_for_new_batch()

        # update current batch textnode
        sub_dict = _stringify_chat_message(message)
        if self.cur_batch_textnode.text == "":
            self.cur_batch_textnode.text += sub_dict["content"] or ""
        else:
            self.cur_batch_textnode.text += " " + (sub_dict["content"] or "")
        self.cur_batch_textnode.metadata["sub_dicts"].append(sub_dict)
        self._commit_node(override_last=True)

    def set(self, messages: List[ChatMessage]) -> None:
        """Set chat history."""
        self.reset()
        for message in messages:
            self.put(message)

    def reset(self) -> None:
        """Reset chat history."""
        self.vector_index.vector_store.clear()


VectorMemory.model_rebuild()
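
# Usage sketch (illustrative, not from the original module): VectorMemory batches each
# user message together with the assistant replies that follow it into one node, embeds
# the batch, and retrieves past exchanges by similarity to the new input. This assumes
# an OPENAI_API_KEY is set, as in the notebook setup; the messages are made up.
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import VectorMemory
from llama_index.embeddings.openai import OpenAIEmbedding

vector_memory = VectorMemory.from_defaults(
    vector_store=None,  # None -> a blank in-memory vector store is created
    embed_model=OpenAIEmbedding(),
    retriever_kwargs={"similarity_top_k": 1},
)
vector_memory.set(
    [
        ChatMessage.from_str("Bob likes burgers.", "user"),
        ChatMessage.from_str("Noted: Bob likes burgers.", "assistant"),
        ChatMessage.from_str("Alice likes apples.", "user"),
        ChatMessage.from_str("Noted: Alice likes apples.", "assistant"),
    ]
)
# returns the user/assistant batch most similar to the query
print(vector_memory.get("What does Bob like?"))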
class ChatSummaryMemoryBuffer(BaseMemory):
    """Buffer for storing chat history that uses the full text for the
    latest {token_limit}. All older messages are iteratively summarized
    using the {llm} provided, with the max number of tokens defined by the {llm}.

    User can specify whether initial tokens (usually a system prompt)
    should be counted as part of the {token_limit}
    using the parameter {count_initial_tokens}.

    This buffer is useful to retain the most important information from a
    long chat history, while limiting the token count and latency
    of each request to the LLM.
    """

    token_limit: int
    count_initial_tokens: bool = False
    llm: Optional[SerializeAsAny[LLM]] = None
    summarize_prompt: Optional[str] = None
    tokenizer_fn: Callable[[str], List] = Field(
        default_factory=get_tokenizer,
        exclude=True,
    )

    chat_store: SerializeAsAny[BaseChatStore] = Field(default_factory=SimpleChatStore)
    chat_store_key: str = Field(default=DEFAULT_CHAT_STORE_KEY)

    _token_count: int = PrivateAttr(default=0)

    @field_serializer("chat_store")
    def serialize_courses_in_order(self, chat_store: BaseChatStore) -> dict:
        res = chat_store.model_dump()
        res.update({"class_name": chat_store.class_name()})
        return res

    @model_validator(mode="before")
    @classmethod
    def validate_memory(cls, values: dict) -> dict:
        """Validate the memory."""
        # Validate token limits
        token_limit = values.get("token_limit", -1)
        if token_limit < 1:
            raise ValueError(
                "Token limit for full-text messages must be set and greater than 0."
            )

        # Validate tokenizer -- this avoids errors when loading from json/dict
        tokenizer_fn = values.get("tokenizer_fn", None)
        if tokenizer_fn is None:
            values["tokenizer_fn"] = get_tokenizer()

        return values

    @classmethod
    def from_defaults(
        cls,
        chat_history: Optional[List[ChatMessage]] = None,
        llm: Optional[LLM] = None,
        chat_store: Optional[BaseChatStore] = None,
        chat_store_key: str = DEFAULT_CHAT_STORE_KEY,
        token_limit: Optional[int] = None,
        tokenizer_fn: Optional[Callable[[str], List]] = None,
        summarize_prompt: Optional[str] = None,
        count_initial_tokens: bool = False,
        **kwargs: Any,
    ) -> "ChatSummaryMemoryBuffer":
        """Create a chat memory buffer from an LLM
        and an initial list of chat history messages.
""" if kwargs: raise ValueError(f"Unexpected keyword arguments: {kwargs}") if llm is not None: context_window = llm.metadata.context_window token_limit = token_limit or int(context_window * DEFAULT_TOKEN_LIMIT_RATIO) elif token_limit is None: token_limit = DEFAULT_TOKEN_LIMIT chat_store = chat_store or SimpleChatStore() if chat_history is not None: chat_store.set_messages(chat_store_key, chat_history) summarize_prompt = summarize_prompt or SUMMARIZE_PROMPT return cls( llm=llm, token_limit=token_limit, # TODO: Check if we can get the tokenizer from the llm tokenizer_fn=tokenizer_fn or get_tokenizer(), summarize_prompt=summarize_prompt, chat_store=chat_store, chat_store_key=chat_store_key, count_initial_tokens=count_initial_tokens, ) @classmethod def class_name(cls) -> str: """Get class name.""" return "ChatSummaryMemoryBuffer" def to_string(self) -> str: """Convert memory to string.""" return self.json() def to_dict(self, **kwargs: Any) -> dict: """Convert memory to dict.""" return self.dict() @classmethod def from_string(cls, json_str: str, **kwargs: Any) -> "ChatSummaryMemoryBuffer": """Create a chat memory buffer from a string.""" dict_obj = json.loads(json_str) return cls.from_dict(dict_obj, **kwargs) @classmethod def from_dict( cls, data: Dict[str, Any], **kwargs: Any ) -> "ChatSummaryMemoryBuffer": from llama_index.core.storage.chat_store.loading import load_chat_store # NOTE: this handles backwards compatibility with the old chat history if "chat_history" in data: chat_history = data.pop("chat_history") simple_store = SimpleChatStore(store={DEFAULT_CHAT_STORE_KEY: chat_history}) data["chat_store"] = simple_store elif "chat_store" in data: chat_store_dict = data.pop("chat_store") chat_store = load_chat_store(chat_store_dict) data["chat_store"] = chat_store # NOTE: The llm will have to be set manually in kwargs if "llm" in data: data.pop("llm") return cls(**data, **kwargs) def get( self, input: Optional[str] = None, initial_token_count: int = 0, **kwargs: Any ) -> List[ChatMessage]: """Get chat history.""" chat_history = self.get_all() if len(chat_history) == 0: return [] # Give the user the choice whether to count the system prompt or not if self.count_initial_tokens: if initial_token_count > self.token_limit: raise ValueError("Initial token count exceeds token limit") self._token_count = initial_token_count ( chat_history_full_text, chat_history_to_be_summarized, ) = self._split_messages_summary_or_full_text(chat_history) if self.llm is None or len(chat_history_to_be_summarized) == 0: # Simply remove the message that don't fit the buffer anymore updated_history = chat_history_full_text else: updated_history = [ self._summarize_oldest_chat_history(chat_history_to_be_summarized), *chat_history_full_text, ] self.reset() self._token_count = 0 self.set(updated_history) return updated_history def get_all(self) -> List[ChatMessage]: """Get all chat history.""" return self.chat_store.get_messages(self.chat_store_key) def put(self, message: ChatMessage) -> None: """Put chat history.""" # ensure everything is serialized self.chat_store.add_message(self.chat_store_key, message) async def aput(self, message: ChatMessage) -> None: """Put chat history.""" await self.chat_store.async_add_message(self.chat_store_key, message) def set(self, messages: List[ChatMessage]) -> None: """Set chat history.""" self.chat_store.set_messages(self.chat_store_key, messages) def reset(self) -> None: """Reset chat history.""" self.chat_store.delete_messages(self.chat_store_key) def get_token_count(self) -> int: 
"""Returns the token count of the memory buffer (excluding the last assistant response).""" return self._token_count def _token_count_for_messages(self, messages: List[ChatMessage]) -> int: """Get token count for list of messages.""" if len(messages) <= 0: return 0 msg_str = " ".join(str(m.content) for m in messages) return len(self.tokenizer_fn(msg_str)) def _split_messages_summary_or_full_text( self, chat_history: List[ChatMessage] ) -> Tuple[List[ChatMessage], List[ChatMessage]]: """Determine which messages will be included as full text, and which will have to be summarized by the llm. """ chat_history_full_text: List[ChatMessage] = [] message_count = len(chat_history) while ( message_count > 0 and self.get_token_count() + self._token_count_for_messages([chat_history[-1]]) <= self.token_limit ): # traverse the history in reverse order, when token limit is about to be # exceeded, we stop, so remaining messages are summarized self._token_count += self._token_count_for_messages([chat_history[-1]]) chat_history_full_text.insert(0, chat_history.pop()) message_count -= 1 chat_history_to_be_summarized = chat_history.copy() self._handle_assistant_and_tool_messages( chat_history_full_text, chat_history_to_be_summarized ) return chat_history_full_text, chat_history_to_be_summarized
class ReActAgent(BaseAgent):
    """ReAct agent.

    Uses a ReAct prompt that can be used in both chat and text
    completion endpoints.

    Can take in a set of tools that require structured inputs.
    """

    def __init__(
        self,
        tools: Sequence[BaseTool],
        llm: LLM,
        memory: BaseMemory,
        max_iterations: int = 10,
        react_chat_formatter: Optional[ReActChatFormatter] = None,
        output_parser: Optional[ReActOutputParser] = None,
        callback_manager: Optional[CallbackManager] = None,
        verbose: bool = False,
        tool_retriever: Optional[ObjectRetriever[BaseTool]] = None,
    ) -> None:
        super().__init__(callback_manager=callback_manager or llm.callback_manager)
        self._llm = llm
        self._memory = memory
        self._max_iterations = max_iterations
        self._react_chat_formatter = react_chat_formatter or ReActChatFormatter()
        self._output_parser = output_parser or ReActOutputParser()
        self._verbose = verbose
        self.sources: List[ToolOutput] = []

        if len(tools) > 0 and tool_retriever is not None:
            raise ValueError("Cannot specify both tools and tool_retriever")
        elif len(tools) > 0:
            self._get_tools = lambda _: tools
        elif tool_retriever is not None:
            tool_retriever_c = cast(ObjectRetriever[BaseTool], tool_retriever)
            self._get_tools = lambda message: tool_retriever_c.retrieve(message)
        else:
            self._get_tools = lambda _: []

    @classmethod
    def from_tools(
        cls,
        tools: Optional[List[BaseTool]] = None,
        tool_retriever: Optional[ObjectRetriever[BaseTool]] = None,
        llm: Optional[LLM] = None,
        chat_history: Optional[List[ChatMessage]] = None,
        memory: Optional[BaseMemory] = None,
        memory_cls: Type[BaseMemory] = ChatMemoryBuffer,
        max_iterations: int = 10,
        react_chat_formatter: Optional[ReActChatFormatter] = None,
        output_parser: Optional[ReActOutputParser] = None,
        callback_manager: Optional[CallbackManager] = None,
        verbose: bool = False,
        **kwargs: Any,
    ) -> "ReActAgent":
        """Convenience constructor method from set of BaseTools (Optional).

        NOTE: kwargs should have been exhausted by this point. In other words
        the various upstream components such as BaseSynthesizer (response synthesizer)
        or BaseRetriever should have picked up off their respective kwargs in their
        constructions.

        Returns:
            ReActAgent
        """
        llm = llm or Settings.llm
        if callback_manager is not None:
            llm.callback_manager = callback_manager
        memory = memory or memory_cls.from_defaults(
            chat_history=chat_history or [], llm=llm
        )
        return cls(
            tools=tools or [],
            tool_retriever=tool_retriever,
            llm=llm,
            memory=memory,
            max_iterations=max_iterations,
            react_chat_formatter=react_chat_formatter,
            output_parser=output_parser,
            callback_manager=callback_manager,
            verbose=verbose,
        )

    @property
    def chat_history(self) -> List[ChatMessage]:
        """Chat history."""
        return self._memory.get_all()

    def reset(self) -> None:
        self._memory.reset()

    def _extract_reasoning_step(
        self, output: ChatResponse, is_streaming: bool = False
    ) -> Tuple[str, List[BaseReasoningStep], bool]:
        """
        Extracts the reasoning step from the given output.

        This method parses the message content from the output,
        extracts the reasoning step, and determines whether the processing is
        complete. It also performs validation checks on the output and
        handles possible errors.
""" if output.message.content is None: raise ValueError("Got empty message.") message_content = output.message.content current_reasoning = [] try: reasoning_step = self._output_parser.parse(message_content, is_streaming) except BaseException as exc: raise ValueError(f"Could not parse output: {message_content}") from exc if self._verbose: print_text(f"{reasoning_step.get_content()}\n", color="pink") current_reasoning.append(reasoning_step) if reasoning_step.is_done: return message_content, current_reasoning, True reasoning_step = cast(ActionReasoningStep, reasoning_step) if not isinstance(reasoning_step, ActionReasoningStep): raise ValueError(f"Expected ActionReasoningStep, got {reasoning_step}") return message_content, current_reasoning, False def _process_actions( self, tools: Sequence[AsyncBaseTool], output: ChatResponse, is_streaming: bool = False, ) -> Tuple[List[BaseReasoningStep], bool]: tools_dict: Dict[str, AsyncBaseTool] = { tool.metadata.get_name(): tool for tool in tools } _, current_reasoning, is_done = self._extract_reasoning_step( output, is_streaming ) if is_done: return current_reasoning, True # call tool with input reasoning_step = cast(ActionReasoningStep, current_reasoning[-1]) tool = tools_dict[reasoning_step.action] with self.callback_manager.event( CBEventType.FUNCTION_CALL, payload={ EventPayload.FUNCTION_CALL: reasoning_step.action_input, EventPayload.TOOL: tool.metadata, }, ) as event: tool_output = tool.call(**reasoning_step.action_input) event.on_end(payload={EventPayload.FUNCTION_OUTPUT: str(tool_output)}) self.sources.append(tool_output) observation_step = ObservationReasoningStep(observation=str(tool_output)) current_reasoning.append(observation_step) if self._verbose: print_text(f"{observation_step.get_content()}\n", color="blue") return current_reasoning, False async def _aprocess_actions( self, tools: Sequence[AsyncBaseTool], output: ChatResponse, is_streaming: bool = False, ) -> Tuple[List[BaseReasoningStep], bool]: tools_dict = {tool.metadata.name: tool for tool in tools} _, current_reasoning, is_done = self._extract_reasoning_step( output, is_streaming ) if is_done: return current_reasoning, True # call tool with input reasoning_step = cast(ActionReasoningStep, current_reasoning[-1]) tool = tools_dict[reasoning_step.action] with self.callback_manager.event( CBEventType.FUNCTION_CALL, payload={ EventPayload.FUNCTION_CALL: reasoning_step.action_input, EventPayload.TOOL: tool.metadata, }, ) as event: tool_output = await tool.acall(**reasoning_step.action_input) event.on_end(payload={EventPayload.FUNCTION_OUTPUT: str(tool_output)}) self.sources.append(tool_output) observation_step = ObservationReasoningStep(observation=str(tool_output)) current_reasoning.append(observation_step) if self._verbose: print_text(f"{observation_step.get_content()}\n", color="blue") return current_reasoning, False def _get_response( self, current_reasoning: List[BaseReasoningStep], ) -> AgentChatResponse: """Get response from reasoning steps.""" if len(current_reasoning) == 0: raise ValueError("No reasoning steps were taken.") elif len(current_reasoning) == self._max_iterations: raise ValueError("Reached max iterations.") response_step = cast(ResponseReasoningStep, current_reasoning[-1]) # TODO: add sources from reasoning steps return AgentChatResponse(response=response_step.response, sources=self.sources) def _infer_stream_chunk_is_final(self, chunk: ChatResponse) -> bool: """Infers if a chunk from a live stream is the start of the final reasoning step. 
        (i.e., one that should eventually become a ResponseReasoningStep;
        that conversion is not part of this function's logic, though).

        Args:
            chunk (ChatResponse): the current chunk stream to check

        Returns:
            bool: Boolean on whether the chunk is the start of the final response
        """
        latest_content = chunk.message.content
        if latest_content:
            if not latest_content.startswith(
                "Thought"
            ):  # doesn't follow thought-action format
                return True
            else:
                if "Answer: " in latest_content:
                    return True
        return False
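
# Usage sketch (illustrative, not from the original module): a ReActAgent built from a
# single function tool. The agent formats the chat history with the ReAct prompt, parses
# Thought/Action/Observation steps via the output parser above, and calls the tool with
# the parsed action input. The tool, model, and question are assumptions for demonstration.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-3.5-turbo"),
    verbose=True,  # print each reasoning step as it is parsed
    max_iterations=10,
)
print(agent.chat("What is 21 times 2? Use a tool to compute it."))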
from queue import Queue
from threading import Event
from typing import Any, Generator, List, Optional
from uuid import UUID

from llama_index.core.bridge.langchain import BaseCallbackHandler, LLMResult


class StreamingGeneratorCallbackHandler(BaseCallbackHandler):
    """Streaming callback handler."""

    def __init__(self) -> None:
        self._token_queue: Queue = Queue()
        self._done = Event()

    def __deepcopy__(self, memo: Any) -> "StreamingGeneratorCallbackHandler":
        # NOTE: hack to bypass deepcopy in langchain
        return self

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""
        self._token_queue.put_nowait(token)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        self._done.set()

    def on_llm_error(
        self,
        error: BaseException,
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        tags: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> None:
        self._done.set()

    def get_response_gen(self) -> Generator:
        while True:
            if not self._token_queue.empty():
                token = self._token_queue.get_nowait()
                yield token
            elif self._done.is_set():
                break
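
# Minimal consumption sketch (illustrative, not part of the original module): in practice
# langchain calls on_llm_new_token/on_llm_end on this handler during a streaming LLM run;
# here a plain thread simulates that producer so the generator side can be shown. Passing
# response=None to on_llm_end is an assumption that works only because the value is unused.
import threading

handler = StreamingGeneratorCallbackHandler()


def fake_llm_stream() -> None:
    for token in ["Stream", "ing ", "works", "!"]:
        handler.on_llm_new_token(token)
    handler.on_llm_end(response=None)  # signals get_response_gen to stop after draining


threading.Thread(target=fake_llm_stream).start()
print("".join(handler.get_response_gen()))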