LLM

The LLM pipeline runs prompts through a large language model (LLM). This pipeline autodetects the LLM framework based on the model path.

Example

The following shows a simple example using this pipeline.

from txtai.pipeline import LLM

# Create and run LLM pipeline
llm = LLM()
llm(
  """
  Answer the following question using the provided context.

  Question:
  What are the applications of txtai?

  Context:
  txtai is an open-source platform for semantic search and
  workflows powered by language models.
  """
)

The LLM pipeline automatically detects the underlying LLM framework. This can also be manually set.

from txtai.pipeline import LLM

# Set method as litellm
llm = LLM("vllm/Open-Orca/Mistral-7B-OpenOrca", method="litellm")

# Set method as llama.cpp
llm = LLM("TheBloke/Mistral-7B-OpenOrca-GGUF/mistral-7b-openorca.Q4_K_M.gguf",
           method="llama.cpp")

Models can be externally loaded and passed to pipelines. This is useful for models that are not yet supported by Transformers and/or need special initialization.

import torch

from transformers import AutoModelForCausalLM, AutoTokenizer
from txtai.pipeline import LLM

# Load Mistral-7B-OpenOrca
path = "Open-Orca/Mistral-7B-OpenOrca"
model = AutoModelForCausalLM.from_pretrained(
  path,
  torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(path)

llm = LLM((model, tokenizer))
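
The externally loaded model and tokenizer then behave like any other LLM pipeline instance:

# Run the externally loaded model the same way as the earlier examples
llm("Tell me about Mistral in one sentence.")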

See the links below for more detailed examples.

Notebook | Description
Prompt-driven search with LLMs | Embeddings-guided and prompt-driven search with Large Language Models (LLMs)
Prompt templates and task chains | Build model prompts and connect tasks together with workflows
Build RAG pipelines with txtai | Guide on retrieval augmented generation including how to create citations
Integrate LLM frameworks | Integrate llama.cpp, LiteLLM and custom generation frameworks
Generate knowledge with Semantic Graphs and RAG | Knowledge exploration and discovery with Semantic Graphs and RAG
Build knowledge graphs with LLMs | Build knowledge graphs with LLM-driven entity extraction
Advanced RAG with graph path traversal | Graph path traversal to collect complex sets of data for advanced RAG
Advanced RAG with guided generation | Retrieval Augmented and Guided Generation

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

# Create pipeline using lower case class name
# Use `generator` or `sequences` to force model type
llm:

# Run pipeline with workflow
workflow:
  llm:
    tasks:
      - action: llm

Similar to the Python example above, the underlying Hugging Face pipeline parameters and model parameters can be set in pipeline configuration.

llm:
  path: Open-Orca/Mistral-7B-OpenOrca
  torch_dtype: torch.bfloat16
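
Since configuration keys map to the pipeline constructor arguments, the method override from the Python example above should also be expressible in configuration. A minimal sketch, assuming method passes through like any other keyword argument:

llm:
  path: TheBloke/Mistral-7B-OpenOrca-GGUF/mistral-7b-openorca.Q4_K_M.gguf
  # Assumption: method maps to the LLM constructor argument shown earlier
  method: llama.cpp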

Run with Workflows

from txtai.app import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("llm", [
  """
  Answer the following question using the provided context.

  Question:
  What are the applications of txtai? 

  Context:
  txtai is an open-source platform for semantic search and
  workflows powered by language models.
  """
]))

Run with API

CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"sequences", "elements": ["Answer the following question..."]}'

Methods

Python documentation for the pipeline.

Creates a new LLM.

Parameters:

Name | Description | Default
path | model path | None
method | llm model framework, infers from path if not provided | None
kwargs | model keyword arguments | {}
Source code in txtai/pipeline/llm/llm.py
def __init__(self, path=None, method=None, **kwargs):
    """
    Creates a new LLM.

    Args:
        path: model path
        method: llm model framework, infers from path if not provided
        kwargs: model keyword arguments
    """

    # Default LLM if not provided
    path = path if path else "google/flan-t5-base"

    # Generation instance
    self.generator = GenerationFactory.create(path, method, **kwargs)
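
A brief usage sketch of the constructor: path falls back to google/flan-t5-base when omitted and remaining keyword arguments flow through to the model.

import torch

from txtai.pipeline import LLM

# No arguments: defaults to google/flan-t5-base per the constructor above
llm = LLM()

# Assumption: extra keyword arguments reach the underlying model loader,
# mirroring the torch_dtype configuration example earlier on this page
llm = LLM("Open-Orca/Mistral-7B-OpenOrca", torch_dtype=torch.bfloat16)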

Generates text using input text

Parameters:

Name | Description | Default
text | input text or list of text | required
maxlength | maximum sequence length | 512
kwargs | additional generation keyword arguments | {}

Returns:

generated text

Source code in txtai/pipeline/llm/llm.py
def __call__(self, text, maxlength=512, **kwargs):
    """
    Generates text using input text

    Args:
        text: text|list
        maxlength: maximum sequence length
        kwargs: additional generation keyword arguments

    Returns:
        generated text
    """

    # Run LLM generation
    return self.generator(text, maxlength, **kwargs)
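
A short usage sketch of the call signature above: maxlength caps the generated sequence and additional keyword arguments are forwarded to the generator.

from txtai.pipeline import LLM

llm = LLM()

# Cap generation at 256 tokens instead of the default 512
print(llm("Tell me about txtai in one sentence.", maxlength=256))

# Per the docstring, text also accepts a list of prompts
print(llm(["What is semantic search?", "What is RAG?"]))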