Sequences

The Sequences pipeline runs text through a sequence-to-sequence model and generates output text.

Example

The following shows a simple example using this pipeline.

from txtai.pipeline import Sequences

# Create and run pipeline
sequences = Sequences()
sequences("Hello, how are you?", "translate English to French: ")

See the link below for a more detailed example.

Notebook: Query translation
Description: Domain-specific natural language queries with query translation

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

# Create pipeline using lower case class name
sequences:

# Run pipeline with workflow
workflow:
  sequences:
    tasks:
      - action: sequences
        args: ["translate English to French: "]

Run with Workflows

from txtai.app import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("sequences", ["Hello, how are you?"]))

Run with API

CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"sequences", "elements": ["Hello, how are you?"]}'

Methods

Python documentation for the pipeline.

__init__(self, path=None, quantize=False, gpu=True, model=None) special

Source code in txtai/pipeline/text/generator.py
def __init__(self, path=None, quantize=False, gpu=True, model=None):
    super().__init__(self.task(), path, quantize, gpu, model)

__call__(self, text, prefix=None, maxlength=512, workers=0, **kwargs) special

Source code in txtai/pipeline/text/generator.py
def __call__(self, text, prefix=None, maxlength=512, workers=0, **kwargs):
    """
    Generates text using input text

    Args:
        text: text|list
        prefix: optional prefix to prepend to text elements
        maxlength: maximum sequence length
        workers: number of concurrent workers to use for processing data, defaults to 0
        kwargs: additional generation keyword arguments

    Returns:
        generated text
    """

    # List of texts
    texts = text if isinstance(text, list) else [text]

    # Add prefix, if necessary
    if prefix:
        texts = [f"{prefix}{x}" for x in texts]

    # Run pipeline
    results = self.pipeline(texts, max_length=maxlength, num_workers=workers, **kwargs)

    # Get generated text
    results = [self.clean(texts[x], result) for x, result in enumerate(results)]

    return results[0] if isinstance(text, str) else results
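The input handling in `__call__` above can be illustrated standalone: the input is normalized to a list, the prefix is prepended to each element, and a single result is returned for string input while a list is returned for list input. This sketch extracts that preprocessing step; `preprocess` is a hypothetical helper, not part of the txtai API.

```python
# Standalone sketch of the preprocessing in __call__: normalize the
# input to a list, then prepend the prefix to each element
def preprocess(text, prefix=None):
    texts = text if isinstance(text, list) else [text]
    if prefix:
        texts = [f"{prefix}{x}" for x in texts]
    return texts

# Mirrors the translation example at the top of this page
texts = preprocess("Hello, how are you?", prefix="translate English to French: ")
# texts == ["translate English to French: Hello, how are you?"]
```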