Segmentation


The Segmentation pipeline segments text into semantic units.

Example

The following shows a simple example using this pipeline.

from txtai.pipeline import Segmentation

# Create and run pipeline
segment = Segmentation(sentences=True)
segment("This is a test. And another test.")
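With `sentences=True`, the pipeline splits the input into individual sentences. As a rough illustration of that behavior, here is a minimal pure-Python sketch using a naive regex split. This is not txtai's implementation (which is NLTK-backed), just an approximation of the input/output shape:

```python
import re

def split_sentences(text):
    # Naive approximation of sentence segmentation: split on
    # sentence-ending punctuation followed by whitespace
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

print(split_sentences("This is a test. And another test."))
# ['This is a test.', 'And another test.']
```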

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

# Create pipeline using lower case class name
segmentation:
  sentences: true

# Run pipeline with workflow
workflow:
  segment:
    tasks:
      - action: segmentation

Run with Workflows

from txtai import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("segment", ["This is a test. And another test."]))

Run with API

CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"segment", "elements":["This is a test. And another test."]}'
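The same request can be made from Python with the standard library. This sketch builds the identical JSON payload as the curl example; it assumes the API is running locally on port 8000, so the actual network call is left commented out:

```python
import json
from urllib.request import Request, urlopen

# JSON payload matching the curl example above
payload = json.dumps({"name": "segment", "elements": ["This is a test. And another test."]})

# POST to the workflow endpoint (assumes the API is running on localhost:8000)
request = Request(
    "http://localhost:8000/workflow",
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urlopen(request) as response:
#     print(json.loads(response.read()))
```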

Methods

Python documentation for the pipeline.

__init__(sentences=False, lines=False, paragraphs=False, minlength=None, join=False, sections=False)

Creates a new Segmentation pipeline.

Parameters:

| Name | Description | Default |
|------|-------------|---------|
| sentences | tokenize text into sentences if True | False |
| lines | tokenize text into lines if True | False |
| paragraphs | tokenize text into paragraphs if True | False |
| minlength | require at least minlength characters per text element | None |
| join | join tokenized sections back together if True | False |
| sections | tokenize text into sections if True; splits using section or page breaks, depending on what's available | False |
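To illustrate how `minlength` and `join` interact with the tokenized output, here is a hedged sketch. The `postprocess` helper below is hypothetical, not txtai's code; it only demonstrates the documented semantics of the two parameters:

```python
def postprocess(segments, minlength=None, join=False):
    # Drop segments shorter than minlength characters, if set
    if minlength:
        segments = [s for s in segments if len(s) >= minlength]

    # Rejoin segments into a single string when join=True
    return " ".join(segments) if join else segments

print(postprocess(["This is a test.", "Ok."], minlength=5))
# ['This is a test.']
```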
Source code in txtai/pipeline/data/segmentation.py
def __init__(self, sentences=False, lines=False, paragraphs=False, minlength=None, join=False, sections=False):
    """
    Creates a new Segmentation pipeline.

    Args:
        sentences: tokenize text into sentences if True, defaults to False
        lines: tokenizes text into lines if True, defaults to False
        paragraphs: tokenizes text into paragraphs if True, defaults to False
        minlength: require at least minlength characters per text element, defaults to None
        join: joins tokenized sections back together if True, defaults to False
        sections: tokenizes text into sections if True, defaults to False. Splits using section or page breaks, depending on what's available
    """

    if not NLTK:
        raise ImportError('Segmentation pipeline is not available - install "pipeline" extra to enable')

    self.sentences = sentences
    self.lines = lines
    self.paragraphs = paragraphs
    self.sections = sections
    self.minlength = minlength
    self.join = join

__call__(text)

Segments text into semantic units.

This method supports text as a string or a list. If the input is a string, the return type is text|list. If the input is a list, a list is returned; this could be a list of text or a list of lists depending on the tokenization strategy.

Parameters:

| Name | Description | Default |
|------|-------------|---------|
| text | text\|list | required |

Returns:

segmented text

Source code in txtai/pipeline/data/segmentation.py
def __call__(self, text):
    """
    Segments text into semantic units.

    This method supports text as a string or a list. If the input is a string, the return
    type is text|list. If the input is a list, a list is returned; this could be a
    list of text or a list of lists depending on the tokenization strategy.

    Args:
        text: text|list

    Returns:
        segmented text
    """

    # Get inputs
    texts = [text] if not isinstance(text, list) else text

    # Extract text for each input file
    results = []
    for value in texts:
        # Get text
        value = self.text(value)

        # Parse and add extracted results
        results.append(self.parse(value))

    return results[0] if isinstance(text, str) else results
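The string-vs-list dispatch contract shown in the source can be sketched independently of txtai: a string input yields a single result, while a list input yields a list of results. The `parse` stand-in below is purely illustrative:

```python
def segment(text, parse=str.upper):
    # Wrap a single string in a list so both cases share one code path,
    # mirroring the pipeline's dispatch logic (parse is a stand-in step)
    texts = [text] if not isinstance(text, list) else text

    results = [parse(value) for value in texts]

    # A string input returns a single result; a list input returns a list
    return results[0] if isinstance(text, str) else results

print(segment("hello"))             # 'HELLO'
print(segment(["hello", "world"]))  # ['HELLO', 'WORLD']
```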