# Segmentation
The Segmentation pipeline segments text into semantic units.
## Example
The following shows a simple example using this pipeline.
```python
from txtai.pipeline import Segmentation

# Create and run pipeline
segment = Segmentation(sentences=True)
segment("This is a test. And another test.")
```
## Configuration-driven example
Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.
### config.yml
```yaml
# Create pipeline using lower case class name
segmentation:
  sentences: true

# Run pipeline with workflow
workflow:
  segment:
    tasks:
      - action: segmentation
```
### Run with Workflows
```python
from txtai import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("segment", ["This is a test. And another test."]))
```
### Run with API
```bash
CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"segment", "elements":["This is a test. And another test."]}'
```
## Methods
Python documentation for the pipeline.
### `__init__(sentences=False, lines=False, paragraphs=False, minlength=None, join=False, sections=False)`
Creates a new Segmentation pipeline.
Parameters:
Name | Description | Default |
---|---|---|
`sentences` | tokenize text into sentences if True, defaults to False | `False` |
`lines` | tokenizes text into lines if True, defaults to False | `False` |
`paragraphs` | tokenizes text into paragraphs if True, defaults to False | `False` |
`minlength` | require at least minlength characters per text element, defaults to None | `None` |
`join` | joins tokenized sections back together if True, defaults to False | `False` |
`sections` | tokenizes text into sections if True, defaults to False. Splits using section or page breaks, depending on what's available | `False` |
Source code in txtai/pipeline/data/segmentation.py
### `__call__(text)`
Segments text into semantic units.
This method supports text as a string or a list. If the input is a string, the return value is either text or a list. If the input is a list, a list is returned, which may be a list of text or a list of lists depending on the tokenization strategy, as shown in the example below.
Parameters:
Name | Description | Default |
---|---|---|
`text` | text\|list | required |
Returns:
Description |
---|
segmented text |
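A minimal sketch of the two input forms described above; the return shapes follow the description, while exact values depend on the tokenization strategy:

```python
from txtai.pipeline import Segmentation

segment = Segmentation(sentences=True)

# String input: returns the segmented form of that single string
segment("This is a test. And another test.")

# List input: returns a list with one result per input element
segment(["This is a test. And another test.", "One more test here."])
```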
Source code in txtai/pipeline/data/segmentation.py