
Entity


The Entity pipeline applies a token classifier to text and extracts entity/label combinations.

Example

The following shows a simple example using this pipeline.

from txtai.pipeline import Entity

# Create and run pipeline
entity = Entity()
entity("Canada's last fully intact ice shelf has suddenly collapsed, " \
       "forming a Manhattan-sized iceberg")

See the link below for a more detailed example.

Notebook: Entity extraction workflows
Description: Identify entity/label combinations

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

# Create pipeline using lower case class name
entity:

# Run pipeline with workflow
workflow:
  entity:
    tasks:
      - action: entity

Run with Workflows

from txtai.app import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("entity", ["Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg"]))

Run with API

CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"entity", "elements": ["Canadas last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg"]}'

Methods

Python documentation for the pipeline.

__init__(path=None, quantize=False, gpu=True, model=None, **kwargs)

Source code in txtai/pipeline/text/entity.py
def __init__(self, path=None, quantize=False, gpu=True, model=None, **kwargs):
    super().__init__("token-classification", path, quantize, gpu, model, **kwargs)
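
The constructor accepts a model path along with the standard pipeline options. A minimal sketch, assuming a Hugging Face token classification model is passed as path (the model name below is illustrative, not a txtai default):

from txtai.pipeline import Entity

# Load a specific token classification model (illustrative model name)
entity = Entity(path="dslim/bert-base-NER")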

__call__(text, labels=None, aggregate="simple", flatten=None, join=False, workers=0)

Applies a token classifier to text and extracts entity/label combinations.

Parameters:

text (required)
    input text or list of text
labels (default: None)
    list of entity type labels to accept, None accepts all
aggregate (default: "simple")
    method to combine multi token entities - options are "simple", "first", "average" or "max"
flatten (default: None)
    flatten output to a list of entities if present; accepts a boolean or a float value to only keep scores greater than that number
join (default: False)
    joins flattened output into a string if True, ignored if flatten not set
workers (default: 0)
    number of concurrent workers to use for processing data

Returns:

list of (entity, entity type, score) tuples, or a list of entities depending on the flatten parameter
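
As a sketch of the two return forms, the call below first returns (entity, entity type, score) tuples, then flattens the output to entity text with a 0.75 score threshold and joins it into a single string. Entity values depend on the model.

from txtai.pipeline import Entity

entity = Entity()
text = "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg"

# Default output: list of (entity, entity type, score) tuples
entity(text)

# Flattened output: entity text only, keeping scores >= 0.75, joined into a string
entity(text, flatten=0.75, join=True)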

Source code in txtai/pipeline/text/entity.py
def __call__(self, text, labels=None, aggregate="simple", flatten=None, join=False, workers=0):
    """
    Applies a token classifier to text and extracts entity/label combinations.

    Args:
        text: text|list
        labels: list of entity type labels to accept, defaults to None which accepts all
        aggregate: method to combine multi token entities - options are "simple" (default), "first", "average" or "max"
        flatten: flatten output to a list of labels if present. Accepts a boolean or float value to only keep scores greater than that number.
        join: joins flattened output into a string if True, ignored if flatten not set
        workers: number of concurrent workers to use for processing data, defaults to None

    Returns:
        list of (entity, entity type, score) or list of entities depending on flatten parameter
    """

    # Run token classification pipeline
    results = self.pipeline(text, aggregation_strategy=aggregate, num_workers=workers)

    # Convert results to a list if necessary
    if isinstance(text, str):
        results = [results]

    # Score threshold when flatten is set
    threshold = 0.0 if isinstance(flatten, bool) else flatten

    # Extract entities if flatten set, otherwise extract (entity, entity type, score) tuples
    outputs = []
    for result in results:
        if flatten:
            output = [r["word"] for r in result if self.accept(r["entity_group"], labels) and r["score"] >= threshold]
            outputs.append(" ".join(output) if join else output)
        else:
            outputs.append([(r["word"], r["entity_group"], float(r["score"])) for r in result if self.accept(r["entity_group"], labels)])

    return outputs[0] if isinstance(text, str) else outputs
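
Since text accepts a list, a batch of documents can be processed in a single call and the result is a list of per-document outputs. A minimal sketch (the second sentence is an illustrative addition; workers is passed through to the underlying token classification pipeline as num_workers):

from txtai.pipeline import Entity

entity = Entity()

texts = [
    "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
    "The Eiffel Tower is located in Paris"
]

# One list of (entity, entity type, score) tuples per input text
for result in entity(texts, workers=2):
    print(result)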