# Pipeline
txtai provides a generic pipeline processing framework with the only interface requirement being a `__call__` method. Pipelines are flexible and can process various types of data. Pipelines can wrap machine learning models as well as other processes.
Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.
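Since the only interface requirement is a `__call__` method, a custom pipeline can be a plain Python class. The `Reverse` class below is a hypothetical example to illustrate the contract, not a pipeline shipped with txtai.

```python
# Minimal sketch of the pipeline contract: any callable object works.
# "Reverse" is a hypothetical example pipeline, not part of txtai.
class Reverse:
    def __call__(self, texts):
        # Accept a single string or a list of strings
        if isinstance(texts, str):
            return texts[::-1]
        return [text[::-1] for text in texts]

pipeline = Reverse()
print(pipeline("txtai"))             # single input
print(pipeline(["hello", "world"]))  # batch input
```

An object like this can then be plugged into workflows the same way as the built-in pipelines.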
## List of pipelines
The following is a list of the current pipelines available in txtai. All pipelines use default models when not otherwise specified. See the model guide for the current model recommendations. All pipelines are designed to work with local models via the Transformers library.
The LLM and RAG pipelines also have integrations for llama.cpp and hosted API models via LiteLLM. The LLM pipeline can be prompted to accomplish many of the same tasks (i.e. summarization, translation, classification).
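A configuration-driven pipeline is declared under its lower case name. The fragment below is a sketch of this pattern; the model path and task wiring are illustrative assumptions, not txtai defaults.

```yaml
# Instantiate a pipeline using its lower case name.
# The model path below is an illustrative assumption, not a default.
llm:
  path: Qwen/Qwen2.5-0.5B-Instruct

# A workflow task can then reference the pipeline by that same name.
workflow:
  prompts:
    tasks:
      - action: llm
```

With this configuration loaded, the `prompts` workflow can be run via the workflow runner or the API.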
- Audio
- Data Processing
- Image
- Text
- Training