Transform, enrich and persist lots of data
Load data from various sources, transform it, enrich it with metadata, and persist it with lazy, asynchronous, parallel pipelines.
Large language models are amazing, but need context to solve real problems. Ingest, transform, and index large amounts of data. Query it, augment it, and generate answers with Swiftide.
Swiftide aims to be fast and modular, with minimal abstractions, and can easily be extended by implementing simple traits.
The AI and LLM landscape is moving fast. The core idea is to provide stable, modular infrastructure so you can focus on building the actual LLM application.
Written in Rust, Swiftide is fast, safe, and efficient. It is built with Rust’s async and streaming features, and can be used in production.
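As a rough sketch, an indexing pipeline loads documents, chunks them, enriches them with metadata, embeds them, and persists the result. The example below is adapted from Swiftide's documented examples; the model names, collection name, and chunk sizes are illustrative assumptions, and exact builder methods can differ between versions:

```rust
use swiftide::indexing::{
    self,
    loaders::FileLoader,
    transformers::{ChunkMarkdown, Embed, MetadataQAText},
};
use swiftide::integrations::{openai::OpenAI, qdrant::Qdrant};

async fn index_markdown() -> anyhow::Result<()> {
    // Illustrative client configuration; the model names are assumptions.
    let openai = OpenAI::builder()
        .default_embed_model("text-embedding-3-small")
        .default_prompt_model("gpt-4o-mini")
        .build()?;

    indexing::Pipeline::from_loader(FileLoader::new(".").with_extensions(&["md"]))
        // Split markdown into chunks between 10 and 512 characters.
        .then_chunk(ChunkMarkdown::from_chunk_range(10..512))
        // Enrich each chunk with generated question/answer metadata.
        .then(MetadataQAText::new(openai.clone()))
        // Embed chunks in batches (older versions take an explicit batch size here).
        .then_in_batch(Embed::new(openai.clone()))
        // Persist the nodes and their embeddings in Qdrant.
        .then_store_with(
            Qdrant::builder()
                .batch_size(50)
                .vector_size(1536)
                .collection_name("swiftide-examples")
                .build()?,
        )
        .run()
        .await
}
```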
Transform code and text
Chunk, transform, and augment code with built-in transformers. Swiftide uses tree-sitter to interpret and augment code. Text documents, whether markdown, HTML, or unstructured, are also supported.
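For example, a code-chunking step in an indexing pipeline might look like the sketch below; `ChunkCode` uses tree-sitter under the hood, and the language name, path, and chunk range shown here are illustrative:

```rust
use swiftide::indexing::{self, loaders::FileLoader, transformers::ChunkCode};

async fn chunk_rust_sources() -> anyhow::Result<()> {
    indexing::Pipeline::from_loader(FileLoader::new("./src").with_extensions(&["rs"]))
        // Tree-sitter aware chunking: splits on syntactic boundaries,
        // keeping chunks roughly between 10 and 2048 characters.
        .then_chunk(ChunkCode::try_for_language_and_chunk_size("rust", 10..2048)?)
        // ...embedding and storage steps would follow here...
        .run()
        .await
}
```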
Query pipeline
Query your indexed data, transform it, filter it, and generate a response. Purpose-built for Retrieval Augmented Generation.
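A minimal query pipeline sketch, loosely following the project's documented examples; the chosen transformers, the client variables, and the question are assumptions, and exact module paths may differ by version:

```rust
use swiftide::integrations::{openai::OpenAI, qdrant::Qdrant};
use swiftide::query::{self, answers, query_transformers};

async fn answer_question(openai: OpenAI, qdrant: Qdrant) -> anyhow::Result<()> {
    let pipeline = query::Pipeline::default()
        // Rewrite the query into subquestions to improve retrieval.
        .then_transform_query(query_transformers::GenerateSubquestions::from_client(
            openai.clone(),
        ))
        // Embed the transformed query for vector search.
        .then_transform_query(query_transformers::Embed::from_client(openai.clone()))
        // Retrieve matching documents from the vector store.
        .then_retrieve(qdrant.clone())
        // Generate a final answer from the retrieved context.
        .then_answer(answers::Simple::from_client(openai.clone()));

    let answer = pipeline.query("How does the indexing pipeline work?").await?;
    println!("{}", answer.answer());
    Ok(())
}
```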
Agents
Build and run autonomous agents on top of Swiftide. Easily define tools, hook into important parts of the agent lifecycle, and build something that does the job for you.
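As an illustrative sketch only, an agent with a single shell-backed tool could look roughly like this; the tool macro attributes, context methods, and import paths vary between Swiftide versions, and the grep tool is a made-up example:

```rust
use swiftide::agents;
// Import paths for AgentContext, Command, ToolOutput, and ToolError are
// abbreviated here; check the docs.rs reference for their exact locations.

/// A hypothetical tool the agent can call to search the repository.
#[swiftide::tool(
    description = "Searches code in the repository",
    param(name = "code_query", description = "The code to search for")
)]
async fn search_code(
    context: &dyn AgentContext,
    code_query: String,
) -> Result<ToolOutput, ToolError> {
    let output = context
        .executor()
        .exec_cmd(&Command::shell(format!("grep -r '{code_query}' .")))
        .await?;
    Ok(output.into())
}

async fn run_agent(openai: swiftide::integrations::openai::OpenAI) -> anyhow::Result<()> {
    agents::Agent::builder()
        .llm(&openai)
        .tools(vec![search_code()])
        // Lifecycle hooks (e.g. before/after each completion or tool call)
        // can also be attached on the builder.
        .build()?
        .query("Where is the indexing pipeline defined?")
        .await?;
    Ok(())
}
```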
Customizable templated prompts
Customize and bring your own prompts, built on Tera, a Jinja-style templating library.
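Swiftide's prompt templates are rendered with Tera. The snippet below uses Tera directly to show the templating style; Swiftide's own prompt wrapper types around it are not shown, and the template name and variables are made up for illustration:

```rust
use tera::{Context, Tera};

fn render_prompt() -> Result<String, tera::Error> {
    let mut tera = Tera::default();
    // Register a Jinja-style template under a name of our choosing.
    tera.add_raw_template(
        "summarize",
        "Summarize the following {{ language }} code:\n\n{{ code }}",
    )?;

    let mut ctx = Context::new();
    ctx.insert("language", "Rust");
    ctx.insert("code", "fn main() { println!(\"hello\"); }");

    // The rendered string is what would be sent to the LLM.
    tera.render("summarize", &ctx)
}
```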
Many existing integrations
Qdrant, OpenAI, Groq, AWS Bedrock, Redis, FastEmbed, Spider and many more.
Easy to extend
Write your own loaders, transformers, and storage backends by implementing straightforward traits.
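For instance, a custom transformer is just a type that implements Swiftide's `Transformer` trait. The sketch below is illustrative: `Shout` is a made-up example, and the exact trait path and `Node` fields should be checked against docs.rs.

```rust
use anyhow::Result;
use async_trait::async_trait;
use swiftide::indexing::Node;
use swiftide::traits::Transformer; // trait path is approximate; see docs.rs

/// A made-up transformer that uppercases the text of every node.
#[derive(Clone, Debug)]
struct Shout;

#[async_trait]
impl Transformer for Shout {
    async fn transform_node(&self, mut node: Node) -> Result<Node> {
        node.chunk = node.chunk.to_uppercase();
        Ok(node)
    }
}
```

A step like this can then be dropped into an indexing pipeline with `.then(Shout)`.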
Written in Rust
Fast, safe, and efficient. Built with Rust’s async and streaming features.
Part of Bosun.ai
Swiftide is part of Bosun.ai and is actively used in production.
Reference
Full API documentation available on docs.rs