Fast & streaming LLM applications in Rust

Index, query, run agents, bring your experiments right to production

What?

Large language models are amazing, but they need context to solve real problems. Ingest, transform, and index large amounts of data, then query it, augment it, and generate answers with Swiftide.


Swiftide aims to be fast and modular, with minimal abstractions, and can easily be extended by implementing simple traits.

Why?

The AI and LLM landscape is moving fast. The core idea is to provide stable, modular infrastructure, so you can focus on building the actual LLM application.


Written in Rust, Swiftide is fast, safe, and efficient. It is built with Rust’s async and streaming features, and can be used in production.

Some quick examples

// Load markdown from the current directory, generate synthetic questions and answers,
// embed it with FastEmbed and store into Qdrant.
indexing::Pipeline::from_loader(FileLoader::new(".").with_extensions(&["md"]))
    .with_default_llm_client(openai_client)
    .then_chunk(ChunkMarkdown::from_chunk_range(10..512))
    .then(MetadataQAText::default())
    .then(move |mut node: Node| {
        node.metadata.insert("Hello", "Metadata");
        Ok(node)
    })
    .then_in_batch(Embed::new(FastEmbed::default()))
    .then_store_with(
        Qdrant::builder()
            .batch_size(50)
            .vector_size(384)
            .collection_name("swiftide-examples")
            .build()?,
    )
    .run()
    .await?;

Features

Transform, enrich and persist lots of data

Load data from various sources, transform it, enrich it with metadata, and persist it with lazy, asynchronous, parallel pipelines.

Transform code and text

Chunk, transform, and augment code with built-in transformers. Swiftide uses tree-sitter to interpret and augment code. Text documents, be they markdown, HTML, or unstructured, are also supported.
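Chunking code works much like the markdown example above; a sketch, assuming the same surrounding setup (the `FastEmbed` embedder and a `storage` persist step stand in for whatever backend you use):

```rust
// Chunk Rust source files with the tree-sitter-backed ChunkCode
// transformer, keeping chunks between 10 and 2048 bytes, then
// embed and persist them.
indexing::Pipeline::from_loader(FileLoader::new(".").with_extensions(&["rs"]))
    .then_chunk(ChunkCode::try_for_language_and_chunk_size("rust", 10..2048)?)
    .then_in_batch(Embed::new(FastEmbed::default()))
    .then_store_with(storage.clone())
    .run()
    .await?;
```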

Query pipeline

Query your indexed data, transform it, filter it, and generate a response. Purpose-built for Retrieval Augmented Generation.
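A minimal query pipeline sketch, assuming the `openai_client` and `qdrant` instances from the indexing example; each step transforms the query, retrieves matching documents, and generates an answer:

```rust
// Transform the query, retrieve from Qdrant, and answer with the LLM.
let answer = query::Pipeline::default()
    .then_transform_query(GenerateSubquestions::from_client(openai_client.clone()))
    .then_transform_query(Embed::from_client(openai_client.clone()))
    .then_retrieve(qdrant.clone())
    .then_answer(Simple::from_client(openai_client.clone()))
    .query("How does Swiftide chunk markdown?")
    .await?;
println!("{}", answer.answer());
```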

Agents

Build and run autonomous agents on top of Swiftide. Easily define tools, hook in on important parts of the agent lifecycle, and code something that does the job for you.
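As a rough sketch of the agent API (the tool name, its grep command, and the `openai` client are illustrative assumptions):

```rust
// A tool the agent can call; the attribute macro generates the
// tool definition from the function signature and description.
#[swiftide::tool(description = "Searches code in the current directory")]
async fn search_code(
    context: &dyn AgentContext,
    code_query: String,
) -> Result<ToolOutput, ToolError> {
    let output = context
        .exec_cmd(&Command::shell(format!("grep -r \"{code_query}\" .")))
        .await?;
    Ok(output.into())
}

// Build an agent with the tool and run a single query.
agents::Agent::builder()
    .llm(&openai)
    .tools(vec![search_code()])
    .build()?
    .query("Find where the indexing pipeline is defined")
    .await?;
```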

Customizable templated prompts

Customize and bring your own prompts, built on Tera, a Jinja-style templating library.

Many existing integrations

Qdrant, OpenAI, Groq, AWS Bedrock, Redis, FastEmbed, Spider and many more.

Easy to extend

Write your own loaders, transformers, and storages by implementing straightforward traits.
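For example, a custom transformer only needs the Transformer trait; the `UppercaseChunks` type here is a made-up illustration that can then be plugged into a pipeline with `.then(...)`:

```rust
// A hypothetical transformer that uppercases each node's chunk
// before it moves on to the next pipeline step.
#[derive(Clone)]
struct UppercaseChunks;

#[async_trait]
impl Transformer for UppercaseChunks {
    async fn transform_node(&self, mut node: Node) -> Result<Node> {
        node.chunk = node.chunk.to_uppercase();
        Ok(node)
    }
}
```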

Written in Rust

Fast, safe, and efficient. Built with Rust’s async and streaming features.

Part of Bosun.ai

Swiftide is part of Bosun.ai and actively used in production.

Reference

Full API documentation available on docs.rs