# Qdrant
The sie-qdrant package lets you use SIE as the embedding provider for Qdrant collections. SIE encodes your text into vectors, and you store and search them in Qdrant.
How it works: You create a Qdrant collection with a vector configuration (size and distance metric). Then you use SIEVectorizer to embed your texts via SIE and pass the resulting vectors to Qdrant as PointStruct objects on insert, and as query vectors on search.
Python only. TypeScript support is not yet available for this integration.
## Installation

```shell
pip install sie-qdrant
```

This installs sie-sdk and qdrant-client (v1.7+) as dependencies.
## Start the Servers

You need both an SIE server (for embeddings) and a Qdrant instance (for vector storage and search).

```shell
# SIE server
docker run -p 8080:8080 ghcr.io/superlinked/sie:default

# Or with GPU
docker run --gpus all -p 8080:8080 ghcr.io/superlinked/sie:default

# Qdrant
docker run -d -p 6333:6333 -p 6334:6334 qdrant/qdrant:v1.13.2
```

## Vectorizer

SIEVectorizer calls SIE’s encode() and returns vectors as list[float] — the format Qdrant expects for PointStruct(vector=...) and query_points().
```python
from sie_qdrant import SIEVectorizer

vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)
```

Any model SIE supports for dense embeddings works — just change the model parameter:
```python
# Nomic MoE (768-dim)
vectorizer = SIEVectorizer(model="nomic-ai/nomic-embed-text-v2-moe")

# E5 (1024-dim) — SIE handles query vs document encoding automatically
vectorizer = SIEVectorizer(model="intfloat/e5-large-v2")

# BGE-M3 (1024-dim, also supports sparse output for hybrid search)
vectorizer = SIEVectorizer(model="BAAI/bge-m3")
```

See the Model Catalog for all 85+ supported models.
## Configuration Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_url` | `str` | `http://localhost:8080` | SIE server URL |
| `model` | `str` | `BAAI/bge-m3` | Model to use for embeddings (catalog) |
| `instruction` | `str` | `None` | Instruction prefix for instruction-tuned models (e.g., E5) |
| `output_dtype` | `str` | `None` | Output data type (`float32`, `float16`, `int8`, `binary`) |
| `gpu` | `str` | `None` | Target GPU type for routing |
| `options` | `dict` | `None` | Model-specific options |
| `timeout_s` | `float` | `180.0` | Request timeout in seconds |
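Several of these options can be combined in one constructor call. The values below are arbitrary illustrations, not recommendations:

```python
from sie_qdrant import SIEVectorizer

# All parameters are optional; the table above lists the defaults.
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="BAAI/bge-m3",
    output_dtype="float16",  # smaller vectors, if your pipeline tolerates it
    timeout_s=300.0,         # allow longer requests for large batches
)
```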
## Full Example

Create a Qdrant collection, embed documents with SIE, and search:
```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sie_qdrant import SIEVectorizer

# 1. Create vectorizer — this talks to SIE
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)

# 2. Connect to Qdrant
qdrant = QdrantClient("http://localhost:6333")
try:
    # 3. Create a collection (Stella produces 1024-dim dense vectors)
    qdrant.create_collection(
        collection_name="documents",
        vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
    )

    # 4. Embed texts with SIE, then store in Qdrant
    texts = [
        "Machine learning is a subset of artificial intelligence.",
        "Neural networks are inspired by biological neurons.",
        "Deep learning uses multiple layers of neural networks.",
        "Python is popular for machine learning development.",
    ]
    vectors = vectorizer.embed_documents(texts)
    qdrant.upsert(
        collection_name="documents",
        points=[
            PointStruct(id=i, vector=v, payload={"text": t})
            for i, (t, v) in enumerate(zip(texts, vectors))
        ],
    )

    # 5. Embed query with SIE, then search in Qdrant
    query_vec = vectorizer.embed_query("What is deep learning?")
    results = qdrant.query_points(
        collection_name="documents",
        query=query_vec,
        limit=2,
    )

    for point in results.points:
        print(point.payload["text"])
finally:
    qdrant.close()
```

## Named Vectors (Dense + Sparse) — Advanced

For advanced use cases, SIENamedVectorizer produces multiple vector types in a single SIE encode() call. Qdrant supports both dense and sparse vectors natively — dense vectors use VectorParams and sparse vectors use SparseVectorParams with the efficient SparseVector(indices, values) format. If you’re just getting started, use SIEVectorizer above instead.
The model must support all requested output types. BAAI/bge-m3 supports both dense and sparse. For dense-only models, use SIEVectorizer.
```python
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance,
    VectorParams,
    PointStruct,
    SparseVectorParams,
    SparseVector,
)
from sie_qdrant import SIENamedVectorizer

# One SIE call produces both dense and sparse vectors
vectorizer = SIENamedVectorizer(
    base_url="http://localhost:8080",
    model="BAAI/bge-m3",
    output_types=["dense", "sparse"],
)

qdrant = QdrantClient("http://localhost:6333")
try:
    # Create collection with dense + sparse vector configs
    qdrant.create_collection(
        collection_name="documents",
        vectors_config={"dense": VectorParams(size=1024, distance=Distance.COSINE)},
        sparse_vectors_config={"sparse": SparseVectorParams()},
    )

    # Embed — returns [{"dense": [...], "sparse": {"indices": [...], "values": [...]}}, ...]
    texts = ["First document", "Second document"]
    named_vectors = vectorizer.embed_documents(texts)
    qdrant.upsert(
        collection_name="documents",
        points=[
            PointStruct(
                id=i,
                vector={
                    "dense": v["dense"],
                    "sparse": SparseVector(**v["sparse"]),
                },
                payload={"text": t},
            )
            for i, (t, v) in enumerate(zip(texts, named_vectors))
        ],
    )

    # Query against the dense vector space
    query = vectorizer.embed_query("search text")
    results = qdrant.query_points(
        collection_name="documents",
        query=query["dense"],
        using="dense",
        limit=5,
    )

    for point in results.points:
        print(point.payload["text"])
finally:
    qdrant.close()
```

## Sparse vectors and hybrid search
SIE sparse vectors (from SPLADE or BGE-M3) are learned sparse representations — they capture semantic similarity, not just term overlap. Qdrant stores them natively in their compact indices + values format, so there is no storage overhead from expanding to full vocabulary length.
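To make the storage point concrete, here is a plain-Python comparison. The vocabulary size and weights are made-up figures for illustration, not actual BGE-M3 output:

```python
# A learned sparse vector is mostly zeros: only a handful of vocabulary
# positions carry nonzero weights. Qdrant stores just those positions.
vocab_size = 250_000  # assumed vocabulary length (illustrative)
sparse = {
    "indices": [101, 2054, 7592, 14853],  # vocabulary positions with weight
    "values": [0.81, 0.42, 0.37, 0.12],   # the learned weights
}

# Expanded to full vocabulary length, the same vector would need one
# float per vocabulary entry, almost all of them zero.
stored_entries = len(sparse["indices"]) + len(sparse["values"])
expanded_entries = vocab_size

print(stored_entries, expanded_entries)  # 8 vs 250000
```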
For hybrid search combining dense and sparse results, Qdrant supports prefetch with Reciprocal Rank Fusion (RRF):
```python
from qdrant_client.models import Prefetch, FusionQuery, Fusion

results = qdrant.query_points(
    collection_name="documents",
    prefetch=[
        Prefetch(query=query["dense"], using="dense", limit=20),
        Prefetch(query=SparseVector(**query["sparse"]), using="sparse", limit=20),
    ],
    query=FusionQuery(fusion=Fusion.RRF),
    limit=5,
)
```

Use SIE sparse vectors when you need learned sparse representations (e.g., SPLADE) that go beyond term-frequency matching.
## What’s Next

- Encode Text — embedding API details and output types
- Model Catalog — all supported embedding models
- Integrations — all supported vector stores and frameworks
- Troubleshooting — common errors and solutions