Weaviate
The sie-weaviate package lets you use SIE as the embedding provider for Weaviate v4 collections. SIE encodes your text into vectors, and you store and search them in Weaviate.
How it works: You create a Weaviate collection with self_provided vector config (meaning Weaviate won’t generate embeddings itself — you provide them). Then you use SIEVectorizer to embed your texts via SIE and pass the resulting vectors to Weaviate on insert and query.
Python only. TypeScript support is not yet available for this integration.
Installation
```sh
pip install sie-weaviate
```

This installs `sie-sdk` and `weaviate-client` (v4.16+) as dependencies.
Start the Servers
You need both an SIE server (for embeddings) and a Weaviate instance (for vector storage and search).

```sh
# SIE server
docker run -p 8080:8080 ghcr.io/superlinked/sie:default

# Or with GPU
docker run --gpus all -p 8080:8080 ghcr.io/superlinked/sie:default

# Weaviate
docker run -d -p 8090:8080 -p 50051:50051 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  -e DEFAULT_VECTORIZER_MODULE=none \
  cr.weaviate.io/semitechnologies/weaviate:1.36.6
```

Vectorizer
`SIEVectorizer` calls SIE's `encode()` and returns vectors as `list[float]`, the format Weaviate expects for `DataObject(vector=...)` and `query.near_vector()`.
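For intuition about what `near_vector()` does with those `list[float]` vectors, here is a plain-Python sketch of cosine similarity, Weaviate's default distance metric. The vectors below are tiny made-up stand-ins, not real SIE embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dim vectors standing in for real embeddings
query = [0.1, 0.3, 0.5, 0.2]
doc_a = [0.1, 0.3, 0.5, 0.2]   # same direction as the query
doc_b = [0.9, 0.1, 0.0, 0.0]   # different direction, lower similarity

print(round(cosine_similarity(query, doc_a), 3))  # 1.0
```

Weaviate's HNSW index approximates this ranking at scale; you never compute it yourself, but it explains why vectors from different models (or different dimensions) must not be mixed in one collection.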
```python
from sie_weaviate import SIEVectorizer

vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)
```

Any model SIE supports for dense embeddings works; just change the `model` parameter:
```python
# Nomic MoE (768-dim)
vectorizer = SIEVectorizer(model="nomic-ai/nomic-embed-text-v2-moe")

# E5 (1024-dim) — SIE handles query vs document encoding automatically
vectorizer = SIEVectorizer(model="intfloat/e5-large-v2")

# BGE-M3 (1024-dim, also supports sparse output for hybrid search)
vectorizer = SIEVectorizer(model="BAAI/bge-m3")
```

See the Model Catalog for all 85+ supported models.
Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_url` | `str` | `http://localhost:8080` | SIE server URL |
| `model` | `str` | `BAAI/bge-m3` | Model to use for embeddings |
| `instruction` | `str` | `None` | Instruction prefix for instruction-tuned models (e.g., E5) |
| `output_dtype` | `str` | `None` | Output data type (`float32`, `float16`, `int8`, `binary`) |
| `gpu` | `str` | `None` | Target GPU type for routing |
| `options` | `dict` | `None` | Model-specific options |
| `timeout_s` | `float` | `180.0` | Request timeout in seconds |
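As a sketch of how these parameters might be combined (the specific values here are illustrative, not recommendations; only `base_url` and `model` are commonly needed):

```python
from sie_weaviate import SIEVectorizer

# Illustrative configuration; requires a running SIE server to actually embed.
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",  # SIE server URL
    model="BAAI/bge-m3",               # default model, shown explicitly
    output_dtype="float16",            # smaller vectors at some precision cost
    timeout_s=60.0,                    # fail faster than the 180 s default
)
```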
Full Example
Create a Weaviate collection, embed documents with SIE, and search:

```python
import weaviate
import weaviate.classes as wvc
from sie_weaviate import SIEVectorizer

# 1. Create vectorizer — this talks to SIE
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)

# 2. Connect to Weaviate
client = weaviate.connect_to_local(port=8090)
try:
    # 3. Create a collection — self_provided() means we supply vectors ourselves
    collection = client.collections.create(
        "Documents",
        properties=[
            wvc.config.Property(name="text", data_type=wvc.config.DataType.TEXT),
        ],
        vector_config=wvc.config.Configure.Vectors.self_provided(),
    )

    # 4. Embed texts with SIE, then store in Weaviate
    texts = [
        "Machine learning is a subset of artificial intelligence.",
        "Neural networks are inspired by biological neurons.",
        "Deep learning uses multiple layers of neural networks.",
        "Python is popular for machine learning development.",
    ]
    vectors = vectorizer.embed_documents(texts)
    objects = [
        wvc.data.DataObject(properties={"text": t}, vector=v)
        for t, v in zip(texts, vectors)
    ]
    collection.data.insert_many(objects)

    # 5. Embed query with SIE, then search in Weaviate
    query_vec = vectorizer.embed_query("What is deep learning?")
    results = collection.query.near_vector(near_vector=query_vec, limit=2)

    for obj in results.objects:
        print(obj.properties["text"])
finally:
    client.close()
```

Named Vectors (Dense + Sparse) — Advanced
For advanced use cases, `SIENamedVectorizer` produces multiple vector types in a single SIE `encode()` call. This maps to Weaviate's named vectors feature: store dense and sparse embeddings as separate named vectors in the same collection. If you're just getting started, use `SIEVectorizer` above instead.
The model must support all requested output types. BAAI/bge-m3 supports both dense and sparse. For dense-only models, use SIEVectorizer.
```python
import weaviate
import weaviate.classes as wvc
from sie_weaviate import SIENamedVectorizer

# One SIE call produces both dense and sparse vectors
vectorizer = SIENamedVectorizer(
    base_url="http://localhost:8080",
    model="BAAI/bge-m3",
    output_types=["dense", "sparse"],
)

client = weaviate.connect_to_local(port=8090)
try:
    # Create collection with two named vector spaces
    collection = client.collections.create(
        "Documents",
        properties=[
            wvc.config.Property(name="text", data_type=wvc.config.DataType.TEXT),
        ],
        vector_config=[
            wvc.config.Configure.Vectors.self_provided(name="dense"),
            wvc.config.Configure.Vectors.self_provided(name="sparse"),
        ],
    )

    # Embed — returns [{"dense": [...], "sparse": [...]}, ...]
    texts = ["First document", "Second document"]
    named_vectors = vectorizer.embed_documents(texts)
    objects = [
        wvc.data.DataObject(
            properties={"text": t},
            vector={"dense": v["dense"], "sparse": v["sparse"]},
        )
        for t, v in zip(texts, named_vectors)
    ]
    collection.data.insert_many(objects)

    # Query against the dense vector space
    query = vectorizer.embed_query("search text")
    results = collection.query.near_vector(
        near_vector=query["dense"],
        target_vector="dense",
        limit=5,
    )

    for obj in results.objects:
        print(obj.properties["text"])
finally:
    client.close()
```

A note on sparse vectors and hybrid search
SIE sparse vectors (from SPLADE or BGE-M3) are learned sparse representations — they capture semantic similarity, not just term overlap. They are expanded to full vocabulary length (~30K floats for BERT-based models) so positional information is preserved for similarity search.
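The expansion can be sketched in plain Python: a learned sparse output is conceptually a small set of (token-index, weight) pairs, written out as a vocabulary-length list of floats so it can be indexed like a dense vector. The token indices and weights below are made up for illustration:

```python
def expand_sparse(pairs: dict[int, float], vocab_size: int) -> list[float]:
    """Expand {token_index: weight} pairs into a full vocabulary-length list."""
    vec = [0.0] * vocab_size
    for idx, weight in pairs.items():
        vec[idx] = weight
    return vec

# A toy "learned sparse" output: only a handful of non-zero token weights
sparse = {101: 0.82, 2054: 0.31, 7592: 0.11}
full = expand_sparse(sparse, vocab_size=30522)  # BERT-sized vocabulary

print(len(full))                      # 30522
print(sum(1 for w in full if w > 0))  # 3
```

This is why storing sparse vectors is not free: each one is as long as the model's vocabulary, even though almost all entries are zero.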
For most use cases, Weaviate’s built-in BM25 hybrid search is simpler and doesn’t require storing sparse vectors:
```python
results = collection.query.hybrid(query="search text", alpha=0.75)
```

Use SIE sparse vectors when you need learned sparse representations (e.g., SPLADE) that go beyond term-frequency matching.
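For intuition, `alpha` weights the two result sets: `alpha=1.0` is pure vector search, `alpha=0.0` is pure BM25, and `alpha=0.75` leans three-quarters toward the vector side. A minimal sketch of that blending, assuming both scores are already normalized to a common scale (Weaviate's actual implementation fuses normalized or rank-based scores, not raw ones):

```python
def hybrid_score(vector_score: float, bm25_score: float, alpha: float) -> float:
    """Blend normalized vector and BM25 scores; alpha=1.0 is pure vector."""
    return alpha * vector_score + (1 - alpha) * bm25_score

# With alpha=0.75 the vector score dominates the blended result
print(round(hybrid_score(vector_score=0.9, bm25_score=0.2, alpha=0.75), 3))  # 0.725
```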
What’s Next
- Encode Text — embedding API details and output types
- Model Catalog — all supported embedding models
- Integrations — all supported vector stores and frameworks
- Troubleshooting — common errors and solutions