# Weaviate
The sie-weaviate package lets you use SIE as the embedding provider for Weaviate v4 collections. SIE encodes your text into vectors, and you store and search them in Weaviate.
How it works: You create a Weaviate collection with self_provided vector config (meaning Weaviate won’t generate embeddings itself - you provide them). Then you use SIEVectorizer to embed your texts via SIE and pass the resulting vectors to Weaviate on insert and query.
Python only. TypeScript support is not yet available for this integration.
## Installation

```bash
pip install sie-weaviate
```

This installs sie-sdk and weaviate-client (v4.16+) as dependencies.
## Start the Servers

You need both an SIE server (for embeddings) and a Weaviate instance (for vector storage and search).

```bash
# SIE server
docker run -p 8080:8080 ghcr.io/superlinked/sie-server:default

# Or with GPU
docker run --gpus all -p 8080:8080 ghcr.io/superlinked/sie-server:default

# Weaviate
docker run -d -p 8090:8080 -p 50051:50051 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  -e DEFAULT_VECTORIZER_MODULE=none \
  cr.weaviate.io/semitechnologies/weaviate:1.36.6
```

## Vectorizer

SIEVectorizer calls SIE's `encode()` and returns vectors as `list[float]` - the format Weaviate expects for `DataObject(vector=...)` and `query.near_vector()`.
```python
from sie_weaviate import SIEVectorizer

vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)
```

Any model SIE supports for dense embeddings works - just change the `model` parameter:

```python
# Nomic MoE (768-dim)
vectorizer = SIEVectorizer(model="nomic-ai/nomic-embed-text-v2-moe")

# E5 (1024-dim) - SIE handles query vs document encoding automatically
vectorizer = SIEVectorizer(model="intfloat/e5-large-v2")

# BGE-M3 (1024-dim, also supports sparse output for hybrid search)
vectorizer = SIEVectorizer(model="BAAI/bge-m3")
```

See the Model Catalog for all 85+ supported models.
## Configuration Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_url` | `str` | `http://localhost:8080` | SIE server URL |
| `model` | `str` | `BAAI/bge-m3` | Model to use for embeddings |
| `instruction` | `str` | `None` | Instruction prefix for instruction-tuned models (e.g., E5) |
| `output_dtype` | `str` | `None` | Output data type (`float32`, `float16`, `int8`, `binary`) |
| `gpu` | `str` | `None` | Target GPU type for routing |
| `options` | `dict` | `None` | Model-specific options |
| `timeout_s` | `float` | `180.0` | Request timeout in seconds |
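These options compose freely. As a hedged configuration sketch (the values shown are illustrative choices from the table above, not recommendations):

```python
from sie_weaviate import SIEVectorizer

# Illustrative values only - pick what your deployment actually supports
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="BAAI/bge-m3",
    output_dtype="float16",  # one of float32 / float16 / int8 / binary
    timeout_s=60.0,          # tighter than the 180 s default
)
```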
## Full Example

Create a Weaviate collection, embed documents with SIE, and search:

```python
import weaviate
import weaviate.classes as wvc
from sie_weaviate import SIEVectorizer

# 1. Create vectorizer - this talks to SIE
vectorizer = SIEVectorizer(
    base_url="http://localhost:8080",
    model="NovaSearch/stella_en_400M_v5",
)

# 2. Connect to Weaviate
client = weaviate.connect_to_local(port=8090)
try:
    # 3. Create a collection - self_provided() means we supply vectors ourselves
    collection = client.collections.create(
        "Documents",
        properties=[
            wvc.config.Property(name="text", data_type=wvc.config.DataType.TEXT),
        ],
        vector_config=wvc.config.Configure.Vectors.self_provided(),
    )

    # 4. Embed texts with SIE, then store in Weaviate
    texts = [
        "Machine learning is a subset of artificial intelligence.",
        "Neural networks are inspired by biological neurons.",
        "Deep learning uses multiple layers of neural networks.",
        "Python is popular for machine learning development.",
    ]
    vectors = vectorizer.embed_documents(texts)
    objects = [
        wvc.data.DataObject(properties={"text": t}, vector=v)
        for t, v in zip(texts, vectors)
    ]
    collection.data.insert_many(objects)

    # 5. Embed query with SIE, then search in Weaviate
    query_vec = vectorizer.embed_query("What is deep learning?")
    results = collection.query.near_vector(near_vector=query_vec, limit=2)

    for obj in results.objects:
        print(obj.properties["text"])
finally:
    client.close()
```

## Named Vectors (Dense + Multivector)
SIENamedVectorizer produces multiple vector types in a single SIE `encode()` call. This maps to Weaviate's named vectors feature - store dense and multivector (ColBERT) embeddings as separate named vectors in the same collection. If you're just getting started, use SIEVectorizer above instead.
ColBERT models produce per-token embeddings that enable late-interaction scoring - more accurate than single-vector similarity for complex queries.
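To make "late interaction" concrete, here is a minimal sketch of ColBERT-style MaxSim scoring with toy 2-dimensional token vectors (in practice the per-token embeddings come from the model, and the vectors are normalized and much higher-dimensional):

```python
def maxsim(query_tokens, doc_tokens):
    """ColBERT-style late interaction: for each query token vector, take the
    best dot-product match among document token vectors, then sum the maxima."""
    return sum(
        max(sum(qi * di for qi, di in zip(q, d)) for d in doc_tokens)
        for q in query_tokens
    )

# Toy per-token embeddings (2-dim for readability)
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[0.9, 0.1], [0.2, 0.8]]   # a distinct good match for each query token
doc_b = [[0.5, 0.5], [0.5, 0.5]]   # mediocre match for both query tokens

print(maxsim(query, doc_a))  # 1.7
print(maxsim(query, doc_b))  # 1.0
```

Because each query token is matched independently, a document scores well when it covers every part of the query - something a single pooled vector can blur together.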
```python
import weaviate
import weaviate.classes as wvc
from sie_weaviate import SIENamedVectorizer

# One SIE call produces both dense and multivector outputs
vectorizer = SIENamedVectorizer(
    base_url="http://localhost:8080",
    model="jinaai/jina-colbert-v2",
    output_types=["dense", "multivector"],
)

client = weaviate.connect_to_local(port=8090)
try:
    # Create collection with two named vector spaces
    collection = client.collections.create(
        "Documents",
        properties=[
            wvc.config.Property(name="text", data_type=wvc.config.DataType.TEXT),
        ],
        vector_config=[
            wvc.config.Configure.Vectors.self_provided(name="dense"),
            wvc.config.Configure.Vectors.self_provided(name="multivector"),
        ],
    )

    # Embed - returns [{"dense": [...], "multivector": [[...], ...]}, ...]
    texts = ["First document", "Second document"]
    named_vectors = vectorizer.embed_documents(texts)
    objects = [
        wvc.data.DataObject(
            properties={"text": t},
            vector={"dense": v["dense"], "multivector": v["multivector"]},
        )
        for t, v in zip(texts, named_vectors)
    ]
    collection.data.insert_many(objects)

    # Query against the dense vector space
    query = vectorizer.embed_query("search text")
    results = collection.query.near_vector(
        near_vector=query["dense"],
        target_vector="dense",
        limit=5,
    )

    for obj in results.objects:
        print(obj.properties["text"])
finally:
    client.close()
```

For hybrid search, Weaviate has built-in BM25 - no extra vectors needed:

```python
results = collection.query.hybrid(query="search text", alpha=0.75)
```

## Document Enrichment for Query Agent
SIEDocumentEnricher combines SIE's embedding and entity extraction pipelines to produce documents with dense vectors and structured metadata. The extracted properties (persons, organizations, locations, categories) are exactly what Weaviate's Query Agent uses to construct filters from natural language queries.
The Query Agent’s intelligence is schema-driven - it reads property names, types, and descriptions to build filters. By extracting entities and classifications at index time, you give the agent a rich filtering surface it can use automatically.
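As a toy sketch of the filtering surface enrichment creates (hypothetical data; the real Query Agent builds Weaviate filters against the schema, not Python predicates):

```python
# An enriched document: dense vector omitted, structured properties shown
doc = {
    "text": "John Smith presented Google's new AI strategy in New York.",
    "person": ["John Smith"],
    "organization": ["Google"],
    "location": ["New York"],
    "classification": "business",
}

# "find business documents about Google" decomposes into structured checks
# the agent can apply before (or alongside) vector search:
matches = (
    "Google" in doc["organization"]
    and doc["classification"] == "business"
)
print(matches)  # True
```

Without the extracted properties, the only signal available for that query would be semantic similarity on the raw text.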
```python
import weaviate
import weaviate.classes as wvc
from sie_weaviate import SIEDocumentEnricher

# Embed + extract entities + classify in one pipeline
enricher = SIEDocumentEnricher(
    base_url="http://localhost:8080",
    labels=["person", "organization", "location"],
    classify_model="knowledgator/gliclass-large-v3.0",
    classify_labels=["technical", "business", "legal"],
)

client = weaviate.connect_to_local(port=8090)
try:
    # Property descriptions help the Query Agent understand each field
    collection = client.collections.create(
        "Documents",
        description="Documents with extracted entity and classification metadata.",
        properties=[
            wvc.config.Property(name="text", data_type=wvc.config.DataType.TEXT),
            wvc.config.Property(
                name="person",
                data_type=wvc.config.DataType.TEXT_ARRAY,
                description="People mentioned in the document",
            ),
            wvc.config.Property(
                name="organization",
                data_type=wvc.config.DataType.TEXT_ARRAY,
                description="Organizations mentioned in the document",
            ),
            wvc.config.Property(
                name="location",
                data_type=wvc.config.DataType.TEXT_ARRAY,
                description="Locations mentioned in the document",
            ),
            wvc.config.Property(
                name="classification",
                data_type=wvc.config.DataType.TEXT,
                description="Document category: technical, business, or legal",
            ),
            wvc.config.Property(
                name="classification_score",
                data_type=wvc.config.DataType.NUMBER,
                description="Confidence score for the document classification",
            ),
        ],
        vector_config=wvc.config.Configure.Vectors.self_provided(),
    )

    # Enrich and insert - each doc gets a vector + extracted properties
    texts = [
        "John Smith presented Google's new AI strategy in New York.",
        "The court ruling on patent law affects tech companies worldwide.",
    ]
    docs = enricher.enrich(texts)
    collection.data.insert_many([
        wvc.data.DataObject(properties=doc.properties, vector=doc.vector)
        for doc in docs
    ])

    # Now the Query Agent can handle queries like:
    #   "find documents about Google" → organization filter + vector search
    #   "show me legal documents mentioning John Smith" → classification + person filter
    #   "what happened in New York?" → location filter + semantic search

    # For manual queries, use enrich_query for the vector:
    query_vec = enricher.enrich_query("AI strategy announcements")
    results = collection.query.near_vector(near_vector=query_vec, limit=5)

    for obj in results.objects:
        print(obj.properties["text"])
finally:
    client.close()
```

## Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_url` | `str` | `http://localhost:8080` | SIE server URL |
| `embed_model` | `str` | `BAAI/bge-m3` | Model for dense embeddings |
| `extract_model` | `str` | `urchade/gliner_medium-v2.1` | Model for entity extraction (GLiNER). Set to `None` to skip extraction. |
| `labels` | `list[str]` | `None` | Entity labels to extract (e.g., `["person", "organization"]`) |
| `classify_model` | `str` | `None` | Optional model for classification (GLiClass) |
| `classify_labels` | `list[str]` | `None` | Classification categories (e.g., `["technical", "business"]`) |
| `gpu` | `str` | `None` | Target GPU type for routing |
| `timeout_s` | `float` | `180.0` | Request timeout in seconds |
## What's Next

- Encode Text - embedding API details and output types
- Model Catalog - all supported embedding models
- Integrations - all supported vector stores and frameworks
- Troubleshooting - common errors and solutions