Boost performance & reduce cost by self-hosting specialized AI models
Introducing SIE, a multi-model inference cluster for search and document processing workloads, released under Apache 2.0.
SIE embeddings and Qdrant retrieval behind a GPT-4 router: cross-encoder reranking, hard filters, and five agent tools for natural language real estate search.
How hierarchical cluster-embedding chunking with RAPTOR improves RAG retrieval over vanilla chunking, with a step-by-step implementation and a note on serving embeddings in production with SIE.
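The RAPTOR idea in the teaser above (cluster chunk embeddings, summarize each cluster, recurse) can be sketched minimally. The `toy_embed`, `toy_cluster`, and `toy_summarize` helpers are placeholders invented for this sketch; real RAPTOR uses learned embeddings, soft GMM clustering over dimension-reduced vectors, and an LLM summarizer.

```python
def raptor_tree(chunks, embed, cluster, summarize, max_levels=3):
    """Build RAPTOR-style retrieval levels: embed and cluster the
    current level's nodes, summarize each cluster into a parent node,
    then repeat on the summaries. Every level stays retrievable."""
    levels = [list(chunks)]
    for _ in range(max_levels):
        current = levels[-1]
        if len(current) <= 1:
            break  # nothing left to merge
        groups = cluster([embed(node) for node in current])
        levels.append([summarize([current[i] for i in group])
                       for group in groups])
    return levels

# Placeholder components (assumptions for this sketch only):
toy_embed = lambda text: [len(text)]                       # stand-in embedding
toy_cluster = lambda vecs: [list(range(i, min(i + 2, len(vecs))))
                            for i in range(0, len(vecs), 2)]  # pair adjacent nodes
toy_summarize = lambda nodes: " + ".join(nodes)            # stand-in LLM summary
```

Retrieval then searches across all levels at once, so a query can match either a fine-grained leaf chunk or a high-level cluster summary.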
Part two of our RAG evaluation series: building synthetic eval datasets with RAGAS, interpreting faithfulness and retrieval metrics, and mapping results to inference and serving concerns.
We benchmark LlamaIndex and LangChain chunkers, MTEB embedding models, ColBERT v2, and rerankers on HotpotQA, SQuAD, and QuAC, then unpack what the results mean for inference-heavy retrieval stacks.
Explore semantic chunking for RAG: embedding-similarity, hierarchical-clustering, and LLM-based methods, with code, evaluation on HotpotQA and SQuAD, and BAAI/bge-small-en-v1.5 embeddings.
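The embedding-similarity variant of semantic chunking can be sketched in a few lines: embed consecutive sentences and start a new chunk wherever similarity to the previous sentence drops. The `embed` function here is a toy bag-of-words stand-in for a real model such as BAAI/bge-small-en-v1.5, and the threshold value is illustrative only.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy unit-normalized bag-of-words vector; a stand-in for a
    # real embedding model (assumption for this sketch).
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def semantic_chunk(sentences, threshold=0.2):
    """Group consecutive sentences; open a new chunk when similarity
    to the previous sentence falls below the threshold."""
    if not sentences:
        return []
    chunks, current = [], [sentences[0]]
    prev = embed(sentences[0])
    for sent in sentences[1:]:
        vec = embed(sent)
        if cosine(prev, vec) < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
        prev = vec
    chunks.append(" ".join(current))
    return chunks
```

With a real embedding model the same loop separates topically distinct passages even when they share no surface vocabulary.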
Key considerations and trade-offs for picking a vector database that fits your architecture, scale, and operational limits.
How combining keyword search, vector search, and semantic reranking improves RAG retrieval precision and recall.
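One common way to combine keyword and vector result lists before reranking is reciprocal rank fusion (RRF); this is a generic sketch of that fusion step, not the article's specific pipeline, and `k=60` is the conventional default rather than a tuned value.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: merge several ranked lists of doc ids.
    Each list contributes 1 / (k + rank) per document, so documents
    ranked well by multiple retrievers float to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A semantic reranker (e.g. a cross-encoder) would then rescore only the fused top-k, which keeps the expensive model off the long tail of candidates.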
Build AI apps that generate and compare vector embeddings directly in your browser using TensorFlow.js. No backend required.