
Hybrid Search Strategies

Dec 30, 2025 • 18 min read

Pure vector search is powerful but has a fundamental flaw: it understands concepts but may miss exact matches. Search for "iPhone 14 Pro Max" and vector search might return iPhone 13 models because the embeddings are similar. Hybrid search solves this by combining the semantic understanding of dense embeddings with the precision of traditional keyword matching — giving you the best of both worlds.

1. The Problem with Pure Vector Search

Embedding models map text into high-dimensional space where semantically similar content is close together. This is excellent for conceptual queries like "how do I fix a memory leak?" — it finds relevant content even if exact words don't match.

But vector search struggles with:

  • Exact identifiers: Product SKUs, order IDs, error codes (ERR_NETWORK_CHANGED)
  • Proper nouns: "Elon Musk" and "the Tesla CEO" land close together in embedding space, muddying searches for a specific person
  • Version numbers: "Python 3.11" and "Python 3.12" are nearly identical in vector space
  • Technical terms: "HNSW" (an indexing algorithm) and "FAISS" (a vector search library) are different things, yet close embedding neighbors

Real Failure Example: A vector search for "return policy for iPhone 12" might return results about "iPhone 13 return policy" (high cosine similarity) before the exact iPhone 12 document, because the model treats model numbers as nearly interchangeable in embedding space.

2. How Hybrid Search Works

Hybrid search runs two retrieval systems in parallel and merges their results:

Dense Retrieval (Semantic)

Embedding-based cosine similarity. Finds conceptually related content even without keyword overlap. Model: text-embedding-3-small or BGE-M3.
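Under the hood, dense ranking reduces to cosine similarity between the query vector and each document vector. A minimal pure-Python sketch (production systems use vectorized math libraries, not loops):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Same direction -> 1.0; orthogonal -> 0.0
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because only the angle between vectors matters, "iPhone 12" and "iPhone 13" end up nearly parallel, which is exactly the failure mode from section 1.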

Sparse Retrieval (Keyword)

BM25 (Best Match 25), a refined TF-IDF-style scoring function. Finds exact keyword matches with statistical relevance weighting. The same algorithm powers Elasticsearch by default.

3. BM25: The Algorithm Behind Keyword Search

BM25 scores documents based on term frequency (TF) and inverse document frequency (IDF), with saturation to prevent long documents from dominating just because they repeat terms more:

from rank_bm25 import BM25Okapi

# Tokenize your documents (naive whitespace tokenization for illustration)
corpus = [
    "iPhone 12 return policy and refund process",
    "iPhone 13 Pro Max review and specifications",
    "Return policy for Apple products purchased online",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]

# Build BM25 index
bm25 = BM25Okapi(tokenized_corpus)

# Search
query = "iPhone 12 return"
tokenized_query = query.lower().split()
scores = bm25.get_scores(tokenized_query)

# The iPhone 12 document scores highest: it matches all three query terms,
# and the rare term "12" carries the most IDF weight. The iPhone 13 review
# gets little credit despite the similar model name.
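For intuition, here's a hand-rolled version of the textbook BM25 formula. It uses the "+1 inside the log" IDF so scores never go negative; rank_bm25's BM25Okapi uses a slightly different IDF variant, so exact numbers will differ from the library's:

```python
import math

def bm25_score(query_terms: list[str], doc_terms: list[str],
               corpus: list[list[str]], k1: float = 1.5, b: float = 0.75) -> float:
    """Textbook BM25: sum over query terms of IDF * length-normalized, saturated TF."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        n = sum(1 for d in corpus if term in d)        # docs containing the term
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)  # rare terms weigh more
        tf = doc_terms.count(term)
        # k1 caps repeated-term gains; b penalizes longer-than-average documents
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

docs = [d.lower().split() for d in [
    "iPhone 12 return policy and refund process",
    "iPhone 13 Pro Max review and specifications",
]]
query = "iphone 12 return".split()
print([round(bm25_score(query, d, docs), 3) for d in docs])  # [1.569, 0.182]
```

Note how "iphone" (in both docs) contributes far less than "12" or "return" (each in one doc): IDF is doing the discriminating.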

4. Reciprocal Rank Fusion (RRF)

After getting two ranked lists (one from dense, one from sparse), you need to merge them. Reciprocal Rank Fusion is the standard algorithm. For each document, it assigns a score based on its rank position in each list, then sums them:

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> dict:
    """
    Merge multiple ranked lists using RRF.
    k=60 is the standard constant (dampens the advantage of the very top ranks).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):  # ranks are 1-based
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)

    # Sort by combined RRF score (higher = more relevant)
    return dict(sorted(scores.items(), key=lambda x: x[1], reverse=True))

# Example:
dense_results = ["doc_c", "doc_a", "doc_b"]    # Semantic ranking
sparse_results = ["doc_a", "doc_c", "doc_d"]   # Keyword ranking

final_ranking = reciprocal_rank_fusion([dense_results, sparse_results])
# doc_c: 1/61 + 1/62 = 0.0325  (1st in dense, 2nd in sparse = very relevant)
# doc_a: 1/62 + 1/61 = 0.0325  (2nd in dense, 1st in sparse = very relevant)
# doc_b: 1/63 = 0.0159          (only in dense)
# doc_d: 1/63 = 0.0159          (only in sparse)

5. Implementation: Hybrid Search with Pinecone

Pinecone natively supports hybrid search by accepting both dense and sparse vectors simultaneously:

from pinecone import Pinecone
from pinecone_text.sparse import BM25Encoder
from openai import OpenAI

pc = Pinecone(api_key="your-key")
index = pc.Index("your-index")

openai_client = OpenAI()
bm25 = BM25Encoder()
bm25.fit(your_corpus)  # Train on your documents

def hybrid_scale(dense: list[float], sparse: dict, alpha: float):
    """Pinecone's query API has no alpha parameter — apply the weighting
    yourself by scaling the dense vector by alpha and the sparse values
    by (1 - alpha), as Pinecone's hybrid search docs recommend."""
    scaled_sparse = {
        "indices": sparse["indices"],
        "values": [v * (1 - alpha) for v in sparse["values"]],
    }
    scaled_dense = [v * alpha for v in dense]
    return scaled_dense, scaled_sparse

def hybrid_search(query: str, alpha: float = 0.5, top_k: int = 5):
    """
    alpha: 0.0 = pure sparse (keyword), 1.0 = pure dense (semantic)
    0.5 = equal weight hybrid (recommended starting point)
    """
    # Dense embedding
    dense_response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=query
    )
    dense_vector = dense_response.data[0].embedding
    
    # Sparse BM25 encoding
    sparse_vector = bm25.encode_queries(query)
    
    # Weight the two representations, then run a single hybrid query
    dense_vector, sparse_vector = hybrid_scale(dense_vector, sparse_vector, alpha)
    results = index.query(
        vector=dense_vector,
        sparse_vector=sparse_vector,
        top_k=top_k,
        include_metadata=True
    )
    
    return results.matches

# Usage examples:
# For product catalog (exact matches matter): alpha=0.3 (more keyword)
# For FAQ/support docs (concepts matter): alpha=0.7 (more semantic)
# Default balanced: alpha=0.5

6. Implementation: Hybrid Search with Weaviate

Weaviate has first-class hybrid search support with its own BM25 implementation built in:

import weaviate
from weaviate.classes.query import HybridFusion

client = weaviate.connect_to_wcs(
    cluster_url="your-weaviate-url",
    auth_credentials=weaviate.auth.AuthApiKey("your-key"),
)

collection = client.collections.get("Products")

# Hybrid search — Weaviate handles BM25 + vector automatically
results = collection.query.hybrid(
    query="iPhone 12 return policy",
    alpha=0.5,  # 50% dense semantic, 50% sparse BM25
    fusion_type=HybridFusion.RELATIVE_SCORE,  # Alternative to RRF
    limit=5,
    return_metadata=weaviate.classes.query.MetadataQuery(score=True)
)

for result in results.objects:
    print(f"Score: {result.metadata.score:.4f} | {result.properties['title']}")

7. Tuning the Alpha Parameter

The alpha (or balance) parameter is the most important tuning knob in hybrid search. Here's a practical guide to starting values (always tune against your own queries):

| Use Case                        | Alpha | Rationale                                    |
|---------------------------------|-------|----------------------------------------------|
| Product catalog / e-commerce    | 0.25  | Exact SKUs, model numbers matter most        |
| Legal document search           | 0.4   | Balance: specific terms + conceptual context |
| General FAQ / support docs      | 0.5   | Default: balanced hybrid                     |
| Academic paper retrieval        | 0.6   | Concepts matter more than exact phrasing     |
| Creative writing / storytelling | 0.8   | Semantic meaning dominates                   |
| Pure semantic Q&A               | 1.0   | Disable sparse entirely                      |
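Treat these values as starting points, not gospel: the honest way to pick alpha is to sweep it against labeled queries. A minimal evaluation sketch, assuming a search function shaped like the hybrid_search defined earlier (takes query and alpha, returns match objects with an .id attribute):

```python
def precision_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int = 5) -> float:
    """Fraction of the top-k retrieved documents that are actually relevant."""
    return sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids) / k

def sweep_alpha(search_fn, labeled_queries, alphas=(0.25, 0.5, 0.75)):
    """labeled_queries: list of (query, set_of_relevant_doc_ids) pairs."""
    averages = {}
    for alpha in alphas:
        scores = [
            precision_at_k([m.id for m in search_fn(q, alpha=alpha)], relevant)
            for q, relevant in labeled_queries
        ]
        averages[alpha] = sum(scores) / len(scores)
    return averages  # pick the alpha with the highest average precision@k
```

Even 30-50 labeled queries sampled from real traffic is usually enough to see which direction to move.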

8. Adding a Reranker for Even Better Results

After hybrid search returns 20 candidates, a cross-encoder reranker re-scores them with full query-document interaction. This dramatically improves precision:

import cohere

co = cohere.Client("your-cohere-key")

# Step 1: Hybrid search for 20 candidates
candidates = hybrid_search(query, top_k=20)

# Step 2: Rerank top 20 to get the best 5
reranked = co.rerank(
    model="rerank-english-v3.0",
    query=query,
    documents=[c.metadata["text"] for c in candidates],
    top_n=5  # Return only the best 5
)

# Reranking adds ~50ms latency but typically improves 
# retrieval quality by 15-25% on benchmark datasets
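The rerank response doesn't return your documents; each result carries an index back into the list you passed in. A small helper to map the new order back onto your candidate objects (assuming result items expose an .index attribute, as Cohere's SDK response does):

```python
def apply_rerank_order(candidates: list, rerank_results) -> list:
    """Reorder the original candidates by the reranker's output.

    rerank_results: iterable of objects with an .index attribute pointing
    back into `candidates` (the shape Cohere's rerank response uses).
    """
    return [candidates[r.index] for r in rerank_results]

# e.g. top_docs = apply_rerank_order(candidates, reranked.results)
```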

Troubleshooting Hybrid Search

Issue: Hybrid search returns worse results than pure semantic

Your BM25 encoder may not be fitted on your domain vocabulary. Re-run bm25.fit() on your specific corpus. Also try increasing alpha (less weight on sparse) if users are mostly asking conceptual questions.

Issue: Exact product names still not found

Increase sparse weight (lower alpha). Also add metadata filtering: for structured data like SKUs or categories, use exact-match metadata filters (Pinecone's filter parameter, Weaviate's Filter API) instead of relying on search alone.
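As a sketch, Pinecone-style metadata filters use Mongo-like operators; the field name below is hypothetical:

```python
def exact_match_filter(field: str, value) -> dict:
    """Build a Pinecone-style metadata filter (Mongo-like operator syntax)."""
    return {field: {"$eq": value}}

# Hypothetical usage: restrict hybrid search to one SKU's documents
# index.query(..., filter=exact_match_filter("sku", "IPHONE12-128GB"))
```

This way "iPhone 12" is a hard constraint, and search only has to rank within the matching documents.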

Frequently Asked Questions

Is hybrid search always better than pure vector search?

It depends on your data. For general conversational Q&A over prose, pure vector search often performs equally well. Hybrid search shines when your corpus contains proper nouns, identifiers, technical terms, or version-specific content. Always benchmark both on a sample of real queries.

What sparse/keyword libraries should I use?

For Python: rank_bm25 (standalone), Elasticsearch/OpenSearch (distributed). For managed solutions: Pinecone natively handles sparse vectors; Weaviate has built-in BM25. If using LangChain, EnsembleRetriever combines any two retrievers with configurable weights.

How does this affect latency?

Running two retrievals in parallel adds ~5-20ms vs single vector search. Adding a reranker adds another 50-100ms. For most production use cases this is acceptable. If latency is critical, skip the reranker and tune alpha instead.

Conclusion

Hybrid search is the production-ready standard for serious RAG systems. Pure vector search is a great starting point, but when you care about precision — when your users are searching for specific products, policies, error codes, or technical terms — the combination of BM25 and embedding-based retrieval consistently outperforms either approach alone. Start with alpha=0.5, measure precision@5 on a labeled test set, and tune from there.

Written by

Vivek

AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.

GPT-4o · LangChain · Next.js · Vector DBs · RAG · Vercel AI SDK