Interaction Signals
Capture implicit user behavior — clicks, views, dwell time, purchases — to feed into retrieval optimization.

Signal Strength
| Signal | Weight | When to track |
|---|---|---|
| impression | Low | Document appeared in results |
| click | Medium | User clicked a result |
| view | Medium | User viewed the detail page |
| dwell | High | User spent significant time |
| purchase / convert | Highest | User completed a goal action |
Track the result position with every signal — it’s critical for learning which results should rank higher.
Interaction API →
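As a sketch of how these signals might be recorded, the snippet below uses hypothetical placeholder weights mirroring the table above; a real implementation would tune the values per application and send events through the Interaction API.

```python
# Hypothetical weights mirroring the table above; real values
# would be tuned per application.
SIGNAL_WEIGHTS = {
    "impression": 0.1,  # Low
    "click": 0.4,       # Medium
    "view": 0.4,        # Medium
    "dwell": 0.7,       # High
    "purchase": 1.0,    # Highest
}

def record_signal(log, query_id, doc_id, signal, position):
    """Append one interaction event, always including the result position."""
    log.append({
        "query_id": query_id,
        "doc_id": doc_id,
        "signal": signal,
        "weight": SIGNAL_WEIGHTS[signal],
        # Position is recorded for every event so the system can learn
        # which results should rank higher.
        "position": position,
    })

events = []
record_signal(events, "q1", "doc42", "click", position=3)
record_signal(events, "q1", "doc42", "purchase", position=3)
```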
Fusion Strategies
When a retriever has multiple search stages, fusion strategies determine how scores combine into the final ranking.

| Strategy | How it works | Best for |
|---|---|---|
| RRF (Reciprocal Rank Fusion) | Combines ranks, not scores: each document is scored by the sum of 1/(k + rank) across stages | Default — works well with no tuning |
| DBSF (Distribution-Based Score Fusion) | Normalizes score distributions then averages | When scores have different scales |
| Weighted | Manual weights per stage | When you know which stage matters more |
| Max | Takes the highest score across stages | When any match is sufficient |
| Learned | Auto-tunes weights from interaction signals | When you have 500+ interactions |
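As an illustration, RRF (the default above) fits in a few lines. The `rrf_fuse` function and the sample rankings are hypothetical, not part of any API:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score each doc by the sum of 1/(k + rank)
    over every stage's ranked list, then sort by fused score."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["a", "b", "c"]   # ranking from a dense-vector stage
sparse = ["b", "c", "a"]  # ranking from a sparse/keyword stage

# "b" ranks well in both lists, so it wins the fused ranking.
print(rrf_fuse([dense, sparse]))  # ['b', 'a', 'c']
```

Because RRF uses only ranks, it needs no score normalization, which is why it works well with no tuning.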
Evaluations
Measure retriever quality against ground truth datasets with standard IR metrics.

Analytics
Monitor retriever performance in production:

- Stage latency breakdown — identify which stages are slow
- Cache hit rates — verify caching is effective
- Score distributions — detect relevance drift
- Query patterns — understand what users search for
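A minimal sketch of the first two checks, assuming hypothetical per-stage log records with `latency_ms` and `cache_hit` fields (not an actual log schema):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log records: one per stage execution.
logs = [
    {"stage": "dense", "latency_ms": 42, "cache_hit": False},
    {"stage": "dense", "latency_ms": 3, "cache_hit": True},
    {"stage": "rerank", "latency_ms": 120, "cache_hit": False},
]

# Stage latency breakdown: average latency per stage.
by_stage = defaultdict(list)
for rec in logs:
    by_stage[rec["stage"]].append(rec["latency_ms"])
breakdown = {stage: mean(ms) for stage, ms in by_stage.items()}

# Cache hit rate across all stage executions.
hit_rate = sum(r["cache_hit"] for r in logs) / len(logs)
```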
The Feedback Loop
1. Users search via retrievers
2. Interaction signals capture what they engage with
3. Learned fusion adjusts stage weights automatically
4. Annotations provide explicit ground truth for edge cases
5. Evaluations measure improvement quantitatively
6. The cycle repeats — retrieval improves with usage
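The learned-fusion step above can be sketched as a simple credit-assignment update; `update_stage_weights` and the learning rate are illustrative assumptions, not the actual learning algorithm:

```python
def update_stage_weights(weights, credited_stage, lr=0.1):
    """Nudge up the weight of the stage that surfaced a result the user
    engaged with, then renormalize so the weights still sum to 1."""
    updated = dict(weights)
    updated[credited_stage] += lr
    total = sum(updated.values())
    return {stage: w / total for stage, w in updated.items()}

# Start with equal weights; a click is credited to the dense stage.
w = {"dense": 0.5, "sparse": 0.5}
w = update_stage_weights(w, "dense")
# The dense stage's weight grows toward the results users engage with.
```

Repeated over many interactions, updates like this shift weight toward the stages that consistently produce engaged-with results, which is the intuition behind auto-tuned fusion.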

