
mixedbread-ai/mxbai-rerank-base-v2

The crispy rerank family from Mixedbread.

Architecture: Qwen2
Parameters: 150M
Tasks: Score
Outputs: Score
Max Sequence Length: 8,192 tokens
License: apache-2.0
Languages: af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, ff, fi, fr, fy, ga, gd, gl, gn, gu, ha, he, hi, hr, ht, hu, hy, id, ig, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lg, li, ln, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, ns, om, or, pa, pl, ps, pt, qu, rm, ro, ru, sa, sc, sd, si, sk, sl, so, sq, sr, ss, su, sv, sw, ta, te, th, tl, tn, tr, ug, uk, ur, uz, vi, wo, xh, yi, yo, zh, zu
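A reranker takes a query and a set of candidate documents, scores each (query, document) pair for relevance, and returns the candidates ordered by score. The sketch below illustrates that interface only; the trivial token-overlap scorer is a hypothetical stand-in, and real scores would come from the mxbai-rerank-base-v2 model itself.

```python
# Sketch of the rerank interface: score each (query, document) pair,
# then return the candidates sorted by descending relevance score.
# The overlap scorer is a toy stand-in for the actual model.

def score(query: str, document: str) -> float:
    """Toy relevance score: fraction of query tokens that appear in the document."""
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    return len(q_tokens & d_tokens) / len(q_tokens) if q_tokens else 0.0

def rerank(query: str, documents: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """Score every candidate and keep the top_k highest-scoring ones."""
    scored = [(score(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

docs = [
    "How do I update packages on Ubuntu",
    "Best pizza recipes for a wood oven",
    "Updating and upgrading Ubuntu packages with apt",
]
ranked = rerank("update Ubuntu packages", docs, top_k=2)
```

With the real model, the scoring step would run one forward pass per pair (up to the 8,192-token limit) instead of the lexical overlap used here.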

Benchmarks

AskUbuntuDupQuestions

technology · reranking · en

Duplicate question detection from AskUbuntu

Corpus: 6,743 · Queries: 360
Quality: nDCG@10 0.6638 · MAP@10 0.5047 · MRR@10 0.7531
Performance (L4, b1, c16): Query TPS 5.0K · Query p50 58.6 ms
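The quality numbers in these benchmarks are standard ranking metrics. As a reference for how they are computed, here is a small self-contained sketch of MRR@k, MAP@k, and nDCG@k over binary relevance judgments; the actual evaluation harness may differ in details such as tie handling and how MAP normalizes over unretrieved relevant documents.

```python
import math

def mrr_at_k(ranked_rel: list[int], k: int = 10) -> float:
    """Reciprocal rank of the first relevant result within the top k."""
    for i, rel in enumerate(ranked_rel[:k], start=1):
        if rel:
            return 1.0 / i
    return 0.0

def map_at_k(ranked_rel: list[int], k: int = 10) -> float:
    """Mean of the precision values at each relevant position in the top k
    (normalized here over retrieved relevant docs; harnesses vary)."""
    hits, precision_sum = 0, 0.0
    for i, rel in enumerate(ranked_rel[:k], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i
    return precision_sum / hits if hits else 0.0

def ndcg_at_k(ranked_rel: list[int], k: int = 10) -> float:
    """DCG of the ranking divided by the DCG of the ideal ranking."""
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ranked_rel[:k], start=1))
    ideal = sorted(ranked_rel, reverse=True)
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg else 0.0

# Binary relevance of the top-5 results for one query (1 = relevant):
rels = [0, 1, 0, 1, 0]
```

Per-query values like these are averaged over all queries in a benchmark to produce the figures reported above.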

CMedQAv1Reranking

medical · reranking · zh

Chinese medical question answering reranking (v1)

Corpus: 100,000 · Queries: 2,000
Quality: MAP@10 0.7981 · MRR@10 0.8403

CMedQAv2Reranking

medical · reranking · zh

Chinese medical question answering reranking (v2)

Corpus: 108,000 · Queries: 4,000
Quality: MAP@10 0.8032 · MRR@10 0.8469

CQADupstackPhysicsRetrieval

scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 · Queries: 1,039
Performance (L4, b1, c16): Query TPS 4.1K · Query p50 593.2 ms

CosQA

technology · retrieval · en

Code search with natural language queries

Corpus: 6,267 · Queries: 500
Performance (L4, b1, c16): Query TPS 2.1K · Query p50 444.7 ms

LegalBenchConsumerContractsQA

legal · retrieval · en

Question answering on consumer contracts

Corpus: 153 · Queries: 396
Performance (L4, b1, c16): Query TPS 14.6K · Query p50 450.9 ms

MMarcoReranking

general · reranking · zh

Multilingual MARCO passage reranking (Chinese)

Quality: MAP@10 0.3216 · MRR@10 0.3464

SCIDOCS

scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 · Queries: 1,000
Performance (L4, b1, c16): Query TPS 7.0K · Query p50 457.1 ms

StackOverflowQA

technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 · Queries: 1,994
Performance (L4, b1, c16): Query TPS 11.4K · Query p50 534.8 ms

T2Reranking

general · reranking · zh

Chinese passage ranking benchmark

Quality: MAP@10 0.5466 · MRR@10 0.7734
