
mixedbread-ai/mxbai-rerank-large-v2

The crispy rerank family from Mixedbread.

Architecture: Qwen2
Parameters: 435M
Task / Output: Score
Max sequence length: 8,192 tokens
License: apache-2.0
Languages: af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, ff, fi, fr, fy, ga, gd, gl, gn, gu, ha, he, hi, hr, ht, hu, hy, id, ig, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lg, li, ln, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, ns, om, or, pa, pl, ps, pt, qu, rm, ro, ru, sa, sc, sd, si, sk, sl, so, sq, sr, ss, su, sv, sw, ta, te, th, tl, tn, tr, ug, uk, ur, uz, vi, wo, xh, yi, yo, zh, zu
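
A reranker of this kind takes a query plus a list of candidate documents and returns them reordered by a relevance score. The sketch below shows that workflow; the commented-out `mxbai_rerank` package and `MxbaiRerankV2` class are Mixedbread's published interface for the v2 rerankers (check the repository for the current API), while a toy token-overlap scorer stands in for the 435M cross-encoder so the sketch runs without downloading weights.

```python
# Minimal sketch of the rerank workflow. In production you would load the
# actual model, e.g. (assumption -- verify against the mxbai-rerank repo):
#   from mxbai_rerank import MxbaiRerankV2
#   model = MxbaiRerankV2("mixedbread-ai/mxbai-rerank-large-v2")
#   results = model.rank(query, documents, top_k=3)
# Here a toy lexical-overlap scorer stands in for the cross-encoder.

def toy_score(query: str, doc: str) -> float:
    """Fraction of query tokens that also appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query, documents, top_k=3, score_fn=toy_score):
    """Score every document against the query and return the top_k, best first."""
    scored = [(score_fn(query, doc), i, doc) for i, doc in enumerate(documents)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [{"index": i, "score": s, "document": d} for s, i, d in scored[:top_k]]

docs = [
    "Qwen2 is a family of decoder-only language models.",
    "Reranking reorders retrieved documents by relevance to the query.",
    "Apache-2.0 is a permissive open-source license.",
]
print(rerank("what does a reranker do to retrieved documents", docs, top_k=1))
```

Swapping `score_fn` for the real model's scoring call changes only the quality of the ordering, not the shape of the workflow.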

Benchmarks

AskUbuntuDupQuestions (technology · reranking · en)

Duplicate question detection from AskUbuntu.

Corpus: 6,743 · Queries: 360
Quality: NDCG@10 0.6914 · MAP@10 0.5401 · MRR@10 0.7788
Performance (L4, b1, c16): Query TPS 2.6K · Query p50 132.2 ms
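
The quality metrics in these tables (NDCG@10, MAP@10, MRR@10) can be sketched as follows. `ranked_rels` is assumed to be the list of relevance labels of a query's candidate documents in model-ranked order, the usual shape for MTEB-style reranking evaluations; the reported numbers are these per-query values averaged over all queries.

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """Normalized discounted cumulative gain: DCG over the ideal (sorted) DCG."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(ranked_rels, k=10):
    """Reciprocal rank of the first relevant document within the top k."""
    for i, rel in enumerate(ranked_rels[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def map_at_k(ranked_rels, k=10, total_relevant=None):
    """Average precision truncated at k, normalized by the relevant count."""
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_rels[:k]):
        if rel > 0:
            hits += 1
            score += hits / (i + 1)
    denom = total_relevant if total_relevant else max(hits, 1)
    return score / denom
```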

CMedQAv1Reranking (medical · reranking · zh)

Chinese medical question answering reranking (v1).

Corpus: 100,000 · Queries: 2,000
Quality: MAP@10 0.8304 · MRR@10 0.8633

CMedQAv2Reranking (medical · reranking · zh)

Chinese medical question answering reranking (v2).

Corpus: 108,000 · Queries: 4,000
Quality: MAP@10 0.8282 · MRR@10 0.8628

CosQA (technology · retrieval · en)

Code search with natural language queries.

Corpus: 6,267 · Queries: 500
Performance (L4, b1, c16): Query TPS 1.9K · Query p50 535.4 ms

FiQA2018 (finance · retrieval · en)

Financial opinion mining and question answering.

Corpus: 57,599 · Queries: 648
Performance (L4, b1, c16): Query TPS 1.3K · Query p50 1.4 s

LegalBenchConsumerContractsQA (legal · retrieval · en)

Question answering on consumer contracts.

Corpus: 153 · Queries: 396
Performance (L4, b1, c16): Query TPS 7.5K · Query p50 767.2 ms

MMarcoReranking (general · reranking · zh)

Multilingual MARCO passage reranking (Chinese).

Quality: MAP@10 0.3258 · MRR@10 0.3500

NFCorpus (medical · retrieval · en)

Biomedical literature search from NutritionFacts.org.

Corpus: 3,593 · Queries: 323
Performance (L4, b1, c16): Query TPS 2.3K · Query p50 1.7 s

SciFact (scientific · retrieval · en)

Scientific claim verification using research literature.

Corpus: 5,183 · Queries: 300
Performance (L4, b1, c16): Query TPS 2.2K · Query p50 1.7 s

T2Reranking (general · reranking · zh)

Chinese passage ranking benchmark.

Quality: MAP@10 0.5458 · MRR@10 0.7742
