
BAAI/bge-reranker-large

We have released an updated reranker that supports longer inputs and more languages, and achieves better performance.

Architecture: XLM-RoBERTa
Parameters: 560M
Tasks: Score
Outputs: Score
Max Sequence Length: 512 tokens
License: MIT
Languages: en, zh
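The model scores (query, passage) pairs directly as a cross-encoder. A minimal sketch using Hugging Face `transformers`; the example pair is illustrative, and higher logits mean higher relevance:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-large")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-large")
model.eval()

# Each input is a [query, passage] pair; the model returns one relevance logit per pair.
pairs = [
    ["what is a panda?", "The giant panda is a bear species endemic to China."],
    ["what is a panda?", "Paris is the capital of France."],
]

with torch.no_grad():
    inputs = tokenizer(
        pairs,
        padding=True,
        truncation=True,       # respects the 512-token max sequence length
        max_length=512,
        return_tensors="pt",
    )
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

print(scores)  # one score per pair; sort passages by score to rerank
```

To rerank a candidate list, score every (query, passage) pair and sort descending by score.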

Benchmarks

AskUbuntuDupQuestions

technology reranking en

Duplicate question detection from AskUbuntu

Corpus: 6,743 · Queries: 360
Quality
nDCG@10: 0.6404
MAP@10: 0.4835
MRR@10: 0.7114
Performance (L4, b1, c16)
Query TPS: 6.6K
Query p50: 41.4 ms
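The quality numbers reported here are standard ranking metrics. A self-contained sketch of binary-relevance MRR@k and nDCG@k, assuming one 0/1 relevance list per query, ordered by the reranker's scores (the function names and sample data are illustrative):

```python
import math

def mrr_at_k(ranked_relevance, k=10):
    """Mean reciprocal rank of the first relevant hit within the top k."""
    total = 0.0
    for rels in ranked_relevance:
        for i, r in enumerate(rels[:k]):
            if r:
                total += 1.0 / (i + 1)
                break
    return total / len(ranked_relevance)

def ndcg_at_k(ranked_relevance, k=10):
    """nDCG@k with binary relevance and log2 position discounting."""
    total = 0.0
    for rels in ranked_relevance:
        dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
        ideal = sorted(rels, reverse=True)
        idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
        total += dcg / idcg if idcg > 0 else 0.0
    return total / len(ranked_relevance)

# Two toy queries: relevant docs at ranks 2 and 1 respectively.
runs = [[0, 1, 0, 1], [1, 0, 0, 0]]
print(mrr_at_k(runs))   # (1/2 + 1/1) / 2 = 0.75
print(ndcg_at_k(runs))
```

MAP@10 follows the same pattern, averaging precision at each relevant rank instead of discounting by log position.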

CMedQAv1Reranking

medical reranking zh

Chinese medical question answering reranking (v1)

Corpus: 100,000 · Queries: 2,000
Quality
MAP@10: 0.8164
MRR@10: 0.8492

CMedQAv2Reranking

medical reranking zh

Chinese medical question answering reranking (v2)

Corpus: 108,000 · Queries: 4,000
Quality
MAP@10: 0.8361
MRR@10: 0.8697

MMarcoReranking

general reranking zh

Multilingual MARCO passage reranking (Chinese)

Quality
MAP@10: 0.3638
MRR@10: 0.3668

T2Reranking

general reranking zh

Chinese passage ranking benchmark

Quality
MAP@10: 0.5623
MRR@10: 0.7770
