
BAAI/bge-reranker-base

We have released an updated reranker that supports longer input lengths and more languages, and achieves better performance.

Architecture: XLM-RoBERTa
Parameters: 278M
Task: Score
Output: Score
Max sequence length: 512 tokens
License: MIT
Languages: en, zh
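As a cross-encoder, the model takes a query–passage pair and outputs a single relevance score. A minimal usage sketch with Hugging Face `transformers` (the example query and passages are illustrative, not from this page):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the cross-encoder reranker; it emits one relevance logit per pair.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-base")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-base")
model.eval()

pairs = [
    ["what is a panda?", "The giant panda is a bear species endemic to China."],
    ["what is a panda?", "Paris is the capital of France."],
]

with torch.no_grad():
    # Inputs beyond the 512-token limit are truncated.
    inputs = tokenizer(pairs, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

# Higher score means more relevant; rank passages by score, descending.
```

To rerank retrieved candidates, score every (query, passage) pair and sort by score.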

Benchmarks

AskUbuntuDupQuestions (technology · reranking · en)

Duplicate question detection from AskUbuntu.

Corpus: 6,743 · Queries: 360
Quality: nDCG@10 0.5926 · MAP@10 0.4326 · MRR@10 0.6741
Performance (L4, b1, c16): Query TPS 5.0K · Query p50 33.2 ms
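The quality figures above and below are standard ranking metrics. A minimal sketch of how nDCG@10, MAP@10, and MRR@10 are computed over a ranked list of relevance labels (helper names are illustrative, not from this page):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain over the top-k ranked results.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize DCG by the DCG of the ideal (descending-sorted) ranking.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevances, k=10):
    # Reciprocal rank of the first relevant result in the top k.
    for i, rel in enumerate(relevances[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def map_at_k(relevances, k=10):
    # Average of precision values at each relevant position in the top k.
    hits, precisions = 0, []
    for i, rel in enumerate(relevances[:k]):
        if rel > 0:
            hits += 1
            precisions.append(hits / (i + 1))
    total_relevant = sum(1 for r in relevances if r > 0)
    return sum(precisions) / min(total_relevant, k) if total_relevant else 0.0
```

Each benchmark reports these metrics averaged over all queries.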

CMedQAv1Reranking (medical · reranking · zh)

Chinese medical question answering reranking (v1).

Corpus: 100,000 · Queries: 2,000
Quality: MAP@10 0.8073 · MRR@10 0.8414

CMedQAv2Reranking (medical · reranking · zh)

Chinese medical question answering reranking (v2).

Corpus: 108,000 · Queries: 4,000
Quality: MAP@10 0.8358 · MRR@10 0.8679

MMarcoReranking (general · reranking · zh)

Multilingual MARCO passage reranking (Chinese).

Quality: MAP@10 0.3422 · MRR@10 0.3460

T2Reranking (general · reranking · zh)

Chinese passage ranking benchmark.

Quality: MAP@10 0.5590 · MRR@10 0.7716
