
jinaai/jina-reranker-v2-base-multilingual

Trained by Jina AI.

Architecture: XLM-RoBERTa
Parameters: 278M
Tasks: Score
Outputs: Score
Max sequence length: 8,192 tokens
License: cc-by-nc-4.0
Languages: multilingual
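The card lists a single output, a relevance score, which is how cross-encoder rerankers are used in practice: the model jointly encodes each (query, document) pair, emits one score, and documents are sorted by that score. A minimal sketch of that pattern follows; the commented model-loading lines assume the sentence-transformers CrossEncoder API works for this checkpoint and are not verified here.

```python
# Sketch of the cross-encoder reranking pattern: score every
# (query, document) pair, then sort documents by score, highest first.
# `score_fn` is any callable mapping a list of (query, doc) pairs to scores.

def rerank(score_fn, query, documents, top_k=None):
    """Return (document, score) pairs sorted by score, highest first."""
    scores = score_fn([(query, doc) for doc in documents])
    ranked = sorted(zip(documents, scores), key=lambda p: p[1], reverse=True)
    return ranked if top_k is None else ranked[:top_k]

# With the real checkpoint (assumption: the sentence-transformers
# CrossEncoder API; loading needs network access and remote code):
#
#   from sentence_transformers import CrossEncoder
#   model = CrossEncoder("jinaai/jina-reranker-v2-base-multilingual",
#                        trust_remote_code=True, max_length=8192)
#   top = rerank(model.predict, "how to deduplicate questions",
#                candidate_documents, top_k=10)
```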

Benchmarks

AskUbuntuDupQuestions

technology · reranking · en

Duplicate question detection from AskUbuntu.

Corpus: 6,743 · Queries: 360

Quality
nDCG@10: 0.6546
MAP@10: 0.5008
MRR@10: 0.7429

Performance (L4, b1, c16)
Query TPS: 8.3K
Query p50: 32.0 ms

CMedQAv1Reranking

medical · reranking · zh

Chinese medical question answering reranking (v1).

Corpus: 100,000 · Queries: 2,000

Quality
MAP@10: 0.2034
MRR@10: 0.2747

CMedQAv2Reranking

medical · reranking · zh

Chinese medical question answering reranking (v2).

Corpus: 108,000 · Queries: 4,000

Quality
MAP@10: 0.2063
MRR@10: 0.2837

MMarcoReranking

general · reranking · zh

Multilingual MARCO passage reranking (Chinese).

Quality
MAP@10: 0.3433
MRR@10: 0.3622
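The quality numbers above are standard top-10 ranking metrics. As a reference, here are minimal implementations over a single query's binary relevance list, ordered as the reranker returned it; MAP@10 in the tables is AP@10 averaged over all queries. Note this is a sketch: the AP normalization convention used here (number of relevant documents retrieved in the top k) varies between evaluation toolkits.

```python
import math

def dcg_at_k(relevances, k):
    # DCG@k = sum over rank i of rel_i / log2(i + 1), ranks starting at 1
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (sorted-by-relevance) ordering
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevances, k):
    # Reciprocal rank of the first relevant document within the top k
    for i, rel in enumerate(relevances[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def ap_at_k(relevances, k):
    # Average of precision at each relevant position within the top k
    hits, total = 0, 0.0
    for i, rel in enumerate(relevances[:k]):
        if rel > 0:
            hits += 1
            total += hits / (i + 1)
    return total / hits if hits else 0.0
```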
