
cross-encoder/ms-marco-MiniLM-L-12-v2

This model was trained on the MS MARCO passage ranking task.

Architecture: BERT
Parameters: 33M
Tasks: Score
Outputs: Score
Max Sequence Length: 512 tokens
License: apache-2.0
Languages: en

Benchmarks

AskUbuntuDupQuestions

technology reranking en

Duplicate question detection from AskUbuntu

Corpus: 6,743 | Queries: 360
Quality
nDCG@10: 0.6145
MAP@10: 0.4558
MRR@10: 0.6921
Performance (L4, b1, c16)
Query TPS: 8.2K
Query p50: 31.7 ms
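The quality numbers above are standard ranking metrics. A small sketch of how MRR@10 and nDCG@10 are computed from a ranked list of binary relevance labels (function names are my own, for illustration):

```python
import math

def mrr_at_k(ranked_relevances, k=10):
    """Mean reciprocal rank contribution of one query:
    1 / (rank of the first relevant result), or 0 if none in the top k."""
    for i, rel in enumerate(ranked_relevances[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(ranked_relevances, k=10):
    """Normalized discounted cumulative gain: DCG of the actual ranking
    divided by the DCG of the ideal (relevance-sorted) ranking."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevances[:k]))
    ideal = sorted(ranked_relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```

The benchmark scores are these per-query values averaged over all queries in the dataset.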

CMedQAv1Reranking

medical reranking zh

Chinese medical question answering reranking (v1)

Corpus: 100,000 | Queries: 2,000
Quality
MAP@10: 0.1016
MRR@10: 0.1528

CMedQAv2Reranking

medical reranking zh

Chinese medical question answering reranking (v2)

Corpus: 108,000 | Queries: 4,000
Quality
MAP@10: 0.1218
MRR@10: 0.1812

MMarcoReranking

general reranking zh

Multilingual MARCO passage reranking (Chinese)

Quality
MAP@10: 0.0426
MRR@10: 0.0446

T2Reranking

general reranking zh

Chinese passage ranking benchmark

Quality
MAP@10: 0.5184
MRR@10: 0.7511
