
opensearch-project/opensearch-neural-sparse-encoding-doc-v3-gte

The model should be selected by weighing search relevance against model inference and retrieval efficiency (FLOPS). We benchmark models' performance on a subset of the BEIR benchmark: TrecCovid, NFCorpus, NQ, HotpotQA, FiQA, ArguAna, Touche, DBPedia, SCIDOCS, FEVER, Climate-FEVER, SciFact, and Quora.
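Retrieval efficiency for sparse encoders is commonly estimated with a FLOPS-style metric: the expected number of multiply-adds a sparse dot product performs per query-document pair, computed from how often each vocabulary dimension is active. A minimal sketch with toy vectors (the arrays below are illustrative, not taken from this model):

```python
import numpy as np

# Toy sparse representations: rows are texts, columns are vocabulary dims.
# A dimension is "active" when its weight is non-zero.
queries = np.array([[0.0, 1.2, 0.0, 0.7],
                    [0.5, 0.0, 0.0, 0.9]])
docs = np.array([[0.0, 0.3, 2.1, 0.0],
                 [0.8, 0.0, 0.0, 1.1],
                 [0.0, 0.4, 0.0, 0.2]])

# Probability that each dimension is active in queries / documents.
p_q = (queries != 0).mean(axis=0)
p_d = (docs != 0).mean(axis=0)

# Expected active-dimension overlap per (query, doc) pair, i.e. the
# multiply-adds a sparse dot product would actually perform.
flops = float(np.sum(p_q * p_d))
print(flops)
```

Because the estimate factorizes per dimension, it equals the average overlap taken directly over all query-document pairs; lower values mean cheaper retrieval.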

Architecture: ModernBERT
Parameters: 137M
Tasks: Encode
Outputs: Sparse
Dimensions (sparse): 30,522
Max Sequence Length: 512 tokens
License: apache-2.0
Languages: en
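The sparse output is a vocabulary-sized vector in which most of the 30,522 dimensions are exactly zero. In this family of neural sparse doc encoders, token logits are typically max-pooled over the sequence and passed through log(1 + ReLU(·)) to produce non-negative, sparse activations. A sketch of that pooling step with dummy logits (model loading, tokenization, and this model's exact head are omitted; the transform shown is an assumption based on the common recipe, not a confirmed detail of v3-gte):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy MLM-style logits for one document: (sequence_length, vocab_size).
# In practice these would come from the encoder's language-model head.
seq_len, vocab_size = 8, 30522
logits = rng.normal(loc=-3.0, scale=1.0, size=(seq_len, vocab_size))

# Max-pool over the sequence, then log(1 + ReLU(x)): any dimension whose
# pooled logit is <= 0 stays exactly zero, yielding a sparse vector.
pooled = logits.max(axis=0)
sparse_vec = np.log1p(np.maximum(pooled, 0.0))

nonzero = int((sparse_vec > 0).sum())
print(nonzero, vocab_size)  # only a small fraction of dims are active
```

The non-zero entries can be stored as `{token: weight}` pairs, which is what makes inverted-index retrieval over these vectors efficient.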

Benchmarks

CQADupstackPhysicsRetrieval

Tags: scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics.

Corpus: 38,314 · Queries: 1,039

Quality:
- nDCG@10: 0.4057
- MAP@10: 0.3518
- MRR@10: 0.4049

Performance (A10G, b1 c16):
- Corpus TPS: 1 · p50: 4.0 s
- Query TPS: 0 · p50: 32.5 s

Performance (L4, b1 c16):
- Corpus TPS: 24.3K · p50: 75.0 ms
- Query TPS: 4.2K · p50: 40.6 ms

CosQA

Tags: technology · retrieval · en

Code search with natural language queries.

Corpus: 6,267 · Queries: 500

Quality:
- nDCG@10: 0.2244
- MAP@10: 0.1739
- MRR@10: 0.1860

Performance (A10G, b1 c16):
- Corpus TPS: 24 · p50: 42.5 s
- Query TPS: 24 · p50: 3.6 s

Performance (L4, b1 c16):
- Corpus TPS: 12.4K · p50: 61.4 ms
- Query TPS: 2.1K · p50: 43.2 ms

FiQA2018

Tags: finance · retrieval · en

Financial opinion mining and question answering.

Corpus: 57,599 · Queries: 648

Quality:
- nDCG@10: 0.4062
- MAP@10: 0.3301
- MRR@10: 0.4849

Performance (A10G, b1 c16):
- Corpus TPS: 0 · p50: 2.0 s
- Query TPS: 0 · p50: 0.0 ms

Performance (L4, b1 c16):
- Corpus TPS: 29.2K · p50: 78.9 ms
- Query TPS: 4.4K · p50: 40.6 ms

LegalBenchConsumerContractsQA

Tags: legal · retrieval · en

Question answering on consumer contracts.

Corpus: 153 · Queries: 396

Quality:
- nDCG@10: 0.7290
- MAP@10: 0.6704
- MRR@10: 0.6712

Performance (L4, b1 c16):
- Corpus TPS: 59.1K · p50: 127.0 ms
- Query TPS: 6.2K · p50: 41.7 ms

NFCorpus

Tags: medical · retrieval · en

Biomedical literature search from NutritionFacts.org.

Corpus: 3,593 · Queries: 323

Quality:
- nDCG@10: 0.3606
- MAP@10: 0.1391
- MRR@10: 0.5725

Performance (L4, b1 c16):
- Corpus TPS: 37.7K · p50: 114.2 ms
- Query TPS: 1.7K · p50: 43.9 ms

SCIDOCS

Tags: scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers.

Corpus: 25,656 · Queries: 1,000

Quality:
- nDCG@10: 0.1586
- MAP@10: 0.0918
- MRR@10: 0.2747

Performance (L4, b1 c16):
- Corpus TPS: 34.2K · p50: 86.0 ms
- Query TPS: 4.2K · p50: 41.2 ms

SciFact

Tags: scientific · retrieval · en

Scientific claim verification using research literature.

Corpus: 5,183 · Queries: 300

Quality:
- nDCG@10: 0.6262
- MAP@10: 0.5830
- MRR@10: 0.5966

Performance (L4, b1 c16):
- Corpus TPS: 40.0K · p50: 103.7 ms
- Query TPS: 5.9K · p50: 43.4 ms

StackOverflowQA

Tags: technology · retrieval · en

Programming question answering from Stack Overflow.

Corpus: 19,931 · Queries: 1,994

Quality:
- nDCG@10: 0.7470
- MAP@10: 0.7160
- MRR@10: 0.7160

Performance (L4, b1 c16):
- Corpus TPS: 34.1K · p50: 101.3 ms
- Query TPS: 78.1K · p50: 57.0 ms
