
naver/splade-v3

This checkpoint is initialized from SPLADE++SelfDistil (`naver/splade-cocondenser-selfdistil`) and trained with a mix of KL-Div and MarginMSE losses, with 8 negatives per query sampled from SPLADE++SelfDistil. Training used the original MS MARCO collection, without titles.
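As a sparse encoder, SPLADE maps text to a vocabulary-sized weight vector: the BERT MLM head produces per-token logits, which are passed through a log-saturated ReLU and max-pooled over token positions; retrieval scores are then dot products between query and document vectors. The sketch below shows that pooling and scoring on toy arrays (the function names and the random "logits" are illustrative, not part of the released model; in practice you would obtain real logits by running `naver/splade-v3` through a masked-LM forward pass):

```python
import numpy as np

def splade_pool(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Collapse per-token MLM logits into one sparse vocabulary vector.

    logits: (seq_len, vocab_size) array from the BERT MLM head.
    attention_mask: (seq_len,) array, 1 for real tokens, 0 for padding.
    """
    # SPLADE activation: log(1 + ReLU(logit)) sparsifies and saturates
    # large values; masked positions are zeroed before max-pooling.
    saturated = np.log1p(np.maximum(logits, 0.0)) * attention_mask[:, None]
    return saturated.max(axis=0)

def score(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    # Sparse retrieval score: dot product over the 30,522-entry vocabulary.
    return float(query_vec @ doc_vec)

# Toy demo with random logits over a BERT-sized vocabulary.
rng = np.random.default_rng(0)
q = splade_pool(rng.normal(size=(8, 30522)), np.ones(8))
d = splade_pool(rng.normal(size=(64, 30522)), np.ones(64))
print(q.shape, score(q, d))
```

Because every weight is non-negative, the resulting vectors can be indexed and searched with an inverted index, just like classical bag-of-words representations.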

Architecture: BERT
Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions: Sparse (30,522)
Max Sequence Length: 512 tokens
License: cc-by-nc-sa-4.0
Languages: en

Benchmarks

CQADupstackPhysicsRetrieval (scientific retrieval, en)

Duplicate question retrieval from StackExchange Physics.

Corpus: 38,314 · Queries: 1,039

Quality
  nDCG@10: 0.3639
  MAP@10: 0.3082
  MRR@10: 0.3560

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 20.7K, p50: 73.7 ms
  Query TPS: 3.0K, p50: 52.9 ms

CosQA (technology retrieval, en)

Code search with natural language queries.

Corpus: 6,267 · Queries: 500

Quality
  nDCG@10: 0.2045
  MAP@10: 0.1567
  MRR@10: 0.1702

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 9.1K, p50: 62.2 ms
  Query TPS: 1.6K, p50: 51.3 ms

FiQA2018 (finance retrieval, en)

Financial opinion mining and question answering.

Corpus: 57,599 · Queries: 648

Quality
  nDCG@10: 0.2768
  MAP@10: 0.2113
  MRR@10: 0.3451

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 24.5K, p50: 67.6 ms
  Query TPS: 3.3K, p50: 51.2 ms

LegalBenchConsumerContractsQA (legal retrieval, en)

Question answering on consumer contracts.

Corpus: 153 · Queries: 396

Quality
  nDCG@10: 0.7393
  MAP@10: 0.6784
  MRR@10: 0.6805

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 56.0K, p50: 115.2 ms
  Query TPS: 4.5K, p50: 52.2 ms

NFCorpus (medical retrieval, en)

Biomedical literature search from NutritionFacts.org.

Corpus: 3,593 · Queries: 323

Quality
  nDCG@10: 0.3404
  MAP@10: 0.1300
  MRR@10: 0.5417

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 37.0K, p50: 98.4 ms
  Query TPS: 1.4K, p50: 52.6 ms

Performance (NVIDIA RTX 4090, b1 c16)
  Corpus TPS: 108.8K, p50: 40.3 ms
  Query TPS: 3.5K, p50: 19.1 ms

SCIDOCS (scientific retrieval, en)

Citation prediction, document classification, and recommendation for scientific papers.

Corpus: 25,656 · Queries: 1,000

Quality
  nDCG@10: 0.1543
  MAP@10: 0.0878
  MRR@10: 0.2686

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 26.2K, p50: 83.4 ms
  Query TPS: 3.0K, p50: 53.6 ms

SciFact (scientific retrieval, en)

Scientific claim verification using research literature.

Corpus: 5,183 · Queries: 300

Quality
  nDCG@10: 0.6846
  MAP@10: 0.6371
  MRR@10: 0.6524

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 35.8K, p50: 89.7 ms
  Query TPS: 4.5K, p50: 52.1 ms

StackOverflowQA (technology retrieval, en)

Programming question answering from Stack Overflow.

Corpus: 19,931 · Queries: 1,994

Quality
  nDCG@10: 0.7380
  MAP@10: 0.7057
  MRR@10: 0.7057

Performance (NVIDIA L4, b1 c16)
  Corpus TPS: 33.0K, p50: 84.0 ms
  Query TPS: 50.8K, p50: 91.0 ms
