
nomic-ai/nomic-embed-text-v2-moe

Blog | Technical Report | AWS SageMaker | Atlas Embedding and Unstructured Data Analytics Platform

Architecture: NomicBERT
Parameters: 475M
Tasks: Encode
Outputs: Dense
Dimensions: 768 (dense)
Max Sequence Length: 2,048 tokens
License: apache-2.0
Languages: en, es, fr, de, it, pt, pl, nl, tr, ja, vi, ru, id, ar, cs, ro, sv, el, uk, zh, hu, da, no, hi, fi, bg, ko, sk, th, he, ca, lt, fa, ms, sl, lv, mr, bn, sq, cy, be, ml, kn, mk, ur, fy, te, eu, sw, so, sd, uz, co, hr, gu, ce, eo, jv, la, zu, mn, si, ga, ky, tg, my, km, mg, pa, sn, ha, ht, su, gd, ny, ps, ku, am, ig, lo, mi, nn, sm, yi, st, tl, xh, yo, af, ta, tn, ug, az, ba, bs, dv, et, gl, gn, gv, hy
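The model encodes each input text into a single dense 768-dimensional vector, and retrieval then reduces to ranking corpus vectors by cosine similarity against a query vector. A minimal sketch of that ranking step, using random placeholder vectors in place of real model embeddings (in practice the vectors would come from encoding text with nomic-embed-text-v2-moe):

```python
import numpy as np

DIM = 768  # dense output dimension of the model
rng = np.random.default_rng(0)

# Placeholder embeddings standing in for encoded documents and a query.
corpus = rng.standard_normal((5, DIM))
query = rng.standard_normal(DIM)

def top_k(query, corpus, k=3):
    """Rank corpus rows by cosine similarity to the query; return indices and scores."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per corpus row
    order = np.argsort(-scores)[:k]     # best-first
    return order, scores[order]

idx, scores = top_k(query, corpus)
```

Cosine similarity over unit-normalized vectors is the standard scoring function for dense retrieval; the same ranking logic applies whether the corpus has five rows or, as in the benchmarks below, tens of thousands.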

Benchmarks

CQADupstackPhysicsRetrieval

Tags: scientific · retrieval · en

Duplicate-question retrieval from StackExchange Physics

Corpus: 38,314 · Queries: 1,039

Performance (L4, batch 1, concurrency 16):
Corpus TPS: 13.0K
Corpus p50: 149.6 ms
Query TPS: 1.2K
Query p50: 143.2 ms
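The performance figures in each benchmark report throughput (TPS) and median latency (p50). A minimal sketch of how both can be derived from recorded per-request timings; the request records here are made up for illustration:

```python
import statistics

# Hypothetical per-request records: (tokens_processed, latency_seconds).
requests = [(512, 0.150), (512, 0.148), (480, 0.152), (512, 0.149)]

latencies = [lat for _, lat in requests]
p50_ms = statistics.median(latencies) * 1000   # median request latency in ms

total_tokens = sum(tok for tok, _ in requests)
wall_time = sum(latencies)   # serial timing; a concurrent run would use wall-clock time
tps = total_tokens / wall_time
```

Note that p50 describes a single request's latency while TPS is an aggregate over the whole run, which is why high concurrency can push TPS up without lowering p50.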

CosQA

Tags: technology · retrieval · en

Code search with natural-language queries

Corpus: 6,267 · Queries: 500

Performance (L4, batch 1, concurrency 16):
Corpus TPS: 807
Corpus p50: 595.7 ms
Query TPS: 139
Query p50: 634.4 ms

NanoFiQA2018Retrieval

Tags: finance · retrieval · en

A smaller subset of the FiQA financial QA dataset

Quality:
NDCG@10: 0.5207
MAP@10: 0.4283
MRR@10: 0.5634

Performance (L4, batch 1, concurrency 16):
Corpus TPS: 20.1K
Corpus p50: 135.4 ms
Query TPS: 1.7K
Query p50: 119.2 ms
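The quality figures for NanoFiQA2018Retrieval are standard ranking metrics. A minimal sketch of NDCG@10 and MRR@10 for a single query, using made-up relevance grades (benchmark scores average these over all queries):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k relevance grades."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """DCG normalized by the DCG of the ideal (descending-sorted) ranking."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(rels, k):
    """Reciprocal rank of the first relevant result within the top k."""
    for i, r in enumerate(rels[:k]):
        if r > 0:
            return 1.0 / (i + 1)
    return 0.0

# Hypothetical relevance grades for one query's top-10 ranked results.
rels = [0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
```

NDCG rewards placing relevant documents near the top, MRR looks only at the first relevant hit, and MAP (also reported above) averages precision at each relevant position.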

SCIDOCS

Tags: scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 · Queries: 1,000

Performance (L4, batch 1, concurrency 16):
Corpus TPS: 2.4K
Corpus p50: 1.3 s
Query TPS: 74
Query p50: 1.7 s

StackOverflowQA

Tags: technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 · Queries: 1,994

Performance (L4, batch 1, concurrency 16):
Corpus TPS: 24.1K
Corpus p50: 145.6 ms
Query TPS: 33.4K
Query p50: 142.9 ms
