
Alibaba-NLP/gte-Qwen2-1.5B-instruct

gte-Qwen2-1.5B-instruct is the latest model in the gte (General Text Embedding) family. It is built on the Qwen2-1.5B LLM and uses the same training data and training strategies as the gte-Qwen2-7B-instruct model.
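
Like the other gte-Qwen2 instruct models, queries are expected to carry a one-line task instruction, while corpus documents are embedded as-is. A minimal sketch of that prompt format (the instruction string below is illustrative, not prescribed):

```python
def format_query(task: str, query: str) -> str:
    # gte-Qwen2 instruct models embed queries with a task instruction
    # prefix; corpus documents are embedded without one.
    return f"Instruct: {task}\nQuery: {query}"

# Illustrative task description; tailor it to your retrieval task.
task = "Given a web search query, retrieve relevant passages that answer the query"
print(format_query(task, "how do solar panels work"))
```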

Architecture: Qwen2
Parameters: 1.8B
Tasks: Encode
Outputs: Dense
Dimensions: 1,536 (dense)
Max Sequence Length: 32,768 tokens
License: apache-2.0
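
The model outputs a single 1,536-dimensional dense vector per input, so retrieval reduces to similarity search over those vectors. A minimal, self-contained sketch of cosine-similarity ranking (tiny stand-in vectors, not real model output):

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-in 4-dim vectors; the real model produces 1,536-dim embeddings.
query = [0.1, 0.9, 0.2, 0.0]
corpus = {
    "doc_a": [0.1, 0.8, 0.3, 0.1],
    "doc_b": [0.9, 0.1, 0.0, 0.2],
}
ranked = sorted(corpus, key=lambda d: cosine(query, corpus[d]), reverse=True)
print(ranked)  # doc_a ranks first: its vector points the same way as the query
```

In practice the corpus side of this search is what an approximate-nearest-neighbor index accelerates; the scoring function is the same.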

Benchmarks

CQADupstackPhysicsRetrieval

scientific retrieval en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 Queries: 1,039
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 11.6K
Corpus p50: 178.5 ms
Query TPS: 2.2K
Query p50: 69.6 ms

CosQA

technology retrieval en

Code search with natural language queries

Corpus: 6,267 Queries: 500
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 9.3K
Corpus p50: 96.2 ms
Query TPS: 1.2K
Query p50: 66.8 ms

FiQA2018

finance retrieval en

Financial opinion mining and question answering

Corpus: 57,599 Queries: 648
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 11.8K
Corpus p50: 222.9 ms
Query TPS: 2.1K
Query p50: 73.4 ms

LegalBenchConsumerContractsQA

legal retrieval en

Question answering on consumer contracts

Corpus: 153 Queries: 396
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 12.3K
Corpus p50: 735.3 ms
Query TPS: 3.1K
Query p50: 71.9 ms

NFCorpus

medical retrieval en

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 Queries: 323
Quality:
nDCG@10: 0.3925
MAP@10: 0.1502
MRR@10: 0.6051
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 12.7K
Corpus p50: 384.4 ms
Query TPS: 821
Query p50: 90.2 ms

NanoFiQA2018Retrieval

finance retrieval en

Smaller subset of the FiQA financial QA dataset

Quality:
nDCG@10: 0.6524
MAP@10: 0.5848
MRR@10: 0.7032
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 11.3K
Corpus p50: 251.5 ms
Query TPS: 1.9K
Query p50: 88.7 ms

SCIDOCS

scientific retrieval en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 Queries: 1,000
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 12.4K
Corpus p50: 261.1 ms
Query TPS: 2.5K
Query p50: 66.4 ms

SciFact

scientific retrieval en

Scientific claim verification using research literature

Corpus: 5,183 Queries: 300
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 12.6K
Corpus p50: 370.4 ms
Query TPS: 3.1K
Query p50: 74.9 ms

StackOverflowQA

technology retrieval en

Programming question answering from Stack Overflow

Corpus: 19,931 Queries: 1,994
Performance (NVIDIA L4, batch 1, concurrency 16):
Corpus TPS: 12.4K
Corpus p50: 299.2 ms
Query TPS: 11.4K
Query p50: 421.4 ms
