tbaker2 committed
Commit 847e440
Parent(s): 5a5ff64

Update README.md

Files changed (1): README.md (+75 -3)

---
license: mit
---
# intel-optimized-model-for-embeddings-int8-v1

This is a text embedding model: it maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks such as clustering or semantic search. For sample code that serves this model in a TorchServe container, see [Intel-Optimized-Container-for-Embeddings](https://github.com/intel/Intel-Optimized-Container-for-Embeddings). The model was quantized using static quantization from the [Intel Neural Compressor](https://github.com/intel/neural-compressor) library.
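
The quantization recipe itself is not part of this card. As a rough, hypothetical sketch only (not the recipe used for this model), post-training static quantization with the Intel Neural Compressor 2.x API looks like the following; the tiny model and random calibration data are placeholders, and the snippet additionally requires `pip install neural-compressor`:

```
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# Toy stand-ins for the real FP32 model and a calibration set of
# representative inputs (static quantization needs calibration data).
fp32_model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
calib_set = TensorDataset(torch.randn(32, 512), torch.zeros(32))
calib_loader = DataLoader(calib_set, batch_size=8)

# Calibrate activation ranges on the dataloader, then convert to INT8.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(fp32_model, conf, calib_dataloader=calib_loader)
q_model.save("./int8-model")
```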

## Usage

Install the required packages:
```
pip install -U torch==2.3.1+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -U transformers==4.42.4 intel-extension-for-pytorch==2.3.100
```

Use the example below to load the model with the transformers library, tokenize the text, run the model, and apply mean pooling to the output:

```
import torch
import intel_extension_for_pytorch as ipex
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer


def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, masking out padding positions.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask


# Load the tokenizer and download the TorchScript INT8 checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained('Intel/intel-optimized-model-for-embeddings-int8-v1')
model_file_path = hf_hub_download(repo_id='Intel/intel-optimized-model-for-embeddings-int8-v1',
                                  filename='pytorch_model.bin')
model = torch.jit.load(model_file_path)
model = ipex.optimize(model, level="O1", auto_kernel_selection=True,
                      conv_bn_folding=False, dtype=torch.int8)
model = torch.jit.freeze(model.eval())

text = ["This is a test."]

# The checkpoint is already statically quantized, so plain no-grad
# inference is all that is needed.
with torch.no_grad():
    tokenized_text = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
    model_output = model(**tokenized_text)
    sentence_embeddings = mean_pooling((model_output["last_hidden_state"],),
                                       tokenized_text['attention_mask'])
    embeddings = sentence_embeddings[0].tolist()

# Embeddings output
print(embeddings)
```
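
Once you have embeddings, semantic search reduces to a nearest-neighbor lookup over similarity scores. Here is a minimal sketch reusing `model`, `tokenizer`, and `mean_pooling` from the snippet above (the corpus and query are made up):

```
import torch
import torch.nn.functional as F


def embed(sentences):
    # Tokenize, run the INT8 model, and mean-pool to one vector per sentence.
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
        output = model(**batch)
        return mean_pooling((output["last_hidden_state"],), batch['attention_mask'])


corpus = ["The cat sits on the mat.", "Intel optimizes deep learning on CPUs."]
scores = F.cosine_similarity(embed(["How is deep learning sped up on CPUs?"]),
                             embed(corpus))
print(corpus[int(scores.argmax())])  # prints the best-matching corpus sentence
```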

## Model Details

### Model Description

This model was fine-tuned using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library,
based on the [BERT-Medium_L-8_H-512_A-8](https://huggingface.co/nreimers/BERT-Medium_L-8_H-512_A-8) model,
using [UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) as a teacher.
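
The training script is not published on this card. Conceptually, this kind of distillation trains the student to reproduce the teacher's sentence embeddings. Since the student produces 512-dimensional vectors while UAE-Large-V1 produces 1024-dimensional ones, some mapping between the two spaces is required; the learned projection below is an assumption for illustration, not Intel's actual setup:

```
import torch
import torch.nn.functional as F

# Hypothetical sketch of an embedding-distillation objective.
student_dim, teacher_dim = 512, 1024
projection = torch.nn.Linear(student_dim, teacher_dim)  # assumed bridge layer

def distillation_loss(student_emb, teacher_emb):
    # MSE between projected student embeddings and frozen teacher embeddings.
    return F.mse_loss(projection(student_emb), teacher_emb)

# One step with random stand-in embeddings:
loss = distillation_loss(torch.randn(8, student_dim), torch.randn(8, teacher_dim))
loss.backward()
```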

### Training Datasets

| Dataset | Description | License |
| --- | --- | --- |
| beir/dbpedia-entity | DBpedia-Entity is a standard test collection for entity search over the DBpedia knowledge base. | CC BY-SA 3.0 |
| beir/nq | The Natural Questions (NQ) corpus was created to help spur development in open-domain question answering, along with a challenge website based on this data. | CC BY-SA 3.0 |
| beir/scidocs | SciDocs is an evaluation benchmark consisting of seven document-level tasks ranging from citation prediction to document classification and recommendation. | CC BY-SA 4.0 |
| beir/trec-covid | TREC-COVID followed the TREC model for building IR test collections through community evaluations of search systems. | CC BY-SA 4.0 |
| beir/touche2020 | Given a question on a controversial topic, retrieve relevant arguments from a focused crawl of online debate portals. | CC BY 4.0 |
| WikiAnswers | The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. | MIT |
| Cohere/wikipedia-22-12-en-embeddings | A processed version of the wikipedia-22-12 dataset: English only, with the articles broken up into paragraphs. | Apache 2.0 |
| MNLI (GLUE) | GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems. | MIT |