RaphaelMourad committed
Commit e7c188c
1 Parent(s): f3b25ca

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -6,9 +6,9 @@ tags:
  - peptide
  ---
 
- # Model Card for Mistral-Peptide-v1-16M (Mistral for peptide)
+ # Model Card for Mistral-Peptide-v1-15M (Mistral for peptide)
 
- The Mistral-Peptide-v1-16M Large Language Model (LLM) is a pretrained generative peptide molecule model with 15.2M parameters.
+ The Mistral-Peptide-v1-15M Large Language Model (LLM) is a pretrained generative peptide molecule model with 15.2M parameters.
  It is derived from the Mixtral-8x7B-v0.1 model, which was simplified for proteins: the number of layers and the hidden size were reduced.
  The model was pretrained on 863,499 peptide strings.

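The paragraph above says the Mixtral architecture was shrunk (fewer layers, a smaller hidden size). As a hedged aside, not part of the commit itself: one way to check the reduced dimensions is to read the checkpoint's config without downloading weights; `num_hidden_layers` and `hidden_size` are the standard Mixtral config fields and are assumed to carry over here.

```python
from transformers import AutoConfig

# Reads config.json from the Hub; no model weights are downloaded.
config = AutoConfig.from_pretrained(
    "RaphaelMourad/Mistral-Peptide-v1-15M", trust_remote_code=True
)
print(config.num_hidden_layers, config.hidden_size)  # expected to be small next to Mixtral-8x7B-v0.1
```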
@@ -26,8 +26,8 @@ Like Mixtral-8x7B-v0.1, it is a transformer model, with the following architecture choices:
  import torch
  from transformers import AutoTokenizer, AutoModel
 
- tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-16M", trust_remote_code=True)
- model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-16M", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-15M", trust_remote_code=True)
+ model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-15M", trust_remote_code=True)
  ```
 
  ## Calculate the embedding of a protein sequence
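The hunk above only swaps the checkpoint name in the loading snippet; the "Calculate the embedding of a protein sequence" section it leads into sits outside this diff. Below is a minimal sketch of what that usage might look like with the renamed repo id; the peptide string is made up, and mean-pooling the first output tensor is an assumption, not necessarily what the model card prescribes.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-15M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Peptide-v1-15M", trust_remote_code=True)

seq = "MKTAYIAKQR"  # hypothetical peptide sequence
input_ids = tokenizer(seq, return_tensors="pt")["input_ids"]

with torch.no_grad():
    hidden_states = model(input_ids)[0]  # assumed shape: (1, seq_len, hidden_size)

embedding = hidden_states.mean(dim=1)  # mean-pool over tokens -> (1, hidden_size)
print(embedding.shape)
```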
@@ -48,7 +48,7 @@ Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
 
  ## Notice
 
- Mistral-Peptide-v1-16M is a pretrained base model for peptide.
+ Mistral-Peptide-v1-15M is a pretrained base model for peptides.
 
  ## Contact