hdallatorre committed
Commit 643ff8c
Parent: 10c8554

Update README.md

Files changed (1): README.md (+2, −1)
README.md CHANGED
````diff
@@ -34,12 +34,13 @@ pip install --upgrade git+https://github.com/huggingface/transformers.git
 ```
 
 A small snippet of code is given here in order to retrieve both logits and embeddings from a dummy DNA sequence.
+
 ⚠️ The maximum sequence length is set by default at the training length of 30,000 nucleotides, or 5001 tokens (accounting for the CLS token). However,
 Segment-NT-multi-species has been shown to generalize up to sequences of 50,000 bp. In case you need to infer on sequences between 30kbp and 50kbp, make sure to change
 the `rescaling_factor` of the Rotary Embedding layer in the esm model to `num_dna_tokens_inference / max_num_tokens_nt`, where `num_dna_tokens_inference` is the number of tokens at inference
 (i.e. 6669 for a sequence of 40008 base pairs) and `max_num_tokens_nt` is the max number of tokens on which the backbone nucleotide-transformer was trained, i.e. `2048`.
 
-The `./inference_segment_nt.ipynb` has been set up to run in Google Colab and shows how to set the rescaling factor and infer on a 50kb genic sequence of the human chromosome 20.
+🧬 The `./inference_segment_nt.ipynb` has been set up to run in Google Colab and shows how to set the rescaling factor and infer on a 50kb genic sequence of the human chromosome 20.
 
 ```python
 # Load model and tokenizer
````
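
For reference, the rescaling rule described in the diff boils down to the short sketch below. It is a sketch only, not the notebook itself: the repository name `InstaDeepAI/segment_nt_multi_species` is assumed from context, and it assumes the model's custom code exposes `rescaling_factor` as a config attribute that `from_pretrained` accepts as an override; check `./inference_segment_nt.ipynb` for the exact API.

```python
# Minimal sketch: compute and apply the Rotary Embedding rescaling factor
# when inferring on sequences between 30kbp and 50kbp.
from transformers import AutoModel, AutoTokenizer

# Assumption: the repo this README belongs to.
model_name = "InstaDeepAI/segment_nt_multi_species"

max_num_tokens_nt = 2048         # max tokens the nucleotide-transformer backbone was trained on
num_dna_tokens_inference = 6669  # e.g. a 40,008 bp sequence (6-mer tokens plus the CLS token)

# Per the README: rescaling_factor = num_dna_tokens_inference / max_num_tokens_nt
rescaling_factor = num_dna_tokens_inference / max_num_tokens_nt  # ~3.26

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Assumption: `rescaling_factor` is a config attribute of the custom model code,
# so passing it here overrides the default value used by the Rotary Embedding layer.
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    rescaling_factor=rescaling_factor,
)
```

For sequences at or below the 30,000 bp training length (5001 tokens), no override should be needed, since the default rescaling factor already matches the training setup.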