---
license: cc-by-nc-sa-4.0
widget:
- text: ACCTGATTCTGAGTC
tags:
- DNA
- biology
- genomics
- segmentation
---

# segment-nt-30kb-multi-species

Segment-NT-30kb-multi-species is a segmentation model leveraging the [Nucleotide Transformer](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-500m-multi-species) (NT) DNA foundation model to predict the location of several types of genomic elements in a sequence at single-nucleotide resolution. It is the result of finetuning the [Segment-NT-30kb](https://huggingface.co/InstaDeepAI/segment_nt_30kb) model on a dataset encompassing not only the human genome but also the genomes of 5 selected species: mouse, chicken, fly, zebrafish and worm.

For the finetuning on the multi-species genomes, we curated a dataset from a subset of the annotations used to train **Segment-NT-30kb**, mainly because only this subset of annotations is available for these species. The annotations therefore concern the 7 main gene elements available from Ensembl [REF], namely protein-coding gene, 5’UTR, 3’UTR, intron, exon, splice acceptor and splice donor sites.

**Developed by:** [InstaDeep](https://huggingface.co/InstaDeepAI)

### Model Sources

- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [Segmenting the genome at single-nucleotide resolution with DNA foundation models]() TODO: Add link to preprint

### How to use

Until its next release, the `transformers` library needs to be installed from source with the following command in order to use the models:

```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```

A small snippet of code is given here in order to retrieve both logits and embeddings from a dummy DNA sequence.

```python
# Load model and tokenizer
from transformers import AutoTokenizer, AutoModel
import torch

features = [
    "protein_coding_gene",
    "lncRNA",
    "exon",
    "intron",
    "splice_donor",
    "splice_acceptor",
    "5UTR",
    "3UTR",
    "CTCF-bound",
    "polyA_signal",
    "enhancer_Tissue_specific",
    "enhancer_Tissue_invariant",
    "promoter_Tissue_specific",
    "promoter_Tissue_invariant",
]

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/segment_nt_30kb_multi_species", trust_remote_code=True)
model = AutoModel.from_pretrained("InstaDeepAI/segment_nt_30kb_multi_species", trust_remote_code=True)

# Choose the length to which the input sequences are padded. By default, the
# model max length is chosen, but feel free to decrease it as the time taken to
# obtain the embeddings increases significantly with it.
# The number of DNA tokens (excluding the CLS token prepended) needs to be divisible by
# 2 to the power of the number of downsampling blocks, i.e. 4.
max_length = 12 + 1

assert (max_length - 1) % 4 == 0, (
    "The number of DNA tokens (excluding the CLS token prepended) needs to be divisible by"
    " 2 to the power of the number of downsampling blocks, i.e. 4."
)

# Create a dummy DNA sequence and tokenize it
sequences = ["ATTCCGATTCCGATTCCG", "ATTTCTCTCTCTCTCTGAGATCGATCGATCGAT"]
tokens = tokenizer.batch_encode_plus(sequences, return_tensors="pt", padding="max_length", max_length=max_length)["input_ids"]

# Infer
attention_mask = tokens != tokenizer.pad_token_id
outs = model(
    tokens,
    attention_mask=attention_mask,
    output_hidden_states=True
)

# Obtain the logits over the genomic features
logits = outs.logits.detach()
# Transform them into probabilities
probabilities = torch.nn.functional.softmax(logits, dim=-1)
print(f"Probabilities shape: {probabilities.shape}")

# Get probabilities associated with intron
idx_intron = features.index("intron")
probabilities_intron = probabilities[:, :, idx_intron]
print(f"Intron probabilities shape: {probabilities_intron.shape}")
```
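
As an illustration of one possible downstream post-processing step (not part of the snippet above), the per-position probabilities can be thresholded into binary calls and grouped into contiguous segments. The sketch below continues from the snippet above and assumes that the second axis of `probabilities_intron` indexes positions along the sequence; the 0.5 cutoff is an arbitrary illustrative choice.

```python
# Post-processing sketch (assumption: axis 1 of `probabilities_intron` runs over
# sequence positions). Threshold the intron probabilities for the first sequence
# and collect contiguous runs of positive calls as predicted intron segments.
threshold = 0.5  # arbitrary cutoff for illustration
intron_calls = probabilities_intron[0] > threshold  # boolean tensor over positions

segments = []
start = None
for pos, is_intron in enumerate(intron_calls.tolist()):
    if is_intron and start is None:
        start = pos                    # a predicted segment opens here
    elif not is_intron and start is not None:
        segments.append((start, pos))  # half-open interval [start, pos)
        start = None
if start is not None:                  # segment runs to the end of the sequence
    segments.append((start, len(intron_calls)))

print(f"Predicted intron segments (position indices): {segments}")
```

Depending on the downstream use, these position indices can then be mapped back to the coordinates of the input window.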
## Training data

The **segment-nt-30kb-multi-species** model was finetuned on the human, mouse, chicken, fly, zebrafish and worm genomes. For each species, a subset of chromosomes is kept as validation for training monitoring and as test for the final evaluation.

## Training procedure

### Preprocessing

The DNA sequences are tokenized using the Nucleotide Transformer Tokenizer, which tokenizes sequences as 6-mer tokens as described in the [Tokenization](https://github.com/instadeepai/nucleotide-transformer#tokenization-abc) section of the associated repository. This tokenizer has a vocabulary size of 4105. The inputs of the model are then of the form:

```
<CLS> <ATTCCG> <ATTCCG> <ATTCCG>
```

### Training

The model was finetuned on a DGXH100 node with 8 GPUs on a total of 8B tokens for 3 days.

### Architecture

The model is composed of the [nucleotide-transformer-v2-500m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-500m-multi-species) encoder, from which we removed the language model head and replaced it by a 1-dimensional U-Net segmentation head [4] made of 2 downsampling convolutional blocks and 2 upsampling convolutional blocks. Each of these blocks is made of 2 convolutional layers with 1,024 and 2,048 kernels respectively. This additional segmentation head accounts for 53 million parameters, bringing the total number of parameters to 562M.

### BibTeX entry and citation info

#TODO: Add bibtex citation here

```bibtex

```