etrop committed
Commit 2a1736a
Parent: 1e9588a

Fix grammar typos

Files changed (1):
  app.py +4 -4
app.py CHANGED

@@ -47,9 +47,9 @@ _LAST_UPDATED = "Sept 15, 2023"
 banner_url = "./assets/logo.png"
 _BANNER = f'<div style="display: flex; justify-content: space-around;"><img src="{banner_url}" alt="Banner" style="width: 40vw; min-width: 300px; max-width: 600px;"> </div>' # noqa
 
-_INTRODUCTION_TEXT = """The 🤗 Nucleotide Transformer Leaderboard aims to track, rank and evaluate DNA foundational models on a set of curated downstream tasks introduced in the huggingface dataset [nucleotide_transformer_downstream_tasks](https://huggingface.co/datasets/InstaDeepAI/nucleotide_transformer_downstream_tasks), with a standardized evaluation protocole presented in the "ℹ️ Methods" tab.\n\n
+_INTRODUCTION_TEXT = """The 🤗 Nucleotide Transformer Leaderboard aims to track, rank and evaluate DNA foundational models on a set of curated downstream tasks introduced in the huggingface dataset [nucleotide_transformer_downstream_tasks](https://huggingface.co/datasets/InstaDeepAI/nucleotide_transformer_downstream_tasks), with a standardized evaluation protocol presented in the "ℹ️ Methods" tab.\n\n
 
-This leaderboard has been designed to provide, to the best of our ability, fair and robust comparisons between models. If you have any question or concern regarding our methodology or if you would like another model to appear in that leaderboard, please reach out to [email protected] and [email protected]. While we may not be able to take into consideration all requests, the team will always do its best to ensure that benchmark stays as fair, relevant and up-to-date as possible.\n\n
+This leaderboard has been designed to provide, to the best of our ability, fair and robust comparisons between models. If you have any question or concern regarding our methodology or if you would like another model to appear in this leaderboard, please reach out to [email protected] and [email protected]. While we may not be able to take into consideration all requests, the team will always do its best to ensure that benchmark stays as fair, relevant and up-to-date as possible.\n\n
 """ # noqa
 
 _METHODS_TEXT = """
@@ -59,10 +59,10 @@ This leaderboard uses the downstream tasks benchmark and evaluation methdology d
 PLease keep in mind that the Enformer has been originally trained in a supervised fashion to solve gene expression tasks. For the sake of benchmarking, we re-used the provided model torso as a pre-trained model for our benchmark, which is not the intended and recommended use of the original paper. Though we think this comparison is interesting to hilight the differences between self-supervised and supervised learning for pre-training and observe that the Enformer is a very competitive baseline even for tasks that differ from gene expression.
 \n\n
 
-For the sake of clarity the tasks being shown by default in this eladerboard are the human related tasks while the original Nucleotide Transformer paper show performance over both yeast and human related tasks. To obtain the same results as the one shown in the paper, please check all the tasks boxes above.
+For the sake of clarity the tasks being shown by default in this leaderboard are the human related tasks while the original Nucleotide Transformer paper shows performance over both yeast and human related tasks. To obtain the same results as the one shown in the paper, please check all the tasks boxes above.
 \n\n
 
-Note also that the performance shown for some methods in that table may differ slightly from the one reported in the HyenaDNA and DNABert papers. This might come from the usage of different train and test splits as well as from our ten-fold systamtic evaluation.
+Note also that the performance shown for some methods in this table may differ slightly from the one reported in the HyenaDNA and DNABert papers. This might come from the usage of different train and test splits as well as from our ten-fold systamtic evaluation.
 \n\n
 """ # noqa
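
For context beyond the typo fixes: app.py is the entry point of a Hugging Face Space, so these module-level strings are presumably rendered through Gradio. Below is a minimal sketch of that wiring, assuming a `gr.Blocks` layout; the tab names and overall structure are illustrative assumptions, not code taken from this repository.

```python
# Illustrative sketch only -- assumes app.py is a Gradio Space app.
# This is NOT the repository's actual layout, just one plausible way
# the constants edited in this commit end up on the page.
import gradio as gr

banner_url = "./assets/logo.png"
_BANNER = f'<img src="{banner_url}" alt="Banner">'  # simplified stand-in for the real banner HTML
_INTRODUCTION_TEXT = "..."  # the introduction string from the diff above
_METHODS_TEXT = "..."       # the methods string from the diff above

with gr.Blocks() as demo:
    gr.HTML(_BANNER)                        # banner image at the top of the page
    with gr.Tabs():
        with gr.TabItem("🏆 Leaderboard"):  # hypothetical tab name
            gr.Markdown(_INTRODUCTION_TEXT)
        with gr.TabItem("ℹ️ Methods"):      # the tab the introduction text points to
            gr.Markdown(_METHODS_TEXT)

if __name__ == "__main__":
    demo.launch()
```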