Update files from the datasets library (from 1.4.0). Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

README.md
- **Homepage:** https://sites.google.com/view/sdu-aaai21/shared-task
- **Repository:** https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI
- **Paper:** [What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation](https://arxiv.org/pdf/2010.14678v1.pdf)
- **Leaderboard:** https://competitions.codalab.org/competitions/26609
- **Point of Contact:** [More Information Needed]

### Dataset Summary

This dataset contains the training, validation, and test data for **Shared Task 1: Acronym Identification** of the AAAI-21 Workshop on Scientific Document Understanding.

### Supported Tasks and Leaderboards

The dataset supports an `acronym-identification` task, where the aim is to predict which tokens in a pre-tokenized sentence correspond to acronyms. The dataset was released for a shared task with a [leaderboard](https://competitions.codalab.org/competitions/26609).
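
To make the tagging format concrete, a purely illustrative baseline (our own sketch, not part of the shared task or its tooling) could label a token `B-short` whenever more than half of its characters are capital letters, and never predict long forms:

```python
from typing import List

def naive_tagger(tokens: List[str]) -> List[str]:
    """Tag a token `B-short` when more than half of its characters are
    capital letters (a crude acronym heuristic), otherwise `O`.
    Long forms (`B-long`/`I-long`) are never predicted by this baseline."""
    labels = []
    for tok in tokens:
        caps = sum(c.isupper() for c in tok)
        labels.append("B-short" if len(tok) > 1 and caps > len(tok) / 2 else "O")
    return labels

tokens = ["What", "is", "the", "CNL", "of", "this", "sentence", "?"]
print(naive_tagger(tokens))
# ['O', 'O', 'O', 'B-short', 'O', 'O', 'O', 'O']
```

A real submission would replace this heuristic with a token-classification model, but the output shape (one label per input token) is the same.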

### Languages

The sentences in the dataset are in English (`en`).

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{'id': 'TR-0',
 ...
 '.']}
```

Please note that in the test set only the `id` and `tokens` fields are meaningful; the `labels` in the test set are all `O` and can be ignored.

### Data Fields

The data instances have the following fields:

- `id`: a `string` representing the example id, unique across the full dataset
- `tokens`: a list of `string` values representing the word-tokenized sentence
- `labels`: a list of `categorical` values with possible labels `["B-long", "B-short", "I-long", "I-short", "O"]`, following a BIO scheme. `-long` corresponds to the expanded form of an acronym (e.g. *controlled natural language*) and `-short` to the abbreviation itself (e.g. `CNL`).

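The BIO labels above can be decoded back into acronym and long-form strings. The helper below is a sketch we provide for illustration (its name is ours, not part of the dataset tooling):

```python
from typing import List, Tuple

def extract_spans(tokens: List[str], labels: List[str]) -> List[Tuple[str, str]]:
    """Group consecutive B-/I- labels of the same kind ('long' or 'short')
    into (kind, text) spans; 'O' tokens end the current span."""
    spans = []
    current_kind, current_tokens = None, []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current_kind is not None:
                spans.append((current_kind, " ".join(current_tokens)))
            current_kind, current_tokens = lab[2:], [tok]
        elif lab.startswith("I-") and lab[2:] == current_kind:
            current_tokens.append(tok)
        else:
            if current_kind is not None:
                spans.append((current_kind, " ".join(current_tokens)))
            current_kind, current_tokens = None, []
    if current_kind is not None:
        spans.append((current_kind, " ".join(current_tokens)))
    return spans

tokens = ["controlled", "natural", "language", "(", "CNL", ")"]
labels = ["B-long", "I-long", "I-long", "O", "B-short", "O"]
print(extract_spans(tokens, labels))
# [('long', 'controlled natural language'), ('short', 'CNL')]
```
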
### Data Splits

The training, validation, and test splits contain `14,006`, `1,717`, and `1,750` sentences, respectively.

## Dataset Creation

### Curation Rationale

> First, most of the existing datasets for acronym identification (AI) are either limited in their sizes or created using simple rule-based methods. This is unfortunate as rules are in general not able to capture all the diverse forms to express acronyms and their long forms in text. Second, most of the existing datasets are in the medical domain, ignoring the challenges in other scientific domains.
>
> In order to address these limitations this paper introduces two new datasets for Acronym Identification. Notably, our datasets are annotated by human to achieve high quality and have substantially larger numbers of examples than the existing AI datasets in the non-medical domain.

### Source Data

#### Initial Data Collection and Normalization

> In order to prepare a corpus for acronym annotation, we collect a corpus of 6,786 English papers from arXiv. These papers consist of 2,031,592 sentences that would be used for data annotation for AI in this work.

The dataset paper does not report the exact tokenization method.

#### Who are the source language producers?

The language comes from papers hosted on the online digital archive [arXiv](https://arxiv.org/). No further information is available on the selection process or the identity of the writers.

### Annotations

#### Annotation process

> Each sentence for annotation needs to contain at least one word in which more than half of the characters are capital letters (i.e., acronym candidates). Afterward, we search for a sub-sequence of words in which the concatenation of the first one, two or three characters of the words (in the order of the words in the sub-sequence) could form an acronym candidate. We call the sub-sequence a long form candidate. If we cannot find any long form candidate, we remove the sentence.
>
> Using this process, we end up with 17,506 sentences to be annotated manually by the annotators from Amazon Mechanical Turk (MTurk). In particular, we create a HIT for each sentence and ask the workers to annotate the short forms and the long forms in the sentence. In case of disagreements, if two out of three workers agree on an annotation, we use majority voting to decide the correct annotation. Otherwise, a fourth annotator is hired to resolve the conflict.

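The quoted long-form search can be sketched roughly as follows. This is a simplification we wrote for illustration: it only matches the first character of each word in a contiguous window, whereas the paper also tries the first two or three characters; the function name is ours.

```python
from typing import List, Optional, Tuple

def find_long_form(tokens: List[str], acronym: str) -> Optional[Tuple[int, int]]:
    """Search for a contiguous sub-sequence of words whose initials spell
    the acronym candidate. Returns (start, end) token indices, or None
    when no long form candidate is found."""
    target = acronym.lower()
    n = len(target)
    for start in range(len(tokens) - n + 1):
        initials = "".join(w[0].lower() for w in tokens[start:start + n])
        if initials == target:
            return start, start + n
    return None

tokens = ["controlled", "natural", "language", "(", "CNL", ")"]
print(find_long_form(tokens, "CNL"))  # (0, 3)
print(find_long_form(tokens, "XYZ"))  # None
```

Sentences for which no such candidate window exists were dropped before annotation, per the quoted procedure.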
#### Who are the annotators?

Workers were recruited through Amazon Mechanical Turk and paid $0.05 per annotation. No further demographic information is provided.

### Personal and Sensitive Information

Papers published on arXiv are unlikely to contain much personal information, although some do include poorly chosen examples revealing personal details, so the data should be used with care.

## Considerations for Using the Data
### Licensing Information

The dataset provided for this shared task is licensed under the CC BY-NC-SA 4.0 International license.

### Citation Information

```
@inproceedings{Veyseh2020,
  author    = {Amir Pouran Ben Veyseh and
               Franck Dernoncourt and
               Quan Hung Tran and
               Thien Huu Nguyen},
  editor    = {Donia Scott and
               N{\'{u}}ria Bel and
               Chengqing Zong},
  title     = {What Does This Acronym Mean? Introducing a New Dataset for Acronym
               Identification and Disambiguation},
  booktitle = {Proceedings of the 28th International Conference on Computational
               Linguistics, {COLING} 2020, Barcelona, Spain (Online), December 8-13,
               2020},
  pages     = {3285--3301},
  publisher = {International Committee on Computational Linguistics},
  year      = {2020},
  url       = {https://doi.org/10.18653/v1/2020.coling-main.292},
  doi       = {10.18653/v1/2020.coling-main.292}
}
```

### Contributions