---
license: mit
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 129624
    num_examples: 10000
  - name: validation_top1
    num_bytes: 10754
    num_examples: 1000
  - name: test_top1
    num_bytes: 10948
    num_examples: 1000
  - name: validation_1_10
    num_bytes: 11618
    num_examples: 1000
  - name: test_1_10
    num_bytes: 11692
    num_examples: 1000
  - name: validation_10_20
    num_bytes: 13401
    num_examples: 1000
  - name: test_10_20
    num_bytes: 13450
    num_examples: 1000
  - name: validation_20_30
    num_bytes: 15112
    num_examples: 1000
  - name: test_20_30
    num_bytes: 15069
    num_examples: 1000
  - name: validation_bottom50
    num_bytes: 15204
    num_examples: 1000
  - name: test_bottom50
    num_bytes: 15076
    num_examples: 1000
  download_size: 241234
  dataset_size: 261948
language:
- en
viewer: true
task_categories:
- text-generation
size_categories:
- 1K<n<10K
---

# WikiSpell

## Description
This dataset is a **custom implementation** of the WikiSpell dataset introduced in [Character-Aware Models Improve Visual Text Rendering](https://arxiv.org/pdf/2212.10562.pdf) by Liu et al. (2022).

As in the original WikiSpell dataset, the training set is composed of 5,000 words sampled uniformly from the 50% least common Wiktionary words (taken from [this Wiktionary extraction](https://kaikki.org/dictionary/rawdata.html)) and 5,000 words sampled in proportion to their frequency from the 50% most common Wiktionary words.
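The two sampling regimes above (uniform over the rare half, frequency-weighted over the common half) can be sketched as follows; the word counts are hypothetical stand-ins, and the real pipeline would use Wiktionary-derived frequencies:

```python
import random

# Hypothetical word-frequency counts; the actual dataset derives these
# from the corpus described below.
freq = {"the": 5000, "cat": 120, "sat": 80, "zyzzyva": 1}

words = list(freq)
counts = [freq[w] for w in words]

# Rare half: uniform sampling -- every word is equally likely.
rare_sample = random.choices(words, k=5)

# Common half: frequency-weighted sampling -- common words appear more often.
common_sample = random.choices(words, weights=counts, k=5)
```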

The validation and test sets are each split into 5 subsets, sampled according to word frequency in the corpus:
- 1% most common words
- 1 - 10% most common words
- 10 - 20% most common words
- 20 - 30% most common words
- 50% least common words

Unlike the original WikiSpell dataset, word frequencies are computed over the first 100k sentences of OpenWebText ([Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext)) rather than mC4.
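The five frequency buckets above correspond to the split names declared in the metadata; a small helper like the following (not part of the dataset itself) makes the mapping explicit:

```python
# Frequency buckets and their (validation, test) split names in this dataset.
BUCKETS = {
    "1% most common": ("validation_top1", "test_top1"),
    "1-10% most common": ("validation_1_10", "test_1_10"),
    "10-20% most common": ("validation_10_20", "test_10_20"),
    "20-30% most common": ("validation_20_30", "test_20_30"),
    "50% least common": ("validation_bottom50", "test_bottom50"),
}

def splits_for(bucket):
    """Return the (validation, test) split names for a frequency bucket."""
    return BUCKETS[bucket]
```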


## Usage
This dataset is used for testing spelling in Large Language Models. To do so, the labels should be computed as in the following snippet:

```python
from datasets import load_dataset

ds = load_dataset("path/to/wikispell")  # replace with this dataset's repository id
sample = ds["train"][0]
label = " ".join(sample["text"])  # e.g. "cat" -> "c a t"
```

**The labels are not included in the dataset files directly.**
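A minimal sketch of scoring model outputs against labels built this way; the example predictions are hypothetical and would come from the model under test:

```python
def make_label(word):
    """Space-separate the characters of a word: "cat" -> "c a t"."""
    return " ".join(word)

def spelling_accuracy(words, predictions):
    """Exact-match accuracy between predictions and character-spaced labels."""
    correct = sum(p == make_label(w) for w, p in zip(words, predictions))
    return correct / len(words)

words = ["cat", "dog"]
predictions = ["c a t", "d o g g"]  # the second prediction is wrong
print(spelling_accuracy(words, predictions))  # 0.5
```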

## Citation

Please cite the original paper introducing WikiSpell if you use this dataset:

```
@inproceedings{liu-etal-2023-character,
    title = "Character-Aware Models Improve Visual Text Rendering",
    author = "Liu, Rosanne  and
      Garrette, Dan  and
      Saharia, Chitwan  and
      Chan, William  and
      Roberts, Adam  and
      Narang, Sharan  and
      Blok, Irina  and
      Mical, Rj  and
      Norouzi, Mohammad  and
      Constant, Noah",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.900",
    pages = "16270--16297",
}
```