---
language:
  - af
  - ar
  - az
  - bn
  - cs
  - de
  - en
  - es
  - et
  - fa
  - fi
  - fr
  - ga
  - gl
  - gu
  - he
  - hi
  - hr
  - id
  - it
  - ja
  - ka
  - kk
  - km
  - ko
  - lt
  - lv
  - mk
  - ml
  - mn
  - mr
  - my
  - ne
  - nl
  - pl
  - ps
  - pt
  - ro
  - ru
  - si
  - sl
  - sv
  - ta
  - th
  - tr
  - uk
  - ur
  - vi
  - xh
  - zh
pretty_name: MegaWika Citation Source Text
configs:
  - config_name: af
    data_files:
      - split: train
        path: af_0.jsonl
  - config_name: ar
    data_files:
      - split: train
        path: ar_0.jsonl
  - config_name: az
    data_files:
      - split: train
        path: az_0.jsonl
  - config_name: bn
    data_files:
      - split: train
        path: bn_0.jsonl
  - config_name: cs
    data_files:
      - split: train
        path: cs_0.jsonl
  - config_name: de
    data_files:
      - split: train
        path: de_0.jsonl
  - config_name: en
    data_files:
      - split: train
        path: en*.jsonl
  - config_name: es
    data_files:
      - split: train
        path: es_0.jsonl
  - config_name: et
    data_files:
      - split: train
        path: et_0.jsonl
  - config_name: fa
    data_files:
      - split: train
        path: fa_0.jsonl
  - config_name: fi
    data_files:
      - split: train
        path: fi_0.jsonl
  - config_name: fr
    data_files:
      - split: train
        path: fr*.jsonl
  - config_name: ga
    data_files:
      - split: train
        path: ga_0.jsonl
  - config_name: gl
    data_files:
      - split: train
        path: gl_0.jsonl
  - config_name: gu
    data_files:
      - split: train
        path: gu_0.jsonl
  - config_name: he
    data_files:
      - split: train
        path: he_0.jsonl
  - config_name: hi
    data_files:
      - split: train
        path: hi_0.jsonl
  - config_name: hr
    data_files:
      - split: train
        path: hr_0.jsonl
  - config_name: id
    data_files:
      - split: train
        path: id_0.jsonl
  - config_name: it
    data_files:
      - split: train
        path: it_0.jsonl
  - config_name: ja
    data_files:
      - split: train
        path: ja_0.jsonl
  - config_name: ka
    data_files:
      - split: train
        path: ka_0.jsonl
  - config_name: kk
    data_files:
      - split: train
        path: kk_0.jsonl
  - config_name: km
    data_files:
      - split: train
        path: km_0.jsonl
  - config_name: ko
    data_files:
      - split: train
        path: ko_0.jsonl
  - config_name: lt
    data_files:
      - split: train
        path: lt_0.jsonl
  - config_name: lv
    data_files:
      - split: train
        path: lv_0.jsonl
  - config_name: mk
    data_files:
      - split: train
        path: mk_0.jsonl
  - config_name: ml
    data_files:
      - split: train
        path: ml_0.jsonl
  - config_name: mn
    data_files:
      - split: train
        path: mn_0.jsonl
  - config_name: mr
    data_files:
      - split: train
        path: mr_0.jsonl
  - config_name: my
    data_files:
      - split: train
        path: my_0.jsonl
  - config_name: ne
    data_files:
      - split: train
        path: ne_0.jsonl
  - config_name: nl
    data_files:
      - split: train
        path: nl_0.jsonl
  - config_name: pl
    data_files:
      - split: train
        path: pl_0.jsonl
  - config_name: ps
    data_files:
      - split: train
        path: ps_0.jsonl
  - config_name: pt
    data_files:
      - split: train
        path: pt_0.jsonl
  - config_name: ro
    data_files:
      - split: train
        path: ro_0.jsonl
  - config_name: ru
    data_files:
      - split: train
        path: ru*.jsonl
  - config_name: si
    data_files:
      - split: train
        path: si_0.jsonl
  - config_name: sl
    data_files:
      - split: train
        path: sl_0.jsonl
  - config_name: sv
    data_files:
      - split: train
        path: sv_0.jsonl
  - config_name: ta
    data_files:
      - split: train
        path: ta_0.jsonl
  - config_name: th
    data_files:
      - split: train
        path: th_0.jsonl
  - config_name: tr
    data_files:
      - split: train
        path: tr_0.jsonl
  - config_name: uk
    data_files:
      - split: train
        path: uk_0.jsonl
  - config_name: ur
    data_files:
      - split: train
        path: ur_0.jsonl
  - config_name: vi
    data_files:
      - split: train
        path: vi_0.jsonl
  - config_name: xh
    data_files:
      - split: train
        path: xh_0.jsonl
  - config_name: zh
    data_files:
      - split: train
        path: zh_0.jsonl
---

# MegaWika Citation Source Text

## Dataset Access

You can access the dataset with the Hugging Face `datasets` library. Because the dataset is organized into one configuration per language, pass the language code you want:

```python
from datasets import load_dataset

# Load a single language configuration, e.g. English
dataset = load_dataset("hltcoe/megawika-citation-text-only", "en")
```

## Dataset Information

MegaWika Citation Source Text is a large-scale, multilingual collection of Wikipedia citations paired with the full text of the sources they cite.

### Key Features

- Covers 50 languages
- Contains the full text of the sources cited by Wikipedia articles
- Mostly text-only (little HTML or other markup)

## Data Format

Each entry in the dataset is a JSON object with the following structure:

```json
{
  "title": "Title of the Wikipedia article the citation came from",
  "last_revision": "The last revision of the Wikipedia page (not the citation)",
  "url": "URL of the citation",
  "source_text": "The citation source text"
}
```
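Each data file is JSON Lines (one JSON object per line), so records can also be parsed directly with the standard library. The record below is an illustrative fabrication following the structure above, not actual dataset content:

```python
import json

# An illustrative record following the schema above
# (all field values here are made up, not real dataset content)
line = (
    '{"title": "Example article", '
    '"last_revision": "2023-01-01T00:00:00Z", '
    '"url": "https://example.com/source", '
    '"source_text": "Text of the cited source..."}'
)

record = json.loads(line)
print(record["title"], "->", record["url"])
```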

## Language Files

The dataset is split into multiple files, one for each language. For languages with a large amount of data, the files are further split into parts. Here's a sample of the available files:

- af_0.jsonl
- ar_0.jsonl
- az_0.jsonl
- bn_0.jsonl
- cs_0.jsonl
- de_0.jsonl
- en.split.1.jsonl, en.split.2.jsonl, ..., en.split.8.jsonl
- es_0.jsonl
- fr.split.1.jsonl, fr.split.2.jsonl, fr.split.3.jsonl, fr.split.4.jsonl
- ...

For a complete list of available files, please refer to the dataset's file structure on Hugging Face.
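The split files for a language are selected by the globs in the YAML configs above (e.g. `en*.jsonl`). If you download files yourself, the same matching can be reproduced with the standard library's `fnmatch`; the file names below are a hypothetical listing for illustration:

```python
import fnmatch

# Hypothetical local file listing (names assumed for illustration)
files = ["af_0.jsonl", "en_0.jsonl", "en_1.jsonl", "fr_0.jsonl"]

# Select every English part, mirroring the en*.jsonl glob in the config
en_parts = [f for f in files if fnmatch.fnmatch(f, "en*.jsonl")]
print(en_parts)
```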

## Usage Examples

Here is an example of how you might use this dataset:

```python
from datasets import load_dataset

# Load the English training split
dataset = load_dataset("hltcoe/megawika-citation-text-only", "en")["train"]

# Collect the source text of every citation
texts = dataset["source_text"]
```