Dataset Card for CA-IT Parallel Corpus
Dataset Summary
The CA-IT Parallel Corpus is a Catalan-Italian dataset of parallel sentences created to support Catalan in NLP tasks, specifically Machine Translation.
Supported Tasks and Leaderboards
The dataset can be used to train Bilingual Machine Translation models between Italian and Catalan in either direction, as well as Multilingual Machine Translation models.
Languages
The sentences included in the dataset are in Catalan (CA) and Italian (IT).
Dataset Structure
Data Instances
A single tsv file, ca-it_corpus.tsv, is provided, with the sentences in the two columns aligned row by row and a header row containing the two-letter ISO language code for the language in each column.
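A file in this layout can be read with the standard library's csv module. This is a minimal sketch: the file name and the ca/it header come from the card above, while the sample sentence pairs are invented for illustration.

```python
import csv
import io

# Hypothetical excerpt of ca-it_corpus.tsv: a header row with the
# two-letter ISO language codes, then tab-separated sentence pairs.
sample_tsv = "ca\tit\nBon dia.\tBuongiorno.\nGràcies.\tGrazie.\n"

# DictReader uses the header row as keys, giving {"ca": ..., "it": ...} dicts.
reader = csv.DictReader(io.StringIO(sample_tsv), delimiter="\t")
pairs = list(reader)

print(pairs[0]["ca"])  # Bon dia.
print(pairs[0]["it"])  # Buongiorno.
```

For the real file, replace the io.StringIO wrapper with open("ca-it_corpus.tsv", encoding="utf-8").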
Data Fields
[N/A]
Data Splits
The dataset contains a single split: train.
Dataset Creation
Curation Rationale
This dataset is aimed at promoting the development of Machine Translation between Catalan and other languages, specifically Italian.
Source Data
Initial Data Collection and Normalization
The dataset is a combination of the following original datasets collected from Opus: CCMatrix, MultiCCAligned, WikiMatrix, GNOME, KDE4, OpenSubtitles, GlobalVoices.
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75, computed on sentence embeddings obtained with LaBSE. The filtered datasets are then concatenated to form the final corpus.
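The deduplication and similarity filtering described above can be sketched as follows. This is a toy illustration, not the curators' actual pipeline: the sentences and the two-dimensional lookup-table "embeddings" are invented stand-ins for the real LaBSE model, and the helper names are hypothetical.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_pairs(pairs, embed, threshold=0.75):
    # Drop exact duplicate pairs, then keep only pairs whose
    # embeddings reach the cosine-similarity threshold.
    seen, kept = set(), []
    for ca, it in pairs:
        if (ca, it) in seen:
            continue
        seen.add((ca, it))
        if cosine(embed(ca), embed(it)) >= threshold:
            kept.append((ca, it))
    return kept

# Toy stand-in for LaBSE: a lookup table of 2-d vectors.
toy_vecs = {
    "Bon dia.": np.array([1.0, 0.0]),
    "Buongiorno.": np.array([0.9, 0.1]),   # close to its Catalan pair
    "Hola.": np.array([1.0, 0.0]),
    "Arrivederci.": np.array([0.0, 1.0]),  # a misaligned pair
}

pairs = [
    ("Bon dia.", "Buongiorno."),
    ("Bon dia.", "Buongiorno."),   # exact duplicate, dropped
    ("Hola.", "Arrivederci."),     # similarity 0.0 < 0.75, dropped
]
kept = filter_pairs(pairs, toy_vecs.__getitem__)
print(kept)  # [('Bon dia.', 'Buongiorno.')]
```

In practice, embed would call a LaBSE encoder (e.g. via the sentence-transformers library) in batches rather than one sentence at a time.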
Who are the source language producers?
[N/A]
Annotations
Annotation process
The dataset does not contain any annotations.
Who are the annotators?
[N/A]
Personal and Sensitive Information
Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied, personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.
Considerations for Using the Data
Social Impact of Dataset
By providing this resource, we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.
Discussion of Biases
No specific bias mitigation strategies were applied to this dataset. Inherent biases may exist within the data.
Other Known Limitations
The dataset contains general-domain data, so it may be of limited use for more specific domains such as the biomedical or legal domains.
Additional Information
Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]).
This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.
Licensing Information
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Citation Information
[N/A]
Contributions
[N/A]