Regarding the newly updated indexes (described as deduplication issues)

#29
by kimcando - opened

Hey,

I am wondering what kind of deduplication issues were present in the previous version.
Compared to the previous version, the updated data volume seems to be getting bigger rather than smaller. (My first thought was that if there was an issue with the dedup, an undeduplicated or loosely deduplicated version might have been uploaded, so the fixed dataset should end up smaller. But instead it gets bigger!)
So could you please share what kind of issues were found?
Below is what I figured out from the previous and updated index information in the README.

image.png

Thanks in advance
cheers!

HuggingFaceFW org

Hi!
Basically, our mapping between files was broken between the different steps of the minhash deduplication, so while the correct number of documents had been removed from each of these dumps, those documents were not actually the duplicates that we had identified. In general, the biggest duplicate clusters are below average in length, which explains the increase in the total size of these dumps.
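To make the failure mode concrete for other readers, here is a minimal sketch using `datasketch` rather than the actual FineWeb/datatrove code (the documents, keys and threshold below are toy values): a staged MinHash dedup where duplicates are identified by "file/index" keys in one stage and filtered in a later one. If the file mapping drifts between the two stages, the right *number* of documents is still removed, just not the duplicates that were identified.

```python
# Minimal sketch with datasketch (NOT the FineWeb/datatrove code): stage 1 computes
# signatures keyed by "file/index", stage 2 marks duplicates by those keys, and
# stage 3 filters. All documents, keys, and the threshold are toy values.
from datasketch import MinHash, MinHashLSH

def signature(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = {  # "file/index" -> document text
    "dump-a/0": "the quick brown fox jumps over the lazy dog",
    "dump-a/1": "the quick brown fox jumps over the lazy dog",   # duplicate of dump-a/0
    "dump-b/0": "an entirely different document about another topic",
}

# Stage 1: signatures, keyed by "file/index"
sigs = {key: signature(text) for key, text in docs.items()}

# Stage 2: find duplicate clusters and mark all but one member per cluster
lsh = MinHashLSH(threshold=0.8, num_perm=128)
for key, m in sigs.items():
    lsh.insert(key, m)
to_remove = set()
for key, m in sigs.items():
    cluster = sorted(lsh.query(m))
    to_remove.update(cluster[1:])  # keep the first member of each cluster

# Stage 3: filter. The keys used here must be the SAME "file/index" values produced
# in stage 1; if the file mapping shifts between stages, len(to_remove) documents
# still get dropped, but they are not the duplicates that were identified.
kept = {key: text for key, text in docs.items() if key not in to_remove}
print(sorted(kept))  # ['dump-a/0', 'dump-b/0']
```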

Thank you so much for your quick reply! :)

Then, one more quick question!
I saw the latest work, 'fineweb-edu'! Considering the timeline, does the fineweb-edu dataset, covering the period of those wrongly uploaded indexes, also have the duplication problem?
To paraphrase the question: was the edu classifier applied to the previous version (v1.0.0), which has the duplication issue between CC-MAIN-2024-10 and CC-MAIN-2021-49, or to the newly uploaded version (v1.1.0)?

HuggingFaceFW org

FineWeb-Edu was applied on the fixed version

Thank you!

Uhm, these questions are a little different from the issue itself, but I am actually lost about your deduplication results in the FineWeb blog.
In particular, you mention the following in the 'TAKING A STEP BACK: INDIVIDUAL DUMP DEDUP' section (I split the paragraph myself to ask the question clearly):

  1. We hypothesize that the main improvement gained from deduplication is the removal of very large clusters that are present in every single dump (you will find some examples of these clusters in the RefinedWeb paper, each containing hundreds of thousands of documents)
    -> my understanding: each document in a big duplicate cluster (one with many duplicates) is more likely to be 'low quality', e.g. proxy pages, IP redirections, or articles that are so common there is no need to learn them repeatedly.

  2. and that further deduplication for clusters with a low number of duplicates (less than ~100 i.e. the number of dumps) actually harms performance
    -> my understanding: in contrast to 1), smaller duplicate clusters are better kept, but no specific reason is given. Maybe those documents carry more information than the documents belonging to the bigger clusters?

  3. data that does not find a duplicate match in any other dump might actually be worse quality/more out of distribution (as evidenced by the results on the 2013-48 data).
    -> my understanding: a unique document might be regarded as good because it is unique, i.e. 'new' within the dataset, but a unique document is not always good; it may be unique because it is too far from a normal-quality document.

First, I can't draw a clear conclusion from this paragraph. Is this the statement you want to make? -> "Documents in the bigger clusters tend to be bad, but a single unique document whose cluster contains only that one document can also be bad because it might be out of distribution. THEREFORE the deduplication recipe should 1) remove the documents in the bigger clusters, and 2) use a strict filtering rule to remove the single-document cluster documents?"

Second, I don't get the connection between the conclusion of that paragraph and the statement in this repo saying 'While we originally intended to deduplicate the dataset as a whole, our ablations showed that training on a sampling of individually deduplicated dumps/crawls outperformed training on a sampling of all the dumps/crawls deduplicated together. You will find more details on our blogpost.' Based on the first claim, the globally deduplicated version should remove the really big clusters and also remove several OOD-style documents. Furthermore, I thought global deduplication meant applying minhash across all dumps, but it seems you used URL and line deduplication, which might be far too aggressive. So could it still hold that deduplicating the data globally is worse? What if you just applied minhash to all the dumps (of course, that might take a lot of computation)?

Thank you again for your quick reply. I'm so grateful to finally have the opportunity to discuss these questions that have been a mystery to me!

HuggingFaceFW org
edited Jun 3

Hi,
We did apply minhash to all the dumps. This is detailed in the "More deduplication is always better, right?" section. This resulted in around 4T tokens. Training on 350GT sampled from these 4T gave performance equivalent to training on 350GT randomly sampled tokens pre-dedup (all this deduplication did not improve the overall perf).
We then tried deduplicating each dump individually with the same minhash code. This is the "Taking a step back: individual dump dedup" section. Sampling 350GT from all the individually deduplicated dumps yielded much better results. This is in line with the claim on the dataset page/repo.
Finally, we tried applying some other methods globally (url, some variations of line dedup, which match the lines exactly). We consider these methods to be lighter than minhash as you require a larger "exact match" (either the full URL, or individual lines), but they also performed worse than just the independent dedup.
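
To make the three setups concrete, here is a rough sketch (not the actual datatrove pipeline; `minhash_dedup`, the dump names and the URL helper are illustrative stand-ins): global MinHash runs one pass over all crawls concatenated, per-dump MinHash runs the same code on each crawl independently, and URL dedup only drops exact URL matches.

```python
# Rough sketch, not the actual datatrove pipeline. `minhash_dedup` is a hypothetical
# stand-in for the MinHash code described in the blog; dumps and URLs are placeholders.
from typing import Callable

Docs = list[str]

def global_minhash_dedup(dumps: dict[str, Docs],
                         minhash_dedup: Callable[[Docs], Docs]) -> Docs:
    # One MinHash pass over every document from every crawl:
    # cross-dump duplicates (including docs repeated once per dump) are removed.
    all_docs = [doc for docs in dumps.values() for doc in docs]
    return minhash_dedup(all_docs)

def per_dump_minhash_dedup(dumps: dict[str, Docs],
                           minhash_dedup: Callable[[Docs], Docs]) -> dict[str, Docs]:
    # The same MinHash code, run independently on each crawl:
    # only within-dump duplicates are removed, so documents that reappear
    # once per dump (small cross-dump clusters) survive.
    return {name: minhash_dedup(docs) for name, docs in dumps.items()}

def global_url_dedup(docs_with_urls: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # A "lighter" global method: drop a document only if its full URL was already
    # seen (exact match), which keeps near-duplicates that live at distinct URLs.
    seen: set[str] = set()
    kept = []
    for url, text in docs_with_urls:
        if url not in seen:
            seen.add(url)
            kept.append((url, text))
    return kept
```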

As to the first part of your question:

my understanding: each document in a big duplicate cluster (one with many duplicates) is more likely to be 'low quality', e.g. proxy pages, IP redirections, or articles that are so common there is no need to learn them repeatedly.

We do not have a super conclusive answer on this. I am not sure they are necessarily worse quality, but they are often repeated millions or even billions of times. Here I think the main point is that they waste a lot of compute/model capacity.

my understanding: in contrast to 1), smaller duplicate clusters are better kept, but no specific reason is given. Maybe those documents carry more information than the documents belonging to the bigger clusters?

These here refer to documents that are repeated once per dump (since in the end we do not deduplicate across dumps). If they are recrawled in every single dump, then possibly they are better quality.

my understanding: a unique document might be regarded as good because it is unique, i.e. 'new' within the dataset, but a unique document is not always good; it may be unique because it is too far from a normal-quality document.

Yes. Specifically, by the very last (the oldest) dump, whatever is left at the end must be dissimilar to EVERY single other document in the full dataset. From observation, these were mostly nonsensical text or weird formatting.

Hope that clears it up a bit
