Hi, thank you for your release. I've been reviewing the method we use to calculate the repetition score for identifying duplicate content in documents, specifically the segment where we compute this score based on the number of characters within duplicate n-grams:
I noticed that we're using character counts (word_lengths) to determine the extent of duplication. This approach focuses on the granularity of characters rather than whole words. Could you help me understand the rationale behind choosing character-level analysis for this metric instead of basing our calculations directly on word counts? Are there specific advantages or scenarios where character-level detail provides better insights into data quality or model training effectiveness that might not be as apparent with word-level analysis?
Looking forward to your insights.
Hi @luc1fer3 and thanks for your question. This repetition score measures the ratio between the number of characters that appear in duplicated n-grams and the total number of characters in the document. As such, the score combines information at the character level and at the (word-)n-gram level. Computing character-based metrics essentially means you normalize at a finer level of granularity, taking into account more information than when using the number of words (e.g., think of long words that are repeated often). It's possible, though, that combining it with word-level statistics also gives you a good indicator.
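To make the normalization difference concrete, here is a minimal sketch of both variants side by side. This is an illustrative assumption, not the actual `repetitions.py` implementation: the function name, the n-gram definition, and the tie-breaking of overlapping n-grams are all simplifications.

```python
from collections import Counter


def repetition_scores(text: str, n: int = 3) -> tuple[float, float]:
    """Return (char_score, word_score) for duplicate word n-grams.

    Illustrative sketch only -- not the RedPajama implementation.
    char_score normalizes by characters in words (as the quality
    signal does); word_score normalizes by word count for comparison.
    """
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)

    # Mark every word position covered by an n-gram that occurs more than once.
    duplicated = [False] * len(words)
    for i, ngram in enumerate(ngrams):
        if counts[ngram] > 1:
            for j in range(i, i + n):
                duplicated[j] = True

    total_chars = sum(len(w) for w in words)
    dup_chars = sum(len(w) for w, d in zip(words, duplicated) if d)
    dup_words = sum(duplicated)

    char_score = dup_chars / total_chars if total_chars else 0.0
    word_score = dup_words / len(words) if words else 0.0
    return char_score, word_score
```

On a document where two long words repeat amid many short unique words, e.g. `"internationalization standardization internationalization standardization a b c d e f"` with `n=2`, the character-normalized score is about 0.92 while the word-normalized score is only 0.40 -- the character-level view captures that most of the document's *mass* is repeated, which the word count alone would understate.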
For reference, the code in question: RedPajama-Data/app/src/core/quality_signals/repetitions.py, lines 136 to 138 at commit bb594b0.