
Dataset Description

To help researchers use NanoLM for comparative analysis across different model designs, we built a curated pre-training dataset drawn from those of existing large-scale models (e.g., Llama, Falcon, GPT-3). It covers diverse domains to improve the generalization capabilities of the resulting models.
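Each record in the corpus carries a `uuid`, a `dataset` tag naming its source subset (e.g. `arxiv`), and the raw `text`. As an illustrative sketch (the helper name and the non-arXiv tag below are our own examples, not part of any released tooling), selecting one source subset from a batch of records looks like:

```python
def by_source(records, source):
    """Return only the records whose 'dataset' field matches `source`.

    Records follow the card's schema: {'uuid': int, 'dataset': str, 'text': str}.
    """
    return [r for r in records if r["dataset"] == source]

# Toy records mimicking the corpus schema; the 'c4' tag is hypothetical.
batch = [
    {"uuid": 1, "dataset": "arxiv", "text": "\\section{Introduction} ..."},
    {"uuid": 2, "dataset": "c4", "text": "A plain web document."},
]
print(len(by_source(batch, "arxiv")))  # 1
```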

Dataset Creation

The data is primarily post-processed and filtered from RedPajama and RedPajamaV2. We developed a series of cleaning steps to remove redundant formatting, garbled characters, formula errors, duplicated paragraphs, low-quality text, and other unwanted content. After document-level deduplication within each independent subset, we obtained the final high-quality dataset.
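A minimal sketch of the document-level deduplication step, assuming exact matching on normalized text (the actual pipeline and its normalization rules are not released with this card):

```python
import hashlib

def dedup_documents(docs):
    """Keep the first occurrence of each document, keyed by a SHA-256
    hash of the whitespace-collapsed, lowercased text."""
    seen, kept = set(), []
    for text in docs:
        key = hashlib.sha256(" ".join(text.split()).lower().encode("utf-8")).digest()
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept

print(dedup_documents(["A  cat.", "a cat.", "A dog."]))  # ['A  cat.', 'A dog.']
```

Real pipelines typically also use fuzzy (e.g. MinHash-based) matching; exact hashing is the simplest case.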

Dataset Summary

Dataset          Num Tokens (B)
CommonCrawl               67.00
C4                        15.00
Wikipedia (En)             5.14
Books                      4.48
ArXiv                      2.50
StackExchange              2.00
Total                     97.12

We release approximately 100B tokens of data. Furthermore, we recommend adding a code dataset such as StarCoder or The Stack v2 to improve the model's performance on code and reasoning.
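The recommendation above can be sketched as a simple corpus-mixing helper. This is a toy illustration under our own assumptions (the deterministic shuffle and the 10% default code fraction are hypothetical choices, not prescribed by NanoLM):

```python
import random

def mix_corpora(text_docs, code_docs, code_fraction=0.1, seed=0):
    """Append enough code documents that they make up roughly
    `code_fraction` of the final mix, then shuffle deterministically."""
    n_code = min(len(code_docs),
                 round(code_fraction / (1.0 - code_fraction) * len(text_docs)))
    mixed = list(text_docs) + list(code_docs[:n_code])
    random.Random(seed).shuffle(mixed)
    return mixed

mix = mix_corpora(["t"] * 9, ["c"] * 5)
print(len(mix), mix.count("c"))  # 10 1
```

In practice one would mix by token count rather than document count, but the ratio arithmetic is the same.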

Citation

To cite NanoLM, please use:


@misc{yao2024nanolm,
  title={nanoLM: an Affordable LLM Pre-training Benchmark via Accurate Loss Prediction across Scales},
  author={Yiqun Yao and Siqi Fan and Xiusheng Huang and Xuezhi Fang and Xiang Li and Ziyi Ni and Xin Jiang and Xuying Meng and Peng Han and Shuo Shang and Kang Liu and Aixin Sun and Yequan Wang},
  year={2024},
  eprint={2304.06875},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Acknowledgement

The data is mainly curated and filtered from RedPajama and RedPajamaV2. We extend our gratitude to the original authors for their innovative work and for making their data available to the community.

License

The NanoLM code used for dataset processing and loss prediction is licensed under the Apache 2.0 license.

For the curated data, please refer to the licenses of the original source datasets.
