Since OpenAI opened access to ChatGPT, large language models (LLMs) have become an increasingly popular topic, attracting researchers' attention from many domains. However, public researchers face difficulties when developing LLMs, given that most LLMs are produced by industry and their training details are typically undisclosed. Since datasets are a key component of LLM development, this paper presents a holistic survey of the training datasets used in both the pre-training and fine-tuning processes. The paper first summarizes 16 pre-training datasets and 16 fine-tuning datasets used in state-of-the-art LLMs. Second, based on the properties of the pre-training and fine-tuning processes, it comments on pre-training datasets in terms of quality, quantity, and their relation to models, and on fine-tuning datasets in terms of quality, quantity, and concerns. This study then critically identifies the problems and research trends present in current LLM datasets. The study helps public researchers train and investigate LLMs through visual cases and provides useful comments to the research community regarding data development. To the best of our knowledge, this paper is the first to summarize and discuss the datasets used in both autoregressive and chat LLMs. The survey offers insights and suggestions to researchers and LLM developers as they build their models, and contributes to LLM research by pointing out the existing problems of LLM studies from the perspective of data.