A Survey of LLM Datasets: From Autoregressive Model to AI Chatbot
Authors: 杜非, 马新建, 杨婧如, 柳熠, 罗超然, 王学斌, 姜海鸥, 景翔. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, Issue 3, pp. 542-566 (25 pages).
Since OpenAI opened access to ChatGPT, large language models (LLMs) have become an increasingly popular topic, attracting researchers from many domains. However, researchers outside industry face obstacles when developing LLMs, because most LLMs are produced by industry and their training details are typically not disclosed. Since datasets are a key ingredient of LLM training, this paper presents a holistic survey of the training datasets used in both the pre-training and fine-tuning processes. It first summarizes 16 pre-training datasets and 16 fine-tuning datasets used in state-of-the-art LLMs. Secondly, based on the properties of the pre-training and fine-tuning processes, it discusses pre-training datasets in terms of quality, quantity, and their relation to models, and fine-tuning datasets in terms of quality, quantity, and open concerns. The study then critically identifies the problems and research trends in current LLM datasets. It helps public researchers train and investigate LLMs through illustrative cases and provides useful comments to the research community regarding data development. To the best of our knowledge, this paper is the first to summarize and discuss the datasets used in both autoregressive and chat LLMs. The survey offers insights and suggestions to researchers and LLM developers as they build their models, and contributes to LLM research by pointing out the existing problems of LLM studies from the perspective of data.
Keywords: large language model (LLM), autoregressive model, AI chatbot, natural language processing (NLP) corpora, OpenAI