Abstract: Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
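The abstract builds on the standard CVSS base-score arithmetic. As a concrete reference point, the sketch below implements the published CVSS v3.1 base-score equations (scope unchanged) and scores a hypothetical network-reachable prompt-injection vector; the metric choices for that vector are illustrative assumptions, not scores taken from the paper.

```python
import math

# CVSS v3.1 metric weights, from the FIRST.org specification (scope unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """Spec-mandated round-up to one decimal place, immune to float drift."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """CVSS v3.1 base score for a scope-unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Illustrative vector for a prompt injection reachable over the network,
# needing no privileges or user interaction, with high C/I/A impact:
# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

An extension of the kind the paper describes would add or re-weight metrics for LLM-specific characteristics on top of these base equations.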
Abstract: Artificial Intelligence (AI) is transforming organizational dynamics and revolutionizing corporate leadership practices. This research paper delves into how AI influences corporate leadership, examining both its advantages and disadvantages. Positive impacts of AI are evident in communication, feedback systems, tracking mechanisms, and decision-making processes within organizations. AI-powered communication tools, as exemplified by Slack, facilitate seamless collaboration, transcending geographical barriers. Feedback systems, like Adobe’s Performance Management System, employ AI algorithms to provide personalized development opportunities, enhancing employee growth. AI-based tracking systems optimize resource allocation, as exemplified by studies like “AI-Based Tracking Systems: Enhancing Efficiency and Accountability.” Additionally, AI-powered decision support, demonstrated during the COVID-19 pandemic, showcases the capability to navigate complex challenges and maintain resilience. However, AI adoption poses challenges in human resources, potentially leading to job displacement and necessitating upskilling efforts. Managing AI errors becomes crucial, as illustrated by instances like Amazon’s biased recruiting tool. Data privacy concerns also arise, emphasizing the need for robust security measures. The proposed solution suggests leveraging Local Machine Learning Models (LLMs) to address data privacy issues. Approaches such as federated learning, on-device learning, differential privacy, and homomorphic encryption offer promising strategies. By exploring the evolving dynamics of AI and leadership, this research advocates for responsible AI adoption and proposes LLMs as a potential solution, fostering a balanced integration of AI benefits while mitigating associated risks in corporate settings.
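Of the privacy techniques the abstract lists, differential privacy is the easiest to illustrate compactly. The sketch below releases a count query under a Laplace mechanism, a minimal differential-privacy primitive, assuming the query has sensitivity 1; the `dp_count` function and its parameters are hypothetical names for illustration, not from the paper.

```python
import math
import random

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count under epsilon-differential privacy by adding
    Laplace(0, 1/epsilon) noise (a counting query has sensitivity 1).
    The measure-zero u == -0.5 edge case is ignored in this sketch."""
    scale = 1.0 / epsilon
    u = rng.random() - 0.5                       # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Smaller epsilon means stronger privacy and noisier released values.
print(dp_count(120, 1.0, random.Random(0)))
```

Federated learning, on-device learning, and homomorphic encryption address the same goal at different layers: where data lives, where training runs, and what can be computed on ciphertext.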
Abstract: With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and in hardware and software prerequisites for training, all culminating in substantial financial investment. In this paper, we present techniques such as supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. First, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects encountered errors. Second, we propose an approach that runs two smaller language models (in a network) on the same task and retrieves the better of the two outputs, ensuring peak performance for a specific task. Experimental evaluations establish quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from techniques such as supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
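The two-model-network idea described above, running two SLMs on the same task, keeping the higher-scoring output, and deferring low-confidence cases to a supervisor, can be sketched as follows. The function names, the scoring callback, and the threshold value are illustrative assumptions; the paper's actual scoring functions are not reproduced here.

```python
from typing import Callable, Tuple

def best_of_two(prompt: str,
                model_a: Callable[[str], str],
                model_b: Callable[[str], str],
                score: Callable[[str, str], float],
                threshold: float = 0.5) -> Tuple[str, float]:
    """Run two small models on the same prompt, score each output with a
    task-specific scoring function, and keep the higher-scoring answer;
    answers below the threshold are routed onward for correction."""
    outputs = [model_a(prompt), model_b(prompt)]
    scored = sorted(((score(prompt, out), out) for out in outputs), reverse=True)
    top_score, top_out = scored[0]
    if top_score < threshold:
        return "<route to supervisor>", top_score
    return top_out, top_score

# Toy stand-ins for the two SLMs and the scorer:
answer, conf = best_of_two(
    "2 + 2 = ?",
    lambda p: "4",
    lambda p: "5",
    lambda p, out: 1.0 if out == "4" else 0.2,
)
print(answer, conf)  # → 4 1.0
```

The two model calls are independent, so they are natural candidates for the parallel processing the abstract mentions.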
Abstract: This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, a pre-processing filter leveraging a toxic classifier and ethical prompt generator, and a pre-display filter using the model itself for output screening. Extensive testing on Meta’s Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
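A minimal sketch of the tiered-filter idea: prompts pass through a chain of independent checks before ever reaching the model. A keyword list stands in for the toxic classifier, and the ethical prompt generator and model-based pre-display screen are omitted; all names here are hypothetical, not the GUARDIAN implementation.

```python
from typing import Callable, List, Tuple

Verdict = Tuple[bool, str]                 # (allowed?, reason)
PromptFilter = Callable[[str], Verdict]

def keyword_filter(prompt: str) -> Verdict:
    # Toy stand-in for the toxic-classifier tier: block known jailbreak phrasing.
    markers = ("ignore previous instructions", "pretend you have no rules")
    lowered = prompt.lower()
    for m in markers:
        if m in lowered:
            return False, f"blocked by keyword filter: '{m}'"
    return True, "ok"

def length_filter(prompt: str) -> Verdict:
    # The system-prompt tier often also bounds input size.
    ok = len(prompt) <= 2000
    return ok, "ok" if ok else "blocked: prompt too long"

def guarded_generate(prompt: str,
                     filters: List[PromptFilter],
                     generate: Callable[[str], str]) -> str:
    """Chain pre-processing filters; only a prompt that passes every tier
    reaches the model. A pre-display tier would re-screen the output."""
    for f in filters:
        allowed, reason = f(prompt)
        if not allowed:
            return reason
    return generate(prompt)
```

The design keeps each tier independently testable, which is what makes the layer-by-layer quantitative evaluation the abstract describes possible.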
Abstract: With the spread of the Internet, agricultural knowledge and information have become far easier to obtain, but most of it is static and generic and cannot offer solutions tailored to specific situations. Against this backdrop, Large Language Models (LLMs) have gained growing attention and application in agriculture as an efficient artificial intelligence tool. Existing surveys of large models in agriculture describe LLM technology only briefly and do not systematically cover the LLM construction pipeline. This paper focuses on the pipeline for building an LLM for the agricultural vertical domain, covering data collection and preprocessing, selection of an appropriate base model, fine-tuning, Retrieval Augmented Generation (RAG), and evaluation, and describes how the LangChain framework can be used to build an agricultural question-answering system. Finally, it summarizes the current challenges in building agricultural LLMs, including data security, model forgetting, and model hallucination, and outlines future directions, including multimodal data fusion, timely data updating, multilingual knowledge representation, and fine-tuning cost optimization, to further advance the intelligence and modernization of agricultural production.
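The RAG step in the pipeline above can be sketched without any framework: embed the corpus, retrieve the passages closest to the query embedding, and splice them into the prompt handed to the base model. The toy two-dimensional embeddings and function names below are illustrative assumptions; LangChain packages equivalent retriever and prompt-assembly components.

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: List[float],
             corpus: Dict[str, List[float]],
             k: int = 2) -> List[str]:
    """Return the k passages whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: List[str]) -> str:
    """Assemble the augmented prompt handed to the base model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\n{context}\nQuestion: {question}"
```

In a real agricultural QA system the corpus embeddings would come from an embedding model and live in a vector store rather than an in-memory dict.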
Funding: supported by the National Key Research and Development Program (Nos. 2021YFF0901705, 2021YFF0901700), the State Key Laboratory of Media Convergence and Communication (Communication University of China), the Fundamental Research Funds for the Central Universities, and the High-Quality and Cutting-Edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Aspect-Based Sentiment Analysis (ABSA) is a fundamental area of research in Natural Language Processing (NLP). Within ABSA, Aspect Sentiment Quad Prediction (ASQP) aims to accurately identify sentiment quadruplets in target sentences, including aspect terms, aspect categories, corresponding opinion terms, and sentiment polarity. However, most existing research has focused on English datasets. Consequently, while ASQP has seen significant progress in English, the Chinese ASQP task has remained relatively stagnant. Drawing inspiration from methods applied to English ASQP, we propose Chinese generation templates and employ prompt-based instruction learning to enhance the model’s understanding of the task, ultimately improving ASQP performance in the Chinese context. Under the same pre-training model configuration, our approach achieved a 5.79% improvement in F1 score over the previously leading method; with a larger model and fewer trainable parameters, the F1 score improved by 8.14%. Additionally, we propose a novel evaluation metric based on the characteristics of generative models that better reflects model generalization. Experimental results validate the effectiveness of our approach.
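A sketch of how quadruplets might be recovered from a generation template and scored with exact-match F1. The line-per-quad `aspect | category | opinion | polarity` output format is a hypothetical stand-in for the paper's Chinese templates, and `quad_f1` is the standard exact-match metric, not the new generalization-oriented metric the paper proposes.

```python
from typing import List, Tuple

Quad = Tuple[str, str, str, str]  # (aspect term, aspect category, opinion term, polarity)

def parse_quads(generated: str) -> List[Quad]:
    """Parse one 'aspect | category | opinion | polarity' quad per line
    from a generative model's decoded output."""
    quads: List[Quad] = []
    for line in generated.splitlines():
        parts = tuple(p.strip() for p in line.split("|"))
        if len(parts) == 4:
            quads.append(parts)
    return quads

def quad_f1(pred: List[Quad], gold: List[Quad]) -> float:
    """Exact-match F1: a predicted quad counts only if all four slots match."""
    p, g = set(pred), set(gold)
    tp = len(p & g)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)
```

Exact slot-by-slot matching is deliberately strict: one wrong polarity or opinion term turns an otherwise correct quad into both a false positive and a false negative.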
Abstract: Over the past two decades, language modeling (LM) has become a dominant approach to language understanding and generation and a key downstream technology in Natural Language Processing (NLP). In recent years, Large Language Models (LLMs) such as ChatGPT have made remarkable progress, profoundly transforming artificial intelligence and many other fields. Given this rapid development, this paper first surveys the evolution of LLM technical architectures and model scale, and summarizes training methods, optimization techniques, and evaluation approaches. It then analyzes the current state of LLM applications in education, healthcare, finance, industry, and other domains, discussing their strengths and limitations. It further examines the safety and alignment issues LLMs raise around social ethics, privacy, and security, together with technical countermeasures. Finally, it looks ahead to future research trends, including model scale and efficiency, multimodal processing, and societal impact. By comprehensively analyzing the current state of research and future directions, the paper aims to offer researchers insight and inspiration on large language models and to advance the field.
Abstract: Since OpenAI released its generative AI (AIGC, artificial intelligence generated content; also called generative AI) product, ChatGPT, in November 2022, the world has been upended. Generative AI has two main strands, large language models (LLMs) and diffusion models, with new applications and research appearing at an accelerating pace. This paper first raises a serious question about the level of intelligence LLMs exhibit: do they truly possess Artificial General Intelligence (AGI) comparable to ordinary human intelligence? It advances a central hypothesis: as a closed system, an LLM is designed to represent and store humanity's vast knowledge and intelligent capabilities and behaviors, and is equipped with the highest value standard, namely that the model must align with human values, yet its internal structure and properties do not show that it possesses AGI. As an open system, however, once we supply formatted text that implicitly encodes human knowledge and intelligence, we suddenly find that the model's output exhibits characteristics of human intelligence and behavior. This formatted input is called a prompt, and the more intelligent the prompt, the better the model's intelligent output; in other words, an LLM possesses a kind of AGI capability conditioned on prompts. Because economics and other social sciences such as politics, history, and linguistics encompass the most complex social forms and humanity's deepest thinking, this paper draws on the latest findings of other researchers to ask whether the apparent AGI of LLMs is fact or illusion, and to examine their other economic functions and utility. The topics surveyed include the IQ level of LLMs, the industrial economics of generative AI, computational social science research under generative AI, business decision-making with LLMs, economics and other social sciences, and the paradigm of a virtual generative-AI economist.