Purpose - Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence the need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection. Design/methodology/approach - This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that includes Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction and Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The features extracted from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither. Findings - The proposed method showed better results when tested on the collected Twitter datasets compared to other related methods. To validate the performance of the proposed method, a t-test and post hoc multiple comparisons were used to compare the significance and means of the proposed method with those of other related methods for hate speech detection. A paired-sample t-test was also conducted to validate the performance of the proposed method against other related methods. Research limitations/implications - The evaluation results showed that the proposed method outperforms other related methods with a mean F1-score of 91.3. Originality/value - The main novelty of this study is the use of an automatic topic spotting measure based on a naïve Bayes model to improve feature representation.
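To make the described pipeline concrete, the sketch below illustrates the hybrid-embedding idea only: word-level TF-IDF features are concatenated with sentence-level features from an LSTM encoder before classification. The paper's topic inference step and improved cuckoo search neural network are not publicly available, so an untrained Keras LSTM and a plain softmax head stand in for them, and the tweets, labels and layer sizes are placeholder assumptions rather than the authors' setup.

```python
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["example tweet one", "another example tweet", "a third short tweet"]  # placeholder corpus
labels = np.array([0, 1, 2])  # 0 = neither, 1 = offensive language, 2 = hate speech (placeholder labels)

# Word-level view: TF-IDF over the tweet vocabulary.
word_feats = TfidfVectorizer(max_features=5000).fit_transform(tweets).toarray().astype("float32")

# Sentence-level view: token ids fed through a small (untrained) LSTM encoder,
# standing in for the trained LSTM described in the abstract.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=30)
vectorizer.adapt(tweets)
token_ids = vectorizer(tf.constant(tweets))
encoder = tf.keras.Sequential([
    tf.keras.layers.Embedding(5000, 32),
    tf.keras.layers.LSTM(32),
])
sent_feats = encoder(token_ids).numpy()

# Hybrid embedding = concatenation of both views; a dense softmax head replaces
# the improved cuckoo search neural network for illustration only.
hybrid = np.hstack([word_feats, sent_feats])
head = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
head.fit(hybrid, labels, epochs=5, verbose=0)
print(head.predict(hybrid, verbose=0).argmax(axis=1))
```

In a realistic setting the LSTM encoder would be trained jointly with the classifier, and the cuckoo search procedure would replace gradient-based weight selection.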
In recent years, the usage of social networking sites has increased considerably in the Arab world. It has empowered individuals to express their opinions, especially in politics. Furthermore, various organizations that operate in the Arab countries have embraced social media in their day-to-day business activities at different scales. This is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is difficult to process because nearly 10,000 roots and more than 900 patterns act as the basis for verbs and nouns. Hate speech on online social networking sites has turned into a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model in the context of the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal. To attain this, the CEHOML-HSD model follows different sub-processes, as discussed below. At the initial stage, the CEHOML-HSD model undergoes data pre-processing with the help of the TF-IDF vectorizer. Secondly, the Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in the Arabic language. Lastly, the CEHO approach is employed to fine-tune the parameters involved in the SVM. The CEHO approach is developed by combining chaotic functions with the classical EHO algorithm, and the design of this algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study outcomes established its supremacy over other approaches.
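As an illustration of how a chaotic map can drive SVM parameter tuning in a pipeline like the one above, the sketch below uses a logistic map to propose candidate values of the SVM cost parameter C and keeps the best by cross-validation. The actual CEHO algorithm is not public, so this logistic-map search is only a stand-in, and the Arabic snippets, labels and the C search range are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

texts = ["نص عادي", "نص يحتوي على كراهية", "تعليق محايد", "كلام مسيء"]  # placeholder Arabic snippets
labels = np.array([0, 1, 0, 1])                                         # 0 = normal, 1 = hate (placeholder)

X = TfidfVectorizer().fit_transform(texts)

# Logistic map as the "chaotic" candidate generator (illustrative assumption, not CEHO itself).
def logistic_map(x0=0.7, r=3.99, n=10):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

best_c, best_score = None, -np.inf
for u in logistic_map():
    C = 10 ** (4 * u - 2)  # map the chaotic value in (0, 1) to C in roughly [1e-2, 1e2]
    score = cross_val_score(SVC(C=C, kernel="linear"), X, labels, cv=2).mean()
    if score > best_score:
        best_c, best_score = C, score
print("best C:", best_c, "cv accuracy:", best_score)
```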
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of their relevant capabilities is in-context learning: the ability to receive instructions in natural language or task demonstrations and generate the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches with information retrieval. Furthermore, the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
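The sketch below shows one way the zero-shot and few-shot settings described above can be realized with the Hugging Face transformers library: a prompt is built with optional labelled demonstrations and completed by an instruction-tuned model. The prompt wording, label set and the Zephyr checkpoint name are assumptions rather than the paper's exact setup, and running the generator requires downloading a large checkpoint.

```python
from transformers import pipeline

def build_prompt(text, examples=()):
    """Zero-shot when `examples` is empty, few-shot otherwise."""
    demos = "".join(f"Text: {t}\nLabel: {y}\n\n" for t, y in examples)
    return (
        "Classify the text as 'sexist', 'hateful' or 'neither'.\n\n"
        f"{demos}Text: {text}\nLabel:"
    )

# A couple of illustrative demonstrations for the few-shot setting (placeholders).
few_shot = [("Women belong in the kitchen.", "sexist"),
            ("Have a nice day!", "neither")]
prompt = build_prompt("Go back to your country.", few_shot)

# Assumed checkpoint; any instruction-tuned causal LM on the Hub could be substituted.
generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```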
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of the MLPs, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then adjusts the weights as a fine-tuning step to optimize the performance of the MLPs. In this approach, two separate MLP models are employed: one is dedicated to predicting degrees of truth membership, while the other focuses on predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in the predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
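A minimal sketch of the two-network neutrosophic idea follows: one model estimates a degree of truth membership and a second estimates a degree of false membership, and the gap between the two is read as an indeterminacy score. Standard gradient training of scikit-learn MLPs stands in for the WOA/PSO weight search, and the features, targets and the exact mapping from the membership gap to indeterminacy are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # placeholder tweet features
truth = (X[:, 0] > 0).astype(float)     # placeholder truth-membership targets
false = 1.0 - truth                     # placeholder false-membership targets

# Two separate networks, one per membership degree (gradient training replaces WOA/PSO here).
mlp_truth = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, truth)
mlp_false = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=1).fit(X, false)

T = np.clip(mlp_truth.predict(X), 0, 1)   # truth membership
F = np.clip(mlp_false.predict(X), 0, 1)   # false membership
I = np.abs(T - F)                         # indeterminacy read from the gap (assumed mapping)
print("mean truth/false/indeterminacy:", T.mean(), F.mean(), I.mean())
```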
Considering the prevalence of online hate speech and its harm and risks to the targeted people, democratic discourse and public security, it is necessary to combat online hate speech. For this purpose, internet intermediaries play a crucial role as new governors of online speech. However, there is no universal definition of hate speech, and rules concerning it vary across countries depending on their social, ethical, legal and religious backgrounds. The answer to the question of who can be held liable for online hate speech also varies across countries depending on their social, cultural, historical, legal and political backgrounds. The First Amendment, cyberliberalism and the priority of promoting the emerging internet industry lead to the U.S. model, which offers intermediaries wide exemptions from liability for third-party illegal content. Conversely, the Chinese model of cyberpaternalism prefers to control online content on ideological, political and national security grounds through indirect methods, whereas the European Union (EU) and most European countries, including Germany, choose the middle ground to balance restricting online illegal hate speech against freedom of speech and internet innovation. It is worth noting that there is a heated discussion on whether intermediary liability exemptions are still suitable for the world today, and there is a tendency in the EU to expand intermediary liability by imposing obligations on online platforms to tackle illegal hate speech. However, these reforms are in turn criticized because they could lead to erosion of the EU legal framework as well as privatization of law enforcement through algorithmic tools. These critical issues relate to the central questions of whether intermediaries should be liable for user-generated illegal hate speech at all and, if so, how they should fulfill these liabilities. Based on the analysis of the different basic standpoints of cyberliberalists and cyberpaternalists on internet regulation as well as the arg…
Wuthering Heights is an exceptionally successful novel written by Emily Brontë. The novel is organized around the love between Heathcliff and Catherine. This thesis attempts to analyze the dual personality of Heathcliff: through his journey of love, revenge, death and the restoration of humanity, his character is shaped by both his own traits and his social environment.
Frankenstein, as the world's first science fiction novel, mainly recounts the life of a young scientist, Victor Frankenstein: how he created the monster and how the monster destroyed his life. In the novel, love exists in every character's heart, including the monster's; on the other side, hate also exists in these characters. Both love and hate are depicted in the novel, and in the end love proves more powerful than hate and overcomes it.
Automatic identification of cyberbullying is a problem that is gaining traction, especially in the machine learning community. Not only is it complicated, but it has also become a pressing necessity, considering how social media has become an integral part of adolescents' lives and how serious the impacts of cyberbullying and online harassment can be, particularly among teenagers. This paper contains a systematic literature review of modern strategies, machine learning methods and technical means for detecting cyberbullying and the aggressive conduct of individuals in the online information space. We undertake an in-depth review of 13 papers from four scientific databases. The article provides an overview of the scientific literature and analyzes the problem of cyberbullying detection from the point of view of machine learning and natural language processing. In this review, we consider a cyberbullying detection framework for social media platforms, which includes data collection, data processing, feature selection, feature extraction and the application of machine learning to classify whether texts contain cyberbullying or not. This article seeks to guide future research on this topic toward a perspective more consistent with the phenomenon's description and depiction, allowing future solutions to be more practical and effective.
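The sketch below strings the stages named in this framework (feature extraction, feature selection and classification) into a single scikit-learn pipeline. It is a generic illustration of the reviewed setup rather than any surveyed system, and the example posts, labels and component choices are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

posts = ["you are so stupid nobody likes you", "great game last night",
         "nobody wants you here", "see you at practice tomorrow"]  # placeholder posts
labels = [1, 0, 1, 0]                                              # 1 = cyberbullying, 0 = benign

pipe = Pipeline([
    ("features", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),  # feature extraction
    ("select", SelectKBest(chi2, k=10)),                                # feature selection
    ("clf", LogisticRegression(max_iter=1000)),                         # classification
])
pipe.fit(posts, labels)
print(pipe.predict(["nobody likes you"]))
```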
Arabic is one of the world's oldest languages, characterized by its rich and complicated grammatical formats. Furthermore, Arabic morphology can be perplexing because nearly 10,000 roots and 900 patterns form the basis for verbs and nouns. The Arabic language also consists of distinct variations used within a community and in particular situations. Social media sites are a medium for expressing opinions and for social phenomena like racism, hatred, offensive language and all kinds of verbal violence. Such conduct does not impact particular nations, communities or groups alone; it extends into people's everyday lives. This study introduces an Improved Ant Lion Optimizer with Deep Learning Driven Offensive and Hate Speech Detection (IALODL-OHSD) on Arabic Cross-Corpora. The presented IALODL-OHSD model mainly aims to detect and classify offensive/hate speech expressed on social media. In the IALODL-OHSD model, a three-stage process is performed, namely pre-processing, word embedding and classification. Primarily, data pre-processing is performed to transform the Arabic social media text into a useful format. In addition, the word2vec word embedding process is utilized to produce word embeddings. The attention-based cascaded long short-term memory (ACLSTM) model is utilized for the classification process. Finally, the IALO algorithm is exploited as a hyperparameter optimizer to boost classifier results. To illustrate the result analysis of the IALODL-OHSD model, a detailed set of simulations was performed, and the extensive comparison study portrayed the enhanced performance of the IALODL-OHSD model over other approaches.
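A minimal sketch of the front half of such a pipeline: whitespace tokenization as pre-processing and gensim word2vec for the word embeddings. The attention-based cascaded LSTM classifier and the improved ant lion optimizer are not reproduced; mean-pooled word vectors and a linear classifier stand in for them, and the tiny Arabic corpus and labels are placeholders.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

texts = ["هذا كلام عادي", "هذا خطاب كراهية", "تعليق لطيف", "شتيمة وإساءة"]  # placeholder Arabic posts
labels = np.array([0, 1, 0, 1])                                            # 0 = normal, 1 = offensive/hate

tokens = [t.split() for t in texts]  # simple whitespace pre-processing
w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, seed=0)

# Mean-pool word vectors into a document vector (stand-in for the ACLSTM encoder).
doc_vecs = np.array([np.mean([w2v.wv[w] for w in sent], axis=0) for sent in tokens])

clf = LogisticRegression(max_iter=1000).fit(doc_vecs, labels)
print(clf.predict(doc_vecs))
```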
Applied linguistics is a field within the linguistics domain that deals with practical applications of language studies such as speech processing, language teaching, translation and speech therapy. The ever-growing Online Social Networks (OSNs) face a critical issue to confront, i.e., hate speech. Amongst the OSN-oriented security problems, the usage of offensive language is the most important threat prevalent across the Internet. Based on the group targeted, offensive language varies across adult content, hate speech, racism, cyberbullying, abuse, trolling and profanity. Amongst these, hate speech is the most intimidating form of offensive language, in which the targeted groups or individuals are intimidated with the intent of creating harm, social chaos or violence. Machine Learning (ML) techniques have recently been applied to recognize hate speech-related content. The current research article introduces a Grasshopper Optimization with an Attentive Recurrent Network for Offensive Speech Detection (GOARN-OSD) model for social media. The GOARN-OSD technique integrates the concepts of deep learning and metaheuristic algorithms for detecting hate speech. In the presented GOARN-OSD technique, the primary stage involves the data pre-processing and word embedding processes. Then, the study utilizes the Attentive Recurrent Network (ARN) model for hate speech recognition and classification. At last, the Grasshopper Optimization Algorithm (GOA) is exploited as a hyperparameter optimizer to boost the performance of the hate speech recognition process. To depict the promising performance of the proposed GOARN-OSD method, a widespread experimental analysis was conducted, and the comparison study outcomes demonstrate its superior performance over other state-of-the-art approaches.
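Since the stated role of GOA here is hyperparameter optimization, the sketch below shows that role in its simplest form: a handful of candidate settings for a small recurrent classifier are evaluated on a validation split and the best is kept. The candidate list stands in for a grasshopper swarm, a plain GRU model stands in for the attentive recurrent network, and the token sequences and labels are random placeholders.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.integers(0, 200, size=(64, 20))  # placeholder token-id sequences
y = rng.integers(0, 2, size=(64,))       # placeholder binary labels

def build_model(units, lr):
    # Small recurrent classifier standing in for the attentive recurrent network.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(200, 32),
        tf.keras.layers.GRU(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

best_cfg, best_acc = None, 0.0
for units, lr in [(16, 1e-3), (32, 1e-3), (16, 1e-2)]:  # candidate set stands in for the swarm
    model = build_model(units, lr)
    hist = model.fit(X, y, epochs=2, validation_split=0.25, verbose=0)
    acc = hist.history["val_accuracy"][-1]
    if acc > best_acc:
        best_cfg, best_acc = (units, lr), acc
print("best hyperparameters:", best_cfg, "val accuracy:", best_acc)
```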
The internet has brought together people from diverse cultures, backgrounds and languages, forming a global community. However, this unstoppable growth in online presence and user numbers has introduced several new challenges. The structure of the cyberspace panopticon, the utilization of big data and its manipulation by interest groups, and the emergence of various ethical issues in digital media, such as deceptive content, deepfakes and echo chambers, have become significant concerns. When combined with the characteristics of digital dissemination and rapid global interaction, these factors have paved the way for ethical problems related to the production, proliferation and legitimization of hate speech. Moreover, certain images have gained widespread acceptance as though they were real, despite having no factual basis. The realization that much of the information and imagery considered to be true is, in fact, a virtual illusion has become a commonly discussed point. The alarming increase and growing legitimacy of hate speech within the digital realm, made possible by social media, are leading us toward an unavoidable outcome. This study aims to investigate the reality of hate speech in this context. To achieve this goal, the research question is formulated as follows: "Does social media, particularly Twitter, contain content that includes hate speech, incendiary information, and news?" The study's population is social media, with the sample consisting of hate speech content found on Twitter. The study intends to employ qualitative research methods.