Abstract: The World Health Organization (WHO) refers to the 2019 novel coronavirus epidemic as COVID-19, and it has caused an unprecedented global crisis for many nations. Nearly every country around the globe is now deeply concerned about the effects of the COVID-19 outbreaks, which were initially experienced only by Chinese residents. Most of these nations are now under partial or complete lockdown due to the lack of resources needed to combat the epidemic and concern about overstretched healthcare systems. Each time the pandemic produces new values for its various parameters, research groups strive to understand its behavior and determine when it will end. The prediction models in this research were created using deep neural networks and decision trees (DT). The DT model predicts actual case figures from an initial dataset using a function learned during training. Long short-term memory (LSTM) networks are a special kind of recurrent neural network (RNN) that can capture long-term dependencies; they are helpful because the network can draw on both recent and long-past observations, supporting accurate COVID-19 predictions. We provide a solid foundation for intelligent healthcare by devising an intelligent COVID-19 monitoring framework. We developed a data-analysis methodology, including data preparation and dataset splitting, and examined two popular algorithms, LSTM and decision tree, on the official datasets. Moreover, we analysed the effectiveness of deep learning and machine learning methods for predicting the scale of the pandemic. Key issues and challenges are discussed for future improvement. It is expected that the results these methods provide for the health scenario will be reliable and credible.
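The data-preparation step named above (dataset splitting, plus shaping a case-count series into samples for a sequence model such as an LSTM) can be sketched as follows. The function names and the toy daily-case series are illustrative assumptions, not the authors' code or data.

```python
# A chronological split is used instead of random shuffling so that the model
# is always evaluated on days that come after its training window.

def chronological_split(series, train_frac=0.8):
    """Split an ordered series into train/test without shuffling."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

def make_windows(series, lookback=3):
    """Turn a series into (window, next_value) pairs for sequence models:
    each sample is `lookback` past days plus the day that follows them."""
    return [(series[i:i + lookback], series[i + lookback])
            for i in range(len(series) - lookback)]

daily_cases = [10, 12, 15, 20, 28, 40, 55, 70, 92, 120]  # invented toy data
train, test = chronological_split(daily_cases)
windows = make_windows(train, lookback=3)
print(train, test)
print(windows[0])  # → ([10, 12, 15], 20)
```

The same windowed pairs could feed either of the two compared models: as sequences for the LSTM, or as flat feature vectors for the decision tree.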
Funding: Supported by the National Key Research and Development Program of China (2020AAA0103404), the Beijing Nova Program (20220484077), and the National Natural Science Foundation of China (62073323).
Abstract: Due to ever-growing soccer data collection approaches and progressing artificial intelligence (AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest not only from the professional sports analytics realm but also from the academic AI research community. AI brings game-changing approaches to soccer analytics, while soccer has long served as a typical benchmark for AI research; the combination has become an emerging topic. In this paper, soccer match analytics are treated as a complete observation-orientation-decision-action (OODA) loop. In addition, as in AI frameworks such as reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, soccer analytics in both the real-world and virtual domains are discussed. At the intersection of the OODA loop and the real and virtual domains, available soccer data, including event and tracking data, and diverse orientation and decision-making models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is argued that the paradigms of professional sports analytics and AI research could be combined, and that bridging the gap between the real and virtual domains is quite promising for soccer match analysis and decision-making.
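The OODA loop this abstract describes maps naturally onto the agent-environment interaction cycle of reinforcement learning. The toy environment and policy below are invented stand-ins (not an actual soccer simulator) that only illustrate the observe-orient-decide-act cycle:

```python
class ToyMatchEnv:
    """A stand-in 'virtual match': the state is a score margin the agent nudges."""
    def __init__(self):
        self.margin = 0

    def observe(self):              # Observe: read the current match state
        return self.margin

    def act(self, action):          # Act: action in {-1, 0, +1}
        self.margin += action
        return 1 if self.margin > 0 else 0   # reward: 1 while leading

def policy(observation):            # Orient + Decide: a fixed toy rule
    return 1 if observation <= 0 else 0      # push forward when not leading

env = ToyMatchEnv()
total_reward = 0
for _ in range(10):                 # one full OODA cycle per step
    obs = env.observe()
    action = policy(obs)
    total_reward += env.act(action)
print(total_reward)  # → 10
```

In a learning setting, the fixed `policy` would be replaced by one updated from the rewards, which is the "evolving model" the abstract attributes to interaction with a virtual environment.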
Abstract: As the use of blockchain for digital payments continues to rise, it becomes susceptible to various malicious attacks. Successfully detecting anomalies within blockchain transactions is essential for bolstering trust in digital payments. However, anomaly detection in blockchain transaction data is challenging due to the infrequent occurrence of illicit transactions. Although several studies have been conducted in the field, a limitation persists: the lack of explanations for the models' predictions. This study seeks to overcome this limitation by integrating explainable artificial intelligence (XAI) techniques and anomaly rules into tree-based ensemble classifiers for detecting anomalous Bitcoin transactions. The Shapley additive explanations (SHAP) method is employed to measure the contribution of each feature, and it is compatible with ensemble models. Moreover, we present rules for interpreting whether a Bitcoin transaction is anomalous or not. Additionally, we introduce an under-sampling algorithm named XGBCLUS, designed to balance anomalous and non-anomalous transaction data; this algorithm is compared against other commonly used under-sampling and over-sampling techniques. Finally, the outcomes of various tree-based single classifiers are compared with those of stacking and voting ensemble classifiers. Our experimental results demonstrate that: (i) XGBCLUS enhances true positive rate (TPR) and receiver operating characteristic-area under curve (ROC-AUC) scores compared to state-of-the-art under-sampling and over-sampling techniques, and (ii) our proposed ensemble classifiers outperform traditional single tree-based machine learning classifiers in terms of accuracy, TPR, and false positive rate (FPR) scores.
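The abstract does not specify how XGBCLUS works internally, so the sketch below is a generic random under-sampler that only illustrates the balancing problem it addresses: shrinking the non-anomalous majority class to the size of the anomalous minority before training. All names and the toy data are assumptions.

```python
import random

def undersample(rows, labels, minority=1, seed=42):
    """Keep every minority-class row and an equal-sized random sample
    of the majority class, returning a balanced dataset."""
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    rng = random.Random(seed)                      # fixed seed: reproducible
    kept_majority = rng.sample(majority_idx, k=len(minority_idx))
    keep = sorted(minority_idx + kept_majority)
    return [rows[i] for i in keep], [labels[i] for i in keep]

# 8 normal transactions (label 0) and 2 anomalous ones (label 1)
X = [[v] for v in range(10)]
y = [0] * 8 + [1] * 2
Xb, yb = undersample(X, y)
print(sum(yb), len(yb))  # → 2 4
```

A clustering-guided variant (as the CLUS suffix may hint) would choose *which* majority rows to keep rather than sampling uniformly, but that choice is not described in the abstract.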
Abstract: Seismic engineering, a critical field with significant societal implications, often presents communication challenges due to the complexity of its concepts. This paper explores the role of artificial intelligence (AI), specifically OpenAI's ChatGPT, in bridging these communication gaps. The study delves into how AI can simplify intricate seismic engineering terminology and concepts, fostering enhanced understanding among students, professionals, and policymakers. It also presents several intuitive case studies to demonstrate the practical application of ChatGPT in seismic engineering. Further, the study contemplates the implications of AI, highlighting its potential to transform decision-making processes, augment education, and increase public engagement. While acknowledging the promising future of AI in seismic engineering, the study also considers the inherent challenges and limitations, including data privacy and the potential oversimplification of content. It advocates for collaborative efforts between AI researchers and seismic experts to overcome these obstacles and enhance the utility of AI in the field. This exploration provides an insightful perspective on the future of seismic engineering, which could be closely intertwined with the evolution of AI.
Abstract: In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based, feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used to train the machine learning models and to generate SHAP values from them. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis, and this integrated dataset is used to re-train the machine learning models. The new SHAP values generated from these models help validate the contributions of feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, and a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also reported.
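The association-rule step, identifying frequently co-occurring attribute combinations, can be sketched as an Apriori-style support count over pairs. The clinical attribute names, the toy records, and the support threshold below are illustrative assumptions; the paper's actual mining procedure and dataset are not reproduced.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(records, min_support=0.5):
    """Return attribute pairs whose support (fraction of records that
    contain both attributes) meets min_support."""
    counts = Counter()
    for rec in records:
        for pair in combinations(sorted(rec), 2):
            counts[pair] += 1
    n = len(records)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# each record is the set of attributes present for one (invented) patient
records = [
    {"nodule", "elevated_tsh", "family_history"},
    {"nodule", "elevated_tsh"},
    {"nodule", "calcification"},
    {"elevated_tsh", "family_history"},
]
print(frequent_pairs(records))
```

Pairs that clear the support threshold would be the "dominant feature sets" the abstract describes, to be appended to the dataset before re-training and re-computing SHAP values.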
Funding: Supported by the National Science Foundation under Grant Nos. 2303578, 2303579, 0527699, 0838654, and 1212790, and by an Early-Career Research Fellowship from the Gulf Research Program of the National Academies of Sciences, Engineering, and Medicine.
Abstract: Facing the escalating effects of climate change, it is critical to improve the prediction and understanding of the hurricane evacuation decisions made by households in order to enhance emergency management. Studies in this area have often relied on psychology-driven linear models, which frequently exhibit limitations in practice. The present study proposes a novel interpretable machine learning approach that predicts household-level evacuation decisions from easily accessible demographic and resource-related predictors, in contrast to existing models that rely mainly on psychological factors. An enhanced logistic regression model (that is, an interpretable machine learning approach) was developed for accurate predictions by automatically accounting for nonlinearities and interactions (that is, univariate and bivariate threshold effects). Specifically, nonlinearity and interaction detection were enabled by low-depth decision trees, which offer a transparent model structure and robustness. A survey dataset collected in the aftermath of Hurricanes Katrina and Rita, two of the most intense tropical storms of the last two decades, was employed to test the new methodology. The findings show that, when predicting households' evacuation decisions, the enhanced logistic regression model outperformed previous linear models in terms of both model fit and predictive capability. This outcome suggests that the proposed methodology could provide a new tool and framework for emergency management authorities to predict evacuation traffic demands in a timely and accurate manner.
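The univariate threshold-effect idea can be sketched with a depth-1 decision tree (a stump): scan candidate cut points on one predictor, keep the split that best separates evacuees from non-evacuees, and turn it into an indicator feature for the logistic regression. The predictor, labels, and error criterion below are invented for illustration; the paper's actual detection procedure may differ.

```python
def best_threshold(x, y):
    """Return the cut point on x that minimizes misclassification of y,
    predicting 1 whenever x >= threshold (a depth-1 decision tree)."""
    best = (None, len(y) + 1)
    for t in sorted(set(x)):
        errors = sum(int((xi >= t) != yi) for xi, yi in zip(x, y))
        if errors < best[1]:
            best = (t, errors)
    return best

# toy predictor: household income (in $1000s); label: evacuated or not
income = [10, 15, 20, 40, 55, 60, 80, 90]
evacuated = [0, 0, 0, 1, 1, 1, 1, 1]

t, errs = best_threshold(income, evacuated)
indicator = [int(v >= t) for v in income]  # new binary feature for the logit model
print(t, errs)  # → 40 0
```

Bivariate interaction effects would follow the same pattern with a depth-2 tree: a second threshold searched within each side of the first split, yielding a product-of-indicators feature.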
Funding: Researchers Supporting Project number (RSP2024R281), King Saud University, Riyadh, Saudi Arabia.
Abstract: In the rapidly evolving landscape of healthcare, the integration of artificial intelligence (AI) and natural language processing (NLP) holds immense promise for revolutionizing data analytics and decision-making processes. Current techniques for personalized medicine, disease diagnosis, treatment recommendations, and resource optimization in the Internet of Medical Things (IoMT) vary widely, including rule-based systems, machine learning algorithms, and data-driven approaches. However, many of these techniques face limitations in accuracy, scalability, and adaptability to complex clinical scenarios. This study investigates the synergistic potential of AI-driven optimization techniques and NLP applications in the context of the IoMT. By integrating advanced data analytics methodologies with NLP capabilities, we propose a comprehensive framework designed to enhance personalized medicine, streamline disease diagnosis, provide treatment recommendations, and optimize resource allocation. Using a systematic methodology, data were collected from open data repositories and then preprocessed through data cleaning, missing-value imputation, feature engineering, and data normalization and scaling. Optimization algorithms such as gradient descent, Adam, and stochastic gradient descent were employed in the framework to enhance model performance. These were integrated with NLP processes, including text preprocessing, tokenization, and sentiment analysis, to facilitate comprehensive analysis of the vast streams of data generated by IoMT devices and to provide actionable insights. Lastly, through a synthesis of existing research and real-world case studies, we demonstrate the impact of AI-NLP fusion on healthcare outcomes and operational efficiency. The simulation produced compelling results, achieving an average diagnostic accuracy of 93.5% for the given scenarios, and excelled even further in instances involving rare diseases, achieving an accuracy rate of 98%. With regard to patient-specific treatment plans, it generated them w…
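Of the optimizers the framework names, the Adam update rule can be shown compactly. This is a generic sketch with the commonly used default hyperparameters, applied to a toy one-parameter objective f(w) = (w - 3)^2; it illustrates the optimizer itself, not the paper's pipeline.

```python
import math

def adam_minimize(grad, w=0.0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=1000):
    """Minimize a 1-D objective given its gradient, using Adam."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g          # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# gradient of f(w) = (w - 3)^2 is 2(w - 3); the minimum is at w = 3
w_star = adam_minimize(lambda w: 2 * (w - 3))
print(round(w_star, 3))
```

The per-coordinate scaling by the second-moment estimate is what distinguishes Adam from plain gradient descent and makes it less sensitive to the gradient scale of each feature.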
Funding: National Natural Science Foundation of China (No. 61906197).
Abstract: With the breakthrough of AlphaGo, human-computer gaming AI has seen explosive growth, attracting more and more researchers all over the world. As a recognized standard for testing artificial intelligence, various human-computer gaming AI systems (AIs) have been developed, such as Libratus, OpenAI Five, and AlphaStar, which have beaten professional human players. The rapid development of human-computer gaming AIs marks a big step for decision-making intelligence, and it seems that current techniques can handle very complex human-computer games. So one natural question arises: what are the possible challenges of current techniques in human-computer gaming, and what are the future trends? To answer this question, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs, and real-time strategy game AIs. Through this survey, we 1) compare the main difficulties among different kinds of games and the corresponding techniques used to achieve professional human-level AIs; 2) summarize the mainstream frameworks and techniques that can be relied on for developing AIs for complex human-computer games; 3) raise the challenges or drawbacks of current techniques in the successful AIs; and 4) point out future trends in human-computer gaming AIs. Finally, we hope that this brief review can provide an introduction for beginners and inspire insight for researchers in the field of AI in human-computer gaming.
Abstract: Energy resilience is about ensuring that businesses and end-use consumers have a reliable, regular supply of energy and contingency measures in place in the event of a power failure, so that power such as electricity for daily needs comes from an uninterrupted source of energy, whether renewable or nonrenewable. Causes of resilience issues include power surges, weather, natural disasters, man-made accidents, and even equipment failure. Human operational error can also bring the grid power supply down and should be factored into resilience planning. As the energy landscape undergoes a radical transformation, from a world of large, centralized coal plants to a decentralized energy world made up of small-scale gas-fired production and renewables, the stability of electricity supply will begin to affect energy pricing, and businesses must plan for this change. The challenges that the growth of renewables brings to the grid in terms of intermittency mean that transmission and distribution costs consume an increasing proportion of bills. With the progress of artificial intelligence (AI) and the integration of this technology in recent decades, we are improving the resiliency of energy flow and preventing unexpected interruptions to it. Ensuring that a business is energy resilient helps insulate it against price increases or fluctuations in supply, which is critical to maintaining operations and reducing commercial risk. This paper covers this issue in the form of a short Technical Memorandum (TM).
Abstract: This research explores the increasing importance of Artificial Intelligence (AI) and Machine Learning (ML) in relation to smart cities. It discusses the ability of AI and ML to revolutionize various aspects of urban environments, including infrastructure, governance, public safety, and sustainability. The research presents the definition and characteristics of smart cities, highlighting the key components and technologies driving smart-city initiatives. The methodology employed in this study involved a comprehensive review of relevant literature, research papers, and reports on AI and ML in smart cities. Various sources were consulted to gather information on the integration of AI and ML technologies across aspects of smart cities, including infrastructure optimization, public safety enhancement, and citizen services improvement. The findings suggest that AI and ML technologies enable data-driven decision-making, predictive analytics, and optimization in smart city development. They are vital to developing transport infrastructure, optimizing energy distribution, improving public safety, streamlining governance, and transforming healthcare services. However, ethical and privacy considerations, as well as technical challenges, must be addressed to guarantee the responsible use of AI and ML in smart cities. The study concludes by discussing the challenges and future directions of AI and ML in shaping urban environments, highlighting the importance of collaborative efforts and responsible implementation. The findings highlight the transformative potential of AI and ML in optimizing resource utilization, enhancing citizen services, and creating more sustainable and resilient smart cities. Future studies should concentrate on addressing technical limitations, creating robust policy frameworks, and fostering fairness, accountability, and openness in the use of AI and ML technologies in smart cities.
Funding: The authors are grateful to the Taif University Researchers Supporting Project (Number TURSP-2020/36), Taif University, Taif, Saudi Arabia.
Abstract: The World Health Organization (WHO) refers to the 2019 novel coronavirus epidemic as COVID-19, and it has caused an unprecedented global crisis for many nations. Nearly every country around the globe is now deeply concerned about the effects of the COVID-19 outbreaks, which were previously experienced only by Chinese residents. Most of these nations are under a partial or complete state of lockdown due to the lack of resources needed to combat the epidemic and the concern about overstretched healthcare systems. Every time the pandemic surprises them with new values for various parameters, the connected research groups strive to understand its behavior and determine when it will stop. The prediction models in this research were created using deep neural networks and Decision Trees (DT). The DT approach predicts actual figures from an initial dataset using a function trained on the model. Long short-term memory networks (LSTMs) are a special kind of recurrent neural network (RNN) that can learn long-term dependencies; this is helpful because the network can recall both recent and past events, resulting in accurate predictions for COVID-19. We provide a solid foundation for intelligent healthcare by devising an intelligent COVID-19 monitoring framework. We developed a data analysis methodology, including data preparation and dataset splitting, and examined two popular algorithms, LSTM and Decision Tree, on the official datasets. Moreover, we analysed the effectiveness of deep learning and machine learning methods for predicting the scale of the pandemic. Key issues and challenges are discussed for future improvement. The results these methods provide for the health scenario are expected to be reliable and credible.
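The data-preparation and splitting pipeline described above can be sketched in outline. This is a minimal illustrative sketch, not the authors' implementation: the synthetic case curve, the 7-day lag window, the 80/20 chronological split, and the use of scikit-learn's `DecisionTreeRegressor` are all assumptions made for the example.

```python
# Hedged sketch of lag-feature case-count forecasting with a Decision Tree,
# mirroring the abstract's DT-vs-LSTM methodology. All data and parameters
# here are illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def make_lag_features(series, window=7):
    """Turn a 1-D series into (X, y) pairs: `window` past values -> next value."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# Synthetic stand-in for a daily COVID-19 case curve (logistic growth + noise).
rng = np.random.default_rng(0)
t = np.arange(200)
cases = 10000 / (1 + np.exp(-(t - 100) / 15)) + rng.normal(0, 50, t.size)

X, y = make_lag_features(cases, window=7)

# Chronological split (no shuffling) so the model never trains on the future.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)
mae = np.mean(np.abs(preds - y_test))
print(f"test MAE: {mae:.1f} cases")
```

The same lag-feature matrix could be reshaped to `(samples, window, 1)` and fed to an LSTM for the deep-learning side of the comparison; the chronological split is the key methodological point, since shuffled splits would leak future information into training.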