Funding: the National Natural Science Foundation of China (Grant No. 60673157); the Ministry of Education Key Project (Grant No. 105071); SEC E-Institute: Shanghai High Institutions Grid (Grant No. 200301)
Abstract: A trust mechanism-based task scheduling model is presented. Drawing on trust relationship models among people in society, trust relationships are built among Grid nodes, and the trustworthiness of each node is evaluated using the Bayes method. By integrating node trustworthiness into the Dynamic Level Scheduling (DLS) algorithm, the Trust-Dynamic Level Scheduling (Trust-DLS) algorithm is proposed. Theoretical analysis and simulations show that Trust-DLS efficiently meets the trust requirements of Grid tasks at little additional time cost, ensuring that tasks execute securely in a Grid environment.
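The Bayes method for node trustworthiness can be pictured with the standard Beta-Bernoulli update over observed interaction outcomes. The sketch below is illustrative only: the names (`NodeTrust`, `trust_adjusted_level`), the uniform Beta(1, 1) prior, and the multiplicative coupling with a DLS scheduling level are assumptions, not details taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class NodeTrust:
    """Beta-Bernoulli trust record for one Grid node (hypothetical sketch).

    alpha counts successful interactions + 1, beta counts failures + 1,
    i.e. a Beta(alpha, beta) posterior starting from a uniform prior.
    """
    alpha: float = 1.0
    beta: float = 1.0

    def update(self, success: bool) -> None:
        # Bayesian update: each observed interaction shifts the posterior.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trustworthiness(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)


def trust_adjusted_level(exec_level: float, trust: NodeTrust) -> float:
    """Scale a scheduling level by node trustworthiness, so that less
    trusted nodes are penalised when tasks are assigned (assumed coupling)."""
    return exec_level * trust.trustworthiness
```

For example, a node with 8 successful and 2 failed interactions ends at Beta(9, 3), i.e. a trustworthiness of 0.75, which then discounts its scheduling level accordingly.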
Funding: the Program for New Century Excellent Talents in University (NCET-06-0762); the Specialized Research Fund for the Doctoral Program of Higher Education (20060611009); the Natural Science Foundations of Chongqing (CSTC2007BA2003, CSTC2006BB2003)
Abstract: To measure the trustworthiness of Internetware, we need to understand the existing problems and design appropriate trustworthiness metrics. The development and running system of Internetware is analyzed in terms of its process, keystones, methods, and techniques. Based on the main factors affecting Internetware trustworthiness, two models are employed to evaluate its internal and external trustworthiness, respectively: the trustworthy metrics hierarchy model of components (TMHMC), with its computing steps, and the local-global trustworthy metrics model of platform (LGTMMP), with its algorithm. Both benefit the development of Internetware.
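One level of a metrics hierarchy such as TMHMC can be pictured as a weighted aggregation of sub-metric scores into a parent score. The function below is a hypothetical sketch under that assumption, not the paper's actual computing steps:

```python
def aggregate_trust(weights, scores):
    """Weighted aggregation of sub-metric trust scores in [0, 1].

    A minimal stand-in for one level of a trustworthy-metrics hierarchy:
    the parent trust score is the normalised weighted sum of its
    sub-metric scores (weights need not be pre-normalised).
    """
    if not weights or len(weights) != len(scores):
        raise ValueError("weights and scores must be non-empty and equal length")
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total
```

Applying the same aggregation recursively from leaf metrics up to the root yields a single trustworthiness score for a component or platform.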
Funding: supported by the National Natural Science Foundation of China (Grant Numbers: 62372083, 62072074, 62076054, 62027827, 62002047); the Sichuan Provincial Science and Technology Innovation Platform and Talent Program (Grant Number: 2022JDJQ0039); the Sichuan Provincial Science and Technology Support Program (Grant Numbers: 2022YFQ0045, 2022YFS0220, 2021YFG0131, 2023YFS0020, 2023YFS0197, 2023YFG0148); the CCF-Baidu Open Fund (Grant Number: 202312)
Abstract: In intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, the gold standard of cancer diagnosis, which relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, integrating interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet Framework. The MSBE enhances the network's feature-extraction capability by allowing hyperparameters to configure the number of branches and modules. The CrossLinkNet Framework, a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but also adapts to various tumor segmentation tasks and scenarios by swapping in different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, an essential consideration for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
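To illustrate the general idea of a configurable number of branches extracting features at different scales, here is a deliberately simplified, framework-free sketch on a 1-D signal. It is not the authors' MSBE, which operates on images inside a deep network; only the "multiple branches, multiple scales, concatenated outputs" pattern is borrowed:

```python
def multi_scale_features(signal, scales=(1, 2, 4)):
    """Toy multi-branch encoder: each branch averages the signal over
    windows of a different size (scale), and the branch outputs are
    concatenated. The scales tuple is the hyperparameter controlling
    the number of branches, echoing a configurable branch count."""
    features = []
    for scale in scales:
        branch = [
            sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)
        ]
        features.extend(branch)
    return features
```

Small scales preserve fine detail while large scales summarise coarse structure, which is why concatenating them gives a richer representation than any single scale alone.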
Funding: supported in part by the National Natural Science Foundation of China (52105116); the Science Center for Gas Turbine Project (P2022-DC-I-003-001); and the Royal Society award (IEC\NSFC\223294) to Professor Asoke K. Nandi
Abstract: Recently, intelligent fault diagnosis based on deep learning has been extensively investigated, exhibiting state-of-the-art performance. However, deep learning models are often not truly trusted by users because of their "black box" lack of interpretability, which limits their deployment in safety-critical applications. A trusted fault diagnosis system requires that faults be accurately diagnosed in most cases and that a human in the decision-making loop can be brought in to handle abnormal situations when the model fails. In this paper, we explore a simplified method, called SAEU, for quantifying both aleatoric and epistemic uncertainty in deterministic networks. In SAEU, a multivariate Gaussian distribution is employed in the deep architecture to compensate for the complexity and limited applicability of Bayesian neural networks. Based on SAEU, we propose a unified uncertainty-aware deep learning framework (UU-DLF) to realize the grand vision of trustworthy fault diagnosis. Moreover, UU-DLF effectively embodies the idea of "humans in the loop": it not only allows manual intervention in abnormal situations of diagnostic models, but also enables corresponding improvements to existing models based on traceability analysis. Finally, two experiments, on a gearbox and on aero-engine bevel gears, demonstrate the effectiveness of UU-DLF and explore the reasons behind it.
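The split of predictive uncertainty into aleatoric and epistemic parts can be illustrated with the common Gaussian-mixture decomposition over an ensemble of (mean, variance) predictions. This is a generic sketch standing in for the paper's SAEU scheme, whose exact formulation is not reproduced here:

```python
def decompose_uncertainty(predictions):
    """Split total predictive uncertainty into aleatoric and epistemic
    parts from an ensemble of (mean, variance) predictions.

    Aleatoric = average of the predicted variances (irreducible data noise).
    Epistemic = variance of the predicted means (model disagreement).
    """
    n = len(predictions)
    means = [m for m, _ in predictions]
    variances = [v for _, v in predictions]
    aleatoric = sum(variances) / n
    mean_of_means = sum(means) / n
    epistemic = sum((m - mean_of_means) ** 2 for m in means) / n
    return aleatoric, epistemic
```

High epistemic uncertainty flags inputs the model has not learned well, which is precisely when a human-in-the-loop system should defer to manual intervention.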
Abstract: As AI technology continues to evolve, it plays an increasingly significant role in everyday life and social governance. However, the frequent occurrence of issues such as algorithmic bias, privacy breaches, and data leaks has led to a crisis of public trust in AI, presenting numerous challenges to social governance. Establishing technical trust in AI, reducing uncertainties in AI development, and enhancing its effectiveness in social governance have become a consensus among policymakers and researchers. By comparing different types of AI, the paper proposes and conceptualizes the idea of trustworthy AI, then discusses its characteristics and its value and impact pathways in social governance. The analysis addresses how mismatches in technological trust can affect social stability and the advancement of AI strategies. The paper highlights the potential of trustworthy AI to improve the efficiency of social governance and to solve complex social problems.
Funding: European Commission, Joint Research Center, Grant/Award Number: HUMAINT; Ministerio de Ciencia e Innovación, Grant/Award Number: PID2020-114924RB-I00; Comunidad de Madrid, Grant/Award Number: S2018/EMT-4362 SEGVAUTO 4.0-CM
Abstract: Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches to multi-modal motion prediction are based on complex machine learning systems with limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps in current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for assessing spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results under the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
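A noise-injection robustness test of the kind described can be sketched with a standard displacement-error metric and Gaussian jitter on the observed trajectory. The metric choice (ADE) and the isotropic noise model are assumptions for illustration, not necessarily those used by the authors:

```python
import random


def ade(pred, truth):
    """Average Displacement Error between two 2-D trajectories
    (mean Euclidean distance over corresponding timesteps)."""
    return sum(
        ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
        for (px, py), (tx, ty) in zip(pred, truth)
    ) / len(truth)


def perturb(trajectory, sigma, rng):
    """Simulate perception noise by adding Gaussian jitter to each
    observed (x, y) point of the input trajectory."""
    return [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
            for x, y in trajectory]
```

Spatial robustness is then the degradation of ADE when the model is fed `perturb(history, sigma, rng)` instead of the clean history, swept over increasing `sigma`.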
Funding: supported by the National Science Foundation under Grant No. 2019609 and the National Aeronautics and Space Administration under Grant No. 80NSSC21M0028
Abstract: Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium for accomplishing explainability in AI-based systems. We also discuss patterns in recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
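As a toy illustration of provenance as a medium for explainability, the sketch below records, for each prediction, which model ran on which inputs and chains the records by hash so the lineage is tamper-evident. The record schema is hypothetical and deliberately simpler than real provenance standards such as W3C PROV:

```python
import hashlib
import json


def provenance_record(model_id, inputs, output, parent_hash=""):
    """Build a minimal, hash-chained provenance entry for one model
    prediction: which model ran, on which inputs, producing what output,
    linked to the preceding record via parent_hash."""
    body = {
        "model": model_id,
        "inputs": inputs,
        "output": output,
        "parent": parent_hash,
    }
    # Canonical JSON makes the digest deterministic for identical records.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "hash": digest}
```

Walking the `parent` links back from any output reconstructs the full chain of models and inputs that produced it, which is the kind of lineage-based explanation the review discusses.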
Abstract: The application of blockchain beyond cryptocurrencies has received increasing attention from industry and scholars alike. Given predicted looming food crises, some of the most impactful deployments of blockchains are likely to concern food supply chains. This study outlines how blockchain adoption can result in positive affordances in the food supply chain. Using Q-methodology, it explores the current status of the agri-food supply chain and how blockchain technology could address existing challenges. This theorization leads to the proposition of the 3TIC value-driver framework for determining the enabling affordances of blockchain that would increase shared value for stakeholders. First, we propose a framework based on the most promising features of blockchain technology for overcoming current challenges in the agri-food industry. Our value-driver framework is driven by the Q-study findings of respondents closely associated with the agri-food supply chain. It can give supply chain stakeholders a clear perception of blockchain affordances and serve as a guideline for selecting the features of the technology that match an organization's capabilities, core competencies, goals, and limitations. It could therefore assist top-level decision-makers in systematically evaluating which parts of the organization to focus on and in improving the infrastructure for successful blockchain implementation along the agri-food supply chain. We conclude by noting significant challenges that must be carefully addressed to adopt blockchain technology successfully.
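The tamper-evidence affordance that underpins blockchain traceability in supply chains can be illustrated with a minimal hash-chained ledger. This toy sketch omits consensus, networking, and smart contracts, and the event fields are invented for the example:

```python
import hashlib
import json


def append_block(chain, event):
    """Append a tamper-evident event (e.g. 'shipment received') to a
    toy blockchain-style ledger. Each block hashes its payload together
    with the previous block's hash, so any later edit to an earlier
    event invalidates every block after it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": block_hash})
    return chain


def verify(chain):
    """Recompute every hash; return True iff the ledger is untampered."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

It is this property — that no participant can quietly rewrite an earlier record — that makes blockchain attractive for agri-food traceability across mutually distrusting stakeholders.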