Funding: Supported by the National Natural Science Foundation of China (Nos. 61772111 and 72010107002).
Abstract: Human adoption of artificial intelligence (AI) techniques is largely hampered by the increasing complexity and opacity of AI development. Explainable AI (XAI) techniques, with various methods and tools, have been developed to bridge the gap between high-performance black-box AI models and human understanding. However, the current adoption of XAI techniques still lacks "human-centered" guidance for designing solutions that meet different stakeholders' needs in XAI practice. By reviewing existing research, we first summarize a human-centered demand framework that categorizes stakeholders into five key roles with specific demands, and then extract six commonly used human-centered XAI evaluation measures that help validate the effect of XAI. In addition, a taxonomy of XAI methods is developed for visual computing, with an analysis of method properties. Holding clearer human demands and XAI methods in mind, we take a medical image diagnosis scenario as an example to present an overview of how extant XAI approaches for visual computing fulfil stakeholders' human-centered demands in practice, and we check the availability of open-source XAI tools for stakeholders' use. This survey provides further guidance for matching diverse human demands with appropriate XAI methods or tools in specific applications, along with a summary of the main challenges and future work toward human-centered XAI in practice.
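To make the visual-computing side concrete, below is a minimal sketch of one widely used post-hoc XAI method, a vanilla gradient saliency map over an image classifier. It only illustrates the kind of method such a taxonomy covers; the ResNet model and random input are placeholders, not the survey's medical-imaging setup or any specific method evaluated in the paper.

# Minimal sketch of a gradient saliency map, a common post-hoc XAI method
# for visual computing. Model and input are placeholders for illustration.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # stand-in for a diagnosis model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

logits = model(image)
target_class = logits.argmax(dim=1).item()

# Backpropagate the score of the predicted class to the input pixels.
score = logits[0, target_class]
score.backward()

# The saliency map is the per-pixel maximum absolute gradient across channels;
# large values mark pixels that most influence the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)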
Abstract: Generative AI models for music, and for the arts in general, are increasingly complex and hard to understand. The field of explainable AI (XAI) seeks to make complex and opaque AI models such as neural networks more understandable to people. One approach to making generative AI models more understandable is to impose a small number of semantically meaningful attributes on them. This paper contributes a systematic examination of the impact that different combinations of variational auto-encoder models (measureVAE and adversarialVAE), configurations of the latent space in the AI model (from 4 to 256 latent dimensions), and training datasets (Irish folk, Turkish folk, classical, and pop) have on music generation performance when 2 or 4 meaningful musical attributes are imposed on the generative model. To date, there have been no systematic comparisons of such models at this level of combinatorial detail. Our findings show that measureVAE has better reconstruction performance than adversarialVAE, which in turn has better musical attribute independence. Results demonstrate that measureVAE was able to generate music across genres with interpretable musical dimensions of control, and performs best with low-complexity music such as pop and rock. We recommend a 32- or 64-dimensional latent space as optimal for 4 regularised dimensions when using measureVAE to generate music across genres. Our results are the first detailed comparisons of configurations of state-of-the-art generative AI models for music and can be used to help select and configure AI models, musical features, and datasets for more understandable music generation.
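As an illustration of how semantically meaningful attributes can be imposed on a latent space, below is a minimal sketch of an attribute-regularization loss that ties one latent dimension to one musical attribute (e.g., note density). It is a generic formulation under our own assumptions, not the exact loss used by measureVAE or adversarialVAE; names such as attr_values and reg_dim are placeholders.

# Minimal sketch of attribute regularization on one VAE latent dimension.
# Illustrative only; not the authors' exact formulation.
import torch
import torch.nn.functional as F

def attribute_regularization_loss(z: torch.Tensor,
                                  attr_values: torch.Tensor,
                                  reg_dim: int,
                                  delta: float = 10.0) -> torch.Tensor:
    """Encourage latent dimension `reg_dim` to order a batch the same way the
    target attribute does, so moving along that dimension changes the
    attribute monotonically."""
    z_dim = z[:, reg_dim]                              # (batch,)
    # Pairwise differences within the batch for the latent dim and attribute.
    z_diff = z_dim.unsqueeze(1) - z_dim.unsqueeze(0)   # (batch, batch)
    a_diff = attr_values.unsqueeze(1) - attr_values.unsqueeze(0)
    # Penalize pairs whose latent ordering disagrees with the attribute ordering.
    return F.l1_loss(torch.tanh(delta * z_diff), torch.sign(a_diff))

# Example: a batch of 8 latent codes with 32 dimensions, regularizing
# dimension 0 against a per-example note-density value.
z = torch.randn(8, 32, requires_grad=True)
note_density = torch.rand(8)
loss = attribute_regularization_loss(z, note_density, reg_dim=0)
loss.backward()
print(loss.item())

Applied during training alongside the usual reconstruction and KL terms, a loss of this form pushes movement along the regularised dimension to track the chosen attribute, which is what makes that dimension an interpretable control for generation.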