Funding: support from the National Science Foundation under Grants 1443894, 1560437, and 1731017; the Louisiana Board of Regents under Grant LEQSF(2017-20)-RD-A-29; and a research gift from Intel Corporation
Abstract: The ability to intelligently utilize resources to meet the needs of growing diversity in services and user behavior marks the future of wireless communication systems. Intelligent wireless communication aims at enabling the system to perceive and assess the available resources, to autonomously learn to adapt to the perceived wireless environment, and to reconfigure its operating mode to maximize the utility of the available resources. Perception capability and reconfigurability are the essential features of cognitive radio, while modern machine learning techniques show great potential for system adaptation. In this paper, we discuss the development of cognitive radio technology and machine learning techniques and emphasize their roles in improving the spectrum and energy utilization of wireless communication systems. We describe the state of the art of the relevant techniques, covering spectrum sensing and access approaches and powerful machine learning algorithms that enable spectrum- and energy-efficient communications in dynamic wireless environments. We also present practical applications of these techniques and identify further research challenges in cognitive radio and machine learning as applied to existing and future wireless communication systems.
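To make the spectrum sensing step mentioned in this abstract concrete, the following is a minimal, illustrative Python sketch of a classical energy detector; it is not a method taken from the paper, and the signal parameters and test scenario are assumptions chosen for illustration. The receiver compares the energy of a window of samples against a threshold set for a target false-alarm probability and declares the band occupied when the threshold is exceeded.

import numpy as np
from scipy.stats import norm

def energy_detector(samples, noise_power, pfa=0.01):
    # Classical energy detection for spectrum sensing: declare the band
    # occupied if the measured energy exceeds a threshold chosen for a
    # target false-alarm probability (Gaussian approximation, large n).
    n = len(samples)
    energy = np.sum(samples ** 2)
    threshold = noise_power * (n + np.sqrt(2 * n) * norm.ppf(1 - pfa))
    return energy > threshold

# Toy scenario: noise-only band vs. band carrying a weak primary-user tone
rng = np.random.default_rng(0)
n, noise_power = 1024, 1.0
noise = rng.normal(scale=np.sqrt(noise_power), size=n)
busy = 0.7 * np.cos(2 * np.pi * 0.1 * np.arange(n)) + noise
print(energy_detector(noise, noise_power))  # expected: False (band idle)
print(energy_detector(busy, noise_power))   # expected: True  (band occupied)

Learning-based sensing approaches of the kind surveyed in the paper typically replace this fixed analytical threshold with a classifier trained on features of the received signal.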
Funding: supported by Medical Research Council (MRC) grant MR/K004360/1 to SID and a Marie Curie COFUND EU-UK Research Fellowship to SID
Abstract: Neuroinformatics is a fascinating research field that applies computational models and analytical tools to high-dimensional experimental neuroscience data for a better understanding of how the brain functions, or dysfunctions in brain diseases. Neuroinformaticians work at the intersection of neuroscience and informatics, supporting the integration of the various sub-disciplines (behavioural neuroscience, genetics, cognitive psychology, etc.) working on brain research. They serve as the pathway of information exchange between informaticians and clinicians, enabling a better understanding of the outcomes of computational models and the clinical interpretation of the analyses. Machine learning is one of the most significant computational developments of the last decade, giving tools to neuroinformaticians and, ultimately, to radiologists and clinicians for automatic and early diagnosis and prognosis of brain diseases. The random forest (RF) algorithm has been successfully applied to high-dimensional neuroimaging data for feature reduction, and also to classify the clinical label of a subject using single- or multi-modal neuroimaging datasets. Our aim was to review the studies in which RF was applied to predict Alzheimer's disease (AD) and the conversion from mild cognitive impairment (MCI), and to examine its robustness to overfitting and outliers and its handling of non-linear data. Finally, we describe our RF-based model, which earned us 1st place in an international challenge for the automated prediction of MCI from MRI data.
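As an illustration of the two RF uses described in this abstract, feature reduction and subject-level classification, here is a minimal scikit-learn sketch on synthetic data. It is not the authors' winning pipeline; the feature counts, toy labels, and planted group effect are assumptions standing in for real MRI-derived measurements.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_features = 200, 500              # e.g. regional volumes / cortical thicknesses
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)        # toy labels: 0 = stable MCI, 1 = converter
X[y == 1, :10] += 0.8                          # plant group differences in 10 features

# Subject-level classification, evaluated with cross-validation
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("5-fold CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# Feature reduction: rank features by impurity-based importance and keep the top k
rf.fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:20]
print("Top-ranked feature indices:", sorted(top_k.tolist()))

In a real neuroimaging study the importance-based ranking would typically be computed inside the cross-validation loop to avoid leaking information from the test subjects.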
Abstract: Robotic Process Automation (RPA) has emerged as a transformative technology that has revolutionized local business processes by automating repetitive tasks and streamlining operations. This research focuses on the RPA environment and its future features in order to elaborate projected policies based on comprehensive experience with the technology. Industries have long looked for IT solutions to fully scale their companies, improve business flexibility, customer satisfaction, productivity, and accuracy, and reduce costs; with its quick scalability, RPA has emerged as an advanced technology with exceptional performance. The study emphasizes future trends and foresees the evolution of RPA through the integration of artificial intelligence, machine learning, and cognitive automation into RPA frameworks. Moreover, it analyzes the technical constraints, including scalability, security issues, and interoperability, while investigating the regulatory and ethical considerations that are essential to the responsible use of RPA. By providing a comprehensive analysis of RPA and its emerging trends, this study aims to offer valuable insights into its benefits for industrial performance, address the gaps observed, and guide strategic decisions and the future implementation of RPA.
Abstract: Artificial intelligence, often referred to as AI, is a branch of computer science focused on developing systems that exhibit intelligent behavior. Broadly speaking, AI researchers aim to develop technologies that can think and act in a way that mimics human cognition and decision-making [1]. The foundations of AI can be traced back to early philosophical inquiries into the nature of intelligence and thinking. However, AI is generally considered to have emerged as a formal field of study in the 1940s and 1950s. Pioneering computer scientists of the time theorized that it might be possible to extend basic computer programming concepts using logic and reasoning to develop machines capable of “thinking” like humans. Over time, the definition and goals of AI have evolved. Some theorists argued for a narrower focus on developing computing systems able to efficiently solve problems, while others aimed for a closer replication of human intelligence. Today, AI encompasses a diverse set of techniques used to enable intelligent behavior in machines. Core disciplines that contribute to modern AI research include computer science, mathematics, statistics, linguistics, psychology and cognitive science, and neuroscience. Significant AI approaches used today involve statistical classification models, machine learning, and natural language processing. Classification methods are widely applicable to problems in various domains, such as healthcare, where they can inform diagnostic or treatment decisions based on patterns in data. Dean and Goldreich (1998) define ML as an approach through which a computer learns a model by itself from the data provided, without being given any specification of the sort of model to build. Such models can then predict values for inputs that differ from those used in training. NLP looks at two interrelated concerns: the task of training computers to understand human languages, and the fact that, since natural languages are so complex, they lend themselves very well to serving a number
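To illustrate the "learn a model from data, then predict unseen cases" idea and the healthcare-style classification use mentioned above, here is a minimal scikit-learn sketch. The dataset and classifier are illustrative choices, not tools prescribed by the text, and this is in no way a diagnostic system.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tumour measurements with benign/malignant labels, used purely as a stand-in
# for "patterns in data" that can inform a diagnostic-style decision.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is learned from the training data alone ...
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# ... and then predicts labels for cases it never saw during training.
print("Held-out accuracy:", clf.score(X_test, y_test))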