Abstract: Group LASSO for Change-points in Functional Time Series. Chang Xiong CHI, Rong Mao ZHANG. Multiple change-point estimation for functional time series is studied in this paper. The change-point problem is first transformed into a high-dimensional sparse estimation problem via basis functions. The group least absolute shrinkage and selection operator (LASSO) is then applied to estimate the number and the locations of possible change points. However, the group LASSO (GLASSO) always overestimates the number of true change points.
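A minimal sketch of the two steps the abstract describes, under stated assumptions: each curve is projected onto a small Fourier basis (the paper does not specify the basis), and the group-LASSO step is solved with scikit-learn's MultiTaskLasso over a cumulative design, so that non-zero coefficient rows mark candidate change points. The basis size, penalty weight, and simulated data are illustrative, not the paper's setup.

```python
# Sketch: group-LASSO change-point detection for functional time series.
# Assumptions: Fourier basis for dimension reduction, MultiTaskLasso as the
# group-LASSO (l2,1) solver, illustrative alpha; not the paper's exact method.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)

# Simulate n curves on a common grid with a mean shift after t = 59.
n, grid = 120, np.linspace(0, 1, 101)
curves = np.sin(2 * np.pi * grid)[None, :] + 0.3 * rng.standard_normal((n, grid.size))
curves[60:] += 1.5 * np.cos(2 * np.pi * grid)

# Step 1: project each curve onto a small Fourier basis (basis-function step).
K = 5
basis = np.column_stack(
    [np.ones_like(grid)]
    + [f(2 * np.pi * k * grid) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
coef, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)   # (n_basis, n)
C = coef.T                                                # n x p coefficient series

# Step 2: group LASSO over candidate jumps. With the cumulative design
# X[t, s] = 1{s <= t}, row s of W is the jump of the mean coefficients at
# time s; the l2,1 penalty zeroes whole rows, so non-zero rows are the
# estimated change points (GLASSO typically flags a few extra ones).
X = np.tril(np.ones((n, n)))
W = MultiTaskLasso(alpha=0.1, fit_intercept=True, max_iter=5000).fit(X, C).coef_.T

jumps = np.linalg.norm(W[1:], axis=1)        # skip row 0 (overall level)
candidates = np.where(jumps > 1e-6)[0] + 1
print("estimated change-point candidates:", candidates)
```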
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176) and the Soonchunhyang University Research Fund.
Abstract: Human Interaction Recognition (HIR) is one of the challenging issues in computer vision research because it involves multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities such as walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions, namely hugging, kicking, pushing, pointing, and no interaction, from video sequences captured by multiple cameras. In this study, a hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment with four-channel cameras capturing the five types of interactions among 13 participants. The data was processed using a DL model with a fine-tuned ResNet (Residual Network) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms: SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
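A minimal sketch of the hybrid DL + ML pipeline described above: a pretrained 2D ResNet with its classification head removed serves as a frame-level feature extractor, and the pooled clip features feed a classical classifier (here an SVM, one of the six listed algorithms). The choice of ResNet-18, the mean pooling over frames, and the SVM hyper-parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: ResNet feature extraction followed by an SVM classifier.
import numpy as np
import torch
import torchvision.models as models
from torchvision.models import ResNet18_Weights
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Feature extractor: ResNet-18 with the final fully connected layer removed.
weights = ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()          # output: 512-d feature per frame
backbone.eval()
preprocess = weights.transforms()

def clip_features(frames):
    """Average ResNet features over the frames of one video clip.

    `frames` is a list of HxWx3 uint8 arrays sampled from a clip.
    """
    with torch.no_grad():
        batch = torch.stack(
            [preprocess(torch.from_numpy(f).permute(2, 0, 1)) for f in frames]
        )
        feats = backbone(batch)            # (n_frames, 512)
    return feats.mean(dim=0).numpy()       # one 512-d descriptor per clip

# Toy usage: random frames stand in for the real multi-camera interaction clips.
clips = [[rng.integers(0, 255, (240, 320, 3), dtype=np.uint8) for _ in range(8)]
         for _ in range(10)]
labels = rng.integers(0, 5, size=10)       # 5 interaction classes

X = np.stack([clip_features(c) for c in clips])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict(X[:3]))
```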
Funding: Supported by the National Natural Science Foundation of China (Nos. 61272287, 61531014) and the State Key Laboratory of Virtual Reality Technology and Systems (No. BUAA-VR-15KF-10).
Abstract: The recent development of light field cameras has received growing interest, as their rich angular information has potential benefits for many computer vision tasks. In this paper, we introduce a novel method to obtain a dense disparity map by using ground control points (GCPs) in the light field. Previous work optimizes the disparity map by local estimation, which includes both reliable and unreliable points. To reduce the negative effect of the unreliable points, we predict the disparity at non-GCPs from the GCPs. Our method performs more robustly in shadow areas than previous GCP-based methods, since we combine color information and local disparity. Experiments and comparisons on a public dataset demonstrate the effectiveness of the proposed method.
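A minimal sketch of the idea of propagating disparity from GCPs to non-GCP pixels while combining color information with local disparity: here the prediction is a weighted average of nearby GCP disparities, with weights mixing spatial proximity and color similarity (a joint-bilateral-style interpolation). The window size and bandwidths are illustrative, and the paper's actual optimization may differ substantially from this local averaging.

```python
# Sketch: predict disparity at non-GCP pixels from nearby GCPs using
# spatial + color weights (assumed scheme, not the paper's exact model).
import numpy as np

def propagate_disparity(image, gcp_mask, gcp_disp, radius=7,
                        sigma_s=3.0, sigma_c=0.1):
    """image: HxWx3 float in [0,1]; gcp_mask: HxW bool; gcp_disp: HxW float."""
    H, W = gcp_mask.shape
    out = gcp_disp.copy()
    ys, xs = np.where(~gcp_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        m = gcp_mask[y0:y1, x0:x1]
        if not m.any():
            continue                          # no GCP nearby; leave unfilled
        yy, xx = np.mgrid[y0:y1, x0:x1]
        spatial = ((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2)
        colour = (np.sum((image[y0:y1, x0:x1] - image[y, x]) ** 2, axis=-1)
                  / (2 * sigma_c ** 2))
        w = np.exp(-(spatial + colour)) * m   # spatial * color affinity to GCPs
        out[y, x] = np.sum(w * gcp_disp[y0:y1, x0:x1]) / np.sum(w)
    return out

# Toy usage: 10% of pixels act as GCPs on a synthetic linear-disparity scene.
rng = np.random.default_rng(1)
img = rng.random((60, 80, 3))
true_disp = np.tile(np.linspace(0, 10, 80), (60, 1))
mask = rng.random((60, 80)) < 0.1
dense = propagate_disparity(img, mask, np.where(mask, true_disp, 0.0))
```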
Abstract: The International Software Benchmarking and Standards Group (ISBSG) database was used to build estimation models for software functional test effort. The analysis of the data revealed three test productivity patterns representing economies or diseconomies of scale, and these patterns served as a basis for investigating the characteristics of the corresponding projects. Three groups of projects related to the three productivity patterns, characterized by domain, team size, elapsed time, and the rigor of verification and validation carried out during development, were found to be statistically significant. Within each project group, the variations in test effort can be explained, in addition to functional size, by 1) the processes executed during development and 2) the processes adopted for testing. Portfolios of estimation models were built using combinations of the three independent variables. The performance of the estimation models built using the function point method developed by the Common Software Measurement International Consortium (COSMIC), known as COSMIC Function Points, and the one advocated by the International Function Point Users Group (IFPUG), known as IFPUG Function Points, was compared to evaluate the impact of these respective sizing methods on test effort estimation.
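A minimal sketch of the kind of portfolio model the abstract describes: a regression of functional test effort on functional size within one productivity group. The synthetic data, the log-log model form, and the exponent are illustrative assumptions, not the ISBSG-derived models; COSMIC Function Points are used here simply as the size unit.

```python
# Sketch: test-effort estimation model for one productivity group (assumed
# log-log regression on synthetic data, not the ISBSG models themselves).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic "project group": effort grows with size with mild diseconomies of scale.
size_cfp = rng.uniform(50, 2000, size=80)                  # COSMIC Function Points
effort_h = 3.0 * size_cfp ** 1.15 * np.exp(0.25 * rng.standard_normal(80))

# Log-log regression: log(effort) = a + b * log(size); b > 1 suggests
# diseconomies of scale, b < 1 economies of scale.
X = np.log(size_cfp).reshape(-1, 1)
model = LinearRegression().fit(X, np.log(effort_h))
a, b = model.intercept_, model.coef_[0]
print(f"effort ~= {np.exp(a):.2f} * size^{b:.2f}")

# Point estimate for a hypothetical 400-CFP project in this group.
pred = np.exp(model.predict(np.log([[400.0]])))[0]
print("predicted test effort (hours):", round(pred, 1))
```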
Abstract: The International Software Benchmarking Standards Group (ISBSG) provides researchers and practitioners with a repository of software project data that has been used to date mostly for benchmarking and project estimation, but rarely for software defect analysis. Sigma, in statistics, measures how far a process deviates from its goal. Six Sigma focuses on reducing variation within processes, because such variation may lead to inconsistency in meeting project specifications, that is, to "defects" and unmet customer expectations. Six Sigma provides two methodologies for solving organizational problems: the "Define-Measure-Analyze-Improve-Control" (DMAIC) process cycle and Design for Six Sigma (DFSS). DMAIC focuses on improving existing processes, while DFSS focuses on redesigning existing processes and developing new ones. This paper presents an approach to analyzing the ISBSG repository based on Six Sigma measurements. It investigates the use of the ISBSG data repository with some of the related Six Sigma measurement aspects, including Sigma defect measurement and software defect estimation. The study describes the dataset preparation, consisting of two levels of data preparation, and then analyzes the quality-related data fields in the ISBSG MS-Excel data extract (Release 12 - 2013). It also presents an analysis of the extracted dataset of software projects. The study found that the ISBSG MS-Excel data extract has a high ratio of missing data in the "Total Number of Defects" field, which represents a serious challenge when the ISBSG dataset is used for software defect estimation.
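A minimal sketch of the two measurements discussed above: the missing-data ratio for the "Total Number of Defects" field and a Sigma level derived from a defects-per-million-opportunities (DPMO) figure via the conventional 1.5-sigma shift. The column names, the toy DataFrame standing in for the ISBSG MS-Excel extract, and the choice of one function point per defect opportunity are assumptions for illustration.

```python
# Sketch: missing-data ratio and Sigma level on an ISBSG-style extract
# (toy data; in practice something like pd.read_excel("isbsg_r12.xlsx")).
import numpy as np
import pandas as pd
from scipy.stats import norm

df = pd.DataFrame({
    "Total Number of Defects": [3, np.nan, 0, np.nan, 12, np.nan, 1, np.nan],
    "Functional Size": [120, 300, 85, 410, 950, 60, 210, 530],
})

# Missing-data ratio for the defects field (the issue highlighted in the study).
missing_ratio = df["Total Number of Defects"].isna().mean()
print(f"missing-data ratio for defects: {missing_ratio:.1%}")

def sigma_level(defects, opportunities, shift=1.5):
    """Short-term Sigma level from DPMO, using the conventional 1.5-sigma shift."""
    dpmo = defects / opportunities * 1_000_000
    return norm.ppf(1 - dpmo / 1_000_000) + shift

# Example: treating each function point as one opportunity for a defect.
complete = df.dropna(subset=["Total Number of Defects"])
print("sigma:", round(sigma_level(complete["Total Number of Defects"].sum(),
                                  complete["Functional Size"].sum()), 2))
```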