Funding: This work was funded by the Directorate of Research and Community Service, Deputy for Strengthening Research and Development, Ministry of Research, Technology/National Research and Innovation Agency of the Republic of Indonesia under the PMDSU program, grant IDs 018/E5/PG.02.00.PT/2022 and 1773/UN1/DITLIT/Dit-Lit/PT.01.03/2022.
Abstract: Macronutrient deficiency inhibits the growth and development of chili plants. Computer vision is a non-destructive method for processing plant image data based on specific characteristics. This study uses 5166 images, obtained after augmentation, covering six plant health conditions. Because a single feature cannot adequately represent plant health, a careful combination of features is required. This study combines three feature types: HSV and RGB for color, GLCM and LBP for texture, and Hu moments and centroid distance for shape. Each feature and each combination is trained and tested with the same MLP architecture. The combination of RGB, GLCM, Hu moments, and centroid distance yields the best performance. In addition, this study compares the MLP architecture against models from previous studies: SVM, Random Forest, Naive Bayes, and CNN. CNN produced the best performance, followed by SVM and MLP, with accuracies of 97.76%, 90.55%, and 89.70%, respectively. Although MLP is less accurate than CNN, its success rate in identifying plant health conditions is good enough for deployment in a simple agricultural environment.
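The feature-combination idea above can be sketched as concatenating per-image color, texture, and shape descriptors into one vector and training a single MLP on it. This is an illustrative sketch only: the extractor stubs, their dimensions, and the synthetic data are assumptions, not the study's actual pipeline (which computes real RGB statistics, GLCM descriptors, Hu moments, and centroid distances from chili-plant images).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-image feature extractors: in the study these would be
# RGB statistics, GLCM texture descriptors, Hu moments, and centroid
# distances; here each is stubbed with random values of plausible size.
def color_features(img):    return rng.random(6)   # e.g. per-channel means/stds
def texture_features(img):  return rng.random(4)   # e.g. GLCM contrast, energy
def shape_features(img):    return rng.random(8)   # e.g. 7 Hu moments + centroid distance

def combined_features(img):
    # Feature combination = simple concatenation into one vector.
    return np.concatenate([color_features(img),
                           texture_features(img),
                           shape_features(img)])

# Synthetic "dataset": 60 images, 6 health-condition classes.
X = np.stack([combined_features(None) for _ in range(60)])
y = rng.integers(0, 6, size=60)

# One MLP architecture trained on the combined feature vector, mirroring
# how each feature set and combination is evaluated with the same network.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]).shape)  # five predicted class labels
```

Swapping which extractors feed `combined_features` while keeping the classifier fixed is what lets the comparison isolate the contribution of each feature combination.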
Abstract: One aspect of cybersecurity is the study of malicious Portable Executable (PE) files. Artificial Intelligence (AI) can be employed in such studies, since AI can discriminate benign from malicious files. In this study, a set of 29 features was collected from trusted implementations and used as the baseline for the analysis presented in this research. Decision Tree (DT) and Neural Network Multi-Layer Perceptron (NN-MLPC) algorithms were used, chosen after testing several alternative procedures. This work implements a method of subgrouping features to answer questions such as: Which features improve accuracy when added? Is it possible to determine a reliable feature set for distinguishing a malicious PE file from a benign one? When features are combined, how does that affect malware detection accuracy? Results obtained with the proposed method improved on the baseline and yielded several observations. On the practical side, the number and choice of features are the main factors affecting the measured accuracy, and the way features are combined is equally crucial. Numerically, the accuracies improved: for example, NN-MLPC attained 0.979 and 0.980, while DT attained 0.9825 and 0.986.
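The feature-subgrouping search described above can be sketched as exhaustively scoring feature subsets with a classifier and keeping the best-scoring group. This is a minimal sketch under assumptions: the four feature names and the synthetic labels are hypothetical stand-ins for the study's 29 PE features, and only the DT branch is shown.

```python
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the study's 29 PE features (names invented).
feature_names = ["SizeOfCode", "NumberOfSections", "Entropy", "ImportCount"]
n = 400
X = rng.random((n, len(feature_names)))
# Make the label depend on two features so the subset search has a signal.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)   # 1 = "malicious", 0 = "benign"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Score every feature subgroup and record which combination detects best.
scores = {}
for k in range(1, len(feature_names) + 1):
    for idx in combinations(range(len(feature_names)), k):
        clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, idx], y_tr)
        subset = tuple(feature_names[i] for i in idx)
        scores[subset] = accuracy_score(y_te, clf.predict(X_te[:, idx]))

best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Comparing a subset's score against the scores of its individual members answers the abstract's question of whether adding a feature has a positive impact on accuracy.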
Abstract: In this paper, a new parallel compact integration scheme based on multi-layer perceptron (MLP) networks is proposed to solve handwritten Chinese character recognition (HCCR) problems. The idea of metasynthesis is applied to HCCR, and a compact MLP network classifier is defined. Human intelligence and computer capabilities are combined effectively through a two-step supervised learning procedure. Compared with previous integration schemes, this scheme is characterized by a parallel compact structure and better performance. It provides a promising way to apply MLPs to large-vocabulary classification.
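A generic way to picture a parallel compact structure for a large vocabulary is to split the class set into groups, train one small MLP per group, and take the most confident sub-network's answer at prediction time. This sketch is not the paper's exact scheme (the two-step supervised learning procedure and metasynthesis are not reproduced); it only illustrates the parallel-compact idea on synthetic data with invented sizes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Toy stand-in for a large vocabulary: 12 classes split into 3 groups of 4,
# each group handled by its own compact MLP.
n_classes, n_groups = 12, 3
X = rng.random((600, 16))
y = rng.integers(0, n_classes, size=600)

groups = np.array_split(np.arange(n_classes), n_groups)
experts = []
for g in groups:
    mask = np.isin(y, g)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    clf.fit(X[mask], y[mask])
    experts.append(clf)

def predict(x):
    # Run all compact networks in parallel; keep the most confident answer.
    best_cls, best_p = None, -1.0
    for clf in experts:
        proba = clf.predict_proba(x.reshape(1, -1))[0]
        i = int(np.argmax(proba))
        if proba[i] > best_p:
            best_p, best_cls = proba[i], int(clf.classes_[i])
    return best_cls

print(predict(X[0]))
```

Each sub-network stays small because it only discriminates within its own group, which is what makes the overall classifier compact while still covering the full vocabulary.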
Abstract: Deep learning is a powerful technique widely applied to image recognition and natural language processing, among many other tasks. In this work, we propose an efficient technique that uses pre-trained Convolutional Neural Network (CNN) architectures to extract powerful image features for object recognition. We build on the existing idea of transferring learning from pre-trained CNNs to new databases through their activations, extending it to consider multiple deep layers. We exploit the progressive learning that happens at the various intermediate layers of a CNN to construct Deep Multi-Layer (DM-L) feature extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN models, VGG_16 and VGG_19, are used to extract feature sets from three deep fully connected layers, namely "fc6", "fc7", and "fc8". Using Principal Component Analysis (PCA), the dimensionality of the DM-L feature vectors is reduced to form compact feature vectors, which are fed to an external classifier ensemble instead of the softmax classification layers of the two original pre-trained models. The proposed DM-L technique is applied to the benchmark Caltech-101 object recognition database. Conventional wisdom might suggest that features extracted from the deepest layer, "fc8", would outperform those from "fc6", but our results show otherwise for the two models considered. Our experiments reveal that for both models, the "fc6"-based feature vectors achieve the best recognition performance: state-of-the-art accuracies of 91.17% and 91.35% are obtained with "fc6" features for VGG_16 and VGG_19, respectively.
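The fc6-activations → PCA → external-ensemble pipeline can be sketched end to end. In practice the 4096-dimensional fc6 vectors would come from a forward pass through pretrained VGG_16 (e.g. via torchvision); here random vectors stand in so the sketch runs without downloading weights, and the dataset sizes, component count, and ensemble members are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Stand-in for "fc6" activations of VGG_16: 200 images, 5 classes, with
# random vectors substituting for real 4096-dim layer activations.
X_fc6 = rng.random((200, 4096))
y = rng.integers(0, 5, size=200)

# Reduce dimensionality with PCA before classification.
pca = PCA(n_components=50, random_state=0)
X_red = pca.fit_transform(X_fc6)

# External classifier ensemble in place of the CNN's softmax layer.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft",
)
ensemble.fit(X_red, y)
print(X_red.shape, ensemble.predict(X_red[:3]).shape)
```

Repeating the same pipeline with fc7 or fc8 activations in place of fc6 is how the per-layer comparison behind the abstract's "fc6 wins" finding would be run.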