Funding: Funded by the Director Fund of the Inner Mongolia Autonomous Region Seismological Bureau (Nos. 2023GG02, 2023MS05) and the Inner Mongolia Natural Science Foundation (No. 2024MS04021).
Abstract: The assessment of the completeness of earthquake catalogs is a prerequisite for studying the patterns of seismic activity. In traditional approaches, the minimum magnitude of completeness (MC) is employed to evaluate catalog completeness, with events below MC being discarded, leading to underutilization of the data. Detection probability is a more detailed measure of catalog completeness than MC; its use results in better model compatibility with data in seismic activity modeling and allows more comprehensive use of seismic observation data across temporal, spatial, and magnitude dimensions. Using the magnitude-rank and Maximum Curvature (MAXC) methods, we analyzed temporal variations in earthquake catalog completeness, finding that MC stabilized after 2010, which closely coincides with improvements in monitoring capabilities and the densification of seismic networks. Employing the probability-based magnitude of completeness (PMC) and entire magnitude range (EMR) methods, which are grounded in distinct foundational assumptions and computational principles, we analyzed the 2010-2023 earthquake catalog for the northern margin of the Ordos Block, aiming to assess the detection probability of earthquakes and the completeness of the earthquake catalog. The PMC method yielded the detection probability distribution for 76 stations in distance-magnitude space. A scoring metric was designed based on station detection capabilities for small earthquakes in the near field. From the detection probabilities of individual stations, we inferred detection probabilities of the network for different magnitude ranges and mapped the spatial distribution of the probability-based completeness magnitude. In the EMR method, we employed a segmented model fitted to the observed data to determine the detection probability and completeness magnitude for every grid point in the study region. We discussed the sample dependency and low-magnitude failure phenomena of the PMC method, noting the potential overestimation of detection probabilities for lower magnitudes.
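To make the MAXC step mentioned above concrete, the following minimal sketch (not the authors' implementation; the bin width, the +0.2 empirical correction, and the synthetic catalog are illustrative assumptions) estimates MC as the most populated bin of the non-cumulative frequency-magnitude distribution.

```python
import numpy as np

def maxc_completeness(magnitudes, bin_width=0.1, correction=0.2):
    """Maximum Curvature (MAXC) estimate of the magnitude of completeness:
    MC is taken as the magnitude bin with the highest event count in the
    non-cumulative frequency-magnitude distribution, raised here by a
    commonly used empirical correction of +0.2."""
    mags = np.asarray(magnitudes, dtype=float)
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    centres = edges[:-1] + bin_width / 2.0
    return centres[np.argmax(counts)] + correction

# Hypothetical usage: synthetic Gutenberg-Richter magnitudes (b ~ 1) with
# small events randomly under-recorded below roughly M 1.5.
rng = np.random.default_rng(0)
mags = rng.exponential(scale=1.0 / np.log(10), size=20000)
detect_prob = 1.0 / (1.0 + np.exp(-6.0 * (mags - 1.5)))
detected = mags[rng.random(mags.size) < detect_prob]
# The missing small events push the MAXC estimate well above the
# minimum magnitude of the synthetic catalog.
print(round(maxc_completeness(detected), 1))
```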
Abstract: We advance here a novel methodology for robust intelligent biometric information management, with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality, and robustness refers to the ability to handle incomplete and/or corrupt adversarial information, on one side, and image and/or device variability, on the other side. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link, at the conceptual level, biometrics and forensics. It further links, at the modeling and implementation level, the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and information theory (IT) using mutual information. The key concepts supporting the proposed methodology are a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; b) similarity metrics using regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL) using strangeness/typicality and ranking p-values; and c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error. Several topics on biometric inference and prediction related to 1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; 2) score normalization and revision theory; 3) face selection and tracking; and 4) identity management are described here using an integrated approach that includes transduction and boosting for ranking and sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingale, respectively, on the other side. The proposed methodology can be mapped to additional types of information beyond biometrics.
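As an illustration of the strangeness/p-value ranking mentioned above, the sketch below follows a common transductive (conformal) formulation; the nearest-neighbour strangeness, the value of k, and the Euclidean metric are assumptions for illustration rather than the authors' exact construction.

```python
import numpy as np

def strangeness(x, X_same, X_other, k=3):
    """Nearest-neighbour strangeness: summed distances to the k nearest
    same-class examples divided by the summed distances to the k nearest
    other-class examples. Larger values = the point sits awkwardly in its
    putative class."""
    d_same = np.sort(np.linalg.norm(X_same - x, axis=1))[:k].sum()
    d_other = np.sort(np.linalg.norm(X_other - x, axis=1))[:k].sum()
    return d_same / (d_other + 1e-12)

def p_value(test_x, X, y, label, k=3):
    """Transductive p-value: provisionally assign `label` to the test point
    and rank its strangeness against the strangeness of the labeled
    examples of that class (leave-one-out)."""
    X_same, X_other = X[y == label], X[y != label]
    a_test = strangeness(test_x, X_same, X_other, k)
    a_train = np.array([
        strangeness(xi, np.delete(X_same, i, axis=0), X_other, k)
        for i, xi in enumerate(X_same)
    ])
    # Fraction of labeled examples at least as strange as the test point.
    return (np.sum(a_train >= a_test) + 1) / (len(a_train) + 1)
```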
Funding: This work was supported by STI 2030-Major Projects No. 2021ZD0201403, in part by NSFC No. 62088101 (Autonomous Intelligent Unmanned Systems), and in part by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2022B04).
Abstract: The task of few-shot object detection is to classify and locate objects from only a few annotated samples. Although many studies have tried to solve this problem, the results are still not satisfactory. Recent studies have found that the class margin significantly impacts the classification and representation of the targets to be detected. Most methods use the loss function to balance the class margin, but the results show that loss-based methods yield only a small improvement on the few-shot object detection problem. In this study, the authors propose a class encoding method based on the transformer to balance the class margin, which makes the model pay more attention to the essential information in the features, thus increasing its ability to recognize the sample. Besides, the authors propose a multi-target decoding method to aggregate RoI vectors generated from multi-target images with multiple support vectors, which can significantly improve the detector's performance on multi-target images. Experiments on the Pascal Visual Object Classes (VOC) and Microsoft Common Objects in Context (COCO) datasets show that the proposed Few-Shot Object Detection via Class Encoding and Multi-Target Decoding significantly improves upon baseline detectors (average accuracy improvement of up to 10.8% on VOC and 2.1% on COCO), achieving competitive performance. In general, we propose a new way to regulate the class margin between support set vectors and a way of aggregating features for images containing multiple objects, and we achieve remarkable results. Our method is implemented on mmfewshot, and the code will be made available later.
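The multi-target decoding idea, aggregating RoI vectors with support-class vectors, can be pictured with a small cross-attention module like the hypothetical sketch below; the dimensions, module names, and use of standard multi-head attention are assumptions for illustration and do not reproduce the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SupportAttentionAggregator(nn.Module):
    """Illustrative aggregation: each RoI feature attends over the encoded
    support-class vectors, and the attended class information is fused back
    into the RoI feature before the detection heads."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, roi_feats, support_feats):
        # roi_feats:     (num_rois, dim)    features of candidate boxes
        # support_feats: (num_classes, dim) encoded support-class vectors
        q = roi_feats.unsqueeze(0)           # (1, num_rois, dim)
        kv = support_feats.unsqueeze(0)      # (1, num_classes, dim)
        attended, _ = self.attn(q, kv, kv)   # class-aware context per RoI
        fused = torch.cat([roi_feats, attended.squeeze(0)], dim=-1)
        return self.fuse(fused)              # (num_rois, dim)

# Hypothetical shapes: 100 RoIs, 20 classes, 256-d features.
agg = SupportAttentionAggregator()
out = agg(torch.randn(100, 256), torch.randn(20, 256))
print(out.shape)  # torch.Size([100, 256])
```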
Funding: Supported by the China Earthquake Scientific Array Exploration projects for the southern section of the North-South Seismic Belt (201008001) and the northern section of the North-South Seismic Belt (20130811), the National Natural Science Foundation of China (41474057), and the Science for Earthquake Resilience program of the China Earthquake Administration (XH15040Y).
Abstract: The special seismotectonic environment and frequent seismicity of the southeastern margin of the Qinghai-Tibet Plateau make this area an ideal location for studying the present tectonic movement and the background of strong earthquakes in China's mainland and for predicting future strong-earthquake risk zones. Studies of the structural environment and physical characteristics of the deep structure in this area help to explore deep dynamic effects and deformation-field characteristics, to strengthen our understanding of the roles of anisotropy and tectonic deformation, and to study the deep tectonic background of earthquake generation within the block. In this paper, the three-dimensional (3D) P-wave velocity structure of the crust and upper mantle beneath the southeastern margin of the Qinghai-Tibet Plateau is obtained from observational data recorded by 224 permanent stations of the regional digital seismic networks of Yunnan and Sichuan Provinces and 356 portable stations of the China Seismic Array in the southern section of the North-South Seismic Belt, using a joint inversion of regional earthquake and teleseismic data. The results indicate that the spatial distribution of P-wave velocity anomalies in the shallow upper crust is closely related to the surface geological structure, terrain, and lithology. Baoxing and Kangding, with their basic volcanic rocks and volcaniclastic rocks, present obvious high-velocity anomalies. The Chengdu Basin shows low-velocity anomalies associated with Quaternary sediments. The Xichang Mesozoic Basin and the Butuo Basin are characterised by low-velocity anomalies related to very thick sedimentary layers. The upper and middle crust beneath the Chuan-Dian and Songpan-Ganzi Blocks shows apparent lateral heterogeneity, including low-velocity zones of different sizes. There is a large range of low-velocity layers in the Songpan-Ganzi Block and the sub-block northwest of Sichuan Province, showing that the middle and lower crust there is relatively weak. The Sichuan Basin
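For readers unfamiliar with how such "velocity anomalies" are reported, they are usually percentage perturbations of the inverted Vp relative to a 1-D reference model; the short sketch below shows that bookkeeping with a placeholder reference model (the depths and velocities are invented for illustration and are not the study's model).

```python
import numpy as np

# Placeholder 1-D reference model (depth in km, Vp in km/s); an actual
# study would substitute its own regional reference model.
REF_DEPTH = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 100.0])
REF_VP    = np.array([5.8,  6.2,  6.5,  7.8,  8.0,   8.1])

def vp_anomaly_percent(depth_km, vp_obs):
    """Percent P-wave velocity perturbation dVp/Vp relative to the
    interpolated 1-D reference velocity at the same depth."""
    vp_ref = np.interp(depth_km, REF_DEPTH, REF_VP)
    return 100.0 * (vp_obs - vp_ref) / vp_ref

# E.g. an inverted node at 20 km depth with Vp = 6.24 km/s maps to a
# -4% anomaly, i.e. a relatively slow (weak) region in the tomographic image.
print(round(vp_anomaly_percent(20.0, 6.24), 1))
```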