Funding: Supported by the CNPC Advanced Fundamental Research Projects (No. 2023ycq06).
Abstract: Well logging curves serve as indicators of strata attribute changes and are frequently used for stratigraphic analysis and correlation. Deep learning, known for its strong feature extraction capability, has been increasingly adopted for well logging stratigraphic correlation tasks. Nonetheless, current deep learning algorithms often struggle to accurately capture the feature changes that occur at layer boundaries within the curves. Moreover, under data imbalance, neural networks have difficulty modeling the one-hot encoded stratification positions, resulting in significant deviations between predicted and actual stratification positions. To address these challenges, this study proposes a well logging curve stratigraphic correlation algorithm based on uniformly distributed soft labels. In the training phase, a label smoothing loss function is introduced to account for the large loss arising from data imbalance and to consider the similarity between data from different layers. Concurrently, spatial attention and channel attention mechanisms are incorporated into the shallow and deep encoder stages of U²-Net, respectively, to better focus on changes at stratification positions. In the prediction phase, an optimized confidence threshold algorithm is proposed to constrain the stratification results and to mitigate the loss of prediction accuracy caused by occasional layer repetition. The proposed method is applied to real-world well logging data from oil fields. Quantitative evaluation shows that, within error tolerances of 1, 2, and 3 m, the accuracy of stratigraphic division reaches 87.27%, 92.68%, and 95.08%, respectively, validating the effectiveness of the proposed algorithm.
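A minimal sketch, assuming a PyTorch setting, of the kind of label-smoothing loss the abstract describes: one-hot stratification labels are replaced with uniformly distributed soft labels before the cross-entropy is computed. The smoothing factor, class count, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits: torch.Tensor, targets: torch.Tensor,
                         smoothing: float = 0.1) -> torch.Tensor:
    """logits: (N, C) raw class scores; targets: (N,) integer class indices."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    with torch.no_grad():
        # Spread smoothing/C uniformly over every class...
        soft_targets = torch.full_like(log_probs, smoothing / num_classes)
        # ...and place the remaining (1 - smoothing) mass on the true class.
        soft_targets.scatter_(1, targets.unsqueeze(1),
                              1.0 - smoothing + smoothing / num_classes)
    # Cross-entropy against the soft targets, averaged over the batch.
    return torch.mean(torch.sum(-soft_targets * log_probs, dim=-1))

# Usage example: 4 depth samples, 5 hypothetical stratification classes.
loss = label_smoothing_loss(torch.randn(4, 5), torch.tensor([0, 2, 1, 4]))
```

Compared with hard one-hot targets, the uniform soft labels keep the loss bounded for minority-class boundary samples and reflect that adjacent layers share similar curve responses.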
Funding: Supported by the Communication University of China (CUC230A013) and the Fundamental Research Funds for the Central Universities.
Abstract: The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter on datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces the Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information between labels and data in computing the text representation, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Finally, the model produces the classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared with state-of-the-art methods.
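A minimal, hypothetical sketch of label-driven co-attention in the spirit of the abstract: label embeddings attend over contextual token vectors to produce a label-conditioned text representation. The dimensions and the dot-product attention form are assumptions, not the DSLD authors' exact design.

```python
import torch
import torch.nn.functional as F

def label_coattention(tokens: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """tokens: (B, T, D) contextual token vectors; labels: (L, D) label embeddings.
    Returns a (B, L, D) label-conditioned text representation."""
    # Scaled similarity between every label and every token: (B, L, T)
    scores = torch.einsum('ld,btd->blt', labels, tokens) / tokens.size(-1) ** 0.5
    attn = F.softmax(scores, dim=-1)
    # Attention-weighted sum of token vectors for each label.
    return torch.einsum('blt,btd->bld', attn, tokens)

# Usage example: batch of 2 texts, 16 tokens, 64-dim vectors; 5 candidate labels.
rep = label_coattention(torch.randn(2, 16, 64), torch.randn(5, 64))
```

The joint label-data correlation enters through the attention weights, so each label reads out the parts of the text most relevant to it before classification.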
Abstract: Text classification assigns a document to one or more classes or categories according to its content, making it easier for users to retrieve data. Because text data are often polysemous, multi-label classification can handle them more comprehensively, and multi-label text classification has become a key problem in data mining. To improve the performance of multi-label text classification, semantic analysis is embedded into the classification model to perform label correlation analysis, and the structure, objective function, and optimization strategy of the model are designed. A convolutional neural network (CNN) model based on semantic embedding is then introduced. Finally, the Zhihu dataset is used for evaluation. The results show that the model outperforms related work in terms of recall and area under the curve (AUC) metrics.
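A minimal sketch of a CNN text classifier with sigmoid outputs for multi-label prediction, broadly in the spirit of the semantic-embedding CNN described above; the vocabulary size, kernel widths, and label count are illustrative assumptions rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn

class MultiLabelTextCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_labels=25,
                 kernel_sizes=(3, 4, 5), channels=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel width, applied over the token dimension.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, k) for k in kernel_sizes)
        self.fc = nn.Linear(channels * len(kernel_sizes), num_labels)

    def forward(self, token_ids):                      # (B, T) integer ids
        x = self.embed(token_ids).transpose(1, 2)      # (B, D, T)
        # Max-pool each convolution's feature maps over time.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (B, num_labels) logits

# Usage example: per-label sigmoid probabilities via a binary cross-entropy loss.
model = MultiLabelTextCNN()
logits = model(torch.randint(0, 30000, (8, 50)))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8, 25)).float())
```

Treating each label as an independent binary decision is what lets the same document receive several topic labels at once, which is the setting the Zhihu evaluation targets.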