Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 42050104), the National Science and Technology Support Program (Grant No. 2012BAH34F00), and the National Oil and Gas Major Special Project (Grant No. 2016ZX05033005).
Abstract: The massive amount of multi-sourced, multi-structured data in the upstream petroleum industry poses great challenges for data integration and smart applications. Knowledge graph, as an emerging technology, can potentially provide a way to tackle the challenges associated with oil and gas big data. This paper proposes an engineering-based method that improves upon traditional natural language processing to construct a domain knowledge graph based on a petroleum exploration and development ontology. The exploration and development knowledge graph, containing millions of nodes, is constructed by assembling Sinopec's multi-sourced heterogeneous databases. Two applications based on the constructed knowledge graph are developed and validated for their effectiveness and advantages in providing better knowledge services for the oil and gas industry.
Funding: This research work was supported by the Fujian Province Natural Science Foundation under Grant No. 2018J01553.
Abstract: In any classical value-based reinforcement learning method, an agent, despite its continuous interactions with the environment, is unable to quickly generate a complete and independent description of the entire environment, leaving the learning method to struggle with the difficult dilemma of choosing between two tasks: exploration and exploitation. This problem becomes more pronounced when the agent has to deal with a dynamic environment, whose configuration and/or parameters are constantly changing. In this paper, this problem is approached by first mapping a reinforcement learning scheme to a directed graph, so that the set containing all the states already explored can continue to be exploited in the context of such a graph. We prove that the two tasks of exploration and exploitation eventually converge in the decision-making process, and thus there is no need to face the exploration-vs-exploitation tradeoff as all existing reinforcement learning methods do. Rather, this observation indicates that a reinforcement learning scheme is essentially the same as searching for the shortest path in a dynamic environment, which is readily tackled by a modified Floyd-Warshall algorithm proposed in the paper. The experimental results confirm that the proposed graph-based reinforcement learning algorithm significantly outperforms both the standard Q-learning algorithm and an improved Q-learning algorithm in solving mazes, rendering it an algorithm of choice in applications involving dynamic environments.
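The abstract does not specify the paper's modification of Floyd-Warshall, but the classical all-pairs shortest-path algorithm it builds on can be sketched as follows. The 4-state maze-like graph below is an illustrative example, not taken from the paper.

```python
# Classical Floyd-Warshall: all-pairs shortest paths over an adjacency matrix.
# The paper's dynamic-environment variant is not described in the abstract.
INF = float("inf")

def floyd_warshall(dist):
    """dist: n x n matrix; dist[i][j] is the edge weight (INF if no edge)."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Example: 4 states in a chain 0-1-2-3, plus a costly shortcut 1-3.
graph = [
    [0,   1,   INF, INF],
    [1,   0,   1,   4],
    [INF, 1,   0,   1],
    [INF, 4,   1,   0],
]
shortest = floyd_warshall(graph)
print(shortest[0][3])  # path 0 -> 1 -> 2 -> 3 costs 3, beating 0 -> 1 -> 3 at 5
```

In a dynamic environment the edge weights change over time, so a modified variant would need to update the distance table incrementally rather than recompute it from scratch on every change.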