EFFICIENT PROBABILISTIC LOGIC REASONING WITH GRAPH NEURAL NETWORKS
2022-07-03 10:32:00 【kormoie】
Introduction:

Knowledge graphs contain incorrect, incomplete, or duplicated data, so knowledge graph completion is an important task. This paper combines Markov Logic Networks (MLNs) and graph neural networks (GNNs) to perform variational inference over knowledge graphs, and proposes a GNN variant, ExpressGNN, which strikes a good balance between model expressiveness and simplicity.

Markov Logic Networks (MLNs):
Markov logic networks combine hard logic rules with probabilistic graphical models and can be applied to a variety of knowledge graph tasks. The logic rules incorporate prior knowledge, which lets MLNs generalize in tasks with little labeled data, while the graphical model provides a principled framework for handling uncertainty in the data. However, inference in MLNs is computationally intensive, typically exponential in the number of entities, which limits real-world applications. Moreover, logic rules cover only a small fraction of the possible combinations of knowledge graph relations, which limits models based purely on logic rules.
Graph neural networks (GNNs):

GNNs effectively solve many graph-related problems, but they need enough labeled data for the specific end task to achieve good performance, and knowledge graphs have a long-tail problem: most relations have very few facts. This data scarcity in the long tail poses a severe challenge to purely data-driven approaches.
This paper explores a data-driven method that combines MLNs and GNNs and thus can exploit the prior knowledge encoded in logic rules. It designs a simple graph neural network, ExpressGNN, that enables efficient training of MLNs under the variational EM framework.

Related work:
- Statistical relational learning
- Markov logic networks
- Graph neural networks
- Knowledge graph embedding
The main method:

- Data

A knowledge graph is a triplet K = (C, R, O):
a set of entities C = {c1, ..., cM},
a set of relations R = {r1, ..., rN},
a set of observed facts O = {o1, ..., oL}.

Entities are also called constants, and relations are also called predicates. Each predicate is a logic function defined over C, mapping a tuple of entities to {0, 1}. For a particular assignment of entities to its arguments, the predicate is called a ground predicate, and each ground predicate corresponds to a binary random variable.
Given an argument assignment a_r = (c, c′), the ground predicate r(c, c′) can be written as r(a_r).
Each observed fact assigns a truth value in {0, 1} to a ground predicate.
For example, a fact o can be represented as [L(c, c′) = 1].
Since unobserved facts far outnumber observed facts, the unobserved ground predicates are treated as latent variables H.
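To make the notation concrete, here is a minimal Python sketch; the entities, predicates, and facts below are toy examples invented for illustration, not from the paper's code:

```python
from itertools import product

C = ["Alice", "Bob", "Carol"]              # entities (constants)
R = {"Friend": 2, "Smoke": 1}              # predicate name -> arity
O = {("Friend", ("Alice", "Bob")): 1,      # observed facts: ground predicate -> truth value
     ("Smoke", ("Alice",)): 1}

# Every assignment of constants to a predicate's arguments is a ground
# predicate, i.e., one binary random variable; the unobserved ones form H.
ground = [(r, args) for r, k in R.items() for args in product(C, repeat=k)]
H = [g for g in ground if g not in O]
print(len(ground), "ground predicates:", len(O), "observed,", len(H), "latent")
```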
More concretely: a knowledge graph K is encoded as a bipartite graph G_K = (C, O, E), with constant nodes C on one side, observed-fact nodes O on the other, and edge set E = {e1, ..., eT}. An edge e = (c, o, i) connects node c and node o when the predicate associated with o uses c as its i-th argument, as in the illustration of G_K in Figure 2.
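A sketch of this bipartite encoding, reusing C and O from the snippet above (again illustrative):

```python
def build_bipartite_edges(observed_facts):
    """Edges e = (c, o, i): constant c fills the i-th argument slot of fact o."""
    edges = []
    for (pred, args), truth in observed_facts.items():
        for i, c in enumerate(args):
            edges.append((c, (pred, args), i))
    return edges

E = build_bipartite_edges(O)
# e.g., ("Alice", ("Friend", ("Alice", "Bob")), 0) and ("Bob", ..., 1)
```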
A Markov logic network uses logic formulae to define the potential functions of an undirected graphical model.
A logic formula f is a binary function defined by the composition of several predicates. For example, f(c, c′) can be the classic friends-and-smokers rule, ¬Smoke(c) ∨ ¬Friend(c, c′) ∨ Smoke(c′), i.e., Smoke(c) ∧ Friend(c, c′) ⇒ Smoke(c′).
Similar to predicates, an assignment of constants to the arguments of formula f is denoted a_f, and the set of all consistent assignments of constants is A_f = {a_f^1, a_f^2, ...}.
The joint probability distribution over the observed facts O and the latent variables H is

P_w(O, H) = (1 / Z(w)) · exp( ∑_{f ∈ F} w_f ∑_{a_f ∈ A_f} φ_f(a_f) ),

where F is the set of logic formulae, w_f is the weight of formula f, φ_f(a_f) ∈ {0, 1} is the truth value of f under assignment a_f, and Z(w) is the partition function.
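As a concrete toy instance of this distribution, the sketch below evaluates the unnormalized log-probability w_f · ∑_{a_f} φ_f(a_f) for the single friends-and-smokers formula; the weight 1.5 and the helper names are illustrative, and `ground`, `O`, `C` come from the earlier snippet:

```python
from itertools import product

def phi(world, c, c2):
    # truth value of the ground formula ¬Smoke(c) ∨ ¬Friend(c, c2) ∨ Smoke(c2)
    return int(not world[("Smoke", (c,))]
               or not world[("Friend", (c, c2))]
               or world[("Smoke", (c2,))])

def unnormalized_log_prob(world, constants, w_f=1.5):
    # w_f * sum over all groundings a_f = (c, c2) of phi_f(a_f)
    return w_f * sum(phi(world, c, c2) for c, c2 in product(constants, repeat=2))

world = {g: O.get(g, 0) for g in ground}   # one possible world: latent facts set to 0
score = unnormalized_log_prob(world, C)
# P_w(O, H) = exp(score) / Z(w); computing Z(w) requires summing over every
# possible world, which is why exact MLN inference is intractable.
```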
Note the structural difference between the KG and the MLN: the knowledge graph is sparse (it contains only the observed facts), while the grounded MLN is dense (it involves all ground predicates and all groundings of the formulae).
VARIATIONAL EM (EXPECTATION-MAXIMIZATION) FOR MARKOV LOGIC NETWORKS
A Markov logic network models the joint probability distribution of all observed and latent variables. In principle it can be trained by maximizing the log-likelihood of the observed facts, log P_w(O), but maximizing this objective directly is intractable: it requires computing the partition function Z(w) and marginalizing over all O and H variables. We therefore optimize the variational evidence lower bound (ELBO) of the data log-likelihood instead:

log P_w(O) ≥ L_ELBO(Q_θ, P_w) = E_{Q_θ(H|O)}[ log P_w(O, H) − log Q_θ(H|O) ],

where Q_θ(H|O) is a variational posterior over the latent ground predicates, with parameters θ.
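For reference, this bound follows from Jensen's inequality; a standard textbook derivation, not specific to this paper:

```latex
\begin{aligned}
\log P_w(O) &= \log \sum_{H} Q_\theta(H \mid O)\,\frac{P_w(O, H)}{Q_\theta(H \mid O)} \\
&\ge \sum_{H} Q_\theta(H \mid O) \log \frac{P_w(O, H)}{Q_\theta(H \mid O)}
  \qquad \text{(Jensen's inequality)} \\
&= \mathbb{E}_{Q_\theta(H \mid O)}\!\left[\log P_w(O, H) - \log Q_\theta(H \mid O)\right]
  = \mathcal{L}_{\mathrm{ELBO}}(Q_\theta, P_w),
\end{aligned}
```

with equality exactly when Q_θ(H|O) matches the true posterior P_w(H|O).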
- E Step ---- Inference

In the E-step, the formula weights w are held fixed and the variational posterior Q_θ(H|O) is updated to maximize the ELBO. Since the exact posterior is intractable, a mean-field approximation is used: the posterior factorizes over the latent ground predicates, Q_θ(H|O) = ∏_{r(a_r) ∈ H} Q_θ(r(a_r)), with each factor a Bernoulli distribution whose parameter is produced by the inference network (ExpressGNN, described below).
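A minimal runnable sketch of such a mean-field E-step, with a toy linear scorer standing in for the MLN log-joint and a score-function (REINFORCE) gradient estimator; everything here is illustrative rather than the paper's exact algorithm:

```python
import torch

num_latent = 10
logits = torch.zeros(num_latent, requires_grad=True)  # Bernoulli logits, one per latent fact
toy_scores = torch.randn(num_latent)                  # toy stand-in for the MLN potentials
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(200):                               # maximize the ELBO over theta
    q = torch.distributions.Bernoulli(logits=logits)
    h = q.sample((16,))                               # sampled latent worlds, shape (16, num_latent)
    log_joint = h @ toy_scores                        # toy surrogate for log P_w(O, H)
    log_q = q.log_prob(h).sum(dim=-1)                 # log Q_theta(H|O) under mean-field
    # REINFORCE surrogate whose gradient estimates the negative ELBO gradient
    loss = -(log_q * (log_joint - log_q).detach()).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```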
To enhance the inference network, a supervised learning objective is added: the observed facts, whose truth values are known, serve as labeled training data for Q_θ. The overall E-step objective function combines the ELBO with this supervised term.
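A sketch of how such a hybrid objective might be combined; the weighting lam and the function shapes are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def hybrid_objective(neg_elbo, obs_logits, obs_labels, lam=1.0):
    """Negative ELBO plus a supervised BCE term on the observed facts."""
    supervised = F.binary_cross_entropy_with_logits(obs_logits, obs_labels)
    return neg_elbo + lam * supervised

loss = hybrid_objective(neg_elbo=torch.tensor(0.0),
                        obs_logits=torch.randn(5),   # Q_theta's logits for 5 observed facts
                        obs_labels=torch.randint(0, 2, (5,)).float())
```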
- M Step ---- Learning

In the M-step, the inference network Q_θ is held fixed and the formula weights w are updated to maximize E_{Q_θ(H|O)}[log P_w(O, H)]. Because the partition function Z(w) makes the exact objective intractable, the widely used pseudo-log-likelihood is adopted as a surrogate, which decomposes over individual ground predicates conditioned on their Markov blankets.
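A toy M-step sketch under these assumptions: for an exponential-family model, the gradient of the log-likelihood in w_f is the difference between expected formula counts under the data (here, under Q_θ) and under the model, so the update below ascends that direction. The numbers and the model-count stand-in are invented for illustration:

```python
import torch

w = torch.zeros(3, requires_grad=True)             # one weight per logic formula
data_counts = torch.tensor([4.0, 2.5, 7.0])        # E_Q[sum_a phi_f(a)] from the E-step
def model_counts(w):                               # toy stand-in for model expectations
    return torch.sigmoid(w) * 10.0

opt = torch.optim.Adam([w], lr=0.1)
for step in range(100):
    # d/dw log-likelihood = data_counts - model_counts (model term held fixed per step)
    loss = -(w * (data_counts - model_counts(w).detach())).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```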
- ExpressGNN

ExpressGNN is the inference network that parameterizes Q_θ. It runs GNN message passing over the sparse knowledge graph G_K to compute entity embeddings, augments each entity with an additional low-dimensional tunable embedding, and decodes the posterior of each ground predicate from the embeddings of its argument entities. Sharing the GNN parameters across all entities keeps the model compact, while the per-entity tunable embeddings restore expressive power, which is the balance between expressiveness and simplicity mentioned in the introduction.
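A compact PyTorch sketch of this idea. Note the simplifications: message passing here runs over an entity-entity adjacency matrix rather than the bipartite graph G_K, and all sizes and layer choices are invented for illustration:

```python
import torch
import torch.nn as nn

class ExpressGNNSketch(nn.Module):
    def __init__(self, num_entities, num_preds, dim=64, tunable_dim=8, hops=2):
        super().__init__()
        self.base = nn.Embedding(num_entities, dim)              # shared GNN features
        self.tunable = nn.Embedding(num_entities, tunable_dim)   # per-entity tunable part
        self.msg = nn.ModuleList([nn.Linear(dim, dim) for _ in range(hops)])
        self.decode = nn.Sequential(                             # scores every predicate
            nn.Linear(2 * (dim + tunable_dim), dim), nn.ReLU(),
            nn.Linear(dim, num_preds))

    def forward(self, adj, head, tail, pred):
        h = self.base.weight
        for layer in self.msg:                 # aggregate neighbor messages over the KG
            h = torch.relu(layer(adj @ h))
        h = torch.cat([h, self.tunable.weight], dim=-1)
        pair = torch.cat([h[head], h[tail]], dim=-1)
        logits = self.decode(pair)             # (batch, num_preds)
        return logits[torch.arange(len(pred)), pred]  # Bernoulli logit of each query

# Usage: adj is a normalized (num_entities x num_entities) adjacency built from
# the observed facts; head/tail/pred index the ground predicates being queried.
```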
Experiments:

References: