EFFICIENT PROBABILISTIC LOGIC REASONING WITH GRAPH NEURAL NETWORKS
2022-07-03 10:32:00 【kormoie】
Introduction:
Because knowledge graphs contain incorrect, incomplete, or duplicated data, knowledge graph completion is an important task. This paper combines Markov Logic Networks (MLNs) with graph neural networks (GNNs) for variational inference over knowledge graphs, and proposes a GNN variant, ExpressGNN, which strikes a good balance between model expressiveness and simplicity.

Markov Logic Networks (MLNs):
A Markov logic network combines hard logic rules with a probabilistic graphical model and can be applied to many knowledge-graph tasks. The logic rules encode prior knowledge and let MLNs generalize in tasks with little labeled data, while the graphical model provides a principled framework for handling uncertain data. However, inference in an MLN is computationally intensive (typically exponential in the number of entities), which limits real-world applications. Moreover, logic rules can cover only a small fraction of the possible combinations of knowledge-graph relations, so purely rule-based models are also limited.
Graph neural networks (GNNs):
GNNs effectively solve many graph-related problems, but they need enough labeled data for the specific end task to perform well, and knowledge graphs have a long-tail problem: data scarcity in long-tail relations poses a severe challenge to purely data-driven approaches.
This article explores a data-driven method that combines MLNs and GNNs so that prior knowledge encoded in logic rules can be exploited. The paper designs a simple graph neural network, ExpressGNN, to train the MLN efficiently under a variational EM framework.

Related work:
Statistical relational learning
Markov Logic Networks
Graph neural networks
Knowledge graph embedding
Main method:
- Data
A knowledge graph is a triple K = (C, R, O):
a set of entities (constants) C = {c1, …, cM},
a set of relations R = {r1, …, rN},
and a set of observed facts O = {o1, …, oL}.
Entities are constants, and relations are also called predicates. Each predicate is a logic function defined over C. For a specific assignment of entities to its arguments, the predicate is called a ground predicate, and each ground predicate corresponds to a binary random variable.
With an assignment a_r = (c, c′), the ground predicate r(c, c′) can be written r(a_r).
Each observed fact assigns a truth value in {0, 1} to a ground predicate; for example, a fact o can be expressed as [r(c, c′) = 1].
Because unobserved facts far outnumber observed ones, the unobserved facts are treated as latent variables.
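A minimal sketch (not the paper's code; entity and predicate names are invented for illustration) of this notation in Python: entities C, predicates R, observed facts O as truth-valued ground predicates, and the remaining ground predicates as latent variables.

```python
# Entities (constants) C and relations (predicates) R -- toy names for illustration.
C = ["alice", "bob", "carol"]
R = ["Friend", "Smoke"]

# Observed facts O: each fact assigns a truth value in {0, 1} to a ground
# predicate r(a_r), where a_r is a tuple of constants.
O = {
    ("Friend", ("alice", "bob")): 1,
    ("Smoke", ("alice",)): 1,
}

def is_observed(pred, args):
    """A ground predicate not in O is an unobserved fact, i.e. a latent variable."""
    return (pred, args) in O

# Far more ground predicates are unobserved than observed:
all_groundings = [("Smoke", (c,)) for c in C] + \
                 [("Friend", (a, b)) for a in C for b in C if a != b]
latent = [g for g in all_groundings if not is_observed(*g)]
```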
More concretely: a knowledge graph K is represented as a full bipartite graph G_K = (C, O, E), with constant nodes C, observed-fact nodes O, and an edge set E = {e1, …, eT}. An edge e = (c, o, i) connects nodes c and o when the predicate of o uses c as its i-th argument (see Figure 2 of the paper for an example of G_K).
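Building the edge set of G_K can be sketched as follows (illustrative, not the paper's code): each edge records a constant, a fact node, and the argument position linking them.

```python
# Build the bipartite graph G_K = (C, O, E) from observed facts. An edge
# e = (c, o, i) links constant node c to fact node o when o's predicate
# uses c in argument slot i.
facts = [
    ("Friend", ("alice", "bob"), 1),   # o_0: Friend(alice, bob) = 1
    ("Smoke", ("alice",), 1),          # o_1: Smoke(alice) = 1
]

edges = []
for fact_id, (pred, args, value) in enumerate(facts):
    for i, c in enumerate(args):
        edges.append((c, fact_id, i))  # edge labelled by argument position i

# GNN message passing on G_K propagates embeddings along these edges.
```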
A Markov logic network uses logic formulae to define the potential functions of an undirected graphical model.
Form of a logic formula:
a binary function defined as a combination of several predicates. For example (an illustrative rule, not necessarily the paper's), f(c, c′) could be ¬Friend(c, c′) ∨ ¬Smoke(c) ∨ Smoke(c′), i.e. "friends of smokers smoke".
As with predicates, an assignment of constants to the arguments of formula f is denoted a_f, and A_f denotes the set of all consistent assignments of constants.
The joint probability distribution of the observed facts O and the latent (unobserved) facts H is
P_w(O, H) = (1 / Z(w)) · exp( Σ_{f∈F} w_f · Σ_{a_f∈A_f} φ_f(a_f) ),
where φ_f(a_f) ∈ {0, 1} is the truth value of the ground formula, w_f is the weight of formula f, and Z(w) is the partition function.
Note the difference between the KG and the MLN: the KG is sparse (only the observed facts), while the grounded MLN is dense (all ground predicates and ground formulae).
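The joint distribution above can be checked numerically on a toy model small enough to enumerate. This sketch (illustrative only; the rule and the weight are invented) computes P_w(O, H) = exp(Σ_f w_f · n_f(world)) / Z(w), where n_f counts the true groundings of formula f in a given world.

```python
import itertools
import math

# World: truth values for the ground predicates Smoke(a), Smoke(b), Friend(a, b).
VARS = ["Smoke_a", "Smoke_b", "Friend_ab"]

def formula_count(world):
    """Groundings of the illustrative rule Friend(a,b) ∧ Smoke(a) ⟹ Smoke(b),
    i.e. ¬Friend(a,b) ∨ ¬Smoke(a) ∨ Smoke(b); only one grounding here."""
    sa, sb, fab = world["Smoke_a"], world["Smoke_b"], world["Friend_ab"]
    return int((not fab) or (not sa) or sb)

w = 1.5  # formula weight (arbitrary toy value)

def unnormalized(world):
    return math.exp(w * formula_count(world))

worlds = [dict(zip(VARS, vals)) for vals in itertools.product([0, 1], repeat=3)]
Z = sum(unnormalized(wd) for wd in worlds)  # partition function by enumeration
prob = {tuple(wd.values()): unnormalized(wd) / Z for wd in worlds}
```

Worlds that violate the rule (Friend(a,b) and Smoke(a) true but Smoke(b) false) get exponentially lower probability, which is exactly the soft-constraint semantics of MLN weights.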
EM: the expectation-maximization algorithm.
VARIATIONAL EM FOR MARKOV LOGIC NETWORKS
The Markov logic network defines a joint probability model over all observed and latent variables, and it could in principle be trained by maximizing the log-likelihood of all observed facts. Directly maximizing this objective is hard, because it requires computing the partition function and summing over all variables H and O. Therefore the variational evidence lower bound (ELBO) of the data log-likelihood is optimized instead:
log P_w(O) ≥ ELBO = E_{q(H|O)}[ log P_w(O, H) ] − E_{q(H|O)}[ log q(H | O) ].
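The ELBO property can be verified numerically on a toy model with a single binary latent variable (the joint probabilities below are invented for illustration): the bound is below log P(O) for every q and tight exactly at the true posterior.

```python
import math

# Toy joint p(O=1, H=h) for a single binary latent H -- illustrative numbers.
p_joint = {0: 0.1, 1: 0.3}
log_evidence = math.log(sum(p_joint.values()))  # log p(O=1)

def elbo(q1):
    """ELBO = E_q[log p(O, H)] - E_q[log q(H)] for q(H=1) = q1."""
    total = 0.0
    for h, q in ((0, 1.0 - q1), (1, q1)):
        if q > 0:
            total += q * (math.log(p_joint[h]) - math.log(q))
    return total

# ELBO is tight at the true posterior q* = p(H=1 | O=1).
q_star = p_joint[1] / sum(p_joint.values())
```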
E-step: inference
In the E-step, the MLN weights w are fixed and the variational posterior q(H | O) is optimized to maximize the ELBO.
To strengthen the inference network, a supervised learning objective is added: some observed facts are masked, and the inference network is trained to predict their truth values.
The overall E-step objective therefore combines the ELBO with this supervised term.
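The hybrid E-step objective can be sketched as follows (toy model, not the paper's code; the joint probabilities, the mixing weight `lam`, and the grid-search stand-in for gradient ascent are all illustrative assumptions): with w fixed, q is chosen to maximize ELBO(q) + λ · L_sup(q), where L_sup scores q on a masked observed fact.

```python
import math

p_joint = {0: 0.1, 1: 0.3}   # toy p(O, H=h), illustrative numbers

def elbo(q1):
    total = 0.0
    for h, q in ((0, 1.0 - q1), (1, q1)):
        if q > 0:
            total += q * (math.log(p_joint[h]) - math.log(q))
    return total

def supervised_term(q1, masked_truth=1):
    """Log-likelihood of a masked observed fact under the posterior q."""
    return math.log(q1 if masked_truth == 1 else 1.0 - q1)

lam = 0.5  # illustrative mixing weight between ELBO and supervised loss

def hybrid(q1):
    return elbo(q1) + lam * supervised_term(q1)

# E-step by grid search over q(H=1) (a stand-in for gradient ascent on the
# inference-network parameters).
grid = [i / 1000 for i in range(1, 1000)]
q_elbo = max(grid, key=elbo)     # pure-ELBO optimum: the true posterior 0.75
q_best = max(grid, key=hybrid)   # supervised term pulls q toward the masked fact
```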
- M-step: learning
In the M-step, the posterior q is fixed and the formula weights w are updated to maximize E_q[log P_w(O, H)]. Because the partition function makes this intractable, the weights are learned with the pseudo-log-likelihood instead.
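For intuition, the exact M-step gradient for a single formula weight has the classic form d/dw E_q[log P_w] = E_q[n_f] − E_{P_w}[n_f]; the second term is what the pseudo-log-likelihood avoids. This sketch (toy model from the earlier illustrative rule; the mean-field marginals and learning rate are invented) computes both expectations by enumeration and takes one ascent step.

```python
import itertools
import math

def n_f(sa, sb, fab):
    # illustrative rule: ¬Friend(a,b) ∨ ¬Smoke(a) ∨ Smoke(b)
    return int((not fab) or (not sa) or sb)

worlds = list(itertools.product([0, 1], repeat=3))  # (Smoke_a, Smoke_b, Friend_ab)

def model_expectation(w):
    """E_{P_w}[n_f], exact by enumerating all 8 worlds (intractable in general)."""
    weights = [math.exp(w * n_f(*wd)) for wd in worlds]
    Z = sum(weights)
    return sum(p * n_f(*wd) for p, wd in zip(weights, worlds)) / Z

# Mean-field posterior q: independent Bernoulli marginals (toy values).
marginals = (0.9, 0.8, 1.0)

def q_prob(wd):
    return math.prod(p if v else 1 - p for p, v in zip(marginals, wd))

data_expectation = sum(q_prob(wd) * n_f(*wd) for wd in worlds)  # E_q[n_f]

w, lr = 1.0, 0.5
grad = data_expectation - model_expectation(w)
w_new = w + lr * grad  # one gradient-ascent step on the formula weight
```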
- ExpressGNN
ExpressGNN encodes each entity with the concatenation of a GNN embedding (computed by message passing on G_K with compact, shared parameters) and a tunable entity-specific embedding; mixing the two trades off compactness against expressiveness. The posterior q of each ground predicate is then parameterized from the embeddings of its arguments.
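A sketch of this parameterization (illustrative, not the paper's architecture: `fake_gnn_embedding` is a hypothetical stand-in for message passing on G_K, and the dot-product scorer and all dimensions are invented):

```python
import math
import random

D_GNN, D_TUNE = 4, 2  # toy embedding dimensions

def fake_gnn_embedding(entity):
    """Stand-in for the GNN embedding computed by message passing on G_K."""
    rng = random.Random(sum(ord(ch) for ch in entity))  # deterministic toy values
    return [rng.uniform(-1, 1) for _ in range(D_GNN)]

# Tunable per-entity embeddings (would be learned; zeros here).
tunable = {e: [0.0] * D_TUNE for e in ["alice", "bob"]}

def entity_embedding(e):
    return fake_gnn_embedding(e) + tunable[e]  # concatenation of the two parts

def posterior(pred_weights, c1, c2):
    """Bernoulli posterior q(r(c1, c2) = 1) from the argument embeddings,
    scored here with a simple per-predicate linear function."""
    x = entity_embedding(c1) + entity_embedding(c2)
    score = sum(wi * xi for wi, xi in zip(pred_weights, x))
    return 1.0 / (1.0 + math.exp(-score))

w_friend = [0.1] * (2 * (D_GNN + D_TUNE))  # toy per-predicate parameters
q_ab = posterior(w_friend, "alice", "bob")
```

The design point is the split: the GNN part is compact (parameters shared across all entities) while the tunable part restores per-entity expressiveness.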
Experiments: