Augmenting Neural Networks with First-order Logic

Tao Li, Vivek Srikumar, ACL 2019

This paper addresses the problem of incorporating declarative knowledge into a neural network. The authors propose converting the (easily available) first-order logic representation of the knowledge into a network and provide a framework for attaching this network to any neural network of choice. The main motivation for using declarative knowledge as an inductive bias is to reduce dependence on data, i.e., to achieve comparable performance with fewer examples.

To convert the FOL rules into a network, each predicate in a rule is mapped to a named neuron. For example, given a rule A ∧ B → Y, the network will have three named neurons, A, B, and Y, with arrows from A and B to Y. The Łukasiewicz t-norm and t-conorm are used as the functions for the logical operators (conjunction: max(0, a + b − 1); disjunction: min(1, a + b)), a choice inspired by the probabilistic soft logic literature. Auxiliary variables and auxiliary named neurons are introduced as needed to compute nested logical operations; for example, the rule above is converted to Z → Y with an auxiliary named neuron Z and Z = A ∧ B. The benefit of using the Łukasiewicz functions is that they are differentiable. This rule network has no parameters of its own and therefore requires no learning.
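As a concrete sketch (hypothetical code, not from the paper), the Łukasiewicz operators and the compilation of a rule into a small computation graph look roughly like this:

```python
# Hypothetical sketch (not the authors' code): Łukasiewicz operators over
# truth values in [0, 1]. They are piecewise linear, hence differentiable
# almost everywhere, which is why the paper adopts them.
def luk_and(a, b):      # t-norm: conjunction
    return max(0.0, a + b - 1.0)

def luk_or(a, b):       # t-conorm: disjunction
    return min(1.0, a + b)

def luk_not(a):         # negation
    return 1.0 - a

def luk_implies(a, b):  # implication a -> b
    return min(1.0, 1.0 - a + b)

# The rule (A and B) -> Y as a tiny network: the auxiliary named neuron Z
# computes the conjunction, and the output is the rule's degree of truth.
def rule_network(a, b, y):
    z = luk_and(a, b)
    return luk_implies(z, y)

print(rule_network(1.0, 1.0, 0.0))   # premises true, Y false -> 0.0
print(rule_network(0.25, 0.5, 1.0))  # Y true -> rule fully satisfied, 1.0
```

In the actual model these would be tensor operations, so that gradients can flow through the rule network into the base network.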

To ensure that the network is acyclic, the authors recommend using contrapositive statements when needed. For example, if a rule Y → A introduces a cycle in the network, its contrapositive equivalent ¬A → ¬Y is used instead.
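One way to see why this substitution is safe (an illustrative check, not from the paper): under the Łukasiewicz implication min(1, 1 − a + b), a rule and its contrapositive always take the same truth value, so redirecting the edges does not change the constraint being imposed.

```python
def luk_implies(a, b):
    # Łukasiewicz implication: truth value of a -> b
    return min(1.0, 1.0 - a + b)

def luk_not(a):
    return 1.0 - a

# Y -> A and its contrapositive (not A) -> (not Y) agree everywhere:
# min(1, 1 - y + a) == min(1, 1 - (1 - a) + (1 - y)).
grid = [0.0, 0.25, 0.5, 0.75, 1.0]  # dyadic values: float-exact arithmetic
for y in grid:
    for a in grid:
        assert luk_implies(y, a) == luk_implies(luk_not(a), luk_not(y))
print("contrapositive check passed")
```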

This rule network is attached as a constraint to some layer of the original neural network. The constrained neural layer is defined as y = g(Wx + ρ d(z)), where g is the layer's activation function, d(z) is the term computed by the rule network from the named neurons z, and the hyperparameter ρ handles the importance factor of the constraint.
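A minimal sketch of a single constrained neuron (hypothetical names, and an assumed sign convention in which d(z) ≤ 0 encodes the degree of rule violation): the rule network's output is simply added to the pre-activation, scaled by the importance hyperparameter ρ.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def constrained_neuron(w, x, b, rho, d_z):
    """y = g(w.x + b + rho * d_z). Here g is a sigmoid, d_z is the
    (assumed non-positive) violation term from the rule network, and
    rho >= 0 is the importance hyperparameter."""
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(pre + rho * d_z)

w, x, b = [0.5, -0.2], [1.0, 2.0], 0.1
# rho = 0 recovers the unconstrained neuron; larger rho suppresses
# activations that violate the rule (d_z < 0).
print(constrained_neuron(w, x, b, rho=0.0, d_z=-1.0))
print(constrained_neuron(w, x, b, rho=4.0, d_z=-1.0))
```

Setting ρ = 0 recovers the base network exactly, which is what makes ρ a clean knob for trading off the data against the knowledge.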

The authors empirically evaluate their proposed augmented NN on three tasks: machine comprehension, natural language inference, and text chunking. In each task the augmentation is performed at a different layer. In the machine comprehension task, where they use BiDAF as the base neural network, the constrained augmentation is applied to the attention nodes. In the natural language inference task, they use L-DAtt as the base method and augment the attention nodes as well as the label nodes. In the text chunking task, they augment the label layer. These experiments confirm their hypothesis that using the knowledge improves performance, but only when data is scarce; with more data, the augmented knowledge does not improve performance significantly.


  • The proposed framework for augmenting NNs is very general and hence can potentially be used in any task where deep neural networks are used.
  • I haven't quite understood the emphasis on the differentiability of the augmented network, since it has no parameters of its own to learn; the importance hyperparameter is tuned, not trained.
  • The right-hand side of a rule looks pretty limited, and the rules used in the experiments are also very simple.
  • In the text chunking task, one would expect the bidirectional LSTM to be able to learn such simple label rules on its own. It is also not clear from the experiments which rule improves the results in this task.