Prof. Dr. C. Lee Giles - Pennsylvania State University
Oct. 11, 2019, 1:30 p.m. - Oct. 11, 2019, 2:30 p.m.
Wilson 105
Hosted by: Prof. Xue Liu
Neural networks are often considered to be black-box models. However, discrete-time recurrent neural networks (RNNs), which are among the most commonly used, have properties that closely relate them to automata and formal grammars, and thus lend themselves to the extraction and insertion of grammar rules. Assume that we have a discrete-time RNN that has been trained on sequential data. For each discrete time step, or a collection thereof, an input can be associated with the RNN's current and previous activations. We can then cluster these activations into states to obtain a previous-state-to-current-state transition governed by an input. From a formal-grammar perspective, these state-to-state transitions can be viewed as production rules. Once the rules are extracted, a minimal unique set of states can be readily obtained. It can be shown that, for learning known production rules of regular grammars, the extracted rules are stable and independent of initial conditions and, at times, outperform the trained source neural network in classification accuracy. Theoretical work has also shown that regular-expression production rules can easily be inserted into certain types of RNNs, and has proved that the resulting systems are stable. Since black-box models are not acceptable in many problem areas such as finance, medicine, and security, the methods discussed here have the potential to uncover what a trained RNN is doing from a regular-grammar and finite-state-machine perspective. We will discuss the strengths, weaknesses, and issues associated with these and related methods, as well as applications such as verification.
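
To make the extraction step concrete, the sketch below illustrates one simple way (not necessarily the speaker's exact algorithm) to turn hidden activations into state-transition rules: cluster the activations into a small number of discrete states and record each observed (previous state, input symbol) → current state triple as a candidate production rule. The function name, the use of k-means, and the assumption of a zero initial hidden state are illustrative choices, not details from the talk.

```python
# A minimal sketch of rule extraction from a trained discrete-time RNN by
# clustering hidden activations. Assumptions: per-time-step hidden activations
# are available for each input sequence, along with the aligned input symbols;
# cluster indices serve as the abstract automaton states.
import numpy as np
from sklearn.cluster import KMeans


def extract_transition_rules(hidden_states, inputs, n_states=8):
    """hidden_states: list of (T, H) arrays, one per sequence.
    inputs: list of length-T symbol sequences aligned with hidden_states.
    Returns a dict mapping (previous_state, input_symbol) -> current_state."""
    all_h = np.vstack(hidden_states)
    # Quantize the continuous activation space into a small number of discrete states.
    km = KMeans(n_clusters=n_states, n_init=10).fit(all_h)

    # State before any symbol is read: here, the cluster containing the zero vector,
    # a stand-in for an (assumed) zero initial hidden state.
    start_state = int(km.predict(np.zeros((1, all_h.shape[1])))[0])

    rules = {}
    for h_seq, x_seq in zip(hidden_states, inputs):
        states = km.predict(h_seq)  # abstract state reached after each time step
        prev = start_state
        for sym, cur in zip(x_seq, states):
            # Each observed (previous state, input symbol) -> current state triple
            # becomes a candidate production rule of the extracted automaton.
            rules[(prev, sym)] = int(cur)
            prev = int(cur)
    return rules
```

From such rules a deterministic finite automaton can be assembled and then reduced with standard state minimization, which corresponds to the minimal unique set of states mentioned in the abstract.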