Dataset Viewer

| text (stringlengths 1 to 7.76k) | source (stringlengths 17 to 81) |
|---|---|
Natural Language Processing Jacob Eisenstein November 13, 2018
|
nlp_Page_1_Chunk1
|
Contents

Preface
  Background
  How to use this book

1 Introduction
  1.1 Natural language processing and its neighbors
  1.2 Three themes in natural language processing
    1.2.1 Learning and knowledge
    1.2.2 Search and learning
    1.2.3 Relational, compositional, and distributional perspectives

I Learning

2 Linear text classification
  2.1 The bag of words
  2.2 Naïve Bayes
    2.2.1 Types and tokens
    2.2.2 Prediction
    2.2.3 Estimation
    2.2.4 Smoothing
    2.2.5 Setting hyperparameters
  2.3 Discriminative learning
    2.3.1 Perceptron
    2.3.2 Averaged perceptron
  2.4 Loss functions and large-margin classification
    2.4.1 Online large margin classification
    2.4.2 *Derivation of the online support vector machine
  2.5 Logistic regression
|
nlp_Page_3_Chunk2
|
|
nlp_Page_3_Chunk3
|
    2.5.1 Regularization
    2.5.2 Gradients
  2.6 Optimization
    2.6.1 Batch optimization
    2.6.2 Online optimization
  2.7 *Additional topics in classification
    2.7.1 Feature selection by regularization
    2.7.2 Other views of logistic regression
  2.8 Summary of learning algorithms

3 Nonlinear classification
  3.1 Feedforward neural networks
  3.2 Designing neural networks
    3.2.1 Activation functions
    3.2.2 Network structure
    3.2.3 Outputs and loss functions
    3.2.4 Inputs and lookup layers
  3.3 Learning neural networks
    3.3.1 Backpropagation
    3.3.2 Regularization and dropout
    3.3.3 *Learning theory
    3.3.4 Tricks
  3.4 Convolutional neural networks
|
nlp_Page_4_Chunk4
|
4 Linguistic applications of classification
  4.1 Sentiment and opinion analysis
    4.1.1 Related problems
    4.1.2 Alternative approaches to sentiment analysis
  4.2 Word sense disambiguation
    4.2.1 How many word senses?
    4.2.2 Word sense disambiguation as classification
  4.3 Design decisions for text classification
    4.3.1 What is a word?
    4.3.2 How many words?
    4.3.3 Count or binary?
  4.4 Evaluating classifiers
    4.4.1 Precision, recall, and F-MEASURE
    4.4.2 Threshold-free metrics
    4.4.3 Classifier comparison and statistical significance
    4.4.4 *Multiple comparisons
  4.5 Building datasets
|
nlp_Page_4_Chunk5
|
    4.5.1 Metadata as labels
    4.5.2 Labeling data

5 Learning without supervision
  5.1 Unsupervised learning
    5.1.1 K-means clustering
    5.1.2 Expectation-Maximization (EM)
    5.1.3 EM as an optimization algorithm
    5.1.4 How many clusters?
  5.2 Applications of expectation-maximization
    5.2.1 Word sense induction
    5.2.2 Semi-supervised learning
    5.2.3 Multi-component modeling
  5.3 Semi-supervised learning
    5.3.1 Multi-view learning
    5.3.2 Graph-based algorithms
  5.4 Domain adaptation
    5.4.1 Supervised domain adaptation
    5.4.2 Unsupervised domain adaptation
  5.5 *Other approaches to learning with latent variables
    5.5.1 Sampling
    5.5.2 Spectral learning

II Sequences and trees

6 Language models
  6.1 N-gram language models
  6.2 Smoothing and discounting
|
nlp_Page_5_Chunk6
|
    6.2.1 Smoothing
    6.2.2 Discounting and backoff
    6.2.3 *Interpolation
    6.2.4 *Kneser-Ney smoothing
  6.3 Recurrent neural network language models
    6.3.1 Backpropagation through time
    6.3.2 Hyperparameters
    6.3.3 Gated recurrent neural networks
  6.4 Evaluating language models
    6.4.1 Held-out likelihood
    6.4.2 Perplexity
  6.5 Out-of-vocabulary words
|
nlp_Page_5_Chunk7
|
7 Sequence labeling
  7.1 Sequence labeling as classification
  7.2 Sequence labeling as structure prediction
  7.3 The Viterbi algorithm
    7.3.1 Example
    7.3.2 Higher-order features
  7.4 Hidden Markov Models
    7.4.1 Estimation
    7.4.2 Inference
  7.5 Discriminative sequence labeling with features
    7.5.1 Structured perceptron
    7.5.2 Structured support vector machines
    7.5.3 Conditional random fields
  7.6 Neural sequence labeling
    7.6.1 Recurrent neural networks
    7.6.2 Character-level models
    7.6.3 Convolutional Neural Networks for Sequence Labeling
  7.7 *Unsupervised sequence labeling
    7.7.1 Linear dynamical systems
    7.7.2 Alternative unsupervised learning methods
    7.7.3 Semiring notation and the generalized Viterbi algorithm

8 Applications of sequence labeling
  8.1 Part-of-speech tagging
    8.1.1 Parts-of-Speech
    8.1.2 Accurate part-of-speech tagging
|
nlp_Page_6_Chunk8
|
  8.2 Morphosyntactic Attributes
  8.3 Named Entity Recognition
  8.4 Tokenization
  8.5 Code switching
  8.6 Dialogue acts

9 Formal language theory
  9.1 Regular languages
    9.1.1 Finite state acceptors
    9.1.2 Morphology as a regular language
    9.1.3 Weighted finite state acceptors
    9.1.4 Finite state transducers
    9.1.5 *Learning weighted finite state automata
  9.2 Context-free languages
    9.2.1 Context-free grammars
|
nlp_Page_6_Chunk9
|
    9.2.2 Natural language syntax as a context-free language
    9.2.3 A phrase-structure grammar for English
    9.2.4 Grammatical ambiguity
  9.3 *Mildly context-sensitive languages
    9.3.1 Context-sensitive phenomena in natural language
    9.3.2 Combinatory categorial grammar

10 Context-free parsing
  10.1 Deterministic bottom-up parsing
    10.1.1 Recovering the parse tree
    10.1.2 Non-binary productions
    10.1.3 Complexity
  10.2 Ambiguity
    10.2.1 Parser evaluation
    10.2.2 Local solutions
  10.3 Weighted Context-Free Grammars
    10.3.1 Parsing with weighted context-free grammars
    10.3.2 Probabilistic context-free grammars
    10.3.3 *Semiring weighted context-free grammars
  10.4 Learning weighted context-free grammars
    10.4.1 Probabilistic context-free grammars
    10.4.2 Feature-based parsing
    10.4.3 *Conditional random field parsing
    10.4.4 Neural context-free grammars
  10.5 Grammar refinement
    10.5.1 Parent annotations and other tree transformations
    10.5.2 Lexicalized context-free grammars
|
nlp_Page_7_Chunk10
|
    10.5.3 *Refinement grammars
  10.6 Beyond context-free parsing
    10.6.1 Reranking
    10.6.2 Transition-based parsing

11 Dependency parsing
  11.1 Dependency grammar
    11.1.1 Heads and dependents
    11.1.2 Labeled dependencies
    11.1.3 Dependency subtrees and constituents
  11.2 Graph-based dependency parsing
    11.2.1 Graph-based parsing algorithms
    11.2.2 Computing scores for dependency arcs
    11.2.3 Learning
|
nlp_Page_7_Chunk11
|
  11.3 Transition-based dependency parsing
    11.3.1 Transition systems for dependency parsing
    11.3.2 Scoring functions for transition-based parsers
    11.3.3 Learning to parse
  11.4 Applications

III Meaning

12 Logical semantics
  12.1 Meaning and denotation
  12.2 Logical representations of meaning
    12.2.1 Propositional logic
    12.2.2 First-order logic
  12.3 Semantic parsing and the lambda calculus
    12.3.1 The lambda calculus
    12.3.2 Quantification
  12.4 Learning semantic parsers
    12.4.1 Learning from derivations
    12.4.2 Learning from logical forms
    12.4.3 Learning from denotations

13 Predicate-argument semantics
  13.1 Semantic roles
    13.1.1 VerbNet
    13.1.2 Proto-roles and PropBank
    13.1.3 FrameNet
  13.2 Semantic role labeling
|
nlp_Page_8_Chunk12
|
    13.2.1 Semantic role labeling as classification
    13.2.2 Semantic role labeling as constrained optimization
    13.2.3 Neural semantic role labeling
  13.3 Abstract Meaning Representation
    13.3.1 AMR Parsing

14 Distributional and distributed semantics
  14.1 The distributional hypothesis
  14.2 Design decisions for word representations
    14.2.1 Representation
    14.2.2 Context
    14.2.3 Estimation
  14.3 Latent semantic analysis
|
nlp_Page_8_Chunk13
|
  14.4 Brown clusters
  14.5 Neural word embeddings
    14.5.1 Continuous bag-of-words (CBOW)
    14.5.2 Skipgrams
    14.5.3 Computational complexity
    14.5.4 Word embeddings as matrix factorization
  14.6 Evaluating word embeddings
    14.6.1 Intrinsic evaluations
    14.6.2 Extrinsic evaluations
    14.6.3 Fairness and bias
  14.7 Distributed representations beyond distributional statistics
    14.7.1 Word-internal structure
    14.7.2 Lexical semantic resources
  14.8 Distributed representations of multiword units
    14.8.1 Purely distributional methods
    14.8.2 Distributional-compositional hybrids
    14.8.3 Supervised compositional methods
    14.8.4 Hybrid distributed-symbolic representations

15 Reference Resolution
  15.1 Forms of referring expressions
    15.1.1 Pronouns
    15.1.2 Proper Nouns
    15.1.3 Nominals
  15.2 Algorithms for coreference resolution
|
nlp_Page_9_Chunk14
|
    15.2.1 Mention-pair models
    15.2.2 Mention-ranking models
    15.2.3 Transitive closure in mention-based models
    15.2.4 Entity-based models
  15.3 Representations for coreference resolution
    15.3.1 Features
    15.3.2 Distributed representations of mentions and entities
  15.4 Evaluating coreference resolution

16 Discourse
  16.1 Segments
    16.1.1 Topic segmentation
    16.1.2 Functional segmentation
  16.2 Entities and reference
    16.2.1 Centering theory
    16.2.2 The entity grid
|
nlp_Page_9_Chunk15
|
    16.2.3 *Formal semantics beyond the sentence level
  16.3 Relations
    16.3.1 Shallow discourse relations
    16.3.2 Hierarchical discourse relations
    16.3.3 Argumentation
    16.3.4 Applications of discourse relations

IV Applications

17 Information extraction
  17.1 Entities
    17.1.1 Entity linking by learning to rank
    17.1.2 Collective entity linking
    17.1.3 *Pairwise ranking loss functions
  17.2 Relations
    17.2.1 Pattern-based relation extraction
    17.2.2 Relation extraction as a classification task
    17.2.3 Knowledge base population
    17.2.4 Open information extraction
  17.3 Events
  17.4 Hedges, denials, and hypotheticals
  17.5 Question answering and machine reading
    17.5.1 Formal semantics
    17.5.2 Machine reading

18 Machine translation
  18.1 Machine translation as a task
    18.1.1 Evaluating translations
|
nlp_Page_10_Chunk16
|
    18.1.2 Data
  18.2 Statistical machine translation
    18.2.1 Statistical translation modeling
    18.2.2 Estimation
    18.2.3 Phrase-based translation
    18.2.4 *Syntax-based translation
  18.3 Neural machine translation
    18.3.1 Neural attention
    18.3.2 *Neural machine translation without recurrence
    18.3.3 Out-of-vocabulary words
  18.4 Decoding
  18.5 Training towards the evaluation metric
|
nlp_Page_10_Chunk17
|
19 Text generation
  19.1 Data-to-text generation
    19.1.1 Latent data-to-text alignment
    19.1.2 Neural data-to-text generation
  19.2 Text-to-text generation
    19.2.1 Neural abstractive summarization
    19.2.2 Sentence fusion for multi-document summarization
  19.3 Dialogue
    19.3.1 Finite-state and agenda-based dialogue systems
    19.3.2 Markov decision processes
    19.3.3 Neural chatbots

A Probability
  A.1 Probabilities of event combinations
    A.1.1 Probabilities of disjoint events
    A.1.2 Law of total probability
  A.2 Conditional probability and Bayes' rule
  A.3 Independence
  A.4 Random variables
  A.5 Expectations
  A.6 Modeling and estimation

B Numerical optimization
  B.1 Gradient descent
  B.2 Constrained optimization
  B.3 Example: Passive-aggressive online learning

Bibliography
|
nlp_Page_11_Chunk18
|
Preface

The goal of this text is to focus on a core subset of natural language processing, unified by the concepts of learning and search. A remarkable number of problems in natural language processing can be solved by a compact set of methods:

Search. Viterbi, CKY, minimum spanning tree, shift-reduce, integer linear programming, beam search.

Learning. Maximum-likelihood estimation, logistic regression, perceptron, expectation-maximization, matrix factorization, backpropagation.

This text explains how these methods work, and how they can be applied to a wide range of tasks: document classification, word sense disambiguation, part-of-speech tagging, named entity recognition, parsing, coreference resolution, relation extraction, discourse analysis, language modeling, and machine translation.

Background

Because natural language processing draws on many different intellectual traditions, almost everyone who approaches it feels underprepared in one way or another. Here is a summary of what is expected, and where you can learn more:

Mathematics and machine learning. The text assumes a background in multivariate calculus and linear algebra: vectors, matrices, derivatives, and partial derivatives. You should also be familiar with probability and statistics. A review of basic probability is found in Appendix A, and a minimal review of numerical optimization is found in Appendix B. For linear algebra, the online course and textbook from Strang (2016) provide an excellent review. Deisenroth et al. (2018) are currently preparing a textbook on Mathematics for Machine Learning; a draft can be found online at https://mml-book.github.io/. For an introduction to probabilistic modeling and estimation, see James et al. (2013); for
|
nlp_Page_13_Chunk19
|
a more advanced and comprehensive discussion of the same material, the classic reference is Hastie et al. (2009).

Linguistics. This book assumes no formal training in linguistics, aside from elementary concepts like nouns and verbs, which you have probably encountered in the study of English grammar. Ideas from linguistics are introduced throughout the text as needed, including discussions of morphology and syntax (chapter 9), semantics (chapters 12 and 13), and discourse (chapter 16). Linguistic issues also arise in the application-focused chapters 4, 8, and 18. A short guide to linguistics for students of natural language processing is offered by Bender (2013); you are encouraged to start there, and then pick up a more comprehensive introductory textbook (e.g., Akmajian et al., 2010; Fromkin et al., 2013).

Computer science. The book is targeted at computer scientists, who are assumed to have taken introductory courses on the analysis of algorithms and complexity theory. In particular, you should be familiar with asymptotic analysis of the time and memory costs of algorithms, and with the basics of dynamic programming. The classic text on algorithms is offered by Cormen et al. (2009); for an introduction to the theory of computation, see Arora and Barak (2009) and Sipser (2012).

How to use this book

After the introduction, the textbook is organized into four main units:

Learning. This section builds up a set of machine learning tools that will be used throughout the other sections. Because the focus is on machine learning, the text representations and linguistic phenomena are mostly simple: "bag-of-words" text classification is treated as a model example. Chapter 4 describes some of the more linguistically interesting applications of word-based text analysis.

Sequences and trees. This section introduces the treatment of language as a structured phenomenon. It describes sequence and tree representations and the algorithms that they facilitate, as well as the limitations that these representations impose. Chapter 9 introduces finite state automata and briefly overviews a context-free account of English syntax.

Meaning. This section takes a broad view of efforts to represent and compute meaning from text, ranging from formal logic to neural word embeddings. It also includes two topics that are closely related to semantics: resolution of ambiguous references, and analysis of multi-sentence discourse structure.

Applications. The final section offers chapter-length treatments of three of the most prominent applications of natural language processing: information extraction, machine
|
nlp_Page_14_Chunk20
|
translation, and text generation. Each of these applications merits a textbook-length treatment of its own (Koehn, 2009; Grishman, 2012; Reiter and Dale, 2000); the chapters here explain some of the best-known systems using the formalisms and methods built up earlier in the book, while introducing methods such as neural attention.

Each chapter contains some advanced material, which is marked with an asterisk. This material can be safely omitted without causing misunderstandings later on. But even without these advanced sections, the text is too long for a single-semester course, so instructors will have to pick and choose among the chapters.

Chapters 1-3 provide building blocks that will be used throughout the book, and chapter 4 describes some critical aspects of the practice of language technology. Language models (chapter 6), sequence labeling (chapter 7), and parsing (chapters 10 and 11) are canonical topics in natural language processing, and distributed word embeddings (chapter 14) have become ubiquitous. Of the applications, machine translation (chapter 18) is the best choice: it is more cohesive than information extraction, and more mature than text generation. Many students will benefit from the review of probability in Appendix A.

• A course focusing on machine learning should add the chapter on unsupervised learning (chapter 5). The chapters on predicate-argument semantics (chapter 13), reference resolution (chapter 15), and text generation (chapter 19) are particularly influenced by recent progress in machine learning, including deep neural networks and learning to search.

• A course with a more linguistic orientation should add the chapters on applications of sequence labeling (chapter 8), formal language theory (chapter 9), semantics (chapters 12 and 13), and discourse (chapter 16).

• For a course with a more applied focus, I recommend the chapters on applications of sequence labeling (chapter 8), predicate-argument semantics (chapter 13), information extraction (chapter 17), and text generation (chapter 19).

Acknowledgments

Several colleagues, students, and friends read early drafts of chapters in their areas of expertise, including Yoav Artzi, Kevin Duh, Heng Ji, Jessy Li, Brendan O'Connor, Yuval Pinter, Shawn Ling Ramirez, Nathan Schneider, Pamela Shapiro, Noah A. Smith, Sandeep Soni, and Luke Zettlemoyer. I also thank the anonymous reviewers, particularly reviewer 4, who provided detailed line-by-line edits and suggestions. The text benefited from high-level discussions with my editor Marie Lufkin Lee, as well as Kevin Murphy, Shawn Ling Ramirez, and Bonnie Webber. In addition, there are many students, colleagues, friends, and family who found mistakes in early drafts, or who recommended key references.
|
nlp_Page_15_Chunk21
|
These include: Parminder Bhatia, Kimberly Caras, Jiahao Cai, Justin Chen, Rodolfo Delmonte, Murtaza Dhuliawala, Yantao Du, Barbara Eisenstein, Luiz C. F. Ribeiro, Chris Gu, Joshua Killingsworth, Jonathan May, Taha Merghani, Gus Monod, Raghavendra Murali, Nidish Nair, Brendan O'Connor, Dan Oneata, Brandon Peck, Yuval Pinter, Nathan Schneider, Jianhao Shen, Zhewei Sun, Rubin Tsui, Ashwin Cunnapakkam Vinjimur, Denny Vrandečić, William Yang Wang, Clay Washington, Ishan Waykul, Aobo Yang, Xavier Yao, Yuyu Zhang, and several anonymous commenters. Clay Washington tested some of the programming exercises, and Varun Gupta tested some of the written exercises. Thanks to Kelvin Xu for sharing a high-resolution version of Figure 19.3.

Most of the book was written while I was at Georgia Tech's School of Interactive Computing. I thank the School for its support of this project, and I thank my colleagues there for their help and support at the beginning of my faculty career. I also thank (and apologize to) the many students in Georgia Tech's CS 4650 and 7650 who suffered through early versions of the text.

The book is dedicated to my parents.
|
nlp_Page_16_Chunk22
|
Notation

As a general rule, words, word counts, and other types of observations are indicated with Roman letters (a, b, c); parameters are indicated with Greek letters (α, β, θ). Vectors are indicated with bold script for both random variables x and parameters θ. Other useful notations are indicated in the table below.

Basics
- exp x: the base-2 exponent, 2^x
- log x: the base-2 logarithm, log_2 x
- {x_n}_{n=1}^N: the set {x_1, x_2, ..., x_N}
- x_i^j: x_i raised to the power j
- x_i^{(j)}: indexing by both i and j

Linear algebra
- x^{(i)}: a column vector of feature counts for instance i, often word counts
- x_{j:k}: elements j through k (inclusive) of a vector x
- [x; y]: vertical concatenation of two column vectors
- [x, y]: horizontal concatenation of two column vectors
- e_n: a "one-hot" vector with a value of 1 at position n, and zero everywhere else
- θ^⊤: the transpose of a column vector θ
- θ · x^{(i)}: the dot product ∑_{j=1}^N θ_j × x_j^{(i)}
- X: a matrix
- x_{i,j}: row i, column j of matrix X
- Diag(x): a matrix with x on the diagonal, e.g., Diag([x_1, x_2, x_3]) has x_1, x_2, x_3 on the diagonal and zeros elsewhere
- X^{-1}: the inverse of matrix X
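As a quick sanity check on the vector notation, here is a minimal sketch in numpy; the particular values are arbitrary and are only meant to illustrate the definitions above, not anything used later in the book.

```python
import numpy as np

# Arbitrary illustrative values: x^(i) is a feature-count vector, theta a parameter vector.
x_i = np.array([2.0, 0.0, 1.0])
theta = np.array([0.5, -1.0, 0.25])

# theta . x^(i) = sum_j theta_j * x_j^(i)
print(theta @ x_i)                    # 1.25

# e_n: a one-hot vector with a 1 at position n (0-indexed here)
e_1 = np.eye(3)[1]
print(e_1)                            # [0. 1. 0.]

# [x; y]: vertical concatenation of column vectors; Diag(x): x on the diagonal
y = np.array([3.0, 4.0])
print(np.concatenate([x_i, y]))       # [2. 0. 1. 3. 4.]
print(np.diag(x_i))                   # 3x3 matrix with 2, 0, 1 on the diagonal
```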
|
nlp_Page_17_Chunk23
|
Text datasets
- w_m: word token at position m
- N: number of training instances
- M: length of a sequence (of words or tags)
- V: number of words in vocabulary
- y^{(i)}: the true label for instance i
- ŷ: a predicted label
- Y: the set of all possible labels
- K: number of possible labels, K = |Y|
- □: the start token
- ■: the stop token
- y^{(i)}: a structured label for instance i, such as a tag sequence
- Y(w): the set of possible labelings for the word sequence w
- ♢: the start tag
- ♦: the stop tag

Probabilities
- Pr(A): probability of event A
- Pr(A | B): probability of event A, conditioned on event B
- p_B(b): the marginal probability of random variable B taking value b; written p(b) when the choice of random variable is clear from context
- p_{B|A}(b | a): the probability of random variable B taking value b, conditioned on A taking value a; written p(b | a) when clear from context
- A ∼ p: the random variable A is distributed according to distribution p. For example, X ∼ N(0, 1) states that the random variable X is drawn from a normal distribution with zero mean and unit variance.
- A | B ∼ p: conditioned on the random variable B, A is distributed according to p.

Machine learning
- Ψ(x^{(i)}, y): the score for assigning label y to instance i
- f(x^{(i)}, y): the feature vector for instance i with label y
- θ: a (column) vector of weights
- ℓ^{(i)}: loss on an individual instance i
- L: objective function for an entire dataset
- L: log-likelihood of a dataset
- λ: the amount of regularization
|
nlp_Page_18_Chunk24
|
Chapter 1: Introduction

Natural language processing is the set of methods for making human language accessible to computers. In the past decade, natural language processing has become embedded in our daily lives: automatic machine translation is ubiquitous on the web and in social media; text classification keeps our email inboxes from collapsing under a deluge of spam; search engines have moved beyond string matching and network analysis to a high degree of linguistic sophistication; dialogue systems provide an increasingly common and effective way to get and share information.

These diverse applications are based on a common set of ideas, drawing on algorithms, linguistics, logic, statistics, and more. The goal of this text is to provide a survey of these foundations. The technical fun starts in the next chapter; the rest of this chapter situates natural language processing with respect to other intellectual disciplines, identifies some high-level themes in contemporary natural language processing, and advises the reader on how best to approach the subject.

1.1 Natural language processing and its neighbors

Natural language processing draws on many other intellectual traditions, from formal linguistics to statistical physics. This section briefly situates natural language processing with respect to some of its closest neighbors.

Computational Linguistics

Most of the meetings and journals that host natural language processing research bear the name "computational linguistics", and the terms may be thought of as essentially synonymous. But while there is substantial overlap, there is an important difference in focus. In linguistics, language is the object of study. Computational methods may be brought to bear, just as in scientific disciplines like computational biology and computational astronomy, but they play only a supporting role. In contrast,
|
nlp_Page_19_Chunk25
|
natural language processing is focused on the design and analysis of computational algorithms and representations for processing natural human language. The goal of natural language processing is to provide new computational capabilities around human language: for example, extracting information from texts, translating between languages, answering questions, holding a conversation, taking instructions, and so on. Fundamental linguistic insights may be crucial for accomplishing these tasks, but success is ultimately measured by whether and how well the job gets done.

Machine Learning

Contemporary approaches to natural language processing rely heavily on machine learning, which makes it possible to build complex computer programs from examples. Machine learning provides an array of general techniques for tasks like converting a sequence of discrete tokens in one vocabulary to a sequence of discrete tokens in another vocabulary — a generalization of what one might informally call "translation." Much of today's natural language processing research can be thought of as applied machine learning. However, natural language processing has characteristics that distinguish it from many of machine learning's other application domains.

• Unlike images or audio, text data is fundamentally discrete, with meaning created by combinatorial arrangements of symbolic units. This is particularly consequential for applications in which text is the output, such as translation and summarization, because it is not possible to gradually approach an optimal solution.

• Although the set of words is discrete, new words are always being created. Furthermore, the distribution over words (and other linguistic elements) resembles that of a power law[1] (Zipf, 1949): there will be a few words that are very frequent, and a long tail of words that are rare. A consequence is that natural language processing algorithms must be especially robust to observations that do not occur in the training data.

• Language is compositional: units such as words can combine to create phrases, which can combine by the very same principles to create larger phrases. For example, a noun phrase can be created by combining a smaller noun phrase with a prepositional phrase, as in the whiteness of the whale. The prepositional phrase is created by combining a preposition (in this case, of) with another noun phrase (the whale). In this way, it is possible to create arbitrarily long phrases, such as,

(1.1) . . . huge globular pieces of the whale of the bigness of a human head.[2]

The meaning of such a phrase must be analyzed in accord with the underlying hierarchical structure. In this case, huge globular pieces of the whale acts as a single noun

[1] Throughout the text, boldface will be used to indicate keywords that appear in the index.
[2] Throughout the text, this notation will be used to introduce linguistic examples.
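The power-law bullet above can be checked empirically in a few lines. The following is a minimal sketch; "corpus.txt" is a placeholder for any plain-text corpus you have on hand, and the whitespace tokenization is a simplification of the tokenization issues discussed in chapter 8.

```python
from collections import Counter
import math

# Count word frequencies in a plain-text corpus; "corpus.txt" is a placeholder path.
with open("corpus.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

counts = Counter(tokens)
ranked = counts.most_common()  # (word, frequency) pairs, most frequent first

# Under a Zipf-like power law, log frequency falls roughly linearly in log rank.
for rank in (1, 10, 100, 1000):
    if rank <= len(ranked):
        word, freq = ranked[rank - 1]
        print(f"rank {rank}: {word!r} occurs {freq} times (log f = {math.log(freq):.2f})")

# The long tail: a large share of the vocabulary occurs exactly once.
singletons = sum(1 for c in counts.values() if c == 1)
print(f"vocabulary size: {len(counts)}, words seen only once: {singletons}")
```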
|
nlp_Page_20_Chunk26
|
phrase, which is conjoined with the prepositional phrase of the bigness of a human head. The interpretation would be different if instead, huge globular pieces were conjoined with the prepositional phrase of the whale of the bigness of a human head — implying a disappointingly small whale. Even though text appears as a sequence, machine learning methods must account for its implicit recursive structure.

Artificial Intelligence

The goal of artificial intelligence is to build software and robots with the same range of abilities as humans (Russell and Norvig, 2009). Natural language processing is relevant to this goal in several ways. On the most basic level, the capacity for language is one of the central features of human intelligence, and is therefore a prerequisite for artificial intelligence.[3] Second, much of artificial intelligence research is dedicated to the development of systems that can reason from premises to a conclusion, but such algorithms are only as good as what they know (Dreyfus, 1992). Natural language processing is a potential solution to the "knowledge bottleneck", by acquiring knowledge from texts, and perhaps also from conversations. This idea goes all the way back to Turing's 1950 paper Computing Machinery and Intelligence, which proposed the Turing test for determining whether artificial intelligence had been achieved (Turing, 2009).

Conversely, reasoning is sometimes essential for basic tasks of language processing, such as resolving a pronoun. Winograd schemas are examples in which a single word changes the likely referent of a pronoun, in a way that seems to require knowledge and reasoning to decode (Levesque et al., 2011). For example,

(1.2) The trophy doesn't fit into the brown suitcase because it is too [small/large].

When the final word is small, then the pronoun it refers to the suitcase; when the final word is large, then it refers to the trophy. Solving this example requires spatial reasoning; other schemas require reasoning about actions and their effects, emotions and intentions, and social conventions. Such examples demonstrate that natural language understanding cannot be achieved in isolation from knowledge and reasoning.

Yet the history of artificial intelligence has been one of increasing specialization: with the growing volume of research in subdisciplines such as natural language processing, machine learning, and computer vision, it is

[3] This view is shared by some, but not all, prominent researchers in artificial intelligence. Michael Jordan, a specialist in machine learning, has said that if he had a billion dollars to spend on any large research project, he would spend it on natural language processing (https://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan/). On the other hand, in a public discussion about the future of artificial intelligence in February 2018, computer vision researcher Yann LeCun argued that despite its many practical applications, language is perhaps "number 300" in the priority list for artificial intelligence research, and that it would be a great achievement if AI could attain the capabilities of an orangutan, which do not include language (http://www.abigailsee.com/2018/02/21/deep-learning-structure-and-innate-priors.html).
|
nlp_Page_21_Chunk27
|
difficult for anyone to maintain expertise across the entire field. Still, recent work has demonstrated interesting connections between natural language processing and other areas of AI, including computer vision (e.g., Antol et al., 2015) and game playing (e.g., Branavan et al., 2009). The dominance of machine learning throughout artificial intelligence has led to a broad consensus on representations such as graphical models and computation graphs, and on algorithms such as backpropagation and combinatorial optimization. Many of the algorithms and representations covered in this text are part of this consensus.

Computer Science

The discrete and recursive nature of natural language invites the application of theoretical ideas from computer science. Linguists such as Chomsky and Montague have shown how formal language theory can help to explain the syntax and semantics of natural language. Theoretical models such as finite-state and pushdown automata are the basis for many practical natural language processing systems. Algorithms for searching the combinatorial space of analyses of natural language utterances can be analyzed in terms of their computational complexity, and theoretically motivated approximations can sometimes be applied.

The study of computer systems is also relevant to natural language processing. Large datasets of unlabeled text can be processed more quickly by parallelization techniques like MapReduce (Dean and Ghemawat, 2008; Lin and Dyer, 2010); high-volume data sources such as social media can be summarized efficiently by approximate streaming and sketching techniques (Goyal et al., 2009). When deep neural networks are implemented in production systems, it is possible to eke out speed gains using techniques such as reduced-precision arithmetic (Wu et al., 2016). Many classical natural language processing algorithms are not naturally suited to graphics processing unit (GPU) parallelization, suggesting directions for further research at the intersection of natural language processing and computing hardware (Yi et al., 2011).

Speech Processing

Natural language is often communicated in spoken form, and speech recognition is the task of converting an audio signal to text. From one perspective, this is a signal processing problem, which might be viewed as a preprocessing step before natural language processing can be applied. However, context plays a critical role in speech recognition by human listeners: knowledge of the surrounding words influences perception and helps to correct for noise (Miller et al., 1951). For this reason, speech recognition is often integrated with text analysis, particularly with statistical language models, which quantify the probability of a sequence of text (see chapter 6). Beyond speech recognition, the broader field of speech processing includes the study of speech-based dialogue systems, which are briefly discussed in chapter 19. Historically, speech processing has often been pursued in electrical engineering departments, while natural language processing
|
nlp_Page_22_Chunk28
|
has been the purview of computer scientists. For this reason, the extent of interaction between these two disciplines is less than it might otherwise be.

Ethics

As machine learning and artificial intelligence become increasingly ubiquitous, it is crucial to understand how their benefits, costs, and risks are distributed across different kinds of people. Natural language processing raises some particularly salient issues around ethics, fairness, and accountability:

Access. Who is natural language processing designed to serve? For example, whose language is translated from, and whose language is translated to?

Bias. Does language technology learn to replicate social biases from text corpora, and does it reinforce these biases as seemingly objective computational conclusions?

Labor. Whose text and speech comprise the datasets that power natural language processing, and who performs the annotations? Are the benefits of this technology shared with all the people whose work makes it possible?

Privacy and internet freedom. What is the impact of large-scale text processing on the right to free and private communication? What is the potential role of natural language processing in regimes of censorship or surveillance?

This text lightly touches on issues related to fairness and bias in § 14.6.3 and § 18.1.1, but these issues are worthy of a book of their own. For more from within the field of computational linguistics, see the papers from the annual workshop on Ethics in Natural Language Processing (Hovy et al., 2017; Alfano et al., 2018). For an outside perspective on ethical issues relating to data science at large, see boyd and Crawford (2012).

Others

Natural language processing plays a significant role in emerging interdisciplinary fields like computational social science and the digital humanities. Text classification (chapter 4), clustering (chapter 5), and information extraction (chapter 17) are particularly useful tools; another is probabilistic topic models (Blei, 2012), which are not covered in this text. Information retrieval (Manning et al., 2008) makes use of similar tools, and conversely, techniques such as latent semantic analysis (§ 14.3) have roots in information retrieval. Text mining is sometimes used to refer to the application of data mining techniques, especially classification and clustering, to text. While there is no clear distinction between text mining and natural language processing (nor between data mining and machine learning), text mining is typically less concerned with linguistic structure, and more interested in fast, scalable algorithms.
|
nlp_Page_23_Chunk29
|
1.2 Three themes in natural language processing

Natural language processing covers a diverse range of tasks, methods, and linguistic phenomena. But despite the apparent incommensurability between, say, the summarization of scientific articles (§ 16.3.4) and the identification of suffix patterns in Spanish verbs (§ 9.1.4), some general themes emerge. The remainder of the introduction focuses on these themes, which will recur in various forms through the text. Each theme can be expressed as an opposition between two extreme viewpoints on how to process natural language. The methods discussed in the text can usually be placed somewhere on the continuum between these two extremes.

1.2.1 Learning and knowledge

A recurring topic of debate is the relative importance of machine learning and linguistic knowledge. On one extreme, advocates of "natural language processing from scratch" (Collobert et al., 2011) propose to use machine learning to train end-to-end systems that transmute raw text into any desired output structure: e.g., a summary, database, or translation. On the other extreme, the core work of natural language processing is sometimes taken to be transforming text into a stack of general-purpose linguistic structures: from subword units called morphemes, to word-level parts-of-speech, to tree-structured representations of grammar, and beyond, to logic-based representations of meaning. In theory, these general-purpose structures should then be able to support any desired application.

The end-to-end approach has been buoyed by recent results in computer vision and speech recognition, in which advances in machine learning have swept away expert-engineered representations based on the fundamentals of optics and phonology (Krizhevsky et al., 2012; Graves and Jaitly, 2014). But while machine learning is an element of nearly every contemporary approach to natural language processing, linguistic representations such as syntax trees have not yet gone the way of the visual edge detector or the auditory triphone. Linguists have argued for the existence of a "language faculty" in all human beings, which encodes a set of abstractions specially designed to facilitate the understanding and production of language. The argument for the existence of such a language faculty is based on the observation that children learn language faster and from fewer examples than would be possible if language were learned from experience alone.[4] From a practical standpoint, linguistic structure seems to be particularly important in scenarios where training data is limited.

There are a number of ways in which knowledge and learning can be combined in natural language processing. Many supervised learning systems make use of carefully engineered features, which transform the data into a representation that can facilitate

[4] The Language Instinct (Pinker, 2003) articulates these arguments in an engaging and popular style. For arguments against the innateness of language, see Elman et al. (1998).
|
nlp_Page_24_Chunk30
|
learning. For example, in a task like search, it may be useful to identify each word's stem, so that a system can more easily generalize across related terms such as whale, whales, whalers, and whaling. (This issue is relatively benign in English, as compared to the many other languages which include much more elaborate systems of prefixes and suffixes.) Such features could be obtained from a hand-crafted resource, like a dictionary that maps each word to a single root form. Alternatively, features can be obtained from the output of a general-purpose language processing system, such as a parser or part-of-speech tagger, which may itself be built on supervised machine learning.

Another synthesis of learning and knowledge is in model structure: building machine learning models whose architectures are inspired by linguistic theories. For example, the organization of sentences is often described as compositional, with the meaning of larger units gradually constructed from the meaning of their smaller constituents. This idea can be built into the architecture of a deep neural network, which is then trained using contemporary deep learning techniques (Dyer et al., 2016).

The debate about the relative importance of machine learning and linguistic knowledge sometimes becomes heated. No machine learning specialist likes to be told that their engineering methodology is unscientific alchemy;[5] nor does a linguist want to hear that the search for general linguistic principles and structures has been made irrelevant by big data. Yet there is clearly room for both types of research: we need to know how far we can go with end-to-end learning alone, while at the same time, we continue the search for linguistic representations that generalize across applications, scenarios, and languages. For more on the history of this debate, see Church (2011); for an optimistic view of the potential symbiosis between computational linguistics and deep learning, see Manning (2015).

1.2.2 Search and learning

Many natural language processing problems can be written mathematically in the form of optimization,[6]

ŷ = argmax_{y ∈ Y(x)} Ψ(x, y; θ),   [1.1]

where,

• x is the input, which is an element of a set X;

• y is the output, which is an element of a set Y(x);

[5] Ali Rahimi argued that much of deep learning research was similar to "alchemy" in a presentation at the 2017 conference on Neural Information Processing Systems. He was advocating for more learning theory, not more linguistics.
[6] Throughout this text, equations will be numbered by square brackets, and linguistic examples will be numbered by parentheses.
|
nlp_Page_25_Chunk31
|
8 CHAPTER 1. INTRODUCTION • Ψ is a scoring function (also called the model), which maps from the set X × Y to the real numbers; • θ is a vector of parameters for Ψ; • ˆy is the predicted output, which is chosen to maximize the scoring function. This basic structure can be applied to a huge range of problems. For example, the input x might be a social media post, and the output y might be a labeling of the emotional sentiment expressed by the author (chapter 4); or x could be a sentence in French, and the output y could be a sentence in Tamil (chapter 18); or x might be a sentence in English, and y might be a representation of the syntactic structure of the sentence (chapter 10); or x might be a news article and y might be a structured record of the events that the article describes (chapter 17). This formulation reflects an implicit decision that language processing algorithms will have two distinct modules: Search. The search module is responsible for computing the argmax of the function Ψ. In other words, it finds the output ˆy that gets the best score with respect to the input x. This is easy when the search space Y(x) is small enough to enumerate, or when the scoring function Ψ has a convenient decomposition into parts. In many cases, we will want to work with scoring functions that do not have these properties, moti- vating the use of more sophisticated search algorithms, such as bottom-up dynamic programming (§ 10.1) and beam search (§ 11.3.1). Because the outputs are usually discrete in language processing problems, search often relies on the machinery of combinatorial optimization. Learning. The learning module is responsible for finding the parameters θ. This is typ- ically (but not always) done by processing a large dataset of labeled examples, {(x(i), y(i))}N i=1. Like search, learning is also approached through the framework of optimization, as we will see in chapter 2. Because the parameters are usually continuous, learning algorithms generally rely on numerical optimization to iden- tify vectors of real-valued parameters that optimize some function of the model and the labeled data. Some basic principles of numerical optimization are reviewed in Appendix B. The division of natural language processing into separate modules for search and learning makes it possible to reuse generic algorithms across many tasks and models. Much of the work of natural language processing can be focused on the design of the model Ψ — identifying and formalizing the linguistic phenomena that are relevant to the task at hand — while reaping the benefits of decades of progress in search, optimization, and learning. This textbook will describe several classes of scoring functions, and the corresponding algorithms for search and learning. Jacob Eisenstein. Draft of November 13, 2018.
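As a rough illustration of this modular decomposition (a sketch, not code from the text), the search module below simply enumerates a small label set and returns the highest-scoring output under a linear scoring function; the names score and predict, and the toy weights, are assumptions of this sketch. In Python:

def score(x, y, theta):
    # Psi(x, y; theta): a linear scoring function over (feature, label) pairs,
    # where x is a dictionary of feature values and theta a dictionary of weights.
    return sum(theta.get((feature, y), 0.0) * value for feature, value in x.items())

def predict(x, label_set, theta):
    # Search module: brute-force argmax over a small, enumerable set Y(x).
    return max(label_set, key=lambda y: score(x, y, theta))

# Toy example with hand-set parameters theta.
theta = {("good", "POS"): 1.0, ("bad", "NEG"): 1.0}
print(predict({"good": 2, "movie": 1}, ["POS", "NEG"], theta))  # POS

The learning module would instead set theta automatically from labeled data, as described in chapter 2.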
|
nlp_Page_26_Chunk32
|
1.2. THREE THEMES IN NATURAL LANGUAGE PROCESSING 9 When a model is capable of making subtle linguistic distinctions, it is said to be ex- pressive. Expressiveness is often traded off against efficiency of search and learning. For example, a word-to-word translation model makes search and learning easy, but it is not expressive enough to distinguish good translations from bad ones. Many of the most im- portant problems in natural language processing seem to require expressive models, in which the complexity of search grows exponentially with the size of the input. In these models, exact search is usually impossible. Intractability threatens the neat modular de- composition between search and learning: if search requires a set of heuristic approxima- tions, then it may be advantageous to learn a model that performs well under these spe- cific heuristics. This has motivated some researchers to take a more integrated approach to search and learning, as briefly mentioned in chapters 11 and 15. 1.2.3 Relational, compositional, and distributional perspectives Any element of language — a word, a phrase, a sentence, or even a sound — can be described from at least three perspectives. Consider the word journalist. A journalist is a subcategory of a profession, and an anchorwoman is a subcategory of journalist; further- more, a journalist performs journalism, which is often, but not always, a subcategory of writing. This relational perspective on meaning is the basis for semantic ontologies such as WORDNET (Fellbaum, 2010), which enumerate the relations that hold between words and other elementary semantic units. The power of the relational perspective is illustrated by the following example: (1.3) Umashanthi interviewed Ana. She works for the college newspaper. Who works for the college newspaper? The word journalist, while not stated in the ex- ample, implicitly links the interview to the newspaper, making Umashanthi the most likely referent for the pronoun. (A general discussion of how to resolve pronouns is found in chapter 15.) Yet despite the inferential power of the relational perspective, it is not easy to formalize computationally. Exactly which elements are to be related? Are journalists and reporters distinct, or should we group them into a single unit? Is the kind of interview performed by a journalist the same as the kind that one undergoes when applying for a job? Ontology designers face many such thorny questions, and the project of ontology design hearkens back to Borges’ (1993) Celestial Emporium of Benevolent Knowledge, which divides animals into: (a) belonging to the emperor; (b) embalmed; (c) tame; (d) suckling pigs; (e) sirens; (f) fabulous; (g) stray dogs; (h) included in the present classification; (i) frenzied; (j) innumerable; (k) drawn with a very fine camelhair brush; (l) et cetera; (m) having just broken the water pitcher; (n) that from a long way off resemble flies. Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_27_Chunk33
|
10 CHAPTER 1. INTRODUCTION Difficulties in ontology construction have led some linguists to argue that there is no task- independent way to partition up word meanings (Kilgarriff, 1997). Some problems are easier. Each member in a group of journalists is a journalist: the -s suffix distinguishes the plural meaning from the singular in most of the nouns in English. Similarly, a journalist can be thought of, perhaps colloquially, as someone who produces or works on a journal. (Taking this approach even further, the word journal derives from the French jour+nal, or day+ly = daily.) In this way, the meaning of a word is constructed from the constituent parts — the principle of compositionality. This principle can be applied to larger units: phrases, sentences, and beyond. Indeed, one of the great strengths of the compositional view of meaning is that it provides a roadmap for understanding entire texts and dialogues through a single analytic lens, grounding out in the smallest parts of individual words. But alongside journalists and anti-parliamentarians, there are many words that seem to be linguistic atoms: think, for example, of whale, blubber, and Nantucket. Idiomatic phrases like kick the bucket and shoot the breeze have meanings that are quite different from the sum of their parts (Sag et al., 2002). Composition is of little help for such words and expressions, but their meanings can be ascertained — or at least approximated — from the contexts in which they appear. Take, for example, blubber, which appears in such contexts as: (1.4) a. The blubber served them as fuel. b. . . . extracting it from the blubber of the large fish ... c. Amongst oily substances, blubber has been employed as a manure. These contexts form the distributional properties of the word blubber, and they link it to words which can appear in similar constructions: fat, pelts, and barnacles. This distribu- tional perspective makes it possible to learn about meaning from unlabeled data alone; unlike relational and compositional semantics, no manual annotation or expert knowl- edge is required. Distributional semantics is thus capable of covering a huge range of linguistic phenomena. However, it lacks precision: blubber is similar to fat in one sense, to pelts in another sense, and to barnacles in still another. The question of why all these words tend to appear in the same contexts is left unanswered. The relational, compositional, and distributional perspectives all contribute to our un- derstanding of linguistic meaning, and all three appear to be critical to natural language processing. Yet they are uneasy collaborators, requiring seemingly incompatible represen- tations and algorithmic approaches. This text presents some of the best known and most successful methods for working with each of these representations, but future research may reveal new ways to combine them. Jacob Eisenstein. Draft of November 13, 2018.
|
nlp_Page_28_Chunk34
|
Part I Learning 11
|
nlp_Page_29_Chunk35
|
Chapter 2 Linear text classification We begin with the problem of text classification: given a text document, assign it a dis- crete label y ∈Y, where Y is the set of possible labels. Text classification has many ap- plications, from spam filtering to the analysis of electronic health records. This chapter describes some of the most well known and effective algorithms for text classification, from a mathematical perspective that should help you understand what they do and why they work. Text classification is also a building block in more elaborate natural language processing tasks. For readers without a background in machine learning or statistics, the material in this chapter will take more time to digest than most of the subsequent chap- ters. But this investment will pay off as the mathematical principles behind these basic classification algorithms reappear in other contexts throughout the book. 2.1 The bag of words To perform text classification, the first question is how to represent each document, or instance. A common approach is to use a column vector of word counts, e.g., x = [0, 1, 1, 0, 0, 2, 0, 1, 13, 0 . . .]⊤, where xj is the count of word j. The length of x is V ≜|V|, where V is the set of possible words in the vocabulary. In linear classification, the classi- fication decision is based on a weighted sum of individual feature counts, such as word counts. The object x is a vector, but it is often called a bag of words, because it includes only information about the count of each word, and not the order in which the words appear. With the bag of words representation, we are ignoring grammar, sentence boundaries, paragraphs — everything but the words. Yet the bag of words model is surprisingly effective for text classification. If you see the word whale in a document, is it fiction or non- fiction? What if you see the word molybdenum? For many labeling problems, individual words can be strong predictors. 13
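As a minimal sketch of the bag-of-words representation (assuming a crude whitespace tokenizer; this is not code from the text):

from collections import Counter

def bag_of_words(text):
    # Count each word type, discarding order, grammar, and sentence boundaries.
    return Counter(text.lower().split())

x = bag_of_words("it was the best of times it was the worst of times")
print(x["was"])    # 2
print(x["whale"])  # 0: absent words simply have zero count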
|
nlp_Page_31_Chunk36
|
14 CHAPTER 2. LINEAR TEXT CLASSIFICATION To predict a label from a bag-of-words, we can assign a score to each word in the vo- cabulary, measuring the compatibility with the label. For example, for the label FICTION, we might assign a positive score to the word whale, and a negative score to the word molybdenum. These scores are called weights, and they are arranged in a column vector θ. Suppose that you want a multiclass classifier, where K ≜|Y| > 2. For example, you might want to classify news stories about sports, celebrities, music, and business. The goal is to predict a label ˆy, given the bag of words x, using the weights θ. For each label y ∈Y, we compute a score Ψ(x, y), which is a scalar measure of the compatibility between the bag-of-words x and the label y. In a linear bag-of-words classifier, this score is the vector inner product between the weights θ and the output of a feature function f(x, y), Ψ(x, y) = θ · f(x, y) = X j θjfj(x, y). [2.1] As the notation suggests, f is a function of two arguments, the word counts x and the label y, and it returns a vector output. For example, given arguments x and y, element j of this feature vector might be, fj(x, y) = ( xwhale, if y = FICTION 0, otherwise [2.2] This function returns the count of the word whale if the label is FICTION, and it returns zero otherwise. The index j depends on the position of whale in the vocabulary, and of FICTION in the set of possible labels. The corresponding weight θj then scores the compatibility of the word whale with the label FICTION.1 A positive score means that this word makes the label more likely. The output of the feature function can be formalized as a vector: f(x, y = 1) = [x; 0; 0; . . . ; 0 | {z } (K−1)×V ] [2.3] f(x, y = 2) = [0; 0; . . . ; 0 | {z } V ; x; 0; 0; . . . ; 0 | {z } (K−2)×V ] [2.4] f(x, y = K) = [0; 0; . . . ; 0 | {z } (K−1)×V ; x], [2.5] where [0; 0; . . . ; 0 | {z } (K−1)×V ] is a column vector of (K −1) × V zeros, and the semicolon indicates vertical concatenation. For each of the K possible labels, the feature function returns a 1In practice, both f and θ may be implemented as a dictionary rather than vectors, so that it is not necessary to explicitly identify j. In such an implementation, the tuple (whale, FICTION) acts as a key in both dictionaries; the values in f are feature counts, and the values in θ are weights. Jacob Eisenstein. Draft of November 13, 2018.
|
nlp_Page_32_Chunk37
|
vector that is mostly zeros, with a column vector of word counts x inserted in a location that depends on the specific label y. This arrangement is shown in Figure 2.1. The notation may seem awkward at first, but it generalizes to an impressive range of learning settings, particularly structure prediction, which is the focus of Chapters 7-11.

Given a vector of weights, θ ∈ R^{VK}, we can now compute the score Ψ(x, y) by Equation 2.1. This inner product gives a scalar measure of the compatibility of the observation x with label y.² For any document x, we predict the label ŷ,

$\hat{y} = \operatorname*{argmax}_{y \in \mathcal{Y}} \Psi(x, y)$  [2.6]
$\Psi(x, y) = \theta \cdot f(x, y).$  [2.7]

This inner product notation gives a clean separation between the data (x and y) and the parameters (θ).

While vector notation is used for presentation and analysis, in code the weights and feature vector can be implemented as dictionaries. The inner product can then be computed as a loop. In Python:

def compute_score(x, y, weights):
    total = 0
    for feature, count in feature_function(x, y).items():
        total += weights[feature] * count
    return total

This representation is advantageous because it avoids storing and iterating over the many features whose counts are zero.

It is common to add an offset feature at the end of the vector of word counts x, which is always 1. We then have to also add an extra zero to each of the zero vectors, to make the vector lengths match. This gives the entire feature vector f(x, y) a length of (V + 1) × K. The weight associated with this offset feature can be thought of as a bias for or against each label. For example, if we expect most emails to be spam, then the weight for the offset feature for y = SPAM should be larger than the weight for the offset feature for y = NOT-SPAM.

Returning to the weights θ, where do they come from? One possibility is to set them by hand. If we wanted to distinguish, say, English from Spanish, we can use English and Spanish dictionaries, and set the weight to one for each word that appears in the associated dictionary.

² Only V × (K − 1) features and weights are necessary. By stipulating that Ψ(x, y = K) = 0 regardless of x, it is possible to implement any classification rule that can be achieved with V × K features and weights. This is the approach taken in binary classification rules like y = Sign(β · x + a), where β is a vector of weights, a is an offset, and the label set is Y = {−1, 1}. However, for multiclass classification, it is more concise to write θ · f(x, y) for all y ∈ Y.
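To make the dictionary implementation concrete, here is a minimal sketch (not code from the text) of a feature function keyed by (word, label) tuples, as suggested in footnote 1, together with a prediction rule implementing Equation 2.6. The <OFFSET> key, the helper names, and the toy weights are assumptions of this sketch; compute_score is the function given above.

from collections import defaultdict

def feature_function(x, y):
    # x is a dictionary of word counts; return a sparse feature vector keyed
    # by (word, label) tuples, plus the always-on offset feature.
    features = {(word, y): count for word, count in x.items()}
    features[("<OFFSET>", y)] = 1
    return features

def predict(x, label_set, weights):
    # Equation 2.6: return the label whose features score highest.
    return max(label_set, key=lambda y: compute_score(x, y, weights))

# Toy example with hand-set weights; defaultdict supplies zero weights for
# unseen (word, label) pairs so that compute_score does not fail.
weights = defaultdict(float, {("whale", "FICTION"): 1.0,
                              ("molybdenum", "NONFICTION"): 1.0})
print(predict({"whale": 3}, ["FICTION", "NONFICTION"], weights))  # FICTION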
|
nlp_Page_33_Chunk38
|
[Figure 2.1: The bag-of-words and feature vector representations, for a hypothetical text classification task. The figure shows the original text ("It was the best of times, it was the worst of times..."), its bag-of-words count vector over the vocabulary (aardvark, ..., best, ..., of, ..., times, ..., worst, ..., zyxt, <OFFSET>), and the feature vector f(x, y=News), in which those counts fill the block for y=News while the blocks for y=Fiction, y=Gossip, and y=Sports are all zeros.]

For example,³

θ(E,bicycle) = 1      θ(S,bicycle) = 0
θ(E,bicicleta) = 0    θ(S,bicicleta) = 1
θ(E,con) = 1          θ(S,con) = 1
θ(E,ordinateur) = 0   θ(S,ordinateur) = 0.

Similarly, if we want to distinguish positive and negative sentiment, we could use positive and negative sentiment lexicons (see § 4.1.2), which are defined by social psychologists (Tausczik and Pennebaker, 2010).

But it is usually not easy to set classification weights by hand, due to the large number of words and the difficulty of selecting exact numerical weights. Instead, we will learn the weights from data. Email users manually label messages as SPAM; newspapers label their own articles as BUSINESS or STYLE. Using such instance labels, we can automatically acquire weights using supervised machine learning. This chapter will discuss several machine learning approaches for classification. The first is based on probability. For a review of probability, consult Appendix A.

³ In this notation, each tuple (language, word) indexes an element in θ, which remains a vector.
|
nlp_Page_34_Chunk39
|
2.2. NA¨IVE BAYES 17 2.2 Na¨ıve Bayes The joint probability of a bag of words x and its true label y is written p(x, y). Suppose we have a dataset of N labeled instances, {(x(i), y(i))}N i=1, which we assume are indepen- dent and identically distributed (IID) (see § A.3). Then the joint probability of the entire dataset, written p(x(1:N), y(1:N)), is equal to QN i=1 pX,Y (x(i), y(i)).4 What does this have to do with classification? One approach to classification is to set the weights θ so as to maximize the joint probability of a training set of labeled docu- ments. This is known as maximum likelihood estimation: ˆθ = argmax θ p(x(1:N), y(1:N); θ) [2.8] = argmax θ N Y i=1 p(x(i), y(i); θ) [2.9] = argmax θ N X i=1 log p(x(i), y(i); θ). [2.10] The notation p(x(i), y(i); θ) indicates that θ is a parameter of the probability function. The product of probabilities can be replaced by a sum of log-probabilities because the log func- tion is monotonically increasing over positive arguments, and so the same θ will maxi- mize both the probability and its logarithm. Working with logarithms is desirable because of numerical stability: on a large dataset, multiplying many probabilities can underflow to zero.5 The probability p(x(i), y(i); θ) is defined through a generative model — an idealized random process that has generated the observed data.6 Algorithm 1 describes the gener- ative model underlying the Na¨ıve Bayes classifier, with parameters θ = {µ, φ}. • The first line of this generative model encodes the assumption that the instances are mutually independent: neither the label nor the text of document i affects the label or text of document j.7 Furthermore, the instances are identically distributed: the 4The notation pX,Y (x(i), y(i)) indicates the joint probability that random variables X and Y take the specific values x(i) and y(i) respectively. The subscript will often be omitted when it is clear from context. For a review of random variables, see Appendix A. 5Throughout this text, you may assume all logarithms and exponents are base 2, unless otherwise indi- cated. Any reasonable base will yield an identical classifier, and base 2 is most convenient for working out examples by hand. 6Generative models will be used throughout this text. They explicitly define the assumptions underlying the form of a probability distribution over observed and latent variables. For a readable introduction to generative models in statistics, see Blei (2014). 7Can you think of any cases in which this assumption is too strong? Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_35_Chunk40
|
18 CHAPTER 2. LINEAR TEXT CLASSIFICATION Algorithm 1 Generative process for the Na¨ıve Bayes classification model for Instance i ∈{1, 2, . . . , N} do: Draw the label y(i) ∼Categorical(µ); Draw the word counts x(i) | y(i) ∼Multinomial(φy(i)). distributions over the label y(i) and the text x(i) (conditioned on y(i)) are the same for all instances i. In other words, we make the assumption that every document has the same distribution over labels, and that each document’s distribution over words depends only on the label, and not on anything else about the document. We also assume that the documents don’t affect each other: if the word whale appears in document i = 7, that does not make it any more or less likely that it will appear again in document i = 8. • The second line of the generative model states that the random variable y(i) is drawn from a categorical distribution with parameter µ. Categorical distributions are like weighted dice: the column vector µ = [µ1; µ2; . . . ; µK] gives the probabilities of each label, so that the probability of drawing label y is equal to µy. For example, if Y = {POSITIVE, NEGATIVE, NEUTRAL}, we might have µ = [0.1; 0.7; 0.2]. We require P y∈Y µy = 1 and µy ≥0, ∀y ∈Y: each label’s probability is non-negative, and the sum of these probabilities is equal to one. 8 • The third line describes how the bag-of-words counts x(i) are generated. By writing x(i) | y(i), this line indicates that the word counts are conditioned on the label, so that the joint probability is factored using the chain rule, pX,Y (x(i), y(i)) = pX|Y (x(i) | y(i)) × pY (y(i)). [2.11] The specific distribution pX|Y is the multinomial, which is a probability distribu- tion over vectors of non-negative counts. The probability mass function for this distribution is: pmult(x; φ) =B(x) VY j=1 φxj j [2.12] B(x) = PV j=1 xj ! QV j=1(xj!) . [2.13] 8Formally, we require µ ∈∆K−1, where ∆K−1 is the K −1 probability simplex, the set of all vectors of K nonnegative numbers that sum to one. Because of the sum-to-one constraint, there are K −1 degrees of freedom for a vector of size K. Jacob Eisenstein. Draft of November 13, 2018.
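A small sketch of the multinomial probability mass function in Equations 2.12-2.13 (not code from the text; natural logarithms are used here, although any base gives the same classifier):

from math import lgamma, log

def log_multinomial_pmf(x, phi):
    # x: a list of word counts; phi: a list of probabilities summing to one.
    # log B(x) = log((sum_j x_j)!) - sum_j log(x_j!), computed via lgamma.
    total = sum(x)
    log_B = lgamma(total + 1) - sum(lgamma(xj + 1) for xj in x)
    # Words with zero count contribute nothing, since phi_j ** 0 = 1.
    return log_B + sum(xj * log(pj) for xj, pj in zip(x, phi) if xj > 0)

print(log_multinomial_pmf([2, 1, 0], [0.5, 0.3, 0.2]))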
|
nlp_Page_36_Chunk41
|
2.2. NA¨IVE BAYES 19 As in the categorical distribution, the parameter φj can be interpreted as a probabil- ity: specifically, the probability that any given token in the document is the word j. The multinomial distribution involves a product over words, with each term in the product equal to the probability φj, exponentiated by the count xj. Words that have zero count play no role in this product, because φ0 j = 1. The term B(x) is called the multinomial coefficient. It doesn’t depend on φ, and can usually be ignored. Can you see why we need this term at all?9 The notation p(x | y; φ) indicates the conditional probability of word counts x given label y, with parameter φ, which is equal to pmult(x; φy). By specifying the multinomial distribution, we describe the multinomial Na¨ıve Bayes classifier. Why “na¨ıve”? Because the multinomial distribution treats each word token indepen- dently, conditioned on the class: the probability mass function factorizes across the counts.10 2.2.1 Types and tokens A slight modification to the generative model of Na¨ıve Bayes is shown in Algorithm 2. Instead of generating a vector of counts of types, x, this model generates a sequence of tokens, w = (w1, w2, . . . , wM). The distinction between types and tokens is critical: xj ∈ {0, 1, 2, . . . , M} is the count of word type j in the vocabulary, e.g., the number of times the word cannibal appears; wm ∈V is the identity of token m in the document, e.g. wm = cannibal. The probability of the sequence w is a product of categorical probabilities. Algorithm 2 makes a conditional independence assumption: each token w(i) m is independent of all other tokens w(i) n̸=m, conditioned on the label y(i). This is identical to the “na¨ıve” independence assumption implied by the multinomial distribution, and as a result, the optimal parame- ters for this model are identical to those in multinomial Na¨ıve Bayes. For any instance, the probability assigned by this model is proportional to the probability under multinomial Na¨ıve Bayes. The constant of proportionality is the multinomial coefficient B(x). Because B(x) ≥1, the probability for a vector of counts x is at least as large as the probability for a list of words w that induces the same counts: there can be many word sequences that correspond to a single vector of counts. For example, man bites dog and dog bites man correspond to an identical count vector, {bites : 1, dog : 1, man : 1}, and B(x) is equal to the total number of possible word orderings for count vector x. 9Technically, a multinomial distribution requires a second parameter, the total number of word counts in x. In the bag-of-words representation is equal to the number of words in the document. However, this parameter is irrelevant for classification. 10You can plug in any probability distribution to the generative story and it will still be Na¨ıve Bayes, as long as you are making the “na¨ıve” assumption that the features are conditionally independent, given the label. For example, a multivariate Gaussian with diagonal covariance is na¨ıve in exactly the same sense. Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_37_Chunk42
|
20 CHAPTER 2. LINEAR TEXT CLASSIFICATION Algorithm 2 Alternative generative process for the Na¨ıve Bayes classification model for Instance i ∈{1, 2, . . . , N} do: Draw the label y(i) ∼Categorical(µ); for Token m ∈{1, 2, . . . , Mi} do: Draw the token w(i) m | y(i) ∼Categorical(φy(i)). Sometimes it is useful to think of instances as counts of types, x; other times, it is better to think of them as sequences of tokens, w. If the tokens are generated from a model that assumes conditional independence, then these two views lead to probability models that are identical, except for a scaling factor that does not depend on the label or the parameters. 2.2.2 Prediction The Na¨ıve Bayes prediction rule is to choose the label y which maximizes log p(x, y; µ, φ): ˆy = argmax y log p(x, y; µ, φ) [2.14] = argmax y log p(x | y; φ) + log p(y; µ) [2.15] Now we can plug in the probability distributions from the generative story. log p(x | y; φ) + log p(y; µ) = log B(x) VY j=1 φxj y,j + log µy [2.16] = log B(x) + V X j=1 xj log φy,j + log µy [2.17] = log B(x) + θ · f(x, y), [2.18] where θ = [θ(1); θ(2); . . . ; θ(K)] [2.19] θ(y) = [log φy,1; log φy,2; . . . ; log φy,V ; log µy] [2.20] The feature function f(x, y) is a vector of V word counts and an offset, padded by zeros for the labels not equal to y (see Equations 2.3-2.5, and Figure 2.1). This construction ensures that the inner product θ · f(x, y) only activates the features whose weights are in θ(y). These features and weights are all we need to compute the joint log-probability log p(x, y) for each y. This is a key point: through this notation, we have converted the problem of computing the log-likelihood for a document-label pair (x, y) into the compu- tation of a vector inner product. Jacob Eisenstein. Draft of November 13, 2018.
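A possible rendering of Equations 2.19-2.20 in the dictionary representation used earlier (the storage format for phi and mu is an assumption of this sketch; natural logarithms are used, which leaves the argmax unchanged):

from math import log

def naive_bayes_weights(phi, mu):
    # Pack log phi[y][word] and log mu[y] into one weight dictionary keyed by
    # (feature, label), so that theta . f(x, y) reproduces Equation 2.18 up to
    # the constant log B(x), which does not affect the argmax over labels.
    theta = {}
    for y in mu:
        for word, p in phi[y].items():
            theta[(word, y)] = log(p)
        theta[("<OFFSET>", y)] = log(mu[y])  # offset feature carries log mu_y
    return theta

phi = {"FICTION": {"whale": 0.01, "molybdenum": 0.0001},
       "NONFICTION": {"whale": 0.001, "molybdenum": 0.01}}
mu = {"FICTION": 0.5, "NONFICTION": 0.5}
theta = naive_bayes_weights(phi, mu)

Prediction then reduces to the same inner-product argmax used for any linear classifier.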
|
nlp_Page_38_Chunk43
|
2.2. NA¨IVE BAYES 21 2.2.3 Estimation The parameters of the categorical and multinomial distributions have a simple interpre- tation: they are vectors of expected frequencies for each possible event. Based on this interpretation, it is tempting to set the parameters empirically, φy,j = count(y, j) PV j′=1 count(y, j′) = P i:y(i)=y x(i) j PV j′=1 P i:y(i)=y x(i) j′ , [2.21] where count(y, j) refers to the count of word j in documents with label y. Equation 2.21 defines the relative frequency estimate for φ. It can be justified as a maximum likelihood estimate: the estimate that maximizes the probability p(x(1:N), y(1:N); θ). Based on the generative model in Algorithm 1, the log-likelihood is, L(φ, µ) = N X i=1 log pmult(x(i); φy(i)) + log pcat(y(i); µ), [2.22] which is now written as a function L of the parameters φ and µ. Let’s continue to focus on the parameters φ. Since p(y) is constant with respect to φ, we can drop it: L(φ) = N X i=1 log pmult(x(i); φy(i)) = N X i=1 log B(x(i)) + V X j=1 x(i) j log φy(i),j, [2.23] where B(x(i)) is constant with respect to φ. Maximum-likelihood estimation chooses φ to maximize the log-likelihood L. How- ever, the solution must obey the following constraints: V X j=1 φy,j = 1 ∀y [2.24] These constraints can be incorporated by adding a set of Lagrange multipliers to the objec- tive (see Appendix B for more details). To solve for each θy, we maximize the Lagrangian, ℓ(φy) = X i:y(i)=y V X j=1 x(i) j log φy,j −λ( V X j=1 φy,j −1). [2.25] Differentiating with respect to the parameter φy,j yields, ∂ℓ(φy) ∂φy,j = X i:y(i)=y x(i) j /φy,j −λ. [2.26] Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_39_Chunk44
|
The solution is obtained by setting each element in this vector of derivatives equal to zero,

$\lambda \phi_{y,j} = \sum_{i: y^{(i)} = y} x_j^{(i)}$  [2.27]

$\phi_{y,j} \propto \sum_{i: y^{(i)} = y} x_j^{(i)} = \sum_{i=1}^{N} \delta\big(y^{(i)} = y\big)\, x_j^{(i)} = \text{count}(y, j),$  [2.28]

where $\delta\big(y^{(i)} = y\big)$ is a delta function, also called an indicator function, which returns one when $y^{(i)} = y$ and zero otherwise.
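A sketch of relative-frequency estimation (Equation 2.21), with optional smoothing of the word counts; this uses the standard add-α construction, which is an assumption of this sketch, and setting α = 0 recovers the maximum likelihood estimate:

from collections import defaultdict

def estimate_naive_bayes(instances, alpha=0.0):
    # instances: a list of (x, y) pairs, where x maps words to counts.
    word_counts = defaultdict(lambda: defaultdict(float))  # label -> word -> count
    label_counts = defaultdict(float)
    vocab = set()
    for x, y in instances:
        label_counts[y] += 1
        for word, count in x.items():
            word_counts[y][word] += count
            vocab.add(word)
    # mu: relative frequency of each label.
    mu = {y: n / len(instances) for y, n in label_counts.items()}
    # phi: (smoothed) relative frequency of each word, conditioned on the label.
    phi = {}
    for y in label_counts:
        total = sum(word_counts[y].values()) + alpha * len(vocab)
        phi[y] = {w: (word_counts[y][w] + alpha) / total for w in vocab}
    return phi, mu

phi, mu = estimate_naive_bayes(
    [({"whale": 2, "sea": 1}, "FICTION"), ({"molybdenum": 1}, "NONFICTION")],
    alpha=0.1)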
|
nlp_Page_40_Chunk45
|
2.2. NA¨IVE BAYES 23 Smoothing reduces variance, but moves us away from the maximum likelihood esti- mate: it imposes a bias. In this case, the bias points towards uniform probabilities. Ma- chine learning theory shows that errors on heldout data can be attributed to the sum of bias and variance (Mohri et al., 2012). In general, techniques for reducing variance often increase the bias, leading to a bias-variance tradeoff. • Unbiased classifiers may overfit the training data, yielding poor performance on unseen data. • But if the smoothing is too large, the resulting classifier can underfit instead. In the limit of α →∞, there is zero variance: you get the same classifier, regardless of the data. However, the bias is likely to be large. Similar issues arise throughout machine learning. Later in this chapter we will encounter regularization, which controls the bias-variance tradeoff for logistic regression and large- margin classifiers (§ 2.5.1); § 3.3.2 describes techniques for controlling variance in deep learning; chapter 6 describes more elaborate methods for smoothing empirical probabili- ties. 2.2.5 Setting hyperparameters Returning to Na¨ıve Bayes, how should we choose the best value of hyperparameters like α? Maximum likelihood will not work: the maximum likelihood estimate of α on the training set will always be α = 0. In many cases, what we really want is accuracy: the number of correct predictions, divided by the total number of predictions. (Other mea- sures of classification performance are discussed in § 4.4.) As we will see, it is hard to opti- mize for accuracy directly. But for scalar hyperparameters like α, tuning can be performed by a simple heuristic called grid search: try a set of values (e.g., α ∈{0.001, 0.01, 0.1, 1, 10}), compute the accuracy for each value, and choose the setting that maximizes the accuracy. The goal is to tune α so that the classifier performs well on unseen data. For this reason, the data used for hyperparameter tuning should not overlap the training set, where very small values of α will be preferred. Instead, we hold out a development set (also called a tuning set) for hyperparameter selection. This development set may consist of a small fraction of the labeled data, such as 10%. We also want to predict the performance of our classifier on unseen data. To do this, we must hold out a separate subset of data, called the test set. It is critical that the test set not overlap with either the training or development sets, or else we will overestimate the performance that the classifier will achieve on unlabeled data in the future. The test set should also not be used when making modeling decisions, such as the form of the feature function, the size of the vocabulary, and so on (these decisions are reviewed in chapter 4.) The ideal practice is to use the test set only once — otherwise, the test set is used to guide Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_41_Chunk46
|
24 CHAPTER 2. LINEAR TEXT CLASSIFICATION the classifier design, and test set accuracy will diverge from accuracy on truly unseen data. Because annotated data is expensive, this ideal can be hard to follow in practice, and many test sets have been used for decades. But in some high-impact applications like machine translation and information extraction, new test sets are released every year. When only a small amount of labeled data is available, the test set accuracy can be unreliable. K-fold cross-validation is one way to cope with this scenario: the labeled data is divided into K folds, and each fold acts as the test set, while training on the other folds. The test set accuracies are then aggregated. In the extreme, each fold is a single data point; this is called leave-one-out cross-validation. To perform hyperparameter tuning in the context of cross-validation, another fold can be used for grid search. It is important not to repeatedly evaluate the cross-validated accuracy while making design decisions about the classifier, or you will overstate the accuracy on truly unseen data. 2.3 Discriminative learning Na¨ıve Bayes is easy to work with: the weights can be estimated in closed form, and the probabilistic interpretation makes it relatively easy to extend. However, the assumption that features are independent can seriously limit its accuracy. Thus far, we have defined the feature function f(x, y) so that it corresponds to bag-of-words features: one feature per word in the vocabulary. In natural language, bag-of-words features violate the as- sumption of conditional independence — for example, the probability that a document will contain the word na¨ıve is surely higher given that it also contains the word Bayes — but this violation is relatively mild. However, good performance on text classification often requires features that are richer than the bag-of-words: • To better handle out-of-vocabulary terms, we want features that apply to multiple words, such as prefixes and suffixes (e.g., anti-, un-, -ing) and capitalization. • We also want n-gram features that apply to multi-word units: bigrams (e.g., not good, not bad), trigrams (e.g., not so bad, lacking any decency, never before imagined), and beyond. These features flagrantly violate the Na¨ıve Bayes independence assumption. Consider what happens if we add a prefix feature. Under the Na¨ıve Bayes assumption, the joint probability of a word and its prefix are computed with the following approximation:12 Pr(word = unfit, prefix = un- | y) ≈Pr(prefix = un- | y) × Pr(word = unfit | y). 12The notation Pr(·) refers to the probability of an event, and p(·) refers to the probability density or mass for a random variable (see Appendix A). Jacob Eisenstein. Draft of November 13, 2018.
|
nlp_Page_42_Chunk47
|
2.3. DISCRIMINATIVE LEARNING 25 To test the quality of the approximation, we can manipulate the left-hand side by applying the chain rule, Pr(word = unfit, prefix = un- | y) = Pr(prefix = un- | word = unfit, y) [2.31] × Pr(word = unfit | y) [2.32] But Pr(prefix = un- | word = unfit, y) = 1, since un- is guaranteed to be the prefix for the word unfit. Therefore, Pr(word = unfit, prefix = un- | y) =1 × Pr(word = unfit | y) [2.33] ≫Pr(prefix = un- | y) × Pr(word = unfit | y), [2.34] because the probability of any given word starting with the prefix un- is much less than one. Na¨ıve Bayes will systematically underestimate the true probabilities of conjunctions of positively correlated features. To use such features, we need learning algorithms that do not rely on an independence assumption. The origin of the Na¨ıve Bayes independence assumption is the learning objective, p(x(1:N), y(1:N)), which requires modeling the probability of the observed text. In clas- sification problems, we are always given x, and are only interested in predicting the label y. In this setting, modeling the probability of the text x seems like a difficult and unnec- essary task. Discriminative learning algorithms avoid this task, and focus directly on the problem of predicting y. 2.3.1 Perceptron In Na¨ıve Bayes, the weights can be interpreted as parameters of a probabilistic model. But this model requires an independence assumption that usually does not hold, and limits our choice of features. Why not forget about probability and learn the weights in an error- driven way? The perceptron algorithm, shown in Algorithm 3, is one way to do this. The algorithm is simple: if you make a mistake, increase the weights for features that are active with the correct label y(i), and decrease the weights for features that are active with the guessed label ˆy. Perceptron is an online learning algorithm, since the classifier weights change after every example. This is different from Na¨ıve Bayes, which is a batch learning algorithm: it computes statistics over the entire dataset, and then sets the weights in a single operation. Algorithm 3 is vague about when this online learning procedure terminates. We will return to this issue shortly. The perceptron algorithm may seem like an unprincipled heuristic: Na¨ıve Bayes has a solid foundation in probability, but the perceptron is just adding and subtracting constants from the weights every time there is a mistake. Will this really work? In fact, there is some nice theory for the perceptron, based on the concept of linear separability. Informally, a dataset with binary labels (y ∈{0, 1}) is linearly separable if it is possible to draw a Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_43_Chunk48
|
26 CHAPTER 2. LINEAR TEXT CLASSIFICATION Algorithm 3 Perceptron learning algorithm 1: procedure PERCEPTRON(x(1:N), y(1:N)) 2: t ←0 3: θ(0) ←0 4: repeat 5: t ←t + 1 6: Select an instance i 7: ˆy ←argmaxy θ(t−1) · f(x(i), y) 8: if ˆy ̸= y(i) then 9: θ(t) ←θ(t−1) + f(x(i), y(i)) −f(x(i), ˆy) 10: else 11: θ(t) ←θ(t−1) 12: until tired 13: return θ(t) hyperplane (a line in many dimensions), such that on each side of the hyperplane, all instances have the same label. This definition can be formalized and extended to multiple labels: Definition 1 (Linear separability). The dataset D = {(x(i), y(i))}N i=1 is linearly separable iff (if and only if) there exists some weight vector θ and some margin ρ such that for every instance (x(i), y(i)), the inner product of θ and the feature function for the true label, θ · f(x(i), y(i)), is at least ρ greater than inner product of θ and the feature function for every other possible label, θ · f(x(i), y′). ∃θ, ρ > 0 : ∀(x(i), y(i)) ∈D, θ · f(x(i), y(i)) ≥ρ + max y′̸=y(i) θ · f(x(i), y′). [2.35] Linear separability is important because of the following guarantee: if your data is linearly separable, then the perceptron algorithm will find a separator (Novikoff, 1962).13 So while the perceptron may seem heuristic, it is guaranteed to succeed, if the learning problem is easy enough. How useful is this proof? Minsky and Papert (1969) famously proved that the simple logical function of exclusive-or is not separable, and that a perceptron is therefore inca- pable of learning this function. But this is not just an issue for the perceptron: any linear classification algorithm, including Na¨ıve Bayes, will fail on this task. Text classification problems usually involve high dimensional feature spaces, with thousands or millions of 13It is also possible to prove an upper bound on the number of training iterations required to find the separator. Proofs like this are part of the field of machine learning theory (Mohri et al., 2012). Jacob Eisenstein. Draft of November 13, 2018.
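A sketch of Algorithm 3 in Python (not the author's code); here "until tired" is approximated by a fixed number of passes over the data, and ties in the argmax are broken by the order of the label set, both of which are assumptions of this sketch:

from collections import defaultdict

def perceptron(instances, label_set, feature_function, epochs=5):
    # Error-driven learning: on a mistake, add the feature vector of the true
    # label and subtract the feature vector of the incorrect guess.
    theta = defaultdict(float)

    def score(x, y):
        return sum(theta[f] * v for f, v in feature_function(x, y).items())

    for _ in range(epochs):
        for x, y_true in instances:
            y_hat = max(label_set, key=lambda y: score(x, y))
            if y_hat != y_true:
                for f, v in feature_function(x, y_true).items():
                    theta[f] += v
                for f, v in feature_function(x, y_hat).items():
                    theta[f] -= v
    return dict(theta)

With the bag-of-words feature function sketched earlier, this learns a sparse weight dictionary directly from labeled documents.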
|
nlp_Page_44_Chunk49
|
2.4. LOSS FUNCTIONS AND LARGE-MARGIN CLASSIFICATION 27 features. For these problems, it is very likely that the training data is indeed separable. And even if the dataset is not separable, it is still possible to place an upper bound on the number of errors that the perceptron algorithm will make (Freund and Schapire, 1999). 2.3.2 Averaged perceptron The perceptron iterates over the data repeatedly — until “tired”, as described in Algo- rithm 3. If the data is linearly separable, the perceptron will eventually find a separator, and we can stop once all training instances are classified correctly. But if the data is not linearly separable, the perceptron can thrash between two or more weight settings, never converging. In this case, how do we know that we can stop training, and how should we choose the final weights? An effective practical solution is to average the perceptron weights across all iterations. This procedure is shown in Algorithm 4. The learning algorithm is nearly identical, but we also maintain a vector of the sum of the weights, m. At the end of the learning procedure, we divide this sum by the total number of updates t, to compute the average weights, θ. These average weights are then used for prediction. In the algorithm sketch, the average is computed from a running sum, m ←m + θ. However, this is inefficient, because it requires |θ| operations to update the running sum. When f(x, y) is sparse, |θ| ≫|f(x, y)| for any individual (x, y). This means that computing the running sum will be much more expensive than computing of the update to θ itself, which requires only 2 × |f(x, y)| operations. One of the exercises is to sketch a more efficient algorithm for computing the averaged weights. Even if the dataset is not separable, the averaged weights will eventually converge. One possible stopping criterion is to check the difference between the average weight vectors after each pass through the data: if the norm of the difference falls below some predefined threshold, we can stop training. Another stopping criterion is to hold out some data, and to measure the predictive accuracy on this heldout data. When the accuracy on the heldout data starts to decrease, the learning algorithm has begun to overfit the training set. At this point, it is probably best to stop; this stopping criterion is known as early stopping. Generalization is the ability to make good predictions on instances that are not in the training data. Averaging can be proven to improve generalization, by computing an upper bound on the generalization error (Freund and Schapire, 1999; Collins, 2002). 2.4 Loss functions and large-margin classification Na¨ıve Bayes chooses the weights θ by maximizing the joint log-likelihood log p(x(1:N), y(1:N)). By convention, optimization problems are generally formulated as minimization of a loss function. The input to a loss function is the vector of weights θ, and the output is a Under contract with MIT Press, shared under CC-BY-NC-ND license.
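Algorithm 4 can be sketched in the same style; this version maintains the running sum m naively, with the O(|θ|) update per step that the text notes is inefficient (a sketch, not the author's code):

from collections import defaultdict

def averaged_perceptron(instances, label_set, feature_function, epochs=5):
    theta = defaultdict(float)
    m = defaultdict(float)  # running sum of the weight vectors
    t = 0

    def score(x, y):
        return sum(theta[f] * v for f, v in feature_function(x, y).items())

    for _ in range(epochs):
        for x, y_true in instances:
            t += 1
            y_hat = max(label_set, key=lambda y: score(x, y))
            if y_hat != y_true:
                for f, v in feature_function(x, y_true).items():
                    theta[f] += v
                for f, v in feature_function(x, y_hat).items():
                    theta[f] -= v
            for f, v in theta.items():
                m[f] += v  # the inefficient step: O(|theta|) per instance
    return {f: v / t for f, v in m.items()}  # averaged weights, used for prediction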
|
nlp_Page_45_Chunk50
|
28 CHAPTER 2. LINEAR TEXT CLASSIFICATION Algorithm 4 Averaged perceptron learning algorithm 1: procedure AVG-PERCEPTRON(x(1:N), y(1:N)) 2: t ←0 3: θ(0) ←0 4: repeat 5: t ←t + 1 6: Select an instance i 7: ˆy ←argmaxy θ(t−1) · f(x(i), y) 8: if ˆy ̸= y(i) then 9: θ(t) ←θ(t−1) + f(x(i), y(i)) −f(x(i), ˆy) 10: else 11: θ(t) ←θ(t−1) 12: m ←m + θ(t) 13: until tired 14: θ ←1 t m 15: return θ non-negative number, measuring the performance of the classifier on a training instance. Formally, the loss ℓ(θ; x(i), y(i)) is then a measure of the performance of the weights θ on the instance (x(i), y(i)). The goal of learning is to minimize the sum of the losses across all instances in the training set. We can trivially reformulate maximum likelihood as a loss function, by defining the loss function to be the negative log-likelihood: log p(x(1:N), y(1:N); θ) = N X i=1 log p(x(i), y(i); θ) [2.36] ℓNB(θ; x(i), y(i)) = −log p(x(i), y(i); θ) [2.37] ˆθ = argmin θ N X i=1 ℓNB(θ; x(i), y(i)) [2.38] = argmax θ N X i=1 log p(x(i), y(i); θ). [2.39] The problem of minimizing ℓNB is thus identical to maximum-likelihood estimation. Loss functions provide a general framework for comparing learning objectives. For example, an alternative loss function is the zero-one loss, ℓ0-1(θ; x(i), y(i)) = ( 0, y(i) = argmaxy θ · f(x(i), y) 1, otherwise [2.40] Jacob Eisenstein. Draft of November 13, 2018.
|
nlp_Page_46_Chunk51
|
2.4. LOSS FUNCTIONS AND LARGE-MARGIN CLASSIFICATION 29 The zero-one loss is zero if the instance is correctly classified, and one otherwise. The sum of zero-one losses is proportional to the error rate of the classifier on the training data. Since a low error rate is often the ultimate goal of classification, this may seem ideal. But the zero-one loss has several problems. One is that it is non-convex,14 which means that there is no guarantee that gradient-based optimization will be effective. A more serious problem is that the derivatives are useless: the partial derivative with respect to any parameter is zero everywhere, except at the points where θ·f(x(i), y) = θ·f(x(i), ˆy) for some ˆy. At those points, the loss is discontinuous, and the derivative is undefined. The perceptron optimizes a loss function that has better properties for learning: ℓPERCEPTRON(θ; x(i), y(i)) = max y∈Y θ · f(x(i), y) −θ · f(x(i), y(i)), [2.41] When ˆy = y(i), the loss is zero; otherwise, it increases linearly with the gap between the score for the predicted label ˆy and the score for the true label y(i). Plotting this loss against the input maxy∈Y θ · f(x(i), y) −θ · f(x(i), y(i)) gives a hinge shape, motivating the name hinge loss. To see why this is the loss function optimized by the perceptron, take the derivative with respect to θ, ∂ ∂θℓPERCEPTRON(θ; x(i), y(i)) = f(x(i), ˆy) −f(x(i), y(i)). [2.42] At each instance, the perceptron algorithm takes a step of magnitude one in the opposite direction of this gradient, ∇θℓPERCEPTRON = ∂ ∂θℓPERCEPTRON(θ; x(i), y(i)). As we will see in § 2.6, this is an example of the optimization algorithm stochastic gradient descent, applied to the objective in Equation 2.41. *Breaking ties with subgradient descent 15 Careful readers will notice the tacit assump- tion that there is a unique ˆy that maximizes θ · f(x(i), y). What if there are two or more labels that maximize this function? Consider binary classification: if the maximizer is y(i), then the gradient is zero, and so is the perceptron update; if the maximizer is ˆy ̸= y(i), then the update is the difference f(x(i), y(i)) −f(x(i), ˆy). The underlying issue is that the perceptron loss is not smooth, because the first derivative has a discontinuity at the hinge point, where the score for the true label y(i) is equal to the score for some other label ˆy. At this point, there is no unique gradient; rather, there is a set of subgradients. A vector v is 14A function f is convex iff αf(xi)+(1−α)f(xj) ≥f(αxi+(1−α)xj), for all α ∈[0, 1] and for all xi and xj on the domain of the function. In words, any weighted average of the output of f applied to any two points is larger than the output of f when applied to the weighted average of the same two points. Convexity implies that any local minimum is also a global minimum, and there are many effective techniques for optimizing convex functions (Boyd and Vandenberghe, 2004). See Appendix B for a brief review. 15Throughout this text, advanced topics will be marked with an asterisk. Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_47_Chunk52
|
30 CHAPTER 2. LINEAR TEXT CLASSIFICATION a subgradient of the function g at u0 iff g(u) −g(u0) ≥v · (u −u0) for all u. Graphically, this defines the set of hyperplanes that include g(u0) and do not intersect g at any other point. As we approach the hinge point from the left, the gradient is f(x, ˆy)−f(x, y); as we approach from the right, the gradient is 0. At the hinge point, the subgradients include all vectors that are bounded by these two extremes. In subgradient descent, any subgradient can be used (Bertsekas, 2012). Since both 0 and f(x, ˆy) −f(x, y) are subgradients at the hinge point, either one can be used in the perceptron update. This means that if multiple labels maximize θ · f(x(i), y), any of them can be used in the perceptron update. Perceptron versus Na¨ıve Bayes The perceptron loss function has some pros and cons with respect to the negative log-likelihood loss implied by Na¨ıve Bayes. • Both ℓNB and ℓPERCEPTRON are convex, making them relatively easy to optimize. How- ever, ℓNB can be optimized in closed form, while ℓPERCEPTRON requires iterating over the dataset multiple times. • ℓNB can suffer infinite loss on a single example, since the logarithm of zero probability is negative infinity. Na¨ıve Bayes will therefore overemphasize some examples, and underemphasize others. • The Na¨ıve Bayes classifier assumes that the observed features are conditionally in- dependent, given the label, and the performance of the classifier depends on the extent to which this assumption holds. The perceptron requires no such assump- tion. • ℓPERCEPTRON treats all correct answers equally. Even if θ only gives the correct answer by a tiny margin, the loss is still zero. 2.4.1 Online large margin classification This last comment suggests a potential problem with the perceptron. Suppose a test ex- ample is very close to a training example, but not identical. If the classifier only gets the correct answer on the training example by a small amount, then it may give a different answer on the nearby test instance. To formalize this intuition, define the margin as, γ(θ; x(i), y(i)) = θ · f(x(i), y(i)) −max y̸=y(i) θ · f(x(i), y). [2.43] The margin represents the difference between the score for the correct label y(i), and the score for the highest-scoring incorrect label. The intuition behind large margin clas- sification is that it is not enough to label the training data correctly — the correct label should be separated from other labels by a comfortable margin. This idea can be encoded Jacob Eisenstein. Draft of November 13, 2018.
|
nlp_Page_48_Chunk53
|
2.4. LOSS FUNCTIONS AND LARGE-MARGIN CLASSIFICATION 31 −2 −1 0 1 2 θ · f(x(i), y(i)) −θ · f(x(i), ˆy) 0 1 2 3 loss 0/1 loss margin loss logistic loss Figure 2.2: Margin, zero-one, and logistic loss functions. into a loss function, ℓMARGIN(θ; x(i), y(i)) = ( 0, γ(θ; x(i), y(i)) ≥1, 1 −γ(θ; x(i), y(i)), otherwise [2.44] = 1 −γ(θ; x(i), y(i)) + , [2.45] where (x)+ = max(0, x). The loss is zero if there is a margin of at least 1 between the score for the true label and the best-scoring alternative ˆy. This is almost identical to the perceptron loss, but the hinge point is shifted to the right, as shown in Figure 2.2. The margin loss is a convex upper bound on the zero-one loss. The margin loss can be minimized using an online learning rule that is similar to per- ceptron. We will call this learning rule the online support vector machine, for reasons that will be discussed in the derivation. Let us first generalize the notion of a classifica- tion error with a cost function c(y(i), y). We will focus on the simple cost function, c(y(i), y) = ( 1, y(i) ̸= ˆy 0, otherwise, [2.46] but it is possible to design specialized cost functions that assign heavier penalties to espe- cially undesirable errors (Tsochantaridis et al., 2004). This idea is revisited in chapter 7. Using the cost function, we can now define the online support vector machine as the Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_49_Chunk54
|
32 CHAPTER 2. LINEAR TEXT CLASSIFICATION following classification rule: ˆy = argmax y∈Y θ · f(x(i), y) + c(y(i), y) [2.47] θ(t) ←(1 −λ)θ(t−1) + f(x(i), y(i)) −f(x(i), ˆy) [2.48] This update is similar in form to the perceptron, with two key differences. • Rather than selecting the label ˆy that maximizes the score of the current classifi- cation model, the argmax searches for labels that are both strong, as measured by θ · f(x(i), y), and wrong, as measured by c(y(i), y). This maximization is known as cost-augmented decoding, because it augments the maximization objective to favor high-cost labels. If the highest-scoring label is y = y(i), then the margin loss for this instance is zero, and no update is needed. If not, then an update is required to reduce the margin loss — even if the current model classifies the instance correctly. Cost augmentation is only done while learning; it is not applied when making pre- dictions on unseen data. • The previous weights θ(t−1) are scaled by (1 −λ), with λ ∈(0, 1). The effect of this term is to cause the weights to “decay” back towards zero. In the support vector machine, this term arises from the minimization of a specific form of the margin, as described below. However, it can also be viewed as a form of regularization, which can help to prevent overfitting (see § 2.5.1). In this sense, it plays a role that is similar to smoothing in Na¨ıve Bayes (see § 2.2.4). 2.4.2 *Derivation of the online support vector machine The derivation of the online support vector machine is somewhat involved, but gives further intuition about why the method works. Begin by returning the idea of linear sep- arability (Definition 1): if a dataset is linearly separable, then there is some hyperplane θ that correctly classifies all training instances with margin ρ. This margin can be increased to any desired value by multiplying the weights by a constant. Now, for any datapoint (x(i), y(i)), the geometric distance to the separating hyper- plane is given by γ(θ;x(i),y(i)) ||θ||2 , where the denominator is the norm of the weights, ||θ||2 = qP j θ2 j. The geometric distance is sometimes called the geometric margin, in contrast to the functional margin γ(θ; x(i), y(i)). Both are shown in Figure 2.3. The geometric margin is a good measure of the robustness of the separator: if the functional margin is large, but the norm ||θ||2 is also large, then a small change in x(i) could cause it to be misclassified. We therefore seek to maximize the minimum geometric margin across the dataset, subject Jacob Eisenstein. Draft of November 13, 2018.
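One possible rendering of the update in Equations 2.47-2.48 (a sketch, not the author's code); the cost c is the simple zero-one cost of Equation 2.46, and, following the prose above, no update is applied when the cost-augmented argmax already selects the true label:

def online_svm_update(theta, x, y_true, label_set, feature_function, lam=0.01):
    # Cost-augmented decoding: search for a label that is both strong and wrong.
    def cost_augmented_score(y):
        score = sum(theta.get(f, 0.0) * v for f, v in feature_function(x, y).items())
        return score + (0.0 if y == y_true else 1.0)

    y_hat = max(label_set, key=cost_augmented_score)
    if y_hat == y_true:
        return theta  # margin loss is zero; no update is needed

    # Equation 2.48: decay the old weights towards zero, then move towards the
    # features of the true label and away from the features of the guess.
    new_theta = {f: (1.0 - lam) * v for f, v in theta.items()}
    for f, v in feature_function(x, y_true).items():
        new_theta[f] = new_theta.get(f, 0.0) + v
    for f, v in feature_function(x, y_hat).items():
        new_theta[f] = new_theta.get(f, 0.0) - v
    return new_theta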
|
nlp_Page_50_Chunk55
|
2.4. LOSS FUNCTIONS AND LARGE-MARGIN CLASSIFICATION 33 functional margin geometric margin Figure 2.3: Functional and geometric margins for a binary classification problem. All separators that satisfy the margin constraint are shown. The separator with the largest geometric margin is shown in bold. to the constraint that the margin loss is always zero: max θ min i=1,2,...N γ(θ; x(i), y(i)) ||θ||2 s.t. γ(θ; x(i), y(i)) ≥1, ∀i. [2.49] This is a constrained optimization problem, where the second line describes constraints on the space of possible solutions θ. In this case, the constraint is that the functional margin always be at least one, and the objective is that the minimum geometric margin be as large as possible. Constrained optimization is reviewed in Appendix B. In this case, further manipula- tion yields an unconstrained optimization problem. First, note that the norm ||θ||2 scales linearly: ||aθ||2 = a||θ||2. Furthermore, the functional margin γ is a linear function of θ, so that γ(aθ, x(i), y(i)) = aγ(θ, x(i), y(i)). As a result, any scaling factor on θ will cancel in the numerator and denominator of the geometric margin. If the data is linearly separable at any ρ > 0, it is always possible to rescale the functional margin to 1 by multiplying θ by a scalar constant. We therefore need only minimize the denominator ||θ||2, subject to the constraint on the functional margin. The minimizer of ||θ||2 is also the minimizer of 1 2||θ||2 2 = 1 2 P θ2 j, which is easier to work with. This yields a simpler optimization prob- Under contract with MIT Press, shared under CC-BY-NC-ND license.
|
nlp_Page_51_Chunk56
|
34 CHAPTER 2. LINEAR TEXT CLASSIFICATION lem: min θ . 1 2||θ||2 2 s.t. γ(θ; x(i), y(i)) ≥1, ∀i. [2.50] This problem is a quadratic program: the objective is a quadratic function of the pa- rameters, and the constraints are all linear inequalities. One solution to this problem is to incorporate the constraints through Lagrange multipliers αi ≥0, i = 1, 2, . . . , N. The instances for which αi > 0 are called support vectors; other instances are irrelevant to the classification boundary. This motivates the name support vector machine. Thus far we have assumed linear separability, but many datasets of interest are not linearly separable. In this case, there is no θ that satisfies the margin constraint. To add more flexibility, we can introduce a set of slack variables ξi ≥0. Instead of requiring that the functional margin be greater than or equal to one, we require that it be greater than or equal to 1 −ξi. Ideally there would not be any slack, so the slack variables are penalized in the objective function: min θ,ξ 1 2||θ||2 2 + C N X i=1 ξi s.t. γ(θ; x(i), y(i)) + ξi ≥1, ∀i ξi ≥0, ∀i. [2.51] The hyperparameter C controls the tradeoff between violations of the margin con- straint and the preference for a low norm of θ. As C →∞, slack is infinitely expensive, and there is only a solution if the data is separable. As C →0, slack becomes free, and there is a trivial solution at θ = 0. Thus, C plays a similar role to the smoothing parame- ter in Na¨ıve Bayes (§ 2.2.4), trading off between a close fit to the training data and better generalization. Like the smoothing parameter of Na¨ıve Bayes, C must be set by the user, typically by maximizing performance on a heldout development set. To solve the constrained optimization problem defined in Equation 2.51, we can first solve for the slack variables, ξi ≥(1 −γ(θ; x(i), y(i)))+. [2.52] The inequality is tight: the optimal solution is to make the slack variables as small as possible, while still satisfying the constraints (Ratliff et al., 2007; Smith, 2011). By plugging in the minimum slack variables back into Equation 2.51, the problem can be transformed into the unconstrained optimization, min θ λ 2 ||θ||2 2 + N X i=1 (1 −γ(θ; x(i), y(i)))+, [2.53] Jacob Eisenstein. Draft of November 13, 2018.
2.5. LOGISTIC REGRESSION 35 where each ξi has been substituted by the right-hand side of Equation 2.52, and the factor of C on the slack variables has been replaced by an equivalent factor of λ = 1 C on the norm of the weights. Equation 2.53 can be rewritten by expanding the margin, min θ λ 2 ||θ||2 2 + N X i=1 max y∈Y θ · f(x(i), y) + c(y(i), y) −θ · f(x(i), y(i)) + , [2.54] where c(y, y(i)) is the cost function defined in Equation 2.46. We can now differentiate with respect to the weights, ∇θLSVM =λθ + N X i=1 f(x(i), ˆy) −f(x(i), y(i)), [2.55] where LSVM refers to minimization objective in Equation 2.54 and ˆy = argmaxy∈Y θ · f(x(i), y) + c(y(i), y). The online support vector machine update arises from the appli- cation of stochastic gradient descent (described in § 2.6.2) to this gradient. 2.5 Logistic regression Thus far, we have seen two broad classes of learning algorithms. Na¨ıve Bayes is a prob- abilistic method, where learning is equivalent to estimating a joint probability distribu- tion. The perceptron and support vector machine are discriminative, error-driven algo- rithms: the learning objective is closely related to the number of errors on the training data. Probabilistic and error-driven approaches each have advantages: probability makes it possible to quantify uncertainty about the predicted labels, but the probability model of Na¨ıve Bayes makes unrealistic independence assumptions that limit the features that can be used. Logistic regression combines advantages of discriminative and probabilistic classi- fiers. Unlike Na¨ıve Bayes, which starts from the joint probability pX,Y , logistic regression defines the desired conditional probability pY |X directly. Think of θ · f(x, y) as a scoring function for the compatibility of the base features x and the label y. To convert this score into a probability, we first exponentiate, obtaining exp (θ · f(x, y)), which is guaranteed to be non-negative. Next, we normalize, dividing over all possible labels y′ ∈Y. The resulting conditional probability is defined as, p(y | x; θ) = exp (θ · f(x, y)) P y′∈Y exp (θ · f(x, y′)). [2.56] Under contract with MIT Press, shared under CC-BY-NC-ND license.
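To make the softmax computation in Equation 2.56 concrete, here is a minimal Python/NumPy sketch. It assumes the common parameterization in which the feature function f(x, y) copies the bag-of-words vector x into a block of weights for label y, so that the score θ · f(x, y = k) is the inner product of x with a per-label weight vector; the shapes, variable names, and toy values are illustrative choices, not prescribed by the text.

import numpy as np

def p_y_given_x(theta, x):
    # theta: (K, V) matrix with one row of weights per label, so that
    # theta[k] @ x plays the role of theta . f(x, y=k) in Equation 2.56
    scores = theta @ x
    exp_scores = np.exp(scores - scores.max())   # shift for numerical stability
    return exp_scores / exp_scores.sum()

# toy example: vocabulary of four types, three labels
x = np.array([2.0, 0.0, 1.0, 1.0])               # bag-of-words counts
theta = np.zeros((3, 4))
print(p_y_given_x(theta, x))                      # uniform: [1/3, 1/3, 1/3]

With all weights equal to zero, every label receives the same score, so the conditional distribution is uniform; as the weights move away from zero, probability mass shifts toward labels with higher scores.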
36 CHAPTER 2. LINEAR TEXT CLASSIFICATION Given a dataset D = {(x(i), y(i))}N i=1, the weights θ are estimated by maximum condi- tional likelihood, log p(y(1:N) | x(1:N); θ) = N X i=1 log p(y(i) | x(i); θ) [2.57] = N X i=1 θ · f(x(i), y(i)) −log X y′∈Y exp θ · f(x(i), y′) . [2.58] The final line is obtained by plugging in Equation 2.56 and taking the logarithm.16 Inside the sum, we have the (additive inverse of the) logistic loss, ℓLOGREG(θ; x(i), y(i)) = −θ · f(x(i), y(i)) + log X y′∈Y exp(θ · f(x(i), y′)) [2.59] The logistic loss is shown in Figure 2.2 on page 31. A key difference from the zero-one and hinge losses is that logistic loss is never zero. This means that the objective function can always be improved by assigning higher confidence to the correct label. 2.5.1 Regularization As with the support vector machine, better generalization can be obtained by penalizing the norm of θ. This is done by adding a multiple of the squared norm λ 2||θ||2 2 to the minimization objective. This is called L2 regularization, because ||θ||2 2 is the squared L2 norm of the vector θ. Regularization forces the estimator to trade off performance on the training data against the norm of the weights, and this can help to prevent overfitting. Consider what would happen to the unregularized weight for a base feature j that is active in only one instance x(i): the conditional log-likelihood could always be improved by increasing the weight for this feature, so that θ(j,y(i)) →∞and θ(j,˜y̸=y(i)) →−∞, where (j, y) is the index of feature associated with x(i) j and label y in f(x(i), y). In § 2.2.4 (footnote 11), we saw that smoothing the probabilities of a Na¨ıve Bayes clas- sifier can be justified as a form of maximum a posteriori estimation, in which the param- eters of the classifier are themselves random variables, drawn from a prior distribution. The same justification applies to L2 regularization. In this case, the prior is a zero-mean Gaussian on each term of θ. The log-likelihood under a zero-mean Gaussian is, log N(θj; 0, σ2) ∝− 1 2σ2 θ2 j, [2.60] so that the regularization weight λ is equal to the inverse variance of the prior, λ = 1 σ2 . 16The log-sum-exp term is a common pattern in machine learning. It is numerically unstable, because it will underflow if the inner product is small, and overflow if the inner product is large. Scientific computing libraries usually contain special functions for computing logsumexp, but with some thought, you should be able to see how to create an implementation that is numerically stable. Jacob Eisenstein. Draft of November 13, 2018.
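Footnote 16 invites a numerically stable implementation of the log-sum-exp term in Equation 2.58. The standard trick is to factor out the largest score before exponentiating, so that no exponent can overflow. A minimal sketch follows; the block-structured feature function and the toy values are illustrative assumptions, and scientific computing libraries provide a ready-made routine (for example, scipy.special.logsumexp).

import numpy as np

def logsumexp(a):
    # log(sum(exp(a))), stabilized by factoring out the largest term
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def joint_feature(x, y, num_labels):
    # f(x, y): copy the base features x into the block associated with label y
    f = np.zeros(num_labels * len(x))
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def logistic_loss(theta, x, y, num_labels):
    # the per-example logistic loss of Equation 2.59, with a stable normalizer
    scores = np.array([theta @ joint_feature(x, yp, num_labels)
                       for yp in range(num_labels)])
    return -theta @ joint_feature(x, y, num_labels) + logsumexp(scores)

x = np.array([2.0, 0.0, 1.0, 1.0])
theta = np.zeros(3 * len(x))
print(logistic_loss(theta, x, y=1, num_labels=3))   # log(3) when theta = 0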
2.6. OPTIMIZATION 37 2.5.2 Gradients Logistic loss is minimized by optimization along the gradient. Specific algorithms are de- scribed in the next section, but first let’s compute the gradient with respect to the logistic loss of a single example: ℓLOGREG = −θ · f(x(i), y(i)) + log X y′∈Y exp θ · f(x(i), y′) [2.61] ∂ℓ ∂θ = −f(x(i), y(i)) + 1 P y′′∈Y exp
(θ · f(x(i), y′′)) × Σy′∈Y exp(θ · f(x(i), y′)) f(x(i), y′) [2.62] = −f(x(i), y(i)) + Σy′∈Y p(y′ | x(i); θ) f(x(i), y′) [2.63] = −f(x(i), y(i)) + E[f(x(i), y)], [2.64] where the expectation is taken under the model distribution p(y | x(i); θ). Summing over instances gives the derivative of the conditional log-likelihood, ∇θ log p(y(1:N) | x(1:N); θ) = ΣN i=1 f(x(i), y(i)) − E[f(x(i), y)], [2.65] which is zero exactly when the observed feature counts equal their expected counts under the model. 2.6 Optimization Each of the learning algorithms in this chapter can be viewed as optimizing an objective function of the parameters and the data: • In Naïve Bayes, the objective is the joint likelihood log p(x(1:N), y(1:N); µ, φ). This objective is maximized in closed form, by the relative-frequency estimates of § 2.2.3.
38 CHAPTER 2. LINEAR TEXT CLASSIFICATION • In the support vector machine, the objective is the regularized margin loss, LSVM = λ 2 ||θ||2 2 + N X i=1 (max y∈Y (θ · f(x(i), y) + c(y(i), y)) −θ · f(x(i), y(i)))+, [2.68] There is no closed-form solution, but the objective is convex. The perceptron algo- rithm minimizes a similar objective. • In logistic regression, the objective is the regularized negative log-likelihood, LLOGREG = λ 2 ||θ||2 2 − N X i=1 θ · f(x(i), y(i)) −log X y∈Y exp θ · f(x(i), y) [2.69] Again, there is no closed-form solution, but the objective is convex. These learning algorithms are distinguished by what is being optimized, rather than how the optimal weights are found. This decomposition is an essential feature of con- temporary machine learning. The domain expert’s job is to design an objective function — or more generally, a model of the problem. If the model has certain characteristics, then generic optimization algorithms can be used to find the solution. In particular, if an objective function is differentiable, then gradient-based optimization can be employed; if it is also convex, then gradient-based optimization is guaranteed to find the globally optimal solution. The support vector machine and logistic regression have both of these properties, and so are amenable to generic convex optimization techniques (Boyd and Vandenberghe, 2004). 2.6.1 Batch optimization In batch optimization, each update to the weights is based on a computation involving the entire dataset. One such algorithm is gradient descent, which iteratively updates the weights, θ(t+1) ←θ(t) −η(t)∇θL, [2.70] where ∇θL is the gradient computed over the entire training set, and η(t) is the learning rate at iteration t. If the objective L is a convex function of θ, then this procedure is guaranteed to terminate at the global optimum, for appropriate schedule of learning rates, η(t).17 17Convergence proofs typically require the learning rate to satisfy the following conditions: P∞ t=1 η(t) = ∞and P∞ t=1(η(t))2 < ∞(Bottou et al., 2016). These properties are satisfied by any learning rate schedule η(t) = η(0)t−α for α ∈[1, 2]. Jacob Eisenstein. Draft of November 13, 2018.
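As a rough illustration of batch gradient descent (Equation 2.70) applied to the regularized negative log-likelihood (Equation 2.69), the sketch below uses the per-label weight-matrix parameterization, a constant learning rate, and a tiny synthetic dataset; all of these choices are assumptions made for the example, and the analogous update for the support vector machine would substitute the subgradient of Equation 2.68.

import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def logreg_gradient(theta, X, Y, lam):
    # gradient of Equation 2.69: the regularizer, plus expected minus
    # observed feature counts for each instance
    grad = lam * theta
    for x, y in zip(X, Y):
        p = softmax(theta @ x)       # p(y | x; theta)
        grad += np.outer(p, x)       # expected counts under the model
        grad[y] -= x                 # observed counts for the true label
    return grad

def gradient_descent(X, Y, num_labels, num_features, lam=0.1, eta=0.1, iters=200):
    # the batch update of Equation 2.70, with a constant learning rate
    theta = np.zeros((num_labels, num_features))
    for t in range(iters):
        theta -= eta * logreg_gradient(theta, X, Y, lam)
    return theta

# toy dataset: two documents over a three-word vocabulary, two labels
X = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])]
Y = [0, 1]
theta = gradient_descent(X, Y, num_labels=2, num_features=3)
print(softmax(theta @ X[0]))         # most of the probability mass on label 0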
2.6. OPTIMIZATION 39 In practice, gradient descent can be slow to converge, as the gradient can become infinitesimally small. Faster convergence can be obtained by second-order Newton optimization, which incorporates the inverse of the Hessian matrix, Hi,j = ∂2L ∂θi∂θj [2.71] The size of the Hessian matrix is quadratic in the number of features. In the bag-of-words representation, this is usually too big to store, let alone invert. Quasi-Newton optimization techniques maintain a low-rank approximation to the inverse of the Hessian matrix. Such techniques usually converge more quickly than gradient descent, while remaining computationally tractable even for large feature sets. A popular quasi-Newton algorithm is L-BFGS (Liu and Nocedal, 1989), which is implemented in many scientific computing environments, such as SCIPY and MATLAB. For any gradient-based technique, the user must set the learning rates η(t). While convergence proofs usually employ a decreasing learning rate, in practice, it is common to fix η(t) to a small constant, like 10−3. The specific constant can be chosen by experimentation, although there is research on determining the learning rate automatically (Schaul et al., 2013; Wu et al., 2018). 2.6.2 Online optimization Batch optimization computes the objective on the entire training set before making an update. This may be inefficient, because at early stages of training, a small number of training examples could point the learner in the correct direction. Online learning algorithms make updates to the weights while iterating through the training data. The theoretical basis for this approach is a stochastic approximation to the true objective function, N X i=1 ℓ(θ; x(i), y(i)) ≈N × ℓ(θ; x(j), y(j)), (x(j), y(j)) ∼{(x(i), y(i))}N i=1, [2.72] where the instance (x(j), y(j)) is sampled at random from the full dataset. In stochastic gradient descent, the approximate gradient is computed by randomly sampling a single instance, and an update is made immediately. This is similar to the perceptron algorithm, which also updates the weights one instance at a time. In minibatch stochastic gradient descent, the gradient is computed over a small set of instances. A typical approach is to set the minibatch size so that the entire batch fits in memory on a graphics processing unit (GPU; Neubig et al., 2017). It is then possible to speed up learning by parallelizing the computation of the gradient over each instance in the minibatch. Algorithm 5 offers a generalized view of gradient descent. In standard gradient descent, the batcher returns a single batch with all the instances. In stochastic gradient de- Under contract with MIT Press, shared under CC-BY-NC-ND license.
40 CHAPTER 2. LINEAR TEXT CLASSIFICATION Algorithm 5 Generalized gradient descent. The function BATCHER partitions the train- ing set into B batches such that each instance appears in exactly one batch. In gradient descent, B = 1; in stochastic gradient descent, B = N; in minibatch stochastic gradient descent, 1 < B < N. 1: procedure GRADIENT-DESCENT(x(1:N), y(1:N), L, η(1...∞), BATCHER, Tmax) 2: θ ←0 3: t ←0 4: repeat 5: (b(1), b(2), . . . , b(B)) ←BATCHER(N) 6: for n ∈{1, 2, . . . , B} do 7: t ←t + 1 8: θ(t) ←θ(t−1) −η(t)∇θL(θ(t−1); x(b(n) 1 ,b(n) 2 ,...), y(b(n) 1 ,b(n) 2 ,...)) 9: if Converged(θ(1,2,...,t)) then 10: return θ(t) 11: until t ≥Tmax 12: return θ(t) scent, it returns N batches with one instance each. In mini-batch settings, the batcher returns B minibatches, 1 < B < N. There are many other techniques for online learning, and research in this area is on- going (Bottou et al., 2016). Some algorithms use an adaptive learning rate, which can be different for every feature (Duchi et al., 2011). Features that occur frequently are likely to be updated frequently, so it is best to use a small learning rate; rare features will be updated infrequently, so it is better to take larger steps. The AdaGrad (adaptive gradient) algorithm achieves this behavior by storing the sum of the squares of the gradients for each feature, and rescaling the learning rate by its inverse: gt =∇θL(θ(t); x(i), y(i)) [2.73] θ(t+1) j ←θ(t) j − η(t) qPt t′=1 g2 t,j gt,j, [2.74] where j iterates over features in f(x, y). In most cases, the number of active features for any instance is much smaller than the number of weights. If so, the computation cost of online optimization will be dominated by the update from the regularization term, λθ. The solution is to be “lazy”, updating each θj only as it is used. To implement lazy updating, store an additional parameter τj, which is the iteration at which θj was last updated. If θj is needed at time t, the t −τ regularization updates can be performed all at once. This strategy is described in detail by Kummerfeld et al. (2015). Jacob Eisenstein. Draft of November 13, 2018.
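The AdaGrad update of Equations 2.73-2.74 is straightforward to implement. The sketch below adds a small epsilon to the denominator, which is not part of the equations but is a common safeguard against division by zero on the first update; the per-instance gradient is left as a placeholder, since any of the losses in this chapter could be plugged in.

import numpy as np

def adagrad_update(theta, grad, sum_sq_grad, eta=0.1, eps=1e-8):
    # Equations 2.73-2.74: per-feature learning rates, scaled by the
    # inverse square root of the accumulated squared gradients
    sum_sq_grad += grad ** 2
    theta -= eta * grad / (np.sqrt(sum_sq_grad) + eps)
    return theta, sum_sq_grad

# a single illustrative update: larger gradients do not automatically
# produce larger steps, because each feature's step is rescaled
theta, s = adagrad_update(np.zeros(3), np.array([1.0, 0.0, -2.0]), np.zeros(3))
print(theta)

# schematic use inside a stochastic gradient loop:
# for x, y in training_data:
#     grad = per_instance_gradient(theta, x, y)   # placeholder gradient function
#     theta, s = adagrad_update(theta, grad, s)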
2.7. *ADDITIONAL TOPICS IN CLASSIFICATION 41 2.7 *Additional topics in classification This section presents some additional topics in classification that are particularly relevant for natural language processing, especially for understanding the research literature. 2.7.1 Feature selection by regularization In logistic regression and large-margin classification, generalization can be improved by regularizing the weights towards 0, using the L2 norm. But rather than encouraging weights to be small, it might be better for the model to be sparse: it should assign weights of exactly zero to most features, and only assign non-zero weights to features that are clearly necessary. This idea can be formalized by the L0 norm, L0 = ||θ||0 = P j δ (θj ̸= 0), which applies a constant penalty for each non-zero weight. This norm can be thought of as a form of feature selection: optimizing the L0-regularized conditional likelihood is equivalent to trading off the log-likelihood against the number of active features. Reduc- ing the number of active features is desirable because the resulting model will be fast, low-memory, and should generalize well, since irrelevant features will be pruned away. Unfortunately, the L0 norm is non-convex and non-differentiable. Optimization under L0 regularization is NP-hard, meaning that it can be solved efficiently only if P=NP (Ge et al., 2011). A useful alternative is the L1 norm, which is equal to the sum of the absolute values of the weights, ||θ||1 = P j |θj|. The L1 norm is convex, and can be used as an approxima- tion to L0 (Tibshirani, 1996). Conveniently, the L1 norm also performs feature selection, by driving many of the coefficients to zero; it is therefore known as a sparsity inducing regularizer. The L1 norm does not have a gradient at θj = 0, so we must instead optimize the L1-regularized objective using subgradient methods. The associated stochastic sub- gradient descent algorithms are only somewhat more complex than conventional SGD; Sra et al. (2012) survey approaches for estimation under L1 and other regularizers. Gao et al. (2007) compare L1 and L2 regularization on a suite of NLP problems, finding that L1 regularization generally gives similar accuracy to L2 regularization, but that L1 regularization produces models that are between ten and fifty times smaller, because more than 90% of the feature weights are set to zero. 2.7.2 Other views of logistic regression In binary classification, we can dispense with the feature function, and choose y based on the inner product of θ · x. The conditional probability pY |X is obtained by passing this Under contract with MIT Press, shared under CC-BY-NC-ND license.
42 CHAPTER 2. LINEAR TEXT CLASSIFICATION inner product through a logistic function, σ(a) ≜ exp(a) 1 + exp(a) = (1 + exp(−a))−1 [2.75] p(y | x; θ) =σ(θ · x). [2.76] This is the origin of the name “logistic regression.” Logistic regression can be viewed as part of a larger family of generalized linear models (GLMs), in which various other link functions convert between the inner product θ · x and the parameter of a conditional probability distribution. Logistic regression and related models are sometimes referred to as log-linear, be- cause the log-probability is a linear function of the features. But in the early NLP liter- ature, logistic regression was often called maximum entropy classification (Berger et al., 1996). This name refers to an alternative formulation, in which the goal is to find the max- imum entropy probability function that satisfies moment-matching constraints. These constraints specify that the empirical counts of each feature should match the expected counts under the induced probability distribution pY |X;θ, N X i=1 fj(x(i), y(i)) = N X i=1 X y∈Y p(y | x(i); θ)fj(x(i), y), ∀j [2.77] The moment-matching constraint is satisfied exactly when the derivative of the condi- tional log-likelihood function (Equation 2.65) is equal to zero. However, the constraint can be met by many values of θ, so which should we choose? The entropy of the conditional probability distribution pY |X is, H(pY |X) = − X x∈X pX(x) X y∈Y pY |X(y | x) log pY |X(y | x), [2.78] where X is the set of all possible feature vectors, and pX(x) is the probability of observing the base features x. The distribution pX is unknown, but it can be estimated by summing over all the instances in the training set, ˜H(pY |X) = −1 N N X i=1 X y∈Y pY |X(y | x(i)) log pY |X(y | x(i)). [2.79] If the entropy is large, the likelihood function is smooth across possible values of y; if it is small, the likelihood function is sharply peaked at some preferred value; in the limiting case, the entropy is zero if p(y | x) = 1 for some y. The maximum-entropy cri- terion chooses to make the weakest commitments possible, while satisfying the moment- matching constraints from Equation 2.77. The solution to this constrained optimization problem is identical to the maximum conditional likelihood (logistic-loss) formulation that was presented in § 2.5. Jacob Eisenstein. Draft of November 13, 2018.
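A quick numerical check of Equation 2.76: for a two-label problem, the sigmoid form with a single weight vector gives exactly the same probability as the softmax form of Equation 2.56, when that weight vector is the difference of the two per-label weight vectors (this is the construction asked for in exercise 4). The specific numbers below are arbitrary.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

x = np.array([1.0, 0.0, 2.0])
theta_0 = np.array([0.2, -0.1, 0.3])      # weights for label 0
theta_1 = np.array([-0.4, 0.5, 0.1])      # weights for label 1

p_softmax = softmax(np.array([theta_0 @ x, theta_1 @ x]))[1]
p_sigmoid = sigmoid((theta_1 - theta_0) @ x)
print(np.isclose(p_softmax, p_sigmoid))   # True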
2.8. SUMMARY OF LEARNING ALGORITHMS 43 2.8 Summary of learning algorithms It is natural to ask which learning algorithm is best, but the answer depends on what characteristics are important to the problem you are trying to solve. Na¨ıve Bayes Pros: easy to implement; estimation is fast, requiring only a single pass over the data; assigns probabilities to predicted labels; controls overfitting with smooth- ing parameter. Cons: often has poor accuracy, especially with correlated features. Perceptron Pros: easy to implement; online; error-driven learning means that accuracy is typically high, especially after averaging. Cons: not probabilistic; hard to know when to stop learning; lack of margin can lead to overfitting. Support vector machine Pros: optimizes an error-based metric, usually resulting in high accuracy; overfitting is controlled by a regularization parameter. Cons: not proba- bilistic. Logistic regression Pros: error-driven and probabilistic; overfitting is controlled by a reg- ularization parameter. Cons: batch learning requires black-box optimization; logistic loss can “overtrain” on correctly labeled examples. One of the main distinctions is whether the learning algorithm offers a probability over labels. This is useful in modular architectures, where the output of one classifier is the input for some other system. In cases where probability is not necessary, the sup- port vector machine is usually the right choice, since it is no more difficult to implement than the perceptron, and is often more accurate. When probability is necessary, logistic regression is usually more accurate than Na¨ıve Bayes. Additional resources A machine learning textbook will offer more classifiers and more details (e.g., Murphy, 2012), although the notation will differ slightly from what is typical in natural language processing. Probabilistic methods are surveyed by Hastie et al. (2009), and Mohri et al. (2012) emphasize theoretical considerations. Bottou et al. (2016) surveys the rapidly mov- ing field of online learning, and Kummerfeld et al. (2015) empirically review several opti- mization algorithms for large-margin learning. The python toolkit SCIKIT-LEARN includes implementations of all of the algorithms described in this chapter (Pedregosa et al., 2011). Appendix B describes an alternative large-margin classifier, called passive-aggressive. Passive-aggressive is an online learner that seeks to make the smallest update that satisfies the margin constraint at the current instance. It is closely related to MIRA, which was used widely in NLP in the 2000s (Crammer and Singer, 2003). Under contract with MIT Press, shared under CC-BY-NC-ND license.
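Since the summary notes that SCIKIT-LEARN implements all of these learners, a brief sketch of how they might be compared on a toy bag-of-words problem is given below. The class and parameter names reflect the scikit-learn API as I understand it, and the documents and labels are invented; on such a tiny dataset the comparison is only illustrative, and in practice the classifiers should be compared on a heldout development set.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

docs = ["that movie was great", "a great waste of time",
        "I loved the acting", "the plot was a waste"]
labels = [1, 0, 1, 0]
X = CountVectorizer().fit_transform(docs)        # bag-of-words counts

classifiers = {
    "naive bayes": MultinomialNB(alpha=1.0),      # alpha is the smoothing parameter
    "perceptron": Perceptron(),
    "support vector machine": LinearSVC(C=1.0),   # C trades slack against the weight norm
    "logistic regression": LogisticRegression(C=1.0),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)
    print(name, clf.predict(X))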
44 CHAPTER 2. LINEAR TEXT CLASSIFICATION Exercises There will be exercises at the end of each chapter. In this chapter, the exercises are mostly mathematical, matching the subject material. In other chapters, the exercises will empha- size linguistics or programming. 1. Let x be a bag-of-words vector such that PV j=1 xj = 1. Verify that the multinomial probability pmult(x; φ), as defined in Equation 2.12, is identical to the probability of the same document under a categorical distribution, pcat(w; φ). 2. Suppose you have a single feature x, with the following conditional distribution: p(x | y) = α, X = 0, Y = 0 1 −α, X = 1, Y = 0 1 −β, X = 0, Y = 1 β, X = 1, Y = 1. [2.80] Further suppose that the prior is uniform, Pr(Y = 0) = Pr(Y = 1) = 1 2, and that both α > 1 2 and β > 1 2. Given a Na¨ıve Bayes classifier with accurate parameters, what is the probability of making an error? 3. Derive the maximum-likelihood estimate for the parameter µ in Na¨ıve Bayes. 4. The classification models in the text have a vector of weights for each possible label. While this is notationally convenient, it is overdetermined: for any linear classifier that can be obtained with K ×V weights, an equivalent classifier can be constructed using (K −1) × V weights. a) Describe how to construct this classifier. Specifically, if given a set of weights θ and a feature function f(x, y), explain how to construct alternative weights and feature function θ′ and f ′(x, y), such that, ∀y, y′ ∈Y, θ · f(x, y) −θ · f(x, y′) = θ′ · f ′(x, y) −θ′ · f ′(x, y′). [2.81] b) Explain how your construction justifies the well-known alternative form for binary logistic regression, Pr(Y = 1 | x; θ) = 1 1+exp(−θ′·x) = σ(θ′ · x), where σ is the sigmoid function. 5. Suppose you have two labeled datasets D1 and D2, with the same features and la- bels. • Let θ(1) be the unregularized logistic regression (LR) coefficients from training on dataset D1. Jacob Eisenstein. Draft of November 13, 2018.
2.8. SUMMARY OF LEARNING ALGORITHMS 45 • Let θ(2) be the unregularized LR coefficients (same model) from training on dataset D2. • Let θ∗be the unregularized LR coefficients from training on the combined dataset D1 ∪D2. Under these conditions, prove that for any feature j, θ∗ j ≥min(θ(1) j , θ(2) j ) θ∗ j ≤max(θ(1) j , θ(2) j ). 6. Let ˆθ be the solution to an unregularized logistic regression problem, and let θ∗be the solution to the same problem, with L2 regularization. Prove that ||θ∗||2 2 ≤||ˆθ||2 2. 7. As noted in the discussion of averaged perceptron in § 2.3.2, the computation of the running sum m ←m + θ is unnecessarily expensive, requiring K × V operations. Give an alternative way to compute the averaged weights θ, with complexity that is independent of V and linear in the sum of feature sizes PN i=1 |f(x(i), y(i))|. 8. Consider a dataset that is comprised of two identical instances x(1) = x(2) with distinct labels y(1) ̸= y(2). Assume all features are binary, xj ∈{0, 1} for all j. Now suppose that the averaged perceptron always trains on the instance (xi(t), yi(t)), where i(t) = 2 −(t mod 2), which is 1 when the training iteration t is odd, and 2 when t is even. Further suppose that learning terminates under the following con- dition: ϵ ≥max j 1 t X t θ(t) j − 1 t −1 X t θ(t−1) j . [2.82] In words, the algorithm stops when the largest change in the averaged weights is less than or equal to ϵ. Compute the number of iterations before the averaged per- ceptron terminates. 9. Prove that the margin loss is convex in θ. Use this definition of the margin loss: L(θ) = −θ · f(x, y∗) + max y θ · f(x, y) + c(y∗, y), [2.83] where y∗is the gold label. As a reminder, a function f is convex iff, f(αx1 + (1 −α)x2) ≤αf(x1) + (1 −α)f(x2), [2.84] for any x1, x2 and α ∈[0, 1]. Under contract with MIT Press, shared under CC-BY-NC-ND license.
46 CHAPTER 2. LINEAR TEXT CLASSIFICATION 10. If a function f is m-strongly convex, then for some m > 0, the following inequality holds for all x and x′ on the domain of the function: f(x′) ≤f(x) + (∇xf) · (x′ −x) + m 2 ||x′ −x||2 2. [2.85] Let f(x) = L(θ(t)), representing the loss of the classifier at iteration t of gradient descent; let f(x′) = L(θ(t+1)). Assuming the loss function is m-convex, prove that L(θ(t+1)) ≤L(θ(t)) for an appropriate constant learning rate η, which will depend on m. Explain why this implies that gradient descent converges when applied to an m-strongly convex loss function with a unique minimum. Jacob Eisenstein. Draft of November 13, 2018.
Chapter 3 Nonlinear classification Linear classification may seem like all we need for natural language processing. The bag- of-words representation is inherently high dimensional, and the number of features is often larger than the number of labeled training instances. This means that it is usually possible to find a linear classifier that perfectly fits the training data, or even to fit any ar- bitrary labeling of the training instances! Moving to nonlinear classification may therefore only increase the risk of overfitting. Furthermore, for many tasks, lexical features (words) are meaningful in isolation, and can offer independent evidence about the instance label — unlike computer vision, where individual pixels are rarely informative, and must be evaluated holistically to make sense of an image. For these reasons, natural language processing has historically focused on linear classification. But in recent years, nonlinear classifiers have swept through natural language pro- cessing, and are now the default approach for many tasks (Manning, 2015). There are at least three reasons for this change. • There have been rapid advances in deep learning, a family of nonlinear meth- ods that learn complex functions of the input through multiple layers of compu- tation (Goodfellow et al., 2016). • Deep learning facilitates the incorporation of word embeddings, which are dense vector representations of words. Word embeddings can be learned from large amounts of unlabeled data, and enable generalization to words that do not appear in the an- notated training data (word embeddings are discussed in detail in chapter 14). • While CPU speeds have plateaued, there have been rapid advances in specialized hardware called graphics processing units (GPUs), which have become faster, cheaper, and easier to program. Many deep learning models can be implemented efficiently on GPUs, offering substantial performance improvements over CPU-based comput- ing. 47
48 CHAPTER 3. NONLINEAR CLASSIFICATION This chapter focuses on neural networks, which are the dominant approach for non- linear classification in natural language processing today.1 Historically, a few other non- linear learning methods have been applied to language data. • Kernel methods are generalizations of the nearest-neighbor classification rule, which classifies each instance by the label of the most similar example in the training set. The application of the kernel support vector machine to information extraction is described in chapter 17. • Decision trees classify instances by checking a set of conditions. Scaling decision trees to bag-of-words inputs is difficult, but decision trees have been successful in problems such as coreference resolution (chapter 15), where more compact feature sets can be constructed (Soon et al., 2001). • Boosting and related ensemble methods work by combining the predictions of sev- eral “weak” classifiers, each of which may consider only a small subset of features. Boosting has been successfully applied to text classification (Schapire and Singer, 2000) and syntactic analysis (Abney et al., 1999), and remains one of the most suc- cessful methods on machine learning competition sites such as Kaggle (Chen and Guestrin, 2016). Hastie et al. (2009) provide an excellent overview of these techniques. 3.1 Feedforward neural networks Consider the problem of building a classifier for movie reviews. The goal is to predict a label y ∈{GOOD, BAD, OKAY} from a representation of the text of each document, x. But what makes a good movie? The story, acting, cinematography, editing, soundtrack, and so on. Now suppose the training set contains labels for each of these additional features, z = [z1, z2, . . . , zKz]⊤. With a training set of such information, we could build a two-step classifier: 1. Use the text x to predict the features z. Specifically, train a logistic regression clas- sifier to compute p(zk | x), for each k ∈{1, 2, . . . , Kz}. 2. Use the features z to predict the label y. Again, train a logistic regression classifier to compute p(y | z). On test data, z is unknown, so we will use the probabilities p(z | x) from the first layer as the features. This setup is shown in Figure 3.1, which describes the proposed classifier in a computa- tion graph: the text features x are connected to the middle layer z, which is connected to the label y. 1I will use “deep learning” and “neural networks” interchangeably. Jacob Eisenstein. Draft of November 13, 2018.
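The two-step classifier just described can be sketched in a few lines of NumPy; the dimensions, random initialization, and variable names below are illustrative, and the equations that this code anticipates (3.1-3.10) are given below.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def forward(x, W_xz, W_zy, b):
    z = sigmoid(W_xz @ x)            # step 1: predicted probabilities of the hidden features
    return softmax(W_zy @ z + b)     # step 2: distribution over the labels

# toy sizes: five text features, three hidden features, three labels (GOOD, BAD, OKAY)
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 2.0, 0.0, 1.0])
W_xz = rng.normal(scale=0.1, size=(3, 5))
W_zy = rng.normal(scale=0.1, size=(3, 3))
b = np.zeros(3)
print(forward(x, W_xz, W_zy, b))     # a probability distribution over the three labels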
3.1. FEEDFORWARD NEURAL NETWORKS 49 . . . . . . x z y Figure 3.1: A feedforward neural network. Shaded circles indicate observed features, usually words; squares indicate nodes in the computation graph, which are computed from the information carried over the incoming arrows. If we assume that each zk is binary, zk ∈{0, 1}, then the probability p(zk | x) can be modeled using binary logistic regression: Pr(zk = 1 | x; Θ(x→z)) = σ(θ(x→z) k · x) = (1 + exp(−θ(x→z) k · x))−1, [3.1] where σ is the sigmoid function (shown in Figure 3.2), and the matrix Θ(x→z) ∈RKz×V is constructed by stacking the weight vectors for each zk, Θ(x→z) = [θ(x→z) 1 , θ(x→z) 2 , . . . , θ(x→z) Kz ]⊤. [3.2] We will assume that x contains a term with a constant value of 1, so that a corresponding offset parameter is included in each θ(x→z) k . The output layer is computed by the multi-class logistic regression probability, Pr(y = j | z; Θ(z→y), b) = exp(θ(z→y) j · z + bj) P j′∈Y exp(θ(z→y) j′ · z + bj′) , [3.3] where bj is an offset for label j, and the output weight matrix Θ(z→y) ∈RKy×Kz is again constructed by concatenation, Θ(z→y) = [θ(z→y) 1 , θ(z→y) 2 , . . . , θ(z→y) Ky ]⊤. [3.4] The vector of probabilities over each possible value of y is denoted, p(y | z; Θ(z→y), b) = SoftMax(Θ(z→y)z + b), [3.5] where element j in the output of the SoftMax function is computed as in Equation 3.3. This set of equations defines a multilayer classifier, which can be summarized as, p(z | x; Θ(x→z)) =σ(Θ(x→z)x) [3.6] p(y | z; Θ(z→y), b) = SoftMax(Θ(z→y)z + b), [3.7] Under contract with MIT Press, shared under CC-BY-NC-ND license.
50 CHAPTER 3. NONLINEAR CLASSIFICATION Figure 3.2: The sigmoid, tanh, and ReLU activation functions (values and derivatives) where the function σ is now applied elementwise to the vector of inner products, σ(Θ(x→z)x) = [σ(θ(x→z) 1 · x), σ(θ(x→z) 2 · x), . . . , σ(θ(x→z) Kz · x)]⊤. [3.8] Now suppose that the hidden features z are never observed, even in the training data. We can still construct the architecture in Figure 3.1. Instead of predicting y from a discrete vector of predicted values z, we use the probabilities σ(θk · x). The resulting classifier is barely changed: z = σ(Θ(x→z)x) [3.9] p(y | x; Θ(z→y), b) = SoftMax(Θ(z→y)z + b). [3.10] This defines a classification model that predicts the label y ∈ Y from the base features x, through a “hidden layer” z. This is a feedforward neural network.2 3.2 Designing neural networks There are several ways to generalize the feedforward neural network. 3.2.1 Activation functions If the hidden layer is viewed as a set of latent features, then the sigmoid function in Equation 3.9 represents the extent to which each of these features is “activated” by a given input. However, the hidden layer can be regarded more generally as a nonlinear transformation of the input. This opens the door to many other activation functions, some of which are shown in Figure 3.2. At the moment, the choice of activation functions is more art than science, but a few points can be made about the most popular varieties: 2The architecture is sometimes called a multilayer perceptron, but this is misleading, because each layer is not a perceptron as defined in the previous chapter. Jacob Eisenstein. Draft of November 13, 2018.
3.2. DESIGNING NEURAL NETWORKS 51 • The range of the sigmoid function is (0, 1). The bounded range ensures that a cas- cade of sigmoid functions will not “blow up” to a huge output, and this is impor- tant for deep networks with several hidden layers. The derivative of the sigmoid is ∂ ∂aσ(a) = σ(a)(1 −σ(a)). This derivative becomes small at the extremes, which can make learning slow; this is called the vanishing gradient problem. • The range of the tanh activation function is (−1, 1): like the sigmoid, the range is bounded, but unlike the sigmoid, it includes negative values. The derivative is ∂ ∂a tanh(a) = 1 −tanh(a)2, which is steeper than the logistic function near the ori- gin (LeCun et al., 1998). The tanh function can also suffer from vanishing gradients at extreme values. • The rectified linear unit (ReLU) is zero for negative inputs, and linear for positive inputs (Glorot et al., 2011), ReLU(a) = ( a, a ≥0 0, otherwise. [3.11] The derivative is a step function, which is 1 if the input is positive, and zero other- wise. Once the activation is zero, the gradient is also zero. This can lead to the prob- lem of “dead neurons”, where some ReLU nodes are zero for all inputs, throughout learning. A solution is the leaky ReLU, which has a small positive slope for negative inputs (Maas et al., 2013), Leaky-ReLU(a) = ( a, a ≥0 .0001a, otherwise. [3.12] Sigmoid and tanh are sometimes described as squashing functions, because they squash an unbounded input into a bounded range. Glorot and Bengio (2010) recommend against the use of the sigmoid activation in deep networks, because its mean value of 1 2 can cause the next layer of the network to be saturated, leading to small gradients on its own pa- rameters. Several other activation functions are reviewed in the textbook by Goodfellow et al. (2016), who recommend ReLU as the “default option.” 3.2.2 Network structure Deep networks stack up several hidden layers, with each z(d) acting as the input to the next layer, z(d+1). As the total number of nodes in the network increases, so does its capacity to learn complex functions of the input. Given a fixed number of nodes, one must decide whether to emphasize width (large Kz at each layer) or depth (many layers). At present, this tradeoff is not well understood.3 3With even a single hidden layer, a neural network can approximate any continuous function on a closed and bounded subset of RN to an arbitrarily small non-zero error; see section 6.4.1 of Goodfellow et al. (2016) Under contract with MIT Press, shared under CC-BY-NC-ND license.
52 CHAPTER 3. NONLINEAR CLASSIFICATION It is also possible to “short circuit” a hidden layer, by propagating information directly from the input to the next higher level of the network. This is the idea behind residual net- works, which propagate information directly from the input to the subsequent layer (He et al., 2016), z = f(Θ(x→z)x) + x, [3.13] where f is any nonlinearity, such as sigmoid or ReLU. A more complex architecture is the highway network (Srivastava et al., 2015; Kim et al., 2016), in which an addition gate controls an interpolation between f(Θ(x→z)x) and x, t =σ(Θ(t)x + b(t)) [3.14] z =t ⊙f(Θ(x→z)x) + (1 −t) ⊙x, [3.15] where ⊙refers to an elementwise vector product, and 1 is a column vector of ones. As before, the sigmoid function is applied elementwise to its input; recall that the output of this function is restricted to the range (0, 1). Gating is also used in the long short-term memory (LSTM), which is discussed in chapter 6. Residual and highway connections address a problem with deep architectures: repeated application of a nonlinear activation function can make it difficult to learn the parameters of the lower levels of the network, which are too distant from the supervision signal. 3.2.3 Outputs and loss functions In the multi-class classification example, a softmax output produces probabilities over each possible label. This aligns with a negative conditional log-likelihood, −L = − N X i=1 log p(y(i) | x(i); Θ). [3.16] where Θ = {Θ(x→z), Θ(z→y), b} is the entire set of parameters. This loss can be written alternatively as follows: ˜yj ≜Pr(y = j | x(i); Θ) [3.17] −L = − N X i=1 ey(i) · log ˜y [3.18] for a survey of these theoretical results. However, depending on the function to be approximated, the width of the hidden layer may need to be arbitrarily large. Furthermore, the fact that a network has the capacity to approximate any given function does not imply that it is possible to learn the function using gradient-based optimization. Jacob Eisenstein. Draft of November 13, 2018.
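The residual and highway connections of Equations 3.13-3.15 can be sketched as follows, using ReLU for the nonlinearity f; both require the transformed representation to have the same dimension as the input, and the random weights and sizes are purely illustrative.

import numpy as np

def relu(a):
    return np.maximum(0.0, a)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def residual_layer(x, W):
    # Equation 3.13: add the untouched input back to the transformed representation
    return relu(W @ x) + x

def highway_layer(x, W, W_t, b_t):
    # Equations 3.14-3.15: a sigmoid gate t interpolates elementwise between
    # the nonlinear transformation and the untouched input
    t = sigmoid(W_t @ x + b_t)
    return t * relu(W @ x) + (1.0 - t) * x

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W = rng.normal(size=(4, 4))
W_t = rng.normal(size=(4, 4))
print(residual_layer(x, W))
print(highway_layer(x, W, W_t, b_t=np.zeros(4)))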
3.3. LEARNING NEURAL NETWORKS 53 where ey(i) is a one-hot vector of zeros with a value of 1 at position y(i). The inner product between ey(i) and log ˜y is also called the multinomial cross-entropy, and this terminology is preferred in many neural networks papers and software packages. It is also possible to train neural networks from other objectives, such as a margin loss. In this case, it is not necessary to use softmax at the output layer: an affine transformation of the hidden layer is enough: Ψ(y; x(i), Θ) =θ(z→y) y · z + by [3.19] ℓMARGIN(Θ; x(i), y(i)) = max y̸=y(i) 1 + Ψ(y; x(i), Θ) −Ψ(y(i); x(i), Θ) + . [3.20] In regression problems, the output is a scalar or vector (see § 4.1.2). For these problems, a typical loss function is the squared error (y −ˆy)2 or squared norm ||y −ˆy||2 2. 3.2.4 Inputs and lookup layers In text classification, the input layer x can refer to a bag-of-words vector, where xj is the count of word j. The input to the hidden unit zk is then PV j=1 θ(x→z) j,k xj, and word j is represented by the vector θ(x→z) j . This vector is sometimes described as the embedding of word j, and can be learned from unlabeled data, using techniques discussed in chapter 14. The columns of Θ(x→z) are each Kz-dimensional word embeddings. Chapter 2 presented an alternative view of text documents, as a sequence of word tokens, w1, w2, . . . , wM. In a neural network, each word token wm is represented with a one-hot vector, ewm, with dimension V . The matrix-vector product Θ(x→z)ewm returns the embedding of word wm. The complete document can represented by horizontally concatenating these one-hot vectors, W = [ew1, ew2, . . . , ewM ], and the bag-of-words rep- resentation can be recovered from the matrix-vector product W[1, 1, . . . , 1]⊤, which sums each row over the tokens m = {1, 2, . . . , M}. The matrix product Θ(x→z)W contains the horizontally concatenated embeddings of each word in the document, which will be use- ful as the starting point for convolutional neural networks (see § 3.4). This is sometimes called a lookup layer, because the first step is to lookup the embeddings for each word in the input text. 3.3 Learning neural networks The feedforward network in Figure 3.1 can now be written as, z ←f(Θ(x→z)x(i)) [3.21] ˜y ←SoftMax Θ(z→y)z + b [3.22] ℓ(i) ←−ey(i) · log ˜y, [3.23] Under contract with MIT Press, shared under CC-BY-NC-ND license.
54 CHAPTER 3. NONLINEAR CLASSIFICATION where f is an elementwise activation function, such as σ or ReLU, and ℓ(i) is the loss at instance i. The parameters Θ(x→z), Θ(z→y), and b can be estimated using online gradient- based optimization. The simplest such algorithm is stochastic gradient descent, which was discussed in § 2.6. Each parameter is updated by the gradient of the loss, b ←b −η(t)∇bℓ(i) [3.24] θ(z→y) k ←θ(z→y) k −η(t)∇θ(z→y) k ℓ(i) [3.25] θ(x→z) n ←θ(x→z) n −η(t)∇θ(x→z) n ℓ(i), [3.26] where η(t) is the learning rate on iteration t, ℓ(i) is the loss on instance (or minibatch) i, and θ(x→z) n is column n of the matrix Θ(x→z), and θ(z→y) k is column k of Θ(z→y). The gradients of the negative log-likelihood on b and θ(z→y) k are similar to the gradi- ents in logistic regression. For θ(z→y), the gradient is, ∇θ(z→y) k ℓ(i) = ∂ℓ(i) ∂θ(z→y) k,1 , ∂ℓ(i) ∂θ(z→y) k,2 , . . . , ∂ℓ(i) ∂θ(z→y) k,Ky ⊤ [3.27] ∂ℓ(i) ∂θ(z→y) k,j = − ∂ ∂θ(z→y) k,j θ(z→y) y(i) · z −log X y∈Y exp θ(z→y) y · z [3.28] = Pr(y = j | z; Θ(z→y), b) −δ j = y(i) zk, [3.29] where δ
j = y(i) is an indicator function, equal to 1 when j is the true label y(i) and 0 otherwise. The gradients with respect to the input-layer weights Θ(x→z) are obtained by applying the chain rule, multiplying ∂ℓ(i)/∂zk by the derivative of the activation function at the inner product
3.3. LEARNING NEURAL NETWORKS 55 θ(x→z) k · x. For example, if f is the sigmoid function, then the derivative is, ∂ℓ(i) ∂θ(x→z) n,k =∂ℓ(i) ∂zk × σ(θ(x→z) k · x) × (1 −σ(θ(x→z) k · x)) × xn [3.33] =∂ℓ(i) ∂zk × zk × (1 −zk) × xn. [3.34] For intuition, consider each of the terms in the product. • If the negative log-likelihood ℓ(i) does not depend much on zk, then ∂ℓ(i) ∂zk ≈0. In this case it doesn’t matter how zk is computed, and so ∂ℓ(i) ∂θ(x→z) n,k ≈0. • If zk is near 1 or 0, then the curve of the sigmoid function is nearly flat (Figure 3.2), and changing the inputs will make little local difference. The term zk × (1 −zk) is maximized at zk = 1 2, where the slope of the sigmoid function is steepest. • If xn = 0, then it does not matter how we set the weights θ(x→z) n,k , so ∂ℓ(i) ∂θ(x→z) n,k = 0. 3.3.1 Backpropagation The equations above rely on the chain rule to compute derivatives of the loss with respect to each parameter of the model. Furthermore, local derivatives are frequently reused: for example, ∂ℓ(i) ∂zk is reused in computing the derivatives with respect to each θ(x→z) n,k . These terms should therefore be computed once, and then cached. Furthermore, we should only compute any derivative once we have already computed all of the necessary “inputs” demanded by the chain rule of differentiation. This combination of sequencing, caching, and differentiation is known as backpropagation. It can be generalized to any directed acyclic computation graph. A computation graph is a declarative representation of a computational process. At each node t, compute a value vt by applying a function ft to a (possibly empty) list of parent nodes, πt. Figure 3.3 shows the computation graph for a feedforward network with one hidden layer. There are nodes for the input x(i), the hidden layer z, the predicted output ˆy, and the parameters Θ. During training, there is also a node for the ground truth label y(i) and the loss ℓ(i). The predicted output ˆy is one of the parents of the loss (the other is the label y(i)); its parents include Θ and z, and so on. Computation graphs include three types of nodes: Variables. In the feedforward network of Figure 3.3, the variables include the inputs x, the hidden nodes z, the outputs y, and the loss function. Inputs are variables that do not have parents. Backpropagation computes the gradients with respect to all Under contract with MIT Press, shared under CC-BY-NC-ND license.
56 CHAPTER 3. NONLINEAR CLASSIFICATION Algorithm 6 General backpropagation algorithm. In the computation graph G, every node contains a function ft and a set of parent nodes πt; the inputs to the graph are x(i). 1: procedure BACKPROP(G = {ft, πt}T t=1}, x(i)) 2: vt(n) ←x(i) n for all n and associated computation nodes t(n). 3: for t ∈TOPOLOGICALSORT(G) do ▷Forward pass: compute value at each node 4: if |πt| > 0 then 5: vt ←ft(vπt,1, vπt,2, . . . , vπt,Nt) 6: gobjective = 1 ▷Backward pass: compute gradients at each node 7: for t ∈REVERSE(TOPOLOGICALSORT(G)) do 8: gt ←P t′:t∈πt′ gt′ × ∇vtvt′ ▷Sum over all t′ that are children of t, propagating the gradient gt′, scaled by the local gradient ∇vtvt′ 9: return {g1, g2, . . . , gT } variables except the inputs, and propagates these gradients backwards to the pa- rameters. Parameters. In a feedforward network, the parameters include the weights and offsets. In Figure 3.3, the parameters are summarized in the node Θ, but we could have separate nodes for Θ(x→z), Θ(z→y), and any offset parameters. Parameter nodes do not have parents; they are not computed from other nodes, but rather, are learned by gradient descent. Loss. The loss ℓ(i) is the quantity that is to be minimized during training. The node rep- resenting the loss in the computation graph is not the parent of any other node; its parents are typically the predicted label ˆy and the true label y(i). Backpropagation begins by computing the gradient of the loss, and then propagating this gradient backwards to its immediate parents. If the computation graph is a directed acyclic graph, then it is possible to order the nodes with a topological sort, so that if node t is a parent of node t′, then t < t′. This means that the values {vt}T t=1 can be computed in a single forward pass. The topolog- ical sort is reversed when computing gradients: each gradient gt is computed from the gradients of the children of t, implementing the chain rule of differentiation. The general backpropagation algorithm for computation graphs is shown in Algorithm 6. While the gradients with respect to each parameter may be complex, they are com- posed of products of simple parts. For many networks, all gradients can be computed through automatic differentiation. This means that you need only specify the feedfor- ward computation, and the gradients necessary for learning can be obtained automati- cally. There are many software libraries that perform automatic differentiation on compu- Jacob Eisenstein. Draft of November 13, 2018.
3.3. LEARNING NEURAL NETWORKS 57 x(i) z ˆy ℓ(i) y(i) Θ vx vz vˆy vΘ gˆy gℓ gz gz vy vΘ gℓ gˆy Figure 3.3: A computation graph for the feedforward neural network shown in Figure 3.1. tation graphs, such as TORCH (Collobert et al., 2011), TENSORFLOW (Abadi et al., 2016), and DYNET (Neubig et al., 2017). One important distinction between these libraries is whether they support dynamic computation graphs, in which the structure of the compu- tation graph varies across instances. Static computation graphs are compiled in advance, and can be applied to fixed-dimensional data, such as bag-of-words vectors. In many nat- ural language processing problems, each input has a distinct structure, requiring a unique computation graph. A simple case occurs in recurrent neural network language models (see chapter 6), in which there is one node for each word in a sentence. More complex cases include recursive neural networks (see chapter 14), in which the network is a tree structure matching the syntactic organization of the input. 3.3.2 Regularization and dropout In linear classification, overfitting was addressed by augmenting the objective with a reg- ularization term, λ||θ||2 2. This same approach can be applied to feedforward neural net- works, penalizing each matrix of weights: L = N X i=1 ℓ(i) + λz→y||Θ(z→y)||2 F + λx→z||Θ(x→z)||2 F , [3.35] where ||Θ||2 F = P i,j θ2 i,j is the squared Frobenius norm, which generalizes the L2 norm to matrices. The bias parameters b are not regularized, as they do not contribute to the sensitivity of the classifier to the inputs. In gradient-based optimization, the practical effect of Frobenius norm regularization is that the weights “decay” towards zero at each update, motivating the alternative name weight decay. Another approach to controlling model complexity is dropout, which involves ran- domly setting some computation nodes to zero during training (Srivastava et al., 2014). For example, in the feedforward network, on each training instance, with probability ρ we Under contract with MIT Press, shared under CC-BY-NC-ND license.
58 CHAPTER 3. NONLINEAR CLASSIFICATION set each input xn and each hidden layer node zk to zero. Srivastava et al. (2014) recom- mend ρ = 0.5 for hidden units, and ρ = 0.2 for input units. Dropout is also incorporated in the gradient computation, so if node zk is dropped, then none of the weights θ(x→z) k will be updated for this instance. Dropout prevents the network from learning to depend too much on any one feature or hidden node, and prevents feature co-adaptation, in which a hidden unit is only useful in combination with one or more other hidden units. Dropout is a special case of feature noising, which can also involve adding Gaussian noise to inputs or hidden units (Holmstrom and Koistinen, 1992). Wager et al. (2013) show that dropout is approximately equivalent to “adaptive” L2 regularization, with a separate regularization penalty for each feature. 3.3.3 *Learning theory Chapter 2 emphasized the importance of convexity for learning: for convex objectives, the global optimum can be found efficiently. The negative log-likelihood and hinge loss are convex functions of the parameters of the output layer. However, the output of a feed- forward network is generally not a convex function of the parameters of the input layer, Θ(x→z). Feedforward networks can be viewed as function composition, where each layer is a function that is applied to the output of the previous layer. Convexity is generally not preserved in the composition of two convex functions — and furthermore, “squashing” activation functions like tanh and sigmoid are not convex. The non-convexity of hidden layer neural networks can also be seen by permuting the elements of the hidden layer, from z = [z1, z2, . . . , zKz] to ˜z = [zπ(1), zπ(2), . . . , zπ(Kz)]. This corresponds to applying π to the rows of Θ(x→z) and the columns of Θ(z→y), resulting in permuted parameter matrices Θ(x→z) π and Θ(z→y) π . As long as this permutation is applied consistently, the loss will be identical, L(Θ) = L(Θπ): it is invariant to this permutation. However, the loss of the linear combination L(αΘ + (1 −α)Θπ) will generally not be identical to the loss under Θ or its permutations. If L(Θ) is better than the loss at any points in the immediate vicinity, and if L(Θ) = L(Θπ), then the loss function does not satisfy the definition of convexity (see § 2.4). One of the exercises asks you to prove this more rigorously. In practice, the existence of multiple optima is not necessary problematic, if all such optima are permutations of the sort described in the previous paragraph. In contrast, “bad” local optima are better than their neighbors, but much worse than the global op- timum. Fortunately, in large feedforward neural networks, most local optima are nearly as good as the global optimum (Choromanska et al., 2015). More generally, a critical point is one at which the gradient is zero. Critical points may be local optima, but they may also be saddle points, which are local minima in some directions, but local maxima in other directions. For example, the equation x2 1 −x2 2 has a saddle point at x = (0, 0). In large networks, the overwhelming majority of critical points are saddle points, rather Jacob Eisenstein. Draft of November 13, 2018.
3.3. LEARNING NEURAL NETWORKS 59 than local minima or maxima (Dauphin et al., 2014). Saddle points can pose problems for gradient-based optimization, since learning will slow to a crawl as the gradient goes to zero. However, the noise introduced by stochastic gradient descent, and by feature noising techniques such as dropout, can help online optimization to escape saddle points and find high-quality optima (Ge et al., 2015). Other techniques address saddle points directly, using local reconstructions of the Hessian matrix (Dauphin et al., 2014) or higher- order derivatives (Anandkumar and Ge, 2016). Another theoretical puzzle about neural networks is how they are able to generalize to unseen data. Given enough parameters, a two-layer feedforward network can “mem- orize” its training data, attaining perfect accuracy on any training set. A particularly salient demonstration was provided by Zhang et al. (2017), who showed that neural net- works can learn to perfectly classify a training set of images, even when the labels are replaced with random values! Of course, this network attains only chance accuracy when applied to heldout data. The concern is that when such a powerful learner is applied to real training data, it may learn a pathological classification function, which exploits irrel- evant details of the training data and fails to generalize. Yet this extreme overfitting is rarely encountered in practice, and can usually be prevented by regularization, dropout, and early stopping (see § 3.3.4). Recent papers have derived generalization guarantees for specific classes of neural networks (e.g., Kawaguchi et al., 2017; Brutzkus et al., 2018), but theoretical work in this area is ongoing. 3.3.4 Tricks Getting neural networks to work sometimes requires heuristic “tricks” (Bottou, 2012; Goodfellow et al., 2016; Goldberg, 2017b). This section presents some tricks that are espe- cially important. Initialization Initialization is not especially important for linear classifiers, since con- vexity ensures that the global optimum can usually be found quickly. But for multilayer neural networks, it is helpful to have a good starting point. One reason is that if the mag- nitude of the initial weights is too large, a sigmoid or tanh nonlinearity will be saturated, leading to a small gradient, and slow learning. Large gradients can cause training to di- verge, with the parameters taking increasingly extreme values until reaching the limits of the floating point representation. Initialization can help avoid these problems by ensuring that the variance over the initial gradients is constant and bounded throughout the network. For networks with tanh activation functions, this can be achieved by sampling the initial weights from the Under contract with MIT Press, shared under CC-BY-NC-ND license.
60 CHAPTER 3. NONLINEAR CLASSIFICATION following uniform distribution (Glorot and Bengio, 2010), θi,j ∼U " − √ 6 p din(n) + dout(n) , √ 6 p din(n) + dout(n) # , [3.36] [3.37] For the weights leading to a ReLU activation function, He et al. (2015) use similar argu- mentation to justify sampling from a zero-mean Gaussian distribution, θi,j ∼N(0, p 2/din(n)) [3.38] Rather than initializing the weights independently, it can be beneficial to initialize each layer jointly as an orthonormal matrix, ensuring that Θ⊤Θ = I (Saxe et al., 2014). Or- thonormal matrices preserve the norm of the input, so that ||Θx|| = ||x||, which prevents the gradients from exploding or vanishing. Orthogonality ensures that the hidden units are uncorrelated, so that they correspond to different features of the input. Orthonormal initialization can be performed by applying singular value decomposition to a matrix of values sampled from a standard normal distribution: ai,j ∼N(0, 1) [3.39] A ={ai,j}din(j),dout(j) i=1,j=1 [3.40] U, S, V⊤=SVD(A) [3.41] Θ(j) ←U. [3.42] The matrix U contains the singular vectors of A, and is guaranteed to be orthonormal. For more on singular value decomposition, see chapter 14. Even with careful initialization, there can still be significant variance in the final re- sults. It can be useful to make multiple training runs, and select the one with the best performance on a heldout development set. Clipping and normalization Learning can be sensitive to the magnitude of the gradient: too large, and learning can diverge, with successive updates thrashing between increas- ingly extreme values; too small, and learning can grind to a halt. Several heuristics have been proposed to address this issue. • In gradient clipping (Pascanu et al., 2013), an upper limit is placed on the norm of the gradient, and the gradient is rescaled when this limit is exceeded, CLIP(˜g) = ( g ||ˆg|| < τ τ ||g||g otherwise. [3.43] Jacob Eisenstein. Draft of November 13, 2018.
3.3. LEARNING NEURAL NETWORKS 61 • In batch normalization (Ioffe and Szegedy, 2015), the inputs to each computation node are recentered by their mean and variance across all of the instances in the minibatch B (see § 2.6.2). For example, in a feedforward network with one hidden layer, batch normalization would tranform the inputs to the hidden layer as follows: µ(B) = 1 |B| X i∈B x(i) [3.44] s(B) = 1 |B| X i∈B (x(i) −µ(B))2 [3.45] x(i) =(x(i) −µ(B))/ p s(B). [3.46] Empirically, this speeds convergence of deep architectures. One explanation is that it helps to correct for changes in the distribution of activations during training. • In layer normalization (Ba et al., 2016), the inputs to each nonlinear activation func- tion are recentered across the layer: a =Θ(x→z)x [3.47] µ = 1 Kz Kz X k=1 ak [3.48] s = 1 Kz Kz X k=1 (ak −µ)2 [3.49] z =(a −µ)/√s. [3.50] Layer normalization has similar motivations to batch normalization, but it can be applied across a wider range of architectures and training conditions. Online optimization There is a cottage industry of online optimization algorithms that attempt to improve on stochastic gradient descent. AdaGrad was reviewed in § 2.6.2; its main innovation is to set adaptive learning rates for each parameter by storing the sum of squared gradients. Rather than using the sum over the entire training history, we can keep a running estimate, v(t) j =βv(t−1) j + (1 −β)g2 t,j, [3.51] where gt,j is the gradient with respect to parameter j at time t, and β ∈[0, 1]. This term places more emphasis on recent gradients, and is employed in the AdaDelta (Zeiler, 2012) and Adam (Kingma and Ba, 2014) optimizers. Online optimization and its theoretical background are reviewed by Bottou et al. (2016). Early stopping, mentioned in § 2.3.2, can help to avoid overfitting by terminating training after reaching a plateau in the per- formance on a heldout validation set. Under contract with MIT Press, shared under CC-BY-NC-ND license.
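A compact sketch of layer normalization (Equations 3.47-3.50) and of the running average of squared gradients (Equation 3.51) is given below. The epsilon term guards against zero variance and is an added safeguard rather than part of the equations; the value of beta is likewise just an example.

import numpy as np

def layer_norm(a, eps=1e-6):
    # Equations 3.47-3.50: recenter and rescale the pre-activations of a layer
    mu = a.mean()
    s = ((a - mu) ** 2).mean()
    return (a - mu) / np.sqrt(s + eps)

def running_avg_sq_grad(v, g, beta=0.9):
    # Equation 3.51: exponentially weighted average of squared gradients,
    # as used by AdaDelta- and Adam-style optimizers
    return beta * v + (1.0 - beta) * g ** 2

a = np.array([1.0, -2.0, 0.5, 3.0])      # pre-activations, i.e. Theta x
print(np.tanh(layer_norm(a)))            # normalized inputs to the nonlinearity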
|
nlp_Page_79_Chunk84
|
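A minimal numpy sketch of the two normalization schemes and of the running squared-gradient estimate above. The small epsilon added inside the square roots is a standard numerical safeguard that is not part of equations 3.44–3.51, and the learned scale-and-shift parameters used in practical implementations are omitted.

```python
import numpy as np

def batch_norm(X, eps=1e-5):
    """Eqs. 3.44-3.46: recenter and rescale each input dimension over a minibatch.
    X has shape (batch_size, num_features)."""
    mu = X.mean(axis=0)
    s = X.var(axis=0)
    return (X - mu) / np.sqrt(s + eps)

def layer_norm(a, eps=1e-5):
    """Eqs. 3.47-3.50: recenter and rescale the K_z pre-activations of one instance."""
    mu = a.mean()
    s = a.var()
    return (a - mu) / np.sqrt(s + eps)

def update_squared_gradient(v_prev, g, beta=0.9):
    """Eq. 3.51: running estimate of squared gradients, as used in AdaDelta and Adam."""
    return beta * v_prev + (1.0 - beta) * g ** 2
```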
Practical advice
The bag of tricks for training neural networks continues to grow, and it is likely that there will be several new ones by the time you read this. Today, it is standard practice to use gradient clipping, early stopping, and a sensible initialization of parameters to small random values. More bells and whistles can be added as solutions to specific problems — for example, if it is difficult to find a good learning rate for stochastic gradient descent, then it may help to try a fancier optimizer with an adaptive learning rate. Alternatively, if a method such as layer normalization is used by related models in the research literature, you should probably consider it, especially if you are having trouble matching published results. As with linear classifiers, it is important to evaluate these decisions on a held-out development set, and not on the test set that will be used to provide the final measure of the model’s performance (see § 2.2.5).

3.4 Convolutional neural networks

A basic weakness of the bag-of-words model is its inability to account for the ways in which words combine to create meaning, including even simple reversals such as not pleasant, hardly a generous offer, and I wouldn’t mind missing the flight. Computer vision faces the related challenge of identifying the semantics of images from pixel features that are uninformative in isolation. An earlier generation of computer vision research focused on designing filters to aggregate local pixel-level features into more meaningful representations, such as edges and corners (e.g., Canny, 1987). Similarly, earlier NLP research attempted to capture multiword linguistic phenomena by hand-designed lexical patterns (Hobbs et al., 1997). In both cases, the output of the filters and patterns could then act as base features in a linear classifier. But rather than designing these feature extractors by hand, a better approach is to learn them, using the magic of backpropagation. This is the idea behind convolutional neural networks.

Following § 3.2.4, define the base layer of a neural network as,

X^(0) = Θ^(x→z) [e_{w_1}, e_{w_2}, …, e_{w_M}],   [3.52]

where e_{w_m} is a column vector of zeros, with a 1 at position w_m. The base layer has dimension X^(0) ∈ R^{K_e × M}, where K_e is the size of the word embeddings. To merge information across adjacent words, we convolve X^(0) with a set of filter matrices C^(k) ∈ R^{K_e × h}. Convolution is indicated by the symbol ∗, and is defined,

X^(1) = f(b + C ∗ X^(0))   ⟹   x^(1)_{k,m} = f( b_k + Σ_{k′=1}^{K_e} Σ_{n=1}^{h} c^(k)_{k′,n} × x^(0)_{k′, m+n−1} ),   [3.53]

where f is an activation function such as tanh or ReLU, and b is a vector of offsets. The convolution operation slides the matrix C^(k) across the columns of X^(0). At each position m, we compute the elementwise product C^(k) ⊙ X^(0)_{m:m+h−1}, and take the sum.
|
nlp_Page_80_Chunk85
|
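To make equations 3.52–3.53 concrete, here is a sketch of a narrow convolution layer over a matrix of word embeddings. It is a naive loop implementation written for clarity rather than speed (real toolkits use optimized convolution routines), and the function name and argument layout are assumptions of this sketch rather than the book's notation.

```python
import numpy as np

def conv_layer(X, C, b, f=np.tanh):
    """Narrow convolution of eq. 3.53.
    X: (K_e, M) matrix of word embeddings (one column per token).
    C: (K_f, K_e, h) array holding K_f filter matrices of width h.
    b: (K_f,) vector of offsets.
    Returns a (K_f, M - h + 1) matrix of filter responses."""
    K_f, K_e, h = C.shape
    M = X.shape[1]
    out = np.zeros((K_f, M - h + 1))
    for k in range(K_f):
        for m in range(M - h + 1):
            # elementwise product of the filter with the window X[:, m:m+h], then sum
            out[k, m] = np.sum(C[k] * X[:, m:m + h])
    return f(out + b[:, None])
```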
[Figure 3.4: A convolutional neural network for text classification]

A simple filter might compute a weighted average over nearby words,

C^(k) = [ 0.5  1  0.5
          0.5  1  0.5
           ⋮   ⋮   ⋮
          0.5  1  0.5 ],   [3.54]

thereby representing trigram units like not so unpleasant. In one-dimensional convolution, each filter matrix C^(k) is constrained to have non-zero values only at row k (Kalchbrenner et al., 2014). This means that each dimension of the word embedding is processed by a separate filter, and it implies that K_f = K_e.

To deal with the beginning and end of the input, the base matrix X^(0) may be padded with h column vectors of zeros at the beginning and end; this is known as wide convolution. If padding is not applied, then the output from each layer will be h − 1 units smaller than the input; this is known as narrow convolution. The filter matrices need not have identical filter widths, so more generally we could write h_k to indicate the width of filter C^(k). As suggested by the notation X^(0), multiple layers of convolution may be applied, so that X^(d) is the input to X^(d+1). After D convolutional layers, we obtain a matrix representation of the document X^(D) ∈ R^{K_z × M}.

If the instances have variable lengths, it is necessary to aggregate over all M word positions to obtain a fixed-length representation. This can be done by a pooling operation,
|
nlp_Page_81_Chunk86
|
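The following sketch applies the weighted-average trigram filter of equation 3.54 to a random stand-in for the embedded input, and contrasts narrow convolution with the wide variant obtained by zero-padding as described above. The concrete sizes are arbitrary illustrations, not values from the book.

```python
import numpy as np

K_e, M, h = 4, 6, 3                      # embedding size, sentence length, filter width
rng = np.random.default_rng(0)

# eq. 3.54: every row of the filter is [0.5, 1, 0.5]
C_avg = np.tile(np.array([0.5, 1.0, 0.5]), (K_e, 1))   # shape (K_e, h)

X0 = rng.standard_normal((K_e, M))        # stand-in for the embedded input X^(0)

# narrow convolution: no padding, so there are only M - h + 1 valid positions
narrow = np.array([np.sum(C_avg * X0[:, m:m + h]) for m in range(M - h + 1)])

# wide convolution: pad h zero columns at the beginning and end, as in the text
pad = np.zeros((K_e, h))
Xp = np.concatenate([pad, X0, pad], axis=1)
wide = np.array([np.sum(C_avg * Xp[:, m:m + h]) for m in range(Xp.shape[1] - h + 1)])

print(narrow.shape)   # (4,)  = M - h + 1
print(wide.shape)     # (10,) — padding recovers the positions lost at the edges
```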
[Figure 3.5: A dilated convolutional neural network captures progressively larger context through recursive application of the convolutional operator]

such as max-pooling (Collobert et al., 2011) or average-pooling,

z = MaxPool(X^(D))   ⟹   z_k = max{ x^(D)_{k,1}, x^(D)_{k,2}, …, x^(D)_{k,M} }   [3.55]
z = AvgPool(X^(D))   ⟹   z_k = (1 / M) Σ_{m=1}^{M} x^(D)_{k,m}.   [3.56]

The vector z can now act as a layer in a feedforward network, culminating in a prediction ŷ and a loss ℓ^(i). The setup is shown in Figure 3.4.

Just as in feedforward networks, the parameters (C^(k), b, Θ) can be learned by backpropagating from the classification loss. This requires backpropagating through the max-pooling operation, which is a discontinuous function of the input. But because we need only a local gradient, backpropagation flows only through the argmax m:

∂z_k / ∂x^(D)_{k,m} = 1 if x^(D)_{k,m} = max{ x^(D)_{k,1}, x^(D)_{k,2}, …, x^(D)_{k,M} }, and 0 otherwise.   [3.57]

The computer vision literature has produced a huge variety of convolutional architectures, and many of these innovations can be applied to text data. One avenue for improvement is more complex pooling operations, such as k-max pooling (Kalchbrenner et al., 2014), which returns a matrix of the k largest values for each filter. Another innovation is the use of dilated convolution to build multiscale representations (Yu and Koltun, 2016). At each layer, the convolutional operator is applied in strides, skipping ahead by s steps after each feature. As we move up the hierarchy, each layer is s times smaller than the layer below it, effectively summarizing the input (Kalchbrenner et al., 2016; Strubell et al., 2017). This idea is shown in Figure 3.5. Multi-layer convolutional networks can also be augmented with “shortcut” connections, as in the residual network from § 3.2.2 (Johnson and Zhang, 2017).
|
nlp_Page_82_Chunk87
|
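Here is a sketch of the two pooling operations and of the subgradient of max-pooling from equations 3.55–3.57. The function names are illustrative; note that when a row contains tied maxima, this mask passes gradient to every tied position, whereas an implementation that tracks a single argmax would pick just one.

```python
import numpy as np

def max_pool(X):
    """Eq. 3.55: X has shape (K_z, M); return the per-row maximum, shape (K_z,)."""
    return X.max(axis=1)

def avg_pool(X):
    """Eq. 3.56: return the per-row mean over the M token positions."""
    return X.mean(axis=1)

def max_pool_backward(X, grad_z):
    """Eq. 3.57: route the gradient grad_z (shape (K_z,)) only to the maximal
    position in each row of X; all other positions receive zero gradient."""
    mask = (X == X.max(axis=1, keepdims=True)).astype(X.dtype)
    return mask * grad_z[:, None]
```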
Additional resources

The deep learning textbook by Goodfellow et al. (2016) covers many of the topics in this chapter in more detail. For a comprehensive review of neural networks in natural language processing, see Goldberg (2017b). A seminal work on deep learning in natural language processing is the aggressively titled “Natural Language Processing (Almost) from Scratch”, which uses convolutional neural networks to perform a range of language processing tasks (Collobert et al., 2011), although there is earlier work (e.g., Henderson, 2004). This chapter focuses on feedforward and convolutional neural networks, but recurrent neural networks are one of the most important deep learning architectures for natural language processing. They are covered extensively in chapters 6 and 7.

The role of deep learning in natural language processing research has caused angst in some parts of the natural language processing research community (e.g., Goldberg, 2017a), especially as some of the more zealous deep learning advocates have argued that end-to-end learning from “raw” text can eliminate the need for linguistic constructs such as sentences, phrases, and even words (Zhang et al., 2015, originally titled “Text understanding from scratch”). These developments were surveyed by Manning (2015). While reports of the demise of linguistics in natural language processing remain controversial at best, deep learning and backpropagation have become ubiquitous in both research and applications.

Exercises

1. Figure 3.3 shows the computation graph for a feedforward neural network with one layer.
   a) Update the computation graph to include a residual connection between x and z.
   b) Update the computation graph to include a highway connection between x and z.

2. Prove that the softmax and sigmoid functions are equivalent when the number of possible labels is two. Specifically, for any Θ^(z→y) (omitting the offset b for simplicity), show how to construct a vector of weights θ such that,

   SoftMax(Θ^(z→y) z)[0] = σ(θ · z).   [3.58]

3. Convolutional neural networks often aggregate across words by using max-pooling (Equation 3.55 in § 3.4). A potential concern is that there is zero gradient with respect to the parts of the input that are not included in the maximum. The following
|
nlp_Page_83_Chunk88
|
questions consider the gradient with respect to an element of the input, x^(0)_{m,k}, and they assume that all parameters are independently distributed.

   a) First consider a minimal network, with z = MaxPool(X^(0)). What is the probability that the gradient ∂ℓ/∂x^(0)_{m,k} is non-zero?
   b) Now consider a two-level network, with X^(1) = f(b + C ∗ X^(0)). Express the probability that the gradient ∂ℓ/∂x^(0)_{m,k} is non-zero, in terms of the input length M, the filter size n, and the number of filters K_f.
   c) Using a calculator, work out the probability for the case M = 128, n = 4, K_f = 32.
   d) Now consider a three-level network, X^(2) = f(b + C ∗ X^(1)). Give the general equation for the probability that ∂ℓ/∂x^(0)_{m,k} is non-zero, and compute the numerical probability for the scenario in the previous part, assuming K_f = 32 and n = 4 at both levels.

4. Design a feedforward network to compute the XOR function:

   f(x_1, x_2) = −1 if x_1 = 1, x_2 = 1;  1 if x_1 = 1, x_2 = 0;  1 if x_1 = 0, x_2 = 1;  −1 if x_1 = 0, x_2 = 0.   [3.59]

   Your network should have a single output node which uses the Sign activation function, f(x) = 1 if x > 0, and −1 if x ≤ 0. Use a single hidden layer, with ReLU activation functions. Describe all weights and offsets.

5. Consider the same network as above (with ReLU activations for the hidden layer), with an arbitrary differentiable loss function ℓ(y^(i), ỹ), where ỹ is the activation of the output node. Suppose all weights and offsets are initialized to zero. Show that gradient descent will not learn the desired function from this initialization.

6. The simplest solution to the previous problem relies on the use of the ReLU activation function at the hidden layer. Now consider a network with arbitrary activations on the hidden layer. Show that if the initial weights are any uniform constant, then gradient descent will not learn the desired function from this initialization.

7. Consider a network in which: the base features are all binary, x ∈ {0, 1}^M; the hidden layer activation function is sigmoid, z_k = σ(θ_k · x); and the initial weights are sampled independently from a standard normal distribution, θ_{j,k} ∼ N(0, 1).
|
nlp_Page_84_Chunk89
|
   • Show how the probability of a small initial gradient on any weight, ∂z_k/∂θ_{j,k} < α, depends on the size of the input M. Hint: use the lower bound,

     Pr( σ(θ_k · x) × (1 − σ(θ_k · x)) < α ) ≥ 2 Pr( σ(θ_k · x) < α ),   [3.60]

     and relate this probability to the variance V[θ_k · x].
   • Design an alternative initialization that removes this dependence.

8. The ReLU activation function can lead to “dead neurons”, which can never be activated on any input. Consider the following two-layer feedforward network with a scalar output y:

   z_i = ReLU(θ^(x→z)_i · x + b_i)   [3.61]
   y = θ^(z→y) · z.   [3.62]

   Suppose that the input is a binary vector of observations, x ∈ {0, 1}^D.

   a) Under what condition is node z_i “dead”? Your answer should be expressed in terms of the parameters θ^(x→z)_i and b_i.
   b) Suppose that the gradient of the loss on a given instance is ∂ℓ/∂y = 1. Derive the gradients ∂ℓ/∂b_i and ∂ℓ/∂θ^(x→z)_{j,i} for such an instance.
   c) Using your answers to the previous two parts, explain why a dead neuron can never be brought back to life during gradient-based learning.

9. Suppose that the parameters Θ = {Θ^(x→z), Θ^(z→y), b} are a local optimum of a feedforward network in the following sense: there exists some ϵ > 0 such that,

   ||Θ̃^(x→z) − Θ^(x→z)||²_F + ||Θ̃^(z→y) − Θ^(z→y)||²_F + ||b̃ − b||²_2 < ϵ   ⇒   L(Θ̃) > L(Θ).   [3.63]

   Define the function π as a permutation on the hidden units, as described in § 3.3.3, so that for any Θ, L(Θ) = L(Θ_π). Prove that if a feedforward network has a local optimum in the sense of Equation 3.63, then its loss is not a convex function of the parameters Θ, using the definition of convexity from § 2.4.

10. Consider a network with a single hidden layer, and a single output,

    y = θ^(z→y) · g(Θ^(x→z) x).   [3.64]

    Assume that g is the ReLU function. Show that for any matrix of weights Θ^(x→z), it is permissible to rescale each row to have a norm of one, because an identical output can be obtained by finding a corresponding rescaling of θ^(z→y).
|
nlp_Page_85_Chunk90
|
Chapter 4

Linguistic applications of classification

Having covered several techniques for classification, this chapter shifts the focus from mathematics to linguistic applications. Later in the chapter, we will consider the design decisions involved in text classification, as well as best practices for evaluation.

4.1 Sentiment and opinion analysis

A popular application of text classification is to automatically determine the sentiment or opinion polarity of documents such as product reviews and social media posts. For example, marketers are interested to know how people respond to advertisements, services, and products (Hu and Liu, 2004); social scientists are interested in how emotions are affected by phenomena such as the weather (Hannak et al., 2012), and how both opinions and emotions spread over social networks (Coviello et al., 2014; Miller et al., 2011). In the field of digital humanities, literary scholars track plot structures through the flow of sentiment across a novel (Jockers, 2015).¹

Sentiment analysis can be framed as a direct application of document classification, assuming reliable labels can be obtained. In the simplest case, sentiment analysis is a two- or three-class problem, with sentiments of POSITIVE, NEGATIVE, and possibly NEUTRAL. Such labels could be annotated by hand, or obtained automatically through a variety of means:

• Tweets containing happy emoticons can be marked as positive, sad emoticons as negative (Read, 2005; Pak and Paroubek, 2010).

¹ Comprehensive surveys on sentiment analysis and related problems are offered by Pang and Lee (2008) and Liu (2015).
|
nlp_Page_87_Chunk91
|