
Charles Sutton and Andrew McCallum (2012), "An Introduction to Conditional Random Fields", Foundations and Trends® in Machine Learning: Vol. 4: No. 4, pp 267-373. http://dx.doi.org/10.1561/2200000013

© 2012 C. Sutton and A. McCallum

**In this article:**

1 Introduction

2 Modeling

3 Overview of Algorithms

4 Inference

5 Parameter Estimation

6 Related Work and Future Directions

Acknowledgments

References

Many tasks involve predicting a large number of variables that depend on each other as well as on other observed variables. Structured prediction methods are essentially a combination of classification and graphical modeling. They combine the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features. This survey describes *conditional random fields* (CRFs), a popular probabilistic method for structured prediction. CRFs have seen wide application in many areas, including natural language processing, computer vision, and bioinformatics. We describe methods for inference and parameter estimation for CRFs, including practical issues for implementing large-scale CRFs. We do not assume previous knowledge of graphical modeling, so this survey is intended to be useful to practitioners in a wide variety of fields.
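To make the conditional modeling described above concrete, the sketch below (an illustration, not an excerpt from the survey) computes p(y | x) for a toy linear-chain CRF with two labels: a label sequence is scored by summing per-position and transition scores, and normalized by a partition function Z(x) computed with the forward algorithm. All numeric values are invented for illustration.

```python
import numpy as np

# Toy linear-chain CRF over 3 positions and 2 labels.
# emit[t, y]   : score of label y at position t (would come from input features)
# trans[y, y2] : score of label y followed by label y2
# All numbers are illustrative, not taken from the survey.
emit = np.array([[1.0, 0.2],
                 [0.3, 1.5],
                 [1.1, 0.4]])
trans = np.array([[0.8, 0.1],
                  [0.2, 0.9]])

def log_score(y):
    """Unnormalized log-score of one label sequence y."""
    s = emit[0, y[0]]
    for t in range(1, len(y)):
        s += trans[y[t - 1], y[t]] + emit[t, y[t]]
    return s

def log_partition():
    """log Z(x): log-sum-exp over all label sequences, via the forward algorithm."""
    alpha = emit[0].copy()
    for t in range(1, len(emit)):
        # alpha[y2] = emit[t, y2] + log sum_y exp(alpha[y] + trans[y, y2])
        alpha = emit[t] + np.logaddexp.reduce(alpha[:, None] + trans, axis=0)
    return np.logaddexp.reduce(alpha)

def prob(y):
    """Conditional probability p(y | x) = exp(score(y) - log Z(x))."""
    return float(np.exp(log_score(y) - log_partition()))

# The 2**3 = 8 sequence probabilities sum to 1, confirming proper normalization.
total = sum(prob((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 6))  # 1.0
```

Because Z(x) sums over exponentially many sequences, the forward recursion is what keeps inference tractable for chain-structured models.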


In modern applications of machine learning, predicting a single class label is often not enough. Instead we want to predict a large number of variables that depend on each other, such as a class label for every word in a document or for every region in an image. This structured prediction problem is significantly harder than simple classification because we must also learn how the different labels depend on each other. Conditional random fields provide a powerful solution to this problem: they combine the advantages of classification and graphical modeling, joining the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features. In the past ten years, there has been an explosion of interest in CRFs, with applications as diverse as natural language processing, computer vision, and bioinformatics.

*An Introduction to Conditional Random Fields* provides a comprehensive tutorial aimed at application-oriented practitioners seeking to apply CRFs. The survey does not assume previous knowledge of graphical modeling, and so is intended to be useful to practitioners in a wide variety of fields. It covers feature construction, inference, and parameter estimation in CRFs, along with practical "tips of the trade" that are difficult to find in the published literature.
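The gap between structured prediction and independent per-position classification can be shown with a small decoding sketch (toy values, not from the monograph): when same-label transition scores are strong, the jointly best label sequence found by Viterbi decoding can disagree with the per-position best labels.

```python
import numpy as np

# Toy scores: position 1's local score favors label 1, but strong
# same-label transition scores favor keeping one label throughout.
# All numbers are illustrative, not taken from the monograph.
emit = np.array([[2.0, 0.0],   # emit[t, y]: score of label y at position t
                 [0.0, 1.0],
                 [2.0, 0.0]])
trans = np.array([[2.0, 0.0],  # trans[y, y2]: score of label y followed by y2
                  [0.0, 2.0]])

def viterbi(emit, trans):
    """Return the highest-scoring label sequence for a linear-chain model."""
    T, K = emit.shape
    delta = emit[0].copy()               # best score of any path ending in each label
    back = np.zeros((T, K), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        cand = delta[:, None] + trans    # cand[y, y2]: best path to y, then y -> y2
        back[t] = cand.argmax(axis=0)
        delta = emit[t] + cand.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

independent = [int(scores.argmax()) for scores in emit]  # classify each position alone
print(independent)            # [0, 1, 0]
print(viterbi(emit, trans))   # [0, 0, 0]: transitions override the weak local score
```

The disagreement at position 1 is exactly the effect the description above refers to: the model has learned that labels depend on their neighbors, so decoding must consider whole sequences rather than one position at a time.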