Natural languages are characterized by rich relational structures and tight integration with world knowledge. In recent years, there has been increasing interest in applying joint inference to leverage such relations and prior knowledge in both the machine learning and NLP communities. Recent work in statistical relational learning and structured prediction has shown that joint inference can not only substantially improve prediction accuracy, but also enable effective learning with little or no labeled information. Markov logic is a unifying framework for joint inference and has enabled a series of successful NLP applications, ranging from information extraction to unsupervised semantic parsing. In this talk, I will review recent work in Markov logic and its NLP applications at the University of Washington, and outline exciting directions for future work.
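To make the framework concrete: a Markov logic network attaches a weight to each first-order formula, and the probability of a possible world is proportional to the exponentiated sum of weight times number-of-true-groundings over all formulas. The sketch below illustrates this scoring on a hypothetical toy example (the classic smokes-implies-cancer formula over a two-person domain); the domain, formula, and weight are illustrative assumptions, not from the talk, and real systems such as Alchemy use far more scalable inference than this brute-force normalization.

```python
import math
from itertools import product

# Hypothetical toy Markov logic network over a two-person domain.
PEOPLE = ["anna", "bob"]

def n_smokes_implies_cancer(world):
    # Count true groundings of the formula Smokes(x) => Cancer(x).
    return sum(1 for p in PEOPLE
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

# Each entry is (weight, grounding-count function); weight 1.5 is illustrative.
FORMULAS = [(1.5, n_smokes_implies_cancer)]

def score(world):
    """Unnormalized probability: exp(sum_i w_i * n_i(world))."""
    return math.exp(sum(w * n(world) for w, n in FORMULAS))

def probability(query_world):
    """Normalize by summing over all worlds (feasible only for tiny domains)."""
    atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in PEOPLE]
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    z = sum(score(w) for w in worlds)
    return score(query_world) / z

# A world satisfying the formula everywhere outscores one that violates it.
consistent = {("Smokes", "anna"): True, ("Cancer", "anna"): True,
              ("Smokes", "bob"): False, ("Cancer", "bob"): False}
violating = {("Smokes", "anna"): True, ("Cancer", "anna"): False,
             ("Smokes", "bob"): False, ("Cancer", "bob"): False}
```

Because violated formulas lower a world's score rather than ruling it out, Markov logic softens hard first-order constraints, which is what lets one network jointly reason over noisy extractions and prior knowledge.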