
Lab member thesis topics

Please add a short description of your thesis (MA or PhD) here, with a date indicating when the description was last added/modified.

Master's students

Utilizing Multilingual Resources for Automatic Lexical Acquisition (MA)

Michael Wayne Goodman

(This is very much a rough draft; expect further revisions later this term.) I will explore methods for using resources such as the Turing Center's Transgraph to automatically map words to lexical types, as well as methods to evaluate the performance of such a system. Some of the expected problems include: how to extract, derive, or assume syntactic constraints for words when faced with minimal resources; how to deal with source words that don't map to a single word in the target language (e.g., English "hurt" vs. Italian "make harm"); how to deal with small and incomplete grammars (e.g., not all possible lexical types are represented); and when/how to solicit information from the user/linguist. (updated 2008.04.02)
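
As a rough, hedged illustration of the kind of inference involved (the toy data, the lexical type label, and the voting scheme below are assumptions of mine, not the actual Transgraph interface), one might follow translation links from an unknown word into a language whose grammar already assigns lexical types, then vote over the linked words' types:

    from collections import Counter

    # Toy translation graph: word -> set of translation-equivalent words.
    TRANSLATIONS = {
        "ferire": {"hurt", "wound", "injure"},   # Italian -> English links
    }

    # Toy lexicon from an existing grammar: word -> known lexical type.
    # The type name is hypothetical, in the style of HPSG lexical types.
    LEXICAL_TYPES = {
        "hurt": "v_np_trans_le",
        "wound": "v_np_trans_le",
        "injure": "v_np_trans_le",
    }

    def guess_lexical_type(word):
        """Return the most frequent lexical type among a word's translation
        links, or None if no linked word has a known type."""
        votes = Counter(
            LEXICAL_TYPES[t]
            for t in TRANSLATIONS.get(word, ())
            if t in LEXICAL_TYPES
        )
        if not votes:
            return None  # e.g., an incomplete grammar types none of the links
        return votes.most_common(1)[0][0]

    print(guess_lexical_type("ferire"))  # -> v_np_trans_le

A real system would of course need the syntactic constraints and incomplete-grammar handling the abstract lists; this sketch only shows the basic link-and-vote step.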

Generating Referring Expressions (MA)

Margaret Ann Mitchell

I'm exploring the problem of how to refer to entities naturally. This is a sub-task within natural language generation, mapping nonlinguistic data to a linguistic output. My focus is primarily on creating distinguishing descriptions, i.e., given a set of objects from which one object is selected, what noun phrase will be used to refer to it? Current approaches are based on Dale & Reiter's Incremental Algorithm, which uses the Gricean Maxims as a guide to naturalness, but these maxims are prescriptive, not descriptive, and fail to capture what humans actually do. I intend to do some data mining to create a basic preference ordering for adjectives, and to use this to propose a new algorithm that better captures human referring expression generation. I also want to touch on which types of referring expressions are used in which contexts, in an attempt to help natural language generators decide how to refer to entities in a stream of text. The domain will probably be limited to the Wall Street Journal. (updated 2008.04.02)
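
Dale & Reiter's Incremental Algorithm, the baseline mentioned above, is easy to sketch. A minimal version follows (the attribute names, toy scene, and preference order are illustrative assumptions; the published algorithm is also slightly richer, e.g. it always includes the head noun's type even when it rules nothing out):

    def incremental_algorithm(target, distractors, preference_order):
        """Outline of Dale & Reiter's (1995) Incremental Algorithm.

        target: dict of attribute -> value for the object to describe.
        distractors: list of attribute dicts for the other objects.
        preference_order: attributes to try, most preferred first.
        Returns a list of (attribute, value) pairs forming a
        distinguishing description, or None if none exists.
        """
        description = []
        remaining = list(distractors)
        for attr in preference_order:
            value = target.get(attr)
            ruled_out = [d for d in remaining if d.get(attr) != value]
            if value is not None and ruled_out:
                # Keep this attribute: it eliminates some distractors.
                description.append((attr, value))
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                return description
        return None  # no distinguishing description with these attributes

    # Toy scene: refer to a small black dog among two other objects.
    target = {"type": "dog", "size": "small", "colour": "black"}
    distractors = [
        {"type": "dog", "size": "large", "colour": "black"},
        {"type": "cat", "size": "small", "colour": "black"},
    ]
    print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
    # -> [('type', 'dog'), ('size', 'small')], i.e. "the small dog"

The fixed preference_order argument is exactly the point of attack: a corpus-derived adjective ordering would replace the hand-stipulated one.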

-- Main.itallow - 02 Apr 2008

Ph.D. students

Dealing with imperfection in using statistical syntax for machine translation

Jeremy G. Kahn

(Tentative title and abstract.) I am exploring various ways to use statistical syntax (e.g., the Charniak parser) for (statistical) machine translation (SMT). My research includes using syntax for word alignment, MT evaluation, and tuning upstream systems (such as ASR). Current SMT systems do not incorporate syntax, and use "phrases" that are quite explicitly non-syntactic, which raises challenges for the inclusion of syntax in translation modeling. I am particularly interested in:
  • using dependency extraction as a measure of syntactic/semantic similarity (see the sketch after this list)
  • how to cope with (or better, make use of) the uncertainty of a statistical parser in these contexts: how can that uncertainty be made useful?
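
A minimal sketch of the first point, under simplifying assumptions of my own (each parse reduced to a bag of (head, relation, dependent) triples, scored with plain F1; this is not the actual evaluation metric):

    def dependency_f1(candidate, reference):
        """F1 over (head, relation, dependent) triples from two parses.

        Treating each parse as a set of dependency triples gives a crude
        syntactic/semantic similarity score between a system translation
        and a reference translation.
        """
        cand, ref = set(candidate), set(reference)
        if not cand or not ref:
            return 0.0
        overlap = len(cand & ref)
        precision = overlap / len(cand)
        recall = overlap / len(ref)
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Toy dependency parses of a candidate and a reference translation.
    candidate = {("saw", "nsubj", "John"), ("saw", "dobj", "dog"),
                 ("dog", "det", "the")}
    reference = {("saw", "nsubj", "John"), ("saw", "dobj", "dog"),
                 ("dog", "det", "a")}
    print(round(dependency_f1(candidate, reference), 3))  # -> 0.667

The second bullet would enter here by scoring over the parser's n-best dependency sets, weighted by parse probability, rather than a single 1-best parse.
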
-- Main.jgk - 02 Apr 2008

Mass Text Annotation With Mechanical Turk

Bill McNeill

I am using Amazon's Mechanical Turk service to do multi-user annotation of linguistic phenomena in Wikipedia text. I'm trying to see whether I can get good inter-annotator agreement for different kinds of noun phrase annotation. The hope is that this could be a cheaper alternative way of producing annotated corpora. Along the way I am developing reusable Ruby libraries to efficiently parse web text, extract constituents matching certain criteria, and automatically generate Mechanical Turk questions. -- April 2008
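
The abstract doesn't name an agreement measure, so the sketch below shows Cohen's kappa for two annotators as one common choice (the labels and data are toy assumptions, not the project's actual annotation scheme):

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two annotators labeling the same items:
        observed agreement corrected for chance agreement."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        categories = set(freq_a) | set(freq_b)
        expected = sum(freq_a[c] / n * freq_b[c] / n for c in categories)
        if expected == 1.0:
            return 1.0  # both annotators used a single category exclusively
        return (observed - expected) / (1 - expected)

    # Toy example: two Turkers deciding whether each phrase is an NP.
    a = ["NP", "NP", "other", "NP", "other", "NP"]
    b = ["NP", "other", "other", "NP", "other", "NP"]
    print(round(cohens_kappa(a, b), 3))  # -> 0.667

With more than two Turkers per item, a multi-rater measure such as Fleiss' kappa would be the natural extension.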