---+ Lab member thesis topics

Please add a short description of your thesis (MA or !PhD) here, with a date indicating when the description was last added/modified.

---++ Master's students

---+++ Utilizing Multilingual Resources for Automatic Lexical Acquisition (MA)

---++++ Michael Wayne Goodman

I'm investigating how we can leverage the knowledge built into the lexicons of large, mature grammars to help bootstrap the lexicons of much smaller grammars. For my test, I am using the Jacy Japanese grammar as the source and the Ita Italian MMT grammar as the target. I am using the [[http://www.cs.washington.edu/research/panlingual/][Turing Center's Transgraph]] project to provide word translations, and some hand-built type mappings from one grammar to the other to determine the types a word can have. Because of the nature of the project, many spurious items are produced, so I need to apply some filtering to the data to remove them. Another aspect of the project is to try to automatically learn transfer rules between the grammars involved. This becomes difficult when source words do not translate to a single target word, when they change argument structure, etc. *(updated 2008.06.27)*

---++++ Glenn Slayden

I am developing a new grammar engineering environment for DELPH-IN style TDL grammars. Building upon a considerable repertoire of established techniques in unification parsing, the work explores ideas which have become more relevant in today's computing ecosystem, such as low-lock concurrent chart parsing and cache-friendly TFS representation. For details on the project, please see [[http://wiki.delph-in.net/moin/TgcsTop]].

-- Main.gslayden - 2010-11-19

---+++ Generating Referring Expressions (MA)

---++++ Margaret Ann Mitchell

I'm exploring the problem of how to refer to entities naturally. This is a sub-task within natural language generation, mapping nonlinguistic data to a linguistic output.
My focus is primarily on creating distinguishing descriptions, i.e., given a set of objects from which one object is selected, what noun phrase will be used to refer to it? Current approaches are based on Dale & Reiter's [[http://citeseer.ist.psu.edu/dale94computational.html][Incremental Algorithm]], which uses [[http://www.sfu.ca/~jeffpell/Cogs300/GriceLogicConvers75.pdf][the Gricean Maxims]] as a guide to naturalness, but these maxims are prescriptive, not descriptive, and fail to capture what humans actually do. I intend to do some data mining to create a basic preference ordering for adjectives, and use this to propose a new algorithm that better captures human referring expression generation. I also want to touch on which types of referring expressions are used in which contexts, in an attempt to help natural language generators decide how to refer to entities in a stream of text. The domain will probably be limited to the Wall Street Journal. *(updated 2008.04.02)*

-- Main.itallow - 02 Apr 2008

---++ Ph.D. students

---+++ Dealing with imperfection in using statistical syntax for machine translation

---++++ Jeremy G. Kahn

(tentative title, abstract) I am exploring the various ways to use statistical syntax (e.g. the Charniak parser) for (statistical) machine translation (SMT). My research includes using syntax for word alignment, MT evaluation, and tuning upstream systems (such as ASR). Current SMT systems do not incorporate syntax, and use "phrases" that are quite explicitly _non_-syntactic, which raises challenges for the inclusion of syntax in translation modeling. I am particularly interested in:

   * using dependency extraction as a measure of syntactic/semantic similarity
   * how to cope with (or better, make use of) the uncertainty of a statistical parser in these contexts: how can that uncertainty be made useful?
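The first of these points can be illustrated with a toy sketch (not the actual evaluation code of this project): represent each parse as a set of labeled head-dependent arcs, and score the similarity of two parses as the F1 overlap of their arc sets.

```python
def dep_f1(gold, hypothesis):
    """F1 overlap between two sets of (head, relation, dependent) arcs."""
    gold, hypothesis = set(gold), set(hypothesis)
    if not gold or not hypothesis:
        return 0.0
    overlap = len(gold & hypothesis)
    precision = overlap / len(hypothesis)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two parses of "the dog barked", as (head, relation, dependent) triples.
gold = {("barked", "nsubj", "dog"), ("dog", "det", "the")}
hyp = {("barked", "nsubj", "dog"), ("barked", "det", "the")}  # wrong attachment
print(dep_f1(gold, hyp))  # 0.5
```

Because the score is computed over individual arcs rather than whole trees, a parse that gets most attachments right still receives partial credit, which is one way a similarity measure of this kind stays informative under parser uncertainty.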
-- Main.jgk - 02 Apr 2008

---+++ Mass Text Annotation With Mechanical Turk

---++++ Bill McNeill

I am using Amazon's Mechanical Turk service to do multi-user annotation of linguistic phenomena in Wikipedia text. I'm trying to see if I can get good inter-annotator agreement for different kinds of noun phrase annotation. The hope is that this could be a cheaper alternative way of producing annotated corpora. Along the way I am developing reusable Ruby libraries to efficiently parse web text, extract constituents matching certain criteria, and automatically generate Mechanical Turk questions.

-- April 2008
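The question-generation step might look something like the following minimal sketch (in Python rather than Ruby, with an invented record layout rather than a real Mechanical Turk schema, and not the project's actual libraries): given candidate noun-phrase spans extracted from a sentence, emit one yes/no annotation question per candidate.

```python
def make_turk_questions(sentence, np_spans):
    """Build one yes/no annotation question per candidate noun phrase.

    np_spans is a list of (start, end) token indices into the sentence.
    The dict layout is illustrative only, not a real Mechanical Turk schema.
    """
    tokens = sentence.split()
    questions = []
    for start, end in np_spans:
        phrase = " ".join(tokens[start:end])
        questions.append({
            "sentence": sentence,
            "candidate": phrase,
            "question": f'Is "{phrase}" a noun phrase in the sentence above?',
            "answers": ["yes", "no"],
        })
    return questions

qs = make_turk_questions("the quick fox jumped over the lazy dog",
                         [(0, 3), (5, 8)])
for q in qs:
    print(q["question"])
```

Records like these would then be serialized into whatever question format the Mechanical Turk upload path expects, and the yes/no answers aggregated across workers to measure inter-annotator agreement.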
Topic revision: r8 - 2010-11-19 - 07:24:03 - gslayden
Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.