What's in a thesis statement?
Jacob O. Wobbrock, Ph.D.
Updated: July 29, 2024
A thesis statement communicates the core claims of a doctoral dissertation, a large culminating piece
of original research and writing that advances knowledge in an academic field. These claims
are what the author purports to demonstrate convincingly through solid evidence and sound argument.
For a successful dissertation, each claim in a thesis statement must be clearly demonstrated.
Outside the formal sciences, such as mathematics and logic, thesis statements are not proved; rather, they
are demonstrated in light of evidence. A good thesis statement is specific
in its claims but does not have to operationalize its terms; that task is left for the dissertation.
Good claims should be novel, non-obvious, operationalizable, demonstrable, and important to some
identifiable stakeholder, group, or population.
My own thesis statement was:
Jacob O. Wobbrock (2006): EdgeWrite: A Versatile Design for Text Entry and Control
A versatile design for text entry and control called "EdgeWrite," which
uses physical edges, goal crossing, and a minimized need for sensing, is
effective on handhelds and desktops for people with motor and
situational impairments.
For this and other thesis statements, we can (and should) ask:
- What is the full list of things the author must demonstrate?
• The invention of a working technology called "EdgeWrite";
• that is somehow "versatile";
• that uses physical edges, goal crossing, and minimal sensing; and
• that is somehow "effective" on both handheld devices and desktop computers ...
• ... for people with motor impairments or in motor-impairing situations.
- What terms in the statement must be operationalized?
• What does it mean for EdgeWrite to be "versatile"?
• What does it mean for EdgeWrite to be "effective"?
• What constitutes a "minimized need" for sensing?
- Are the claims non-obvious, warranting original research?
Yes. EdgeWrite is a new invention; therefore, it is not clear whether or how it achieves these goals until it is created, demonstrated, and evaluated.
- What, if anything, is already known about the truth of the claims?
Nothing prior to the design, construction, and evaluation of EdgeWrite.
- Do the claims have any chance of turning out false?
Yes. Depending on how "versatile" is operationalized, reviewers could disagree that EdgeWrite is sufficiently versatile.
Similarly, depending on how "effective" is operationalized, reviewers could find that EdgeWrite is not actually
effective on handhelds or desktop computers, or for people with motor impairments or in impairing situations. Reviewers could also disagree
that EdgeWrite requires minimal sensing. Any of these claims could turn out to be false, or could force "versatile" or "effective"
to adopt unconvincing operational definitions.
- To whom would it matter if the claims were shown to be true?
The thesis, if demonstrated, would be of interest and value to users of text entry methods on handhelds and desktops,
particularly those with motor impairments or in impairing situations. Given the pervasive need for text entry on computers, and
accessible text entry in particular, the audience benefiting from this work could be quite large.
Below are the thesis statements of my graduated doctoral students in the
ACE Lab. These statements come from the
field of human-computer interaction (HCI) and draw upon information science, computer science,
social science, and/or design. A useful exercise is to ask and answer the six questions above
for each of these statements, as demonstrated with my own statement.
Rachel Franz (2024): Supporting the Design, Selection, and Evaluation of Accessible Interaction Techniques for Virtual Reality
The accessibility of virtual reality (VR) can be improved by the selection, design, and
evaluation of scene-viewing and locomotion techniques using design frameworks,
testbed evaluations, and movement analysis and modeling.
Ather Sharif (2024): Improving the Accessibility of Online Data Visualizations for Screen-Reader Users and Visualization Creators
A multi-modal, customizable, and interactive JavaScript plug-in called "VoxLens" improves the experiences
of screen-reader users in extracting information from simple and complex online data visualizations while
also enhancing the knowledge of visualization creators to make online data visualizations accessible.
Mingrui (Ray) Zhang (2022): Towards More Intelligent, Intuitive, and Inclusive Communication with Computers in Text and Images
Artificial intelligence can enable and improve advanced text production and accessible interactions with pictures;
in addition, new metrics for text entry enable the evaluation of advanced capabilities.
Anne Spencer Ross (2021): A Large-Scale, Multi-Factor Approach to Understanding and Improving Mobile Application Accessibility
In mobile app accessibility, applying an epidemiology-inspired framework that emphasizes
multi-factor and large-scale analyses can:
(1) reveal population-level trends of accessibility failures;
(2) aid in identifying a range of intrinsic to extrinsic factors that can impact app accessibility; and
(3) inform the design of tools for identifying and repairing accessibility failures.
Abdullah X. Ali (2020): Distributed Interaction Design
Using a custom-built platform to conduct Distributed Interaction Design (DXD)
enables: creating user-elicited interactions; evaluating the guessability,
learnability, and memorability of interaction designs; and recruiting
participants through third-party services in a timely manner.
Alexander T. Mariakakis (2019): Making Medical Assessments Available and Objective Using Smartphone Sensors
Technological and scalability barriers to some medical assessments can be
addressed through smartphone-based sensing tools; moreover, the acceptability
of these tools can be addressed through surveys that reveal how these tools
and their results are regarded by users.
Martez E. Mott (2018): Improving Touch Accuracy for People with Motor Impairments
Ability-based touch models can improve touch accuracy on touch screens
compared to native touch sensors or existing statistical models for people
with motor impairments and for people in motor-impairing situations.
Abigail C. Evans (2018): Adaptive Support for Face-to-Face Collaborative Learning at Tabletop Computers
Interactive tabletop software that can automatically detect breakdowns
in collaboration and adapt in real-time to scaffold effective social
regulation can improve secondary school students' collaboration skills.
Kathleen O'Leary (2017): Designing Chat Guidance for Positive Psychological Change
Online chat guidance can provide low-barrier access to psychotherapy
techniques, help peers to form supportive relationships through deeply
insightful chats, and promote positive changes in feelings, thoughts,
and motivations.
Kristen Shinohara (2017): Design for Social Accessibility: Incorporating Social Factors in the Design of Accessible Technologies
Design for Social Accessibility produces technology designs judged
by people with and without visual impairments to be functionally
and socially accessible, addressing feelings of self-consciousness
and self-confidence in technology use.
Shiri Azenkot (2014): Eyes-Free Input on Mobile Devices
Gesture-based input methods that use simple multi-touch taps, and
speech-based input methods that facilitate error detection and correction,
can both enable blind people to enter text more effectively on touchscreens
than the de facto standard methods.
Jessica J. Tran (2014): Human-Centered Optimization of Mobile Sign Language Video Communication
Mobile sign language video transmitted at frame rates and bit rates below
recommended standards (10 fps and 50 kbps, below the ITU-T recommendation), which saves
bandwidth and extends battery life by about 30 minutes, is still intelligible and
can facilitate real-time mobile video communication.
Parmit K. Chilana (2013): Supporting Users After Software Deployment through Selection-Based Crowdsourced Contextual Help
A selection-based contextual help system that allows users to find
questions and answers from other users and support staff can be helpful,
intuitive, and desirable for reuse for end users, and can provide new
insights to software teams about frequently asked questions.
Jeff Huang (2013): Modeling User Behavior and Attention in Search
Users' mouse cursor interactions can be collected efficiently on the Web,
used to understand users' search behaviors, and can be useful in the design
of Web search engines.
Shaun K. Kane (2011): Understanding and Creating Accessible Touch Screen Interactions for Blind People
Accessible gesture-based interfaces, designed to support the spatial and
tactile abilities of blind people, can enable blind people to effectively
use touch screens, including on mobile devices, tablet and tabletop computers,
and public information kiosks.
Susumu Harada (2010): Harnessing the Capacity of the Human Voice for Fluidly Controlling Computer Interfaces
Non-speech vocal input can be used on its own and in conjunction with other
input modalities to enable people—especially those with motor disabilities—to
control computer interfaces effectively.
Krzysztof Z. Gajos (2008): Automatically Generating Personalized User Interfaces
Automatically generated user interfaces, which are adapted to a person's
devices, tasks, preferences, and abilities, can improve people's satisfaction
and performance compared to traditional manually designed "one size fits all"
interfaces.