A thesis statement communicates the core claims of a doctoral dissertation, a large culminating piece of original research and writing that advances knowledge in an academic field. These claims are what the author purports to demonstrate convincingly through evidence and argumentation, and a successful dissertation must clearly demonstrate each of them. Outside the formal sciences like mathematics and logic, thesis statements are not proved; rather, they are demonstrated to be true in light of the author's original work. A good thesis statement is specific in its claims but does not have to operationalize its terms; that is left for the dissertation itself. The claims should be novel, non-obvious, operationalizable, and important to some identifiable stakeholder.
My own thesis statement was:
A versatile design for text entry and control called "EdgeWrite," which uses physical edges, goal crossing, and a minimized need for sensing, is effective on handhelds and desktops for people with motor and situational impairments.
For this and other thesis statements, we can (and should) ask and answer six questions:

1. What, specifically, does the thesis statement claim? The existence of a working invention called "EdgeWrite"; that it is somehow "versatile"; that it uses physical edges, goal crossing, and minimal sensing; and that it is somehow "effective" on both handheld devices and desktop computers for people with motor impairments or in impairing situations.

2. Which terms must be operationalized? What does it mean for EdgeWrite to be "versatile"? What does it mean for EdgeWrite to be "effective"?

3. Are the claims novel? Yes, as EdgeWrite is a new invention.

4. Are the claims non-obvious? Yes; nothing prior to the design, construction, and evaluation of EdgeWrite made them evident.

5. Could the claims turn out to be false? Yes. Depending on how "versatile" is operationalized, reviewers could disagree that EdgeWrite is sufficiently versatile. Similarly, depending on how "effective" is operationalized, reviewers could find that EdgeWrite is insufficiently effective on handhelds or desktop computers, or for people with motor impairments or in impairing situations. Any number of these aspects could turn out to be false, or could require "versatile" or "effective" to adopt unconvincing operational definitions.

6. Who are the stakeholders? Users of text entry methods on handhelds and desktops, particularly those with motor impairments or in impairing situations. Given the pervasive need for text entry on computers, and for accessible text entry in particular, the audience benefiting from EdgeWrite could conceivably be large.
Below are the thesis statements of the doctoral students who have graduated from the ACE Lab. These statements are from the field of human-computer interaction (HCI), with some situated more in computer science and others more in information science, social science, or design. A useful exercise is to ask and answer questions 1-6, above, of each of these statements, as demonstrated for my own thesis statement.
In mobile app accessibility, applying an epidemiology-inspired framework that emphasizes multi-factor and large-scale analyses can: (1) reveal population-level trends of accessibility failures; (2) aid in identifying a range of intrinsic to extrinsic factors that can impact app accessibility; and (3) inform the design of tools for identifying and repairing accessibility failures.
Abdullah X. Ali (2020): Distributed Interaction Design
Using a custom-built platform to conduct Distributed Interaction Design (DXD) enables: creating user-elicited interactions; evaluating the guessability, learnability, and memorability of interaction designs; and recruiting participants through third-party services in a timely manner.
Technological and scalability barriers to some medical assessments can be addressed through smartphone-based sensing tools; moreover, the acceptability of these tools can be addressed through surveys that reveal how these tools and their results are regarded by users.
Martez E. Mott (2018): Improving Touch Accuracy for People with Motor Impairments
Ability-based touch models can improve touch accuracy on touch screens compared to native touch sensors or existing statistical models for people with motor impairments and for people in motor-impairing situations.
Interactive tabletop software that can automatically detect breakdowns in collaboration and adapt in real-time to scaffold effective social regulation can improve secondary school students' collaboration skills.
Kathleen O'Leary (2017): Designing Chat Guidance for Positive Psychological Change
Online chat guidance can provide low-barrier access to psychotherapy techniques, help peers to form supportive relationships through deeply insightful chats, and promote positive changes in feelings, thoughts, and motivations.
Design for Social Accessibility produces technology designs judged by people with and without visual impairments to be functionally and socially accessible, addressing feelings of self-consciousness and self-confidence in technology use.
Shiri Azenkot (2014): Eyes-Free Input on Mobile Devices
Gesture-based input methods that use simple multi-touch taps, and speech-based input methods that facilitate error detection and correction, can both enable blind people to enter text more effectively on touchscreens than the de facto standard methods.
Mobile sign language video transmitted at frame rates and bit rates below recommended standards (10 fps/50 kbps vs. the ITU-T recommendation), which saves bandwidth and extends battery life by about 30 minutes, is still intelligible and can facilitate real-time mobile video communication.
Parmit K. Chilana (2013): Supporting Users After Software Deployment through Selection-Based Crowdsourced Contextual Help
A selection-based contextual help system that allows users to find questions and answers from other users and support staff can be helpful, intuitive, and desirable for reuse by end users, and can provide new insights to software teams about frequently asked questions.
Users' mouse cursor interactions can be collected efficiently on the Web, can be used to understand users' search behaviors, and can inform the design of Web search engines.
Shaun K. Kane (2011): Understanding and Creating Accessible Touch Screen Interactions for Blind People
Accessible gesture-based interfaces, designed to support the spatial and tactile abilities of blind people, can enable blind people to effectively use touch screens, including on mobile devices, tablet and tabletop computers, and public information kiosks.
Non-speech vocal input can be used on its own and in conjunction with other input modalities to enable people—especially those with motor disabilities—to control computer interfaces effectively.
Krzysztof Z. Gajos (2008): Automatically Generating Personalized User Interfaces
Automatically generated user interfaces, which are adapted to a person's devices, tasks, preferences, and abilities, can improve people's satisfaction and performance compared to traditional manually designed "one size fits all" interfaces.