
International Association
of Forensic Linguists

7th Biennial Conference on Forensic Linguistics/Language and Law

1st ~ 4th July 2005
Cardiff University, UK

Conference Abstracts

(ordered alphabetically by name of principal presenter)

Michelle Aldridge and June Luchjenbroers
Cardiff University and University of Wales, Bangor

Questions, Metaphors & Frames: Methods of manipulation, and subsequent analysis

Using methodological tools from recent innovations in semantic analysis and cognitive linguistics, we aim here to cast further light on gender representation and manipulation in legal settings. The concepts to be discussed include metaphor, conceptual blends and frames, within the context of how questions are used. Earlier research in the analysis of metaphor has shown that where men are represented as pro-active and assertive, born to succeed, women are represented as under-developed, emotional and in need of protection – cf. early stages of human development and non-human entities such as animals (Luchjenbroers, 1997). In brief, metaphor captures cultural stereotypes, encoded in a speaker's choice of descriptors, and is a powerful tool in the investigation of speaker attitudes. Frames are culturally accepted information sets surrounding every lexical term. So, the rape victim who ‘corrects’ a barrister’s characterization of the type of establishment she had been to on the night of the said rape (‘bar’ is replaced by ‘club’) is rejecting more than the name: she is rejecting the entire social scene and conduct associated with a ‘bar’. The appropriateness of the chosen frame is also of key importance to ‘smuggling information’. For example, when the prosecution barrister asks the defence witness, ‘and did he tell you of his love of duck-shooting?’, the frame associated with ‘love’ requires an activity that happened often and with a passion. Given that the defendant is on trial for murder (with a shotgun), this is clearly an undesirable presentation of the defendant’s character (however accurate). Our analysis will present a number of linguistic strategies of importance to manipulation from a range of discourse types, with specific examples taken from courtroom interactions and police interrogations.

Allan Bell
Centre for Communication Research
Auckland University of Technology

Advocating indigenous language rights in the courts: Maori language television in Aotearoa/New Zealand

The founding document of the nation of Aotearoa/New Zealand is the Treaty of Waitangi, signed in 1840 between Maori chiefs and the British Crown. The Maori text of the Treaty proclaims protection of Maori ‘taonga’ or ‘treasures’. Over the past two decades the Treaty has formed the basis of a series of legal cases resulting in reparation, in particular for land confiscations. The Treaty’s principles have also been applied to te reo Maori, the Maori language, which has been classed as a ‘taonga’ requiring the active protection of the New Zealand Government under its Treaty obligations. Maori is an endangered language, with a large majority of Maori people now monolingual in English, and little more than 10 percent of the Maori population claiming fluency in the language, most of them middle-aged or older. In 1985 the first of a series of cases was brought seeking that the Government regulate broadcasting to protect and promote the Maori language. I was called in as an expert witness in these cases, which made their way through all levels of the New Zealand courts to the final tribunal, the Privy Council in London. My evidence presented international precedents for the role of broadcasting in language maintenance, the role of broadcasting in language transmission, and in particular the standing of broadcasting in promoting the mana (reputation) of a language, especially through state intervention. I emphasised the case for mainstreaming Maori language on television at a minimum of one half-hour per night on each of the three main channels in prime time, a total of 10.5 hours per week. The successive court cases were all lost, but in the process the tenor of the courts’ judgments persuaded the New Zealand Government to recognize its political obligation to the language. The result has been the establishment of a Maori television channel, which began transmission a year ago.
Mainstreaming, however, shows no sign of occurring in the highly deregulated and commercially competitive New Zealand television environment.

Susan Berk-Seligson
Vanderbilt University

False confession: linguistic and extralinguistic evidence of coercion in a police interrogation

Discourse analyses of ‘stance’ point to a variety of linguistic mechanisms that speakers use to express their “attitudes, feelings, judgements, or commitment to the propositional content of their message” (Biber and Finegan 1989). Grammatical elements together with lexicon play an important role in creating stance. The studies of Biber and Finegan (1989), Conrad and Biber (2000), Halliday (1994), Johnstone (2004), and Stubbs (1996) enumerate specific discursive elements that contribute to achieving stance. Stance-taking is particularly relevant to adversarial interaction, and especially salient in situations of highly asymmetrical power relations, such as police interrogations. In such interaction, the interrogator, through a variety of means, conveys to the suspect that he, the interrogator, believes that the suspect has been involved in a crime, and the suspect – even if guilty – works at making himself appear blameless. This paper analyzes stance-taking in a case involving a young Mexican man convicted in the USA of the grisly murder of a Latino couple and the kidnapping of their three-month-old baby and three-year-old son. He currently sits on death row. The police interrogator was a bilingual U.S. Latino, who conducted the interrogation in Spanish, and who in addition acted as interpreter for the defendant and the monolingual English-speaking district attorney who drew up the defendant’s confession statement. The paper demonstrates that coercion was involved in obtaining the signed confession, and that there is convincing evidence of the defendant’s innocence. Linguistic evidence will be drawn from affidavits, together with pretrial and trial testimony of the police detective and the defendant.

Sue Blackwell, Jess Shapero and Willem Meijs
University of Birmingham

Texts of Murder and Martyrdom

On 2nd November 2004, the Netherlands was shocked by the murder of the controversial journalist Theo van Gogh. He had been both shot and stabbed repeatedly, and a knife was left pinning a letter to his body: a lengthy diatribe addressed to Ayaan Hirsi Ali, a Somali-born Dutch MP who had recently made a film with van Gogh about domestic violence against Muslim women. The letter informed Ali that she was next on the killer's hit-list. A further letter was found on the suspect, Mohammed Bouyeri. Like the first, it was written in fluent Dutch, but this text was in verse and described the writer's expectations of martyrdom. Of the 40% or so of suicides in Western Europe who leave notes, very few are also killers. The corollary of this is that very few murderers go on to kill themselves and leave suicide notes. Typically, suicide notes are full of affection and, according to some research, have a positive outlook. This paper confronts a type of suicide which is different from that usually considered in academic research: the issue of "intent" is less applicable in these cases and it seems, anecdotally at least, that most of these suicides do leave notes. Yet even in cases of suicide bombings in the Middle East where video testaments are common, texts left at the scene of the attack are unheard of. The van Gogh texts probably constitute the first example of an attacker documenting in a European language their expectations of both killing and dying in an intended martyrdom operation. This paper will analyse the language of both the "murder note" and the "suicide note" in this case and will consider questions such as "How do murder-suicide notes differ from the typical suicide note?" and "Should one expect the language of a suicide note to be typical of the writer's usual personal style?"

Victor J. Boucher
Université de Montréal

On the measurable linguistic correlates of deceit in recounting past events

Deceit has been defined as a deliberate attempt to create by a communicative act, and
without forewarning, a belief that is untrue or inaccurate. This presentation deals with
deceit in a situation where speakers recount past events that cannot be immediately
verified. In this context, a number of linguistic variables can serve to reveal a speaker’s
intention to deceive, thereby establishing the need for investigators to verify recounted
events. A comprehensive bibliography will be made available on 41 variables that have
been proposed. Of these, only 13 linguistic variables present acceptable degrees of
reliability and do not require acoustic analyses beyond phonetic transcription.
Considering these variables, an experiment was performed with the purpose of isolating
the specific aspects that can discriminate deceitful from truthful discourse across speakers
and “themes”. Twenty university students were asked to recount, in turn, events that took
place and that did not take place in five different places (in a hospital, at school, in a bar,
on vacation, and at work). The recorded speech provided a corpus of 200 three-to-five
minute discourses varying in theme, with half representing truthful and half inaccurate or
untrue accounts. Currently, 50% of the recorded samples representing ten subjects have
been transcribed in ASCII format using particular coding. Descriptive statistics were
derived by applying routines of an available freeware program. Paired t-tests for the
analysed samples show significant differences on given variables such as hesitation,
lexical repetition (type/token) and utterance length. A multivariate discriminant analysis
will be applied to determine the successfulness of linear combinations of variables in
correctly classifying deceitful discourse. This analysis will also determine how
classification rates vary with individuals and themes. The discussion will stress the
advantage of speech over written testimony as a means of soliciting variations in the
particular variables that can reveal deceit.
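
A paired t-test of the kind described above can be sketched in a few lines; the numbers below are invented for illustration and are not the study's data – the variable could be, say, hesitations per minute for each speaker under the two conditions:

```python
import math
from statistics import mean, stdev

def paired_t(truthful, deceitful):
    """Paired t-statistic: each speaker contributes one score per
    condition, and the test is run on the per-speaker differences."""
    diffs = [t - d for t, d in zip(truthful, deceitful)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Invented hesitation rates for five speakers (truthful vs. deceitful).
truthful  = [2.1, 1.8, 2.4, 2.0, 1.9]
deceitful = [3.0, 2.6, 3.4, 2.9, 2.8]
t = paired_t(truthful, deceitful)
```

With these invented figures every speaker hesitates more when deceiving, so the statistic is strongly negative; the real analysis compares the observed t against the critical value for n − 1 degrees of freedom.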

Deborah Bradford and Jane Goodman-Delahunty
University of New South Wales

Truthful vs. deceptive confessions: Investigating the performance of statement analysis techniques at detecting deception in the confessional context

Detection of deception and the accuracy of credibility assessments is an important forensic concern. The prevalence of false confessions is unknown, but detecting false confessions is significant to the successful functioning of the legal system as false confessions may lead to false conviction or imprisonment of innocent persons (Davis & O’Donohue, 2003). Whether genuine and deceptive confessions can be successfully distinguished has received scant research attention. In the current investigation we examined the utility of text-based techniques focussing on statement content, such as the Aberdeen Report Judgment Scales (ARJS) (Sporer, 1998) to distinguish truthful and false confessions. In this study, participants gave either genuine or deceptive accounts of autobiographical events according to the real/simulating paradigm (Ashkar, 2002). Participants in the genuine condition gave a detailed confessional narrative to a second experimenter of an act they had previously committed. Participants in the deceptive condition were “simulators” required to adopt a genuine confession as their own and convince a second experimenter that they had actually experienced the event. All narratives were transcribed and subsequently analysed by lay observers and police officers trained in the application of ARJS criteria. Findings are discussed in terms of the performance of law enforcement professionals versus laypersons, the viability of statement analysis techniques to assess credibility of confession evidence and the legal implications in the confessional context.

Christian Braun, Silvia Hansen-Schirra, Kerstin Kunz and Stella Neumann
Saarland University

The Syntactic Complexity of German Legalese - An Empirical Approach

This paper presents an empirical analysis of the syntactic complexity of German legal texts. In order to gain insights into characteristics of legal language we compare German Federal Constitutional Court decisions and press releases as well as newspaper reports on these decisions with respect to a range of grammatical features. Our main interest is clausal as well as phrasal complexity. We claim that the court decisions display features of legal LSP (Language for Special Purposes) which make them hard to understand. These features are expected to be less salient in the press releases and to almost disappear in the newspaper reports. In our analysis of clauses, we process the three corpora mentioned above using a topological parser. It is based on the theory-neutral notion of sentence topology specifically developed for German. The parser structures sentences into a series of neighbouring and embedded sub-parts, so-called topological fields. The phrases are analysed manually using an XML editor. Both the automatic output of the parser and the output of the manual annotation are double-checked by the annotators. We interpret syntactic complexity by counting the number of tokens and phrases per field and analyse nesting at the clausal and phrasal levels. This enables us to pinpoint patterns which are unique to court decisions as compared to press releases as well as newspaper reports. Since grammatical complexity has a major impact on comprehensibility, the results of the analysis serve our long-term goal to develop linguistic strategies for improving the comprehensibility of legal texts. We expect that a number of phenomena found in our analysis - e.g. overloaded 'Vorfelds', deep clausal nesting, heavy phrases - will play an important role in formulating such strategies.
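
The token-and-phrase counts per topological field described above can be illustrated with a toy sketch. The element names, attribute names and the German fragment below are invented for illustration and are not the project's actual annotation scheme:

```python
import xml.etree.ElementTree as ET

# Invented mark-up: <field> elements for topological fields,
# containing <phrase> children with whitespace-tokenised text.
sample = """
<sentence>
  <field name="Vorfeld"><phrase>Das angefochtene Urteil</phrase></field>
  <field name="Mittelfeld"><phrase>den Beschwerdefuehrer</phrase>
    <phrase>in seinem Grundrecht</phrase></field>
</sentence>
"""

def counts_per_field(xml_text):
    """Return {field name: (token count, phrase count)} as a crude
    stand-in for the complexity measure sketched in the abstract."""
    root = ET.fromstring(xml_text)
    result = {}
    for field in root.iter("field"):
        tokens = len("".join(field.itertext()).split())
        phrases = len(field.findall(".//phrase"))
        result[field.get("name")] = (tokens, phrases)
    return result
```

On this fragment the Vorfeld holds one phrase of three tokens and the Mittelfeld two phrases of five tokens in total; an overloaded Vorfeld would show up as an unusually high token or phrase count in that field.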

Ron Butters and Jackson Nichols
Duke University

What Can Go Wrong When Linguists Testify in American Trademark Litigation

“Dictionary definitions” is a legally determinative factor that American courts may consider in deciding whether a putative trademark is GENERIC (and hence not legally protectable). Linguists who testify in such cases are permitted to interpret and expand upon existing dictionary entries, using established linguistic and lexicographical methodologies. Unfortunately, the role of the linguist in such cases is more complex and perilous than is commonly perceived. Our presentation examines how a noted linguist’s apparently flawed understanding of genericness led to testimony that helped to create a still-unresolved legal muddle. A Federal District Court (the 6th in 1969) ruled that opry ‘medley of country music, dance, and comedy routines’ was not generic and hence a legitimate trademark of the Grand Ole Opry (which had registered the trademark in 1950 after having used it for some 22 years). However, another court (the 8th Circuit in 1982) ruled instead that opry is generic, hence not protectable. The crucial difference in the presentations of the two cases: in 1982, the attorneys challenging the claims of the Grand Ole Opry employed a respected linguist and lexicographer, Fred Cassidy, who testified that opry (a dialectal variation of opera) is "generic" because it is (indisputably) a term long applied in American folk speech to various kinds of theatrical performances. Legally, however, this conclusion seems open to serious question because, as used by the Grand Ole Opry (and in attempted use by rivals), Opry is far more specialized in meaning than is the ordinary dialect term opry. As such, it was not, at least at its inception, the “name of the thing itself” but rather an imaginative semantic narrowing of the original word. Cassidy’s testimony, apparently a fundamental underpinning of the 1982 ruling, while no doubt given in good faith, essentially blurred the legal and linguistic issues.

Sophie Cacciaguidi-Fahy
National University of Ireland, Galway

The ‘liaison dangereuse’ between individual liability and collective guilt: interpreting the language of Article 7(1) of the ICTY statute

Despite the methodical recognition of individual liability in international criminal law to avoid the collectivisation of guilt, we continue to assume that crimes against humanity, such as genocide, express the actions and the implicit guilt of entire groups of people. The overall purpose of this paper is to demonstrate how the language of one article of the ICTY statute has been used by judicial discourse to construct a peculiar legal doctrine in international criminal law. I will endeavour to do this by analysing the paradoxical relationship between a collectivist signifier, ‘joint criminal enterprise’, used deliberately to establish the individual nature of a signified, ‘individual liability’, in the quest to avoid collective guilt. I will begin my analysis by reflecting on the nature of the joint criminal enterprise doctrine first enunciated by the ICTY as implicitly deriving from the language of Article 7(1) of the ICTY Statute. I will then reflect on the teleological interpretation by the Appeal Chamber of the language of Article 7(1) to extend the concept of mens rea beyond its established frontiers and the implications this linguistic/purposive interpretation has had in redefining the concepts of individual and collective criminal liability. In conclusion, I will attempt to demonstrate that, although joint criminal enterprise is a sufficiently ambiguous concept to stand as a point solution to establish individual liability of multiple actors within a group and induce the necessary ‘stigmatisation’ of serious crimes against humanity, from a legal discourse standpoint, judicial discourse has continued to reason in the traditional legal framework of liability constructed on ‘simplistic’ notions of responsibility based on the notions of principal or accessory, using the same rhetoric and language as that traditional model.

Carole Chaski
Institute for Linguistic Evidence, Inc

Alternative Distance Measures for Validating the Syntactic Analysis Method

Working with 85 documents authored by 17 American writers, the syntactic analysis method (Chaski 2001, 2004) correctly classified between 90% and 99.2% of all the documents across several different experiments. These experiments included multiple-author classification as well as pair-of-authors classification. Further, the experiments included author-to-author as well as intra-author document classification. Finally, the experiments included several alternative distance measures implemented in SPSS, SAS JMP and Excel. These results demonstrate the validity of the method as a tool for determining authorship and suggest standard protocols under which the method optimally operates.
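
The abstract does not specify which distance measures were tested; purely as an illustration, the sketch below shows how two common measures (Euclidean and Manhattan) could assign a questioned document to the author whose mean feature vector lies nearest. All feature values and author labels are invented:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(doc_vec, author_profiles, distance):
    """Assign a questioned document to the author whose mean
    feature vector is closest under the chosen distance measure."""
    return min(author_profiles,
               key=lambda a: distance(doc_vec, author_profiles[a]))

# Invented per-author mean frequencies of some syntactic features.
profiles = {"A": [0.2, 0.5, 0.1], "B": [0.6, 0.1, 0.4]}
questioned = [0.25, 0.45, 0.15]
```

Here both measures place the questioned document with author A; with real data the interest lies precisely in the cases where alternative measures disagree, which is what a validation study across measures can quantify.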

Jordi Cicres and M Teresa Turell
Universitat Pompeu Fabra, Barcelona

Short and long-term variation in intonation patterns: a preliminary study for speaker identification

In this study we examine some preliminary data related to intonation patterns in Catalan as useful variables for speaker identification. A corpus of read texts – recorded in two different sessions with a temporal gap of one year – of 6 Catalan speakers is used. There are many different ways to pronounce a sentence, depending on pragmatic and discursive uses, such as questions, statements, exclamations, hesitation, sarcasm etc. In order to achieve these pragmatic and discursive goals, speakers need to modify several aspects of their voice, implementing a melodic line (or intonation) in their speech. Pitch, voice quality, tempo, rhythm and loudness are the phonetic correlates of intonation (Clark and Yallop 1995; Prieto 2002). In this study, only pitch is taken into account. The hypothesis tested in this study is that there will be less intra-speaker than inter-speaker differences in the intonation patterns of both short and long-term samples. Further research will involve conducting the same experiment with spontaneous speech. To carry out the labelling and analysis of the data, the International Transcription System for Intonation (INTSINT) (Hirst, Di Cristo and Espesser 2000) is used. The fundamental frequency (F0) contour has been previously stylized by means of the MOMEL algorithm (Hirst and Espesser 1993).

Clark, J. and Yallop, C. (1995). An Introduction to Phonetics and Phonology. Second edition. Oxford: Blackwell Publishing.
Hirst, D., Di Cristo, A. and Espesser, R. (2000). ‘Levels of representation and levels of analysis for the description of intonation systems’. In Horne, M. (ed.), Prosody: Theory and Experiment. The Netherlands: Kluwer Academic Publishers, 55-88.
Hirst, D. and Espesser, R. (1993). ‘Automatic modelling of fundamental frequency using a quadratic spline function’. Travaux de l’Institut de Phonétique d’Aix 15: 71-85.
Prieto, P. (2002). Entonació. Models, teoria, mètodes. Barcelona: Ariel.

Burns Cooper
University of Alaska Fairbanks

Justice à la Mode: Potential pitfalls of electronic modes of courtroom discourse

Electronic modes of communicating and presenting information are becoming increasingly common in courtrooms in the USA (mirroring their increased use in other fields such as education and business). “E-courtrooms” and video arraignments are increasingly accepted. In some cases, judges, prosecutors, and defense attorneys are all pleased with such arrangements, both because they can reduce the length of trials and because they make conveniences such as speech-to-text transcription more feasible (e.g., “E-Courtrooms Designed to Reduce Trial Time,” AP, 21 September 2004). However, there are potential downsides to electronic modes that should be considered before embracing them wholesale. Linguists are well aware that transmission of words (putting packets of information through a “conduit”) is only part of the work accomplished by oral communication. Physical presence in the courtroom can give important pragmatic cues to interpreting speech, including the physical orientation of speakers, better access to facial expressions, gestures, and perhaps intonational cues, and ease of recognizing who is talking at any one time. For especially vulnerable defendants and witnesses, these cues can be crucial to their ability to contribute meaningfully to the discourse and thus (in the case of defendants) to their own defense. Lacking such cues can undermine some of the most basic pragmatic and communicative principles, such as the Gricean maxim of Relevance and some felicity conditions for speech acts. This paper will survey the prevalence of certain electronic modes of courtroom discourse, and then focus on a particular case in Fairbanks, Alaska, in which a video arraignment of a mentally and physically disabled man may have contributed to a highly problematic result. 
The paper will examine the possibility that for persons whose ability to understand what is happening in a legal case is compromised in some way to begin with (and this could include not only the mentally disabled, but others such as the deaf, the blind, persons of limited proficiency in English, and persons whose cultural backgrounds involve assumptions about discourse that differ from the prevailing legal culture), the abstraction and loss of nonverbal information involved in incorporating electronic discourse may add to their disadvantage.

Richard Creech

Language Law and Celtic Identity in the European Union

The Celts once roamed over much of the land that now constitutes the European Union. Today, however, speakers of Celtic languages (Irish, Welsh, Scots Gaelic and Breton) are found mainly in isolated pockets on the periphery of this territory. This paper examines how the EU’s legal system has interacted with these languages, both with regard to their status within the EU’s own internal organs and with respect to its effects on language policy within the three Member States where Celtic languages are spoken. The EU does not accord the Celtic languages the same treatment it bestows on other languages, nor does it treat all Celtic languages equally. This variation flows from the differences in the attitudes of Member States towards their respective Celtic identities, which range from open (if ambiguous) celebration in Ireland, to tolerance in the United Kingdom, to overt hostility in France. Irish, as the only Celtic language from an officially Celtic state, is the only Celtic language that is used in EU legal settings, but it has been given a highly qualified status that reflects the Irish Republic’s own ambivalence to its indigenous tongue. The use of the Celtic languages within the various Member States is also affected by the EU’s substantive economic law, as rules regarding domestic language usage may run afoul of the treaty-based rights that citizens of the EU have to engage in free commercial movement throughout the Union. The European Court of Justice in Groener v. Minister for Education (Case C-379/87, 1989 ECR 3987) considered the impact of an Irish-language requirement that operated to deny a Dutch woman employment at an art college in Dublin. This paper will analyze the Court’s opinion and discuss its implications for language policy in other Celtic lands, with particular attention to the linguistic situation in Wales.

Bart DeFrancq
Ghent University

Europe's constitution: a terminological battleground

2004 was one of the most successful years for the European integration process, with significant progress on the enlargement front through the accession of ten new Member States, but also on the integration front, with the adoption by the European Council of a European Constitution. It seems, however, that Europe has reached the limits of its potential to further enlarge or integrate. Turkey's possible accession provokes strong reservations and the Constitution itself seems to put an end to a tradition of visionary wording in the Treaties that preceded it. My paper will focus on this and other aspects of the language used in the Constitution and, particularly, on what the language reveals about the negotiations that led to the Constitution. It will discuss the efforts that have been made towards the use of plain language and a greater transparency of the institutional terminology, but it will also highlight the half-bakedness of many of the results (what is, for instance, a ‘legislative function’ or the ‘Community way/method’ when the European Community(ies) has(ve) ceased to exist?). On the basis of a comparison with other constitutions and other European Treaties, I will argue that too much progress on these two fronts would have brought the Constitution intolerably close to the model of ordinary constitutions, which in the view of many, not least the Convention's Chairman, was to be avoided. I will also briefly touch upon the role played by the Académie française in correcting the already approved draft version of the Constitution.

Du Jinbang
Guangdong University of Foreign Studies

Information Processing and Utilization in Courtroom Interactions

In the domain of legal language, courtroom discourse is representative of the information processing and utilization which constitute the core of the interaction between participants in the courtroom. Thus research into this aspect may be illuminating for the whole process of courtroom trial. Some research on information processing mainly enquires into communication and communication strategies; other work goes further to focus on the linguistic aspect. Both schools contribute to research in courtroom discourse. The present paper, in the light of previous research in communication and information processing, tries to develop a model of information processing in the courtroom setting. This model comprises three tiers: the first is the interactive orientations of participants, who often have different attitudes toward a specific piece of information, and the social relations of the participants as formed in the courtroom context, on which the attitudes are based; the second is the linguistic strategies which are employed by the participants to realize their goals of communication; the third is the specific linguistic devices used, which are the embodiment of the strategies. To set up this model and test its feasibility, data are collected from some Chinese courts. Cases of different kinds are included to ensure the extensive applicability of the model. Though the model is to be set up in the Chinese context, it may hopefully cast light upon similar studies of different legal languages and in different legal contexts. One evident limitation is that the model is a tentative exploration into information utilization in the courtroom context. A lot of ensuing research has to be done before the model is generally accepted as satisfactory.

Sandra Evans
The University of the West Indies, St. Augustine Campus, Trinidad and Tobago.

The Use of Court Clerks as ‘Makeshift’ Legal Interpreters in St. Lucian Courts

St. Lucia’s legal system is modeled on British law and the official language of the law is Standard English. However, there are native St. Lucians who lack written and/or spoken proficiency in English. These persons generally speak St. Lucian French-lexicon Creole, which is the unofficial, national language of St. Lucia. This language is not mutually intelligible with any form of English. When these Creole-speaking persons appear as defendants, witnesses or victims before the court, they require the assistance of a legal interpreter to facilitate their comprehension of and participation in the court proceedings. At present, there are no trained legal interpreters in French-lexicon Creole in St. Lucia. Consequently, as the need for an interpreter arises in the courts, in both criminal and civil matters, a regular Clerk of Courts becomes a ‘makeshift’ legal interpreter. These Clerks play a crucial role in the proceedings, as their interpretation determines what information is recorded by the magistrate. This paper examines the inadequacies and implications of using Court Clerks as makeshift legal interpreters in St. Lucian courts. It also highlights some of the difficulties experienced by these interpreters resulting from a lack of training and standardized procedures.

Lysbeth Ford and Dominic McCormack
Batchelor Institute of Indigenous Tertiary Education, NT Australia and MARLUK Link-Up, Darwin, NT Australia

The Murrinh-patha Legal Glossary: a bridge between laws

This paper reports on a ground-breaking project to produce a glossary of commonly-used English legal terms in an Australian Indigenous language. It is a response to the disproportionately high incarceration rates of Indigenous Australians, whose limited command or complete lack of English puts them at a severe disadvantage when they are confronted head-on by the Australian legal system. Magistrates, police and legal aid services regularly find themselves hamstrung by the lack of a common language to communicate with defendants, detainees and clients alike. Indigenous interpreters, while fluent in the first language of their clients, often lack sufficient specific English competency to be able to decipher the meanings of English legal terms. Funded by the Australian Institute of Aboriginal and Torres Strait Islander Studies and The Law Society of the Northern Territory Public Purposes Trust, the project was undertaken by Ford, an experienced linguist, several Murrinh-patha elders, John Sheldon, a Darwin-based legal aid lawyer, and McCormack, a lawyer fluent in Murrinh-patha. Murrinh-patha was chosen because it, rather than English or Kriol, is the lingua franca of approximately 2,500 Indigenous Australians in the Thamarrurr Region, 400km south-west of Darwin in the Daly River Aboriginal Land Trust area of the Northern Territory of Australia, and the Kimberley region of Western Australia.

The project has had several successful outcomes:

- a legal glossary of the most commonly used English legal terms;
- increased understanding of the legal process by the Murrinh-patha elders who worked on the project;
- increased understanding of the subtleties of Murrinh-patha by the non-Indigenous researchers.

The glossary has also proved a useful pedagogical tool in the training of Indigenous interpreters, because it consists largely of paraphrases which provide an Australian Indigenous context for each legal concept. The rigour with which it was conducted is evident in the morpheme by morpheme glosses provided for each Murrinh-patha translation.

Clive Forrester
University of the West Indies, Mona, Jamaica

The Discourse of Time in the Jamaican Courtroom: Creole-speaking witness meets English-speaking counsel

Within the context of the courtroom, the clarification of issues surrounding time is a crucial matter. In fact, in many cases, establishing that a crime took place at a particular time and over a particular temporal span is as significant a piece of evidence as any other, linguistic or otherwise. Prosecuting counsel has the burden of placing the accused not only at the scene of the crime but at the “time of the crime.” For the most part, witness testimonies in criminal trials take the form of a narrative. The smallest unit of the narrative is an event, and events are what witnesses must recall. However, witnesses are not merely asked to recall events in a linear fashion; they may be asked to (a) say which events were occurring simultaneously, (b) break down events into smaller sub-events, and (c) open up the temporal constituent of an event and make judgments on duration, among other things. This paper will discuss the linguistic devices used by Creole-speaking Jamaican witnesses in the Jamaican courtroom to make these distinctions between events. The paper will also highlight the problems that can be encountered as a result of cross-linguistic interaction between Creole-speaking witness and English-speaking counsel when issues of time are to be considered for the courtroom. Finally, the paper will consider methods by which such miscommunication can be alleviated.

Georgia Frantzeskou, Efstathios Stamatatos and Stefanos Gritzalis
University of the Aegean, Greece

Source Code Authorship Analysis using N-grams

Source Code Authorship Analysis is the field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity, usually on the basis of other program samples from the same programmer. There are several cases where the application of such a method could be of major benefit:

* Tracing the source of code left in the system after a cyber attack.
* Authorship disputes, proof of authorship in court.

The methodology followed in all research efforts to date is divided into two steps. The first step is the extraction of software metrics; the second is using these metrics to develop models capable of discriminating between several authors, by means of Discriminant Analysis, Neural Networks and Case-Based Reasoning. Some features of this methodology are:

* The software metrics used are programming-language dependent.
* The modeling techniques used have provided prediction accuracy rates of up to 88%.

In this paper we present our innovative approach, which is an extension of a method originally implemented for text authorship identification. The accuracy of our results is at the level of the current state-of-the-art approaches, or higher in many cases. In our method, byte-level N-grams are utilised together with author profiles of restricted size, and a simple algorithm is presented for calculating similarity. We have carried out a number of experiments using byte-level N-grams, and the results have shown the following:

* Our methodology is programming-language independent.
* Prediction accuracy reached between 90% and 100%, depending on the size of the training sample.
* Comments do not play a significant role in classification accuracy, as demonstrated by an experiment with programs stripped of all comments.
* Programs can be classified correctly at rates of up to 99% using their comments alone.
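
The abstract does not spell out the profile construction or the similarity algorithm; as a hedged sketch only (the n-gram length, profile size and function names below are assumptions for illustration, not the authors' published method), a byte-level N-gram comparison might look like this:

```python
from collections import Counter

def profile(code: str, n: int = 4, size: int = 50) -> set:
    """An author profile: the `size` most frequent byte-level n-grams."""
    data = code.encode("utf-8")
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return {gram for gram, _ in grams.most_common(size)}

def similarity(p1: set, p2: set) -> int:
    """A simple similarity measure: the number of shared profile n-grams."""
    return len(p1 & p2)

def attribute(unknown: str, samples: dict) -> str:
    """Attribute `unknown` to the candidate whose profile overlaps most."""
    unk = profile(unknown)
    return max(samples, key=lambda author: similarity(unk, profile(samples[author])))
```

Because the n-grams are taken over raw bytes rather than language-specific tokens, a sketch like this is inherently programming-language independent, which is consistent with the independence the authors report.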

Isabel Gomez Diez
Universiteit Antwerpen (Belgium) and Universidad Pompeu Fabra (Spain)

The transformation of asylum seekers’ narratives through the asylum procedure

In the asylum process in Belgium, applicants must attend two interviews: a first in which they are screened and a second in which they are interviewed in depth. During the first interview an official must take contemporaneous notes in order to draw up a report. This report, as Blommaert (2000) notes, has the status of an affidavit; from this moment onwards, it is the source and reference text for all other steps in the procedure. This paper describes the variation between the interview and the written report and determines whether these transformations entail “alteration” (Olsson 1997) of the sense of the applicant’s discourse. The corpus studied for this paper is composed of six screening interviews which took place at the Immigration Office in Brussels during the autumn of 2004, together with the corresponding reports written by the interviewers. Most of these interviews were conducted in French, and most applicants came from African countries. In order to explain the differences between the interview and the report, the paper identifies linguistic features at the discursive, morpho-syntactic and lexical levels, which are explained in terms of linguistic and functional dimensions based mainly on existing models of variation between oral and written texts (Halliday 1989; Biber 1988; Chafe 1986; Tannen 1986; Jonsson and Linell 1991; Gibbons 2003), such as epistemic modality (dubitative-declarative), event perspective (depersonalization) or contextualization (explicit vs. situation-dependent reference). From this analysis it can be seen that, with regard to alteration, the interviewer does not systematically transcribe the questions he or she asks during the interview, because the official’s purpose is to produce a homogeneous text that integrates all the answers given by the interviewee. This implies a conversion from a dialogic mode to a monologic mode, and the blurring of source distinctions.
Nevertheless, it is possible to find linguistic evidence of the deletion of questions, such as negative clauses and an absence of inter-clausal cohesion. Moreover, transformations that can have an impact on a decision of inadmissibility must be taken into consideration. These include: A) omission of hesitation markers in the report, and substitution of polarity adverbs (yes/no, always/never) for adverbs of degree (pas tellement), all of which move the discourse from the field of fuzzy memory (hesitative mood) towards the field of certainty (declarative mood); B) inconsistency in the official’s use of reported speech structures; C) introduction of the terminology of established legal categories, which contributes to the depersonalisation of the story and therefore to the impression of generality associated with fraudulent stories.

Tim Grant
University of Leicester

Quantifying evidence in forensic authorship analysis

The judicial interest in 'scientific' evidence has driven interest in quantified results for forensic linguistic authorship analysis. Through a worked example this paper examines methodological issues which hinder the possibility of quantifying results in forensic authorship analysis work. The solution suggested to some of the obstacles is a sampling and testing strategy which helps identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that these markers are then retested for use in specific authorship problems. The approach for drawing quantified conclusions combines discriminant function analysis and Bayesian likelihood measures. The worked example starts with 20 comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to 15, 10, 5 and 3 texts per author. This example demonstrates how reducing data affects the way conclusions can be drawn. With greater numbers of reference texts, quantified and safe attributions are shown to be possible; but as the number of reference texts is reduced, the analysis shows how the conclusion can and should be reached that, with these texts and with this amount of data, no attribution is possible. The testing process at no point results in a misattribution. The method is evaluated in the light of evidentiary standards in UK and US jurisdictions.
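
As a purely illustrative sketch (the abstract names neither the authorship markers nor the exact computation, so the two-marker reference data and the simple Gaussian model below are invented for demonstration), combining per-author reference statistics with Bayesian posterior probabilities under flat priors could look like:

```python
import math
from statistics import mean, stdev

def gaussian_loglik(x: float, mu: float, sigma: float) -> float:
    """Log-density of marker value x under a normal(mu, sigma) author model."""
    sigma = max(sigma, 1e-6)  # guard against a zero-variance estimate
    return -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

def posteriors(reference: dict, questioned: list) -> dict:
    """Posterior probability of each candidate author, assuming flat priors.

    reference: {author: list of marker vectors, one per comparison text}
    questioned: marker vector measured on the disputed text
    """
    logs = {}
    for author, vectors in reference.items():
        columns = list(zip(*vectors))  # one column of values per marker
        logs[author] = sum(
            gaussian_loglik(q, mean(col), stdev(col))
            for q, col in zip(questioned, columns))
    top = max(logs.values())  # subtract the max for numerical stability
    weights = {a: math.exp(v - top) for a, v in logs.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}
```

With only a handful of comparison texts per author, the per-marker mean and standard deviation estimates in a sketch like this become unstable and the posteriors less trustworthy, which echoes the paper's point that a shrinking reference corpus should eventually force the analyst to withhold any attribution.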

Gillian Grebler
Linguistic and Cultural Forensics

“And then what did she say?” Making confession statements believable and compelling (even when they are false)

There are many elements in a legal confession, true or false, that give it credence and persuasive power. In the case of a false confession these work together to produce a “narrative truth” from an actual fiction. In the hands of police and prosecutors this fictional account becomes compelling legal evidence. With powerful persuasive effect on judge and jury, false confessions have often been the primary basis for wrongful convictions. Among the elements that police interrogators seek to include in suspects’ accounts are idiosyncratic details about the particular crime, as well as the thoughts and feelings of the suspect. These sensual and cognitive details make the story vivid and lend it verisimilitude. Also sought is talk: any verbal comment or conversation that the suspect remembers. This reported speech is important: as Randy Kandel says (“Why Evidence Scholars Should Study Conversation”, ICE Journal, 2002), reported speech gives “a strong appearance of veracity, specificity and accuracy of memory… that afford an adept user…an edge in convincing other participants, triers of fact, decision-makers and consensus builders…”. In this paper I look at a number of proven false confessions from the UK and USA, considering the discourse features, in particular reported speech, which made these confessions compelling, believable and, in many cases, the basis for wrongful convictions.

Mel Greenlee
California Appellate Project

“At that time my speaking doesn’t make any sense”: Interpreters and capital error

The United States Supreme Court has ruled that in trials in which a death sentence may be imposed, a heightened standard of due process should apply, with scrupulous attention to fundamental fairness. In the past decade, the exoneration of over one hundred Death Row prisoners in the U.S. has led to increased public doubt about whether these procedural safeguards are, in fact, effective. Accurate recording of trial testimony is one crucial component of a fair trial. California courts have generally engaged in the legal (and linguistic) fiction that when witnesses do not speak English, the interpreted English version, the only version actually recorded, accurately reflects the original. Similarly, courts engage in the legal fiction that jurors’ assessment of witnesses’ credibility is unaffected by interpretation, despite ample linguistic research to the contrary. (See, e.g., Berk-Seligson, The Bilingual Courtroom (2002).) On appeal, legal claims of error based on interpreters have had limited success. The present work tests and highlights these legal/linguistic fictions in a subset of California capital cases, examining interpreted testimony in the appellate record of trial proceedings. The examination reveals that even for such critical witnesses as defendants and surviving victims, interpreted testimony may be crucially marred by grammatical error, ambiguity and non sequiturs which impair its assessment by the jury and the reviewing court. Because the testimony is garbled, jurors may rely instead on recaps, repairs or summaries of testimony provided by counsel, despite instructions that arguments are not evidence. The paper concludes with a renewed call for complete and accurate recording of testimony in the original language and for collaborative efforts between linguists and legal scholars to ensure greater accountability and equity in capital litigation.

Mark Griffiths
Cardiff University

Towards the audiofit - non-linguists’ perceptions and articulations of unknown voices in a forensic context

Recordings of crimes in action rarely exist, and criminal investigations are often reliant on the accounts of eyewitnesses and earwitnesses. However, unlike the vast body of research that exists on eyewitness perceptions, and in contrast to the videofit technology that allows police to elicit and refine eyewitness descriptions, little if any research has been conducted into non-linguists’ perceptions of unknown voices in a forensic context. Without this research, we are a long way from developing an audiofit: an instrument to facilitate meaningful and reliable non-linguist accounts of the voices heard. Based on ongoing research conducted at Cardiff University using voice donors and respondents from South Wales and the South West, South East and Midlands areas of England, this paper exemplifies non-linguist perceptions of unknown voices in forensic contexts; specifically, the perceptual categories used by non-linguists to differentiate and contrast features of a voice and accent, and the vocabulary employed by non-linguists to articulate their perceptions. Furthermore, by examining interactions between voice, accent, crime type and attendant social stereotypes, this paper provides both warnings and encouragement to those wishing to conduct further research in this field.

Chris Heffer
Nottingham Trent University, UK

Jury Instruction as Communication Process

Clear and effective instruction of juries prior to deliberation is crucial to the validity of jury trial. One key aspect of that communication event is the extent to which the jury instructions themselves are comprehended by jurors. Consequently, most research and debate in both legal and social scientific contexts has focussed on the comprehensibility of orally-delivered written jury instructions. Since early research (particularly in the US) showed that the majority of jurors are not able to comprehend the majority of such jury instructions, subsequent research has tended to focus on rewriting the jury instructions themselves. This paper argues that we need to consider jury instruction as a communication process involving much more than simply the clarity of written instructions. Drawing on a number of frameworks from linguistics, psychology and communication science, this paper considers some of the central factors involved in the jury instruction process and suggests that focussing only on the clarity of pre-prepared ‘pattern’ jury instructions might be missing the mark. The discussion reveals a fundamental distinction in approach between seeing jury instructions as an inflexible discoursal product and seeing jury instruction as a variable performative process. This difference is highlighted by comparing Judge Ito’s delivery of the Californian standard of proof instruction in the OJ Simpson trial with a number of quite distinct performance variations on the English ‘specimen direction’ drawn from the author’s corpus of 100 judicial summings-up to the jury. The paper suggests that effective communication between legal professionals and lay jurors involves much more than a clear interpretation and presentation of legal concepts.

Blake Stephen Howald
University of Detroit Mercy School of Law

Comparative and Non-Comparative Forensic Linguistic Analysis Techniques: Methodologies for Negotiating the Interface of Linguistics and Evidentiary Jurisprudence in American Criminal Procedure

This paper surveys, from the point of view of the legal practitioner, the treatment of what I will refer to as Forensic Linguistic Analysis Techniques (FLATs) as evidence in American grand jury proceedings and criminal trials. FLATs generally fall into comparative (e.g. voice print analysis and author attribution) and non-comparative (e.g. authorship profiling and psycholinguistic threat analysis) categories. (For an overview of these techniques see e.g. Coulthard 1994; Gibbons 2003; Kniffka 1993, 1994, 1999; McMenamin 2002; Olsson 2004; Shuy 2002.) Unfortunately, due to the development of many FLATs by non-linguists, the stigma of “more art than science” has permeated many aspects of both comparative and non-comparative FLATs. Comparative FLATs, which seek to attribute a text or recording to a potential suspect, are most likely to be accepted in criminal proceedings where the rules of evidence apply and the burden of proof is highest (e.g. trial). Non-comparative FLATs, which analyze a text or recording in isolation, have minimal acceptance even at the level of the grand jury, where the rules of evidence do not apply. Despite comparative FLATs enjoying a higher rate of evidentiary admission due to statistically quantifiable comparisons, non-comparative FLATs are no less useful or investigatively relevant; quite the contrary. I will show that it is the non-comparative FLATs that enjoy the greatest possibility for admission in criminal trials, because there is more opportunity for the trained linguist to pull the FLAT into the realm of independent scientific evidence under the Daubert standard. From the standpoint of the legal practitioner, the main obstacle to the admission of both comparative and non-comparative FLATs is the lack of normalization and scientific backing from the field of linguistics itself.
Increased involvement by trained linguists will not only work to solidify the role of forensic linguistics in criminal procedure, but also work toward the increased admission of FLATs in criminal evidentiary proceedings. I will present an analysis of American case law that indicates a predictive trend of admissibility for FLATs. I demonstrate that this predictive trend provides guidance to the field of linguistics for the refinement of many existing techniques through scientific analysis by illustrating the evidentiary requirement necessary for admission. Furthermore, the predictive trend can be used as a tool for the legal practitioner faced with presenting FLATs in criminal evidentiary proceedings by illustrating the current state of accepted FLATs and where potential deficiencies exist that may create problems for admission.

Alison Johnson
University of Birmingham

"From where we are sat": Achieving a change of state in suspect and witness knowledge through evaluation in the police interview

Labov and Waletzky’s (1997) model of narrative has been applied to both oral and written narratives in a range of contexts. Edwards (1997), though, points out that they overlook the presence of evaluation in the things that "occasion" the narrative, looking only at narrative as monologue rather than as an interaction. In the police interview the narrative is produced in dyadic interaction within an institutional context where the key goal is to occasion a narrative as evidence, and this is done through questioning and elicitation sequences. This paper examines police interviews with suspects, looking at discourse markers and at the evaluative patterns and frameworks that reveal how evaluation is carried out in a range of question and response speech acts (Stenstrom, 1984), and it looks at the marking of stance by interviewers in relation to the evidential value of the elicited narrative. The analysis illustrates features of contested and collaborative evaluation, marked in turns that reveal concessive and adversative positions. It shows how discourse rules and roles develop in an evolving and "renewing" context (Heritage, 1984), with interviewers and interviewees changing their stance and altering their footing, moving from interrogator to therapist and from cooperation to non-cooperation and back again. I draw on pragmatic principles from conversation analysis of institutional interaction (Drew & Heritage, 1992), from forensic discourse analysis and from appraisal theory, particularly Martin (1993), Hunston and Sinclair, and Hunston and Thompson (1993). Conclusions point to the function of discourse markers and evaluative frames as important features of interviewer activity. It is suggested that these function to achieve a change of state in suspect and witness knowledge through the interview; in the case of suspects, they also have a role in the elicitation of a confession. The paper thus sheds some light on the marking of narrative evaluation in stories that are elicited rather than performed, and told for legal rather than social purposes.

Cecilia Joseph
University of Malaya and Cardiff University

Listening to manage resistance to talk: Balancing neutrality and affiliation in forensic interviews with children

This study seeks to identify the types of resistance to talk that children display in forensic police interviews and to examine how police officers orient their responses towards these forms of resistance. The data is taken from 11 police interviews with children of ages ranging from 4 to 8 years old. The analysis demonstrates three types of resistance: 1) silent withdrawals; 2) utterances disclaiming knowledge, such as ‘I dunno’ and ‘I don’t remember’; and 3) utterances displaying non-cooperation, such as ‘Nothing happened’ and ‘No’. The police interviewers’ responses to these forms of resistance suggest that they seek to negotiate the delicate balance between treating them as cognitively salient responses and as non-cognitive avoidance strategies. This has further implications for the role of the police interviewer in terms of reconciling the dual obligations of maintaining neutrality, as dictated by the norms of the interview protocols, and displaying affiliation to encourage further talk from the children and pursue the prescribed agenda of the interview. The implications of these tensions for the efficacy of listening to the child in the context of forensic interviews will be discussed.

Silke Kirschner
Serious Crime Analysis Section, National Crime and Operations Faculty

Behavioural consistency? Linguistic concepts of variation and the analysis of speech in case linkage

The Serious Crime Analysis Section (SCAS) is a service for police forces in the UK (and Northern Ireland), assisting them in the detection of serious crimes (murder, rape and abduction). The main aspect of the unit’s work is to collect all related case papers, to input relevant data onto the SCAS database (ViCLAS), to analyse the offence and search ViCLAS for similar cases. The analytical process is based on the “consistency premise” with the following main pillars:

- Individuals are not random in their behaviour.
- The degree of variation within one individual’s actions is less than the range of variation across a number of individuals.
- The consistent actions of an individual can discriminate him/her from others.

Analysing the conversational interaction between rapists and victims is one part of the crime analysis. So far, the classification systems used within SCAS for analysing speech have not been based on linguistic concepts. However, if we consider language as one element that constitutes human behaviour, it is worthwhile discussing what causes variation and consistency from a linguistic perspective. Could concepts such as idiolect, register and genre make a contribution to comparative case analysis and case linkage? Lastly, the paper examines what “realistic” prospects there are of introducing those linguistic concepts into “everyday” case analysis.

Hannes Kniffka
Bonn University

Orthographic Data in forensic linguistic authorship analysis

The role and status of orthographic data (OD) in forensic linguistic anonymous authorship analysis is a much debated issue in a much debated field of forensic linguistics. Several methodological, theoretical and, even more so, practical-analytical questions have not been dealt with satisfactorily. Some have not been given adequate answers to date; others have not even been stated properly as questions. Yet others have not attracted the “general linguist’s” attention, though the practitioner’s concerns and interests have been articulated thoroughly (and vice versa: practitioners have not properly taken “general linguistic” data into account). There is obviously some kind of miscommunication going on, at times, between the two; since some people work as both, it may be miscommunication with the “alter ego”. This paper elaborates on the methodological and theoretical status of orthographic data by giving heuristic taxonomies of classes of OD from systemic grammatical, sociolinguistic and text linguistic perspectives, focussing on the “diagnostic potential” that may or may not be ascribed to OD in the context of authorship analysis. All data (items, features, analyses) presented are taken from real-life forensic cases, dating from 1974 to the present, in which the author was asked to give expert testimony in/for German courts and other authorities. The aim of the paper is to illustrate the complex status of OD by an extensional definition of the phenomena, the introduction of some additional necessary distinctions, and a clarification of the position of OD in the total argumentation context of a forensic linguistic authorship analysis.

Krzysztof Kredens
University of Lodz

Lexicography and the ownership of language – a case study

In forensic linguistic discussions of plagiarism the focus is primarily on issues of detection. Whether, or to what extent, a particular instance of replication constitutes copyright infringement is a legal concern. However, linguistic and legal aspects of plagiarism can converge in complex ways, to the effect that the actual determination of the legal status of a text by triers of fact may not be possible without a more significant linguistic contribution. This paper reports on such a contribution in a real case and, drawing on its implications, re-examines the issue of language ownership. A recent case heard in a Polish civil court involved two software packages, each containing a Polish-English and English-Polish electronic dictionary. Approached by the applicant, I gave an opinion indicating that the dictionaries shared over 70 per cent of all entries and that it was highly unlikely that they had been created independently. In this paper, I will present the methodology that led to my conclusions and identify implications for the discussion of plagiarism, authorship and the ownership of language. I will also make reference to a long-running, high-profile legal battle involving the publishers of two monolingual dictionaries of Polish, in which the respondent, apart from arguing that the Polish language belongs to the whole nation, claims that dictionaries are alike by their very nature and thus preclude lexicographic originality. Obviously fallacious for monolingual dictionary-making, this argument cannot be discounted in the case of bilingual dictionaries, where a significant proportion of entries for the same lexical items are bound to be identical.

Dennis Kurzon
University of Haifa, Israel

Interpretation Without Linguistics: The case of Newdow

It may be expected, especially among linguists, that when a court – the judges and lawyers – deals with the interpretation of the language of statutes, constitutions and other legal documents, some reference will be made to linguistic phenomena, although not necessarily in terms linguists would use. In the case under examination, which became a cause célèbre in the United States from 2002 until the Supreme Court gave its judgment in 2004, Michael Newdow, an avowed atheist, claimed that the Pledge of Allegiance with the addition of the phrase "one Nation under God", which his daughter had to say every day at school, is unconstitutional, contrary to the First Amendment's prohibition of the establishment of religion. From a reading of the judgment of the State Appeal Court (Ninth Circuit), the documents presented to the Supreme Court and its own final judgment, it may be seen that issues such as the speech act status of prayer and logical and pragmatic presuppositions were not addressed, with one exception: in the State Appeal Court, the judge giving the majority opinion refers once to Austin's speech act theory in relation to the Pledge. For example, the judges assumed that the Pledge is not a prayer, without showing why this is so. The courts focused on legal issues such as prudential standing and coercion, as well as precedents. The question that may be asked is whether communication between lawyers and linguists, in this particular case and in other cases centering on interpretation and on forensic linguistic evidence, is unidirectional – from the linguist to the lawyer – with poor reception at the lawyer's end.

Wojciech Kwarcinski

Adam Mickiewicz University, Poznan, Poland

Some parallels between pragmalinguistic and legal models of interpretation

This paper examines some of the pragmalinguistic aspects of the multi-level method of statutory interpretation that has been proposed recently in legal theory. It attempts to demonstrate that although the multi-level analysis of legal texts originates from jurisprudence, the principles on which it is based are homologous in many important respects to the central concepts of speech act theory that relate to utterance and text understanding. First, the derivation of the propositional content of utterances in speech act theory is remarkably similar to the procedure followed at the descriptive level of statutory interpretation. Second, given that all legal speech acts are intended to change the legal situation, they should be regarded as having the primary illocutionary force of a declaration. Since such utterances are conventional acts of a higher order, their function is determined mainly by the institutional context of the legal speech event in which they are performed. These contextual factors relevant to the production and interpretation of legal speech acts can be captured in terms of constitutive rules which closely correspond to the maxims of statutory interpretation postulated at the directive level. And finally, the notion of presupposition formation is employed not only in pragmatic accounts of “ordinary” language but also in the final stage of the legal interpretation of statutes considered here.
The study shows that sufficient compatibility exists between the two frameworks to suggest that the process of understanding of legal texts displays some distinctive features that must be explored in any valid analysis of statutory interpretation. It also presents evidence that, as far as institutional speech acts are concerned, the insights provided by the pragmalinguistic inferential model of derivation of the content and function of utterances are independently confirmed by legal theory.

Nicola Langton
Cardiff University

Cleaning up the Act

A common critique of common law language is that it is not straightforward at its best and is incomprehensible at its worst. Some argue that the style of legal writing is necessary if the document is to successfully set out and define the duties, obligations and rights of the parties involved; provide necessary qualifications to limit the scope of these duties, etc.; be clear, precise, unambiguous and all-inclusive; and achieve its intended communicative purpose. Is this really so? The purpose of this paper is to show that, in some instances, there is no significant loss of purpose or scope (or even kudos) if legal texts are written in a different way. Indeed, if the text genre and communicative purposes are clearly understood, many of the problems of unusual sentence/grammar structures, information overload and unwieldy qualifications can be clarified, and the legal text ‘reformulated’ (my term) in ways that aid not only legal practitioners, researchers and translators, but also ordinary members of the public. This paper will therefore illustrate ways in which legislative texts and contracts can be analysed and unpacked to reveal the cognitive structure and communicative purposes that underpin their construction (Bhatia 1993). Methods for easifying (Bhatia 1983) and reformulating the texts in ways that adhere to the principles of Plain English (Asprey 1991; Kimble 1992) (and plain grammar), while respecting not only the cognitive and communicative structures but also legislative drafting norms (Crystal & Davy 1969; Fung & Watson-Brown 1994), will also be illustrated for discussion and feedback.


Asprey, M.M. (1991) Plain Language for Lawyers. Federation Press, pp. 157-162.
Bhatia, V.K. (1983) Simplification v Easification: the case of legal texts. Applied Linguistics 4(1), pp. 42-54.
Bhatia, V.K. (1993) Analysing Genre: Language Use in Professional Settings. London: Longman.
Crystal, D. & Davy, D. (1969) Investigating English Style. London: Longman.
Danet, B. (1980) Language in the Legal Process. Law and Society Review 14(3) (Spring 1980), pp. 445-564.
Fung, S. & Watson-Brown, A. (1994) The Template: A Guide for the Analysis of Complex Legislation, pp. 34-37, 69-71.
Kimble, J. (1992) Plain English: A Charter for Clear Writing. Thomas M. Cooley Law Review 9(1), pp. 11-14.
Mellinkoff, D. (1963) The Language of the Law. Boston: Little, Brown.

Ruth Lanouette
Lawrence University, USA

An Examination of a Pro Se Defense

The focus of this paper is a close examination of a pro se defense in a murder trial. During the trial, which lasted five days, the defendant chose to represent himself for the first three days. For the final two days, a court-appointed attorney took over the defense after the defendant informed the judge that he felt unable to continue his own defense. In this study, I examine several aspects of the trial, including the types of questions posed by the defendant, the number and type of objections raised by the district attorney, and the interaction between the defendant-attorney and witnesses. Because all of the witnesses for the defense testified during the final two days of the trial, there are no examples of direct examination by the defendant. Following a general overview of the structure of the trial, and the ways in which it differs from other trials I have examined, I will focus on aspects of the defendant's approach which seem particularly problematic. These include the defendant's tendency to repeat the same question over and over while receiving the same answer from the witness, his sudden shifts of topic, his frequent inability to formulate comprehensible questions, his obvious show of frustration when a witness would not see things his way, and, most importantly, his failure to respond to the case the district attorney was building against him. The defendant's sometimes chaotic, and sometimes overly focused, defense stands in stark contrast to the district attorney's carefully planned, well-structured, and logical approach.

Weiming Liu
Northwest University of Political Science & Law, Xi'an, Shaanxi, P. R. China

Illocutionary and Perlocutionary Acts in Chinese Judge’s Attached Discourse

From 2002 to 2003, more and more courts in China introduced judicial reforms, one of which aims to improve trial language. In these courts, judges attach what they would like to say about, or comment on, the cases concerned to the end of court verdicts in written form. This is the judge's attached discourse, which is similar to, but not exactly the same as, obiter dictum. Since Shanghai No. 2 Intermediate People's Court took this special measure in 1998, doubts and strong opposition have appeared, and the two quite different positions have aroused heated academic debate. This paper investigates and analyses the judge's attached discourse in terms of the theory of illocutionary and perlocutionary acts, and tries to establish its feasibility and reasonableness, as well as its limitations. By analysing three civil cases through linguistic theories, the paper shows that the judge's attached discourse really works and is genuinely convincing. For this reason it can, to some extent, be called a "tender court decision", and we can conclude that the judge's attached discourse is feasible and reasonable. Finally, the paper discusses three limitations and problems of the judge's attached discourse. Firstly, a perlocutionary act has no necessary connection with an illocutionary act, because many factors affect the outcome. Secondly, this first problem raises a second: the expressions of the attached discourse. These expressions should be deeply studied and carefully chosen, so as to carry strong illocutionary force and to move the litigants; without proper expressions, the judge's attached discourse will not achieve the expected perlocutionary act. Thirdly, the judge's attached discourse is mainly applicable to such cases as support for parents and grandparents, the upbringing of children, succession, divorce and the like.

Carol Morgan

“What happens” in child forensic interviewing: Increasing event detail by improving narrative elicitation.

Problems in child forensic interviews are usually of two types: quality (answers tainted by poor memory or suggestive questioning) and quantity (answers lacking detail). Research aiming to address these problems has often focused on improving quality, in particular by avoiding specific and potentially suggestive questioning. But such a focus, while important, is problematic. Most practitioners know that overly specific questions are less than ideal; what to replace them with is less obvious. Attempts to train interviewers to avoid specific questions often involve grammatical terms and categories that are difficult to learn and awkward to put into practice. In addition, the 'general' questions that are suggested to replace specific ones often fail unexpectedly. For these reasons, efforts to improve interviews should focus on identifying the problems causing reduced quantity of detail in child responses. Even if suggestive questioning is eliminated, inadequate detail will remain and perhaps even worsen, but focusing on quantity in addition to quality will improve both. If the level of detail is improved, benefits will include eliminating the need for the specific questions that often are the last resort in interviewing a reticent child, and for the general questions that often are unsuccessful. One main component of the strategy to improve quantity is to focus on enhancing the elicitation of narratives, which can give the greatest amount of coherent and contextual detail without the pitfalls of specific questions. This presentation will describe and explain existing problems with the approach of teaching interviewers to use general questions, and will highlight the conversation and discourse features of a more successful interview that concentrates on improving narratives.
Specific strategies, including improvements to the rapport section of the interview, better narrative prompting, and practice and modeling of desired responses, suggest the more promising possibilities of this type of focus.

Luciana Romano Morilas
UNESP Araraquara, São Paulo, Brazil

Interaction and power in written Brazilian forensic texts

In the Brazilian forensic context, written interaction is the main source from which judges learn which problems the parties want solved; in some proceedings there may be no oral interaction at all. We therefore analyse, in written texts, the strategies lawyers and judges use to convince their addressees and to display power. In Brazilian civil lawsuits the parties are represented by their lawyers, which means that their own voices do not appear in the proceedings, only what the lawyers wrote on the basis of what was said to them. Thus only the lawyer's speech is known, and his or her power is clear in the written interaction. The judge decides according to what he or she reads in the petitions and according to the proofs brought into the proceedings, which may be written proofs only. Oral interaction occurs only if the parties have witnesses they want heard, but this will not be part of the present analysis. By analysing a case in which there is no proof other than written arguments, we can identify which attributes are necessary to display power and to reach the main objective: convincing the judge that one's client is in the right. These attributes are: 1) establishing positive contact (creating a positive impression); 2) using logic; 3) using rhetorical arguments; 4) using interdiscursivity; 5) using specific persuasive syntactic-semantic constructions; and 6) organising a clear, objective and comprehensible text. These are the factors lawyers must attend to when building their texts so that they may convince the judge that they should win the lawsuit. This work is part of the results of a doctoral thesis presented at the end of 2003 at UNESP, a state university in Brazil.

Ogone John Obiero
Maseno University, Kenya

Injustice in Discourse of Cross-Examination

In the legal domain, practitioners (judges, juries, lawyers, probation personnel) all execute their duties in line with certain institutionalized relationships already formulated and existing in the judicial system. This arrangement necessarily leads to an unequal encounter between those in the legal arena and those outside it, the accused and/or the cross-examined, such that rights to control such encounters are attributed to the practitioners in legal discourse. The effect that this unequal positioning, embedded in the conventions of legal discourse, has on justice is the subject of this investigation. I therefore ask: with legal practitioners controlling and constraining the contribution of the cross-examined, is it possible to find justice in the courts? This paper examines the control of text and talk in the genre of cross-examination from the perspective of one of Kenya's recent commissions of inquiry, the Goldenberg Commission of Inquiry.


Farinde Raifu Olanrewaju
University of Wales, Bangor

Power and Asymmetries in the Nigerian Courtroom System

The focus of this study is the investigation of power and asymmetry in the Nigerian courtroom system. It has already been argued that there are power asymmetries in institutional discourse (see Adelsward et al. 1978, 1988; Agar 1985; Atkinson and Drew 1979; Danet 1984; Harris 1985). Courtroom discourse is essentially power-laden, because legal professionals are vested with institutional authority and knowledge. I am motivated to study power and asymmetry in the Nigerian courtroom system because in a non-native English-speaking context such as Nigeria, power asymmetries are very pronounced. The study reveals, among other things, that power asymmetries in the Nigerian courtroom rest on the lawyers' and barristers' control of witnesses and accused persons through the types of questions they ask. The study further reveals considerable manipulation of questioning forms by barristers, which enables them to control the testimony of witnesses and accused persons. For example, lawyers ask many leading questions, such as declarative questions, tag questions and yes/no questions, which steer the witness or accused person toward the required answer.

Alma Ortiz
CELE-UNAM-MÉXICO Foreign Language Learning Centre – National Autonomous University of México.

Testing the Reading Ability of University Law Students in Mexico.

The Testing Department of the Foreign Language Learning Centre at the National Autonomous University of Mexico is responsible for certifying the reading ability of Law students (nearly 1000 per year) as a graduation requirement. The challenges in creating a valid and reliable bank of exams for undergraduates and graduates lie in selecting the texts and designing the exams with appropriate techniques. Particular effort has gone into the selection of appropriate texts, mainly because these students have no formal training in reading English: neither their teachers nor their courses require them to read texts in English, yet they still must meet a graduation requirement in reading comprehension. The text designers' criteria rely on text accessibility, topic and readability. One of the text designers has a B.A. in Law and therefore acts as the law consultant for the rest. The Flesch formula is used as a simple first step to measure the texts' readability; a second step is the comparison of a sample of the text against Nation's academic list of university vocabulary. Initial results from these two measures have indicated the level of readability of possible exams for Law students at different levels of study (undergraduate, graduate and postgraduate). The most popular assessment technique has been multiple choice; last year, cloze was introduced as a testing and readability technique for this population. Cloze provoked a very adverse reaction, and the Department decided not to use it again with this population, although Science and Psychology students have been tested with cloze for a long time without any negative reaction. The paper will share findings from ongoing research on text readability and its implications for the design of language requirement exams.
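The Flesch formula mentioned above combines average sentence length and average syllables per word. The following is a minimal sketch only; the vowel-group syllable counter is a crude heuristic, not the procedure the Testing Department actually uses:

```python
import re

def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores indicate more readable text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def estimate_syllables(word):
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def score_text(text):
    """Apply the formula to raw text with naive sentence/word splitting."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = text.split()
    syllables = sum(estimate_syllables(w) for w in words)
    return flesch_reading_ease(len(words), sentences, syllables)
```

On this scale, a 100-word, 5-sentence, 130-syllable passage scores roughly 76, toward the "fairly easy" end; dense legal prose typically scores far lower.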

María Ángeles Orts
Universidad de Murcia

Business, company or corporation? The hidden dimension to contemplate when translating Company Law into Spanish.

Like the languages of many other social activities, the language of the law is widely considered culturally bound. In line with this assertion, and starting from Edward T. Hall's assumptions regarding the cultural dimension as the "hidden dimension" of communication, this study tries to highlight the fact that legal culture vastly influences legal interaction and, hence, varies from system to system. Accordingly, the attempt to translate Company Law from Legal English (either British or American) into Legal Spanish, taking cultural variance into consideration, involves a great deal of complexity as far as terminological equivalence is concerned. Such complexity is regarded in the present paper as resulting from the culturally different ways in which the corporate world is viewed, from the points of view of both the Common Law and the Spanish Continental systems. Through a detailed analysis of the peculiarities of corporate legislation in these different systems, a discussion on the possibility of terminological equivalence is established, and some solutions for translation provided. From sole proprietorships to joint corporations, through partnerships and private limited companies, explanations will be supplied as to membership, liability, possibility of collapse and other specific legal requirements for each kind of business structure, considering the similarities and differences found in the systems mentioned above. In addition, the clarifications and results supplied will try to raise awareness of the pragmatic side of legal translation as a basic starting point for the translator's task.

Viktoria Papp
Rice University, Houston TX

Perception of similar voices and the similarity criterion in voice lineups

Several factors may render a voice lineup (voice parade) biased and therefore inadmissible as evidence. This presentation focuses on one of these factors: the similarity between the voice quality of the suspect and that of the foils used in the identity parade. The goal of the experiments was to measure perceived voice similarity with the help of pair-wise comparisons. The perceptual results of the four subject groups (foreign vs. native, lay vs. phonetically trained) were compared against objective measurements of the main acoustic correlate of pitch, the fundamental frequency. In analysing the confusion matrices, five patterns were discussed for all subject groups: 1) accuracy in recognizing same-speaker pairs; 2) false positives; 3) false negatives; 4) “runners-up” (the speaker(s) with whom a speaker is most frequently confused); and 5) the confidence of the decisions. It is suggested that this method of correlating acoustic and perceptual data on pitch, together with other perceptually based measurements, can be used both in preparing a voice identity parade and in analysing the outcome thereof. The measured accuracy and confidence in the two experiments were used to evaluate the admissibility of speaker voice recognition by foreign vs. native, and lay vs. phonetically trained, witnesses. The results showed that when subjects are given less information to work with, the recognition accuracy rate indeed drops sharply. In these situations the “confusability relation” (caused by altered proximities in the acoustic-perceptual space) changed between speakers, but not uniformly: the speakers suffering more misidentification patterned consistently differently for native and non-native speakers of English, and also for lay and trained witnesses. With the decline of recognition accuracy, the rates of false positives and false negatives both increased significantly in all groups, but the pattern of increase differed from group to group.
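The acoustic side of such a comparison can be sketched very simply. Assuming each speaker's recording has been reduced to an F0 track (Hz per frame, zero marking unvoiced frames), pairwise mean-F0 distances could be computed as below. The function names and the reduction to mean F0 alone are illustrative assumptions; the study's actual measurements are richer than this:

```python
def mean_f0(track):
    """Mean fundamental frequency (Hz) over voiced frames; zeros mark unvoiced frames."""
    voiced = [f for f in track if f > 0]
    return sum(voiced) / len(voiced)

def f0_distance_matrix(speakers):
    """Pairwise absolute differences in mean F0 between all speakers."""
    means = {name: mean_f0(track) for name, track in speakers.items()}
    names = sorted(speakers)
    return {(a, b): abs(means[a] - means[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

def closest_foils(suspect, speakers, n=2):
    """Rank candidate foils by proximity of mean F0 to the suspect's."""
    target = mean_f0(speakers[suspect])
    ranked = sorted((abs(mean_f0(t) - target), name)
                    for name, t in speakers.items() if name != suspect)
    return [name for _, name in ranked[:n]]
```

A matrix of this kind gives the lineup compiler an objective anchor against which the perceptual similarity judgements can be checked.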

Amy Pi-Chan Hu
National Cheng-chi University, Taipei, Taiwan

On the Precision of Pronouns: The Mountain Is 'Ours'

Pronouns are semantically degenerate: they do not contain sufficient information to identify the individual to whom they are intended to refer, and their uncertain nature commonly produces errors in interpretation. Our understanding of a pronoun is affected by three separate factors: the pronoun's case, number and gender; the syntactic structure within which the pronoun is placed; and the context in which it is used. Unfortunately, the language and the context do not always specify which antecedent is meant. Every specialized field develops its own technical jargon, so command of that discourse is vital to avoiding misunderstandings between fields. In ROC v. Wang, decided in 2004 by the Chia-yi District Court, the court quoted the record of the accused saying during his interrogation, "The mountain is ours, so the honey is ours" (translated from Tsou, an aboriginal language of Taiwan, into Chinese), to signify the accused's intention to rob the victim, and found the accused guilty as charged. Wang, an aboriginal chief of the Tsou tribe, has a limited knowledge of Chinese. This paper will focus on the interpretation of the pronouns "we" and "our(s)" in Tsou and Chinese, the nature of the language deficiency of the aboriginal people, and the availability of trained interpreters during trials. From a linguistic viewpoint, every word has its own mental space in the individual's mind. This paper will reconstruct these two words in order to determine what caused the different interpretations of the accused and the judges. Despite the emphasis on precision in law, it is undeniable that language remains vague in its own domain. Moreover, aboriginals frequently communicate in ways that are unfamiliar to people of non-aboriginal descent, which can lead to misunderstandings in a courtroom.

Richard Powell
Nihon Unversity, Tokyo

Motivations for codeswitching in Malaysian courtrooms

Malaysia's legal system is relatively unusual in formally assigning roles to two languages: Malay and English. While Malay is the de jure medium of court submissions and proceedings, English is readily admissible without interpretation at the discretion of the court, and is the de facto medium of a minority of lower court cases and a majority of interactions in the higher courts. This overall pattern is complicated by endemic Malay-English codeswitching, again with the former tending to predominate in the lower courts and the latter in the upper. Various types and degrees of codeswitching may be identified in Malaysian courtrooms, from embedding lexical items or cultural idioms from one language into discourse primarily in the other, through altering the medium according to the perceived formality of a speech-act, to shifting languages when switching interlocutors and also maintaining extended mixed-code discourse with the same interlocutor. Whilst shifting education policies have certainly produced uneven linguistic competence across generations, this paper, based on observation of criminal and civil proceedings in both lower and higher courts, is more concerned with strategic explanations for courtroom codeswitching. One question to be addressed is whether microlinguistic considerations such as convergence and divergence among lawyers and between lawyers and witnesses are as influential as macrolinguistic factors such as national language policy and professional training. Another question is whether Malaysian courtrooms bear out claims that codeswitching in Malaysia, unlike in most multilingual societies, is prevalent not only in informal but also in formal registers. It will be suggested here that the lesser-used language (i.e. English in the lower courts and Malay in the higher) frequently becomes a resource for establishing intimacy and informality among interlocutors.

Judith Rochecouste & Rhonda Oliver
Monash University and Edith Cowan University, Australia

Evidential Strategies Used By Expert Witnesses

Evidential devices in academic and similar professional discourses express attitudes to knowledge and are commonly used to moderate or hedge statements in anticipation of criticism by peers. In the courtroom, however, different criteria apply: more unequivocal or definitive expression of knowledge is required. This places very different demands on the language use of expert witnesses from academic professions. While some hedging devices are accepted from expert witnesses to express opinion and hypothesis, in a controversial Australian trial numerous other strategies were used by expert witnesses which moderated the reliability of the expert evidence presented. This paper demonstrates the range of linguistic strategies which expert witnesses used to appear to conform to the discourse requirements of the courtroom; to avoid categorical claims through hedging, qualification, etc.; to engender favour with the presiding judge; and to reinforce power relations and responsibilities between the expert witnesses themselves and counsel. In this trial, expert witnesses for the defence exhibited many of these strategies. Expert witnesses appearing for the prosecution, on the other hand, used more unequivocal statements, even though these witnesses were unable to interview the accused. While hedging by witnesses will not be new to the legal fraternity, an understanding of the range and complexity of these strategies can alert counsel to further ways to counter evidence against their respective cases. The study also has implications for the training of witnesses and for advising juries when summing up, in that it expands consideration of 'what is said' to include 'how it is said' and what impact this has on the reliability of the evidence.

Frances Rock
Roehampton University

Writing rights right or writing them off? The utility of written rights information in police custody

In England and Wales, people arrested by the police have five main rights which are explained through speech, to some extent, but predominantly through writing. This paper explores a recent attempt to revise the system for administering rights in Anglo-Welsh police stations. The paper begins by sketching the background of the rights administration system. It then explores lexical, syntactic and discoursal aspects of current and previous rights texts, illustrating the degree to which those texts use or avoid such features as jargon, grammatical metaphor and subordination, and their ways of dealing with such characteristics as intertextuality. Having provided a linguistic snapshot of written rights texts in this way, the paper then examines the place of rights administration in custody, using interviews with police officers and detainees and ethnographic observation of police stations. This more sociolinguistic section of the paper supports the assertion that rights texts cannot be considered in isolation from their uses and users, and illustrates this by demonstrating whether and how the texts are appropriated. This section considers how many detainees read, or even receive, written rights texts, and how rights are communicated, both inside and outside detention, if not through the prescribed written texts. It also considers pragmatic aspects of rights administration, presenting evidence that responding to rights texts is about more than just invoking or waiving rights. The paper concludes by using the work presented to review current Government policy on rights administration.

Robert Rodman, Erik Eriksson and Robert Hubal
North Carolina State University, USA, Umeå University, Sweden and RTI International, North Carolina, USA

Deducing emotions from speech: Forensic implications

Computational forensic linguistics implies the use of computing systems to analyze language and speech for forensic purposes. For example, speaker identification and authorship determination are two interest areas of this broad field. Another interest area is the automatic detection of the emotional state of a person based on visual and aural clues. Here, we focus specifically on emotion detection based on vocal affect during speech. Automatic emotion detection from speech brings together two seemingly disparate areas: the psychology of emotions and speech signal processing. The former concerns itself with the taxonomy of human emotions and their interaction. The latter, as it applies here, concerns itself with the acoustic cues of any vocal affect that accompanies a particular emotion, and is therefore one means of detecting the presence of that emotion. In this paper we review and summarize the copious amount of research that has taken place with regard to defining emotions and scales along which to measure them, and with regard to defining acoustic cues and correlating them to emotional states with particular degrees of certainty. Throughout, our focus is on forensic implications. We also scrutinize present-day databases being used for emotion detection studies, and address the question of what characteristics should be sought for such a database. For example, is it effective to use actors to create emotion data? Should emotion data be collected and processed with a certain application area in mind? We also address the question of speaker and cultural dependencies that may be encountered in automatic emotion detection. No computer system exists today that takes speech as input and unequivocally produces the emotional state of the speaker as output. But work is progressing toward such a goal. We speculate, and invite conference participants to join us, as to where in forensic processes such a computer system would prove useful.
Some examples are jury selection, witness interrogation, and courtroom speech such as summation statements.

Terry Royce
Teachers College, Columbia University (Tokyo, Japan)

The Negotiator and the Bomber: an interactive analysis of active listening in crisis negotiations

McMains and Mullins (2001) suggest that one of the most important skills a police negotiator needs in order to settle a crisis is considerable expertise in "discussing or conferring", and that integral to this is the ability to hear what the other person is saying. In other words, the negotiator needs to be proficient at Active Listening. Active listening as an interpersonal skill is taught and utilised across a wide range of communicational contexts, including dispute resolution and mediation; self-help and parenting advice; journalism; sales and marketing techniques; marital, religious and educational counselling; and phone-in suicide prevention. Its specific definition varies with the ways it is approached in these various contexts, but generally it deals with listening constructively, with a focus on understanding the other person's feelings, or empathising with them to build rapport. This paper extends Hammer and Rogan's (in Rogan, Hammer, & Van Zandt, 1997) communication-based, interactive model of crisis negotiation by examining the role of active listening by a police negotiator in New South Wales, Australia, in the serving of a "high-risk warrant" on an armed and dangerous perpetrator who is expected to resist. Through an analysis of the interaction between the perpetrator and the negotiator, this paper demonstrates that the use of active listening in the early stages of the negotiation sets the foundation for the resolution of the crisis.

McMains, M.J. and Mullins, W.C. (2001) Crisis Negotiations: Managing Critical Incidents and Hostage Situations in Law Enforcement and Corrections. 2nd ed. Cincinnati: Anderson Publishing.
Rogan, R.G., Hammer, M.R. and Van Zandt, C. (1997) Dynamic Processes of Crisis Negotiation. Westport, Connecticut: Praeger.
Royce, T. (forthcoming) "The Negotiator and the Bomber: an interactive analysis of the critical role of active listening in crisis negotiations". Negotiation Journal 21(1), January 2005.

Marta Sanchez, Jaume Llopis and M. Teresa Turell
Institut Universitari de Lingüística Aplicada, Universitat Pompeu Fabra, Barcelona

Intra- and Inter-author Comparisons: Are Function Words Really Functional in Stylometric Studies of Authorship Attribution?

Stylometric analyses of function words for forensic linguistic purposes have not been very productive. In our view this is because such analyses have been either purely qualitative or exclusively quantitative. The purpose of this paper is to apply different approaches to the data, only qualitative, only quantitative, or a combination of the two, in order to establish which best shows that function words are highly idiosyncratic traits of an author's style, in terms of both frequency and use, and can be used as markers of authorship in intra- and inter-author comparisons. The hypothesis is that if type-token ratios already provide significant results as to the differences in use of lexical words between authors, an analysis of the differences shown by function words would help to narrow this inter-author dimension and thus be relevant in stylometric studies of authorship attribution. The corpus consisted of 120 newspaper articles written in Spanish by 6 different authors (20 articles each), covering different varieties of Spanish in order to avoid reducing the study to peninsular Spanish. Once the texts had been processed with SCP and Wordsmith, the density of function words vis-à-vis lexical words in each text was calculated and a list of the five most frequent words was compiled. An index of similarity drawn from genetics was then developed, by calculating the similarity values of each text in relation to all the other texts, in order to establish whether the similarity index was closer between texts written by the same author (intra-author) than between texts written by different authors (inter-author).
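The abstract does not specify the genetics-derived similarity index, so the following is only a generic sketch of pairwise comparison on function-word profiles, using cosine similarity and a toy function-word list as stand-ins for the study's own measures:

```python
from collections import Counter
from math import sqrt

# Toy Spanish function-word list for illustration; the study derives its
# frequency lists from the 120-article corpus itself.
FUNCTION_WORDS = ["el", "la", "de", "que", "y", "en", "a", "los", "se", "un"]

def profile(text):
    """Relative frequency of each function word in a text."""
    tokens = text.lower().split()
    counts = Counter(t for t in tokens if t in FUNCTION_WORDS)
    return [counts[w] / len(tokens) for w in FUNCTION_WORDS]

def similarity(p, q):
    """Cosine similarity between two function-word profiles (0 to 1)."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0
```

Computed over all text pairs, such an index yields exactly the intra- versus inter-author comparison the hypothesis calls for: same-author pairs should cluster at higher similarity values than different-author pairs.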

Roger Shuy

Georgetown University

When All Else Fails, Be Ambiguous: A Prosecution Strategy in the International Criminal Tribunal for Yugoslavia

Ambiguity is one of the favorite conversation strategies used by law enforcement to create the illusion of criminality. When interrogators are deceptive or ambiguous in the way they seek information from suspects, the chance of getting a conviction greatly increases. One of the defendants accused of genocide in the International Criminal Tribunal for Yugoslavia was difficult for the prosecutors to crack. After failing to get from him anything inculpatory, they resorted to using four ambiguity strategies in their questioning, apparently hoping that Major Dragan Jokic would stumble and give the appearance of involvement. This paper describes how the prosecutors conducting the interviews used these four ambiguity strategies:

1. confusing what the major "knew" with what he merely "heard about" after the fact;
2. confusing his involvement in "passing along an order" to send and use digging machines to bury the dead in Srebrenica with his having been asked about the availability of such machines;
3. mixing their questions about what actually happened with hypothetical questions about what "could have" happened;
4. using ambiguous referencing in their questions, leading to confusion about who and what was being discussed, when and where.

Despite the major's surprisingly good resistance to such ambiguity, he was indicted and tried for committing genocide. At the time of this writing, the trial is still in progress.

Peter Smith and Gea de Jong
City University, London

Speaker Identification: Function Words and Beyond

Speaker identification of audio material in a forensic context is typically carried out by a phonetician, who produces a report on the likelihood that the samples come from the same speaker. Recently, larger-volume transcriptions have become available that make it possible also to carry out a linguistic analysis with a view to speaker identification. We envisage this as complementary to, rather than an alternative to, a phonetic analysis of the audio material. Function-word-based approaches to stylometric analysis have achieved some impressive results in author identification of written texts, but significant problems arise when applying these techniques to forensic data. There is the problem of data volume: forensic samples are much smaller than literary texts. There is also the difficulty of handling transcribed spoken data, since transcription often introduces errors into the data. Stylometric analyses using multivariate function word frequency distributions show some interesting patterns in the data. We have noted a consistently greater variance with some function words than with others; these words tend to be those used in a wider range of grammatical constructs. For example, the function word “that” has some 28 different grammatical uses. We have also observed that speakers tend to use function words in consistently different patterns from one another, and our test for speaker identification is based on this observation. We examined seven pieces of forensic text, all transcriptions of audio material: a mixture of police interviews and recorded telephone conversations. Using these data, we examined the usage of different function words in different grammatical constructs, paying particular attention to features such as discourse markers. We propose that this analysis can be used to accompany and strengthen a phonetic analysis. Preliminary results are promising, and we are also examining ways of producing a precise statistical test.
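The variance observation can be illustrated with a short sketch. This is a hypothetical rendering of the kind of measure described, not the authors' implementation:

```python
from statistics import pvariance

def relative_freq(tokens, word):
    """Relative frequency of one word in a tokenized sample."""
    return tokens.count(word) / len(tokens) if tokens else 0.0

def cross_sample_variance(samples, word):
    """Population variance of a function word's relative frequency across
    samples. Words with many grammatical uses (such as "that") would be
    expected to show greater variance than more uniform function words."""
    return pvariance([relative_freq(s, word) for s in samples])
```

Ranking function words by this variance would separate the high-variance words (those usable in many grammatical constructs) from the stable ones.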

Larry Solan
Brooklyn Law School

Pernicious Ambiguity in Legal Texts

The law regards legal texts, such as statutes and contracts, as either plain or ambiguous. If plain, principles of interpretation call for the language of the document to be given special deference. If ambiguous, then those interpreting the document are permitted to adduce additional extrinsic evidence in an effort to ascertain the drafters’ intent. The word “ambiguous” is used to describe all kinds of indeterminacy, from linguistic ambiguity, to vagueness, to terms that simply underspecify certain aspects of meaning. This paper addresses a problem that the law treats inconsistently: What happens when two parties regard a text as plain, but they disagree about what the plain meaning is? A second version of this problem arises when different courts assign inconsistent interpretations to the same legal terms, and do so with certainty. Should linguistic variation of this nature be considered evidence of ambiguity in the legal sense? If not, should courts choose one or another of the various interpretations and deem the language plain? Courts are in disagreement as to how such cases should be handled. The paper attempts to describe and explain the linguistic circumstances in which this phenomenon is most likely to occur. These disagreements occur far more often when the linguistic problem is vagueness, rather than ambiguity. People are not in consensus over borderline cases, but are able to recognize two different senses of a word, or two different syntactic analyses of a sentence. The paper concludes with a discussion of experimental work, now in progress, to test whether people purporting to find language plain in the teeth of inconsistent interpretations by others are being entirely sincere.

Godfrey A. Steele
University of the West Indies, St. Augustine

The language of the law and the interpretation of the law: Issues of language and communication

On November 20 2003, the Privy Council [PCA 18 2003] dismissed an appeal against a sentence for murder [Roodal v Trinidad and Tobago], which had earlier been dismissed by the Court of Appeal. The appeal turned on the question of whether the imposition of the death penalty was mandatory or discretionary. The Privy Council’s ruling was based on an approach to three issues: 1) whether section 4 of the Offences Against the Person Act 1925 was consistent with section 62 of the Interpretation Act 1962; 2) whether section 4 needed to be modified to provide a discretionary penalty; and 3) whether the imposition of a death penalty violates the constitutional principle of the separation of powers. An analysis of the 50-page, 110-paragraph judgement revealed 13 specific references in 9 paragraphs to the language of the law and 73 references in 44 paragraphs to the interpretation of the law. Using the text of the Privy Council’s judgement, this paper examines the significance of the treatment of the language of the law and the interpretation of the law in relation to the imposition of the death penalty in Trinidad and Tobago. A case is made for a course of study designed to sensitize legal practitioners to the language and communication issues that affect their understanding of the language of the law and their interpretation of the law.

Natalie Stroud
Monash University, Melbourne, Australia

The Koori Court in Victoria: a response to the over-representation of Indigenous Australians in the Criminal Justice System

The Koori Court pilot program, initiated by the Victorian Government in 2002 following recommendations made by the Royal Commission into Aboriginal Deaths in Custody (1991), addresses the concerns raised by a number of linguists, including Eades, Gibbons, Walsh and Cooke, regarding the cultural and language disadvantages experienced by Indigenous Australians in the formal court context. This paper will examine ways in which the specific linguistic features identified as problematic for Indigenous Australians in the courtroom are neutralised through the operation of the Koori Courts, such as alternatives to the question/answer format and recognition of the cultural meanings attached to silence. Other associated innovations will also be reviewed. Since the introduction of the Koori Courts, under the jurisdiction of the Magistrates’ Court of Victoria, findings indicate an overall increase in the level of awareness among the legal professionals involved in this initiative, with a positive response from many of the Indigenous community groups, who have appreciated involvement in the legal process. New programs and specialist services have also been implemented by many supporting agencies, including partnership programs with Indigenous community groups run by both Victoria Police and the courts. Victoria now has three Koori Courts in operation, with two more to be opened in 2005, including a Koori Children’s Court. Changes in the awareness of all participants, through ongoing education and training, have resulted in many of the special needs of Indigenous Australians being addressed, leading to a reduction in recidivism and greater participation by local Indigenous communities in the legal process.

Yves Talla Sando Ouafeu
Universität Freiburg im Breisgau, Germany

The Use of Prosodic Cues in the Identification of the National Origin of English-Speaking African Asylum Seekers in Germany: Focus on Cameroon English and Nigerian English Speakers

A great deal of research has been undertaken to show the extent to which an asylum seeker’s real country of origin can be determined on the basis of linguistic evidence. As regards asylum seekers from the English-speaking regions of Africa, Simo Bobda et al. (1999) demonstrate that it is possible to identify a speaker’s real country of origin on the basis of phonetic and phonological features, lexical features, sociolinguistic clues and sociocultural clues. These clues are plausible, but prosodic cues can also contribute significantly to the identification of Africans seeking asylum in Germany. The aim of this paper is to point out some prosodic features, namely tonal and intonational features, which can be vital in helping to associate a speaker with his or her country of origin. Bearing in mind that the intonation patterns of most African Englishes have not been comprehensively characterized, this paper restricts itself to two West African varieties, namely Nigerian English and Cameroon English.

Simo Bobda, Augustin, Hans-Georg Wolf and Lothar Peter (1999) “Identifying regional and national origin of English-speaking Africans seeking asylum in Germany”. Forensic Linguistics 6(2): 302-321.

Sam Tomblin
Cardiff University

Author Online: Evaluating the Use of the World Wide Web in Cases of Forensic Authorship Analysis

Increasingly, forensic linguists are using the World Wide Web (WWW) as a representative corpus of language in forensic authorship analysis cases. Perhaps most notable was Fitzgerald’s use of the WWW in the FBI investigation of the Unabomber, in which a combination of twelve lexical items and phrases was entered as a search query on the WWW in order to demonstrate the idiolectal habits of the Unabomber. Using lexis as a marker of authorship, and the Unabom investigation as a case study, this research begins to evaluate the effectiveness of using the WWW in cases of forensic authorship analysis. Firstly, the process of devising a lexical profile for an author, based on his or her lexical habits across a series of known writings, is described. The lexical profiles are then used as search queries on the WWW, on the premise that if lexical habits are constant, other documents authored by the same person, but not used in the study, should be returned by the search engine, which would be more indicative of common authorship. For example, when Fitzgerald entered twelve lexical items as a search query, it might have been more conclusive evidence that Kaczynski was the author of the Manifesto if other documents also authored by Kaczynski had been returned, rather than multiple variations of the Manifesto itself, which are more suggestive of an investigation into plagiarism through lexical overlap. The research concludes by arguing that the effectiveness of using the WWW in forensic authorship analysis is diminished because search engines are not yet robust enough, leaving large proportions of the WWW unsearched. Moreover, results vary between different search engines, and vary again when searches are repeated after a short period of time, suggesting that the reliability of the WWW is low. Implications for authorship analysis are discussed, including the effect of different genres as comparison texts.
Additionally, it is suggested that successful and conclusive attribution of a Questioned Document to an author using the WWW in this way would require every author in the world to have published extensively on the WWW.
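The profile-matching idea can be sketched as follows. This is an illustration only: the profile items below are invented placeholders (the twelve items actually used in the Unabom investigation are not reproduced here), and the function is a crude local proxy for a search-engine query.

```python
def profile_overlap(profile, document_tokens):
    """Fraction of a lexical profile attested in a candidate document,
    a rough proxy for whether a search engine would return that document
    for a query built from the profile items."""
    attested = set(document_tokens)
    hits = sum(1 for item in profile if item in attested)
    return hits / len(profile)

# Hypothetical profile items, for illustration only.
profile = ["hitherto", "presumably", "wilderness"]
```

On this sketch, a questioned document scoring near 1.0 would be a candidate for common authorship, though, as the abstract argues, such attribution would be conclusive only if the true author had published extensively on the WWW.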

Fleur van der Houwen
University of Southern California

Negotiating disputes through formulations and decisions

According to a survey by the National Center for State Courts (1999), 40.5% of respondents indicated that they rely ‘sometimes’ or ‘regularly’ on televised small claims courts for information about the U.S. legal system. It appears that such productions serve not only to entertain but may also play an important role in molding lay views of institutionalized dispute resolution. The TV court show Judge Judy, presided over by Sheindlin, a former family court judge, has been criticized both in the popular press and by some in the legal community (Lynch 1999; Dershowitz 2000; Lovell Banks 2003) for misrepresenting the proceedings of regular small claims courts and for portraying itself as a ‘court of common sense’ (Lynch 1999: 5). This paper is part of a larger critical discourse analysis project grounded in conversation analysis, and analyzes the linguistic transformations litigants’ stories undergo, focusing on formulations and decisions (Watson and Heritage 1979). It finds that Sheindlin reframes disputes through various types of formulation; in addition to some of the types identified by Watson and Heritage (1979), Heritage (1985), and Gafaranga & Britten (2004), it finds forms of formulation specially suited to accomplishing some of the interactional tasks of this court setting, namely bridging formulation, legal formulation, and decision formulation. Through formulations and decisions, Sheindlin transforms the two opposing stories into a new version of events by acknowledging, denying, emphasizing, de-emphasizing, ignoring, or even creating elements in the stories litigants tell. By doing so, she can reconstruct the often very complex and opposing versions of events that litigants give into a simple story leading to coherent judgments with clearly winning and losing litigants.

Susanne van der Kleij
Radboud University Nijmegen/Ministry of Justice, The Netherlands

Interruptions in Asylum Narratives at the Dutch Immigration Service

In the Dutch asylum procedure, a foreigner is subjected to two interviews in which he provides all the relevant information for his application. In the second interview the foreigner is asked to explain the reasons why he left his country. It is the immigration officer's task to let the foreigner tell his asylum narrative in his own words and without interruption, or with as little interruption as possible. There are several reasons why the foreigner may not be able to tell his story without any interruption at all. The first is obvious: the immigration officer and the foreigner do not understand each other's language and need an interpreter to assist them. The foreigner has to break up his asylum narrative in order to allow the interpreter to provide a translation. Other reasons why the foreigner may be interrupted have to do with the length of the narrative and the relevance of the information. The open question used by the immigration officer to invite the foreigner to tell his asylum narrative can lead to an overextended story, starting many years before the departure from the country of origin and omitting no detail. However, not all of this information will be relevant for the immigration officer. Also, the officer is likely to set a time limit on the interview; normally an interview takes about two to three hours. For these reasons the immigration officer will sometimes intervene. In this paper I will use the information from eight different asylum interviews to show to what extent a foreigner is interrupted during his asylum narrative, and for what reasons and by which verbal means these interruptions take place.

Margaret van Naerssen
Immaculata University & University of Pennsylvania

Testing for Possible Faking of Language Proficiency by Defendants

Attorneys and law enforcement officers sometimes need to determine the oral proficiency of a defendant or suspect. This becomes especially challenging when defendants claim low language proficiency, and even language testing can be problematic if the speaker decides to shut down or tries to manipulate the testing situation. This report concerns a testing sequence, involving a proficiency interview and a bilingual task, used in a federal case in the US. The bilingual task shows promise for wider use, but more research is needed. A widely used foreign language interview protocol (the ACTFL OPI) was the primary assessment tool in this case; however, more was needed. Under most foreign language testing conditions, examinees want to display their highest level of proficiency, but in legal contexts defendants may decide it is in their interest to feign a lower proficiency, whether to trigger procedural challenges about legal rights or to avoid taking responsibility for their actions. Thus, they may try to manipulate the testing situation. Recordings of the testing can be compared with recordings of other interactions (wiretaps, police interviews or other recordings) to look for patterns of language use, especially along language development lines. But without that additional audio evidence, what can be done? A bilingual story-retelling task was developed using the defendant's Chinese dialect and English. After hearing the story and doing a short non-verbal task (to reduce the effect of short-term memory), the defendant retold the story in Chinese. A propositional analysis of the retold story was compared with the original bilingual story. The assumption was that if the defendant understood any of the English version (which contained some different information), some of it would probably appear in the retold story. The development of this task and its use in the case will be described.

Maurice Varney

"You have already won a guaranteed major prize". The use of language to deceive and rob

Every day, people throughout the UK open their post to find they have unexpectedly won a draw, lottery or other exciting prize. While some of this mail is genuine, most of it is a dishonest attempt to trap people into parting with money with little or nothing to show for it. Usually, the offer looks too good to be true. So why are so many people taken in and persuaded to follow it up? This presentation will examine the choice of language used in these declarations and try to establish how it takes people in and to what it appeals: stupidity, greed, vanity, loneliness, fantasy, credulity, ignorance, etc. The language will also be examined from the point of view of Trading Standards Departments seeking to secure a conviction for fraud. One offer which promised "£2,000" actually gave vouchers (of very limited worth) for high street use; it should have said "to the value of £2,000". Is there a case here? Can we find linguistic evidence to help stop these fraudulent thieves at their game?

Michael Walsh
University of Sydney, Australia

Educating the judge? Linguistic evidence in Native Title and land claim cases in Australia

For nearly 30 years linguists have been appearing as expert witnesses in a wide array of cases involving land rights for Indigenous Australians. These linguists have often found themselves trying to explain some of the niceties of linguistics to a legal audience with varying degrees of receptivity. In particular, it is useful if the judge hearing the case has apprehended these niceties with an acceptable degree of accuracy. Among the issues to be considered are: the link between an Indigenous language and a particular tract of land; the relatedness or otherwise of neighbouring languages; knowledge and use of a language; the stability of a language group in a particular area; and, of course, lexical semantics. Occasionally expert witnesses in linguistics appear for different interests, so the judge must reach an appropriate conclusion on the issues disputed by these expert witnesses. There has been considerable debate among anthropological experts in such cases about their role in the proceedings, including the extent to which they should appear as a ‘neutral’ expert or as an advocate. Among the growing literature in this area is a paper entitled ‘Don’t Educate the Judge: Court Experts and Court Expertise in the Social Disciplines’! In this paper we survey some of the attempts, whether blatant or by stealth, to educate the judge in a range of Native Title and land claim cases in Australia. This will draw on transcripts of proceedings, expert reports by linguists, and judges’ reports and decisions. We conclude with some suggestions for more effective practice by linguists in this rather specialized and problematic legal arena.

Amy Wang
Lancaster University

When precision meets vagueness: a corpus-assisted approach to vagueness in Taiwanese and British courtrooms

Vagueness in language often carries negative connotations, and this is particularly the case in legal contexts. Yet vagueness is a natural part of language (e.g. Channell, 1994; Williamson, 1994). It serves various functions in language use; for example, it enables speakers to flag their utterances as uncertain and to compensate for a lack of more precise information. Precision and imprecision in legal language have already attracted the attention of scholars (e.g. Solan, 1993), but that attention has tended to focus on written legal texts. This paper examines vagueness in (i) spoken contexts, namely courtroom discourse, and (ii) two different cultural settings (British and Taiwanese). It explores the interaction of some opposing tendencies in courtroom exchanges: how the expectation of high precision in court language interacts with the natural phenomenon of linguistic vagueness, and how high-precision legalese interacts with the language used by lay people (e.g. plaintiffs, defendants and witnesses). Drawing upon corpora of transcriptions of Taiwanese and British criminal court proceedings, the paper compares the patterns of linguistic vagueness between these two legal systems, the former belonging to the continental law tradition and the latter to an adversarial system. It will be demonstrated that some vagueness patterns and functions are common to both the British and Taiwanese legal settings (e.g. self-protection, modifying illocutionary force and marking the speaker’s subjective assumptions), but that they are realized in markedly different ways in each. Finally, it will also be shown that the manipulation of vagueness in court is closely connected to participants’ goals in the context.

Bencie Woll and Joanna Atkinson

City University London

Deaf and No Language: Communication and the Law

There is a large and comprehensive body of literature on the effects of linguistic isolation on the acquisition of language, but few studies consider the outcomes in terms of the law and interpreting. Deaf children with hearing parents may receive very limited input of spoken language because of their deafness. In addition to this lack of access to spoken language, some individuals may not be exposed to a sign language in childhood or adolescence. Such cases have been reported among individuals who live in non-Western communities where they are the only deaf person in the community and where there is no access to formal schooling. The case described here is that of a deaf man (Muniram), who was first exposed to a sign language when he immigrated to the UK in 1980 at the age of 25, and who has recently been charged with a number of criminal offences. The presentation is in three parts. In the first part, a general introduction to semi-lingualism in deaf people with limited exposure to language will be presented. In the second part, the results of a detailed cognitive and linguistic assessment undertaken with Muniram will be discussed; the third section will present an analysis of interpreted police interviews with him in the light of the assessment findings. The conclusions will concern the implications for the participation of such individuals in the legal system, and the issues for interpreters called upon to work with people like Muniram.