Eyewitness Testimony

Description

Review the two attached peer-reviewed journal articles on eyewitness testimony, then do the following:

  1. Briefly summarize the findings of each article.
  2. Based on what you have read, discuss whether eyewitness testimony is reliable or unreliable.

Typing Template for APA Papers: A Sample of Proper Formatting for the APA 6th Edition

Student A. Sample
Grand Canyon University

This is an electronic template for papers written in APA style (American Psychological Association, 2010). The purpose of the template is to help the student set the margins and spacing. Margins are set at 1 inch for top, bottom, left, and right. The type is left-justified only; that means the left margin is straight, but the right margin is ragged. Each paragraph is indented five spaces, and it is best to use the tab key to indent. The line spacing is double throughout the paper, even on the reference page. One space is used after punctuation at the end of sentences. The font style used in this template is Times New Roman and the font size is 12.

First Heading

The heading above would be used if you want to have your paper divided into sections based on content. This is the first level of heading, and it is centered and bolded, with each word of four letters or more capitalized. The heading should be a short descriptor of the section. Note that not all papers will have headings or subheadings in them.

First Subheading

The subheading above would be used if there are several sections within the topic labeled in a heading. The subheading is flush left and bolded, with each word of four letters or more capitalized.

Second Subheading

APA dictates that you should avoid having only one subsection heading and subsection within a section. In other words, use at least two subheadings under a main heading, or do not use any at all. When you are ready to write, and after having read these instructions completely, you can delete these directions and start typing. The formatting should stay the same. However, one item that you will have to change is the page header, which is placed at the top of each page along with the page number. The words included in the page header should reflect the title of your paper, so that if the pages are intermixed with other papers they will be identifiable. When using Word 2003, double-click on the words in the page header; this should enable you to edit the words. You should not have to edit the page numbers.

In addition to spacing, APA style includes a special way of citing resource articles. See the APA manual for specifics regarding in-text citations. The APA manual also discusses the desired tone of writing, grammar, punctuation, formatting for numbers, and a variety of other important topics. Although the APA style rules are used in this template, the purpose of the template is only to demonstrate spacing and the general parts of the paper. The student will need to refer to the APA manual for other format directions. GCU has prepared an APA Style Guide, available in the Student Writing Center, for additional help in correctly formatting according to APA style.

The reference list should appear at the end of a paper (see the next page). It provides the information necessary for a reader to locate and retrieve any source you cite in the body of the paper. Each source you cite in the paper must appear in your reference list; likewise, each entry in the reference list must be cited in your text. A sample reference page is included below; this page includes examples of how to format different reference types (e.g., books, journal articles, information from a website).
The examples on the following page are taken directly from the APA manual.

References

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Daresh, J. C. (2004). Beginning the assistant principalship: A practical guide for new school administrators. Thousand Oaks, CA: Corwin.

Herbst-Damm, K. L., & Kulik, J. A. (2005). Volunteer support, marital status, and the survival times of terminally ill patients. Health Psychology, 24, 225-229. doi:10.1037/0278-6133.24.2.225

U.S. Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute. (2003). Managing asthma: A guide for schools (NIH Publication No. 02-2650). Retrieved from http://www.nhlbi.nih.gov/health/prof/asthma/asth_sch.pdf

Behavioral Sciences and the Law, 31, 637–651 (2013). Published online 3 September 2013 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/bsl.2080

Expert Testimony on Eyewitness Evidence: In Search of Common Sense

Kate A. Houston (University of Texas at El Paso), Lorraine Hope (University of Portsmouth), Amina Memon (Royal Holloway, University of London), and J. Don Read (Simon Fraser University)

Surveys on knowledge of eyewitness issues typically indicate that legal professionals and jurors alike can be insensitive to factors that are detrimental to eyewitness accuracy. One aim of the current research was to assess the extent to which judges, an underrepresented sample in the extant literature, are aware of factors that may undermine the accuracy and reliability of eyewitness evidence (Study 1). We also sought to assess the knowledge of a jury-eligible sample of the general public (drawn from the same population as the judges) and compared responses from a multiple choice survey with a scenario-based, response-generation survey in order to investigate whether questionnaire format alters the accuracy of responses provided (Study 2). Overall, judges demonstrated a reasonable level of knowledge regarding general eyewitness memory issues. Further, the jury-eligible general public respondents completing a multiple choice format survey produced more responses consistent with experts than did participants who were required to generate their own responses. The results are discussed in terms of the future training requirements for legal professionals and the ability of jurors to apply the knowledge they have to the legal context. Copyright © 2013 John Wiley & Sons, Ltd.

Mistaken eyewitness testimony is considered by many to be responsible for 75% of 301 cases of wrongful imprisonment in the US (The Innocence Project, 2012; Wells, Memon, & Penrod, 2006; Wise & Safer, 2004). In many of these cases, judges and/or juries were convinced by testimony from a witness that implicated the defendant but that was, in fact, inaccurate (e.g., Wells et al., 2006). Research in psychology has been directed at identifying the factors that may affect the accuracy and reliability of eyewitness memory. However, the extent to which scientific findings align with the "common sense" of legal professionals and jurors is less clear. Investigations to date of the knowledge transfer among academics, potential jurors and legal professionals have presented a somewhat bleak picture (e.g., Magnussen et al., 2008; Wise & Safer, 2004; Wise, Safer, & Maro, 2011).
Knowledge Base of Legal Professionals Regarding Eyewitness Testimony

In a survey of 160 US judges' knowledge and beliefs about eyewitness testimony, Wise and Safer (2004) identified a number of areas in which judges' beliefs did not reflect current research evidence. For instance, judges' knowledge differed from research evidence in relation to: the optimum methods for line-up presentation and administration; memory decline; how the presence of a weapon might affect memory for the perpetrator; the effects of disguise on the ability to accurately describe and identify the perpetrator; and the potential effects of post-event information on eyewitness testimony (Wise & Safer, 2004).

Further surveys outside the US suggest similarly low levels of knowledge regarding eyewitness memory issues among legal professionals. Granhag, Strömwall, and Hartwig (2005) found that the responses of Swedish police, lawyers and judges were in line with expert opinion on issues such as the possible effects of weapon presence during a crime, but not on issues such as line-up construction and administration. Furthermore, Granhag et al. (2005) found that the legal professionals surveyed seldom agreed with each other on whether certain factors might affect the reliability and accuracy of eyewitness testimony. They reported that the judges, in particular, were more likely than police officers or lawyers to respond "don't know" to questionnaire items. Indicative of this pattern of responding, Granhag et al. (2005) also reported that the legal professionals surveyed felt they were not up to date with research on the reliability of eyewitness testimony. In a further survey, Magnussen et al. (2008) found comparably low levels of knowledge regarding eyewitness memory among a Norwegian judicial sample.

Taken together, the findings from the Innocence Project and surveys such as Wise and Safer (2004) and Magnussen et al. (2008) suggest that the knowledge base of legal professionals on eyewitness testimony is inadequate. In spite of these findings, however, the acceptance of expert testimony on the quality of witness evidence in court is relatively uncommon. One argument is that an understanding of the potential weaknesses or inaccuracies of eyewitness testimony falls within the domain of common sense of both legal professionals and jurors. The extant literature, on the other hand, suggests that the workings and vulnerabilities of eyewitness memory fall largely outwith the realms of "common sense".

Knowledge Base of Jurors Regarding Eyewitness Testimony

Surveys of mock jurors' knowledge of eyewitness issues are more numerous than those of judges and legal professionals. However, the findings of mock-juror surveys are similar to those of legal professionals: respondents typically demonstrate an understanding of eyewitness testimony that is at odds with research findings (Benton et al., 2006; Brigham & Wolfskeil, 1983; Deffenbacher & Loftus, 1982; McConkey & Roche, 1989; Noon & Hollin, 1987). Mock jurors tend to be insensitive to biased procedures used by law enforcement, such as poorly constructed line-ups, misleading feedback or biased instructions (Shaw, Garcia, & McClure, 1999).
Potential jurors also find it difficult to distinguish between accurate and inaccurate witnesses (e.g., Lindsay, Wells, & O'Connor, 1989; Lindsay, Wells, & Rumpel, 1981). Part of the underlying cause of poor juror understanding of eyewitness memory issues may be a lack of knowledge about the abilities and performance parameters of memory more generally. For instance, Simons and Chabris (2012) surveyed the general public regarding common memory myths, such as the belief that events are recorded in memory like a video tape that can be reviewed and inspected at a later date, and the belief that once a memory has been formed for an event it will not change. Simons and Chabris (2012) found that their respondents endorsed these memory myths as true over 50% of the time. Furthermore, previous experience as a witness does not appear to be related to knowledge of eyewitness issues (Noon & Hollin, 1987).

Desmarais and Read (2011) conducted a meta-analysis of 23 experiments that documented the knowledge of the general public about factors that may affect the reliability of eyewitness testimony. They found that the responses of the general public matched the general consensus of expert opinion in the field for two-thirds of questions. However, the meta-analysis also revealed inconsistencies in the ways in which "expert agreement" is reached on a topic and in the topics that are generally assessed across studies (Desmarais & Read, 2011). Owing to variability across studies in the topics on which potential jurors were assessed, and differences in the methods used to measure knowledge, it is difficult to evaluate the precise nature of potential juror understanding.

Current Research

Research has typically investigated either the knowledge of legal professionals or the knowledge of a jury-eligible general public regarding eyewitness evidence. However, no research to date has assessed the knowledge of judges and the knowledge of a jury-eligible general public drawn from the same population. Consequently, an accurate representation of the level of knowledge regarding eyewitness issues present in a given courtroom, as a function of either common sense (jurors) or professional experience (judges), is absent from the extant literature. The current series of studies assessed the levels of knowledge regarding eyewitness testimony among judges and jurors from the same population, that is, individuals whose home nation is Scotland, UK. Both groups (judges and jurors) completed the same questionnaire.

STUDY 1

The assumption commonly asserted by judges, that eyewitness memory is a matter of common sense, contradicts research findings demonstrating that potential jurors (Cutler & Penrod, 1995; Cutler, Penrod, & Dexter, 1990; Kassin & Barndollar, 1992; Shaw et al., 1999) and legal professionals (Granhag et al., 2005; Wise & Safer, 2004) are typically rather limited in their understanding of factors affecting eyewitness accuracy. Therefore, in an attempt to assess the knowledge base of judges and jurors within the same population, our first experiment assessed judges' knowledge regarding eyewitness testimony.
Method

Participants

Ninety-nine judges took part in our survey (see Note 1). Judges were recruited to take part in the research during a routine professional development seminar run by the Judicial Studies Committee. The judges surveyed were all at the Scottish rank of "sheriff," which means they had at least 10 years' experience as an advocate, solicitor or lawyer as well as considerable court experience. Sheriffs deal with the majority of criminal court cases in Scotland and must retire from the bench on the day of their 70th birthday (Sheriffs' Association, 2012). Of the current 142 sheriffs in Scotland, 112 are male and 30 are female (Sheriffs' Association, 2012). The sample recruited for the current study represents 70% of all active sheriffs in the jurisdiction.

Note 1. The judges were informed that the provision of all information was voluntary. All of the judges surveyed declined to complete the demographic information sheet provided to them. Therefore, the demographic information presented here comes from the Judiciary of Scotland website: http://www.scotland-judiciary.org.uk

Survey Development and Administration

A pool of statements concerning eyewitness identification issues, including statements used by Kassin et al. (2001), Read and Desmarais (2009a), and Deffenbacher and Loftus (1982), was generated. From this initial pool a multiple choice questionnaire was developed. The eyewitness topics selected for inclusion were based on ratings of reliability supplied in the Kassin et al. (2001) expert survey in response to the question, "Do you think this phenomenon is reliable enough for psychologists to present in courtroom testimony?" Topics with levels of agreement among experts below 90% with respect to reliability and/or research basis were not included in the survey, with the exception of a trained-observers question. The trained-observer item in Kassin et al. (2001) received only 39% consensus amongst experts and 75% agreement that there was a research basis for the conclusion that trained observers are no better or worse than untrained eyewitnesses. However, given that police officers frequently deliver eyewitness evidence in court, the perceived accuracy of police (and other trained observers) as eyewitnesses was included as an item, based upon Read and Desmarais (2009a) and Kassin et al. (2001). The question was as follows: "Police officers often witness various crimes and have to report their memories of them. In your opinion, police officers: (a) make more accurate witnesses than the average person; (b) are as accurate as the average person; (c) make less accurate witnesses than the average person; or (d) I don't know." Response option (b) was designated as consistent with expert opinion, in line with Kassin et al. (2001).

The multiple choice (MC) questionnaire adopted a choice response format such that respondents were required to complete the statement by circling their preferred response option. Response options were provided for each question and always included an "I don't know" option. Each set of response options also included a response consistent with the current understanding of that particular phenomenon within the literature, as determined by expert agreement within the Kassin et al. (2001) survey and evaluation by the authors.
One such example, drawn from the Read and Desmarais (2009a) survey, was: "Sometimes witnesses experience crimes under the influence of alcohol. In your opinion, alcohol intoxication: (a) improves a witness's ability to later recall people/crimes; (b) has no influence on a witness's ability to later recall people/crimes; (c) reduces a witness's ability to later recall people/crimes; or (d) I don't know."

Initial piloting of the questionnaire resulted in the rewording of a number of statements to improve respondents' understanding of the issues. As in Read and Desmarais (2009a), this typically required some rewording to make the statements more comprehensible to a non-academic audience (i.e., to ensure the meaning of the question was clear and did not contain unfamiliar jargon). Following piloting, the final questionnaire comprised 11 topic areas (see Table 1). For 10 of the topics selected (all except the trained-observers item), expert agreement according to the Kassin et al. (2001) results indicated an overall mean agreed reliability of 90%, and an overall mean of 94% agreed that there was a research basis for this conclusion.

An introductory paragraph at the outset of the questionnaire informed respondents that the aim of the study was to examine how they believed a typical witness would behave in particular circumstances. Respondents were asked to indicate which response they believed was most accurate. They were also instructed not to guess and to use the "I don't know" response if they could not identify the accurate response. Respondents were instructed to read and complete each question carefully, in the order presented and in their own time. The survey was administered during a training course. Judges were requested to complete the survey individually, without discussion, and were allocated as much time as necessary to complete it.

Responses to the questionnaire were coded as either consistent or inconsistent with expert opinion, in line with the findings of Kassin et al. (2001). For example, for the question on post-event information, "When witnesses are asked to report about a crime they saw, their report generally: (a) includes not only what they actually saw but also information they learned after the crime; (b) includes only what they actually saw; or (c) I don't know," response (a) would be coded as consistent with expert opinion, response (b) as inconsistent with expert opinion, and response (c) as a "don't know" response. A coding packet designating responses as either consistent or inconsistent with expert opinion, as above, was generated in line with the Kassin et al. (2001) findings. Given that only one of the MC responses per topic was consistent with expert opinion and could therefore be objectively coded, responses were categorized by one independent coder.
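To make the coding step concrete, here is a minimal sketch of how a coding packet of this kind could be applied in code. The post-event-information and trained-observer keys come from the questions quoted in the article; the alcohol key follows the expert view that intoxication impairs recall; the identifiers, data layout, and function are hypothetical illustration, not the study's actual materials.

```python
# Hypothetical sketch of the coding packet described above. Each item maps to
# the option designated consistent with expert opinion (per Kassin et al.,
# 2001) and to that item's "I don't know" option.
EXPERT_KEY = {
    # topic: (expert-consistent option, "I don't know" option)
    "alcohol_intoxication": ("c", "d"),    # reduces later recall (assumed key)
    "post_event_information": ("a", "c"),  # report includes post-event info
    "trained_observers": ("b", "d"),       # as accurate as the average person
}

def code_response(topic: str, answer: str) -> str:
    """Classify one circled option as consistent, inconsistent, or don't know."""
    expert_option, dont_know_option = EXPERT_KEY[topic]
    if answer == dont_know_option:
        return "dont_know"
    return "consistent" if answer == expert_option else "inconsistent"

print(code_response("post_event_information", "a"))  # -> consistent
print(code_response("trained_observers", "a"))       # -> inconsistent
```

Because exactly one option per item is designated expert-consistent, this classification is mechanical, which is why a single independent coder sufficed for the MC data.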
Results and Discussion

Initial analysis. Initial coding of the data resulted in three response categories: consistent with expert opinion, inconsistent with expert opinion, and "I don't know." In order to identify questions for which the distribution of responses did not differ from chance, we ran separate chi-squared analyses for each item with all three response options included. This analysis showed that every distribution differed from chance, χ2(2) ≥ 98.0, p < 0.001, Cramér's ø = 1.00. Consistent with Read and Desmarais (2009a), we then removed the "I don't know" responses from our analysis and re-ran the chi-squared analyses. Removal of all "I don't know" responses did little to change the results, with all distributions differing from chance. As can be seen from Table 1, the judges selected the "I don't know" option only 9% of the time on average.

Table 1. Percentage of judges' responses that were consistent with expert opinion, inconsistent with expert opinion, or "don't know" responses

                              Consistent   Inconsistent   Don't know
Estimator variables
  Exposure duration               42.7          53.1          4.2
  Alcohol intoxication            96.9           1.0          2.1
  Weapon focus                    40.2          42.3         17.5
  Cross-race bias                 61.2          21.4         17.4
  Accuracy confidence             64.9          28.9          6.2
  Unconscious transference        83.7          11.2          5.1
  Trained observers               52.6          45.4          2.1
System variables
  Post-event information          90.6           2.1          7.3
  Wording of questions            83.7          16.3          0.0
  Child suggestibility            77.3          13.4          9.3
  Mugshot-induced bias            45.4          26.8         27.8
Totals                            67.2          23.8          9.0
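The per-item test just described can be reproduced in outline. Below is a minimal sketch, using hypothetical counts rather than the study's data, of a chi-squared goodness-of-fit test against a uniform chance distribution over the three response options, with Cramér's ø as the effect size for a one-way table.

```python
# Minimal sketch of the per-item analysis described above (hypothetical counts).
from math import sqrt
from scipy.stats import chisquare

counts = [66, 21, 10]  # consistent / inconsistent / don't know (illustrative)
n, k = sum(counts), len(counts)

chi2, p = chisquare(counts)       # expected = n/k per option under chance
phi = sqrt(chi2 / (n * (k - 1)))  # Cramer's phi for a one-way table

print(f"chi2({k - 1}) = {chi2:.1f}, p = {p:.5f}, Cramer's phi = {phi:.2f}")
```

Re-running the same test with the "don't know" count dropped (and k = 2) mirrors the follow-up analysis reported above.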
Overall consistency with expert opinion. Judges provided responses that were consistent with expert opinion 67% of the time (see Table 1). Although it is difficult to compare this percentage with those reported in other articles, owing to differences in question topics and response options, it does fall within the 19–94% range of agreement reported by Wise and Safer (2004). This result is similar to that of Granhag et al. (2005), in that judges answered two-thirds of the questions posed to them in line with expert opinion. The lowest percentage of responses consistent with expert opinion was 40%, for the question on weapon focus (see Table 1). Mugshot bias drew the largest proportion of "don't know" responses, at 28%.

Item-specific consistency with expert opinion. Of all the questions asked, judges displayed the highest level of consistency with expert opinion in relation to the effects of alcohol intoxication on a witness at the time of the crime (97% of responses consistent with expert opinion). Judges also demonstrated knowledge that an eyewitness's statement may contain post-event information, with 91% of responses to this question consistent with expert opinion. This is not surprising, given that these two factors are probably commonly encountered in the courtroom. Judges were less well informed about the effects of exposure duration on memory, weapon focus, and mugshot bias, with less than 50% of responses to these questions consistent with expert opinion. Within our sample, responses consistent with expert opinion ranged from a low of 40% for weapon focus to a high of 97% for alcohol intoxication, showing large variability. It is clear from Table 1 that while research on certain variables that affect eyewitness memory is being successfully communicated to the Scottish courts, more work needs to be done on topics such as exposure duration, weapon focus, the use of police officers as witnesses, the cross-race effect, mugshot bias, and the relationship between confidence in memory and the accuracy of testimony.

Judges' beliefs regarding juror knowledge. A further aim of our questionnaire was to assess the beliefs of judges regarding the abilities of the jury. According to the revised Jury Instruction Manual published by the Judicial Studies Committee (2012), judges are required to provide directions to a jury regarding the role of the jury, the role of the judge, the verdicts available to them, and the "innocent until proven guilty" principle. The judge may also instruct the jury on the evidence presented, but it is entirely up to judges to determine how much and how specific their instructions on the evidence are (Judicial Studies Committee, 2012). Therefore, if judges believe that jury members are able to distinguish between an accurate and an inaccurate witness on their own, and/or that weaknesses in the testimony of a witness are a matter of common sense, they may not instruct the jury regarding factors that affect the reliability of eyewitness memory. As can be seen from Table 2, an overwhelming 73% of judges surveyed indicated that the reliability of eyewitness testimony is a matter of common sense, and 75% responded that experts are not required to inform the court on matters of "common sense".

Table 2. Percentage of agree, neither agree nor disagree, and disagree responses to the statements regarding the ability of jurors and the use of expert evidence

Statement                                                        Agree   Neither   Disagree
Experts required to provide guidance on the
  reliability of eyewitness testimony                             28.3     14.1      56.6
Experts are not needed for matters of common sense                74.7     11.1      11.1
Eyewitness testimony can be evaluated by common sense alone       72.8     17.2       8.1
Jurors are able to tell accurate from inaccurate eyewitnesses     63.6     25.3      11.1
More training is required in the reliability of eyewitness
  testimony as evidence                                           58.6     19.2      21.2

Note. Some rows do not total 100% because some questions were left blank.

Consistency with previous research. Judges sampled for this research provided responses consistent with expert opinion 67% of the time, on average. However, across our questions, judges' responses demonstrated a large degree of variability in their consistency with expert opinion (ranging from a low of 40% to a high of 97%). Our findings, therefore, are similar to those of Wise and Safer (2004) and Granhag et al. (2005) in that there is a degree of variability in the level of knowledge exhibited by judges. Taken together, the findings from the current survey reinforce the message of previous work that knowledge transfer between academics/researchers and judges is incomplete, with the result that judges appear able to maintain specialized knowledge in some, but not all, aspects of eyewitness memory.

Furthermore, the majority of judges also believed that jurors would be able to tell the difference between accurate and inaccurate witnesses on their own. This is at odds with the findings of Wise and Safer (2004), who found that only 46% of US judges responded that they were confident in the abilities of their jury-eligible citizens to recognize factors that affect identification accuracy. Thus, Scottish judges appear more confident than judges surveyed in the US in the ability of jury-eligible citizens to discriminate between reliable and unreliable eyewitnesses. We wondered to what extent this confidence was justified and conducted Study 2 to assess the "common sense" beliefs of a jury-eligible general public drawn from the same nation as the judges.
STUDY 2

Convictions that originally relied heavily on eyewitness testimony but are now known to have been in error illustrate quite clearly that jurors and legal professionals alike are often unable either to generate or to apply the common sense expected of them by the courts. For example, Shaw et al. (1999) reported that while jury-eligible participants were aware of factors such as length of exposure to the perpetrator, age of witness, and delay between encountering the crime and making an identification, they were unaware of how the different test procedures and interview tactics of the police can influence the accuracy of eyewitness testimony.

However, having knowledge of the right response and applying that knowledge appropriately can be conceptualized as rather different tasks. A closer examination of the literature reveals that some methodologies assess the ability of participants to recognize the correct response, whilst others assess the ability of participants to apply that knowledge appropriately (for a review, see Read & Desmarais, 2009b). Read and Desmarais (2009b) suggested that the use of different methods may give an inaccurate representation of the level of juror understanding regarding eyewitness testimony evidence. For instance, some surveys have used MC format questions to evaluate juror knowledge (e.g., Deffenbacher & Loftus, 1982; Noon & Hollin, 1987), effectively relying upon the participants to recognize the correct answer. Others have used a more evaluative response format of Likert-type scales with agree–disagree anchors (Kassin & Barndollar, 1992). An evaluative format has also been used in one of the more recent investigations of juror knowledge in the US, which used the Kassin et al. (2001) survey items with limited modifications (Benton et al., 2006).

Previous research has shown that self-constructed response formats (where the respondent generates the response unaided) measure higher-order reasoning abilities, while MC taps lower-level cognitive processes (such as familiarity or recognition responses) or factual knowledge (Katz, Bennet, & Berger, 2000). Furthermore, MC items can provide unintended hints, and these hints may be a potential source of construct-irrelevant variance (Messick, 1989). It is, of course, also the case that questions requiring evaluation or inference may promote responses or articulate thinking in ways that would not otherwise occur spontaneously (Chan & Kennedy, 2002). In the light of these findings, it might be asked whether MC-type surveys of juror knowledge simply overestimate what jurors actually know about eyewitness issues. To date, however, there are no empirical data on this question (Read & Desmarais, 2009b).

Therefore, in order to address this question, the current experiment evaluated mock-juror understanding of factors influencing the reliability of eyewitness testimony with both MC and response generation (RG) questionnaires. Our aim was to determine whether jurors could spontaneously generate "common sense" responses in relation to various eyewitness issues, as opposed to relying on potential cues inherent in some (but not all) MC alternatives. To achieve this, we adopted a methodological strategy employed by Chan and Kennedy (2002), who matched MC and RG questions with identical stems, such that our potential jurors were required to answer equivalent questions but in different formats.
In the MC version, respondents were required to choose their answer from either three or four response alternatives (identical to Study 1), while in the RG condition they were required to produce the answer on their own.

Method

Participants

A total of 192 potential jurors (81 males and 115 females) completed the survey in Scotland, with 96 participants completing the MC questionnaire and 96 completing the RG questionnaire. Scotland is one of four constituent countries within the UK (the others being England, Wales, and Northern Ireland) and, whilst not an independent country, has its own legal system and Parliament (The National Archives, 2003). Within this jurisdiction, jurors must be between 18 and 65 years of age, be listed on the electoral register, and have lived in the UK for a period of at least 5 years since the age of 13. In the current study, respondents ranged in age from 18 to 65 years (M = 37.70, SD = 15.24). Almost all respondents (98%) had obtained some level of educational qualification. Most had completed secondary education (61%), while a further 38% indicated that they had completed undergraduate university-level education. Only 10% of the sample reported previous experience of jury service. All respondents spoke English as their first or main language and were UK residents. It should be noted that jurors in Scotland and the rest of the UK are randomly selected and there is no voir dire process.

Survey Development and Administration

The same MC questionnaire used in Study 1 was employed in this study. All question topics were the same for both the RG and the MC questionnaires. The stems used for the MC questionnaire were turned into scenarios for the purposes of the RG questionnaire. Thus, for the RG questionnaire, respondents were first required to indicate a categorical response to the statement (yes, no, I don't know) and were then asked to explain their answer in free text. For example: "People sometimes witness crimes under the influence of alcohol. If a witness were intoxicated at the time of the crime, would this affect their ability to remember what they saw?" Respondents ticked a yes/no/I don't know response before moving to the second part of the question: "If yes, what exactly do you think the effect might be and why? If no, why do you think this is the case?"

Researchers staffed a volunteer desk at a shopping mall in a busy city centre in Scotland, UK, for a period of 14 days. Respondents were invited to participate on a voluntary basis and were randomly assigned to complete either the RG or the MC version of the questionnaire. The questionnaires were completed individually, and respondents took between 20 and 30 minutes, with the RG questionnaire taking longer on average to complete. On completion, respondents were debriefed and thanked for their contribution.

As in Study 1, responses to the MC questionnaire were coded as either consistent or inconsistent with expert opinion using the coding packet based upon Kassin et al. (2001). Responses to the RG questionnaire were independently coded by the second and third authors as either consistent or inconsistent with expert opinion as identified by the Kassin et al. (2001) survey.
Responses that were illegible or otherwise uncodable (such as completely off-topic or irrelevant responses) were set aside as "not codable". Inter-coder correlations indicated a mean agreement of r = 0.72. Where the coders diverged, the response was re-analyzed and discussed by both coders until an agreed code was reached. The recoded responses from both questionnaires were then combined to form a final dataset comprising the following response categories: "consistent with expert opinion", "inconsistent with expert opinion", and "I don't know".

Table 3. Percentage of consistent (C), inconsistent (I) and don't know (DK) responses by questionnaire type

                              Multiple choice        Response generation
                               C      I     DK         C      I     DK
Estimator variables
  Exposure duration           67.7   30.2    2.1      68.8   17.7   13.5
  Alcohol intoxication*       91.7    3.1    5.2      62.5   30.2    7.3
  Weapon focus*               49.0   42.7    8.3      26.0   57.3   16.7
  Cross-race bias             60.4   30.2    9.4      35.4   33.3   31.3
  Accuracy confidence         31.3   63.5    4.2      21.9   43.8   18.8
  Unconscious transference*   76.0   19.8    4.2      36.5   38.5   25.0
  Trained observers*          37.5   48.3    4.2      11.5   76.0   12.5
System variables
  Post-event information      62.5   26.0   11.5      72.9   15.6   11.5
  Wording of questions*       66.7   29.2    4.2      75.0    9.4   15.6
  Child suggestibility        63.5   24.0   12.5      59.4   25.0   15.6
  Mugshot-induced bias        64.6   18.8   16.7      58.3   18.8   22.9
Totals                        61.1   31.4    7.5      48.0   33.2   18.7

*p ≤ 0.004.

Results and Discussion

The aim of the current study was to assess juror knowledge of eyewitness issues elicited by two different questionnaire formats: respondents completed either an MC format survey or an RG survey (in which they were required to generate their own responses). Responses on both surveys were coded as either consistent or inconsistent with documented expert opinion (from Kassin et al., 2001). Overall results are presented in Table 3 (including the "I don't know" responses).

Overall consistency with expert opinion. Participants averaged 61% consistency with expert opinion on the MC questionnaire, whereas only 48% of responses on the RG questionnaire were consistent with expert opinion (see Table 3). Both consistency scores fall below the overall two-thirds agreement reported by Desmarais and Read (2011) in their recent meta-analysis.

Item-specific consistency with expert opinion across questionnaire formats. Consistent with Read and Desmarais (2009a), responses to the MC and RG questionnaire formats were analyzed with and without the "I don't know" responses. Including the "I don't know" responses yielded significant differences between the MC and RG questionnaires in the consistency of responses with expert opinion on eight topics. Removing the "I don't know" responses reduced the number of significant differences to five topics (see Note 2). Therefore, the more conservative analysis, with the "I don't know" responses removed, is reported here. Furthermore, owing to the multiple comparisons (11), a critical value of α = 0.004 was employed.

Note 2. The topics of alcohol intoxication, weapon focus, trained observers, wording of questions, and unconscious transference.
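The format comparison implied here can be sketched in a few lines of code. The example below runs a 2x2 chi-squared test of format (MC vs. RG) against response type (consistent vs. inconsistent) on hypothetical counts, judged against a threshold of 0.05/11 ≈ 0.0045, which appears to correspond to a Bonferroni-style adjustment matching the α = 0.004 criterion above; that correspondence is an inference, not stated in the article.

```python
# Minimal sketch of one MC-vs-RG topic comparison (hypothetical counts).
from math import sqrt
from scipy.stats import chi2_contingency

alpha = 0.05 / 11  # ~0.0045: Bonferroni-style threshold across the 11 topics

# rows: MC, RG; columns: consistent, inconsistent (illustrative, not the data)
table = [[88, 8],
         [60, 36]]

chi2, p, dof, _expected = chi2_contingency(table, correction=False)
n = sum(sum(row) for row in table)
phi = sqrt(chi2 / n)  # Cramer's phi for a 2x2 table

print(f"chi2({dof}) = {chi2:.3f}, p = {p:.5f}, phi = {phi:.2f}")
print("significant at the adjusted alpha:", p <= alpha)
```

Running one such test per topic, with the "I don't know" responses excluded, reproduces the structure of the analysis reported in the next paragraph and in Table 4.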
As can be seen from Table 3, averaged across questionnaire formats and question topics, our community sample provided responses consistent with the expert opinions expressed in Kassin et al. (2001) 55% of the time. However, consistency of response with expert opinion was associated with questionnaire format for the topics of alcohol intoxication, weapon focus, unconscious transference, trained observers, and wording of questions, χ2(1) ≥ 8.0, p < 0.004, Cramér's ø ≥ 0.20 (full statistics for each comparison can be found in Table 4). On the topic of alcohol intoxication, 92% of responses were consistent with expert opinion on the MC questionnaire; this dropped to 62% when the same question was asked in RG format. For weapon focus, 49% of MC responses were consistent with expert opinion, compared with 26% of RG responses. The pattern for trained observers was similar, with a higher percentage of responses consistent with expert opinion on the MC questionnaire (37%) than on the RG questionnaire (11%), as was the case for unconscious transference (76% of MC responses consistent with expert opinion, compared with 36% of RG responses). For the wording-of-questions topic, however, the pattern was reversed, with a higher percentage of RG responses (75%) than MC responses (67%) consistent with expert opinion.

Consistency with previous research. Read and Desmarais (2009a) report a 67% consistency rate between a Canadian community sample and those items of the Kassin et al. (2001) survey on which experts had reached consensus. The current survey found a 61% consistency rate with experts. However, this relatively high rate of consistency with expert opinion was found only when participants completed the MC, and not the RG, version of the survey. For the estimator factors, juror knowledge appeared reasonably high across a number of topics when multiple-response alternatives were provided and respondents were simply required to choose what they believed to be the correct response. Yet when participants ...

Table 4. Comparison of consistent (C) responses by questionnaire type

                               MC     RG      χ2       p       ø
Estimator variables
  Exposure duration           67.7   68.8    2.464    .116
  Alcohol intoxication*       91.7   62.5   26.403
  Weapon focus*               49.0   26.0    8.402
  Cross-race bias             60.4   35.4    3.594
  Accuracy confidence         31.3   21.9    0.691
  Unconscious transference*   76.0   36.5   16.969
  Trained observers*          37.5   11.5   15.206
System variables
  Post-event information      62.5   72.9
  Wording of questions*       66.7   75.0
  Child suggestibility        63.5   59.4
  Mugshot-induced bias        64.6   58.3

*p ≤ 0.004. [The remaining χ2, p, and ø values, and the continuation of the text, are cut off in the source.]