Med Care. Author manuscript; available in PMC 2015 May 1.
PMCID: PMC3981942
NIHMSID: NIHMS558263
PMID: 24556893

Relationship between Self-Assessed and Tested Non-English Language Proficiency among Primary Care Providers

Lisa Diamond, MD, MPH, Sukyung Chung, PhD, MPH, Warren Ferguson, MD, Javier Gonzalez, Elizabeth A. Jacobs, MD, MAPP, and Francesca Gany, MD, MPH

Abstract

Background

Individuals with limited English proficiency (LEP) experience poor patient-clinician communication. Most studies of language concordance have not measured clinician non-English language proficiency.

Objectives

To evaluate the accuracy of self-assessment of non-English language proficiency by clinicians compared to an oral proficiency interview.

Subjects

Primary care providers (PCPs) in California and Massachusetts.

Measures

PCPs first completed a self-assessment of non-English language proficiency using a version of the Interagency Language Roundtable (ILR) Scale, followed by the Clinician Cultural and Linguistic Assessment (CCLA), a validated oral proficiency interview. We used non-parametric approaches to analyze CCLA scores at each ILR scale level and the correlation between CCLA and ILR scale scores.

Results

Sixteen PCPs in California and 51 in Massachusetts participated (n=67). Participants spoke Spanish (79%), followed by Cantonese, Mandarin, French, Portuguese, and Vietnamese. The respondents self-assessed as having “Excellent” proficiency 9% of the time, “Very Good” 24%, “Good” 46%, “Fair” 18% and “Poor” proficiency 3% of the time. The average CCLA score was 76/100. There was a positive correlation between self-reported ILR scale and CCLA score (rho=0.49, p<0.001). The variance in CCLA scores was wider in the middle categories than in the low or high ILR categories (p=0.003).

Conclusions

Self-assessment of non-English language proficiency using the ILR correlates with tested language proficiency, particularly at the low and high ends of the scale. Participants who self-assess in the middle of the scale may require additional testing. Further research is needed to identify the characteristics of PCPs whose self-assessments are inaccurate and who thus require proficiency testing.

Keywords: Health disparities, language, doctor-patient interaction, primary care, provider behavior

Introduction

Clear communication between clinicians and patients is essential.1 Patient-clinician communication impacts patient satisfaction, adherence to recommendations, and health outcomes,2 and is influenced by language differences.3 Individuals with limited English proficiency (LEP) often experience poor patient-clinician communication.4, 5

Patient-clinician language concordance is associated with better patient satisfaction,6, 7 medication adherence,8 understanding of diagnoses and treatment,9 patient centeredness,10 and health education.7, 11 Language concordance is also associated with lower emergency room use, a lower likelihood of missed medications, and lower costs.8, 12, 13 The few studies of language concordance and cancer screening have found lower screening rates for LEP patients with language-concordant clinicians,14-16 but it is unclear whether the "language concordant" clinicians in these studies were truly fluent or had adequate non-English language skills.

There are two main structured methods used to assess clinician language proficiency.

Standardized oral proficiency interviews can formally assess a person's general speaking ability.17 Self-assessment tools include the Interagency Language Roundtable (ILR) scale, the Industry Collaboration Effort (ICE) scale, and others designed by individual investigators.10, 13, 18-20 Self-assessment tools take less time and cost less than oral proficiency interviews, but their accuracy has not been sufficiently validated. In one study, medical students' Spanish proficiency self-assessments correlated with their performance on a standardized oral fluency test.21 In Kaiser Permanente data, 86% of physicians who self-reported non-English language fluency passed an oral proficiency test.22 Other research has shown that self-assessment is problematic: in one large study of physicians, nurses, and clerical staff who had self-assessed their skills as adequate for LEP encounters, 20% demonstrated inadequate non-English language proficiency on an oral proficiency test.23 Most of these studies included only clinicians with strong self-assessed language proficiency, excluding non-fluent clinicians.

Clinician non-English language proficiency standards are needed to ensure quality communication with LEP patients. While some healthcare organizations have instituted bilingual staff language proficiency testing using interpreting skills tests,23 few have begun testing clinicians.22, 24 More studies are necessary to provide healthcare organizations with practical information on assessing clinician non-English language proficiency and inform guidelines.

To address this gap, we performed a study to evaluate the accuracy of a structured non-English language proficiency self-assessment against a validated clinician oral proficiency interview. Based on previous research,25-27 we hypothesized that clinician oral proficiency interview results on the low and high ends of the self-assessment scale would be more accurate than those in the middle.

Methods

Setting

The study was conducted in two settings, the Palo Alto Medical Foundation (PAMF) and Massachusetts Community Health Centers (MA CHCs). PAMF, a multi-specialty organization in the San Francisco Bay Area, had approximately 350 primary care providers (PCPs) at the time of the study (2010). PAMF catchment areas have over 27% LEP adults.25, 28 Approximately 25% of PAMF PCPs self-assess as fluent in a non-English language; 46% report some proficiency, most commonly in Spanish, Mandarin, and Cantonese. The five participating MA CHC study sites together care for approximately 125,000 patients; over one-third of their adult patients prefer to receive medical care in a non-English language. The most common languages spoken at the MA CHCs are Spanish, Portuguese, and Vietnamese. Both settings have large LEP patient populations but represent different practice types, geographic regions, and patient socioeconomic backgrounds. PCPs were physicians, physician assistants (PAs), nurse practitioners (NPs), and certified nurse midwives (CNMs).

Recruitment

Data were collected from 16 PCPs at PAMF and 51 at the MA CHCs (n=67). Because Cantonese, Mandarin, French, Portuguese, Spanish, and Vietnamese are the most commonly spoken patient languages at the sites, we recruited clinicians with varying self-reported levels of proficiency in these languages. PCPs were recruited via email to complete a survey that asked them to self-report their language proficiencies using the ILR, followed by testing with the Clinician Cultural and Linguistic Assessment (CCLA). Participants were given a $50 gift card or charity donation. The PAMF IRB approved the study.

Non-English Language Proficiency Assessment

We used the ILR Scale for self-assessment.18 Other organizations, such as the American Council on the Teaching of Foreign Languages (ACTFL), have adapted the ILR scale for their own proficiency guidelines, but it has not been widely adopted within healthcare. The scale consists of 5 levels, each with descriptive explanations, adapted by the authors for medical situations (Table 1).25 Completion takes less than 5 minutes.

Table 1

Adapted ILR Scale for Physicians 18

Excellent: Speaks proficiently, equivalent to that of an educated speaker, and is skilled at incorporating appropriate medical terminology and concepts into communication. Has complete fluency in the language such that speech at all levels is fully accepted by educated native speakers in all its features, including breadth of vocabulary and idioms, colloquialisms, and pertinent cultural references.
Very Good: Able to use the language fluently and accurately on all levels related to work needs in a healthcare setting. Can understand and participate in any conversation within the range of his/her experience with a high degree of fluency and precision of vocabulary. Unaffected by rate of speech. Language ability only rarely hinders him/her in performing any task requiring language; yet, the individual would seldom be perceived as a native.
Good: Able to speak the language with sufficient accuracy and vocabulary to have effective formal and informal conversations on most familiar topics. Although cultural references, proverbs, and the implications of nuances and idiom may not be fully understood, the individual can easily repair the conversation. May have some difficulty communicating necessary health concepts.
Fair: Meets basic conversational needs. Able to understand and respond to simple questions. Can handle casual conversation about work, school, and family. Has difficulty with vocabulary and grammar. The individual can get the gist of most everyday conversations but has difficulty communicating about healthcare concepts.
Poor: Satisfies elementary needs and minimum courtesy requirements. Able to understand and respond to 2-3 word entry-level questions. May require slow speech and repetition to understand. Unable to understand or communicate most healthcare concepts.

After ILR completion, clinicians were invited to take the CCLA, the only validated oral proficiency interview designed to assess clinicians’ ability to communicate directly with LEP patients in their preferred language. Other oral proficiency interviews focus either on non-medical settings or on a person's ability to function as an interpreter.29 The CCLA has been validated in 17 languages, including those we tested. Clinicians were given the test's description, practice questions, and an outline of the scoring procedure. The CCLA, available 24 hours per day, is administered via telephone, costs $100 per test, and takes about 30-40 minutes, comparable to other available tests.

Testing was paid for by The California Endowment, which funded the study. Prompts and instructions during the test are pre-recorded by native speakers to ensure reliability. Assessments are scored separately by two professional raters.24 The CCLA passing score of 80% was established by test development experts.30

Analysis

The Wilcoxon-Mann-Whitney test was used to assess equality of CCLA test scores by language; the Spearman test assessed correlation between CCLA scores and the ILR scale for the overall sample and by language. The Kruskal-Wallis squared-rank test assessed equality of CCLA score variance across ILR categories because means and variances of test scores in each ILR group were correlated. We used non-parametric approaches because the sample sizes in some groups were small and the test scores in the cells were not normally distributed.31 Language proficiency overestimation was defined as having a CCLA score <80 with an ILR scale score of "Very Good" or "Excellent." Underestimation was defined as having a CCLA score ≥80 with an ILR scale score below "Very Good." The analyses were performed using Stata version 11.2 (StataCorp, College Station, TX).
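The over/underestimation classification and rank correlation described above can be sketched as follows. This is an illustrative example with made-up data, not the study dataset; Spearman's rho is computed by hand with average-rank tie handling so the sketch stays dependency-free.

```python
# Illustrative sketch of the classification and correlation steps (made-up data).
# ILR self-assessment coded 1 ("Poor") through 5 ("Excellent"); CCLA is 0-100.
ilr = [5, 5, 4, 4, 3, 3, 3, 2, 2, 1]
ccla = [90, 85, 82, 59, 78, 45, 88, 40, 85, 15]

PASS = 80  # CCLA passing score established by the test developers

# Overestimation: self-rated "Very Good" (4) or "Excellent" (5) but CCLA < 80.
over = sum(1 for s, c in zip(ilr, ccla) if s >= 4 and c < PASS)
# Underestimation: self-rated below "Very Good" but CCLA >= 80.
under = sum(1 for s, c in zip(ilr, ccla) if s < 4 and c >= PASS)

def avg_ranks(xs):
    """1-based ranks; tied values receive the average rank of their block."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    ra, rb = avg_ranks(a), avg_ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

rho = spearman_rho(ilr, ccla)
print(over, under)  # counts of over- and underestimators in this toy sample
```

In practice the same classification and `spearmanr` call are one-liners in Stata or SciPy; the point here is only to make the paper's definitions concrete.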

Results

Sixty-seven PCPs participated. Spanish was the most common language (n=53; 79%), followed by Chinese (Mandarin and Cantonese combined, n=9; 13%), Portuguese (n=3), Vietnamese (n=1), and French (n=1) (Table 2). Most PCPs (88%) were physicians. The majority (78%) were female. The respondents rated their ILR proficiency levels as "Good" (46%), "Very Good" (24%), "Fair" (18%), "Excellent" (9%), and "Poor" (3%). The mean CCLA test score was 75.9 out of 100 (SD=15.7; range 10-92).

Table 2

Summary Statistics of Sample Characteristics

                        Freq (%) or Mean [SD]
Language
    Spanish             53 (79.1%)
    Chinese             9 (13.4%)
    Portuguese          3 (4.5%)
    Vietnamese          1 (1.5%)
    French              1 (1.5%)
Female                  52 (77.6%)
Provider title
    MD or DO            59 (88.1%)
    PA, NP or CNM       8 (11.9%)
Test score              75.9 [15.7]
ILR
    Excellent           6 (9.0%)
    Very Good           16 (23.9%)
    Good                31 (46.3%)
    Fair                12 (17.9%)
    Poor                2 (3.0%)

As illustrated in Figure 1, there was a positive correlation between self-reported ILR proficiency and CCLA score (rho=0.49, p<0.001). Twenty-three clinicians (34.3%) underestimated their skill (i.e., passed the CCLA test but self-reported below "Very Good"), and 5 clinicians (7.5%) overestimated (i.e., failed the CCLA test but self-reported as "Very Good" or "Excellent"). Respondents who self-reported "Excellent" on the ILR scored 87 on average on the CCLA, and all of them passed the CCLA test.


Figure 1. Relationship between ILR Scale and CCLA Test Score, by Language

There was only a 2-point difference in mean CCLA score between those who reported "Very Good" (80.8) and those who reported "Good" (78.5) on the ILR scale. Variance in CCLA scores was narrow for the high ILR categories, "Excellent" (SD: 3.5; range: 83-92) and "Very Good" (SD: 7.0; range: 59-91), and wider for the middle and low categories: "Good" (SD: 12.6; range: 37-92), "Fair" (SD: 16.4; range: 30-88), and "Poor" (SD: 33.2; range: 10-57) (p<0.01).

The correlation between ILR scale and CCLA scores was significant for those tested in Spanish (n=53, rho=0.45, p<0.001) and for the other languages combined (n=5, rho=0.95, p<0.05), while there was no significant correlation for those tested in Chinese (n=9, rho=0.42, p=0.25). On average, respondents tested in Spanish scored higher (77.9) than those tested in Chinese (60.8) (p<0.05).

Conclusions

We found a positive correlation between self-assessed non-English language proficiency and test scores on a validated measure of non-English language proficiency. This is in part due to anchoring at the lower and higher ends of the ILR scale. Clinicians who self-assessed their non-English language proficiency as “Fair” or “Poor” and those who reported “Excellent” appear to be more accurate than those reporting they were in the middle of the proficiency scale. We also found a difference in average scores by language, with Spanish CCLA scores higher than Cantonese or Mandarin. This may reflect a need for different CCLA passing scores by language or our participants’ lower Chinese proficiency. Our Chinese sample size was too small for further analyses. Further research will be done to identify whether the ILR scale works well for Chinese languages.

Most prior self-assessed language proficiency validation studies have only included clinicians who self-assess on the high end. In one study, 75% of medical students whose self-assessed Spanish abilities were at “intermediate,” “advanced” or “native speaker” levels accurately assessed their own Spanish proficiency levels.21 Only two of these students were native speakers – the rest were Spanish language learners. As the authors note, these participants likely represented a select group interested in pursuing a Spanish course. Kaiser Permanente found that only 86% of physicians who self-reported as “fluent” passed the CCLA, but the study was limited to physicians at this high self-reported level.22 One study assessed the relationship between physician self-rated Spanish-language ability and their Spanish-speaking patients’ reports of interpersonal care processes. Patients of physicians who rated themselves as “fluent” scored their physicians significantly higher on their ability to elicit and respond to the patients’ problems and concerns.10 However, the physician self-assessment scale was not validated. It is troubling that other studies on language concordance and quality of care have not reported their clinician language proficiency measures.8, 12, 14-16 No prior studies have attempted to assess clinicians at all levels of the non-English language proficiency spectrum.

Structured, validated tools are used less commonly outside of research. Written proficiency tests may not successfully assess oral communication ability. Bilingual staff assessment interviews are also employed, but generally without a validated tool. Many structured, non-validated oral proficiency interviews exist. According to a review of testing options (prior to publication of the CCLA, used in our study),24 approximately 35 oral proficiency tests were available, but most evaluated interpreting skills, not direct patient-clinician interactions.29 Most were based on the ACTFL guidelines, which led to the creation of national standards for foreign language learning. Tests ranged from $50-325 and took between 10 minutes and 4 hours. Although no studies have examined how frequently these approaches are applied in practice, organizations are likely using informal assessments without understanding their limitations.

Healthcare organizations could use the adapted ILR to screen clinicians who wish to use their non-English language skills with patients. Policies about the use of non-English language skills among physicians at the low and high ends of self-assessed language proficiency should be clear.26 Those at the lower end should always use professional interpreters. Those at the higher end may be able to use their own non-English language skills without proficiency testing. Both groups should document their use of interpreters or their own skills in the patients’ records. Setting policy for clinicians in the middle range is less clear. Our findings suggest that self-assessment and oral proficiency testing for such clinicians do not correlate well. Further research is needed to assess the impact of using middle range non-English language skills with LEP patients on outcomes. It is also not clear why some clinicians are inaccurate in their non-English language self-assessments using the ILR.

Our study has limitations. It was small and focused only on language proficiency. We are currently conducting research to better understand how clinician demographics, non-English language acquisition, and interpreter use vary by non-English language proficiency. The ILR scale, while validated, is not usually self-administered although it has been used this way in non-medical settings.18 The ILR was adapted by the authors to address clinician-patient interactions.25 However, there are no existing, validated self-assessment tools for clinician non-English language proficiency that could be substituted.

Language concordance improves healthcare quality. It is essential that healthcare organizations and providers know how to accurately measure clinicians’ non-English language proficiency. There is a need for further research to determine what level of clinical interaction is acceptable for clinicians who score in the middle of the ILR. Future research will need to determine best practices for partially bilingual clinicians.

Acknowledgments

Funding: National Cancer Institute R21 (1 R21 CA168489-01), The California Endowment Grant # 20082043

Contributor Information

Lisa Diamond, Immigrant Health and Cancer Disparities Service, Department of Psychiatry and Behavioral Sciences and Department of Medicine, Memorial Sloan-Kettering Cancer Center, 1275 York Ave., New York, NY 10021 Department of Public Health, Weill Cornell Medical College, 402 E. 67th St., New York, NY 10065.

Sukyung Chung, Palo Alto Medical Foundation Research Institute, 795 El Camino Real, Ames Building, Palo Alto, CA 94301-2302.

Warren Ferguson, Department of Family Medicine and Community Health, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655.

Javier Gonzalez, Immigrant Health and Cancer Disparities Service, Department of Psychiatry and Behavioral Sciences, Memorial Sloan-Kettering Cancer Center, 1275 York Ave., New York, NY 10021.

Elizabeth A. Jacobs, Department of Medicine & Health Innovation Program, University of Wisconsin- Madison, 800 University Bay Drive, Suite 210, MC 9445, Madison, WI 53705.

Francesca Gany, Immigrant Health and Cancer Disparities Service, Department of Psychiatry and Behavioral Sciences and Department of Medicine, Memorial Sloan-Kettering Cancer Center, 1275 York Ave., New York, NY 10021 Department of Public Health and Medicine, Weill Cornell Medical College, 402 E. 67th St., New York, NY 10065.

References

1. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: The National Academies Press; 2001. [Google Scholar]
2. Stewart M, Brown JB, Boon H, Galajda J, Meredith L, Sangster M. Evidence on patient-doctor communication. Cancer Prevention & Control. 1999 Feb;3(1):25–30. PubMed PMID: 10474749. [PubMed] [Google Scholar]
3. Institute of Medicine . Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. National Academies Press; Washington, DC: 2003. [PubMed] [Google Scholar]
4. Lauderdale DS, Wen M, Jacobs EA, Kandula NR. Immigrant perceptions of discrimination in health care: the California Health Interview Survey 2003. Medical care. 2006 Oct;44(10):914–20. PubMed PMID: 17001262. [PubMed] [Google Scholar]
5. Wisnivesky JP, Kattan M, Evans D, Leventhal H, Musumeci-Szabo TJ, McGinn T, et al. Assessing the relationship between language proficiency and asthma morbidity among inner-city asthmatics. Medical care. 2009 Feb;47(2):243–9. PubMed PMID: 19169126. [PubMed] [Google Scholar]
6. Green AR, Ngo-Metzger Q, Legedza AT, Massagli MP, Phillips RS, Iezzoni LI. Interpreter services, language concordance, and health care quality. Experiences of Asian Americans with limited English proficiency. Journal of general internal medicine. 2005;20(11):1050–6. [PMC free article] [PubMed] [Google Scholar]
7. Ngo-Metzger Q, Sorkin D, Phillips R, Greenfield S, Massagli M, Glarridge B, et al. Providing High-Quality Care for Limited English Proficient Patients: The Importance of Language Concordance and Interpreter Use. Journal of general internal medicine. 2007;22(Suppl 2):324–30. [PMC free article] [PubMed] [Google Scholar]
8. Manson A. Language Concordance as a Determinant of Patient Compliance and Emergency Room Use in Patients with Asthma. Medical care. 1988;26(12):1119–28. [PubMed] [Google Scholar]
9. Baker DW, Parker RM, Williams MV, Coates WC, Pitkin K. Use and effectiveness of interpreters in an emergency department. JAMA : the journal of the American Medical Association. 1996 Mar 13;275(10):783–8. PubMed PMID: 8598595. [PubMed] [Google Scholar]
10. Fernandez A, Schillinger D, Grumbach K, Rosenthal A, Stewart AL, Wang F, et al. Physician language ability and cultural competence. An exploratory study of communication with Spanish-speaking patients. Journal of general internal medicine. 2004 Feb;19(2):167–74. PubMed PMID: 15009796. [PMC free article] [PubMed] [Google Scholar]
11. Eamranond PP, Davis RB, Phillips RS, Wee CC. Patient-Physician Language Concordance and Lifestyle Counseling Among Spanish-Speaking Patients. Journal of Immigrant and Minority Health. 2009;11(6):494–8. [PubMed] [Google Scholar]
12. Carter-Pokras O, O'Neill MJ, Cheanvechai V, Menis M, Fan T, Solera A, et al. Providing linguistically appropriate services to persons with limited English proficiency: a needs and resources investigation. American Journal of Managed Care. 2004 Sep;10 Spec No:SP29-36. PubMed PMID: 15481434. [PubMed] [Google Scholar]
13. Jacobs E, Sadowski L, Rathouz P. The impact of an enhanced interpreter service intervention on hospital costs and patient satisfaction. Journal of general internal medicine. 2007 Nov;22(Suppl 2):306–11. [PMC free article] [PubMed] [Google Scholar]
14. Eamranond PP. Patient-physician language concordance and primary care screening among Spanish-speaking patients. Medical care. 2011;49(7):668–72. [PMC free article] [PubMed] [Google Scholar]
15. Jo A, Maxwell A, Wong W, Bastani R. Colorectal cancer screening among underserved Korean Americans in Los Angeles County. Journal of Immigrant and Minority Health. 2008;10(2):119–26. [PMC free article] [PubMed] [Google Scholar]
16. Linsky A. Patient-provider language concordance and colorectal cancer screening. Journal of general internal medicine. 2011;26(2):142–7. [PMC free article] [PubMed] [Google Scholar]
17. American Council on the Teaching of Foreign Languages: Testing for Proficiency. 2013 [cited 2013 June 29]. Available from: http://www.actfl.org/professional-development/certified-proficiency-testing-program/testing-proficiency.
18. Interagency Language Roundtable. 2012 [cited 2013 May 9]. Available from: http://www.govtilr.org.
19. Tuot DS, Lopez M, Miller C, Karliner LS. Impact of an easy-access telephonic interpreter program in the acute care setting: an evaluation of a quality improvement intervention. Joint Commission journal on quality and patient safety / Joint Commission Resources. 2012 Feb;38(2):81–8. PubMed PMID: 22372255. Epub 2012/03/01. eng. [PubMed] [Google Scholar]
20. Better Communication, Better Care: Provider Tools to Care for Diverse Populations: Industry Colaboration Effort. 2010 [cited 2012 January 5]. Available from: http://www.iceforhealth.org/library/documents/ICE_C&L_Provider_Toolkit_7.10.pdf.
21. Reuland D, Frasier P, Olson M, Slatt L, Aleman M, Fernandez A. Accuracy of Self-assessed Spanish Fluency in Medical Students. Teaching and Learning in Medicine. 2009;21(4):305–9. [PubMed] [Google Scholar]
22. Tidwell L. Health Care Interpreter Network: From Ad-Hoc to Best Practices in Healthcare Interpreting. Oakland, CA: Jul 16-17, 2009. Kaiser Permanente-Southern California Physicians Language Concordance Program: Meeting the Needs of LEP Patients. [Google Scholar]
23. Moreno M, Otero-Sabogal R, Newman J. Assessing dual-role staff-interpreter linguistic competency in an integrated healthcare system. Journal of general internal medicine. 2007;22(Suppl 2):331–5. [PMC free article] [PubMed] [Google Scholar]
24. Tang G, Lanza O, Rodriguez F, Chang A. The Kaiser Permanente Clinician Cultural and Linguistic Assessment Initiative: research and development in patient-provider language concordance. American Journal of Public Health. 2011;101(2):205–8. [PMC free article] [PubMed] [Google Scholar]
25. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health services research. 2012 Feb;47(1 Pt 2):556–69. PubMed PMID: 22091825. Epub 2011/11/19. eng. [PMC free article] [PubMed] [Google Scholar]
26. Diamond LC, Reuland DS. Describing Physician Language Fluency: Deconstructing Medical Spanish. JAMA : the journal of the American Medical Association. 2009;301(4):426–8. [PubMed] [Google Scholar]
27. Diamond LC, Tuot DS, Karliner LS. The use of Spanish language skills by physicians and nurses: policy implications for teaching and testing. Journal of general internal medicine. 2012 Jan;27(1):117–23. PubMed PMID: 21773850. Pubmed Central PMCID: PMC3250531. Epub 2011/07/21. eng. [PMC free article] [PubMed] [Google Scholar]
28. US Census Bureau: Profiles of General Demographic Characteristics, 2000 Census of Population and Housing. 2001 [cited 2013 October 1]. Available from: http://www.census.gov/prod/cen2000/dp1/2khus.pdf.
29. Language Testing Options: Hablamos Juntos/Robert Wood Johnson Foundation. 2008 [cited 2012 November 19]. Available from: http://www.hablamosjuntos.org/resourcecenter/pdf/Language_Testing_Options.pdf.
30. Employment Tests and Selection Procedures: The U.S. Equal Employment Opportunity Commission. 2008 [updated June 23; cited 2009 July 15]. Available from: http://www.eeoc.gov/policy/docs/factemployment_procedures.html.
31. Conover WJ. Practical Nonparametric Statistics. 3rd ed. Wiley; 1999. [Google Scholar]