Am J Health Behav. Author manuscript; available in PMC Sep 1, 2012.
Published in final edited form as:
PMCID: PMC3375173

Developing an Oropharyngeal Cancer (OPC) Knowledge and Behaviors Survey

Virginia J. Dodd, PhD, MPH, Associate Professor, Joseph L. Riley, III, PhD, Professor, and Henrietta L. Logan, PhD, Professor and Director



Objectives: To use the community participation research model to (1) develop a survey assessing knowledge about mouth and throat cancer and (2) field test and establish test-retest reliability of the newly developed instrument.


Methods: Cognitive interviews with primarily rural African American adults were used to assess their perception and interpretation of survey items. Test-retest reliability was established with a racially diverse rural population.


Results: Test-retest reliabilities ranged from .79 to .40 for screening awareness and .74 to .19 for knowledge. Coefficients increased for composite scores.


Conclusions: Community participation methodology provided a culturally appropriate survey instrument that demonstrated acceptable levels of reliability.

Keywords: community participation research, oral cancer awareness, survey development, health disparities, oral cancer screening

Each year, oropharyngeal cancer (OPC) is diagnosed in about 35,000 Americans, and approximately 8,000 people die of these malignancies.1 The overall incidence and mortality rates for OPC are 10.4 per 100,000 and 2.9 per 100,000, respectively. Data suggest that significant disparities exist, with both African American males and females more likely to be diagnosed at later stages than are white males and females. Between 1992 and 2001, OPC was the 4th most common cancer and ranked 10th among the most common causes of death in African American men.2 Currently, the National Institute of Dental and Craniofacial Research (NIDCR) Web site describes African American men as one of the groups at highest risk for developing OPC.3 Despite advances in surgery, radiation, and chemotherapy, the 5-year relative survival rate has not improved dramatically and remains at about 50%.1

In most cases, visible changes in the oral mucosa precede OPC development, allowing clinicians to detect and effectively treat early intraepithelial stages of carcinogenesis.4 Survival varies dramatically by the stage of disease, making cancer diagnosis at a premalignant or early stage crucial to reducing morbidity.5 Nevertheless, most oral cancers are currently detected at late stages, with more than 50% of OPC in the United States diagnosed at late stages. For example, among adults with cancer of the tongue, in approximately 70% of African Americans and 53% of whites, the cancer had metastasized to the regional lymph nodes or a more distant site at the time of diagnosis.6 Similar rates of late-stage diagnosis also were reported for cancer of the floor of the mouth and cancers affecting the gingiva, palate, buccal mucosa, or vestibule. The 5-year relative survival rate among cases that were localized at diagnosis shows disparities between African Americans and whites in both males (56% vs 66%) and females (64% vs 71%). Among the tongue cancer cases that were found to have spread to regional lymph nodes at diagnosis, survival again was lower among African Americans than whites (21% vs 37% for males and 28% vs 38% for females).6

The principal screening test for OPC consists of inspection and palpation of the oral cavity. Early detection and treatment of precancerous lesions are considered by most experts to be the best approach to secondary prevention of these cancers.7 The American Dental Association suggests an annual OPC examination for adults ages 40 and older or for anyone at high risk of developing the disease.8 Although an examination is fast and usually painless and the number of OPC screenings is on the rise, many people report never having been examined for OPC.8 Exam prevalence is lower among African Americans than whites and among current smokers than former smokers.9 In 1992, only 14% of the adult population indicated ever having been examined for OPC.10 However, more recent studies report higher lifetime OPC screening rates approaching 30%.11-13

Identified reasons for low OPC screening rates include an overall lack of public awareness of the disease signs, symptoms, and risk factors. Low screening rates place certain groups, such as males, African Americans, and people of lower socioeconomic status, at an increased risk for OPC. In the African American community, lack of knowledge is further complicated by issues relating to distrust, misinformation, and perceptions of racism in the medical establishment.14 Furthermore, poorer oral health among rural Americans has been well documented,15 making living in a rural residential setting an additional risk factor for OPC.

Clearly, increasing the number of OPC examinations among African Americans and other at-risk population groups is needed if existing oral health disparities are to be addressed. Accurately assessing awareness of the signs and symptoms of and risk factors for OPC among members of at-risk populations is critical.

The literature offers compelling arguments for reassessing survey validity and reliability, especially when collecting survey data from different ethnic or racial groups.16,17 Adebimpe’s work in mental health notes significant differences between mental disorder prevalence rates among blacks and whites, but offers other data indicating only a modest difference in prevalence rates.16 The author refers to these misperceptions as creating “unintended inequities” and attributes them to the lack of survey items assessing differences in treatment experiences among blacks and whites.16 The “questionable disparities” produced by the data lack validity because they are artifacts of the survey design. Also alluded to in the literature are “questionable disparities” attributed to survey designs that fail to include questions asking about the presence of unique or more prevalent issues among those of different ethnicities.17

These assertions have implications for oral health disparities research. Previous studies have not been specific to priority populations and have overlooked racial differences in care seeking and service access. In light of this, reasons for stagnant OPC relative survival rates among African American men may be related to issues not yet revealed by current survey design and methodology. Clearly, development of culturally tailored, culturally sensitive, and issue-responsive instruments is critical for identifying valid intervention points and effective oral health interventions.

To accomplish this, use of a valid and reliable survey instrument, consisting of easily comprehended questions, as well as questions with meaning and relevance among members of the target population, is a priority. Valid and reliable survey instruments require clear and unambiguous wording to ensure responses that relate to the questions asked. Involvement of the target population during survey development and/or initial pilot testing for comprehension and adequate response options via focus groups and/or individual interviews is necessary for gaining this crucial in-depth understanding of the target audience.

Absent from this area of scientific inquiry are valid and reliable survey instruments that have been designed within a theoretical framework. Theoretical frameworks allow researchers to better interpret findings and to make inferences regarding the likelihood of changes in knowledge, attitudes, and behavior. Additionally, health behaviors such as seeking preventive screenings are influenced by community norms and peer interactions, which suggests promise for the use of community-based participatory research interventions to affect and quantify meaningful behavioral change. This project is part of a longitudinal study built on the Elaboration Likelihood Model (ELM)18 and designed to test an OPC awareness campaign primarily targeted at African American men aged 30 and older living in rural communities in north central Florida. This paper describes the methodology for community participation in the development and field testing of a survey instrument assessing OPC awareness. The specific goals were to establish a survey instrument for measuring OPC awareness, knowledge, and screenings that consisted of theoretically framed items and response options that were culturally appropriate and comprehensible and to test the reliability of established psychosocial instruments.


Following university Institutional Review Board approval, the development of a culturally appropriate and relevant survey instrument took place in 2 phases, with each phase using input from the study population. Using focus group methodology, Phase 1 aimed to develop and adapt survey items based on previously developed surveys appearing in the literature,10 whereas Phase 2 employed field testing and establishment of test-retest reliability and validity of the measures. Figure 1 demonstrates community engagement in the project and outlines the iterative developmental process.

Figure 1
Community Involvement in Developmental Process

Community Involvement

The community was engaged in several ways, with initial input sought from community leaders, local residents, and groups. A resident from one of the participating communities was employed by the project as a community liaison. The community liaison is the face of a community-based project and serves as a bridge between the project and the community. The community liaison is critical to successful participant recruiting for focus groups and the garnering of interest within the community.

Phase 1: Survey Item Development

Focus groups

Focus groups were conducted to gain input from the target population on survey item development. The specific intent of the focus groups was to identify unclear words or phrases, review different versions of the same question, consider response formats, and explore how the questions worked in eliciting responses. The extent of participant-perceived response burden was also assessed.

The first 2 focus groups guided development of survey content and explored the audiences’ perceived meanings of OPC and associated terminology. The subsequent 2 focus groups concentrated on seeking input and guidance relating to refinement of the survey items and response options.

Community residents were recruited to participate in both phases of the focus groups through fliers posted in various community locations and by word of mouth. The community liaison coordinated focus group participation and location. Participants were primarily African American community members 30 years of age and older. Four focus groups (N=34) consisting of between 7 and 9 people were conducted (2 groups in each phase) in March, April, and May of 2009 at locations in 2 rural areas of north central Florida. Participants were provided a meal and a $35 gift card for approximately 60 minutes of their time. The study included 2 focus groups (N=17) consisting of men, the primary focus of the intervention, and 2 consisting of women (N=17). The inclusion of women in the study stems from previous work in this area8,19 demonstrating the prominent role of African American women in managing their families' health and establishing them as an influential secondary audience. Therefore, understanding their knowledge of and attitudes toward OPC, as well as screening behaviors, was important for subsequent message development and information targeting.

Item development

Initially, an item pool of 98 questions encompassing OPC awareness, symptoms, risk factors, and screenings was generated through a review of previous OPC surveys found in the literature and on the National Institute of Dental and Craniofacial Research Web site,9 as well as in consultation with a team of content and methodology experts. The preliminary items were assembled for use in cognitive interviews with focus groups to assess the perception, usefulness, and interpretation of each item and to gain insight and understanding of the general knowledge level of OPC among the study population.

Prior to the focus groups, all potential survey questions and response options were recorded by telephone interviewers employed by the company retained to administer the final computer-assisted telephone interview (CATI). The purpose of this approach was to allow focus group participants to hear and respond to questions in a format mirroring that of the final survey administration. This approach also allowed participants to guide not only the content of the questions but also the clarity of enunciation and the speed with which the interviewer asked them. The focus group moderator provided the introduction to the study and explained the purpose of the focus group, stating explicitly that the group's purpose was to test only the survey questions, not participants' OPC or health-related knowledge. Participants were asked to listen to a recording of each question and its response options. After hearing each question, participants were asked if they understood the question, if they could paraphrase it, or if they thought it should be asked in another way. They were also asked if they could answer the question using the response options provided. The number of times the group could ask to hear the questions was not limited. This cognitive interviewing approach allowed tailoring of the survey items to better fit the culture, health literacy levels, and demographics of the study population and allowed participants to judge the acceptability of the interviewers' conversational clarity and speed. Participants were also able to judge which types of questions could and could not be asked.20 A content analysis of the verbatim focus group transcripts was used to identify problems with the text, phrasing, and format of the questions and accompanying responses.

Following completion of the focus groups, the research team met to review the findings and to develop the final survey instrument. Survey questions included those developed by the research team as well as those used in previous surveys. In previously used questions, when indicated, changes were guided by audience input. Varied interpretations of the questions were considered, and whenever possible, questions were revised to reflect the language used by members of our target population.

The final survey consisted of 38 questions, with 3 relating to OPC awareness, 18 pertaining to knowledge of risk factors, 11 measuring knowledge of symptoms, and 6 asking about history of oral cancer. Members of the community advisory committee reviewed the final items for clarity and comprehension.

Phase 2: Survey Field Test

The Bureau of Economic and Business Research (BEBR) conducted telephone interviews with adults in 4 rural census tracts in 3 northern Florida counties (Columbia, Alachua, and Gadsden) between August and September 2009 to pilot test the survey instrument and to assess test-retest reliability. CATI software was used to administer the survey, which allowed for the managing, scheduling, and rescheduling of calls. Listed phone numbers, purchased for the census tracts, were dialed up to a maximum of 10 times throughout the week. Respondents were chosen based on a series of screening questions, with only those 25 years or older eligible for the survey. The first option for an eligible respondent was the oldest male in the household. If no male was in the household or if the oldest male was under 25 or declined participation, the person answering the phone was asked to take part in the survey—provided that person was eligible. Once respondents completed the survey, they were told they would be receiving a call in several days to complete the survey again. Respondents were asked to indicate the best day and time for the second call. Respondents were mailed a $15 gift card promptly so that most gift cards were received before the second interview.

Phone numbers for the 93 households with completed interviews were reloaded in CATI and redialed 7 days later to assess test-retest reliability. If 10 consecutive CATI calls were unanswered, numbers were dialed manually at the times respondents specifically identified as best to reach them. Eighty-eight respondents took part in the second interview. Respondents were mailed a second $15 gift card for completing the second interview.

The dispositions for not participating in the retest were refusals, answering machines, and no answers. Throughout the initial survey and retest, research team members observed the live administration of the survey to (1) ensure quality control of the survey administration process and (2) assess the presence of any difficulties, discrepancies, and errors experienced by participants or interviewers during the survey process.

Survey reliability

The following standardized scales with known reliability were administered to allow comparison of reliability coefficients between these measures and the developed OPC instrument, as well as to establish reliability of these established indices with the population under study:

The Center for Epidemiologic Studies Depression Scale – 10 (CESD-10), a short version of the self-report CES-D, provides a total score for the assessment of depressive symptoms in nonclinical samples and is appropriate for telephone administration.21

Medical Outcomes Study, Social Support Short Scale (MOS-SS) provides a global measure of functional social support.22

The Medical Outcomes Study Short Form-8 questionnaire (SF-8) provides a measure of health-related quality of life when summed across all 8 items.22

Socioeconomic status (SES) was determined by the sum of 2 questions that have been shown to be a valid measure of financial status in several longitudinal studies of oral health.23,24

Data analyses were performed using Statistical Package for the Social Sciences (SPSS) software. Frequency distributions were calculated for each of the OPC items. Pearson correlation coefficients were calculated for respondents’ scores from the first (T1) and second (T2) administrations of the survey.
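As a sketch of the analysis described above, the test-retest correlation can be reproduced outside SPSS with a plain Pearson computation. The scores below are hypothetical composite knowledge scores for 8 respondents, not data from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired T1/T2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical composite knowledge scores (number of items answered
# correctly) for the same respondents at T1 and at T2 one week later.
t1 = [12, 9, 15, 7, 11, 14, 8, 13]
t2 = [13, 8, 15, 6, 12, 13, 9, 14]
print(round(pearson_r(t1, t2), 2))  # → 0.95
```

In practice this calculation would be run once per survey item and once per composite score, yielding the coefficient ranges reported in the Results.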


Tailoring Methodology to Study Population

Data collection methodologies considered for this work included individual cognitive interviews and focus groups. Although there are advantages and disadvantages inherent in each choice, transportation challenges, low levels of literacy, and limited participant availability due to job constraints made focus groups the more feasible approach. Additionally, because these groups took place early in the study, levels of trust between the community and the research team were still developing. Our advisory group felt community members would feel more at ease participating in focus groups along with others from their community, rather than in individual interviews. Use of focus groups provided a type of "verbal group think" that proved valuable in gaining comments from most of the focus group participants. An unexpected benefit of this approach was the familiarity of participants with one another. Although most participants were not close acquaintances, they recognized fellow community members. While completing demographic forms and other paperwork, participants of lower literacy were more likely to seek help with form completion from other participants than from the research staff. In some cases, group members recognized signs of low literacy and proactively extended aid, saving participants from any embarrassment they may have felt expressing their lack of functional literacy to the research staff.

Generally, during individual cognitive interviews, respondents often express embarrassment or disappointment at their inability to interpret the meaning of, or respond appropriately to, the many questions posed by the interviewer, a person they perceive to know all the answers. Typically, their repeated inability to respond is attributed to a personal attribute, namely an inability to read well, instead of to poorly worded questions. When individual interviews are used to test lengthy surveys, the likelihood of a respondent-initiated premature conclusion to the interview increases, as each question posed may intensify individual feelings of inferiority. In the original survey, many questions were not easily comprehended; use of focus group interviewing lessened participants' individual response burdens. The wisdom of this decision was evidenced during the focus groups, as participants sometimes expressed audible relief once they realized the lack of understanding was universal among the group. When this occurred, the lack of understanding was immediately attributed to a bad question and not to individual cognitive deficiencies. Overall, focus groups allowed for a richer and more productive information exchange than would occur during an individual interview. Focus group participants departed the group expressing interest and enjoyment, rather than leaving alone and wondering "how they did on the test."

Survey Item Development

Community involvement in the survey development process had a considerable impact on the final product. The first round of focus groups examined and defined terms to be used in the final survey instrument and subsequent parts of the intervention.

Emerging challenges from the focus groups included the need to clarify meanings and definitions of words and phrases. For instance, focus group participants lacked understanding of the term disease state. Also, participants did not understand what terms like head and neck cancer encompassed. Overall, most participants defined head and neck cancer as including facial skin cancer or possibly brain cancer; inclusion of the teeth, tongue, and/or throat were absent from their definitions. When the moderator provided the definition, focus group participants agreed that the term mouth and throat cancer was more easily understood. Early questions asked participants about their experiences with “oral cancer screening,” specifically, whether they had ever been “screened” for oral cancer. To improve comprehension, participants suggested replacing the term screening with exam. Other problems involved the use of formal terminology in the survey such as influenza virus infection. Overwhelmingly, participants advised use of the term the flu. Interestingly, a previously developed survey item used the term HPV for human papilloma virus. Although all focus group participants could define the acronym HIV, none knew the meaning of HPV. However, participants were familiar with the term human papilloma virus. As a result, references to HPV in the survey were replaced with “human papilloma virus.”25

In addition to the survey questions, participant comprehension of response options was tested. Overall, focus group participants preferred concrete response options. For example, existing survey response options used the term older age when asking about risk factors for OPC. Focus group participants had difficulty defining older age and suggested that adding an age anchor such as "over the age of 60" would improve clarity. Original survey questions used 5-point Likert scale response options. Overwhelmingly, focus group participants had difficulty with the "somewhat likely" and "somewhat unlikely" response options. One person stated, with others agreeing, "Either you know it or you don't; somewhat unsure doesn't make sense." As a result, final response options were revised to include a 3-point scale of "yes," "no," and "not sure."

Survey Field Test

Survey field testing occurred during Phase 2 of this project. The baseline administration of the survey (T1) produced 93 completed interviews and yielded a response rate of 32%. Of the total sample of 330 numbers dialed, 291 numbers were deemed eligible; 39 were ineligible because they were a fax or data line, nonworking, disconnected, or a place of business. The mean duration of the survey was 21.8 minutes. All subjects who started the survey completed it and appeared to understand and respond thoughtfully to all items. The 93 completed interviews comprised 74 whites (80%), 13 African Americans (14%), and 6 other (6%), by race, and 43 males (46%) and 50 females (54%), by sex. The census tracts sampled consisted of 4653 whites (59%), 2590 African Americans (33%), and 604 other (8%), by race, and 3742 males (48%) and 4105 females (52%), by sex. The average age of respondents in the initial telephone interview was 56.2 and ranged from 25 to 94; 62 of the respondents reported having at least a high school education and 31, less than 12 years of education. The 7 nonrespondents in the retest administration (T2) consisted of 6 whites and 1 African American, by race, and 3 males and 4 females, by sex.

Test-retest coefficients for the OPC awareness and screening questions are presented in Table 1. Test-retest was not calculated for the single item asking about having had an OPC exam because the T1 questions used to determine whether an examination had occurred potentially contaminated the T2 assessment. Reliability coefficients ranged from .79 for lifetime mouth and throat cancer screening to .40 for having heard of a mouth and throat cancer examination. Participants’ responses changed significantly for questions pertaining to how much they knew about OPC (t=2.385, P=.02) and whether they had ever heard of an OPC exam (t=2.196, P=.04), both of which reflect increased awareness at T2. No T1/T2 differences for the other awareness or exam questions were found.

Table 1
Test-retest and Stability of Responses for Oral Cancer Awareness and Oral Cancer Examination Items

Test-retest coefficients for the OPC risk factor questions ranged from .74 for using smokeless tobacco to .39 for smoking cigarettes, pipes, or cigars, with no significant T1/T2 differences found (Table 2). At the composite score level, the T1/T2 reliability coefficient for risk factors was .79, which included correct identification of the 7 actual risk factors, and .71 for non-risk factor identification.

Table 2
Oral Cancer Risk Factors

Table 3 shows the test-retest coefficients for signs and symptoms of OPC. Coefficients for positive signs ranged from .63 for swelling in the neck or throat to .19 for a sore or ulcer in the mouth that does not go away. At the composite score level, the T1/T2 reliability coefficient for the actual signs and symptoms was .70, and the coefficient for correctly identifying the foils was .77. Participants tended to provide more positive responses at T2 than T1. Ten respondents increased their ratings of “how much they know about oral cancer” (t=2.375, P=.02) as well as for “have you ever heard of an oral cancer examination?” (t=2.320, P=.02), compared to only 1 who reported less knowledge at T2. A similar pattern also was seen for the risk and symptoms questions, but was statistically significant only for “cough that does not go away” (t=2.144, P=.03).

Table 3
Oral Cancer Signs and Symptoms

In considering T1 responses, no gender, race, or ethnic differences were found in the composite scores for correctly identifying risk factors, signs, or symptoms. Test-retest coefficients for gender, race, and ethnicity were all r=1.0. Test-retest coefficients for the pre-existing questionnaires were CESD-10, r=.81, mean=14.1, SD=4.9; SF-8, r=.89, mean=17.0, SD=6.6; and MOS-SS, r=.70, mean=19.0, SD=4.9. The reliability coefficient for the SES index was .93.


This study is the first stage of a translational project building on previous work by the research team.8,12,19 Focus group participants influenced changes in both questions and response items that resulted in increased comprehension among respondents.

Previous studies employed 4-category and 2-category response formats for OPC knowledge questions. Horowitz13 and Tomar12 each employed a 4-category risk response format similar to the 1990 National Health Interview Survey (NHIS)26 items that ranged from "definitely" increases to "definitely does not" increase a person's chance of getting mouth or lip cancer. In the North Carolina study, Patton switched to a "yes" or "no" format to identify 3 risk factors or 2 nonrisk factors after confusion arose with the 4-category response format during preliminary testing.11 Focus groups in the present study uncovered a similar concern, prompting the research team to discuss reducing the number of response options. Although the optimal number of response options can be debated, the team agreed that, despite the statistical advantages of a 5-point scale, the greater comprehensibility of the 3-point scale was less likely to compromise item reliability. The improved level of comprehension among respondents therefore warranted use of the "yes," "no," and "not sure" format. Field testing produced findings similar to those of other national or state-level surveys,11-13,26,27 supporting the construct validity of the adapted items. Test-retest coefficients for the OPC knowledge composite scores and screening history questions suggest acceptable reliability.

The survey field test sample reflected the census tract characteristics, with only two-thirds of the respondents possessing high school diplomas. This education level is similar to that of survey respondents in North Carolina and urban Florida, where high school education was reported by 62% and 85% of respondents, respectively.11,12

The use of preliminary testing of the NHIS items is not unique, because some form of pilot testing has been conducted in subsequent state-level studies. For example, the items used in the Horowitz study were pretested on a convenience sample of 12 Maryland adults and revised, with the new instrument pilot-tested on 30 adults and then modified slightly, although the modifications were not described.13 Patton's study reported that items were modified in some content areas and pretested on a random sample of 42 adults in North Carolina.11 Tomar indicated that telephone interviews were conducted with a small number of respondents to ensure the interviewers and subjects understood the questions.12 Specific details of preliminary testing in these studies have not been published.

As evidenced in this study, the test-retest method raises the potential for bias. Because respondents learned or began thinking about OPC as they answered questions at T1, participation in the survey may have increased their awareness of OPC. T2 responses may have been influenced as a result of their heightened awareness.

Although reliability was low (r=.46) for the question asking whether the respondent had ever had an OPC screening, reliability improved to an acceptable level (r=.79) upon the added explanation that a screening entailed a doctor or dentist pulling on a patient’s tongue and feeling under the tongue and inside the cheeks (Table 1). Thus, learning about what constitutes an OPC examination likely biased responses on the second administration of the survey. A number of individual knowledge items—often those with a higher number of correct responses—lacked satisfactory reliability in a test-retest paradigm. However, the coefficients improved to acceptable levels when composite scores were calculated, supporting strategies used by several other studies to combine responses into a single measure of knowledge.23,24 Whereas the reliability coefficients for the CESD-10 and SF-8 were the 2 highest, the OPC composite scores were in the range achieved by other measures, which suggests that respondents took participation in this pilot study seriously.
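The gain in reliability from combining items into composite scores is consistent with classical test theory. The Spearman-Brown formula below is included only as an illustration of this effect (the paper itself does not report such a calculation), and the item reliability of .30 is a hypothetical value in the range of the weaker individual knowledge items:

```python
def spearman_brown(r_item, k):
    """Predicted reliability of a composite of k parallel items,
    each with test-retest reliability r_item (Spearman-Brown formula)."""
    return k * r_item / (1 + (k - 1) * r_item)

# A single knowledge item with modest test-retest reliability...
print(round(spearman_brown(0.30, 1), 2))  # → 0.3
# ...reaches an acceptable level when 7 such items form a composite.
print(round(spearman_brown(0.30, 7), 2))  # → 0.75
```

This illustrates why individually unreliable items can still yield a composite score with coefficients in the .70–.79 range reported here.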

Test-retest reliability for CESD-10 and SF-8 showed coefficients of greater than .8, indicating that these measures function acceptably well with this rural population. The MOS-SS test-retest coefficient is of greater concern, and additional work with this scale may be warranted.

Despite the small sample used in the test-retest phase of the project, comparison of study findings provides evidence for construct validity of the survey questions, with results consistent with those of national surveys26 and state surveys conducted in Florida,12 Maryland,13 North Carolina,11 and New York.27 In the present study, the percentages of respondents correctly identifying smoking (96%) and alcohol (53%) as risk factors for OPC were similar to those in 2 earlier studies, in which respondents correctly identified smoking (95% and 94%, respectively)12,27 and alcohol consumption (44% and 49%, respectively)12,27 as OPC risk factors. However, greater variability was observed between the present study and that of Patton11 for the risk factors of excessive sun exposure (40%, 63%), eating spicy foods (69%, 82%), and biting lips (40%, 65%). Knowledge of OPC (95%) and having heard of an OPC examination (46%) were more common among the pilot study respondents than among those in the Horowitz13 study (85%, 21%), although increased awareness over time may account for much of the difference. Among T1 respondents reporting receipt of an OPC examination, 63% reported that those examinations took place in dental offices, consistent with earlier studies in New York (72%), Maryland (64%), and North Carolina (61%); however, those studies preceded this one by 8 to 14 years.

In summary, we believe the careful development of these survey items has resulted in an instrument that will be useful in future research for assessing public awareness of OPC signs, symptoms, and risk factors, particularly in rural areas where lower levels of education or socioeconomic status characterize many of the residents.3 Focus group participants overwhelmingly advised us of the need for simple and concrete response options. In addition, they repeatedly stressed the need for questions written in everyday language to facilitate understanding among respondents of any age. The discussions also brought to light that lay people do not recognize or understand all acronyms, and that some acronyms carry different meanings across population groups. As input from the focus groups improved question clarity and content, participants showed increased confidence and willingness to answer the more easily comprehended questions. Based on these findings, a reasonable assumption is that survey fatigue and the resulting noncompletion rates will decline when surveys are better tailored to the population of interest. Previous research points to an association between individual literacy levels and item nonresponse and stresses the importance of accounting for population literacy level during the survey design process.28

As a result, survey questions and response options may generalize less well across populations of interest than previously thought. Assessing comprehension of, and comfort with, survey items in different demographic and regional populations is necessary for valid and reliable survey findings, as well as for the improved response rates that yield data able to appropriately guide health promotion campaigns. Survey items should be written using the language characteristics and preferences of the sample population. In this context, the concept of standardized survey questions may need to be reconsidered.

Generalizability of the current methodology is limited to groups of participants with similar levels of education or literacy. In the few cases in which a large discrepancy between educational levels was present, those of lower educational attainment were less likely to offer information spontaneously. Aside from the participant eligibility issues, which are present in all focus groups, generalizability is expected to mirror that of any focus group research. Generalizability of the survey produced by these groups is also limited to the ethnic and demographic group for which it was designed.

Although our preliminary work provides no new concrete knowledge about OPC prevention and control, the persisting rates of late-stage OPC diagnosis among priority populations command attention. Researchers must also consider the possibility of misperceptions arising when survey instruments contain questions or response options that are confusing or inappropriately phrased for the sample population, and the data they produce are consequently misinterpreted.

Testing and revision of survey instruments used with priority populations will move us closer to learning how to effectively motivate behavior change. Valid survey data can increase the efficacy of health message campaigns promoting OPC screenings among African American men, which will eventually lead to the identification and treatment of earlier stage OPC lesions. We believe the revision of this survey is a first step in effectively reducing the rates of OPC morbidity and mortality among African American men.


This survey, supported by NIH U54-DE019261, will be used as an outcome measure in a community-based media intervention promoting oral and pharyngeal cancer as part of the Southeast Center for Research to Reduce Disparities in Oral Health.

Contributor Information

Virginia J. Dodd, Department of Community Dentistry and Behavioral Science, College of Dentistry, University of Florida, Gainesville, FL.

Joseph L. Riley, III, Department of Community Dentistry and Behavioral Science, College of Dentistry, University of Florida, Gainesville, FL.

Henrietta L. Logan, Southeast Center for Research to Reduce Disparities in Oral Health, College of Dentistry, University of Florida, Gainesville, FL.


1. Jemal A, Siegel R, Ward E, et al. Cancer statistics. CA Cancer J Clin. 2007;57(1):43–66. [PubMed]
2. Jemal A, Clegg L, Ward E, et al. Annual report to the nation on the status of cancer, 1975–2001, with a special feature regarding survival. Cancer. 2004;101:3–27. [PubMed]
3. National Institute for Dental and Craniofacial Research. [Accessed September 10, 2011]; Available at: http://www.nidcr.nih.gov/NR/exeres/16842437-FC3B-41C1-AB42-5073EC53D71D.htm.
4. Mignogna M, Fedele S, LoRusso L, et al. Oral and pharyngeal cancer: Lack of prevention and early detection by health care providers. Eur J Cancer Prev. 2001;10:381–383. [PubMed]
5. Smith RA, Cokkinides V. American Cancer Society guidelines for the early detection of cancer. CA Cancer J Clin. 2006;56:11–25. [PubMed]
6. Shiboski CH, Schmidt BL, Jordan RC. Racial disparity in stage at diagnosis and survival among adults with oral cancer in the US. Community Dent Oral Epidemiol. 2007;35(3):233–240. [PubMed]
7. American Cancer Society. [Accessed September 30, 2011];Cancer Facts & Figures for African Americans 2011–2012. Available at: http://www.cancer.org/Research/CancerFactsFigures/CancerFactsFiguresforAfricanAmericans/cancer-facts-figures-afam-2011-2012.
8. Dodd VJ, Watson JM, Choi Y, et al. Oral cancer in African Americans: addressing health disparities. Am J Health Behav. 2008;32(6):684–692. [PubMed]
9. National Institute of Dental and Craniofacial Research. [Accessed September 15, 2011]; Available at: http://report.nih.gov/NIHfactsheets/ViewFactSheet.aspx?csid=106&key=O#O.
10. National Health Interview Survey, 1992: Cancer Control Supplement. [Accessed July 13, 2011];1992 Available at: http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/6344.
11. Patton LL, Agans R, Elter JR, et al. Oral cancer knowledge and examination experiences among North Carolina adults. J Public Health Dent. 2004;64(3):173–180. [PubMed]
12. Tomar SL, Logan HL. Florida adults’ oral cancer knowledge and examination experiences. J Public Health Dent. 2005;65(4):221–230. [PubMed]
13. Horowitz AM, Moon HS, Goodman HS, et al. Maryland adults’ knowledge of oral cancer and having oral cancer examinations. J Public Health Dent. 1998;58:281–287. [PubMed]
14. Adegbembo AO, Tomar SL, Logan HL. Perception of racism explains the difference between blacks’ and whites’ level of healthcare trust. Ethn Dis. 2006;16:792–798. [PubMed]
15. Skillman SM, Doescher MP, Mouradian WE, et al. The challenge to delivering oral health services in rural America. J Public Health Dent. 2010 Jun;70(Suppl 1):S49–S57. [PubMed]
16. Adebimpe VR. Race, racism and epidemiological surveys. Hosp Community Psychiatry. 1994;45(1):27–31. [PubMed]
17. Jackson JS, Torres M, Caldwell CH, et al. The National Survey of American Life: a study of racial, ethnic and cultural influences on mental disorders and mental health. Int J Methods Psychiatr Res. 2004;13:196–207. [PubMed]
18. Petty RE, Cacioppo JT. The elaboration likelihood model of persuasion. Adv Consum Res. 1984;11:673–675.
19. Choi Y, Dodd VJ, Watson J, et al. Perspectives of African Americans and dentists on oral cancer and dentist-patient communication. Patient Educ Couns. 2008 Apr;71(1):41–51. [PubMed]
20. Willis GB. Cognitive Interviewing: A “How To” Guide. Working Paper #7, Research Triangle Institute. 1999. [Accessed September 14, 2011]. Available at: http://appliedresearch.cancer.gov/areas/cognitive/interview.pdf.
21. Andresen EM, Malmgren JA, Carter WB, Patrick DL. Screening for depression in well older adults: evaluation of a short form of the CES-D (Center for Epidemiologic Studies Depression Scale). Am J Prev Med. 1994;10:77–84. [PubMed]
22. Sherbourne C, Stewart A. The MOS Social Support Survey. Soc Sci Med. 1991;32(6):705–714. [PubMed]
23. Riley JL, 3rd, Gilbert GH, Heft MW. Socioeconomic and demographic disparities in symptoms of orofacial pain. J Public Health Dent. 2003;34(4):289–298. [PubMed]
24. Riley JL, 3rd, Gilbert GH, Heft MW. Dental attitudes: Proximal basis for oral health disparities in adults. Community Dent Oral Epidemiol. 2006;34(4):289–298. [PubMed]
25. Ryerson AB, Peters ES, Coughlin S, et al. Burden of potentially human papillomavirus-associated cancers of the oropharynx and oral cavity in the US, 1998–2003. Cancer. 2008;113(Suppl 10):2901–2909. [PubMed]
26. Horowitz AM, Nourjah P, Gift HC. U.S. adult knowledge of risk factors and signs of oral cancers, 1990. J Am Dent Assoc. 1995;126(1):39–45. [PubMed]
27. Oh J, Kumar J, Cruz G. Racial and ethnic disparity in oral cancer awareness and examination: 2003 New York state BRFSS. J Public Health Dent. 2008;68(1):30–38. [PubMed]
28. Olson K, Smyth J, Wang Y, Pearson J. The self-assessed literacy index: Reliability and validity. Social Science Research. 2011;40:1465–1476.
29. Riley JL, 3rd, Dodd VJ, Muller KE, et al. Psychosocial factors associated with mouth and throat cancer exams in rural Florida. Am J Public Health. 2012;102:e7–e14. [PMC free article] [PubMed]