NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Chapter 13 Methods for Survey Studies

13.1. Introduction

The survey is a popular means of gauging people’s opinions on a particular topic, such as their perception or reported use of an eHealth system. Yet surveying as a scientific approach is often misconstrued, and while a survey may seem easy to conduct, ensuring that it is of high quality is much harder to achieve. The terms “survey” and “questionnaire” are often used interchangeably as if they were the same. Strictly speaking, however, the survey is a research approach in which subjective opinions are collected from a sample of subjects and analyzed to characterize the study population they represent. A questionnaire, on the other hand, is one of the data collection methods used within the survey approach, in which subjects are asked to respond to a predefined set of questions.

The eHealth literature is replete with survey studies conducted in different health settings on a variety of topics, for example the perceived satisfaction with EHR systems among ophthalmologists in the United States (Chiang et al., 2008), and the reported impact of EMR adoption in primary care in a Canadian province (Paré et al., 2013). The quality of eHealth survey studies can vary greatly depending on how they are designed, conducted, analyzed and reported. It is important to point out that there are different types of survey studies, ranging in nature from the exploratory to the predictive, involving one or more groups of subjects and an eHealth system over a given time period. There are also various published guidelines on how survey studies should be designed, reported and appraised. Increasingly, survey studies are used by health organizations to learn about provider, patient and public perceptions of eHealth systems. As a consequence, the types of survey studies and their methodological considerations should be of great interest to those involved with eHealth evaluation.

This chapter describes the types of survey studies used in eHealth evaluation and their methodological considerations. Also included are three case examples to show how these studies are done.

13.2. Types of Survey Studies

There are different types of survey study designs depending on the intended purpose and approach taken. Within a given type of survey design, there are different design options with respect to the time period, respondent group, variable choice, data collection and analytical method involved. These design features are described below (Williamson & Johanson, 2013).

13.2.1. The Purpose of Surveys

There are three broad types of survey studies reported in the eHealth literature: exploratory, descriptive, and explanatory surveys. They are described below.

  • Exploratory Surveys – These studies are used to investigate and understand a particular issue or topic area without predetermined notions of the expected responses. The design is mostly qualitative in nature, seeking input from respondents with open-ended questions focused on why and/or how they perceive certain aspects of an eHealth system. An example is the survey by Wells, Rozenblum, Park, Dunn, and Bates (2014) to identify organizational strategies that promote provider and patient uptake of PHRs.
  • Descriptive Surveys – These studies are used to describe the perception of respondents and the association of their characteristics with an eHealth system. Perception can be the attitudes, behaviours and reported interactions of respondents with the eHealth system. Association refers to an observed correlation between certain respondent characteristics and the system, such as prior eHealth experience. The design is mostly quantitative and involves the use of descriptive statistics such as frequency distributions of Likert scale responses from participants. An example is the survey on changes in end-user satisfaction with CPOE over time in intensive care (Hoonakker et al., 2013).
  • Explanatory Surveys – These studies are used to explain or predict one or more hypothesized relationships between some respondent characteristics and the eHealth system. The design is quantitative, involving the use of inferential statistics such as regression and factor analysis to quantify the extent to which certain respondent characteristics lead to or are associated with specific outcomes. An example is the survey to model certain residential care facility characteristics as predictors of EHR use (Holup, Dobbs, Meng, & Hyer, 2013).

13.2.2. Survey Design Options

Within the three broad types of survey studies one can further distinguish their design by time period, respondent group, variable choice, data collection and analytical method. These survey design options are described below.

  • Time Period – Surveys can take on a cross-sectional or longitudinal design based on the time period involved. In cross-sectional design the survey takes place at one point in time giving a snapshot of the participant responses. In longitudinal design the survey is repeated two or more times within a specified period in order to detect changes in participant responses over time.
  • Respondent Group – Surveys can involve a single cohort or multiple cohorts of respondents. Multiple cohorts are typically grouped by some characteristic for comparison, such as age, sex, or eHealth use status (e.g., users versus non-users of EMRs).
  • Variable Choice – In quantitative surveys one needs to define the dependent and independent variables being studied. A dependent variable refers to the perceived outcome that is measured, whereas an independent variable refers to a respondent characteristic that may influence the outcome (such as age). Typically the variables are defined using a scale that is nominal, ordinal, interval, or ratio in nature (Layman & Watzlaf, 2009). In a nominal scale, a value is assigned to each response, such as 1 or F for female and 2 or M for male. In an ordinal scale, responses can be rank ordered, such as user satisfaction running from 1 for very unsatisfied to 4 for very satisfied. Interval and ratio scales have numerical meaning, where the distance between two responses relates to the numerical values assigned. A ratio scale differs from an interval scale in having a natural zero point: weight is a ratio scale, while temperature in degrees Celsius is an interval scale.
  • Data Collection – Surveys can be conducted by questionnaire or by interview with structured, semi-structured or non-structured questions. Questionnaires can be administered by postal mail, telephone, e-mail, or through a website. Interviews can be conducted in-person or by phone individually or in groups. Pretesting or pilot testing of the instrument should be done with a small number of individuals to ensure its content, flow and instructions are clear, consistent, appropriate and easy to follow. Usually there are one or more follow-up reminders sent to increase the response rate.
  • Analytical Method – Survey responses are analyzed in different ways depending on the type of data collected. For textual data, qualitative analyses such as content or thematic analysis can be used. Content analysis classifies words and phrases within the texts into categories based on an initial coding scheme and frequency counts. Thematic analysis identifies concepts, relationships and patterns in the texts as themes. For numeric data, quantitative analyses such as descriptive and inferential statistics can be used. Descriptive statistics involve the use of such measures as mean, range, standard deviation and frequency to summarize the distribution of numeric data. Inferential statistics involve the use of a random sample of data from the study population to make inferences about that population. The inferences are made with parametric and non-parametric tests and multivariate methods. Pearson correlation, the t-test and analysis of variance are examples of parametric tests. The sign test, Mann-Whitney U test and χ² test are examples of non-parametric tests. Multiple regression, multivariate analysis of variance, and factor analysis are examples of multivariate methods (Forza, 2002).
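To make the descriptive/inferential distinction concrete, the sketch below (Python standard library only) computes summary statistics for two respondent groups and then a Welch two-sample t statistic as a simple inferential comparison. The group labels and Likert scores are invented for illustration:

```python
import statistics as stats

# Invented 5-point Likert satisfaction scores for two respondent groups
users = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
non_users = [2, 3, 2, 4, 3, 2, 3, 1, 3, 2]

# Descriptive statistics: summarize each group's distribution
for name, grp in (("users", users), ("non-users", non_users)):
    print(f"{name}: mean={stats.mean(grp):.2f} sd={stats.stdev(grp):.2f}")

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    se = (stats.variance(a) / len(a) + stats.variance(b) / len(b)) ** 0.5
    return (stats.mean(a) - stats.mean(b)) / se

# Inferential statistics: does mean satisfaction differ between groups?
print(f"t = {welch_t(users, non_users):.2f}")
```

In practice a p-value would be obtained from the t distribution (e.g., via a statistical package), and since Likert responses are strictly ordinal, many analysts would instead use a non-parametric alternative such as the Mann-Whitney U test.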

13.3. Methodological Considerations

The quality of survey studies is dependent on a number of design parameters. These include population and sample, survey instrument, sources of bias, and adherence to reporting standards. These considerations are described below (Williamson & Johanson, 2013).

13.3.1. Population and Sample

For practical reasons, survey studies are often done on a sample of individuals rather than the entire population. The sampling frame refers to the list of individuals in the population of interest from which a representative sample is drawn for the study. The two common strategies used to select the study sample are probability and non-probability sampling. These are described below.

  • Probability sampling – This is used in descriptive and explanatory surveys, where the sample is selected so that each individual in the sampling frame has a known, nonzero chance of being included. Methods include simple random, systematic, stratified, and cluster sampling. The desired confidence level and margin of error are used to determine the required sample size. For example, in a population of 250,000 at a 95% confidence level and a ±5% margin of error, a sample of 384 individuals is needed (Research Advisors, n.d.).
  • Non-probability sampling – This is used in exploratory surveys where individuals with specific characteristics that can help understand the topic being investigated are selected as the sample. They include such non-statistical methods as convenience, snowball, quota, and purposeful sampling. For example, to study the effects of the Internet on patients with chronic conditions one can employ purposeful sampling where only individuals known to have a chronic disease and access to the Internet are selected for inclusion.
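The sample size quoted above can be reproduced with Cochran's formula plus a finite-population correction — a standard calculation, though not necessarily the exact method behind the cited table:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Required sample size via Cochran's formula with a
    finite-population correction. z is the z-score for the confidence
    level (1.96 ~ 95%), margin is the margin of error, and p is the
    assumed response proportion (0.5 is the most conservative choice)."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2       # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite correction

print(sample_size(250000))  # 384, matching the example in the text
print(sample_size(9166))    # close to the 370 cited in section 13.4.2
```

Note that for small margins of error the result is nearly insensitive to population size, which is why a sample of a few hundred suffices even for very large populations.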

13.3.2. Survey Instrument

The survey instrument is the tool used to collect data from respondents on the topic being investigated. Ideally one should demonstrate that the survey instrument chosen is both valid and reliable for use in the study. Validity refers to whether the items (i.e., predefined questions and responses) in the instrument are accurate in what they intend to measure. Reliability refers to the extent to which the data collected are reproducible when repeated on the same or similar groups of respondents. These two constructs are elaborated below.

  • Validity – The four types of validity are known as face, content, criterion, and construct validity. Face and content validity are qualitative assessments of the survey instrument for its clarity, comprehensibility and appropriateness. While face validity is typically assessed informally by non-experts, content validity is done formally by experts in the subject matter under study. Criterion and construct validity are quantitative assessments where the instrument is measured against other schemes. In criterion validity the instrument is compared with another reputable test on the same respondents, or against actual future outcomes for the survey’s predictive ability. In construct validity the instrument is compared with the theoretical concepts that the instrument purports to represent to see how well the two align with each other.
  • Reliability – The tests for reliability include test-retest, alternate form and internal consistency. Test-retest reliability correlates results from the same survey instrument administered to the same respondents over two time periods. Alternate form reliability correlates results from different versions of the same instrument on the same or similar individuals. Internal consistency reliability measures how well different items in the same survey that measure the same construct produce similar results.
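Internal consistency is most often reported as Cronbach's alpha, which can be computed directly from item-by-respondent scores. The sketch below uses invented data for a hypothetical three-item scale:

```python
import statistics as stats

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list per survey item, each holding one score per
    respondent, with respondents in the same order across items."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(stats.variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / stats.variance(totals))

# Invented Likert scores: 3 items x 5 respondents measuring one construct
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 4, 4, 1],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

By convention, alpha values of roughly 0.7 or above are considered acceptable evidence of internal consistency.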

13.3.3. Sources of Bias

There are four potential sources of bias in survey studies. These are coverage, sampling, non-response, and measurement errors. These potential biases and ways to minimize them are described below.

  • Coverage bias – This occurs when the sampling frame is not representative of the study population such that certain segments of the population are excluded or under-represented. For instance, the use of the telephone directory to select participants would exclude those with unlisted numbers and mobile devices. To address this error one needs to employ multiple sources to select samples that are more representative of the population. For example, in a telephone survey of consumers on their eHealth attitudes and experience, Ancker, Silver, Miller, and Kaushal (2013) included both landline and cell phone numbers to recruit consumers, since young adults, men and minorities tend to be under-represented among those with landlines.
  • Sampling bias – This occurs when the sample selected for the study is not representative of the population such that the sample values cannot be generalized to the broader population. For example, in their survey of provider satisfaction and reported usage of CPOE, Lee, Teich, Spurr, and Bates (1996) reported different response rates between physicians and nurses, and between medical and surgical staff, which could affect the generalizability of the results. To avoid sampling bias one should clearly define the target population and sampling frame, employ systematic methods such as stratified or random sampling to select samples, identify the extent and causes of response differences, and adjust the analysis and interpretation accordingly.
  • Non-response bias – This occurs when individuals who responded to the survey have different attributes from those who did not respond. For example, in their study to model nurses’ acceptance of barcoded medication administration technology, Holden, Brown, Scanlon, and Karsh (2012) acknowledged that their less than 50% response rate could have led to non-response bias affecting the accuracy of their prediction model. To address this error one can offer incentives to increase the response rate, follow up with non-respondents to find out the reasons for their lack of response, or compare the characteristics of non-respondents with respondents or known external benchmarks for differences (King & He, 2005). Adjustments can then be made when the cause and extent of non-response are known.
  • Measurement bias – This occurs when there is a difference between the survey results obtained and the true values in the population. One major cause is deficient instrument design due to ambiguous items, unclear instructions, or poor usability. To reduce measurement bias one should apply good survey design practices, adequate pretesting or pilot testing of the instrument, and formal tests for validity and reliability. An example of good Web-based eHealth survey design guidelines is the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) by Eysenbach (2004). The checklist has eight item categories and 31 individual items that can be used by authors to ensure quality design and reporting of their survey studies.
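One of the checks above — comparing the characteristics of respondents and non-respondents — is commonly done with a Pearson χ² test on a contingency table. The sketch below computes the χ² statistic from scratch; the counts are invented for illustration:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a two-way contingency table,
    given as a list of rows of counts."""
    row_totals = [sum(r) for r in observed]
    col_totals = [sum(c) for c in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Invented counts: profession mix among respondents vs. non-respondents
table = [[60, 40],   # respondents: 60 physicians, 40 nurses
         [30, 70]]   # non-respondents: 30 physicians, 70 nurses
print(f"chi2 = {chi_square(table):.2f}")
```

For a 2×2 table (1 degree of freedom), a statistic above 3.84 indicates a difference at α = 0.05, which in this hypothetical case would suggest respondents differ systematically from non-respondents on this characteristic.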

13.3.4. Adherence to Reporting Standards

Currently there are no universally accepted guidelines or standards for reporting survey studies. In the field of management information systems (MIS), Grover, Lee, and Durand (1993) published nine ideal survey methodological attributes for analyzing the quality of MIS survey research. In their review of these attributes, Ju, Chen, Sun, and Wu (2006) found that two frequent problems in survey studies published in three top MIS journals were failure to perform statistical tests for non-response error and failure to use multiple data collection methods. In healthcare, Kelly, Clark, Brown, and Sitzia (2003) published a checklist of seven key points to be covered when reporting survey studies. They are listed below:

  1. Explain the purpose of the study with explicit mention of the research question.
  2. Explain why the research is needed and mention previous work to provide context.
  3. Provide detail on how the study was done that covers: the method and rationale; the instrument with its psychometric properties and references to original development/testing; sample selection and data collection processes.
  4. Describe and justify the analytical methods used.
  5. Present the results in a concise and factual manner.
  6. Interpret and discuss the findings.
  7. Present conclusions and recommendations.

In eHealth, Bassi, Lau, and Lesperance (2012) published a review of survey-based studies on the perceived impact of EMRs in physician office practices. In the review they used the 9-item assessment tool developed by Grover and colleagues (1993) to appraise the reporting quality of 19 EMR survey studies. With this tool, a score from 0 to 1 was assigned to each attribute depending on whether it was present or absent, giving a maximum score of 9. Of the 19 survey studies appraised, the quality scores ranged from 0.5 to 8. Over half of the studies did not report the data collection method, the instrument and its validation with respect to pretesting or pilot testing, or non-respondent testing. Only two studies scored 7 or higher, which suggests the reporting quality of the 19 published EMR survey studies was highly variable. The criteria used in the 9-item tool are listed below.

  1. Report the approach used to randomize or select samples.
  2. Report a profile of the sample frame.
  3. Report characteristics of the respondents.
  4. Use a combination of personal, telephone and mail data collection methods.
  5. Append the whole or part of the questionnaire in the publication.
  6. Adopt a validated instrument or perform a validity or reliability analysis.
  7. Perform an instrument pretest.
  8. Report on the response rate.
  9. Perform a statistical test to justify the loss of data from non-respondents.

13.4. Case Examples

13.4.1. Clinical Informatics Governance for EHR in Nursing

Collins, Alexander, and Moss (2015) conducted an exploratory survey study to understand clinical informatics (CI) governance for nursing and to propose a governance model with recommended roles, partnerships and councils for EHR adoption and optimization. The study is summarized below.

  • Setting – Integrated healthcare systems in the United States with at least one acute care hospital that had pioneered enterprise-wide EHR implementation projects and had reached the Healthcare Information and Management Systems Society (HIMSS) Analytics’ EMR Adoption Model (EMRAM) level 6 or greater, or were undergoing enterprise-wide integration, standardization and optimization of existing EHR systems across sites.
  • Population and sample – Nursing informatics leaders in an executive role in an integrated healthcare system who could offer their perspective and lessons learned on their organization’s clinical and nursing informatics governance structure and its evolution. The sampling frame was the HIMSS Analytics database, which contains detailed information on most U.S. healthcare organizations and their health IT status.
  • Design – A cross-sectional survey conducted through semi-structured telephone interviews with probing questions.
  • Measures – The survey had four sections: (a) organizational characteristics; (b) participant characteristics; (c) governance structure; and (d) lessons learned. Questions on governance covered decision-making, committees, collaboration, roles, and facilitators/barriers for success in overall and nursing-specific CI governance.
  • Analysis – Grounded theory techniques of open, axial and selective coding were used to identify overlapping themes on governance structures and CI roles. Data were collected until thematic saturation was reached in open coding. The CI structures of each organization were drawn, compared and synthesized into a proposed model of CI roles, partnerships and councils for nursing. Initial coding was independently validated among the researchers, and group consensus was used in thematic coding to develop the model.
  • Results – Twelve nursing executives (six chief nursing information officers, four directors of nursing informatics, one chief information officer, and one chief CI officer) were interviewed by phone. In the analysis, 128 open codes were created and organized into 18 axial coding categories, and further selective coding led to four high-level themes for the proposed model. The four themes (with lessons learned) identified as important are: inter-professional partnerships; defining role-based levels of practice and competence; integration into existing clinical infrastructure; and governance as an evolving process.
  • Conclusion – The proposed CI governance model can help understand, shape and standardize roles, competencies and structures in CI practices for nursing, and can be extended to other domains.

13.4.2. Primary Care EMR Adoption, Use and Impacts

Paré et al. (2013) conducted a descriptive survey study to examine the adoption, use and impacts of primary care EMRs in a Canadian province. The study is summarized below.

  • Setting – Primary care clinics in the Canadian province of Quebec that had adopted electronic medical records under the provincial government’s EMR adoption incentive and accreditation programs.
  • Population and sample – The population consisted of family physicians, members of the Quebec Federation of General Practitioners, who practice in primary care clinics in the province. The sample had three types of physician respondents who: (a) had not adopted an EMR (type-1); (b) had an EMR in their clinic but were not using it to support their practice (type-2); or (c) used an EMR in their clinic to support their practice (type-3).
  • Design – A cross-sectional survey in the form of a pretested online questionnaire in English and French, accessible via a secure website. E-mail invitations were sent to all members, followed by an e-mail reminder. With a sampling frame of 9,166 active family physicians in Quebec, 370 responses would be needed to obtain a representative sample at a 95% confidence level and a margin of error of ±5%.
  • Measures – For all three respondent types the measures were clinic and socio-demographic profiles and comments. For type-2 and type-3 respondents, the measures also included EMR brand and year of implementation. For type-1 the measures were barriers and intent to adopt an EMR. For type-2 the measures were reasons and influencing factors for not using the EMR, and intent to use it in future. For type-3 the measures were EMR use experience, level and satisfaction, ease of use of advanced EMR features, and individual/organizational impacts associated with EMR use.
  • Analysis – Descriptive statistics in frequencies, percentages and mean Likert scores were used on selected measures. Key analyses included comparisons of frequencies by socio-demographic and clinic profiles, barriers and adoption intent, and EMR feature availability and use, as well as comparisons of mean Likert scores for satisfaction and for individual and organizational impacts. Individual impacts included perceived efficiency, quality of care and work satisfaction. Organizational impacts included effects on clinical staff, the clinic’s financial position, and clients.
  • Results – Of 4,845 invited physicians, 780 responded to the survey (16% response rate), a sample that was representative of the population. Just over half of EMR users reported high cost and complexity of EMR acquisition and deployment as the main barriers. Half of non-users reported that their clinics intended to deploy an EMR in the next year. EMR users made extensive use of basic EMR features such as clinical notes, lab results and scheduling, but few used clinical decision support and data sharing features. For work organization, EMRs addressed logistical issues associated with paper systems. For care quality, EMRs improved the quality of clinical notes and the safety of care provided, but not clinical decision-making. For care continuity, EMRs had poor ability to transfer clinical data among providers.
  • Conclusion – EMR impacts were related to physicians’ experience, with perceived benefits tied to the duration of EMR use. Health organizations should continue to certify EMR products to ensure alignment with the provincial EHR.

13.4.3. Nurses’ Acceptance of Barcoded Medication Administration Technology

Holden and colleagues (2012) conducted an explanatory survey study to identify predictors of nurses’ acceptance of barcoded medication administration (BCMA) technology in a U.S. pediatric hospital. The study is summarized below.

  • Setting – A 236-bed free-standing academic pediatric hospital in the midwestern U.S. that had recently adopted BCMA. The hospital also had CPOE, a pharmacy information system and automated medication-dispensing units.
  • Population and sample – The population consisted of registered nurses who worked at least 24 hours per week at the hospital. The sample consisted of nurses from three care units that had used BCMA for three or more months.
  • Design – A cross-sectional paper survey with reminders was conducted to test the hypothesis that BCMA acceptance would be best predicted by a larger set of contextualized variables than the base variables in the Technology Acceptance Model (TAM). The survey instrument used multi-item scales validated in previous studies, with several added items. The psychometric properties of the survey scales were pretested with 16 non-study nurses.
  • Measures – Seven BCMA-related perceptions: ease of use, usefulness for the job, non-specific social influence, training, technical support, usefulness for patient care, and social influence from patients/families. Responses used 7-point scales from “not at all” to “a great deal”. Also tracked were age (in five categories) and experience, measured as job tenure in years and months. Two BCMA acceptance variables were measured: behavioural intention to use and satisfaction.
  • Analysis – Regression of all subsets of the perceptions to identify the best predictors of BCMA acceptance, using five goodness-of-fit indicators (i.e., R², root mean square error, Mallows’ Cp statistic, Akaike information criterion, and Bayesian information criterion). An a priori α criterion of 0.05 was used, and 95% confidence intervals were computed around parameter estimates.
  • Results – Ninety-four of 202 nurses returned a survey (46.5% response rate), but 11 worked less than 24 hours per week and were excluded, leaving a final sample of 83 respondents. Nurses perceived moderate ease of use and low usefulness of BCMA. They perceived moderate or higher social influence to use BCMA, and were moderately positive about BCMA training and technical support. Behavioural intention to use BCMA was high but satisfaction was low. Behavioural intention to use BCMA was best predicted by perceived ease of use, non-specific social influence and usefulness for patient care (56% of variance explained). Satisfaction was best predicted by perceived ease of use, usefulness for patient care and social influence from patients/families (76% of variance explained).
  • Conclusion – Predicting BCMA acceptance benefited from using a larger set of perceptions and contextually adapted variables.
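The all-subsets search described above can be sketched with ordinary least squares and a single information criterion (AIC); the study itself used five goodness-of-fit indicators and statistical software, so this stdlib-only sketch with invented predictor names and synthetic data is an illustration of the technique, not the authors' analysis:

```python
import itertools
import math

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit of y on X (+ intercept),
    via the normal equations."""
    Xi = [[1.0] + list(row) for row in X]
    p = len(Xi[0])
    XtX = [[sum(r[i] * r[j] for r in Xi) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xi, y)) for i in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * x for b, x in zip(beta, r))) ** 2
               for r, yi in zip(Xi, y))

def best_subset_aic(predictors, y):
    """Return the predictor subset minimizing AIC = n ln(RSS/n) + 2(k+1)."""
    n, best = len(y), None
    for k in range(1, len(predictors) + 1):
        for subset in itertools.combinations(predictors, k):
            X = [[predictors[name][i] for name in subset] for i in range(n)]
            aic = n * math.log(ols_rss(X, y) / n) + 2 * (k + 1)
            if best is None or aic < best[0]:
                best = (aic, subset)
    return best[1]

# Synthetic data: "intention" depends on ease of use, not on the noise predictor
predictors = {
    "ease_of_use": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "noise": [3, 1, 4, 1, 5, 9, 2, 6, 5, 3],
}
intention = [2 * e + d for e, d in zip(
    predictors["ease_of_use"],
    [0.3, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, 0.3, -0.4, 0.1])]
print(best_subset_aic(predictors, intention))
```

With only a handful of candidate predictors the exhaustive search is cheap; for larger sets, stepwise or regularized approaches are usually preferred.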

13.5. Summary

This chapter introduced three types of surveys, namely exploratory, descriptive and explanatory surveys. The methodological considerations addressed included population and sample, survey instrument, sources of bias, and adherence to reporting standards. Three case examples were also included to show how eHealth survey studies are done.

References

  • Ancker J. S., Silver M., Miller M. C., Kaushal R. Consumer experience with and attitudes toward health information technology: a nationwide survey. Journal of the American Medical Informatics Association. 2013;20(1):152–156. [PMC free article: PMC3555333] [PubMed: 22847306]
  • Bassi J., Lau F., Lesperance M. Perceived impact of electronic medical records in physician office practices: A review of survey-based research. Interactive Journal of Medical Research. 2012;1(2):e3.1–e3.23. [PMC free article: PMC3626136] [PubMed: 23611832]
  • Chiang M. F., Boland M. V., Margolis J. W., Lum F., Abramoff M. D., Hildbrand L. Adoption and perceptions of electronic health record systems by ophthalmologists: An American Academy of Ophthalmology survey. Ophthalmology. 2008;115(9):1591–1597. [PubMed: 18486218]
  • Collins S. A., Alexander D., Moss J. Nursing domain of CI governance: recommendations for health IT adoption and optimization. Journal of the American Medical Informatics Association. 2015;22(3):697–706. [PubMed: 25670752]
  • Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research. 2004;6(3):e34. [PMC free article: PMC1550605] [PubMed: 15471760]
  • Forza C. Survey research in operations management: a process-based perspective. International Journal of Operations & Production Management. 2002;22(2):152–194.
  • Grover V., Lee C. C., Durand D. Analyzing methodological rigor of MIS survey research from 1980–1989. Information & Management. 1993;24(6):305–317.
  • Holden R. J., Brown R. L., Scanlon M. C., Karsh B.-T. Modeling nurses’ acceptance of bar coded medication administration technology at a pediatric hospital. Journal of the American Medical Informatics Association. 2012;19(6):1050–1058. [PMC free article: PMC3534453] [PubMed: 22661559]
  • Holup A. A., Dobbs D., Meng H., Hyer K. Facility characteristics associated with the use of electronic health records in residential care facilities. Journal of the American Medical Informatics Association. 2013;20(4):787–791. [PMC free article: PMC3721172] [PubMed: 23645538]
  • Hoonakker P. L. T., Carayon P., Brown R. L., Cartmill R. S., Wetterneck T. B., Walker J. M. Changes in end-user satisfaction with computerized provider order entry over time among nurses and providers in intensive care units. Journal of the American Medical Informatics Association. 2013;20(2):252–259. [PMC free article: PMC3638190] [PubMed: 23100129]
  • Ju T. L., Chen Y. Y., Sun S. Y., Wu C. Y. Rigor in MIS survey research: in search of ideal survey methodological attributes. Journal of Computer Information Systems. 2006;47(2):112–123.
  • Kelly K., Clark B., Brown V., Sitzia J. Good practice in the conduct and reporting of survey research. International Journal for Quality in Health Care. 2003;15(3):261–266. [PubMed: 12803354]
  • King W. R., He J. External validity in IS survey research. Communications of the Association for Information Systems. 2005;16:880–894.
  • Layman E. J., Watzlaf V. J. Health informatics research methods: Principles and practice. Chicago: American Health Information Management Association; 2009.
  • Lee F., Teich J. M., Spurr C. D., Bates D. W. Implementation of physician order entry: user satisfaction and self-reported usage patterns. Journal of the American Medical Informatics Association. 1996;3(1):42–55. [PMC free article: PMC116286] [PubMed: 8750389]
  • Paré G., de Guinea A. O., Raymond L., Poba-Nzaou P., Trudel M. C., Marsan J., Micheneau T. Computerization of primary care medical clinics in Quebec: Results from a survey on EMR adoption, use and impacts. Montreal: HEC Montréal; 2013. October 31. Retrieved from https://www.infoway-inforoute.ca/index.php/resources/reports/benefits-evaluation/doc_download/1996-computerization-of-primary-care-medical-clinics-in-quebec-results-from-a-survey-on-emr-adoption-use-and-impacts
  • Research Advisors. Sample size table. (n.d.) Retrieved from http://research-advisors.com/tools/SampleSize.htm
  • Wells S., Rozenblum R., Park A., Dunn M., Bates D. W. Organizational strategies for promoting patient and provider uptake of personal health records. Journal of the American Medical Informatics Association. 2014;22(1):213–222. [PMC free article: PMC4433381] [PubMed: 25326601]
  • Williamson K., Johanson G., editors. Research methods: Information, systems and contexts. 1st ed. Prahran, Victoria, Australia: Tilde University Press; 2013.
Copyright © 2016 Francis Lau and Craig Kuziemsky.

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Bookshelf ID: NBK481602
