REVIEWER COMMENT / RESPONSE
1. Are the objectives, scope, and methods for this review clearly described?
Yes.
1. This is an excellent and comprehensive review with a wealth of very useful information.
1. Thank you.
2. The objective of the report appears to be a literature synthesis of the feasibility and diagnostic accuracy of PTSD screening tools for primary care settings. This could be slightly clarified in the introduction, rather than the broad statement about the literature on screening tools in general, since issues of screening effectiveness and clinical efficiency appear to be beyond the scope of the report.
2. We have modified the statement of the objective of the review.
3. For the most part, the key questions are clearly stated, but could avoid using “etc.” in KQ #1 and 2 and instead clearly state the specific characteristics reviewed, and for KQ #2, list the specific psychometric properties of interest. I am not sure that the implementability issue fits better in KQ #2 than it would in KQ #1 or as a separate question, since that information is reviewed separately on page 18 and does not map to the levels of evidence framework used to evaluate the question of diagnostic accuracy (psychometric properties and utility) in KQ #2.
3. We have modified KQ 1 and 2 as suggested. We agree that the implementation processes of screening would best be covered in a separate question and have done so to improve the clarity of the findings.
4. Explanation and application of levels of evidence need to be clearer, especially in the discrimination of levels II and III:
4. The descriptions of levels of evidence were taken from instructions for preparing a Rational Clinical Examination article.
a. The description of the shortcoming for Level III, “patients who both underwent and generated definitive results on both the sign and symptom and the application of the gold standard,” is not clear. I am wondering if this is an allusion to verification bias, where follow-up or administration of one part of the testing protocol is dependent on results from a prior part of the testing protocol (e.g., administering the gold standard first, and then the screen to all cases but only a sample of controls, as described in Simel). I'm also wondering if this may be an editing glitch, since this text is repeated in the summary for Level IV, and this kind of non-independence would be more of a Level IV issue.
4a. We agree that the shortcoming for Level III is verification bias, i.e., selection of patients for verification rather than inclusion of consecutive patients. We also agree that the section of text was mistakenly repeated in the summary.
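To make the verification-bias mechanism concrete, here is a minimal simulation sketch; the prevalence, sensitivity, specificity, and verification rate are hypothetical and not drawn from any study in the review:

```python
# Minimal simulation of partial verification bias; all parameters are
# hypothetical and not taken from any study in this review.
import random

random.seed(0)
n, prev, sens, spec = 100_000, 0.10, 0.75, 0.85

tp = fn = fp = tn = 0
for _ in range(n):
    has_ptsd = random.random() < prev
    screen_pos = random.random() < (sens if has_ptsd else 1 - spec)
    # Verification rule: every screen-positive gets the gold-standard
    # interview, but only 10% of screen-negatives do.
    if not (screen_pos or random.random() < 0.10):
        continue
    if has_ptsd and screen_pos:
        tp += 1
    elif has_ptsd:
        fn += 1
    elif screen_pos:
        fp += 1
    else:
        tn += 1

# Apparent sensitivity is inflated (about 0.97 versus the true 0.75),
# and apparent specificity is deflated (about 0.36 versus the true 0.85).
print(f"apparent sensitivity: {tp / (tp + fn):.2f}")
print(f"apparent specificity: {tn / (tn + fp):.2f}")
```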
b. The key element that can take a study from Level II to Level III is the use of non-consecutive patients who are selected on the basis of some factor other than eligibility for screening, which would result in a non-representative sample and introduce bias. Such results do not reliably overestimate accuracy, as stated on p. 48. The effect of the bias will depend on how the sample was selected and the ways in which it differs from the target population. Examples cited in the STARD guidelines include: exclusion of patients with comorbid conditions or symptoms that could adversely affect test accuracy but would likely be present in the target population; studies in specialty settings where the spectrum of symptom expression is narrowed; or simply non-consecutive and non-random selection of the sample. I would then assume that pronounced violations of sampling assumptions, such as case-control studies, would be graded at Level IV.
4b. We agree that selection bias is one of the main differences between Level II and Level III studies. We have clarified our application of these ratings in Appendix F.
5. The discussion of each screen under KQ #2 could be more complete and detailed. Not all psychometric properties included in the articles are consistently reported, including key indicators of diagnostic accuracy such as likelihood ratios and (if provided) post-test odds of a positive test. If only sensitivity and specificity are reported, it is important to include the prevalence of PTSD in the sample. This may be a minor issue, since most (but not all) of this information is in Table 5, but it is not clear why some specific statistics are singled out in the text, and the types of statistics discussed are not completely consistent across measures, so the reader does not get a clear critique of the state of the evidence for each screener.
5. We have now made the text more consistent throughout.
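For concreteness, the diagnostic accuracy statistics the reviewer names are related by a short calculation; the sketch below uses hypothetical numbers, not values from any included study:

```python
# Illustrative relationship among sensitivity, specificity, prevalence,
# likelihood ratios, and post-test odds; all numbers are hypothetical.

def likelihood_ratio_positive(sens: float, spec: float) -> float:
    """LR+ = sensitivity / (1 - specificity)."""
    return sens / (1 - spec)

def post_test_probability(prevalence: float, lr: float) -> float:
    """Convert pre-test prevalence to post-test probability via odds."""
    pre_odds = prevalence / (1 - prevalence)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec, prev = 0.80, 0.85, 0.12  # hypothetical screen in primary care
lr_pos = likelihood_ratio_positive(sens, spec)  # 0.80 / 0.15, about 5.3
print(f"LR+ = {lr_pos:.1f}")
print(f"post-test P(PTSD | positive screen) = "
      f"{post_test_probability(prev, lr_pos):.2f}")  # about 0.42
```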
Yes.
Thank you.
Yes and No
Yes, the objectives of the review are clearly described through the three key questions:
Question 1: What tools are used to screen for PTSD in primary care settings, and what are their characteristics (length, format, etc.)?
Question 2: What are the psychometric properties and utility of the screening (operating characteristics) and their implementability (ease of administration) in primary care clinics?
Question 3: Do the psychometric properties and utility of each of the screening tools differ according to age, gender, race/ethnicity, substance abuse or other comorbidities?
Yes, the scope of the review is on screening tools used and validated in primary care.
No, the methods for the review are not always logical, accurate, or clearly described.
1. Study selection
a. Rationale for why studies outside of the US were excluded was not provided. Discussion of how Veterans in VA primary care may differ from civilians in primary care was not addressed. Perhaps there are reasons why screening practices/recommendations might differ in VA versus civilian primary care. Greater rationale for the inclusion/exclusion of studies seems warranted.
1. We have addressed these points in the report.
1a. We included only studies done in the United States because of their greater relevance to the care of US Veterans. There were no studies that compared screen efficiency or effectiveness across both Veteran and non-Veteran samples. It may be that a given screen performs better in one population vs. another, or for PTSD associated with one type of trauma vs. another; however, given the absence of evidence, this would be purely speculative on our part. Available evidence suggests that PTSD is under-recognized in non-Veteran primary care settings (cf. Graves, 2011), suggesting that, from a healthcare system perspective, screening for PTSD might also facilitate further mental health evaluation and treatment among non-Veterans, assuming mental health resources are available. Whether screening practices/recommendations do or should differ in Veteran vs. non-Veteran primary care settings is a matter of policy and resource availability, not screen characteristics, and so is beyond the scope of this review. We have clarified the rationale for inclusion/exclusion of studies.
b. Why were screens included that did not include PTSD items (e.g., GAD-7)? Was study selection based on administration of a PTSD gold standard in a primary care setting? If yes, other non-PTSD screens may need to be considered in the review (e.g., GHQ).
1b. We state that we included screens for multiple psychiatric disorders or multiple anxiety disorders if there was a study that investigated the ability of the screen to identify PTSD in a primary care setting. No other screens identified in our literature search process were eligible for inclusion.
c. If studies with fewer than 50 participants were excluded, why was the Lange et al. (2003) study included? There were only 49 women interviewed with the gold standard interview.
1c. We excluded studies with fewer than 50 patients in the screening population.
d. There appears to be an assumption that gold standards are equivalent. This may not be an accurate assumption. Furthermore, it seems important to recognize that there are different scoring algorithms within gold standards. For example, there are at least 9 different scoring rules for the CAPS, and the selection of one over another will surely impact diagnostic accuracy. Granted, scoring rules are rarely presented in studies, but the importance of this should not be overlooked.
1d. We identified the gold standard diagnostic tool used in each study and noted where scoring for the gold standard differed from the scoring method described in Table 1. We agree that different gold standard instruments or scoring rules could alter the findings in a given study. As the reviewer notes, scoring rules are rarely presented in studies, as was true in the vast majority of studies included in this review. While we do not think that variation in gold standard instrumentation or scoring would appreciably alter the overall findings of the review, we have included a statement of that possibility in our limitations section.
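To illustrate how gold-standard scoring rules can diverge, the sketch below contrasts two published CAPS scoring approaches: the "frequency >= 1 / intensity >= 2" symptom rule combined with DSM-IV cluster counts, and a total-severity cutoff. The item-to-cluster mapping reflects the DSM-IV CAPS; the borderline case is hypothetical:

```python
# Hypothetical sketch of two CAPS scoring rules. Assumes the DSM-IV
# CAPS: 17 symptoms, each rated for frequency (0-4) and intensity (0-4);
# items 1-5 map to Criterion B (reexperiencing), 6-12 to C
# (avoidance/numbing), and 13-17 to D (hyperarousal).

def symptom_present(freq: int, intensity: int) -> bool:
    # The widely cited "frequency >= 1 / intensity >= 2" (F1/I2) rule.
    return freq >= 1 and intensity >= 2

def cluster_rule(ratings) -> bool:
    """ratings: list of 17 (frequency, intensity) pairs."""
    present = [symptom_present(f, i) for f, i in ratings]
    return (sum(present[0:5]) >= 1 and    # Criterion B
            sum(present[5:12]) >= 3 and   # Criterion C
            sum(present[12:17]) >= 2)     # Criterion D

def severity_rule(ratings, threshold: int = 65) -> bool:
    # Alternative rule: total severity (frequency + intensity summed
    # over all 17 items) at or above a cutoff; 65 is one published choice.
    return sum(f + i for f, i in ratings) >= threshold

# Frequent but low-intensity symptoms: the two rules disagree.
ratings = [(3, 1)] * 17
print(cluster_rule(ratings))   # False: no symptom meets the F1/I2 rule
print(severity_rule(ratings))  # True: total severity = 68 >= 65
```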
2. Screen/study description
a. The PC-PTSD does not include a stem that asks about traumatic events. This is inaccurately reflected in the description of the measure: “Respondents are asked about symptoms experienced in response to a traumatic event in the past month” (p. 13).
2a. This has been clarified.
b. The SPAN was not validated with a primary care sample in the original Meltzer-Brody study. It was “developed in a psychiatry clinic for the purpose of detecting PTSD in psychiatric populations with PTSD prevalence around 50%” (p. 14). Yes, it was argued that it could be used in settings with a lower prevalence, like primary care, and yes, it was tested in a primary care setting in the Yeager et al. study, but it was not developed/validated in primary care.
2b. We include the SPAN because there was a study that tested it in a primary care setting, as noted above.
c. The review correctly recognizes that there are three different versions of the PCL, and three different scoring options (p. 14). All three versions of the PCL are represented in the studies reviewed, and information on scoring algorithms is often missing. The review treats the PCL as a single screen and does not mention how scoring options may impact diagnostic accuracy. This seems problematic for the accuracy and validity of the review.
2c. We have clarified which version of the PCL was used in each study. However, while there are different versions of the PCL and different scoring approaches to the instrument (e.g., symptoms/symptom cluster, total score, etc.), we believe that the importance of these differences is greatly attenuated when the PCL is used as a screening tool rather than as a diagnostic tool, a tool to assess symptom change in treatment, or as a means to estimate population prevalence rates (see Wilkins et al., 2011). Because the function of a screening tool is to identify individuals in need of further evaluation, all PTSD screening tools have lower discriminability than one would expect from a diagnostic tool. The more relevant scoring issue is cut-score, and we made efforts to include information about multiple cut-scores when studies provided that information. Accordingly, we do not feel that the accuracy or the validity of the review has been compromised.
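To make the distinction between screening use (a cut-score on the total) and diagnostic-style use (symptom-cluster scoring) of the PCL concrete, here is a minimal sketch; the cut-score of 30 and the item responses are illustrative, since the included studies used a range of cut-scores:

```python
# Minimal sketch of the two PCL scoring approaches discussed above.
# Assumes the 17-item DSM-IV PCL rated 1-5 per item (items 1-5 =
# Criterion B, 6-12 = C, 13-17 = D). The cut-score of 30 and the item
# vector are illustrative, not drawn from any study in this review.

def screen_positive(items, cut_score: int = 30) -> bool:
    # Screening use: total score at or above a setting-specific cut-score.
    return sum(items) >= cut_score

def cluster_positive(items) -> bool:
    # Diagnostic-proxy use: an item rated >= 3 ("moderately") counts as
    # endorsed; DSM-IV B/C/D cluster counts are then applied.
    endorsed = [score >= 3 for score in items]
    return (sum(endorsed[0:5]) >= 1 and
            sum(endorsed[5:12]) >= 3 and
            sum(endorsed[12:17]) >= 2)

items = [3, 2, 2, 1, 1, 3, 3, 2, 2, 2, 1, 1, 3, 2, 1, 1, 1]
print(screen_positive(items))   # True: total = 31 crosses the cut-score
print(cluster_positive(items))  # False: only two Criterion C items endorsed
```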
d. As previously mentioned, it is unclear why the GAD-7/GAD-2 is included in the review. The screen does not include any PTSD items.
2d. As stated above, we included screens if there was a study that investigated the ability of the screen to identify PTSD in a primary care setting. Although the GAD-7 or GAD-2 may not be specific to PTSD, whether it performs better or worse than a PTSD-specific screen was an empirical question we thought worth considering given an appropriate gold standard and study design.
3. Table 3: summary of screens used in primary care
a. It is not clear which study was used to report on test-retest reliability.
3a. References have been added to Table 3 (see footnotes).
b. Although scoring may be the same for briefer versions of the PCL, test-retest reliability cannot be assumed to be the same.
3b. We have noted this in Table 3.
c. Should internal consistency be presented as well?
3c. Internal consistency has been noted in Table 3 where reported (see footnotes).
Yes and No
Some things are clearly described, but further justification is needed for the decisions the authors made, e.g., to include studies of non-Veterans given the target audience of this report. The absence of this content makes it difficult to judge the statement on p. 30 that there is no information as to whether a given screen performs better in Veteran or non-Veteran samples. The absence of such information may be of limited relevance.
We have clarified the inclusion and exclusion criteria. Our literature search yielded no studies comparing the performance of screening tests in Veteran and non-Veteran samples. We have now highlighted results of studies in Veterans in the discussion to make it more relevant for the target audience of the report.
Yes. As stated on page 1, the premise of screening for PTSD is “to facilitate mental health treatment engagement 1) earlier in the course of the illness and 2) to engage patients in treatment who might otherwise not be identified…” For this purpose, the report undertakes to identify PTSD screeners for primary care (pc) settings and evaluate them, using the published literature. Three questions were formulated, which address evidence on the utility (and relative utility) of available scales.
The questions and the methodology to answer them are perhaps too narrowly formulated. This is especially the case when one becomes aware that the studies that have evaluated PTSD screeners in pc have not evaluated the impact of screening on engaging mental health services more effectively, in terms of reaching patients who would otherwise not be identified.
As a result, the report is a technical evaluation of the studies that evaluated PTSD screeners in pc: their design, analysis, etc. The lion's share of the work, the evaluation of screening (by any means) for mental health delivery and its outcome in terms of improving health, remains to be done.
Thank you. We agree that there is important work that remains to be done on the impact of screen use on the delivery of mental health care and on health outcomes. We included this in our recommendations.
2. Is there any indication of bias in our synthesis of the evidence?
No. I do not see any evidence of bias.
Thank you.
No.
Thank you.
Yes and No
A. Not sure about bias, but there are some problematic statements about the PC-PTSD and PCL.
1. Appendix E: Evidence Tables (Prins et al., 2003)
a. The PC-PTSD was evaluated in one VA health care facility, not two different VAs in California.
1a. This has been corrected.
b. The CAPS was administered in person, not over the phone.
1b. Thank you for clarifying this.
c. As noted in the Evidence Table, the use of blind interviews was “not reported”. The assumption was made, however, that interviewers were not blind (versus not reported), and the study was given a Level IV rating. Although not clear from the original study, interviewers were indeed blind. Perhaps “not reported” findings can be followed up rather than assumed to be negative.
1c. Thank you for providing this additional information. Given this clarification, we have now determined that this study should have a rating of Level III.
2. Freedy et al., 2010
a. Similar to Prins et al., 2003: it is assumed that interviewers were not blind to the screen results because the screens were administered on the same day as the diagnostic interview. But what was the order of administration? Did interviewers know how to interpret screen results (cutoff scores for screens)?
2a. We assumed that interviews were not blind not because of their timing relative to administration of the screen, but because non-blind evaluations may be biased (similar to RCTs); we therefore took the absence of a clear statement that diagnostic interviews were conducted blindly to mean, in most cases, that they were not. However, as suggested by this reviewer, we sent an email to Dr. Freedy requesting further information, but have not received a response in the more than one month since the email was sent.
3. PCL
a. The PCL version used in the Yeager et al. study is not clear. In the study, the PCL is described as “a series of 17 questions about symptoms or signs of PTSD resulting from military experiences taking place within the past month”. This suggests that the PCL-M was used.
3a. We have clarified that no version was specified in this study.
b. The PCL version used in the Prins et al., 2003 study is also not clear. However, a correction to the article was published with clear reference to the PCL-S (Prins & Ouimette, 2004, Primary Care Psychiatry, 9, p. 151). The review also states that 124 “woman” [sic] were screened and interviewed. That is incorrect; 167 participants completed both the PCL-S and the PC-PTSD.
3b. We have clarified that the PCL-S was used in this study. We have replaced the data from the original paper with the data presented in the Corrigendum.
No. The report gives no indication of bias in any of the decisions or text.
Thank you.
No.
Thank you.
3. Are there any published or unpublished studies that we may have overlooked?
Yes. There is some evidence that the PC-PTSD performs adequately in VA substance use populations (p. 37, item 3). See Kimerling et al. (2006), Addictive Behaviors, 31(11).
We are familiar with the Kimerling (2006) study but did not include it in this review because the study sample consisted of patients who were receiving substance abuse treatment, not patients presenting in primary care clinics.
No. I question whether it was necessary to include studies done on MH populations and instruments that are not specific screens for PTSD, specifically the GAD-7.
As noted previously, we included screens if there was a study that investigated the ability of the screen to identify PTSD in a primary care setting.
Yes
For excellent reviews of the PCL, including the importance of spectrum effects (e.g., age, race, etc.), bias, and prevalence, please see:
1. McDonald, S.D. & Calhoun, P.S. (2010). The diagnostic accuracy of the PTSD Checklist: A critical review. Clinical Psychology Review. doi:10.1016/j.cpr.2010.06.012.
2. Wilkins, K.C., Lang, A.J., & Norman, S.B. (2011). Synthesis of the psychometric properties of the PTSD Checklist (PCL) military, civilian, and specific versions. Depression and Anxiety. doi: 10.1002/da.20837.
Thank you for sharing these references. These reviews provide excellent background information on the PCL but do not focus on studies conducted in primary care.
Yes
The report is so comprehensive that I think it will surprise readers in its presentation of studies they may not know of. However, it could be even more complete in several respects:
Thank you. We have addressed your concerns.
1. There is a corrigendum to the Prins et al. 2003 study that reports critically important information about the PCL. There were significant errors in the 2003 report due to a software problem regarding the handling of missing data. The data reported on the PCL need to be based on the 2004 correction.
1. We have updated the report based on the Corrigendum.
2. A paper by Calhoun and colleagues (2010) comparing the SPAN and the PC-PTSD may have been overlooked.
2. We reviewed this excellent paper, but it did not meet our inclusion criteria. Subjects in that study were part of the Mid-Atlantic MIRECC post-deployment registry and consisted of Veterans who served in the military after September 11, 2001. According to the authors, “Eligible Veterans were recruited through mailings, advertisements, and clinician referrals”. As such, it was not eligible for this review.
3. In meta-analysis it is common to ask authors for data needed to include a paper in the analysis. Was there any attempt to contact investigators for information that could have allowed an excluded paper to be included? If not, I recommend that the authors use this strategy if it could possibly yield additional studies to include in the review.
3. We did not exclude studies because of missing information. As noted in the Literature Flow (Figure 1), studies were excluded if the study setting, population, or purpose did not meet our inclusion criteria.
No. No overlooked study on screening scales in primary care.
Thank you.
4. Please write any additional suggestions or comments below. If applicable, please indicate the page and line numbers from the draft report.
Future directions #6 is an important point, and the authors may want to specifically refer to the need for studies of screening effectiveness in VA.
Thank you for this suggestion. We have now made our recommendations more specific.
P. 1, first paragraph: I don't think the screening is meant to “identify PTSD,” or to facilitate treatment engagement, so much as to identify Veterans who need further evaluation and possibly treatment for PTSD. A similar issue arises in the more detailed paragraph near the end of page 5. Screening is not necessarily correlated with reducing delays for treatment; in fact, in VA the typical concern from PTSD teams is that PC refers too many patients because of a positive screen, thus tying up the resources needed to reduce access delays (though screening can lead to earlier diagnosis and an opportunity for intervention earlier in the course of an individual's illness). These issues do receive some discussion in the “clinical consideration” paragraph on page 38.
Thank you for this feedback. We clarified the statement to indicate that screens are intended to facilitate detection of a condition (in this case PTSD), not to identify it directly. We agree that screening is not correlated with treatment; however, the purpose of screening programs is to increase the rate of treatment, particularly among those early in the course of the illness, as you note. The concern you raise about too many patients having positive screens and the effect of this on limited clinical resources is an important one. It suggests that, from a clinical standpoint, the screen used by VA is too sensitive as it is currently employed; however, altering the screen cut-score to address this has clear policy implications that may be difficult to resolve.
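The referral-burden concern above follows directly from the arithmetic of positive predictive value at low prevalence; here is a minimal sketch with hypothetical screen characteristics:

```python
# Illustrative effect of prevalence on positive predictive value (PPV):
# even a reasonably specific screen yields mostly false positives when
# prevalence is low, which is the referral-burden concern raised above.
# Sensitivity, specificity, and prevalence values are hypothetical.

def ppv(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

for prev in (0.05, 0.10, 0.20):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.85, 0.80, prev):.2f}")
# prevalence 5%: PPV = 0.18 -- most positive screens are false positives
# prevalence 10%: PPV = 0.32
# prevalence 20%: PPV = 0.51
```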
1. “Screening tools that focus on evaluating traumatic experiences are not likely to be clinically useful given the high population prevalence of traumatic events and the much lower conditional probability of developing PTSD (IOM, Breslau, Wang)” (p. 36).
a. True, but the diagnostic precision of screens that include a trauma probe versus those that don't has not been empirically established. Perhaps inclusion of a trauma exposure question will decrease the number of false positives in primary care. Future research could compare screens with and without a trauma probe.
b. Does this statement suggest that screening for military sexual trauma is not warranted?
1a. The statement that you reference was meant to clarify the scope of the review. On the other hand, we agree that whether screen performance would be improved with inclusion of a traumatic exposure item(s) is a worthy empirical question.
1b. No. It simply clarifies the scope of the review.
2. “Very short screens (i.e., one or two items) performed less well than longer screens with positive likelihood ratios less than 3.0, making them less clinically useful” (p. 3), plus “Screens not specific to PTSD but for which there was a study that evaluated the ability of the screen to detect PTSD performed less well than those that focused on the detection of PTSD exclusively” (p. 37).
a. Combined, these argue against the use of the SIPS or multi-purpose screens with only 1 or 2 items relevant to PTSD.
2a. We agree with the reviewer's conclusion that the available evidence suggests that screens longer than 2 items perform better.
b. So, moderate to longer PTSD screens seem to be better, but the threshold for acceptable length is not clear. If “successful screening programs utilize instruments that are simple, valid, precise, and acceptable both clinically and socially” (p. 1), the remaining PTSD screens should be evaluated along these dimensions. For example, future research needs to determine preference and ease of administration based on number of items, reading level, response format, etc.
2b. We did not find any information that any of the screening tools used in the studies cited in this review were unacceptable to patients or administrators. The longest screening tool (27 items) was reported to take patients only 10 minutes to complete, suggesting that none of the screens would be administratively burdensome. However, given the absence of comparative information about patient or provider preferences regarding screening tools, further research would be needed to make definitive statements about these issues.
3. “However, there were no high quality studies examining the performance of the PC-PTSD in a primary care setting” (p. 37).
a. Perhaps Freedy, Prins, and Gore can be contacted for clarification on the QUADAS ratings, and subsequent changes made to level of evidence.
3a. We have updated the information from one of the studies mentioned and adjusted the quality assessment. We contacted the author of another study for clarification but did not receive a response. We did not find anything requiring clarification in the third study.
The report has the potential to be an important guide for both practice and research. It is well done in so many respects, but it could be enhanced by additions to the text and tables. It also needs to be cleaned of typos, some of which are important (e.g., on p. 20 the paragraph on the PCL says in one place that there were 2 studies and in another that there were 3, Table 4 shows 3, and the paragraph mentions an additional study by Kimerling (2006) that does not appear in the table). Specific recommendations are as follows:
Thank you. We have corrected the typos and clarified the additional studies cited in the paragraph on the PC-PTSD.
1. More detail is needed about how the quality assessment ratings were determined. Although detail is provided in the Appendices, I could not make the crosswalk between the QUADAS evaluation questions in Appendix B, Appendix C, the 5 criteria listed for each study in Appendix E, and the level of evidence rating. In fact, I don't see a clear connection between the QUADAS criteria and the QUADAS questions in Appendix B.
For example, in QUADAS, representativeness is about whether the full range of patients to whom a test would be applied was included in the sample. It appears that sample representativeness, and not spectrum inclusiveness, was more important in evaluating studies for the report. The fact that a study had one site is mentioned in a couple of places, even though this is not relevant to evaluating quality according to the QUADAS or RCES systems.
Also, in some cases the problem appears to be missing information. RCES Level I evidence requires that neither the test result nor the gold standard was used to select patients. Yeager's study, which was rated at the highest level, is mentioned as being a random sample of participants from 4 sites, whereas Andrykowski's study is described as “women in remission from breast cancer.”
1. Thank you for pointing this out. We have now included an additional table in our appendices (Appendix F) that clarifies the relationship between the individual QUADAS ratings and the overall Level of Evidence ratings.
Now that we have included the crosswalk table in the report, we hope that study ratings have been clarified. The Andrykowski study was rated as a Level IV because the diagnostic interviews were not conducted blind to patient screening status.
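For readers without access to Appendix F, the sketch below shows one hypothetical way such a crosswalk could be encoded, consistent with the rating decisions described in this document (non-blind interpretation of the gold standard caps a study at Level IV; verification or selection problems cap it at Level III). This is an illustration, not the authors' actual rubric:

```python
# Hypothetical encoding of a QUADAS-to-Level-of-Evidence crosswalk,
# consistent with the rating decisions discussed in this document.
# The actual crosswalk is in Appendix F; this is not the authors'
# rubric, only an illustration.

def level_of_evidence(blinded: bool,
                      consecutive_sample: bool,
                      all_verified: bool) -> str:
    if not blinded:
        return "IV"   # gold standard interpreted with knowledge of screen
    if not all_verified:
        return "III"  # verification bias: only selected patients verified
    if not consecutive_sample:
        return "III"  # selection bias: non-consecutive, non-representative
    return "I or II"  # distinguished further by sample size and setting

# e.g., a blinded study with a non-consecutive sample:
print(level_of_evidence(blinded=True, consecutive_sample=False,
                        all_verified=True))  # "III"
```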
Note that there is a typo in Appendix C and elsewhere in the text: it should be “Rational,” not “Rationale,” Clinical Examination Series.
We have corrected the typo.
2. Figure 1 was illegible when the document was printed, even though it appeared fine on the screen. Also, I recommend providing the N for each reason the 122 excluded studies were ruled out.
2. We have added the number of studies for each exclusion reason.
3. In Table 3 it would help to know the test-retest interval for each study.
3. We have added this information to Table 3 (see footnotes).
4. Table 4 should specify the gold-standard measure used for each study and if relevant how it was scored, e.g., the “1/2” rule for the CAPS.
4. We have added the gold standard measure to Table 4. Studies did not typically report how the measure was scored.
5. For Key Question 2, the amount of detail in the text about studies varies unsystematically. For example, there was no information on p. 20 about the sample used in the Prins 2003 study and a lot of information on p. 22 about the sample used in the Dobie 2002 study.
5. We have reviewed this and standardized the amount of text.
6. On p. 21 only 2 SPAN studies were discussed, but Table 4 lists 3; Freedy 2010 is excluded.
6. We have added Freedy 2010 to the discussion of the SPAN studies.
7. Caution is needed regarding the inferences that are drawn when relevant information is missing. For example, and perhaps most notably, on p. 22 the report says that it is unclear whether CAPS interviewers were blind to PCL scores in the Prins 2003 study, but elsewhere the report specifically states that lack of blinding was a major flaw of this study. Lack of information about blinding is not the same thing as lack of blinding. Regardless, things like this are so important that it is worth asking authors for missing information.
7. As noted above, we have obtained information from one author; another author did not respond to our inquiry.
8. Table 5 is difficult to read. The use of shading to indicate different screens does not provide enough clarity or distinctiveness. For example, the authors could use a separate leftmost column to indicate the screen, with the study information in a column to the right:
Screen     Author/Year      Cutpoints
Breslau    Freedy 2010      xx
           Kimerling 2006   xx
PC-PTSD    Freedy 2010      xx
           Gore 2008        xx

8. Thank you for the suggestion. We have modified the table.
9. Given that the report includes studies of both Veterans and non-Veterans, can any more be said about whether findings might generalize from one population to the other?
9. A comment about Veterans vs. non-Veterans has been added to the discussion.
10. Given that the PC-PTSD is currently used for screening in both VA and DoD settings, can any more be said other than a recommendation for a study comparing it with other screening instruments?
10. Our primary recommendation is for VA (and DoD) to evaluate whether use of the screen has improved health outcomes for Veterans and to examine the impact of its use on the healthcare system.
11. I recommend rewording recommendation 3 on p. 38. There is plenty of evidence about how screening tools work in the presence of other comorbidities because comorbidity is the rule rather than the exception in PTSD. What is missing is information about whether there is differential performance as a function of comorbidity.
11. Thank you for the suggestion. We have reworded the statement and clarified our point.
12. The relevance of recommendation 4 is unclear, or perhaps it is not clearly worded. There is evidence about depression and anxiety screening in Veterans.
12. We agree that this point needs rewording as well, and we have incorporated the intended point elsewhere.
It would be of interest to have a review of the literature on screening among Veterans of other countries. Can we learn anything from this literature? Can we learn anything from DoD screening?
We chose not to include DoD studies because screening among active duty service members is complicated by limited confidentiality, potential deleterious effects of mental health diagnoses on military careers, and greater levels of stigma related to mental health conditions compared to that seen in non-active duty populations.
5. Are there any clinical performance measures, programs, quality improvement measures, patient care services, or conferences that will be directly affected by this report? If so, please provide detail.
Not at this time. The PC-PTSD followed by the PCL when indicated is the current measure, and this report is unlikely to affect that.
Thank you.
1. It seems like data from the PCMHI office may be able to address the impact of PTSD screening on referrals to co-located care or specialty care (i.e., an access-to-care measure). And, with the new OEF4 performance measure, it might be possible to look at screening and engagement with treatment (8 sessions within 14 weeks).
1. We agree that evaluating the impact of screening implementation on service utilization is an important area that should be explored.
2. DSM-5 is around the corner. The content validity and predictive validity of PTSD screens will need to be evaluated against the new diagnostic criteria.
2. We agree and have now commented on the upcoming DSM-5 modifications.
The performance measure for PTSD screening is simply an indicator of whether screening has occurred, so I think this answer is no.
Thank you.
6. Please provide any recommendations on how this report can be revised to more directly address or assist implementation needs.
1. Perhaps more focused statements can be made about how the review can inform policy, guide services, support performance measures, and direct future research. For example:
a. Although additional research is needed on which screen is best for detecting PTSD in VA primary care, there are good reasons to screen for PTSD in VA (see guidelines proposed by the US Preventive Services Task Force).
1a. To our knowledge the USPSTF does not currently recommend routine PTSD screening. However, VA has significant clinical and political impetus for conducting routine PTSD screens on Veterans who use VA services.
b. Currently, if a patient screens positive for PTSD, CPRS presents certain follow-up options/services. Indeed, the clinical reminder is not “resolved” until an option is selected. The report would be strengthened by addressing these options and perhaps making recommendations for additional ones.
1b. Although the requirement to institute a particular clinical reminder may be a result of national VA policy, how the clinical reminders are implemented varies across VISNs. Consequently, it would be less helpful to make specific recommendations about how the performance measure should be resolved.
c. As previously noted, addressing the relationship between PTSD screening, access to care, and type of care would help address implementation needs.
1c. We agree.
2. For future research, more specific examples of what should be done are needed. For example:
a. Which screens (moderate and longer screens?) should be compared in VA primary care clinics, and on what dimensions (ease of administration, diagnostic accuracy)?
2a. We do not recommend any particular screening tool, since all have their limitations. Specific recommendations for future research are delineated.
b. How should the impact of spectrum effects be analyzed? Comparing AUCs may not be the best approach.
2b. If what the reviewer means by “spectrum effects” is subsyndromal PTSD, then we agree that this would have implications for the criterion used in a study. Comparisons of screen AUCs across studies require a comparable outcome criterion.
With the formal adoption of DSM-5 in May 2013, the relevance of the data based on DSM-IV is unclear. Data obtained from DSM-IV versions may not generalize to DSM-5 versions if and when such data become available. The authors need to address this issue more directly and incorporate it into their recommendations.
Agreed. We have now included comments about the relevance of the review with respect to DSM-5.
From: APPENDIX D, PEER REVIEW COMMENTS/AUTHOR RESPONSES

Screening for Post-Traumatic Stress Disorder (PTSD) in Primary Care: A Systematic Review [Internet].
Spoont M, Arbisi P, Fu S, et al.
Washington (DC): Department of Veterans Affairs (US); 2013 Jan.
