If a study is done to compare two treatments, the P-value is the probability of obtaining the observed results, or something more extreme, if there really were no difference between the treatments. Suppose P = 0.03. This means that if there really were no difference between treatments, there would be only a 3% chance of getting results like those obtained. Since this chance is quite low, we should question the assumption that there is no difference between treatments and conclude that there probably is a difference.

By convention, when P is below 0.05 (i.e. less than 5%) the result is described as *statistically significant*, and when P is 0.001 or less it is described as highly significant. P-values tell us only whether an effect can be regarded as statistically significant; they say nothing about how big the effect might be, for which we need the *confidence interval*.
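The definition above can be made concrete with a small simulation. The sketch below (a permutation test, one standard way to compute a P-value; the data are invented for illustration) repeatedly shuffles the treatment labels to mimic a world in which there really is no difference between treatments, and counts how often that no-difference world produces a mean difference at least as extreme as the one actually observed:

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    The P-value is the fraction of label shufflings (the "no real
    difference" scenario) whose mean difference is at least as
    extreme as the one actually observed.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # if treatments are equivalent, labels are arbitrary
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical outcome scores under two treatments (illustrative data only).
treatment = [12, 9, 14, 11, 13, 10, 15, 12]
control = [8, 7, 10, 6, 9, 8, 11, 7]
p = permutation_p_value(treatment, control)
print(f"P = {p:.4f}")
```

Because the two invented groups barely overlap, the shuffled labels almost never reproduce so large a difference, so the P-value comes out well below 0.05 and the difference would be called statistically significant. Note that, as the text goes on to say, this P-value alone says nothing about how large the treatment effect is.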