We have argued that failure to publish SCD
results with small effect sizes is a bias, which
leads to evidence-based practice reviews that
systematically overestimate the effectiveness of
treatments in SCD research. However, at least
some SCD researchers believe that demonstrating
a visually compelling functional relation
(and thus a large effect size) is not a bias but
rather is good SCD research practice and
should be an important consideration in
publication decisions. These researchers assert that
studies that do not demonstrate a visually large
functional relation are uninterpretable; for
example, a negative result may not mean that
the treatment failed but rather that the
researcher failed to implement the treatment
adequately or failed to measure the outcome
with enough reliability or validity. These
statements may be true, although it would be better
to base publication decisions on direct evidence
about poor treatment implementation or poor
measurement reliability than on indirect evi
dence of small effect sizes. Even so, a negative
result may sometimes mean the treatment does
not work well. SCD researchers need to better
define professional standards for publishing
negative effects and the process for documenting
intervention ineffectiveness. Knowledge of
what does not work should have just as great a
place in evidence-based practice reviews as
knowledge of what does work. Also, studies
with negative results may differ from studies
with positive results in having different kinds of
cases, settings, treatment variations, or
outcomes. Omitting results because they are
negative deprives the field of knowledge
about what moderates the size of an effect.