One of the things I love most about reading other academics' blogs is the insight I get into what it's like to do research in other fields, especially since research elsewhere seems so fundamentally different from research in my discipline. A few quick examples: in my discipline, collaboration is not only encouraged, it's expected (very few single-author papers); we submit full papers to conferences and journals alike, and some conferences are prestigious enough to "count" as journal papers on the tenure scorecard. Both of these (collaboration and full-paper writing) have interesting effects on what research in my discipline looks like, but that's a topic for another post.
What's on my mind today is the concept of "negative results". The research I do is very concrete, much like science: you have a hypothesis, you design a system (software, a simulation, a circuit, whatever) to test this hypothesis, you gather and analyze data, and then you know whether your hypothesis was correct. Usually you get a concrete answer: yes, this is true; no, this is not true; yes, this is true under certain conditions. But what happens when the answer is "no, this is not true"?
What happens currently is that if you disprove something, even if the result is interesting in and of itself, it is very difficult to get it published. No one publishes negative results. There is such a stigma against it.
The technology fields are still pretty new---computers, at least in the modern sense, haven't been around all that long. The fields are still being fleshed out; they are fluid, dynamic. So I think we do ourselves a disservice by pretending that everything we ever propose works. Since we never know what's been tried and failed, we doom ourselves to possibly repeating the work of others. And since we can't publish negative results, anything that doesn't work means time wasted. A frightening prospect when the tenure clock is ticking.
This is on my mind this week because I "wasted" a couple of days analyzing results that ultimately did not prove what I was hoping to prove. I would love to delve into these results some more and figure out why I was wrong; I think that in itself would make a very interesting paper. Instead, I find myself lamenting those "lost days" and trying to figure out if there's another calculation I could do that would yield more positive results, so that I can include that in the paper I'm preparing. So in essence, I'll be trying to mask the negative result by replacing it with another result, which may or may not be related to the original hypothesis. It's almost like a game: how can I analyze my data to yield the results that make me look the best?
It would be ideal if there were more openness in the results process: more accounting of what the original data looks like and of the exact analytical procedures used. But there isn't. Typically, papers contain a brief description of the data and an even briefer description of the system design and analysis process. In most cases, it is difficult if not impossible to repeat the experiments---you simply don't have enough information to go on. This doesn't necessarily mean we're all liars, but it does give us an awful lot of leeway to "massage" the data and analysis to fit our hypotheses.
Would removing the stigma against publishing negative results help? I don't know, but I suspect it would. There would be less incentive to hide your procedure---if it didn't work, there's no sense keeping it secret. And with no stigma, the barriers to disclosure might come down even for the successful experiments.
The thing is, I have no idea how other fields deal with this. Is the stigma against negative results everywhere, or is it just in my discipline? If your field does have a history of accepting negative results, how are negative results presented?
2 comments:
Yes, the stigma against negative results is widespread. "Publication bias" is widely recognized in fields that use statistical methods, especially medicine and psychology. Studies filled with statistically significant results are more likely to be published. The bias is mentioned in many introductory statistics textbooks. There are now several Journals of Negative Results---ecology has one---but I doubt they've helped much.
Thanks, Anonymous! I kind of suspected that other fields might do this too, but I had no idea there were actually journals out there for this sort of thing. Very interesting!