Why ghostwriting, ghost management, and fake journals could be pernicious
We often discuss the value of scholarly publications in terms of attribution of credit for promotion, tenure, and maybe even social capital (discussed in Polanyi); but their primary purpose is to convey knowledge. The introduction and background sections review the literature and place the new work in context. What is the research problem? Why is this interesting? What do we already know? The methods section is for reproducibility - so that ostensibly someone could come along and repeat the work and come up with similar results, even though we know that tacit knowledge, including craftsmanship, is needed to actually reproduce many experiments, and that this knowledge is not conveyed through journals (discussed in Shapin). The methods section helps readers to trust the results. Were the methods appropriate to the research problem? Were they applied appropriately? Were any issues addressed? The results section tells you what the authors found out, and the discussion section tells you why this is important or useful and what needs to be done next.
As discussed variously in work on the social studies of knowledge, scientists cannot repeat every experiment to trust it - if they needed to go back and re-derive and re-test every piece of knowledge they use, they would never be able to do new work. So they must use and trust the scholarly literature as well as other scientists and sources of "public knowledge". But they are skeptical - a Mertonian norm - and require detailed information about how the information was obtained, as well as information about the author, their lab, their funding, their training, etc. This last bit runs counter to the Mertonian norm of universalism, which states that author attributes aren't important. We know from empirical studies of relevance and document selection (see for example Wang & Soergel) that researchers do look at author, affiliation, and publication attributes - things besides the actual content of the article (and topical relevance) - to assess value. Active researchers also have a pecking order of journals: which journals are better because they have a low acceptance rate, a high impact factor, strong editors, or just a better reputation.
When you are well-integrated into a research area, when you are part of an invisible college (Price, Crane), then you will know what research is being done in which labs, and you'll know who does good work (Garvey, 1979). You will have access to much of the research prior to the actual publication in a journal. This is particularly true in the case of "normal" science (Kuhn), in which the problems are pretty well defined and new work is somewhat incremental instead of revolutionary. So when you become aware of new work, you know about the journal, the lab, and may even have a personal relationship with the author - having chatted at meetings - so you can incorporate new ideas and findings into your own work. You have a foundation into which you can fit this new information.
Ok, let's go back now to the case of the fake journal. The Scientist reported that a division of Elsevier compiled reprints of published articles and new questionable articles supporting the efficacy of a certain medicine into a journal, which it then handed out to physicians in Australia. This was not a real journal - the editor didn't have editorial control, and there were no peer reviewers - it was only created to look like one, with a good-sounding name. It was packed with advertisements for the medicine alongside these favorable articles. When we look a little more closely, we understand that this (corporate-funded fake journals) is a service that this division of Elsevier offers, specifically trading on the reputation of Elsevier as a publisher of scholarly scientific and technical information (see quotes compiled by Bill Hooker).
Researchers who are integrated into the invisible college will not be fooled by these! They will not know the authors, they will not know the journal, and the fact that the journals are not carried by libraries or indexed in Medline indicates that they aren't well-respected. Medical libraries, which use extensive collection development heuristics, will not be fooled either. But these "journals" are not intended for the researcher in pharmacology or pain management or what have you! They are intended for the clinicians - the practicing physicians who are not personally involved in research, may have only limited access to "the literature", are pretty busy, and might be just a little bit rusty on evaluating information sources. This is one reason the fake journals could be pernicious - these physicians might be fooled. They might even know that it's marketing material, but still think that it's reprinting good articles.
Good articles. If everyone follows the Mertonian norms of universalism, communism, disinterestedness, and organized skepticism, then the exact process of how the article came about is irrelevant. That is to say, features of the author aren't important, scientific information is given freely to increase society's knowledge with only attribution in repayment, it's all about the science (not societal good, not about personal gain, just what makes good science), and question everything. This assumes scientists are all behaving ethically, and that the only contributors to the scientific scholarly communication system are in fact scientists, who are committed to these norms.
However, we understand from reading two recent works by Sismondo (2007, 2009) that there are other players in the system, who are not in any way committed to the norms, and who are gaming the system for financial gain. According to these articles, pharmaceutical companies hire firms to design and run experiments, write up the results, select the publication venue, recruit a doctor to sign his or her name to the article, and then shepherd the article through publication. The lead author may have had no control over the research or the writing and is certainly not disinterested (the only connection to the work is via a paycheck). These articles appear alongside other scholarly articles in reputable journals (indexed, carried by libraries, well-cited, well thought of by researchers). Further, the lead author may have been selected and hired precisely because he or she is integrated into the invisible college and could plausibly have done this work.
I am not saying that the actual design of the trials was flawed, that the results are not supported by the data, or that it isn't actually good science. The employees of these firms are trained researchers, but ones who are committed not to science and knowledge but to providing a service: making their customer's product look good. Scientists in academic settings certainly do take money from big companies, but there is arguably more separation. Important questions include: how much of the persuasiveness of the article is due to rhetorical manipulation by players who are paid to make a product look good? Are data omitted to ensure that the product looks good? Are the discussion and implications sections supported by the results? I'm curious, too, whether these articles (if they can be identified) are cited by articles produced the old-fashioned way.
So how big of a deal is this? Clinicians and practitioners are not naive - they may know a lot about these shenanigans - but how are they to assess the evidence with limited time, limited access, and when each article addresses just one small area of the knowledge they need for their everyday job? Outright falsification of data is perhaps easy to detect, but fudging the numbers just a bit to make things look just a little more convincing is not. Peer reviewers do not have enough information to detect this, so peer review is not the answer.
Other interested parties include courts, policy makers, and patients. A researcher who is integrated into the invisible college may recognize advertising immediately, but how about the courts, the policy makers, and patients and caregivers who are looking for more information on the course of care their physician has chosen? Are these sponsored articles more findable or accessible than other articles or just the same?
(I have to stop this essay now because of time considerations, but I will try to come back to topics like this as I go)
Crane, D. (1972). Invisible colleges: Diffusion of knowledge in scientific communities. Chicago: University of Chicago Press.
Garvey, W. D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. New York: Pergamon Press.
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago, IL: University of Chicago Press.
Polanyi, M. (2000). The republic of science: Its political and economic theory. Minerva: A Review of Science, Learning & Policy, 38(1), 1-21. Originally published 1962.
Price, Derek J. de Solla. (1986). Little science, big science--and beyond. New York: Columbia University Press.
Shapin, S. (1995). Here and everywhere: Sociology of scientific knowledge. Annual Review of Sociology, 21(1), 289-321.
Sismondo, S. (2007). Ghost management: How much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 4(9), e286. doi:10.1371/journal.pmed.0040286
Sismondo, S. (2009). Ghosts in the machine: Publication planning in the medical sciences. Social Studies of Science, 39(2), 171-198. doi:10.1177/0306312708101047