We recently covered the spotlight being cast on the threat of legitimate science publishing being murdered to death under the weight of metric fucktons of plagiarism, gibberish, errors, predatory publications,1 or just stone-cold made-up data.2
One of the drivers of publication pollution identified by Art Caplan, that of predatory publishers,3 is the subject of a study released last week.4 Specifically, this study takes a run (the very first, according to the authors) at spam. No, not your usual spamtastic fare like “Are you a STUD or a DUD – low cost V.1 a g r A” or emails from a Mr. Ntara Sima contacting you re: the $8.5M sitting heirless in an account at the Amalgamated Bank of Nigeria,5 or emails seeking your participation in a gout research study (that was a new one to us!). This flavor of spam is unsolicited email targeting researchers and study authors to solicit their submissions to journals. The study attempted to ascertain whether, and to what extent, these journal and publication spam emails originated from reputable, nominally reputable, questionable, or totally fraudtastic and/or predatory sources.
The spam pool used for the study was 1024 publication or journal spam emails received by one of the authors (“an interdisciplinary researcher, working in and among the fields of agriculture, biology, statistics, information sciences, and social sciences”)6 over 391 days, across four different email accounts. Two of the email addresses were directly connected to the author by way of multiple publications, one was used as contact email for a single publication, and the fourth was never used by the author in conjunction with a published article.
Some of the information collected on the spam emails and their respective journals or publishers included:
- where the requests were coming from: journals, publishers, asserted locations, actual locations
- publishing model applied: whether subscription journals7 or open-access (OA)8
- inclusion (or not) in the Directory of Open Access Journals
- inclusion (or not) in Beall’s List: a list compiled periodically by Jeffrey Beall detailing “potential, possible, or probable predatory scholarly open-access publishers”9
Before we get to the nitty-gritty of the study’s conclusions, the authors posit a cheeky bit of logic:
“[s]pamming is a different problem from predation. To be a spammer does not mean being predatory, but being predatory does entail being a spammer.”
While we managed to stay awake in Logic class long enough to catch that from “if A, then B” it does not necessarily follow that “if B, then A,” we’re still a little fuzzy on the authors’ logic here. We’re not sure what purpose spamming serves other than being predatory (maybe just some nice folks looking for new hobbies since Breaking Bad ended?), and we’re sure there are plenty of ways to be predatory without spamming people. But for the moment, we’ll play along. After all, who has their article published in a respected peer-reviewed journal, and who’s writing edutainment pieces for a blog whose other posts include reviews of Victoria’s Secret teevee programs?
Moving on, the authors assert that there are two main means predatory journals make use of to keep on keepin’ on:
- some young researchers = suckers
- there are individuals willing to publish in questionable journals to inflate their publication list or take advantage of a quick review process and publishing turnaround.
The authors’ findings suggest that, given the quantity of emails from publishers and journals, the main source of email addresses appears to be from “articles published in scientific journals and not those in personal web pages at university sites.” That is, the bulk of the spam emails (nearly 90% of journal spam and nearly 80% of publisher spam) came to one of the addresses the author used as contact info for their publications. One journal sent 24 spam emails, while 15 journals sent 10 or more over the 391 days.
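For the data-curious: the sort of tally the authors describe is easy to picture in a few lines of Python. The journals, addresses, and counts below are made-up stand-ins for illustration, not the study’s actual data:

```python
from collections import Counter

# Hypothetical spam log of (sending_journal, recipient_address) pairs.
# The real study categorized 1,024 emails across four addresses.
spam_log = [
    ("Journal of Advanced Everything", "author_pub1@univ.edu"),
    ("Journal of Advanced Everything", "author_pub1@univ.edu"),
    ("Global Megascience Letters", "author_pub2@univ.edu"),
    ("Global Megascience Letters", "never_published@univ.edu"),
]

# Tally emails per sending journal and per recipient address.
by_journal = Counter(journal for journal, _ in spam_log)
by_address = Counter(addr for _, addr in spam_log)

# Journals sending 2+ emails (the study flagged 15 journals at 10 or more).
repeat_offenders = [j for j, n in by_journal.items() if n >= 2]
print(repeat_offenders)
print(by_address.most_common(1))
```

With real data, comparing `by_address` counts for the published-article addresses against the never-used one is exactly how the authors traced the harvesting back to journal articles rather than personal webpages.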
The authors also analyzed location information for publishers and journals according to the journal’s webpage, a WHOIS search, and IP address info. The authors found significant discrepancies between the locations reported on publishers’ and journals’ webpages and their actual locations. Also notable – one of the top 3 locations for journals based on the journal webpage is – wait for it – Nigeria! We’re happy to see that Nigerian scammers are expanding beyond the prince scams into a more sophisticated market! That’s called ambition, people.
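The discrepancy check itself is conceptually simple: compare the country a journal claims on its webpage against what WHOIS and IP geolocation report. A minimal sketch with hypothetical journals and countries (not the study’s data):

```python
# Hypothetical records: (journal, claimed_country, whois_country, ip_country).
records = [
    ("Intl. Journal of Stuff", "USA", "India", "India"),
    ("Annals of Things", "Nigeria", "Nigeria", "Nigeria"),
]

def flag_discrepancies(records):
    """Return journals whose claimed location disagrees with WHOIS or IP data."""
    return [
        journal
        for journal, claimed, whois, ip in records
        if claimed != whois or claimed != ip
    ]

print(flag_discrepancies(records))  # flags only "Intl. Journal of Stuff"
```

In practice the WHOIS and IP columns would come from lookups against the journal’s domain; here they are hard-coded to keep the sketch self-contained.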
Because we can see you’re starting to nod off, we’re going to cut to the chase and bullet-point a few of the study’s take-aways:
- most of the journals and publishers sending the spam used the open-access model with pay-for-publishing
- on average the time claimed by these journals or publishers for peer review was four weeks – a much quicker turnaround than non-predatory publications can usually offer
- more than 70% of the publishers represented in the spam emails were on Beall’s list
- top 3 countries for headquarter locations of journals or publishers represented in the spam were India, the United States (USA! USA! USA!), and Nigeria
Our prediction10 here at Bitter SciTech is that the era of spamming writers for content has peaked and will soon be a thing of the past. While AI is perhaps not yet upon us in the strictest Turing-test sense, it appears that machines can now passably write news stories – if not with as much cheekiness as a human, then certainly in a fraction of a human’s time and within acceptable levels of intelligibility.11 Perhaps we’re already well on our way down that merry path, and soon Arthur Caplan or the industrious authors of this publication spam study will be unearthing that discovery. Then again, until machines start needing to write checks to predatory publishers to inflate their CVs, it seems likely the stream of publication spam (and the suckers or takers-advantage thereof) will continue unabated.
although wait for it – as you’ll see, Nigerian scammers haven’t lost their fraudsies edge! ↩
i.e. funded primarily or substantially by reader subscriptions ↩
i.e. pay-to-publish journals; note the authors assert that some OA journals are “high quality” and use OA as a means to make knowledge widely accessible, while many use pay-for-publish merely as a means of revenue generation. ↩
The study authors note that Beall’s list comes with its own passel of both supporters and detractors, so they’re careful to denote their neutral stance on the list. ↩
going on nothing more than baseless speculation and jumping to spurious conclusions! ↩