Academic publishing
July 7, 2015 — June 9, 2024
Some notes on the connections between reproducibility, scholarly discovery, intellectual property, peer review, academic business models and such.
To explain: What was I imagining the clear distinction would be between this page and publication bias?
1 Economics of publishing
To some extent the production of academic knowledge is a public goods problem. Insofar as academic publishing is part of the production of academic knowledge, it inherits the oddities of that.
Cameron Neylon runs a cottage industry producing pragmatic publishing critique from an institutional economics perspective:
e.g. The Marginal Costs of Article Publishing or A Journal is a Club:
we’d been talking about communities, cultures, economics, “public-making” but it was the word ‘club’ and its associated concepts, both pejorative and positive that crystalised everything. We were talking about the clubbishness of making knowledge — the term “Knowledge Clubs” emerged quickly — but also the benefits that such a club might gain in choosing to invest in wider sharing.
Working paper: Potts et al. (2016). Alternatively, see Afonso (2014), “How Academia resembles a drug gang”.
In the business setting this often leads incumbent publishers to a kind of spluttering defense of the value they create, while simultaneously complaining that the customer doesn’t appreciate their work. Flip the target slightly and we’d call this “missing the new market opportunity” or “failing to express the value offering clearly”. […]
Lingua, […] has gone from one of the most important journals in analytical linguistics to no longer being in the field, and seems well on its way to becoming irrelevant. How does a company as competent in its business strategy as Elsevier let this happen? I would argue, as I did at the time that the former editorial board of Lingua resigned to form Glossa that it was a failure to understand the assets.
The neoliberal analysis of Lingua showed an asset generating good revenues, with good analytics and a positive ROI. The capitalist analysis focussed on the fixed assets and trademarks. But it turns out these weren’t what was creating value. What was creating value was the community, built around an editorial board and the good will associated with that.
Also, see Pushing costs downstream.
Here’s a thing I would like to see said a little better, but which I think is important, An Adversarial Review of “Adversarial Generation of Natural Language”: the argument is that even though it’s nice that arXiv avoids some of the problems of traditional publishing, it inherits some of the problems that traditional publishing tries to avoid. No free lunches.
2 Realpolitik of journals
Journal rank, journal impact factor, etc. Who cares? Your funders care, against your advice, but whatever; they have the money, so you need to care too if you want them to keep funding you.
Latrobe explains it. The SCImago Journal Rank is the Google PageRank-inspired, slightly hipper journal ranking; their search tool is probably what you want. Impact factors date from the 1960s and are still around; the h-index is also a thing. journalrank might be a factor too?
According to Latrobe, we have the following indices and a (partial) list of their weaknesses.
2.1 h-Index
Hirsch index: The number of articles in a journal [h] that have each received at least [h] citations over a citation period.
Weaknesses:
- Editors can manipulate it by requiring contributors to add citations to their journals
- Increases with age, so it is biased towards researchers with long publication records
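To make the definition concrete, here is a minimal Python sketch (my own naming, nothing official) that computes h from a list of per-article citation counts:

```python
def h_index(citation_counts):
    """Return the largest h such that at least h articles
    have received at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: these five articles give h = 3
# (three articles have at least 3 citations, but not four with at least 4).
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```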
2.2 JIF
Journal Impact Factor: Citations to a journal in the JCR year to items published in the previous two years, divided by the total number of citable items (articles and reviews) published in the journal in the previous two years.
Weaknesses:
- Limited to journals within Web of Science
- Cannot be used to compare journals across different subject categories
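The formula itself is just a ratio. A toy calculation with made-up numbers:

```python
# Made-up numbers, purely to illustrate the JIF ratio.
citations_in_jcr_year = 1200   # citations this year to items from the previous two years
citable_items = 400            # articles + reviews published in those two years

jif = citations_in_jcr_year / citable_items
print(jif)                     # -> 3.0
```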
2.3 SJR
SCImago Journal Rank: Average number of weighted citations received in a year, by articles published in a journal in the previous 3 years.
Weaknesses are that it is “complicated” and that the numbers are small.
So I guess if you must do a journal ranking this is the least bad method?
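For intuition about the PageRank lineage, here is a toy power iteration over a journal-to-journal citation matrix, in which citations from higher-scoring journals count for more. To be clear, this is not the actual SJR formula, just a sketch of the idea it borrows.

```python
import numpy as np

# Toy PageRank-style prestige scores over a journal-to-journal citation matrix.
# NOT the actual SJR formula; just the borrowed idea that a citation from a
# high-scoring journal is worth more than one from a low-scoring journal.
C = np.array([
    [0.0, 3.0, 1.0],   # citations from journal 0 to journals 0, 1, 2
    [2.0, 0.0, 4.0],
    [1.0, 1.0, 0.0],
])
P = C / C.sum(axis=1, keepdims=True)   # share of each journal's outgoing citations
n, damping = len(C), 0.85
score = np.full(n, 1.0 / n)
for _ in range(100):                   # power iteration to a fixed point
    score = (1 - damping) / n + damping * P.T @ score
print(score)                           # higher score = more "prestige" in this toy model
```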
2.4 CORE/ICORE conference ratings
The ICORE rankings rank conferences by some notion of importance in their fields. For example, in my workplace we can expect our travel to be only to “A*” conferences, which optimises for the prestige of the organisation and the defensibility of the funding. Are they the best conferences to go to in terms of outcomes? I do not know.
3 As an ecosystem
Hugo Larochelle, in Announcing the Transactions on Machine Learning Research, describes a new journal by the niche it fills, rather than assuming it is complete in itself.
[…] we’re happy to announce that we are founding a new journal, the Transactions on Machine Learning Research (TMLR). This journal is a sister journal of the existing, well-known Journal of Machine Learning Research (JMLR), along with the Proceedings of Machine Learning Research (PMLR) and JMLR Machine Learning Open Source Software (MLOSS). However it departs from JMLR in a few key ways, which we hope will complement our community’s publication needs. Notably, TMLR’s review process will be hosted by OpenReview, and therefore will be open and transparent to the community. Another differentiation from JMLR will be the use of double blind reviewing, the consequence being that the submission of previously published research, even with extension, will not be allowed. Finally, we intend to work hard on establishing a fast-turnaround review process, focusing in particular on shorter-form submissions that are common at machine learning conferences.
As these are all features of conferences like NeurIPS or ICLR, we hope that TMLR will become a welcome and familiar complement to conferences for publishing machine learning research. TMLR will also depart from conferences’ review process in a few key ways.
Anytime submission Being a journal, TMLR will accept submissions throughout the year. For this, we will be implementing a rolling review process which will be executed on a per-paper timeline.
Fast turnaround We are implementing a review timeline that will provide reviews to papers within 4 weeks of submission and decisions within 2 months. To enable this, we will implement a capped workload for action editors (the equivalent of conference area chairs) and reviewers so as to remain lightweight throughout the year, while also requesting a commitment to accept all assignment requests.
Acceptance based on claims Acceptance to TMLR will avoid judgments that are based on more subjective, editorial or speculative elements of typical conference decisions, such as novelty and potential for impact. Instead, the two criteria that will drive our review process will be the answers to the following two questions:
- Are the claims made in the submission supported by accurate, convincing and clear evidence?
- Would some individuals in TMLR’s audience be interested in the findings of this paper?
The first question therefore asks that we focus the evaluation on whether claims are matched by evidence. If they are not, authors will be asked to either provide new evidence or simply adjust their claims, even if that means the implications of the work are reduced (that’s OK!). The second, though somewhat more subjective, aims at ensuring the journal features work that does contribute additional knowledge to our community. A reviewer that is unsure as to whether a submission satisfies this criteria will be asked to assume that it does.
Certifications This will be a unique feature of TMLR, which is aimed at separating editorial statements on submitted work from their claim-based scientific assessment. An accepted paper will have the opportunity of being tagged with certifications, which are distinctions meant to highlight submissions with additional merit. At launch, we will include the following certifications:
- Outstanding Certification, for papers deemed to be of exceptionally high quality and broadly significant for the field (along the lines of a best paper award at a top-tier conference).
- Featured Certification, for papers judged to be of very high quality, along the lines of a conference paper selected for an oral or spotlight.
- Reproducibility Certification, for papers whose primary purpose is reproduction of other published work and that contribute significant added value through additional baselines, analysis, ablations, or insights.
- Survey Certification, for papers that not only meet the criteria for acceptance but also provide an exceptionally thorough or insightful survey of the topic or approach.
4 Tools
See also academic reading workflow for reader-oriented tips.
-
A platform for scholarly publishing and peer review that empowers researchers with the
- Autonomy to pursue their passions,
- Authority to develop and disseminate their work, and
- Access to engage with the international community of scholars.
-
Millions of research papers are available for free on government and university web servers, legally uploaded by the authors themselves, with the express permission of publishers. Unpaywall automatically harvests these freely shared papers from thousands of legal institutional repositories, preprint servers, and publishers, making them all available to you as you read.
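Unpaywall also exposes a simple REST API keyed by DOI. A minimal sketch, with the endpoint and response fields as I understand the public docs (verify against the current documentation before relying on it):

```python
import requests

# Sketch of querying the Unpaywall REST API for an open-access copy of a paper.
# Endpoint and response fields as I understand the public API docs at the time
# of writing (unpaywall.org/products/api); check them before relying on this.
doi = "10.1234/example.doi"    # placeholder; substitute a real DOI
email = "you@example.org"      # Unpaywall asks for a contact email per request

resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": email})
resp.raise_for_status()
record = resp.json()

best = record.get("best_oa_location") or {}
print(record.get("is_oa"), best.get("url_for_pdf"))
```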
Zenodo “is an open dependable home for the long-tail of science, enabling researchers to share and preserve any research outputs in any size, any format and from any science.”
- Research. Shared. — all research outputs from across all fields of science are welcome!
- Citeable. Discoverable. — uploads get a Digital Object Identifier (DOI) to make them easily and uniquely citeable…
- Flexible licensing — because not everything is under Creative Commons.
- Safe — your research output is stored safely for the future in the same cloud infrastructure as research data from CERN’s Large Hadron Collider.
A major win is the easy DOI-linking of data and code for reproducible research, for free.
-
is a free Web publishing tool that will create a complete Web presence for your scholarly conference. OCS will allow you to:
- create a conference Web site
- compose and send a call for papers
- electronically accept paper and abstract submissions
- allow paper submitters to edit their work
- post conference proceedings and papers in a searchable format
- post, if you wish, the original data sets
- register participants
- integrate post-conference online discussions
-
A peeriodical is a lightweight virtual journal with you as the Editor-in-chief, giving you complete freedom in setting editorial policy to select the most interesting and useful manuscripts for your readers.
I did not find that explanation as useful as the interview the creators gave.
-
is an open access online scholarly publishing platform that employs open post-publication peer review. You guessed it! We think transparency from start to finish is critical in scientific communication. […]
Retraction Watch is a watchdog blog for sufficiently-high-profile research that has somehow ended up doing well-regarded gatekeeping/exposure.
4.1 Shadow libraries
Seem to be legal in some jurisdictions, but not in others. Check your local laws before relying upon them.
Anna’s Archive is a meta-index of sites that unpaywall journals etc.
📚 The largest truly open library in human history. ⭐️ We mirror Sci-Hub and LibGen. We scrape and open-source Z-Lib, DuXiu, and more. 📈 30,453,135 books, 100,357,111 papers — preserved forever. All our code and data are completely open source.
See wikipedia and TorrentFreak coverage.
Sci-Hub is the pirate site that has been the most successful in providing free access to academic papers. It may be shut down due to an Indian court case.
🧬 SciDB attempts to continue the work of Sci-Hub.
5 Prescriptive
Iterations on how this system of review and dissemination could work better? See Peer review.
6 The new universal libraries
6.1 Copyright activism
The biggest phenomenon in open access, as far as I can tell, is the massive pirate infrastructure providing open access to journals for free.
Copyright activism, Guerilla open access etc.
See, e.g., Jonathan Basile’s essay on AAARG, Who’s Afraid of AAARG?. See generally Free online libraries.
There is an interesting question about mechanism design for the important business of science, to which I do not myself pretend to know good answers.
6.2 Open access
Various open access (and occasionally also open source) journals attempt to disrupt the incumbent publishers with new business models based around the low cost of internet stuff. As with legacy journals, they have varying degrees of success.
One cute boutique example:
Open Journals is a collection of open source, open access journals. We currently have four main publications:
- The Journal of Open Source Software
- The Journal of Open Source Education
- The Open Journal of Astrophysics
- The Journal of Brief Ideas
All of our journals run on open source software which is available under our GitHub organization profile: github.com/openjournals.
All of our journals are open access publications with content licensed under a Creative Commons Attribution 4.0 International License. Copyright remains with the submitting authors.
7 Incoming
Is Frontiers predatory?
- Is Frontiers a potential predatory publisher? – For Better Science
- Predatory reports: Is Frontiers Media a Predatory Publisher?
tl;dr: Their review process is perceived by the community to be suspect and their journal impact factors are generally decreasing.
PNAS is Not a Good Journal (and Other Hard Truths about Journal Prestige). At this point, most people in science are aware that journals are behaving ever more parasitically. They are sustained by the reputational capital of their self-fulfilling prophecies of prestige, but are spending down that capital. The most generous interpretation of academia’s response is that we are too busy trying to buttress public trust in science to reform these undeservedly respected institutions. The least generous is that we are parasites in symbiosis with the journals, using them for our own reputations at the cost of public trust in science. In between is the story that we are all too busy trying to meet deadlines to solve the collective action problem of boycotting journals.
Identify trusted publishers for your research • Think. Check. Submit.
“Journal Evaluation Tool” by Shilpa Rele, Marie Kennedy et al.
Felix Schönbrodt, My personal reviewing policy: No more billion-dollar donations.
Étienne Fortier-Dubois, Why Is ‘Nature’ Prestigious?
Time for a Change: How Scientific Publishing is Changing For The Better