A closely related paper on open post-publication reviewing:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/post-publication-review.html
Another closely related paper on open-access journals:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/open-access-journals.html
A partial index of discussion notes in this directory is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
An earlier version of the following message was posted on Sat 18 Aug 2018 to senior colleagues in computer science departments in the UK and some non-academic members of the computer science research community.
Note added 16 Mar 2022:
I have recently stumbled across Frank Ritter's criticisms of ORCID numbers,
closely related to the points made below. His comments are available at
http://www.frankritter.com/problems-with-orcid.html
Another objection to reliance on ORCID numbers is that very many distinguished authors lived before that mechanism was introduced, so other means of identifying them are needed in any case.
My comments below followed earlier discussion of aspects of the REF (Research Excellence Framework) process and some of the implications of the Gold Open Access mechanism for publication.
A member of the list had circulated two links to relevant information:
Analysis of the economic and other issues around open access by Andrew Odlyzko. Short note:
http://www.dtc.umn.edu/~odlyzko/doc/open.access.evolution.txt
Full paper:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2211874
I responded with the message below:
Something really strange happened in the UK a few years ago. A report written by Adam Tickell (while he was a senior member of my university) was suddenly announced as having been accepted, without, as far as I can tell, any substantial consultation with researchers.
To my dismay it recommended *gold* open access instead of following what I believe is the system in the USA, namely requiring all results of *publicly* funded research to be freely available to everyone, without preventing publishers from selling printed copies or other services related to the reports.
I wonder whether there was any analysis of the costs, in money and time, of setting up the mechanisms to support gold open access, in addition to the fees paid to the publishers.
This does not affect me personally since I no longer apply for grants, and I simply post everything I write online.
My own view is that the needs of research will in future be much better served by mechanisms based mainly on post-publication reviewing, allowing new versions of corrected or extended research reports to be located in the same place as the original versions. (Perhaps using history mechanisms similar to Wikipedia's?)
Reviews would then also (mostly) be public, except, for example, where privacy is agreed between reviewer and author.
If I publish in a journal now, I put up with the hassle only because someone wants me to, and I then make my online version slightly different, and use a different format.
The notion that we should pay for editing services doesn't fit the experience many of us have had with publishers' copy editors. E.g. the copy editing of the Elsevier Turing centenary volume was both inconsistent and dreadfully incompetent:
Alan Turing - His Work and Impact (2013)
Eds. S. B. Cooper and J. van Leeuwen
Table of contents here:
http://www.cs.bham.ac.uk/~axs/amtbook/
In desperation, Barry Cooper had to take over the whole process and do it himself.
[[Example: in some cases the copy editor replaced 'program' with 'programme' (e.g. in my papers) and in another place replaced 'programme' with 'program' (in one of Alan Turing's contributions).]]
I've had similar dreadful experiences with Springer's copy editors, e.g. in this volume
The Incomputable: Journeys Beyond the Turing Barrier
(2017)
Eds. S. B. Cooper and Mariya I. Soskova
(Alas Barry Cooper died before it was complete.)
When I was sent a proof to correct, my paper had been so badly mangled that I asked for it to be withdrawn. The only way for me to have errors corrected was to print a copy, mark all the errors in ink, scan it in, and send a pdf file, with the risk of having to repeat the process after someone had read in my corrections. In 2016!!
At that stage the Springer editor managing the project (not the copy editor) volunteered to accept an electronic "differences" file from me and make the changes himself. I don't know how long it took him, and I have not bothered to check how accurately it was done, but the need for that should never have arisen: there are publishers (e.g. MDPI) who provide their copy-edited version with all changes explicitly marked, together with easy mechanisms to undo unwanted changes.
I have discovered that many colleagues, over many years, have had multiple complaints about publishers' copy editors (like the psychologist who did not notice that "with IQs in the 50s" in her original had been transformed into "with IQs in the 1950s" by an idiotic copy editor who did not know the difference between an IQ and a date).
I've documented some gripes about publishers' copy editors here: http://www.cs.bham.ac.uk/~axs/publishing.html
I've also had dreadful, time-wasting problems with MIT Press, CUP, and others, and colleagues have confirmed this.
So the claim that when we pay for gold open access we are paying for valuable services is laughable.
There are deeper issues.
Publication was once a mechanism for disseminating important or interesting research results.
That function has now been *dwarfed* by other functions, e.g. producing numerical evidence regarding citations to be used by selection panels and promotion committees.
THAT SHOULD BE ILLEGAL.
It is a way of avoiding the use of judgement of the quality of research, and it leads to decisions that are likely to be unfair to some of the most intelligent researchers with the deepest potential to make progress on very hard problems that require long-term research before they produce final results.
Why should we pay precious funds to publishers to take over our responsibilities for judging and advising colleagues?
Well-managed internal, and externally supported, evaluation processes should be able to identify such researchers without forcing them to divert their efforts into raising citation counts or obtaining grants -- e.g. if what they are working on is a theoretical problem that does not need research assistants, project meetings, regular reports, etc., or a practical project that requires several years of investigation.
[Research councils that insist on research proposals that specify results to be achieved each year clearly have no idea what research is.]
There's a further unmentionable(?) aspect to this.
We used to have a collection of Polytechnics that did an excellent job of
keeping up with research, and teaching students who needed to get knowledge
and skills required for jobs in commerce, industry, school teaching, etc.
Many of the Polys also ran courses and workshops for local companies who wanted their staff to be informed about recent research results, new techniques, etc.
I was once part of a university effort to compete with a local Poly in that activity (under pressure from central management to bring in more income).
It soon became clear that we could not compete successfully, for several reasons, including the differences between people doing that teaching as a chore and people doing it as a job they wanted to do.
But a *Tory* government, of all things (actually Ken Clarke), decided to turn all the Polytechnics into Universities, with many bad consequences, including enormously increased (unmanageable??) workloads for REF panels.
Some of the history is here:
https://moremeansbetter.wordpress.com/2018/02/17/john-major-and-academic-drift/
One of the bad, inhumane, effects: staff in ex-Polytechnics, who had been dedicated teachers, loved their jobs and did them well, suddenly found themselves under pressure to start generating publications and getting research grants in order to be promoted, etc.
I am lucky -- retired, and kindly allowed by my department to go on doing research and using web resources, desk space, etc., so most of the REF nonsense does not affect me.
But I am very concerned about what we are doing to bright young researchers whose scientific/academic education has, in effect, been truncated in order to turn them into bean generators for bean counters.
Will we never again have a young outstanding researcher announce, after years of quiet research, an astounding new result, having been supported by senior colleagues who may not have understood the research in detail, but had qualities of judgement (and help from external referees if necessary) that allowed them to avoid research-destroying pressures?
I hope the relevance to the mechanisms for gold open access is clear.
Aaron Sloman
http://www.cs.bham.ac.uk/~axs
Response to independent advice on open access to research:
Letter from Jo Johnson MP to Professor Adam Tickell (Ref: BIS/16/122)
https://www.gov.uk/government/publications/open-access-to-research-independent-advice-response
This letter sets out the response to Professor Adam Tickell's advice on open access to research policy. The independent advice was requested by Minister for Universities and Science Jo Johnson in July 2015.
Published 11 February 2016
This work, and everything else on my website, is licensed under a
Creative Commons Attribution 4.0 License.
If you use or comment on my ideas please include a URL if possible, so
that readers can see the original, or the latest version.
This document is maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham