Proofreading: Spellchecker vs Human.
Producing academic writing at a high level takes years and thousands of hours of work to master, so it is reasonable to assume that proofreading an academic’s writing must be carried out by someone with an equal or greater understanding of the prose. Why, then, trust a computer to recognise what is, and what isn’t, a good piece of academic writing?
When you have written thousands of words, and revised them, and revised them again, and had them checked over by your supervisor, and responded to your supervisor’s feedback, and realised towards the end that you neglected a really important piece of evidence and had to rewrite the whole thing, and now, finally, you’re getting close to completion and submission and printing and binding, all while going through the stress of finding jobs and post-doctoral positions – when all this is done, you still have to do the proofreading. So it’s understandable that you might want to rely on your computer’s spellchecker to do it for you. After all, you’ve read over your work dozens of times; it can’t have that many mistakes in it, and the last thing you want to do is read it again.
Unfortunately, a spellchecker will not only miss some of the typos and errors in your thesis; it may add more of its own. This is known as the Cupertino Effect, after Cupertino, California, home of Apple’s headquarters. In some early spellchecker dictionaries, ‘cooperation’ spelt without a hyphen was missing, and the software automatically corrected ‘cooperation’ to ‘Cupertino’. The mistake was sufficiently prevalent that it gave the effect its name, and it can still be found in the documents of important international organisations like the EU.
The Cupertino Effect is a useful way of explaining how spellcheckers work, and therefore why you can’t rely on them. A spellchecker, to put it as simply as possible, has a database – a list of strings of characters that are acceptable spellings – and the facility to compare the strings in a given document against that database. Any string – say ‘cooperation’ or ‘entartain’ or ‘theinswe’ – that doesn’t appear in the database is flagged with the familiar wiggly red line, or autocorrected. This points to the first problem with spellcheckers: their databases are incomplete, and deliberately so. Making the database too large vastly increases the risk of missed errors, because a misspelling may happen to match some obscure word and be accepted as correct, even when that word is nonsensical in context. So databases are limited to fairly common English words. This is a particular problem for academics, who often use obscure or technical vocabulary that may not be present in a spellchecker’s dictionary.
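The mechanism described above can be sketched in a few lines of code. This is a toy illustration only: the word list is a deliberately tiny, made-up dictionary (with ‘cooperation’ missing, as in the early dictionaries behind the Cupertino Effect), and the suggestion logic is a simple closest-match lookup standing in for whatever real spellcheckers use.

```python
# Toy spellchecker: a set of acceptable strings plus a comparison step.
# The dictionary and suggestion logic are illustrative assumptions,
# not how any particular product works.
import difflib

# Deliberately incomplete dictionary: 'cooperation' is absent,
# but 'cupertino' (a place name) is present.
DICTIONARY = {"the", "of", "answer", "entertain", "cupertino", "question"}

def flag(text):
    """Return the words in `text` that do not appear in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

def autocorrect(word):
    """Replace an unknown word with its closest dictionary entry, if any."""
    if word in DICTIONARY:
        return word
    matches = difflib.get_close_matches(word, DICTIONARY, n=1)
    return matches[0] if matches else word

print(flag("entartain the theinswe"))  # ['entartain', 'theinswe']
print(autocorrect("cooperation"))      # 'cupertino'
```

Note how the autocorrect step reproduces the Cupertino Effect: because the unhyphenated spelling is missing from the database, the ‘correction’ introduces an error that was never in the text.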
The second problem is that, because they operate on a word-by-word basis, spellcheckers are poor at context and meaning. They can’t tell when a correctly spelt word is being used in the wrong place (e.g. ‘in the wrong plaice’), and so tend to miss a lot of common spelling errors and typos. Equally, because they can’t read context, they can misidentify correct words used in unusual contexts as errors.
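A word-by-word check makes this blindness easy to demonstrate. Assuming a hypothetical dictionary that contains both ‘place’ and the fish ‘plaice’, the misused sentence sails through unflagged:

```python
# Sketch: a per-word dictionary lookup cannot see context.
# The dictionary here is a tiny illustrative assumption.
DICTIONARY = {"in", "the", "wrong", "place", "plaice"}

def flag(text):
    """Return the words in `text` that are not in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

print(flag("in the wrong plaice"))  # [] - every word is 'correct', so nothing is flagged
```

Every individual word passes the lookup, so the error survives; only a reader who understands the sentence can catch it.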
So the case against spellcheckers is strong: they don’t find all errors, they can’t always correct the errors they do find, and they flag plenty of things that aren’t wrong at all. None of these problems applies to experienced human proofreaders, who can give documents the individual, contextual attention they need if they are to be perfect. This is especially true of academic proofreading, where correctness is paramount and the vocabulary is likely to be exactly the technical kind that spellcheckers struggle with. Oxbridge Proofreading has subject-specialist proofreaders who can do your proofreading far more effectively than any computer. For some things, humans are still best.