When is it done?
I’ve wondered out loud more than once in recent months about when we consider our work fit to ship. My broad conclusion, on reflection, is that a translation is done when it meets an acceptable standard. But what does that mean, exactly? The whole notion of “acceptable quality” is highly subjective.
Perhaps the client only wants a “gist” translation, or it’s more important for them to have it ASAP than to have it perfect. I dislike both those kinds of work. Gist jobs carry an expectation that there may be imperfections, but how big are the blots allowed to be? (And to what degree are you responsible for them? Perhaps a disclaimer would be in order. I wonder what your professional-indemnity insurer thinks, too.)
Maybe you do all your jobs to the standard you think the client would accept – but does that mean just good enough for them not to throw your translation out of the window in horror, or beautiful enough to have them dancing in delight at your dactylic rhythms and mesmerising mastery of assonance? In a world where almost everyone promises “quality” to a market that’s often unable to judge, quality can be a moveable feast.
So how can we pin it down?
We have the ISO standard now, an approach that amounts to monitoring quality by occasionally auditing whether an LSP has followed its own documented procedures over a period of time. Health warning: this is not the same as ensuring that each individual translation is up to scratch. Indeed, ISO-certified companies have been known to deliver work that has not even been spellchecked, either by the company itself or by the freelance translators it hires. Which rather calls into question the value of ISO, if it can’t ensure quality even at bottom-of-the-pond level. All too often, in many industries, international standards are seen as an overhead, a badge to be gained – at the minimum possible cost – to keep up with the Joneses. (Working as a Quality Manager in an ISO 9001-certified firm in a former life was, alas, an eye-opener at times.) The ISO translation standard is being promoted as a great thing for our “industry”, but some of us have our doubts, even if it may not be “on message” to air them in official organs.
To my knowledge, there are no efficient, reliable real-world ways of quantifying (human) translation quality – beyond laboriously classifying and counting mistakes, but that’s only part of the story. ITI assessors are required to categorise errors in terms of accuracy, terminology, register, grammar, syntax, rewording, collocation, spelling, punctuation, layout, presentation, omissions, additions, consistency and tautology; qualified (MITI) status is awarded, broadly speaking, when the weighted error count is below a given threshold. But this isn’t a viable method for assessing translation quality in everyday “production” contexts (nor, of course, does it claim to be). And, in any case, many of those error types are absolutely elementary: no translator worth their sodium chloride would dream of delivering a translation containing grammatical mistakes, inaccuracies, term inconsistencies or omissions. Either they are easy to find and fix (with a CAT tool or ApSIC Xbench, for instance), or they are matters of basic competence. If I catch an error like that while revising or reviewing my translation, I fix it immediately. Easy.
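For the curious, the weighted-error-count idea can be sketched in a few lines of Python. Note that the severity weights and the pass threshold below are invented purely for illustration – the actual ITI weighting scheme and categories are not reproduced here:

```python
from collections import Counter

# Illustrative severity weights only - NOT the real ITI scheme.
# Unlisted categories default to a weight of 1.
WEIGHTS = {"accuracy": 3, "terminology": 2, "grammar": 2,
           "spelling": 1, "punctuation": 1}

def weighted_error_score(errors):
    """Sum severity weights over a list of error-category labels."""
    counts = Counter(errors)
    return sum(WEIGHTS.get(category, 1) * n for category, n in counts.items())

def passes(errors, threshold=5):
    """Award a pass when the weighted error count stays below the threshold."""
    return weighted_error_score(errors) < threshold
```

So a couple of minor slips might still pass (`passes(["spelling", "punctuation"])` scores 2), while repeated accuracy errors would not (`passes(["accuracy", "accuracy"])` scores 6) – which captures the broad shape of threshold-based assessment, if not its real parameters.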
The question of when a translation is done, then, boils down to when all the subtler, more nuanced points meet our approval. Could we have phrased something more crisply? Could we have restructured a sentence for a more natural-sounding rhythm? Could we have used a synonym that resonates more closely with the prevailing lexical field and the brand/authorial voice? In other words, have we nailed it or is it merely pretty good?
Well, I don’t know about you, but “pretty good” doesn’t do it for me.
We can do better. And we should. We should keep pushing to do better work, to hone our craft, by reading widely, reflecting on our practice and collaborating with others. By simply not settling for “pretty good”.
Let’s nail this thing.