Translation in the web of semiotics
By Henrik Gottlieb
As semiotics implies semantics – signs, by definition, make sense – any channel of expression in any act of communication carries meaning.
For this reason, even exclusively non-verbal communication deserves the label ‘text’, thus accommodating such phenomena as music and graphics, as well as sign language (for the deaf) and messages in Braille (for the blind).
In a Translation Studies context, the latter two categories, both representing strictly conventionalized communication, may very well be considered alongside verbal-only (monosemiotic) and multi-channel (polysemiotic) texts. Unlike music and graphics, messages in Braille or in one of the world’s many sign languages can be transformed into a vocal language – either written or spoken – by relatively simple algorithms. As a case in point, the intersemiotic process of translating from the tactile to the visual mode (or vice versa, cf. Mathias Wagner 2007) – for instance, when a text in Braille is rendered as ‘the same’ text in alphanumeric characters – is certainly simpler and more rule-governed than translating a printed text from one verbal language into another. Both processes, however, remain conventionalized, as opposed to, say, commentating on a baseball match for radio listeners.
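How rule-governed that tactile-to-visual transfer is can be shown with a minimal sketch: assuming uncontracted (Grade 1) Braille and Unicode Braille cells, the ‘translation’ reduces to a fixed, character-by-character table lookup. The table and the names in the code are illustrative assumptions for this example only, not a reference to any particular Braille library or to the author’s own procedure; real Braille additionally encodes numbers, punctuation, capitalisation signs and (in Grade 2) contractions.

```python
# Illustrative sketch: Grade 1 (uncontracted) Braille-to-Latin transliteration
# as a plain table lookup. The mapping is deliberately partial.
BRAILLE_TO_LATIN = {
    "\u2801": "a",  # dot 1
    "\u2803": "b",  # dots 1-2
    "\u2811": "e",  # dots 1-5
    "\u280A": "i",  # dots 2-4
    "\u2807": "l",  # dots 1-2-3
    "\u2817": "r",  # dots 1-2-3-5
    "\u2800": " ",  # blank cell
}

def braille_to_latin(cells: str) -> str:
    """Transliterate a string of Unicode Braille cells into Latin letters.

    Unknown cells are passed through unchanged; the point is only that the
    mapping is a fixed, conventionalized rule, not a full Braille converter.
    """
    return "".join(BRAILLE_TO_LATIN.get(cell, cell) for cell in cells)

if __name__ == "__main__":
    # The cells below spell "braille"
    print(braille_to_latin("\u2803\u2817\u2801\u280A\u2807\u2807\u2811"))
```

Nothing comparable exists for interlingual translation of printed text, where no finite lookup table can capture the mapping between two verbal languages; this asymmetry is precisely the point made above.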