1. Signed languages and linguistics.
In this chapter, we discuss the discovery of signed languages as real
languages and describe their place within modern linguistics. We begin by
defining language and linguistics. First, we explore some of the properties
language shares with other systems of communication, as well as features
that may make language unique. Second, we introduce the field of
linguistics—the scientific study of language—and its major areas of
investigation. We then discuss signed language linguistics and its history,
examine common myths and misconceptions about signed languages, and
describe the relationship between signed languages and other forms of
gestural communication.
1.1. What is language?
One of the aims of the field of linguistics is to understand exactly what
language is, so providing a definition is difficult because the study of
language is very much a work in progress. In addition, many contemporary
textbooks in linguistics discuss definitions of language that were proposed
before signed languages were recognised as real languages. Thus, in order to
provide a working definition of language, we will draw on a useful summary
first provided by the researchers Charlotte Baker and Dennis Cokely (1980):
a language is a complex system of communication with a vocabulary of
conventional symbols and grammatical rules that are shared by members of a
community and passed on from one generation to the next, that changes
across time, and that is used to exchange an open-ended range of ideas,
emotions and intentions.
This working definition draws on a number of key features that were
proposed by Charles Hockett (1960) to be central aspects of language
structure and function: the use of arbitrary symbols, grammaticality,
discreteness, duality of patterning, cultural transmission, interchangeability,
reflexiveness, displacement and creativity. Some of these features are shared
by language and other communication systems, while others may be unique
to human language. We describe each of these characteristics in the
following sections.
1.1.1. Arbitrary symbols.
All communication systems (including, for example, traffic lights, monkey
calls, the dance of honey bees and human language) rely on the use of
symbols to produce meaning. In traffic lights, for example, we have a set of
three coloured lights—green, amber and red. Each of these coloured lights is linked to a specific meaning: green, for example, means ‘go’, while red
means ‘stop’. Among vervet monkeys, there are three different calls that
mean ‘snake’, ‘leopard’ and ‘eagle’ respectively (Seyfarth, Cheney & Marler,
1980). In response to the ‘snake’ call, other members of a vervet monkey
troop will stand up and scan the ground, while the ‘leopard’ call will see
them run into the trees. The tail-wagging dance of bees is used to
communicate information about sources of nectar (Frisch, 1967). The
direction of the dance indicates the direction of the flight path to the food, the
speed of the dance signals how rich the source of nectar is, and the tempo of
the movement provides information about the distance. In each
communication system, we see that the symbols involve a relationship
between some form (e.g., a coloured light, a specific call or a movement) and
a meaning.
The words and signs used in languages such as English and Auslan may
also be considered examples of symbols. This link between form and
meaning in signed and spoken language may be arbitrary. Arbitrary words or
signs show no link between their form and meaning. The sound of the word
cat, for example, does not resemble any sound made by a cat. It only means
‘cat’ by a completely conventional association of this sequence of sounds
with this meaning. There is nothing natural about this link between form and
meaning—it results entirely from the long-established use of this word in
English-speaking communities. Other language communities have similar
meanings associated with different sequences of sounds, so that ‘cat’ is neko
in Japanese and paka in Swahili.
Similarly, the sign SISTER in Auslan is produced by tapping the X
handshape twice on the nose. Neither the shape of the hand used in this sign
nor its location or movement has any physical resemblance to the concept
of ‘sister’. The association between this sign and its meaning is nothing more
than customary usage in the Auslan signing community. In fact, this sign also
has this meaning in British Sign Language (BSL) and New Zealand Sign
Language (NZSL), because these three languages are historically related. In
other signed languages, such as American Sign Language (ASL) and Taiwan
Sign Language (TSL), the sign is quite different. In fact, in TSL, there are
two signs—ELDER-SISTER and YOUNGER-SISTER.
FIGURE 1.1.
The Swiss linguist Ferdinand de Saussure claimed that arbitrariness was in
fact a defining feature of language, differentiating it from other
communication systems (Saussure, 1983 [1915]). As we see from the
discussion above, however, arbitrary symbols are not unique to human
language. There is no apparent link between the colours of traffic lights and
their meanings, nor between the particular sound used in the ‘leopard call’ of
vervet monkeys and any sound produced by a leopard. Furthermore, many
symbols in human language are not arbitrary at all. Language also includes
iconic symbols in which some aspect of a symbol’s form resembles some
aspect of its meaning. The word for ‘cat’ in Thai, for example, is meo.
Clearly, there is a link between the sound of this word and the sound made by
a cat. English includes some words that use onomatopoeia (a term used to
refer to sound-based iconicity), such as chiffchaff (the name of a particular
songbird whose song alternates from a higher to a lower note), cuckoo, tap,
crash, click, slurp and bang. English also uses links between form and
meaning in other ways as well. In a phenomenon known as sound symbolism,
related sounds tend to occur in words that are similar in meaning, such as the
gl- sequence in glisten, glow, glitter and gleam. Moreover, the order of
sentences in a story usually follows the sequence of events as they actually
occurred (Haiman, 1985). Thus, there is more iconicity in spoken languages
than previously believed.
Many symbols in signed languages are iconic, such as the signs CAT in
Auslan and Japanese Sign Language (Nihon Shuwa or NS). The first appears
to suggest an action typically associated with a cat (i.e., stroking its fur),
while the second seems to represent the typical actions involved in a cat
washing itself.
FIGURE 1.2.
Although some signs in Auslan are arbitrary, signs that are in some way
iconic are more common. In spoken languages, however, the reverse is
true—the link between form and meaning in most words is arbitrary. This
greater degree of iconicity in visual-gestural languages is not particularly
surprising because objects and actions in the external world tend to have
more visual than auditory associations. Many objects (such as a table or cup)
make no distinctive sounds at all, but have characteristic shapes, or are
associated with typical human actions that can be used as the basis of signs.
Thus, one form of the Auslan sign TABLE traces the shape of a tabletop and
legs, and one variant of CUP represents holding a cup and bringing it to one’s
lips.
Despite these differences, what arbitrary and iconic words and signs have
in common is that their association with particular meanings is based on
customary usage within a particular community and thus must be learned by
children, as we will see in §1.1.4. Thus, what is important about the use of
symbols in language is their conventionalisation—the fact that members of a
community share an understanding that particular meanings are conveyed by
particular forms (Deuchar, 1984). Because most symbols in spoken
languages are both arbitrary and conventionalised, it seems that some linguists
mistakenly assumed that arbitrariness was a defining feature of language. In
fact, it is conventionalisation that is the key to understanding the relationship
between a symbol’s form and its meaning.
1.1.2 Grammaticality.
Human languages have grammaticality. No human language consists of a
vocabulary of conventional symbols alone—all languages also have rules for the
appropriate combination of these symbols. This means they have
grammars—rules for the correct grammatical structure of words and
sentences. Other communication systems also have rules of combination. In
the case of traffic lights, for example, the green light can follow a red light,
but an amber light always precedes a red light. The term grammar, however,
is usually reserved for the rules that exist in human languages.
An example of a grammatical rule in English would be the word order in
the phrase the woman has seen the man. Here the subject noun phrase the
woman comes before the verb phrase has seen, and the object noun phrase
the man comes last (the terms noun phrase and verb phrase are explained in
Chapter 7; subject and object are discussed in both Chapters 7 and 10). This
is a grammatically correct sequence of words in English, but it may not be
grammatical in other languages. In German, for example, a different order
would be used for this example: Die Frau hat den Mann gesehen. Literally,
this translates as ‘The woman has the man seen’. Here we can see that part of
the verb phrase (i.e., gesehen ‘seen’) comes at the end of the sentence, and
the word for ‘the’ has two forms (i.e., die and den). In Auslan, the equivalent
may be signed in the following way: PT+lf WOMAN FINISH SEE PT+rt MAN
(see Conventions in the introductory pages to this volume for an explanation
of these and other Auslan examples). Note that the Auslan sentence does not
include a sign meaning ‘has’, unlike both English and German. Instead, a
completed action is signalled in Auslan by the use of the sign FINISH. Also
note that in this example, the pointing signs work in the same way as the word
the in English, or die and den in German, but they may also include
information about the relative locations of the two individuals being
discussed. This potential for spatial information is not present in the spoken
language examples. Despite these differences between the three languages, it
is clear that they each share the property of grammaticality.
1.1.3 Discreteness and duality of patterning.
Language structure has discreteness: its symbols are made up of a limited set
of smaller, separate units. The words of spoken languages are made from a
limited set of sounds (e.g., Australian English uses just 44 distinctive
sounds), and the signs of Auslan appear to be made of a limited number of
handshapes (i.e., approximately 35 handshapes are important in the core
vocabulary of the language, as we will see in Chapter 4). Discrete units have
clear, definable boundaries, and do not show gradience. In English, for
example, the sounds /s/ and /z/ are perceived as distinct—speakers appear to
disregard intermediate sounds between the two. Similarly, the handshapes 4
and 5 in Auslan are distinct. Although the position of the thumb may vary in
FOUR, once the thumb is fully extended and visible, the sign becomes FIVE. Even
though spoken and signed languages are both produced as a continuous
stream of sounds and gestures, users are able to segment this connected
speech and signing into a finite number of separate (i.e., discrete) units.
Moreover, language appears to have duality of patterning—it has two
distinct levels of organisation. All languages are able to build meaningful
units (e.g., words or signs) out of smaller units that have no meaning in
themselves. Thus, words in English enter into two patterns of contrast at
once. The word man differs from other words in meaning, contrasting with
woman, boy, girl, etc. The word also differs from other words formationally,
contrasting with can, ban, mat, etc. The sounds in the word man (i.e., the
sounds represented by the letters m, a and n) have no meaning of their
own. Only a combination of these sounds in the correct order produces a
word with meaning in English—man.
Signed languages also exhibit duality of patterning. For example, the sign
SISTER contrasts in meaning with other signs such as BROTHER, MOTHER,
FATHER, etc. We can see that the sign has a handshape, movement and
location that do not in themselves have any meaning, and that changes in one
of these features of the sign create a different sign. Changing the location to
the cheek produces the sign STRANGE, for example, while moving it to the
chin makes a sign meaning WHO in New South Wales and Queensland. In
each case, the handshape, movement and location do not have meaning of
their own—it is only when the parts are combined into the correct
combination that we produce meaningful signs.
Duality of patterning may be a unique feature of language, although it is
present to a limited extent in some forms of animal communication (e.g., bird
song has individual notes that are combined into particular calls with specific
meanings, see Tchernichovski et al., 2001). Duality is most highly developed in
human language, however—it is this feature that makes it possible for the
thousands of words in English and signs in Auslan to be built up from a much
smaller set of units.
Nonetheless, just as words and signs are not all arbitrary, so not all words
or signs in a language need to display discreteness and duality of patterning—the minimal units of some words or signs may have their own
meanings, and some aspects of language may be gradient. As we shall see in
Chapters 4, 5 and 10, many signs are composed of minimal units which may
indeed carry their own meaning or that may be modified to show gradient
meanings.
1.1.4 Cultural transmission.
Spoken and signed languages differ from one part of the world to the next, as
we shall see in §1.3.1. Children born into each different language community
have to learn the vocabulary and grammar of the language (or languages)
used by adult members of that community. This learning is referred to as
cultural transmission. In this regard, language differs from many
communication systems used by animals, such as the calls of the vervet
monkey or the tail-wagging dance of the bee. Although some aspects of their
appropriate use may be learned by some animals, many of these non-human
communication systems appear to be entirely innate. Zebra finches that are
deafened during development or reared in isolation will develop the typical
song of their species, although it may not completely match the songs used by
hearing finches raised with other birds (Lombardino & Nottebohm, 2000).
Some aspects of human behaviour also appear to develop without learning,
such as how to swallow liquids or how to recognise our parents’ voices, but
understanding and producing the specific vocabulary and grammar of one’s
first language is not one of these innate abilities. If language were entirely
innate, then languages would be the same across the globe and children
would not need to learn them. Although children are undoubtedly born with
an innate capacity that makes language learning very rapid and effortless in the
first few years of life (and some linguists believe that some general aspects of
language structure may be innate, as we shall see in Chapter 10), it is clear
that the vocabulary and grammar of specific languages are transmitted from
one generation to the next by learning and are not genetically preprogrammed
in the brain.
1.1.5 Interchangeability and reflexiveness.
All users of human language may send information to and receive it from
other users. This is known as interchangeability. This makes language
different from some other communication systems. Although drivers can
understand the message sent to them by traffic lights, it is not possible to
communicate with traffic lights by attempting to send information back to
them. Similarly, only worker bees perform the tail-wagging dance (i.e., other
types of bee, such as the queen bee, cannot communicate in this way), and
only male zebra finches can produce their distinctive song. Because speakers
and signers can both send and receive information, it is possible for
humans to monitor their own use of language based on the feedback they
receive from their own language production (e.g., users of spoken language
can hear their own talk, while signers can see and feel their own signing).
The ability to monitor one’s own use of language also directly leads to
another possibly unique feature of human language—the ability to use
language to talk about language itself, just as we are doing now. This
characteristic is known as reflexiveness.
1.1.6 Displacement.
Displacement refers to the unique ability of language users to refer to objects
and actions that are removed from the immediate time and place in which the
language is being used. Thus, speakers and signers can talk about events in
the past or in the future, or at distant locations. Systems of communication
used by animals are generally limited to conveying information about objects
or events in present and immediate situations. Thus, a vervet monkey cannot
discuss a leopard it saw last week, for example. It can only refer to leopards
that are present at the time the call is used. Furthermore, the property of
displacement allows language users to talk about people and places that exist
only in the imagination.
1.1.7 Creativity.
Creativity, like displacement, appears to be another feature that is unique to
human language. All natural languages are able to expand their vocabulary to
express new meanings. For example, Auslan signs have developed since the 1990s
for new technology, such as INTERNET (Figure 1.3), EMAIL, MOBILE-PHONE
and DVD. New signs are also appearing in Auslan because of increasing
contact with deaf people from other countries. Many Auslan signs for
countries are now being replaced by signs used by the deaf community in that
country. For example, there are new and old signs for AMERICA, ITALY
(Figure 1.3) and CHINA. This property of language means that languages
change across time, as new words and signs are created, and older ones
abandoned.
FIGURE 1.3.
Creativity does not appear to be found in other communication systems.
Despite changes to their environment, vervet monkeys have not created any
new calls, and honeybees have not modified their tail-wagging dance to
distinguish between different sources of nectar.
1.2. What is linguistics?
Having proposed a definition of language and discussed some of its key
characteristics, we will now turn our attention to the study of language
known as linguistics. More precisely, linguistics may be described as the
scientific study of language. We refer to linguistics as scientific because
linguists approach the study of language in a scientific manner. As Geoffrey
Finch (2000) explained, this means that (1) linguists adopt an objective view
of language and (2) they use scientific methods in their study of language
(i.e., they use observation, description and explanation).
What does it mean to say that linguists adopt an ‘objective’ view of
language? Linguists are mostly interested in how people actually use
language, and less in how people think they should use language. The
approach taken by linguists is thus a descriptive approach. Linguists aim to
give a complete and accurate account of how a language is used at a
particular point in time. Linguists collect and study facts about language
through interviews, experiments and tests. They also gather information from
written sources such as books and newspapers, and by tape-recording or
video-recording people as they use language in real life situations. These
observations are the basis for a description of the language, which attempts to
explain the objective reasons for the ways language is structured, used and
acquired by a community. In our case, our aim in this book is to provide an
unbiased and objective introduction to some aspects of the history, structure
and use of Auslan. We wish to provide information about the structure of
language, for example, that is based on a description of how native signers in
the community actually use the language (native signers are deaf or hearing
people who grew up with the language from birth).
This is in sharp contrast to the prescriptive approach. Prescriptivists set out
rules for what are believed to be correct ways to use language. Often, they use
beliefs about language purity, logic and tradition to create rules of ‘correct’
language use (Crystal, 1997). One well-known example is the Académie
Française, which was established in France in 1635 (Eastman, 1983). It is a
group of 40 individuals that acts as an official authority on the French
language. They publish a dictionary of the language, and make rulings about
norms of French grammar and vocabulary. In particular, they publish lists of
French words that are recommended as replacements for words that are
‘borrowed’ from other languages, particularly English. For example, the
Académie has ruled that the English words Walkman and browser, which are
commonly used in France, ought to be replaced by the French equivalents
baladeur and logiciel de navigation. These recommendations are made
because the Académie believes it must try to protect the ‘purity’ of the
French language which they see as threatened by the growing influence of
English. These rulings have no legal power, however, and are often ignored
by the French government, media and education system, which continue to use
words borrowed from English (McCrum, Cran & MacNeil, 1986). Recently,
the British Deaf Association has established a ‘British Sign Language
Academy’ to protect, promote and preserve BSL. It will be interesting to see
whether this organisation will experience the same fate as its French cousin.
The English language lacks an organisation like the Académie Française,
but English does have a strong tradition of prescriptivism. Beginning in the
eighteenth century, prescriptive books about the structure and use of the
English language began to be published, many of which became very
influential in education (Leith, 1997). Many of these grammar books did not
aim to record actual usage in the community, but instead proposed rules of
English grammar based on the structure of Latin or on the laws of logic. At
the time, Latin was a language still held in high esteem in Europe. For the
previous thousand years, it had been the language used for international
communication in scientific and political affairs, and its grammar was
considered an example of great logic and clarity (although, in fact, it is no
more so than any other human language). Thus, these books suggested that
certain common usages in English should be abandoned because they did not
follow the same grammatical rules found in Latin. A few well-known
examples of ‘correct’ usage proposed by prescriptivists are listed in Table
1.1.
TABLE 1.1.
Some of these usages (such as the use of double negatives like I haven’t
done nothing wrong) were supposedly ‘incorrect’ because they were
considered illogical. Double negatives, however, have existed in English for
several centuries as an emphatic way of expressing negation, and double
negatives are the norm in other languages, such as French. It must be pointed
out that all these so-called ‘incorrect’ ways of speaking and writing reflect
extremely common usage across the entire English-speaking world, and that
it is not clear why Latin grammar or logic should form the basis for
determining standard forms of English.
Prescriptivism also exists in the Auslan signing community. Many Auslan
teachers reject the use of particular signs even though they are used in the
deaf community. This is especially true of those signs that have come into the
language recently from Australasian Signed English (we discuss Australasian
Signed English in Chapter 2) or from foreign signed languages, particularly ASL. Many signers also reject signs that were originally only used in specific
regions of Australia, or that have been created by hearing people, such as
sign language interpreters. Some Auslan teachers instead advocate the
preservation and teaching of older and traditional vocabulary, even when
many younger deaf people do not use or are even unaware of such signs.
In contrast to the prescriptive approach, linguists do not attempt to evaluate
variation in language, or to halt language change, but simply to record the
facts. David Crystal (1997:2) pointed out, however, that it is not easy for any
of us to study language objectively. Good language skills are important and
highly valued, and people make judgements about a person’s family
background, education, intelligence and even attractiveness based on how
they speak or sign. As a result, most readers will come to a book on
linguistics like this one with strong views about what English and Auslan are,
and how these two languages should be used. As Crystal explained,
‘language belongs to everyone; so most people feel they have a right to hold
an opinion about it. And when opinions differ, emotions can run high.’
1.2.1 Areas of linguistics.
The field of linguistics is divided into a number of major areas.
First, some linguists work in areas that focus on the structure of
languages. The study of the nature of speech sounds and how they are
produced and perceived is known as phonetics. This contrasts with
phonology, which is the study of how sounds are organised into the words
and phrases of different languages. Although phonetics and phonology both
originally referred to the study of sounds in spoken language, they are also
used by sign language researchers to refer to the physical properties of signs
(signed language phonetics) and how signs are created from smaller
formational units (signed language phonology). We explore some aspects of
the phonetics and phonology of Auslan in Chapter 4.
The study of grammar is divided into two areas: morphology (the study of
the grammatical structure of words) and syntax (the study of the grammatical
structure of word sequences, such as phrases and sentences). Lexicology is
the term used to refer to the study of the vocabulary (or the lexicon) of a
language. Discourse analysis is the study of how sequences of sentences are
organised into larger structures, such as conversations or stories. The study of
the grammatical structure of Auslan signs and sentences is explored in
Chapters 5 and 7, while a description of the Auslan lexicon is provided in
Chapter 6. We describe some aspects of Auslan discourse in Chapter 9.
Second, linguists also work in areas that focus on how language is used.
Semantics is the study of how language structures are used to make meaning,
while pragmatics is the study of the relationship between language structure, meaning
and context. These aspects of Auslan are covered in Chapters 8 and 9. The
study of the relationship between language and society, including variation in
language structure and how it relates to social factors (such as gender, age or
region), is known as sociolinguistics (this is discussed briefly in Chapter 2).
A particularly important area of sociolinguistics is the study of bilingualism
(i.e., knowing two or more languages) and language contact (how languages
influence each other as a result of contact between different linguistic
communities). The study of how language changes over time is known as
historical linguistics (we look at the history of Auslan in Chapter 3).
Third, linguists are also interested in how languages are learned and
processed by the mind and brain. The study of how children learn language is
called first language acquisition. Psycholinguistics is the study of how the
mind produces and processes language, and is a subfield of both linguistics
and psychology. Neurolinguistics is specifically concerned with the
biological aspects of language and the brain (which parts of the brain are
involved in producing and processing language and how they work), and thus
overlaps with other fields such as medicine and psychiatry.
Last, the field of applied linguistics refers to the application of knowledge
about the structure and use of language to other areas, particularly to
language teaching (known as second language acquisition), translation and
interpreting, and dictionary making (or lexicography).
Despite these well-established divisions and specialisations within the field
of linguistics, it would be a mistake to see these areas as strictly separate and
to believe that each could be pursued without reference to the others. Many
linguists stress the essential interconnectedness of all the different levels of
language structure and use. They emphasise that grammar cannot be properly
described or studied without reference to semantics. Such linguists see the
lexicon, morphology and syntax as forming a continuum of language
structures that are not separated by clear and unambiguous boundaries in the
way our brief introduction may suggest. We will return to the issue of the
nature of language and linguistic theory in Chapter 10.
1.3 Signed languages: Myths and misconceptions.
Signed languages (also known as sign languages) are the natural languages
of deaf communities. In this book, we use the terms signed language
linguistics or sign linguistics to refer to the scientific study of visual-gestural
languages of deaf communities rather than the auditory-oral languages of
hearing people.
It is very common for books on signed languages to begin with a
discussion of myths and misconceptions. Although Auslan was first formally
recognised as a community language by the Australian government in 1984
(Lo Bianco, 1987), a number of dictionaries have been available since 1989,
and the language has been taught in schools, colleges and universities across
the country, many misunderstandings about the language persist, even within
the signing community itself. As a result, we outline some of the most
common misconceptions in the following sections. Note, however, that we
attempt to point out the reasons that these misunderstandings have emerged,
and indicate that, in some cases, there is a grain of truth in them.
1.3.1 Sign language is not universal.
As we will see in §1.4 below on the history of signed language research, it
was sometimes assumed in late eighteenth-century Europe that signed
languages used by deaf people were a form of universal language. The Abbé
de l’Epée, for example, who established one of the first public schools for
deaf children in the world in 1760, believed that the signed communication
used in his school in Paris could serve as the basis of a universal language
(Kendon, 2004). This belief has continued to this day, with many people
outside the signing community surprised to learn that Auslan is a signed
language variety only used in Australia (Auslan is, however, closely related
to BSL and NZSL).
Signed language is not, however, a universal language. There are many
different signed languages around the world, and many of these have
developed independently of each other. Even a brief comparison of any of the
documented signed languages used in various parts of the world today will
show that signed languages are not identical in their vocabulary or
grammatical structure. If we compare the sign SISTER in Auslan, ASL and
TSL (Figure 1.1) we see that very different signs exist for this concept in
these different signed languages. Signed languages also do not all use the
same building blocks to create signs. We will see in Chapter 4 that the set of
handshapes used in Auslan is not the same as those in other signed languages
(e.g., Auslan does not use a handshape that has only the ring finger extended,
but this handshape is used in the sign SISTER in TSL). The basic sentence
structure of different signed languages may also differ. We will show
in Chapter 7 that, in some situations, Auslan appears to prefer a sign order in
which the actor precedes the verb and the undergoer follows it (e.g., MAN
KNOW WOMAN). It is claimed that, in the same context, NS and Argentinian
Sign Language use an actor-undergoer-verb order (e.g., MAN WOMAN KNOW)
(Nakanishi, 1994; Massone & Curiel, 2004). In Auslan, a headshake may be
used to signal negation (e.g., a headshake produced while signing WOMAN
CAN DRIVE will produce an utterance meaning ‘the woman cannot drive’), but
in Greek Sign Language, it appears that a backward head tilt may also be
used for the same function (Antzakas & Woll, 2002).
Furthermore, not only do signed languages vary from one part of the world
to the next, but (as with spoken languages) variation can also be found in the
vocabulary and grammar within particular signed language communities.
Thus, different signers of Auslan may use different signs for the same
concept because of their regional origin, educational background and age
(this point is explored in more detail in Chapter 2).
Despite these differences, however, studies appear to indicate that the
vocabularies of unrelated signed languages often contain a proportion of similar
or identical signs (Kyle & Woll, 1985), and that the grammars of signed
languages are also similar in many ways (Johnston, 1989a; Newport &
Supalla, 2000). We explore this point in more detail in Chapter 3. Thus, although signed language is not universal and instead varies from one part of
the world to the next, it appears that different signed languages may be more
similar to each other than the spoken languages of the world are to one another.
1.3.2 Signed languages are not based on spoken languages.
As we will see in Chapter 2, signed languages of deaf communities are not
based on spoken languages. Many people assume that Auslan is simply
English in signed form. This, however, is not the case. Many aspects of the
vocabulary and grammar of Auslan are quite unrelated to English. For
example, the English word light has several meanings. English speakers
describe an object as light if it does not weigh very much; they would say
something is a light colour if it is very pale; or they would say ‘turn on the
light’ when referring to an electric light in a house or other building. All three
of these meanings would be translated into Auslan by different signs (as
shown in Figure 1.4), despite the fact that the same form is used in English.
We explore more examples of the vocabulary of Auslan in Chapter 6.
FIGURE 1.4.
In terms of grammar, Auslan uses rules that differ from those found in
English. One of the grammatical features of English is the marking of
plurality (i.e., the concept of more than one) by the use of the ending –s on
nouns. English also marks past tense (i.e., that some action occurred in the
past) by the use of the ending –ed on verbs or by a system of modified verb
forms (e.g., run versus ran). It also includes strict rules about the ordering of
words in sentences (e.g., the woman asked the boy means something quite
different from the boy asked the woman). For each of these grammatical
phenomena, Auslan and English differ. Auslan does not use an ending on
nouns to show plurality, but, as we shall see in Chapter 5, this does not mean
that Auslan cannot signal information about number. Auslan does not mark
past tense by an ending on verb signs, but the language can indicate
important time-related information in other ways (see Chapters 6 and 7). The
order of signs is more flexible in Auslan than English, and thus strategies
other than word order (as used in the English example above) might be
employed to show who does what to whom (see the discussion on the use of
space and indicating verbs in Chapter 7 for details).
Despite these differences, Auslan is the language of a minority surrounded
by a much larger English-speaking majority. As is also typical of many minority languages in the same social situation, contact between the two
languages has resulted in Auslan drawing on English in many areas of its
vocabulary and grammar. Many signs are based on fingerspelling the first
letter of the corresponding English words (e.g., D-D for DAUGHTER, B-B for
BRISBANE) or are fingerspelled abbreviations (e.g., J-A-N for JANUARY and S-Y
for SYDNEY). Other words are regularly fingerspelled in full (e.g., S-O-N,
J-U-L-Y). The influence of English on Auslan is explored in more detail in
Chapters 2 and 6 (the two-handed fingerspelling system used in Australia is
illustrated in Figure 2.2).
Thus, signed languages of deaf communities are not based on spoken
languages, but they may in fact be significantly affected by the language of
the surrounding community.
1.3.3 Signed languages are not simply pantomime and gesture.
Sometimes it is mistakenly believed that signed languages are nothing more
than forms of pantomime and gesture. By this, it is often meant that signs,
and rules for their combination, are made up on the spot. Communication
between signers, it is sometimes believed, is achieved by simply pointing at
objects, drawing pictures in the air or by acting out descriptions of events.
People often use the term ‘sign language’ to refer to this kind of improvised
visual-gestural communication that occurs when two people who do not
speak each other’s language meet (e.g., ‘The man in the market place in Bali
did not speak English, so we had to use sign language to communicate’).
Research in linguistics, as explained above, has demonstrated that the natural
signed languages are in fact real human languages, and not simply
pantomime and gesture in this sense.
It is true, however, that the visual-gestural languages of deaf communities
share some properties with the gestural communication used by non-signers
(Kendon, 2004). The extent of these similarities is currently a matter of
controversy among sign language researchers. We explore this point in more
detail in §1.5 below, and the debate in signed language linguistics about the
relationship between signed language and gesture is taken up in Chapter 10.
1.3.4 Signed languages are not always iconic.
Related to the misconception about the relationship between signed
languages and gesture is the widespread belief that the meaning of all signs
comes from their being ‘pictures’ of what they represent. We discussed the
notion of iconicity in language in §1.1.1 above, and we pointed out that
iconicity is more common in signed languages than in spoken languages. A
range of different kinds of evidence can be presented to demonstrate that the
presence of iconicity in signed languages should not, however, be
overemphasised (Woll, 1990). First, like words in all languages, signs also
may be arbitrary. Some signs in Auslan have no apparent iconic relationship
to their meanings (e.g., PRETEND, MELBOURNE, YOUNG and BEACH (Figure 1.5)). This lack of a clear form-meaning relationship is also found in other
signed languages. In addition, the formation of signs in visual-gestural
languages is never determined solely by their resemblance to an object or
action. As we will see in Chapter 4 on the formational structure of signs, the
structure of signs is also influenced by the complex interactions of visual
perception and manual production as well as language-specific formational
patterns (e.g., the handshape in the TSL signs ELDER-SISTER and YOUNGER-SISTER
is not found in any Auslan sign). Furthermore, processes of historical
change in signs result in some iconic signs developing into arbitrary symbols
over time (Frishberg, 1975; Kyle & Woll, 1985). For example, one sign for
LIBRARY (Figure 1.5) in Auslan originally meant ‘hairclip’ and was the name
sign of the librarian at the Victorian school for deaf children. For many
signers today who are unaware of the sign’s history, the sign is an arbitrary
one with no clear connection to its meaning. Together these facts mean that
the sign vocabularies of unrelated signed languages, such as NS and Auslan,
often contain many quite different signs.
FIGURE 1.5.
Second, even when signs are iconic in origin, the particular relationship
represented can be specific to that language, as we saw with the different
forms of the sign CAT in Auslan and NS above. Similarly, the most common
Auslan sign for WOMAN (Figure 1.6) is signed with a B hand moving down
the cheek, perhaps indicating the smooth cheeks of a woman’s face (in
contrast to the Auslan sign MAN which suggests a man’s beard). In Israeli
Sign Language, the index and thumb pinch the earlobe, while in Danish Sign
Language (DSL), the sign indicates the shape of the breasts (Woll, 1990).
Third, there is also no evidence that children from signing families learn
signed languages more quickly than hearing children learn spoken ones, despite the greater degree of iconicity in sign vocabulary and grammar. In a summary of
many years of research comparing the acquisition of spoken and signed
languages, Laura Petitto (2000:452) presented the following conclusions:
Deaf children who are exposed to sign languages from birth acquire
these languages on an identical maturational time course as hearing
children acquire spoken languages. Deaf children acquiring sign
languages from birth do so without any modification, loss, or delay to
the timing, content, and maturational course associated with reaching
all the linguistic milestones observed in spoken language. Beginning
at birth, and continuing through age 3 and beyond, speaking and
signing children exhibit identical stages of language acquisition.
Finally, interesting evidence comes from experimental studies of short-term
memory and language production errors (‘slips of the hand’), which
suggest that signers use the structural components of handshape, orientation,
location and movement (see Chapter 4) when remembering and producing
signs rather than their iconic properties alone (Emmorey, 2002). As we shall
see in §1.3.6 below, there is much evidence that visual-spatial information,
such as photographs and maps, and linguistic information, such as spoken
and written words, are processed in different areas of the brain. For sign
language researchers, the question naturally arose: are signs processed more
like pictures or more like words? Researchers wondered if highly iconic signs
(e.g., DRINK, TABLE, CUP) might be easier to recall for signers than less iconic
ones, perhaps because of strong connections with visual memories or
representations. Klima and Bellugi (1979) reported, however, that
experimental studies comparing signers’ ability to remember lists of signs
low in iconicity with lists of highly iconic signs showed no difference in
recall. This does not mean that iconicity does not have other effects on
the processing of signed languages by the brain—for example, iconic signs
may be easier for adult learners to remember (see Lieberth & Gamble,
1991)—but only that iconic and non-iconic signs share similar structural
properties.
Unfortunately, however, this evidence has been interpreted by some
linguists to mean that iconicity plays no significant role at all in signed
languages (see, for example, Pinker, 1994). This is not the case: most signs in
Auslan do in fact have some link between their form and meaning, and
iconicity plays an important role in the grammar (see Chapters 5, 6, 7 and 8).
1.3.5 Signed languages have the same expressive capacity as spoken
languages.
Contrary to what is sometimes believed, signed languages have the same
potential for expressing subtle, technical and complex meanings as spoken
languages. Although signed languages share some properties with gesture
and include many iconic signs, this does not mean that they are limited in their expressive capacity. There are well-established Auslan signs for a range
of complex concepts, such as CULTURE, DISCRIMINATION, PHILOSOPHY and
LINGUISTICS. Moreover, any word that exists in English (or any language
with a Roman script) can be introduced into Auslan by means of
fingerspelling.
Nevertheless, the sign vocabulary of Auslan is smaller than the vocabulary
of English (Johnston & Schembri, 1999). This, however, does not indicate
that the expressive capacity of Auslan is limited, only that the language has
not been used in as wide a range of situations as English. This is true of all
languages—the vocabulary of a language reflects the way it has been used.
Auslan has only recently begun to be employed again as a language of
instruction in schools for deaf children. It is only over the last two decades
that it has started to be used by deaf students at universities and colleges, and
by deaf employees in a wide range of professional and technical jobs. As a
result, the sign vocabulary of Auslan is undergoing a period of rapid
development and expansion.
1.3.6 Signed and spoken languages are processed by the brain in similar ways.
Signed languages are produced by the hands, face and body, and perceived
through vision. This makes them very different from spoken languages that
are produced by the speech organs and perceived through hearing. Research has
shown, however, that this does not make as great a difference to how signed
and spoken languages are processed by the brain as might be expected.
The human brain is divided into two halves (known as hemispheres). In
most human beings, the left hemisphere controls many language functions,
while the right hemisphere controls many visual-spatial skills (as was
mentioned in §1.3.4 above). After a stroke, particular parts of the brain may
be damaged, which can result in the loss of specific skills. Patients with
damage to parts of the right hemisphere, for example, may lose their ability
to draw. Others with left hemisphere damage may suffer from language
problems (known as aphasia), such as the inability to produce grammatically
correct sentences. This does not mean that the right hemisphere does not have
a role in language processing (it is important for the production of intonation
and for making sense of stretches of spoken discourse, for example), only
that parts of the left hemisphere play a particularly important role for spoken
language grammar.
Because many people are aware of the different roles played by the two
sides of the brain, some assume that signed languages must be entirely
processed by the right hemisphere because, unlike spoken languages, they are
visual languages that make use of space. Research into the signed
communication and visual-spatial skills of deaf people with brain damage in
the 1980s has, however, suggested that this is not the case (Poizner, Klima &
Bellugi, 1987). Deaf signers with damage to certain areas of the left
hemisphere (such as Broca’s area or Wernicke’s area of the brain) showed
very similar types of aphasia to hearing people who use spoken languages.
Signers with left hemisphere damage had difficulties with signed language
grammatical skills, and yet retained the ability to draw. Moreover, some
signers with right hemisphere damage exhibited a breakdown of visual-spatial
skills, and yet were still able to use some key aspects of signed
language grammar.
Recent research using new technologies, such as functional magnetic
resonance imaging (or fMRI) or positron emission tomography (or PET), has
enabled researchers to see which areas of the brain are active in normal
healthy individuals during language production. Although recent work has
shown that the right hemisphere does indeed have a role in certain aspects of
signed language processing (such as in the use of space and facial expression
during signing), it has confirmed the initial findings based on the study of
people with aphasia. For many key aspects of the production and
comprehension of signed languages, the left hemisphere is dominant, just as
it is with spoken languages (Emmorey, 2002), though it is becoming
increasingly clear that language, especially face-to-face communication that
is signed or spoken, also uses the right hemisphere.
1.3.7 Children learn spoken and signed languages in similar ways.
There have been no longitudinal studies of children learning Auslan (i.e., no
studies that have investigated how children develop Auslan from birth to
early adulthood). Research on children learning other signed languages (such
as ASL and BSL), however, suggests that signed languages are acquired by
children in the same way as spoken languages (Emmorey, 2002). For deaf
children with signing parents, signed language acquisition begins at birth and
continues through childhood. These children appear to reach all the same
developmental milestones at the same age as hearing children learning
spoken language (Petitto, 2000). From the age of approximately six months,
these deaf children begin to ‘babble’ on their hands, producing sign-like
actions in imitation of the signed language they see around them. They
produce their first sign at around their first birthday. Although some
researchers claimed that deaf children’s first signs are acquired earlier than
hearing children’s first words, more recent research suggests that this finding
was incorrect, and that there is no significant difference in the timing of the
first sign versus the first word.
The one-sign stage (like the one-word stage in speaking children)
continues for some time, as the children add more and more new signs to
their vocabulary. Signing children produce signs like FATHER, MOTHER, DOG,
BATH, HOT, EAT and GOODBYE, as is also typical of young children learning
spoken languages. They also make the same kinds of errors in production,
producing signs with incorrect handshapes or movements, in the same way that
speaking children are initially unable to pronounce all the sounds used in
English words. Just before they are two years of age, children begin to
combine their signs in two-sign combinations, such as WANT MILK or FIND
BALL. By two and a half, vocabulary begins to grow more rapidly, and
sentences become much longer as children begin to acquire complex
grammatical rules. They learn how to negate sentences with headshakes and
signs like NOT and NOTHING, begin to form questions, and make
use of space in their signing. By age five, most of the basic grammar of the
signed language is learned, although it takes a few more years before all
aspects of the language are mastered completely.
Hearing children from deaf families who learn both signed and spoken
languages (for example, in cases where one parent signs and the other speaks)
move through the same stages, and show no preference for spoken language
in their early years, even though they can hear (Petitto, 2000). This shows
that, for young children, language is language, regardless of whether it is
spoken or signed.
1.3.8 Signed languages were not invented by hearing people.
There is no evidence that any single individual, hearing or deaf, invented
natural signed languages such as Auslan, BSL, ASL and French Sign
Language (Langue des signes française or LSF). Signed languages appear to
have been in use among deaf people in Australia, Britain, the United States
and France (and elsewhere in the world) before schools for deaf children
were established in the eighteenth and nineteenth centuries (we discuss the
history of Auslan in Chapter 3). In fact, there are references to the use of
signed language by deaf people in the writings of Plato (Rée, 1999). The
work of the eighteenth-century deaf writer, Pierre Desloges, describes an
active signing community in Paris at the time, most of whose members had no formal
education. In fact, the Abbé de l’Epée is known to have first learned LSF
from deaf people, and later used a variety of this signed communication as
the medium of instruction in the first public school for deaf children in Paris
(Lane, 1984). This very approach had been recommended by John Wallis in
England in the late seventeenth century, who suggested that educators must
learn deaf people’s signed language in order to teach them English (Rée,
1999).
Thus, it can be assumed that signed languages developed naturally when
deaf people first came together to form deaf communities. We can see that
same process at work today in countries such as Nicaragua where a new
signed language has developed only relatively recently. In 1979, a socialist
government came to power after a revolution in Nicaragua. The new
government created the first school for the deaf, and deaf children were
brought together for the first time. Although the language of instruction in the
school was Spanish, the deaf children began to create a signed language in
the classroom and in the playground to communicate with each other. At
first, they used home signs—a limited vocabulary of signs that they had
individually created to communicate with their hearing family members.
Over time, more and more of these signs began to be shared among the deaf
students, and rules for the combination of these signs into sentences began to develop naturally. A new language—Nicaraguan Sign Language—was born
(Kegl, 1994; Kegl, Senghas & Coppola, 1999).
The misconception that signed languages were invented by hearing people
probably comes about for two main reasons. First, some artificial sign
systems have been created (at least in part) by hearing individuals, and there
is little doubt that such systems have in fact influenced natural signed
languages. This includes the Australasian Signed English system that was
developed by a committee (which included both hearing and deaf people)
between 1974 and 1982. The purpose of this system for representing English
in signed form was to teach English to deaf children. We discuss artificial
sign systems in Chapter 2.
Second, it seems that fingerspelling systems that are used by deaf signers
were first used by hearing people. For example, the two-handed manual
alphabet used in Auslan today appears to have its origins in fingerspelling
systems used by hearing people as secret codes (Sutton-Spence & Woll,
1999). Later, this alphabet began to be used by some deaf people and was
adopted as a tool for teaching literacy to deaf children. The one-handed
alphabet used in many other signed languages, such as ASL and LSF,
appears to have been introduced in the early seventeenth century by Juan
Pablo Bonet as a method of teaching reading and writing to deaf individuals
(Padden & Gunsauls, 2003). It may have its origins in a manual alphabet
used by monks during periods of ritual silence.
1.3.9 Signed languages can be written down.
Members of the Australian deaf community do not have any everyday written
form of the language, and English is used as the written language by all
literate signers. Some people mistakenly believe that signed languages cannot
be ‘real’ languages because they lack a written form. This misunderstanding
reflects the fact that writing is such a large part of our culture, and as a result,
some of us find it difficult to imagine using a language that has no written
form. There are, however, many spoken languages around the world today
that have no writing system and no written literature, and few would question
whether these are real languages. Thus, the issue of a writing system is
irrelevant to the question of whether or not signed languages are real
languages.
Sometimes, however, the point about Auslan lacking a written form is
misinterpreted as a claim that signed languages, by their very nature, cannot
be written down (Bernal & Wilson, 2004). People sometimes point out that
Auslan makes use of the space around the signer, as well as a range of facial
expressions, and this poses a challenge for the design of a writing system for
the language. There is, however, much in the spoken message that is
routinely omitted from the written form (such as accent, intonation, etc.). In
fact, a number of writing systems for signed languages have been proposed.
Some have become widely used by sign language researchers for specific
purposes, and others have even begun to be used in schools for deaf children
in some countries as an educational tool.
Signed language writing systems come in two forms: glossing and
notation. Glossing refers to the practice of using spoken language translations
of signs, together with special symbols to represent the use of space and
facial expression. This is the type of writing system used in this book to
represent Auslan (see Conventions). Notation, in contrast, involves the use of
special symbols to represent the physical features of signed language itself.
The best-known examples are Stokoe Notation, first created by
William Stokoe, and HamNoSys (the Hamburg Notation System) from the
Institute for German Sign Language in Hamburg, Germany. None of these
systems, however, is intended as a practical way of communicating in a
written form of a signed language: they are intended to represent signs and
signed utterances for linguistic analysis (an example of two Auslan signs
written in HamNoSys can be found in Chapter 4).
One signed language notation system that does aspire to be a practical
way of communicating in written form is Sutton Sign Writing. This uses
simplified illustrations of handshapes, facial expressions and the body
together with movement symbols to represent signs. It is used by researchers,
teachers and some members of the deaf community in the USA and some
other countries (e.g., Belgium, Colombia, Denmark, Japan, Nicaragua, Peru,
South Africa and Spain).
1.4. A brief history of the study of signed languages.
As mentioned above, recognition of signed languages may be traced back to
the work of Plato in Ancient Greece. In his philosophical work Cratylus
(written in 360 BC), Plato wrote that if we had no voice or tongue, ‘should
we not, like the deaf and dumb, make signs with the hand and head and the
rest of the body?’. In the seventeenth century, the French philosopher René
Descartes suggested that the signed languages of deaf people represented
examples of true human languages (Rée, 1999). Similar beliefs were shared
by nineteenth-century scholars such as Edward Tylor in Britain, Wilhelm
Wundt in Germany and Garrick Mallery in the United States of America
(Kendon, 2004). The educator Roch-Ambroise Bébian even attempted to
develop a writing system for signed languages based on his discovery that
signs can be analysed into smaller components (Fischer, 1995). For a number
of reasons, however, signed language research went into decline during the
early twentieth century, and many of these earlier insights were forgotten.
Modern signed language linguistics is often considered to have begun with
the publication in 1960 of Sign Language Structure by William Stokoe, a
hearing lecturer at Gallaudet College in Washington DC. This was the first
analysis of ASL structure using linguistic methodology, and Stokoe
presented persuasive evidence that ASL was indeed a language with a
grammar and vocabulary independent of English. This was followed five
years later by the Dictionary of American Sign Language on Linguistic
Principles (Stokoe, Casterline & Croneberg, 1965). Stokoe’s publications
were, however, preceded by work published in Dutch by Bernard Tervoort.
He described the signed communication used by deaf children in the
residential school at St Michielsgestel in The Netherlands. Tervoort
recognised this signing as a language, but his study was less influential than
the later work by Stokoe.
Despite these beginnings, however, the signed language research being
carried out by Stokoe and his colleagues at Gallaudet in the 1960s aroused
little interest elsewhere, and even some hostility from members of the college’s academic and administrative staff, who believed that signed languages
were not ‘real’ languages and questioned the value of this research (Maher,
1996). By the early 1970s, however, interest in ASL was growing, led by the
researchers Klima and Bellugi at the Salk Institute for Biological Studies.
Klima and Bellugi recognised that the study of human language would be
incomplete without research into the visual-gesture communication of deaf
communities, and they trained a whole generation of deaf and hearing sign
language researchers in their sign language laboratory in San Diego
(Emmorey & Lane, 2000). News of the groundbreaking work on ASL began
to spread out across the world in the 1970s. Signed language research started
in the United Kingdom and Europe in the mid 1970s, and began in Australia
in the 1980s with the work of Trevor Johnston. He wrote the first published
descriptions of Auslan including a sketch grammar and a dictionary (1987a,
1987b) as well as a curriculum guide for the teaching of Auslan as a second
language (1987c). This was followed by the first doctoral dissertation on
Auslan (1989a) and a comprehensive illustrated dictionary of the language
(1989b).
Since the 1980s, signed language research has become a truly international field, with papers published on signed languages from South and Southeast Asia, the Middle East, Africa and South America. In 2004, at the Eighth International Conference on Theoretical Issues in Sign Language Research in Barcelona, Spain, papers on over 25 signed languages from all parts of the world were presented.
1.5. Signed languages and gesture.
In §1.3.3 above, we showed that signed languages are not identical to gesture
and mime. Nevertheless, gesture is a very broad term, and one whose use is
easily misunderstood. Adam Kendon (2004), for example, suggests that
gestures are visible actions of the hands, face and body that are intentionally
used to communicate. When human beings interact face to face, a range of
different bodily actions conveys information about their intentions, feelings
and ideas. For example, a speaker’s posture and gaze direction can make
their addressee aware of the focus and nature of their attention, even though
this information may not be under conscious control. Kendon suggests,
however, that this body language should not be considered an example of
gesture, as gestures are deliberately communicative actions.
Gesture is often contrasted with signed languages, but we can see that
Kendon’s (2004) definition would certainly encompass the visual languages
of deaf communities. How then are gesture and signed languages to be
distinguished? Is such a distinction possible or useful?
In earlier work, Kendon suggested there were a number of main kinds of
gestural communication: (1) gesticulation, (2) mime, (3) pointing, (4)
emblems and (5) signed languages. The psychologist David McNeill (1992)
placed these gesture types on a continuum that he termed ‘Kendon’s
continuum’, reflecting their relationship to language. A version of this
continuum is shown in Figure 1.7. For our purposes, we will compare each
type of gesture to signed language so that differences and similarities can be
highlighted.
FIGURE 1.7.
Gesticulation refers to the type of spontaneous gesturing that occurs as
people speak. McNeill (1992:9) illustrated this nicely with an example from
his own research. In these studies, a speaker watched a film or animated
cartoon, and then later recounted the story to a second person. The example
in Figure 1.8 is a gesture produced by a participant while explaining how one
character in the cartoon pursued another and attempted to hit the unfortunate
individual with an umbrella. The speaker produces this gesture while saying
‘...and she chased him out again’. This example illustrates how the iconic
gesture can complement the spoken utterance, conveying information that the
speech leaves out, since the informant did not refer to the use of the umbrella
by the cartoon character in spoken words, only in the gesture.
FIGURE 1.8.
Although such use of gesture may convey specific meanings in particular
contexts, this does not necessarily mean that such gestures could be
considered equivalent to words in a spoken language, nor to signs in a signed
language. Gesticulation lacks most of the main properties of language. There
is no fixed vocabulary of such gestures, for example, and the use of
gesticulation varies from one person to the next. These gestures tend to occur
on their own, rarely joining together into sentence-like patterns. Instead,
these gestures appear to be closely synchronised with the rhythm of speech,
and to serve to supplement spoken language in particular ways. However,
like signed languages, gesticulation makes meaningful use of handshapes,
locations and movements: the gesture in Figure 1.8 resembles a sign in
Auslan for HIT, for example.
Mime involves imitating real-life activities without the objects and people
normally involved being physically present. A mime artist ‘may act out the
process of riding a bike, going to bed or driving a bus without any props
other than her or his own gestures and body movement’ (Brennan, 1992:12).
It differs from the use of gesticulation shown in Figure 1.8 in two ways: mime relies less on accompanying speech to convey its meaning, and it involves more than the use of the hands. If the umbrella-waving gesture discussed above were combined with movement of the head and body, then it would properly be considered an example of mime. As with gesticulation, however, there is no vocabulary of mime standardised across a community of users. As a result, the mimed communication of the type seen in television game shows or in the theatre may sometimes require too much
time and space to work as an effective communication system (Brennan,
1992:13). The mime artist must tell a story by acting it out in real time, as if
it were happening in the present, and must walk around the stage in order to
suggest the location and spatial arrangement of the objects and people being
described. As Mary Brennan explained:
If the artist wishes to convey the meaning expressed by the sentence ‘I
over-indulged last night by eating an enormous meal’, an elaborate
replay of the activity involved would be required. In contrast, sign
languages can exploit the potential of space and gesture while honing
the medium into a fast and efficient linguistic tool.
The existence of a standardised vocabulary of signs means that users of
signed languages can refer freely to events in the past, present or future, and
do not require such elaborate acting out of activities to communicate basic
information. The grammatical organisation of signed languages also allows
signers to quickly and efficiently communicate who did what to whom. Thus,
signers may remain in one place, using only the space around them as a
‘stage’ in which to represent people, objects and actions. Despite this, many
aspects of signed language have a basis in mime. As we shall see in Chapters
8 and 9, both individual signs such as SWIM and RUN as well as the use of role
shift during stretches of signed discourse resemble mimed representations of
actions.
Unlike other forms of gesture, emblems usually involve the use of very
specific handshapes, locations and movements that are linked to specific
meanings. In Britain and Australia, for example, Churchill’s palm-forward
‘V for victory’ gesture differs only slightly from the palm-backwards ‘up-yours’ insult. Emblems also have a different relationship to speech, often replacing it completely. These gestures have particular functions, being used
mainly as forms of greeting, command, request, insult, or threat. Examples of
emblematic gestures include hand waving for ‘hello’ or ‘goodbye’, the ‘okay’
sign, and the ‘thumbs-up’ gesture.
The precise meaning of particular emblematic gestures is often only known
to a particular cultural group. Thus, like the words of spoken languages,
emblems vary from one part of the world to the next. McNeill (1992)
explained that the ‘hand purse’ gesture (made by placing the fingers and
thumb together, pointing upwards) is used to signal a ‘question’ or ‘query’ in
Italy, ‘good’ in Greece, and to express fear in France, Belgium and Portugal.
Similarly, the ‘okay’ sign, so widely known throughout Europe, is considered
a threatening gesture in North Africa.
Emblematic gestures may thus be comparable to the signs in signed
languages. Unlike signs, however, such gestures tend to be restricted in
number and function. Non-signers tend to use very few emblems and there do
not appear to be rules for producing new emblematic gestures. Emblematic
gestures are rarely systematically combined into phrases and sentences.
Despite this, however, emblems are incorporated into signing and form the
basis of many Auslan signs. Examples would include GOOD and
CONGRATULATE (from the ‘thumbs up’ gesture), and PERFECT (from the
‘okay’ gesture) (Figure 1.9).
FIGURE 1.9.
Pointing falls on the continuum between gesticulation and mime at one end
and emblems at the other. This is because pointing has forms that are
conventionalised within a particular culture (McNeill, 2000). In English-speaking cultures, the usual form for pointing involves the use of an index
finger extended from a fist, but in some Aboriginal cultures, this form exists
alongside other types of pointing using different handshapes (Wilkins, 2003).
However, within Western culture, two fingers or a full flat hand would still
be understood as pointing, showing that this convention is not as standardised
in our culture as the use of some emblems (e.g., the ‘thumbs up’ sign).
Pointing is also midway along Kendon’s continuum because in some
contexts (e.g., pointing while saying ‘no, I want this book, not that one’), the
use of pointing with spoken language may appear obligatory, but pointing is
also fully comprehensible without speech. In fact, the use of pointing by non-signers shares many characteristics with pointing in Auslan, and is the source of the pointing signs that act as pronouns and determiners in the language
(see Chapters 6 and 7).
1.6. Summary.
As all of the points above have demonstrated, research in linguistics over the
last four decades has shown that signed languages are ‘real’ languages,
having many of the same characteristics as spoken languages. Like spoken
languages, signed languages fulfil all the criteria set out in the definition of language provided in §1.2 above. They are natural languages that were
not invented by any single individual. They are shared by the members of a
community and passed down from one generation of users to the next. Signed
languages do not form a universal language used by deaf people all over the
world, nor are they identical to the types of gesture and mime used by
hearing people. They have an expressive capacity similar to that of spoken languages
and are organised around similar grammatical rules. Signed languages have
rules for creating new vocabulary and may change across time, and they are
learned by children and appear to be processed by the brain in similar ways
to spoken languages.