Tagged: evolution

  • The Diacritics, 10:11 am on December 19, 2011
    Tags: english teachers, evolution

    Speaking with precision 

    (posted by John)

    My first semester of law school is drawing to a close, so I thought I would write about something I heard on my very first day. I’ve been mulling it over since then, partially because at first blush it runs so against my beliefs about prescriptivism and the ‘rightness’ of one person’s language over another’s. Professor John Langbein finished his riveting orientation talk on the history of law schools in America with a lament about the debasement of the English language my generation is committing. My immediate reaction, as you might guess, was a bit of haughty “This old fogey just doesn’t get it. Prescriptivism is dumb!”

    But on at least some level, he was right. Professor Langbein’s point was not that language shouldn’t change because change is bad. His point was that it’s easy to lose the aspects of language that are most valuable—especially to someone trying to become a lawyer. To me his most potent example was the loss of precision in language, which he blamed on the sheer number of outlets we now have for spewing our thoughts at others. Cell phone calls, texts, Facebook, Twitter—you catch the drift, I’m sure. It seems every major newspaper has a bi-monthly requirement for an editorial about the over-share phenomenon of Facebook statuses and Twitter updates.

    Langbein wasn’t quite talking about this, though. Think about a recent conversation you’ve had in which you related the contents of an interaction with another person. Did it run something along the lines of “I was like . . . Then he was like . . . Then I was just like whatever and left”? It may not have, but if you do some good ol’ eavesdropping on the street you’re sure to hear something like it. (Or if you’re lucky you might get “And I was all . . . Then she was all . . . Then I was all . . . .”) This is one of the things (← there’s another one of them) that dismayed Professor Langbein. “Is that really what you were like?” he asked us. He gave other examples, too. Overusing “thing” was one of them. Another was prefacing a point we haven’t fully thought out and can’t very well express with “You know, uh, . . .” and then proceeding on our muddled way. Another was compensating for a poorly thought-out sentence by ending it with an “. . . or whatever.”

    We can all get our point across using imprecise language, and the linguist in me recoils at the thought of saying it’s actually ‘wrong’ to do so. But you can be sure that being imprecise is one of the quickest routes to becoming an inept law student (not to mention a bad lawyer).

    So I’ll cede the point: it is worthwhile to attempt to be precise in language. If we don’t lean on vague fillers like “or whatever,” and if we avoid saying “thing” whenever the right word doesn’t immediately come to mind, we force ourselves to organize our thoughts more clearly. Using precise language makes us think more precisely. I tried spending a day saying precisely what I meant every time I spoke. It was exceedingly difficult, but it did seem to help me organize my thinking.

    Based on our knowledge of how language allows us to think complex thoughts in the first place, it makes sense that being more precise in our speech would make us more precise in our thinking. I wrote a post a while back looking at some of Liz Spelke’s experiments that suggest language lets otherwise distinct, insulated modules of intelligence interact, thereby making us ‘smart’ compared to other species. One experiment I didn’t discuss there shows that language allows us to grasp the concept of “sets of individuals.” Babies and monkeys can distinguish “individuals” and they can distinguish “sets,” and when a set has fewer than four items, they recognize that adding or subtracting an individual changes its size. But when the set is larger than four, they cannot combine the representations of ‘set’ and ‘individual’ to understand that it is a “set of individuals,” such that adding or subtracting one changes the quantity. Only once we have language is this possible.

    There are also sad but interesting cases of so-called ‘feral children’ who have been deprived of exposure to language from a very young age. These people never fully learn a language. They are also unable to perform tasks indicative of ‘higher’ human intelligence—for example, distinguishing which of two massed quantities is larger. According to still more research by Spelke and others, children without language and other animals like monkeys can distinguish between larger and smaller quantities at a ratio of about 2:1. If the quantities get much closer in number, it becomes difficult for them to guess correctly. Humans with language can do this at a considerably better rate.

    Finally, the emergence of language, some have argued, is associated with a cultural explosion of sorts: more complex tools, recursive patterns on bits of pottery, even materials that look like they could have been used to go fishing. The idea is that language allowed us to do the ‘higher thought’ necessary to develop culture.

    All of this evidence suggests that we are able to think complex, highly structured thoughts in large part because we have language. It also suggests I should take Professor Langbein’s advice: you know, try not to be like, “Let’s speak more clearly or whatever.”

     
  • The Diacritics, 6:00 am on November 17, 2011
    Tags: email, evolution, grammar b, written language

    The effects of txt 

    (Posted by Sandeep)

    If you’ve ever transcribed a free-form conversation, you have probably been struck by how little of a spoken exchange is made up of true grammatical sentences. Listen to your conversations—we hardly ever talk “properly.” We interrupt each other, we lose our train of thought, and we misconjugate verbs and get flustered.

    We’re not all careful speakers at all times: redundancies, mistakes and misinterpretations are as central to human language as descriptiveness and precision are.

    Despite this, our educational system—in fact, all of literate society in every language—demands that we write in grammatical sentences. We can’t write our academic essays in phrases and incomplete thoughts. Our literate culture requires completeness and grammaticality. Deviations from this sentence model are dismissed, at best, as art projects or, at worst, as serious misunderstandings of grammar.

    Not everyone believes writing should be this way. Thirty years ago, a composition theorist named Winston Weathers proposed “Grammar B,” an alternate style providing, in his words, “options that do not yet exist but which would be beneficial if they did.” His Grammar B sought to convey information from author to reader in the same way it travels from speaker to listener. He promoted a written representation of human thought that mimicked the mechanisms of spoken language—with interruptions, redundancies and visual elements (in lieu of cues like intonation).

    Winston Weathers.

    It was a radical idea with several merits. In fact, for a writing project three years ago, I rewrote a sociology essay into Grammar B. The result was easier to read and understand than the “Grammar A” version. It was also more engaging and conversational.

    But it’s not a coincidence that Weathers’ book is out of print. Writing, especially academic writing, is driven by a cycle that rewards Grammar A and produces it, too. I would never have actually submitted my Grammar B essay to my sociology professor and expected a positive response.

    So if we write in Grammar A and speak and think in Grammar B, are we being cognitively torn apart? Are we being required to think in two different ways? To use language incongruously and inconsistently?

    Consider, at least, that spoken language dwarfs writing in our species’ timeline. We started speaking at least 200,000 years ago, around when Homo sapiens emerged. Written language, on the other hand, appeared no earlier than 10,000 years ago, and it wasn’t until about 200 years ago that mass literacy became common.

    Significant swaths of today’s world remain illiterate. All societies in the world are still based fundamentally on spoken language. In fact, all literate societies are both oral and written—and the conventional wisdom until recently was that a society can be completely oral, but it cannot be completely written.

    World rates of literacy.

    If our spoken language is different from our written language, what does it mean that the literate establishment requires such rigidity in writing? It’s obvious that I’m writing this post in Grammar A. I write all of my papers in Grammar A, and you probably do too. That’s considered normal. But when I speak in Grammar A, you think I am working hard to be a careful speaker: I am being formal, or I am delivering a speech.

    So we recognize the merits of Grammars A and B in different situations. But I’m not fool enough to think that academic writing will ever be made up of Grammar B works. It’s a fun idea, but it’s not sensible for any mainstream academic or student to discard the established rules of grammar, even if Grammar B is clearer.

    ——

    I once wondered if the dichotomy between written and oral traditions would continue to grow until they had little to no relationship to one another: whether Grammar A’s rate of change would be so much slower than Grammar B’s that they eventually split.

    In my family’s first language, Kannada, a beautiful literary tradition spanning 15 centuries continues to flourish. But today’s formalized Kannada grammar and vocabulary have very little obvious relation to the spoken form—so much so that a Kannada user like me, familiar only with speaking the language, can barely understand formal text.

    This phenomenon is called diglossia, and I wonder if English is headed toward it. To be sure, all literary languages have some spoken/written diglossia. When we have the luxury to be careful (like in writing), we are generally more grammatical. And written language usually changes more slowly than spoken language because of various forces—compare English spellings to pronunciations, for example.

    But forms of communication like short and ungrammatical text messages, or even longer, conversational emails, have thrown us a linguistic curveball.

    For the first time in our species’ history, we are constantly and continuously using written communication for real-time conversations. We IM, we text and we e-mail. Just 20 years ago, the only written communication reliably employed by most people was letter writing. Now, there are entire online communities whose primary, if not only, form of communication is through written language.

    What does this mean for the future of human communication? Will diglossia be thwarted? Or will there be an even greater divide between spoken (including instant, written messages) and formalized written English?

    Spoken language uses subtle cues like intonation, pausing and volume to deliver meaning. Written language lends itself to longer reflection and more careful word and phrasing selection. I’m not constructing the two in opposition to each other, although it is obvious which is more fundamental to our species.

    We have used spoken and written language mostly for different purposes, so they may have developed divergent characteristics for that reason. But as we communicate more and more through text, our use and understanding of language will change fundamentally—even if we never actually write our essays in Grammar B.

    (A version of this post appeared in The (Duke) Chronicle on September 23, 2010.)

     
  • The Diacritics, 7:07 pm on September 28, 2011
    Tags: civil war, evolution

    These United States 

    (Posted by John)

    Like a good law student, I was perusing my Constitutional Law book today. Along the way, I found a sort of linguistic diamond in the rough:

    “Prior to the Civil War, ‘the United States’ was treated as a plural noun. In Dred Scott, for example, the Court referred to a federal statute passed during the War of 1812 that referred to ‘the war in which the United States are engaged.’ After the Civil War, by contrast, ‘the United States’ became a singular noun.” Stone, Seidman, Sunstein, Tushnet, and Karlan, Constitutional Law, 6th ed., Aspen Publishers, 2009, p. 451.

    When I read this, I was immediately reminded of Sandeep’s post on the linguistic legacy of 9/11, where he discusses the effects wars have had on our language.  The change from “are” to “is” that the Civil War brought about is minuscule in size, but ginormous in meaning. It reflects a profound reinterpretation of the relationship between one state and another, as well as between the states and the federal government. The shift marks the real beginning of the public’s acknowledgment that the federal government would expand its control over the states. Personally, I think it’s super cool that this tiny linguistic indicator is as important as any analysis of federal statutes or court opinions in figuring out when this trend began.

    Oh, and don’t forget to vote for The Diacritics here for the Best Grammar Blog of 2011!!

     
  • The Diacritics, 10:07 am on August 23, 2011
    Tags: alice in wonderland, evolution, humpty dumpty, just a theory, legal analysis, speech community

    Humpty Dumpty and the meaning of words 

    posted by Sandeep

    “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”

    “The question is,” said Alice, “whether you can make words mean so many different things.”

    Through the Looking Glass, Lewis Carroll

    A lot of legal analysis hinges on the technical meanings of words. These definitions can be identified by statute (for example, if a government explicitly defines a criminal law term in its penal code) and by common law (what have previous courts decided that same term means?).

    If neither statutory law nor common law has defined a legal concept, lawyers and courts can also look to a dictionary definition, although this is rare. Nearly three centuries of accumulated law in the United States, building on even more centuries of law in Great Britain, have meant that almost all broad legal concepts have been defined and analyzed. The legal profession even has its own dictionaries — I myself just bought a fresh new copy of Black’s Law Dictionary.

    Today in my Criminal Law class, we discussed the meanings of words. Much of the discussion focused on the dichotomy between voluntary and involuntary acts. In common parlance, “voluntary” and “involuntary” have broad meanings: “voluntary” indicates some sort of will or want to achieve an end result. “Involuntary” indicates the absence of that will.

    But in the context of the law, the definitions are much narrower. Understanding these narrow senses is critical to forming an adequate defense to a criminal act. Let’s say John accidentally hit a pedestrian with his car. In normal conversation, we might describe John’s act as “involuntary” because he certainly didn’t mean to hit the pedestrian.

    But in the eyes of the law, a criminal act can’t be considered “involuntary” just because it is unintentional or the actor didn’t foresee potential consequences. Understanding what “involuntary” means is important because the law cannot punish “involuntary” acts.

    “No act is punishable if it is done involuntarily … The term “involuntary act” is, however, capable of wider connotations; and to prevent confusion … in the criminal law an act is not … an involuntary act simply because the doer does not remember it … nor … simply because the doer could not control his impulse to do it.”

    Bratty v. Attorney-General, 1963 A.C. 386, 409-410 (H.L. 1961)

    So wait. John could face years in prison for something we understand as “involuntary”? [Fear not, John--you would probably just get off with involuntary manslaughter, not murder!] Because the law is concerned with what our conscious mind causes us to do, an “involuntary” act cannot encompass things done with a conscious mind, even by accident or under duress, so its definition is narrowed to acts conducted while unconscious, asleep, hypnotized, or having a seizure. This definition is confusing enough, but to add to the confusion, sometimes the criminal law switches between the colloquial use of “involuntary” and the strict legal definition!

    Ugh! So how does a simple word like “involuntary” have so many conflicting meanings?

    Technical jargon sometimes conflicts with popular understandings of what a word means. When a specialized technical register exists (say, in law or in science), it often develops independently of colloquial usage, mainly because the technical and colloquial registers would never interact with each other. So we might imagine a legal scholar ages and ages ago, grappling with the idea of unconscious criminal acts, coming up with two types of acts: involuntary and voluntary, based on the popular understanding of those terms. Over time, other legal scholars might have found limitations in the popular definitions and sought to narrow down their meanings. When the two worlds collide (in John’s criminal trial proceedings, for example), we get confused at the strange, specific usage of apparently familiar terms.

    Another popular example of this discrepancy between popular and technical jargon is the term “theory.” In scientific research, a “theory” is a model used to explain a natural phenomenon. A theory must stand up to rigorous testing and extensive peer-reviewed research before it can be called one.

    In contrast, our popular understanding of the word “theory” is closer to the meaning of “hypothesis”–an unproved hunch about how a natural phenomenon might work. Disparaging the theory of evolution by natural selection, for example, as “just a theory” subscribes to this colloquial sense, even though evolution by natural selection, like other scientific theories (e.g., the theory of gravitation, germ theory), is a nearly-universally-accepted model of how a natural phenomenon works.

    But why the discrepancy? Why can’t we just all agree that a word means what it means?

    Complex social and individual forces determine the particular meaning ascribed to a word. As I have described above, the same word might mean different things in different contexts. The same word might also carry different social valence in various groups (such as the N-word among some African-Americans versus other racial groups, or vulgar profanity among some social classes versus others).

    Whether a word can have an inherent, inalienable meaning is hotly debated among linguists. I am skeptical that a word can ever have an inherent meaning. Some language prescriptivists (see John’s great post about Americanisms below), especially dictionary authors, believe otherwise.

    Dictionaries record definitions that are meant to document common usages, to be used in a particular speech community at a particular time. An English dictionary from the year 800 (if it existed) would be useless to us today [whether that language could be considered English at all is another topic altogether]. Several entries in a British dictionary would be useless in America today, and vice versa. Do you know what “pukka” means? It’s a word in Indian English: ostensibly a variety of the same language we know, but loaded with terms whose meanings we will never be able to deduce without context or explication.

    Indeed, context is crucial for deducing what particular meaning you are referring to: not only technical contexts (such as law) but also the speech community, register, geographic location, social class, ethnicity, etc. When I say “table,” am I referring to the thing with a flat surface and four (or three? or six?) supporting legs? Or am I telling you to “table” a discussion for our next meeting? Or am I studying the water “table”? Going to “table” for my local non-profit? Maybe “table” is New Jersey slang for the shape of The Situation’s hair.

    Language changes. It changes across contemporaneous speech communities (so my New Jersey terminology might be slightly different from John’s Virginia vocab), but it also changes over time. For example, many of the words we use today are derived from French terms (whose origins themselves are in Latin, and so on and so on) with narrower, broader, or completely different senses than their present English definitions.

    Words cannot have inherent meanings when their very existence is so tenuous and malleable.

    Lewis Carroll, in creating the character of Humpty Dumpty (see above), suggested the doctrine of “stipulative definition,” meaning that we can make words mean whatever we want, as long as we explain ourselves beforehand. Scholar Michael Hancher (in the linked article) disagrees, saying that a word’s meaning must be constructed by the commons — we all must agree on what a word means, and by doing so, we give it meaning. This becomes a complex, thorny issue when we consider how many different “commons” — that is, speech communities — exist in our world.

    So, Humpty Dumpty, a word can’t be just what you make it to mean. Sorry. We all have to come to a consensus in each of the languages we speak, whether in a colloquial context (John hit the pedestrian involuntarily) or in a technical context (John did not commit an involuntary act) or in some other context.

    Navigating this wonderful, awful complexity, I think, is one of the privileges, and prices, of participating in many different speech communities at once. The alternative, of course, is living in isolation, like Humpty Dumpty (and we know how that turned out…).

     