Tagged: linguistics

  • The Diacritics

    The Diacritics 1:18 pm on December 13, 2011 Permalink | Reply
    Tags: bad lip reading, dubs, fake english, funny, history of english, linguistics, mcgurk effect, videos

    Lots of language videos 

    Stephen Fry rails against pedantic prescriptivists: “Sod them to Hades!”

    Bad Lip Reading, whose hilarious dubs bring to mind the McGurk Effect, reimagines the words of disgraced Republican candidate Herman Cain: “Mexican people don’t eat sugar, especially when it’s a mixture of lice and tiger DNA!”

    The Open University describes the history of English in a charming cartoon video.

    Finally, a short film captures the cadences and sounds of normal spoken English while remaining utterly nonsensical. It’s apparently intended to show how American English sounds to non-English speakers. (Family Guy returns the favor, making fun of how British English sounds to Americans.)

     
  • The Diacritics

    The Diacritics 10:01 pm on October 12, 2011 Permalink | Reply
    Tags: debate, election, john, linguistics, necessary and proper

    Necessary and Proper: the Supreme Court aren’t linguists 

    (Posted by John)

    As I watch the beginnings of the Republican presidential primary season unfold, there’s one mantra I’ve heard espoused time and again: our government is too big. With debates about spending and entitlements (not to mention the health care law) as fierce as they’ve been in my lifetime, the question of the appropriate role of government appears to be coming to a head in a serious way.

    One clause of the Constitution, in particular, has had a massive influence on this debate: the Necessary and Proper Clause, which gives Congress the authority “to make all laws which shall be necessary and proper for carrying into execution [its enumerated powers].” This is basically a mandate for Congress to do the things it needs to do in order to carry out its explicitly stated powers (e.g., levying taxes). The real debate, though, is over how wide a mandate this clause actually grants. And as it turns out, answering that question depends greatly on…you guessed it, linguistics!

    What’s the right interpretation?
    So what does “necessary and proper” actually mean? For most people, and particularly those keen on limiting the scope of government, it means that any act Congress wishes to justify under the Necessary and Proper Clause must be both necessary and proper. The “and” requires that both conditions be satisfied in order for an act to be authorized.

    This makes some sense. If I say “John and Sandeep have written posts for The Diacritics blog,” I mean that both John and Sandeep have written posts, not just one or the other of them. This interpretation also puts severe limits on what the government can do: anything that is not necessary to the execution of some explicitly stated Constitutional power is prohibited. Lots of people believe this to be the correct interpretation. And for those who do, the federal government has a long history of greatly overstepping its legitimate authority.

    But let’s look a little closer at what this interpretation of the Necessary and Proper Clause entails. What, for example, happens when there is more than one possible method by which Congress could undertake to levy taxes? If there are multiple options, any of which would suffice, precisely none of them is necessary. Thus, on the “strict and” interpretation of the Necessary and Proper Clause, whenever there are multiple courses of action, Congress may not choose any of them. In my opinion, this is not a desirable outcome. It’s not that Congress is never allowed to pass a law to carry out an explicitly stated power; it’s that Congress may only do so when there is one option and one option alone. If this reading is to be a tenable one, some kind of work still needs to be done.

    There’s another legitimate, but lesser-known, interpretation of “[the authority to make] all laws necessary and proper” that doesn’t suffer from the “strict and” defect. To get at it, consider the following: God loves all creatures great and small. Obviously this does not mean “God loves all creatures that are both great and small.” This is a nonsense sentence. It is actually parsed something like: “God loves all creatures great and [all creatures] small,” or “God loves all great creatures and all small creatures.”

    Why, then, is it not legitimate to read “all laws necessary and proper” to mean “all laws necessary and all laws proper”? This reading is at least plausible, and it doesn’t suffer from the “strict and” problem of limiting action whenever there’s a choice. This is also the reading that proponents of larger government (perhaps only implicitly) might adopt.
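
    To make the two readings concrete, here’s a toy sketch in Python. The example laws and both predicates are my own inventions, not anything from actual doctrine; the point is just that the “strict and” reading is an intersection of two sets, while the distributive reading is their union.

```python
# Toy model of the two readings of "necessary and proper".
# The example laws and both predicates are invented for illustration.

laws = ["charter a bank", "issue paper money", "create a federal mint"]

def necessary(law):
    # Strict sense: a law is "necessary" only if it is the sole means
    # of carrying out the power. With several workable options on the
    # table, no single one of them qualifies.
    return len(laws) == 1

def proper(law):
    # Stand-in: assume every option is "proper" (appropriate in kind).
    return True

# Reading 1 -- "strict and": a law must be BOTH necessary AND proper
# (the intersection of the two sets).
strict = [law for law in laws if necessary(law) and proper(law)]

# Reading 2 -- distributive: "all laws necessary and all laws proper"
# (the union of the two sets).
distributive = [law for law in laws if necessary(law) or proper(law)]

print(strict)        # [] -- with multiple options, nothing is "necessary"
print(distributive)  # all three options pass
```

    On the intersection reading, the mere existence of alternatives empties the set; on the union reading, it doesn’t.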

    What the Court has said
    The Supreme Court has, over the course of our nation’s history, ranged across the spectrum in its reading of the clause. Unfortunately, its justices generally haven’t been the greatest of linguists, although John Marshall’s famous McCulloch v. Maryland opinion does recognize the “strict and” problem. His solution is, essentially, to read the word “necessary” out of the Necessary and Proper Clause. He adopts a purposive understanding: for Marshall, if the underlying goal of an act was to carry out some explicitly stated power, you were probably good to go. This meant that Congress couldn’t enact laws under the pretext of, say, regulating interstate commerce, but with the actual purpose of, say, prohibiting intrastate child labor. While this is not a linguistically plausible reading, it is perhaps a decent one from a policy standpoint: it avoids the “strict and” limitation on government but still tries to set out some limit on federal power.

    The Court has treated this reading variously since then. Up until the New Deal era, for example, the Court was serious about keeping the federal government out of purely intrastate commerce. But as we know, for most of the 20th century the Necessary and Proper Clause was read as an essentially unlimited federal mandate. The Court ruled that the underlying purpose of a statute no longer mattered, and that any action that, considered in the aggregate, had an effect on interstate commerce was within its scope. Whether you use a Kleenex when you sneeze, taken across the entirety of the population, without doubt affects interstate commerce, and thus could have been regulated.

    Only recently has the Court begun to walk this unlimited mandate back. We’ll see how its reading evolves over the course of the next decade, as the debate about government’s size rages on.

    In the end, whichever reading you choose is fine by me. But it will be interesting to see what those in Washington, presidential candidates and Supreme Court alike, have to say on the topic.

     
  • The Diacritics

    The Diacritics 2:49 pm on September 13, 2011 Permalink | Reply
    Tags: comparative linguistics, Sidney Morgenbesser, joke, linguistics, philosophy of language

    Linguistics joke 

    Classmate JJ Snidow told me this joke, though it was supposedly an actual exchange between the Oxford philosopher of language J.L. Austin and the Columbia philosopher Sidney Morgenbesser (who was, apparently, the man).

    “In English,” Professor Austin said, “a double negative forms a positive. However, in some languages, such as Russian, a double negative remains a negative. But there isn’t a single language, not one, in which a double positive can express a negative.”

    A voice from the back of the room piped up, “Yeah, right.”

    Isn’t linguistics funny?
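
    For what it’s worth, Boolean logic sides with Austin on the double negative: two negations cancel. Here’s a throwaway Python sketch of that much (sarcasm, alas, has no truth table):

```python
# Double negation in classical (Boolean) logic: two "not"s cancel out.
statement = True
assert (not (not statement)) == statement  # a double negative is a positive

# Morgenbesser's "Yeah, right" works precisely because natural language
# isn't Boolean: intonation can flip two affirmatives into a negative.
print(not (not statement))  # True
```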

     
  • The Diacritics

    The Diacritics 2:12 pm on September 6, 2011 Permalink | Reply
    Tags: linguistics

    Why are humans smart? Language and LEGOs 

    posted by John

    In her absolutely awesome paper “What Makes Us Smart? Core knowledge and natural language,” Elizabeth Spelke writes:

    When we compare the cognitive achievements of humans to those of nonhuman primates we see striking differences. All animals have to find and recognize food…but only humans develop the art and science of cooking. Many juvenile animals engage in play fighting, but only humans organize their competitive play into structured games with elaborate rules. All animals need to understand something about the behavior of the material world to avoid falling off cliffs…but only humans systematize their knowledge as science and extend it to…entities that are too far away or too small to perceive or act upon. (Elizabeth Spelke, “What Makes Us Smart? Core knowledge and natural language.” In Language in Mind. Gentner and Goldin-Meadow (eds.). 2003.)

    So, Spelke asks, “What is it about human cognition that makes us capable of these feats?”

    The answer to this question is a complicated one, even if you already know I’m going to say it is language. Why is it complicated? Because it’s not just language itself, but the ability, associated with language, to combine otherwise separate “core knowledge” systems. Whereas lots of animals share our basic cognitive systems for spatial relations, object mechanics, number, geometry, and navigation, humans (once they develop language) are uniquely able to combine these systems and make them work in conjunction.

    How do we know this? Basically, it has been demonstrated that both human infants and many other animals have extremely similar core knowledge systems. Babies and monkeys, for example, have essentially the same ability to understand how objects move and interact, whether one group of objects is larger than another, and how basic geometry allows you to navigate a room along specific, novel paths.

    Each of these tasks represents a separate “core knowledge” system (you could also call them ‘modules’). Crucially, these modules in both babies and other animals are isolated, encapsulated, and unable to interface (representations from one are incomprehensible to another).

    Rats and babies—all that (cognitively) different?

    To understand in what way these modules are isolated, let’s look at just one example (simplified slightly for reasons of space): Say you put a rat in a rectangular room and show him that a bit of food is located in the northeastern corner. You then disorient the rat (cruel, I know), and set him loose. Immediately, with no trouble, he will go to the northeastern corner and find the food. The rat has the cognitive ability to search using some sort of ‘directional’ or “geocentric” sense.

    Similarly, if you then put a little chair in the room, show the rat that there is some food on the chair, disorient it, then set it free, it goes directly to the chair and finds the food. The rat can also do navigation by landmark.

    These are two separate systems of spatial relationships and navigation: navigation by direction and by landmark. Crucially, then, if you put a piece of food northeast of the chair, the rat will search at random somewhere near the chair. This is evidence that he cannot navigate using both “northeast” and “the chair.” Combining the two systems—each of which works fine on its own—leads to problems.

    Infants have the exact same problem: when directed to find something at a chair, it’s easy. When directed to find something in the northeastern part of a room, it’s fine. But northeast of the chair doesn’t work. Again, the separate modules are not able to interface effectively with each other.

    Adults, of course, have no trouble going northeast of the chair. They have an ability to combine and communicate between these two cognitive systems that infants and other animals do not. The emergence of these combinatorial abilities is directly associated with the development of language. Once you can talk, you can do things like this too. How intelligent of us!
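
    One loose way to picture this in code—my own invention, not anything from Spelke’s paper—is as two modules whose outputs are mutually opaque types, with language as the shared format that finally lets them compose:

```python
from dataclasses import dataclass

# Two "core knowledge" modules whose outputs are mutually opaque.
# All names and types here are invented purely for illustration.

@dataclass
class Heading:           # output of the geocentric (directional) module
    direction: str       # e.g. "northeast"

@dataclass
class Landmark:          # output of the landmark module
    name: str            # e.g. "chair"

def geocentric_search(h: Heading) -> str:
    return f"search {h.direction}"

def landmark_search(l: Landmark) -> str:
    return f"search at the {l.name}"

# Pre-language, no function consumes BOTH a Heading and a Landmark:
# "northeast of the chair" is simply unbuildable.

# A language-like connector: translate both outputs into one shared,
# combinable representation (here, plain strings) and compose freely.
def combine(h: Heading, l: Landmark) -> str:
    return f"search {h.direction} of the {l.name}"

print(geocentric_search(Heading("northeast")))    # works alone
print(landmark_search(Landmark("chair")))         # works alone
print(combine(Heading("northeast"), Landmark("chair")))
# -> "search northeast of the chair"
```

    The combine step is the whole trick: once both modules speak a common representation, “northeast of the chair” is just composition.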

    The LEGO Analogy

    There’s a really nice way to think about how this whole business might work: Consider each individual module as a LEGO block, but without the little raised dots on top. Each does its own thing pretty well—and maybe you can make a basic stack of them to do slightly more complex things. But once you try anything more than the most basic of interactions between modules (LEGO blocks), your structure collapses. So when you try to combine navigational capacities to go northeast of the chair, things get confusing.

    Language, then, is the little raised dots on top of the LEGO block (and, I guess, the little holes they fit into). Once you have those, everything changes. Structures unimaginably complex from the point of view of bump-less LEGO blocks now become possible. We go from a basic stack of unconnected blocks to things like a full-on LEGO arena.

    Now maybe we’re not that smart—not yet at least—but that’s the basic idea. The reason that humans are smart is precisely because we have language on our side. The language capacity, Spelke and others have suggested, allows the most basic building blocks of cognitive ability to communicate and interact. So, like LEGOs with connectors, we can now build structures of near infinite complexity (remember The girl the cake the baker the owner fired baked hit screamed) and combine the faculties that previously could only work alone.

    Other linguists, like Noam Chomsky and my former professor Cedric Boeckx, have taken this even further. They have theorized that it’s not language, per se, that allows for communication between modules, but rather some other relatively small, yet crucial, cognitive development. Part of the core reasoning behind this is evidence that advanced cognitive abilities, like language and culture (and also the sorts of actions discussed above), developed remarkably fast by evolutionary standards—the first evidence of language goes back only some 30,000 years! Because of the relative speed with which language evolved, it’s been supposed that the critical upgrade was actually only a tiny little change, albeit one with massive consequences.

    Well, what if that change was, very simply, the ability to take all of the separate human cognitive faculties and allow them to work together? What if the only change was the development of a cognitive ‘connector’? We would then have the ability to take discrete modules and concepts and place them in communication with each other; the ability to build more complex structures using the most basic of building blocks. This would not only explain how our separate core knowledge systems could start to be combined, but also how we came to put words together into syntactic structures.

    This theory has been influential in the linguistics world (though it’s not without its detractors). It makes some sense, too. Not only would the combination of northeast and chair be possible, we could also create structures made up of concepts based in the real world.  We could take concepts (eventually words) that previously existed as individual, non-interfacing ideas (animal, food, run), and put them together into complex thought patterns and, eventually, sentences (There is an animal that we could eat, so let’s run after it). What were previously non-connecting LEGO blocks can now be combined in majorly complex ways.
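
    As a cartoon of how tiny that change could be, here is a toy rendering in Python of the kind of bare combinator theorists have in mind (my own sketch, not anyone’s formal definition of Merge):

```python
# A toy rendering of a bare binary combinator ("Merge"): one tiny
# operation that glues any two objects into a new object, which can
# itself be merged again. That single addition yields unbounded depth.
def merge(a, b):
    return (a, b)

# Previously isolated, non-interfacing concepts...
animal, food, run = "animal", "food", "run"

# ...become arbitrarily deep structures once the connector exists.
thought = merge(merge(animal, food), run)
print(thought)  # (('animal', 'food'), 'run')
```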

    Once this ‘connector’ mechanism is sufficiently developed in human infants, they, like adults, can combine cognitive modules and, importantly, combine concepts into sentences.

    As far-fetched as this might sound, it’s actually not so different from the LEGO example. You had all the blocks before, and nothing changed but the addition of connectors. That’s the only difference between the technologies, and yet it has huge consequences.

    Our minds work in complex and fascinating ways, and of course there’s no way we can yet know for sure this idea is correct. But isn’t it exciting that there could be so simple and elegant an answer for why humans are smart? And you can’t deny that we are—we did, after all, invent the LEGO.

     
  • The Diacritics

    The Diacritics 7:56 pm on August 24, 2011 Permalink | Reply
    Tags: linguistics, processing, pun

    Awesome Sentences (Part II of II) 

    posted by John

    Before we get started I just want to remind everyone: Club sandwiches, not seals.

    My first Awesome Sentences post was about recursion and processing capacity. Our language faculty can create infinitely long sentences using things like embedding, but our brains can only understand so many nested sentences at once. This led to some cool and confusing sentences:

    • Bulldogs bulldogs bulldogs fight fight fight.
    • The girl the cake the baker the owner fired baked hit screamed.
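
    For fun, here’s a tiny Python generator for sentences of the first type—a sketch of my own, just to show that the grammar scales to any depth even though our processors give out around depth two:

```python
# Tiny generator for center-embedded sentences, invented for this post.
# The grammar happily produces any depth; human parsers choke past ~2.
def bulldogs(depth: int) -> str:
    return " ".join(["bulldogs"] * depth + ["fight"] * depth) + "."

for n in range(1, 4):
    print(bulldogs(n))
# bulldogs fight.
# bulldogs bulldogs fight fight.
# bulldogs bulldogs bulldogs fight fight fight.
```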

    But the capacity of our language processing system isn’t the only thing that leads to crazy sentences. In the public service announcement with which I began the post, we see an example of another very interesting processing effect. The sentence is reminding you that if you’re going to club something, make it a sandwich, not a seal. When you read it, however, you see ‘club’ used in a familiar sense—as in, a club sandwich—before you realize it’s being used (very punnily) as a verb. The pun comes about because you have to go back and reassess the meaning of what you read before, even though you thought you already knew. Let’s look at a few more.

    What if I said the following sentences to you:

    1. The horse raced past the barn fell.
    2. The old man the boat.
    3. The lady returned to her house cleaned the kitchen.

    Would you believe me if I told you that these are all grammatical English sentences? If you wouldn’t, you should, because they are. Each of them employs a trick similar to the one used in “Club sandwiches, not seals,” and they have even been given a special name by linguists: “garden path” sentences. How did they earn this name? By leading the listener down what he or she thinks is a certain ‘path’ the sentence will take, and then all of a sudden turning into something entirely different (ok, so the name doesn’t make absolute sense, but at least the part about being led down a path does).

    In the first sentence, “The horse raced past the barn fell,” we start off with what looks like the most basic syntactic structure of English. This would be the intransitive sentence, “The horse raced.” To it we add another common syntactic feature—the prepositional phrase “past the barn.” So far, our brain thinks it is looking at a run-of-the-mill intransitive sentence with an attached prepositional phrase. However, when we get to the final word, “fell,” we realize that what we initially thought was an intransitive sentence is actually something entirely different. Instead, we should have understood, “The horse that was raced past the barn [by the rider] fell.”

    The problem is that by the time we get to ‘fell,’ we have already processed the sentence as what we initially thought it was going to be. Thus our brain does not accept the last word as a coherent addition to the utterance.

    So what does this mean? It means that when we hear a sentence, our brain is immediately applying the most likely interpretation of the words and structure it is seeing and then making predictions about what is likely to come next. Simply put, we process on the fly, not as a whole. This is a sort of efficiency mechanism, designed to speed up processing and boost the overall utility of language. It backfires, however, when the “garden path” down which we are being led suddenly takes an unexpected turn, and the initial interpretation is shown to be incorrect. By the time we get to it, our brains have already ruled out “fell” as an appropriate final word for the sentence.
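
    If you like, you can caricature this in a few lines of Python—a crude sketch of my own, not a real parser—where a greedy first pass commits to the intransitive reading and only a backtracking second pass recovers the reduced relative:

```python
# Crude simulation of garden-path processing: commit greedily to the
# most frequent analysis, then backtrack when the input contradicts it.
# The "grammar" here is hard-coded purely for illustration.

SENTENCE = "the horse raced past the barn fell".split()

def greedy_first_pass(words):
    # Greedy commitment: treat "raced" as the main verb (the common,
    # intransitive reading), so the clause feels complete after "barn".
    analysis = "[S the horse [V raced] [PP past the barn]]"
    leftovers = words[words.index("barn") + 1:]
    return analysis, leftovers

def reanalysis():
    # Second pass: "raced past the barn" is a reduced relative clause
    # modifying "horse"; "fell" is the real main verb.
    return "[S the horse [RC (that was) raced past the barn] [V fell]]"

analysis, leftovers = greedy_first_pass(SENTENCE)
if leftovers:  # "fell" is left over -- the garden path
    print(f"first pass stuck on {leftovers!r}; backtracking...")
    print(reanalysis())
```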

    Thus, in the second sentence “The old man the boat,” we begin with a common noun phrase modified by an adjective—“the old man.” We immediately process it as such, and now we are looking for the most likely thing to come next: a verb. This means that when we get to “the boat,” which is decidedly not a verb, the sentence stops making sense.

    What we don’t realize on our first try is that “the old” (as in old people) is the subject, and that “man” (as in to operate) is the verb. If we had, we would know that “the boat” is simply a direct object. But because our brain expects to find a verb following the noun phrase “the old man,” we get confused. Only after this happens do we go back and reinterpret the sentence holistically: “Old people operate the boat.”

    The same thing occurs in the third sentence, “The lady returned to her house cleaned the kitchen.” By now, you know you should look for the trick, and you’ve recognized that the sentence means “The lady who was returned to her house cleaned the kitchen.”

    But for many people, the first time they read it, the sentence seems to be missing an “and” between “house” and “cleaned.” They interpret “The lady returned to her house” as a complete clause as they read. Because reduced relative clauses (The lady [who was returned to her house]) are less common than simple intransitive sentences, they parse “the lady returned,” not “the lady [who was returned].” Thus when they reach “cleaned the kitchen,” it appears that the main verb (“returned”) has already come and gone, and something goes awry.

    These are classic examples of “garden path” sentences. If you can come up with any novel ones, or if you have any other awesome sentences for us, leave a comment. And remember, the complex cool sentences can lead people to is like an obsession—so be careful!

     