I’m reading a non-fiction advice book written by multiple authors, and many sentences include first-person plural words like “we” or “us.”

At times, I identify with what the authors are saying and feel like I’m part of that “us.” At other times, it feels like just their opinion, and the “we” doing the talking is the authors and no one else. At still other times, I find myself questioning whether I’m meant to be included in the “we” at all. I also find myself putting myself in others’ shoes – the friend who recommended the book to me, for example – and wondering whether they would feel part of the “us” community the authors suggest.

This could all be solved with inclusive and exclusive we. Many languages around the world have two distinct words for we. Inclusive we refers to the speaker (me) and the listener (you), and possibly some other third party (them). Exclusive we refers to the speaker (me) and some other third party (them) but definitely not the listener (you).

In a semi-famous linguists’ urban legend, a missionary preaching in a language with a clusivity distinction said “We (exclusive) will be saved by such and such deity.” The listeners, understandably, did not seem excited to convert.

Although clusivity can help clarify a situation (or make for a laughable faux pas), I also wonder if the vagueness of English “we” is a benefit. The reader of this advice book is free to choose whether or not they self-identify with the authors. If the we is interpreted as exclusive, the reader can dismiss the authors’ advice as “just their opinion.” If the we is interpreted as inclusive, the reader can feel validated and feel like they are part of a community.

Have you ever been in a situation where clusivity would help (or make things worse)?

Previously, whenever someone asked me what a wug is, I let Jean Berko Gleason herself answer.

But now I have a nifty new Wacom tablet and a free trial of Camtasia so I decided to try my hand at making my own explanation:

A long time ago, I made a little script that converts whatever you type into animals. You can try it here, download and play with the code here, and learn more about how it was done below.
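One way such a type-to-animals converter could work is a simple character lookup. This is purely a sketch – the letter-to-animal mapping below is invented for illustration, not taken from the original script:

```python
# Minimal sketch of a "type letters, get animals" converter.
# The mapping is invented for this example; the original script's
# actual mapping isn't shown in this excerpt.
ANIMALS = {
    "a": "ant", "b": "bat", "c": "cat", "d": "dog",
    "e": "eel", "f": "fox", "g": "gnu", "h": "hen",
}

def animalize(text):
    """Replace each mapped letter with an animal name; keep other characters."""
    return " ".join(ANIMALS.get(ch, ch) for ch in text.lower())

print(animalize("bad"))  # bat ant dog
```

Any dictionary-style mapping works here; the fun is in choosing what each keystroke turns into.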

I’m helping score the Analytical Writing Placement Exams for incoming UC freshmen this week. For those of you not familiar with the exam, 17- and 18-year-olds who have been admitted to a UC wake up at some ungodly early hour on a Saturday morning, sit down, read a passage, and write an essay. It’s my job to look for things like sentence variety, organization and structure, arguments, analysis, and a general understanding of the prompt. Of course, multiple grammatical errors, poor variety in vocabulary, and numerous misspellings can hurt, but I can forgive one or two misspelled words. After all, the students don’t have access to spellcheck or Wikipedia, and no one writes perfectly without a chance to edit, especially on a Saturday morning when you’re 17.

The prompt this year has to do with socializing with strangers. One “error” I’ve seen in many essays (of a variety of skill levels, including those scoring “clearly competent”) is the use of conversate instead of have a conversation or converse. But is this really an error?


08 May 2015

Apologies for not posting lately. April was a hectic month, and May isn’t shaping up to be much better. My goal is to post weekly, maybe starting in July. In the meantime, here’s some wugart I made.

TL;DR: tweet linguistics and cognitive science Wikipedia stubs to @wugology

Why do I want you to tweet me Wikipedia stubs?

A while ago, I scraped the LINGUIST List job pages and made a set of graphs for the Linguistics Club here at UCSB, to give the undergrads an idea of where the jobs are in linguistics. It turns out Language Log did something similar, focusing just on academic jobs and comparing the number of those jobs to the number of fresh PhDs in linguistics.
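The core of a scrape-and-tally like that is parsing the listing page and counting postings per category. Here is a minimal sketch using only the standard library; the HTML structure below is hypothetical (LINGUIST List’s real markup will differ), so this just illustrates the counting step:

```python
from collections import Counter
from html.parser import HTMLParser

# Hypothetical job-listing markup; each posting's specialty sits in a
# <span class="specialty"> element. Real pages would need real selectors.
PAGE = """
<ul>
  <li>Asst. Professor <span class="specialty">Syntax</span></li>
  <li>Lecturer <span class="specialty">Phonology</span></li>
  <li>Postdoc <span class="specialty">Syntax</span></li>
</ul>
"""

class SpecialtyCounter(HTMLParser):
    """Tally the text inside <span class="specialty"> elements."""
    def __init__(self):
        super().__init__()
        self.in_specialty = False
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "specialty") in attrs:
            self.in_specialty = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_specialty = False

    def handle_data(self, data):
        if self.in_specialty:
            self.counts[data.strip()] += 1

parser = SpecialtyCounter()
parser.feed(PAGE)
print(parser.counts.most_common())  # [('Syntax', 2), ('Phonology', 1)]
```

The resulting counts are exactly what you’d feed into a bar chart of where the jobs are.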


As part of the Developing Data Products class at Coursera, we’ve been encouraged to share our R Shiny apps on twitter using the #myDataProduct hashtag! I tweeted mine and blogged about it already. I’ve also blogged about word clouds in R. And lo and behold, someone did both! @dscorzoni combined Shiny and word clouds into a nifty little app that takes a URL and generates a word cloud from it! How cool!!

I’m taking a course called Developing Data Products at Coursera as part of the Data Science Specialization. We just learned how to make interactive graphs using Shiny, and I’m kind of obsessed. I made one using data about how long it really takes PhD students to get their degrees. You can play with it here!

My friend recently asked me how I make word clouds for presentations. Wordle is definitely a good choice. WordPress automatically makes word clouds out of my tags in the sidebar. But sometimes you can’t or don’t want to upload your data to places like WordPress or Wordle and you just want to use R (because you use R for everything else, so why not? Or is that just me?).

In a typical word cloud, word frequency determines the size of each word. As of this writing, the word cloud in my sidebar (over there) has “linguistics” and “programming” as clearly the largest words. Tags like “video games,” “language,” and “education” are also pretty big. And there are really small words like “Navajo” and “handwriting.” This reflects the frequency of each tag: bigger tags are more frequent, so I write about linguistics a lot but not so much about Navajo in particular.
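That frequency-to-size mapping can be sketched in a few lines. This is an illustration only – the tag counts and point-size range below are made up, and packages like R’s wordcloud do the equivalent scaling for you internally:

```python
from collections import Counter

# Made-up tag counts standing in for real blog tags.
tags = (["linguistics"] * 12 + ["programming"] * 9 +
        ["language"] * 5 + ["Navajo"] * 1)
freqs = Counter(tags)

def font_size(count, counts, lo=10, hi=48):
    """Linearly map a word's frequency onto a font-size range (in points)."""
    cmin, cmax = min(counts.values()), max(counts.values())
    return lo + (hi - lo) * (count - cmin) / (cmax - cmin)

for word, n in freqs.most_common():
    print(word, round(font_size(n, freqs)))
```

With these numbers, “linguistics” gets the maximum size and “Navajo” the minimum, which is exactly the pattern in the sidebar cloud.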
