
Sociolinguistics Summer School 3


I didn’t have time during it, but now that my holiday is almost over, I reckoned I should try and write down my thoughts on the Sociolinguistics Summer School in Glasgow from a few weeks ago. The first thing to say is that the week was an inspiring and intellectually satisfying time (I’m not sure why, but I always feel really pretentious using words like ‘intellectually’…). The guest speakers (Devyani Sharma, Jane Stuart-Smith, Daniel Johnson, Erez Levon and Lauren Hall-Lew) all offered insightful and thought-provoking seminar sessions which not only showcased their own research, but suggested a myriad of future directions for sociolinguistic theory and methodology. I don’t have space to go through everything the speakers covered throughout the week, but I’ll try to give at least a general outline of the kinds of things they were talking about.

Devyani Sharma kicked the week off on Monday morning by talking about her work on language and ethnicity. Drawing on recent research she’s been conducting at Queen Mary, Devyani outlined some of the ways in which ethnicity has been operationalised in the literature over the past 20 or so years, and the ways different authors view the influence of ethnicity on language variation and change (e.g. whether it’s psychological, identity-focused, and so on). The thing that excited me the most, though, was her attempt to show how variation unfolds over time (something that’s become increasingly important over the past five years or so). Rather than analysing a single variable (or even two or three variables) at the aggregate level (i.e. taking every token of a variable and charting how that variable patterns at a general level), Devyani showed how a range of variables shifted between Indian English, Southern Standard British English and Cockney English by using colour-coded lines tracking the rise and fall in the concentration of each variety’s variants. Essentially, what Devyani is doing is moving away from a static picture of variation towards something more dynamic, and it clearly demonstrated how variables act in concert with one another (or ‘clustering’) rather than in isolation.
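To make the idea a bit more concrete, here’s a loose sketch in R of what that kind of dynamic plot could look like. This is my own reconstruction, not Devyani’s actual method or data: the token sequence is invented, and the 30-token window size is just an assumption for illustration.

    library(ggplot2)

    set.seed(1)
    n <- 300  # a hypothetical sequence of coded tokens in interview order
    variety <- sample(c("Indian English", "SSBE", "Cockney"), n,
                      replace = TRUE, prob = c(0.5, 0.3, 0.2))
    d <- data.frame(token = 1:n, variety = variety)

    # rolling proportion of each variety's variants in a 30-token window
    win <- 30
    roll <- do.call(rbind, lapply(unique(d$variety), function(v) {
      hit  <- as.numeric(d$variety == v)
      prop <- stats::filter(hit, rep(1 / win, win), sides = 2)
      data.frame(token = d$token, variety = v, prop = as.numeric(prop))
    }))

    # one colour-coded line per variety, rising and falling across the interview
    ggplot(roll, aes(token, prop, colour = variety)) +
      geom_line(na.rm = TRUE) +
      labs(x = "token index", y = "proportion of tokens in window")

The point of plotting it this way is exactly the one above: instead of a single aggregate percentage per variable, you see the concentrations moving together (or apart) in real time across the speech event.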

On Tuesday, Jane Stuart-Smith talked about the Glasgow Media Project. This is an ongoing ESRC-funded project which aims to understand why Glaswegian (and Scottish English more generally) appears to have ‘English’ features such as TH-fronting ([tuf] instead of [tuθ]), R-vocalisation ([bʌd] instead of [bɪɹd]) and L-vocalisation ([pipo] instead of [pipəl]). For those uninitiated in the ways of the IPA, that’s tooth, bird and people respectively. Generally, Scottish English shouldn’t have these kinds of features, but it does (or at least Glaswegian does; other parts of Scotland are lagging behind somewhat). Traditional ways of modelling how a sound change spreads (such as the wave model and the gravity model) don’t work here, primarily because these models rely on people moving to a particular area and acquiring a feature (so that it spreads via face-to-face communication), or on some sort of ‘locale strength’, with larger cities exerting more influence on smaller ones (the gravity model is one explanation of why London is so linguistically and socially influential; there’s a rough sketch of the formula below). But in Glasgow these models fall down, because the people who are leading these changes (working-class adolescents) don’t move, and they don’t particularly recognise London (or other large English cities) as sites of influence.

Jane’s work offers an alternative explanation based on the influence of the media, in particular broadcast media, because it’s one thing that working-class Glaswegians (especially those who are the most advanced innovators) actually watch (and one important programme is EastEnders). But Jane’s argument is that it’s not simply passive viewing that causes someone to acquire a feature; rather, it’s active engagement with the programme (almost to the point of treating the programme and its characters as ‘real’). Moreover, if a speaker does acquire a feature a character uses on the programme (whether it’s TH-fronting or whatever), then that speaker has to integrate this feature into their already existing linguistic system. It’s not enough for me to watch EastEnders, hear an instance of TH-fronting, and then all of a sudden use that feature all the time. Instead, I have to engage actively with the show, and then the feature has to be integrated into my existing system. Jane’s work is really important primarily because it challenges existing sociolinguistic theories of how a sound change spreads, and it has important implications for how we understand the influence and effects of the media on language variation and change.
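For the curious, here’s the gravity model sketch I mentioned above: the influence of one centre on another scales with the product of their populations and falls off with the square of the distance between them, weighted by the donor’s share of the combined population. This is a rough rendering of the formulation usually attributed to Trudgill (1974), and all the population and distance figures are made up for illustration.

    # Influence of centre i on centre j: product of populations over distance
    # squared, weighted by i's share of the combined population, times a
    # linguistic similarity index s (here just left at 1)
    gravity_influence <- function(P_i, P_j, d_ij, s = 1) {
      s * (P_i * P_j) / d_ij^2 * (P_i / (P_i + P_j))
    }

    # Invented round numbers: a distant big city vs a nearby smaller one
    gravity_influence(P_i = 8e6, P_j = 6e5, d_ij = 550)  # e.g. a London-sized city, far away
    gravity_influence(P_i = 5e5, P_j = 6e5, d_ij = 70)   # e.g. an Edinburgh-sized city, nearby

The distance-squared term is precisely why the model predicts face-to-face, city-hopping diffusion, and precisely why it struggles with Glasgow, where the innovators aren’t in contact with the supposed donor cities at all.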

Daniel Johnson on Wednesday changed the pace a little by focusing on methodology, primarily the use of statistical analysis in sociolinguistics. He does amazing work with the stats program R. Now, I’ve never used R before (I’m an SPSS person myself), but after seeing Dan’s plenary, I’m more and more convinced that it’s what I should be using. The main thrust of his argument was that we shouldn’t be using fixed effects modelling for analysing sociolinguistic data, primarily because fixed effects models treat every token as independent and so give too much weight to individual speakers with lots of tokens. So you might have 10 speakers (5 male and 5 female) and 100 tokens, but those 100 tokens are spread unevenly across your sample (something like 3 7 12 8 4 6 25 5 15 15). A fixed effects model might give you a statistically significant effect of gender, but that could simply be because your data is skewed towards a handful of the male speakers. Mixed effects models, on the other hand, include a random effect for each speaker, which balances this skew out and gives you a more realistic picture of how the data is behaving. To drive the point home, he took a dataset and analysed it first with fixed effects models and then with mixed effects models, showing just how much can be hidden by the former. Really fantastic stuff, and some very good methodological issues were raised throughout his talk. Basically, we should all be using mixed effects models if we want a truly accurate picture of our data, though fixed effects models can still be useful so long as you know the limitations of the approach.
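For anyone wanting to try this at home, here’s a minimal sketch of the contrast in R using the lme4 package. This is not Dan’s analysis: the data are simulated (with exactly the uneven token counts from the example above), and the setup deliberately gives gender no real effect so that any apparent effect is down to the skew.

    library(lme4)  # provides glmer() for mixed-effects logistic regression

    set.seed(42)
    # 10 speakers (5 male, 5 female) with the uneven token counts from above
    tokens  <- c(3, 7, 12, 8, 4, 6, 25, 5, 15, 15)
    speaker <- factor(rep(1:10, times = tokens))
    gender  <- rep(rep(c("M", "F"), each = 5), times = tokens)
    # each speaker gets their own baseline rate; gender itself does nothing
    sp_int  <- rep(rnorm(10, mean = 0, sd = 1.5), times = tokens)
    y       <- rbinom(sum(tokens), size = 1, prob = plogis(sp_int))
    d       <- data.frame(y, gender, speaker)

    # fixed effects only: every token treated as independent, so the most
    # prolific speakers dominate and 'gender' can look significant by accident
    fe <- glm(y ~ gender, data = d, family = binomial)

    # mixed effects: a by-speaker random intercept soaks up individual
    # differences before 'gender' gets any of the credit
    me <- glmer(y ~ gender + (1 | speaker), data = d, family = binomial)

    summary(fe)$coefficients
    summary(me)$coefficients

Comparing the two coefficient tables is the whole argument in miniature: the gender effect that the fixed effects model conjures out of a few talkative speakers shrinks (and loses its significance) once the model knows which tokens came from which speaker.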

Ok, this is getting a bit long, so I’ll save Lauren Hall-Lew and Erez Levon’s work till Tuesday, but I should wrap up with a quick word on the postgraduate student presentations. In a nutshell, they were fantastic: clear, concise, professional, high-quality, and all confidently delivered with style and panache. And seriously, that’s not hyperbole; they were all really brilliant, and they reassured me that the future of sociolinguistics is bright and sunny.

P.S. Oh, and I gave a talk as well, on getting a career in academia, which is here. Thanks again to Lynn Clark for helping me figure out what to talk about!
