About Me

Find out more about me here.
Showing posts with label research methods. Show all posts

05 April 2022

The Bending Arc of the Moral Universe

I'm of the opinion that the story of history has been one of progress and that we can reasonably predict that progress to continue into the future. Critics will point out setbacks (e.g. recent increases in violence toward Asian Americans or increases in crime rates). Could these represent reversals? Sure, but the historical data are noisy, and we regularly witness short-term blips revert to the long-run trend line. Short-term variability does not negate the long-term trend.

When I say it's a safe bet that things will continue to get better, I am not saying that will happen on its own without effort. Somehow people think that the work that goes into making progress happen is outside of my prediction. It's not. I'm predicting that people will continue to put in the work because that is entangled in the historic data as well. A lack of complacency is a part of my model. 

I could be wrong. Social science is probabilistic.

17 January 2018

Big Data, AI Neural Networks, and Us

I remember being incredulous in graduate school, learning that very large N's and data mining were things to be suspicious of. After all, my thinking went, how could more information and an automatic way to recognize connections that one might miss be bad? It turns out that big data sets make it more likely that one will find statistical significance in the absence of a real relationship and that data mining tends to turn up a lot of spurious, atheoretical correlations. Enter the Big Data and AI Neural Networks movements. They present approximately the same issues. Big Data, it turns out, leads to lots of connections with little explanation, and Neural Networks are, by definition, not understandable.
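The data-mining worry can be made concrete with a toy simulation. Nothing here is from any real study: we generate 1,000 predictors that are pure noise, correlate each with an equally random outcome, and count how many clear the conventional p < .05 bar anyway. The sample size, number of predictors, and seed are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 1000                      # observations, candidate predictors
y = rng.normal(size=n)                # outcome: pure noise
X = rng.normal(size=(n, k))           # predictors: also pure noise

# Pearson r between y and each column of X
r = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())

# Large-sample critical |r| for two-sided p < .05 is roughly 1.96 / sqrt(n)
crit = 1.96 / np.sqrt(n)
false_hits = int((np.abs(r) > crit).sum())
print(f"'significant' noise correlations: {false_hits} of {k}")
```

By construction none of these relationships is real, yet roughly 5% of them come out "significant," which is exactly the atheoretical, spurious-correlation problem described above.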

It occurs to me, though, that traditional statistical regression analysis could be combined with Big Data and AI Neural Networks as a corrective. Why not start with a traditional statistical model rooted in theory and previous empirical findings and then have a neural network mine the error term? We could develop a set of meta-analyses that clearly state our priors. (This part eventually could even be done by bots.) Then, the AI's could do a kind of Bayesian exploration. I'm out of my depth now, though.
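A minimal sketch of that two-stage idea, on synthetic data: stage one is a theory-driven OLS fit; stage two hands the residuals to a flexible learner to hunt for structure the theory missed. To keep the sketch dependency-free, a k-nearest-neighbor smoother stands in for the neural network; every variable name and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x_theory = rng.normal(size=n)            # predictor your theory names
x_hidden = rng.uniform(-3, 3, size=n)    # structure the theory misses
y = 2.0 * x_theory + np.sin(x_hidden) + rng.normal(scale=0.3, size=n)

# Stage 1: theory-driven OLS on x_theory alone
A = np.column_stack([np.ones(n), x_theory])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# Stage 2: mine the error term with a flexible learner
# (a k-NN smoother stands in here for the neural network)
def knn_smooth(train_x, train_r, query_x, k=15):
    d = np.abs(train_x[None, :] - query_x[:, None])
    idx = np.argsort(d, axis=1)[:, :k]
    return train_r[idx].mean(axis=1)

fitted_resid = knn_smooth(x_hidden, resid, x_hidden)

# If stage 2 found real structure, the combined model's error shrinks
mse_stage1 = float(np.mean(resid**2))
mse_both = float(np.mean((resid - fitted_resid)**2))
print(mse_stage1, mse_both)
```

The theory-rooted model stays interpretable, and whatever systematic pattern the second stage recovers from the residuals is a candidate for new theorizing rather than a black-box replacement for it.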

06 May 2017

'Are You in a Band?' Autoethnographic Interview Essays, Pt. 1/10

As I posted recently, I'm starting a new research project. It's an ethnography of how musicians start and sustain a band. I'm also incorporating elements of autoethnography. Primarily, I see this part of the project as exploratory and supplementary. In that spirit, I'm going to write essay answers to the preliminary, sample interview questions. It should be in ten parts. Here we go.

--

Can you tell me about the first time you joined a band?

The first band I was in was a foursome called Hammertoe. We founded the group when I was in 7th or 8th grade. I played bass and my buddy Devin played drums. We were both fairly talented musicians already at that point, both having taken lessons for a few years and developed some chops. We recruited a mutual friend, Stu, to play guitar and another mutual friend, Sean, to sing. It turned out that Sean couldn't actually sing so he didn't last long. Stu and I started trading lead singing responsibilities. Stu really was only a guitar player at that point in the sense that he owned a guitar and an amp, but he was a pretty quick study. It would have been around 1993 or 1994 when we started so most of our early repertoire consisted of Nirvana and Green Day covers, but we almost immediately started writing original stuff. I was already reading enough trade publications to know that we weren't going to be taken seriously as a cover band. The first songs we wrote were really bad, but we were learning.

Our first gig was at a party in the basement of our friend Jill's house when we were high school frosh. I think we played some Green Day, Offspring, and Nirvana. As amateur as it was, it was legitimizing, especially since Jill and her friends were all a year ahead of us in school, and though it's easy to forget, that's a big step up in the adolescent hierarchy.

That same year, we changed our name from Hammertoe (a reference to my grandmother's jacked up feet that our family dog would lick) to Steamboat Willie. Stu, who was becoming our lead singer and frontman, thought the name was original, but when he realized that it was a Mickey Mouse cartoon, we quickly dropped the "Willie" and became Steamboat. We played our high school talent show that year as frosh, covering Hendrix's cover of "Hey Joe" with our new lead guitarist Jason. We had been introduced to him through some mutual friends. He was, in our eyes, a very cool, older, and popular kid, but he proved to be very affable. The variety show was not the typical amateur-hour; it was a big production with a longstanding reputation in the community.

Sometime after that, we started playing shows more regularly and eventually recorded and self-released an album of original material. We disbanded as we were graduating high school and heading in different directions to college and whatnot.

Overall, I feel very lucky for the time I had with that band. We had little squabbles here and there, but generally, we got along very well. I was contributing to songwriting with the group, but much of what I was writing wasn't appropriate for the group. I wish that I'd pursued other avenues for that stuff. I tended to write poppier material, and Steamboat was very much rooted in the Led Zeppelin school of '70s blues-based jam rock. I sang background vocals, but it was determined that my voice was too clean and pretty, which, in retrospect, was absolutely true for that genre.

I learned some important lessons about band dynamics. We started the band as very, very close friends, but as we went along, I became more distant and sought outside friendships. The guys in the band were experimenting with drugs and generally being more deviant, and while I wasn't a square, I was becoming more focused on academics and sports and knew that path was probably dangerous so I distanced myself. We continued to play well together, though.

For a few years in there, I was also playing in a Christian rock band called Embrace. It was fronted by Chris, who was in his thirties and the youth pastor at the local Church of God. Stu recruited me into the band. He attended the church and was playing guitar for the group. I had many other friends and acquaintances in the youth group there so it was a fairly easy sell, especially since I was a fairly religious kid. In retrospect, it was a very odd time. I learned a lot in the group musically and learned hard lessons about what I didn't believe religiously, but I have fond memories of playing with the band. We did record an album of original material, mostly written by Chris. I am a very different person now than I was then. Adult Brad would judge teenage Brad.

I played an occasional gig here or there with other groups, but I was pretty loyal to Steamboat. I went on to play in a few other bands in college and grad school, but none was as formative as that first band.

01 May 2017

New Project: 'Are You in a Band?' (Auto-)Ethnography of Forming a Popular Music Ensemble

I'm starting a new research project, and I'm pretty excited. Here is a portion of the proposal. If you or someone you know wants to give me funding or a book contract, I'm game.

Background and Purpose
Previous research has looked at the community created by music consumers (Lena 2012) as well as at the musical tastes of individuals (Bourdieu 1984, Peterson and Kern 1996). Previous research has also looked at discrimination against and among musical performers (Clawson 1999, Donze 2011, Goldin and Rouse 2000). Little work has been conducted, however, in understanding the process by which musicians in popular music negotiate the formation and maintenance of a musical collective; in other words, how do musicians start and sustain a band?

Methods
I propose to conduct an autoethnography and ethnography of the process of forming a band in a prominent musical center and college town in the southeastern United States. I will collect my own personal reflections, fieldnotes from participant-observations, and semi-structured interviews with fellow musicians and related insiders (e.g. promoters, venue owners, press, record industry, etc.) of the local popular music scene. This will follow the process from early formation to recruitment, auditions, rehearsals, socializing, socialization, performances, and recording and, possibly, to ejection, replacement, and even dissolution. The specific role of the primary investigator is of band (co-)leader, songwriter, and multi-instrumentalist.

While the nature of semi-structured interviews does not include verbatim questionnaires, the following are likely indicative of the nature of questions to be asked:

  • How would you describe the interaction of band members and how is this similar to or different from other relationships in your life?
  • How has being in a band affected your outside relationships?
  • How has being in a band affected your regular gig [i.e. your primary employment]?
  • Can you tell me about the first time you joined a band?
  • Have you ever been kicked out of a band? Can you tell me about that experience?
  • Tell me about what it’s like to perform on stage.
  • How would you describe rehearsals? Do you enjoy them?
  • Tell me about your practice routine.
  • What was the worst argument you ever had in a band?
  • What was the most rewarding experience you ever had in a band?

Sources
Bourdieu, Pierre. 1984. Distinction: A Social Critique of the Judgement of Taste. Cambridge, Massachusetts: Harvard University Press.
Clawson, Mary Ann. 1999. “When Women Play the Bass: Instrument Specialization and Gender Interpretation in Alternative Rock Music.” Gender and Society 13(2):193-210.
Donze, Patti. 2011. “Popular Music, Identity, and Sexualization: A Latent Class Analysis of Artist Types.” Poetics 39:44-63.
Goldin, Claudia and Cecilia Rouse. 2000. “Orchestrating Impartiality: The Impact of ‘Blind’ Auditions on Female Musicians.” American Economic Review 90(4):715-741.
Lena, Jennifer. 2012. Banding Together: How Communities Create Genres in Popular Music. Princeton, New Jersey: Princeton University Press.
Peterson, Richard and Roger Kern. 1996. “Changing Highbrow Taste: From Snob to Omnivore.” American Sociological Review 61:900-907.

06 June 2016

Sociology and the Crystal Ball

Sociologists generally don't like to make predictions. I just read a piece where Peter Berger, the prominent sociologist of religion, says that "prediction is very dangerous." (He was specifically talking about James Davison Hunter's prediction in Evangelicalism: The Coming Generation [1987] that Evangelicalism would be in decline within a generation.) It seems to me that this is very bad practice.

First, if we are not making predictions, we are merely giving descriptions. This is fine and necessary, but it does not allow for adequate explanation of social reality.

Second, sociologists seem largely leery of prediction-making because they could be wrong; the danger is that in getting things wrong, as will occasionally happen, one--or even the entire field--will be dismissed. Falsification, though, is central to science. By making predictions, we set up a test for our empirically informed theory. If we get it wrong, we reconsider and modify the theory. Without the prediction, we are never wrong, but we also never move forward.

Specifically, I happen to disagree with Berger about Hunter's prediction about Evangelicalism. I think that we are starting to see hints of evidence toward a decline, or denominationalization.

Course Goals and Outcomes: We Can't Measure It All

Increasingly, we are being encouraged and even required by administrators to align teaching goals directly to course assignments. I think it is OK not to do this--at least not all of them. Not everything I teach has to be measured. Think about it this way: if I give an exam in a class, do I ask a question about each and every concept that I teach and believe to be important? Of course I don't! That exam would take at least as long as the time that we spend together in class! Instead, I test on a few select concepts as a sample of what I taught. Why would our course, curriculum, or major goals be any different?

Facebook Wall as Personal Property

It's no surprise that social media can be a contentious and even ugly place. I hold tightly to my own rule never to read the comment section. Here is a bit more along those lines.

My Facebook wall is my personal space. I get to decide what goes there. Just as my neighbor has no right to post a sign in my front lawn without my permission, my FB friends have no right to my wall. My friends are free to hide me from their feed, to de-friend me, or to post a response on their own walls. In a perfect world, I would love to allow open discussion in the comments to my wall posts, but as I am regularly reminded, we don't (yet) live in that world.

Take one post for example. The information was from a peer-reviewed scholarly article recently published in the flagship journal of sociology. The conclusions came from the rigorous statistical analysis of one of the most trusted datasets in all of social science. I'm certain that some of my FB friends would have been inclined to comment on my post along the lines of "Brad, I disagree; I think that Democrats are as much to blame as Republicans," but the data in this case make it overwhelmingly clear that they are wrong. People like to argue as if everything is a matter of differing opinions when, in reality, some opinions are simply untenable. That said, I'm certain that there are plenty of things about that study that one could question, like whether the authors used the proper method or whether they included the right control variables, but from past experience, the people who are the most vociferous in their disagreement over such matters tend to be the people who are the least qualified to adjudicate the research. They reverse the logic: "I disagree with the conclusions, ergo the manner in which those conclusions were decided must be flawed."

At a much more basic level, I don't have the time or emotional energy to expend on constantly engaging in arguments on social media that have no chance of actually swaying another's mind. It's much easier just to say, "Look at this. Food for thought. Now, be on your way."

Now, get off my lawn, you kids!

Visualizing Believing/Behaving/Belonging

Here is a figure laying out a more nuanced way that I propose for understanding the believing/belonging/behaving paradigm:



Here is some speculation on examples from each location:
  1. High Belief/Low Belonging/Low Behaving (e.g. Cultural Christians)
  2. Low Belief/High Belonging/Low Behaving (e.g. Mainline Protestants)
  3. Low Belief/Low Belonging/High Behaving (e.g. Reform Christians)
  4. High Belief/High Belonging/Low Behaving (e.g. nondenominational Christians)
  5. High Belief/Low Belonging/High Behaving (e.g. Buddhists)
  6. Low Belief/High Belonging/High Behaving (e.g. Conservative Jews)
  7. High Belief/High Belonging/High Behaving (e.g. Evangelical Protestants)
  8. Low Belief/Low Belonging/Low Behaving (e.g. seculars)
--
Steensland et al. 2000. "The Measure of American Religion: Toward Improving the State of the Art." Social Forces 79(1):291-318.

Smidt, Corwin E., Lyman A. Kellstedt, and James L. Guth. 2009. "The Role of Religion in American Politics: Explanatory Theories and Associated Analytical and Measurement Issues." Pp. 3-42 in The Oxford Handbook of Religion and American Politics, edited by Corwin E. Smidt, Lyman A. Kellstedt, and James L. Guth. New York: Oxford University Press.

10 February 2016

Positionality in the Scientific Study of Religion

Positionality seems to be an emerging--dare I write "trendy"--topic in sociology as of late. As I understand it, positionality is the acknowledgement that the social location of the researcher has some bearing on her/his study of others, particularly when there is great social distance between the researcher and the subject. A recent example of this that has garnered a lot of discussion is Alice Goffman (white, upper class woman) and her ethnography of black, lower class men in her book, On the Run: Fugitive Life in an American City.

It occurs to me that positionality has not been discussed to my knowledge within the sociology of religion. If it matters that a woman studying race is white, does it not matter in the same way what a man's religious identity is who studies religion? Consider the following hypothetical research reports:
  1. a study of Black Protestantism by a Black Protestant sociologist
  2. a study of Black Protestantism by a white Episcopalian sociologist
  3. a study of Black Protestantism by a white atheist
Which of the above is the most trustworthy? Do we benefit from knowing about the researchers' identities? Should the researchers feel compelled to share their identities? Consider these as well:
  1. a study of nones by an atheist sociologist
  2. a study of nones by a conservative Evangelical/sectarian Christian sociologist
  3. a study of nones by a Mainline Protestant sociologist
In research methods, I regularly teach that researchers who are in-group members have the advantage of entree, in that they have access to people, opinions, and insider perspectives that outsiders do not; on the other hand, out-group members can offer critique and outsider perspectives that insiders cannot. Every academic can tell you a story about a discussion s/he had with a dissertation committee chair about this kind of status and whether one can, or even should try to, remain objective. This is a discussion, however, that I think happens far less often in the scholarship of religion. In the wake of much discussion over the not-so-hidden religious motivations for some fairly high-profile research in the sociology of religion and of family (see here, here, and here for starters), I think this discussion is overdue. Consider this a call to extend the discussion of positionality into the subfield.

25 November 2015

(Not) Understanding the Racial, Economic, and Migratory History of Two Midwestern Cities

Here is a story that I wanted to confirm with data:
Benton Harbor, Michigan, used to be more white than it is now. At some point (likely in the late-1960s/early-1970s), the racial proportions of the city shifted, presumably as whites with the requisite economic and social capital fled the city, leaving behind poor blacks without the resources to relocate. Conversely, Saint Joseph, Michigan, Benton Harbor's "Twin City" (the height of unintentionally satirical titles) has become more solidly white and affluent over that period. In short, Benton Harbor's destitution is a story of White Flight across the river into Saint Joseph.
It's a simple hypothesis; however, the data don't seem to be (readily) available to tell this story.  Both Benton Harbor and Saint Joseph are in the same county (i.e. Berrien) and have generally been counted in the same metropolitan statistical area by the Census, which means that it is nearly impossible--at least for a person without specific training in historical demography and a lot of free time--to disentangle the two cities. (If you are reading this and know how to glean the data, please contact me.)

It makes me wonder if this is part of the reason that the two cities have been able to linger for so long in their highly unequal socioeconomic states. If it were easier to demonstrate the social history, perhaps it would have been easier to overcome.

Regardless, here is the history as I see it:
Benton Harbor is currently nearly 90% black and more than 40% impoverished (cite). Saint Joseph, on the other hand, is currently nearly 90% white and less than 7% impoverished (cite). This is the causal chain that I think led to this:
slavery → Jim Crow laws in the South → Great Migration → deindustrialization → White Flight → Benton Harbor

--
Special thanks to Philip Cohen for his help via Twitter pointing me to possible data and Tiffany Julian via Facebook for her help pointing out the limitations of the data.

Full disclosure: I spent the first 18+ years of my life in St. Joe and still have family and friends living there, hence my interest.

17 September 2015

What's the Problem with the IRB?

I've been putting this post off for well over a year, and today, I'm finally getting it out of the way. This will likely be my final post on the topic. I was prompted to revisit the issue after reading this very friendly post from Contexts. This is my take on the problems with the IRB.

Overall, I would argue that IRB's generally do admirably, especially given their circumstances. That's not to say that they are perfect, however. Most importantly, those problems that do exist for IRB's do so simultaneously and interdependently at several different levels. Here are my notes on those:
  • Institutional
    • The federal government created IRB's, requires that institutions accepting federal funds adopt them, and sets the rules by which they function. It has long been agreed that the federal guidelines are based on biomedical ethical concerns that do not apply to social scientific research (which has exacerbated the cultural and interpersonal issues listed below). Fixes are in the works, but change comes slowly through such a bureaucracy.
    • As the local and public face of the IRB, the IRB chair often becomes the punching bag for those who for whatever reason do not acknowledge the larger constraints that are in place. The IRB chair is the concrete end of an abstract labyrinth.
  • Organizational
    • My employing institution, and as I understand it most institutions, chronically under-resources our IRB. Here is a short list of some specifics:
      • Our IRB has no support staff. All of the administrative work was handled by the chair, and all of the reviews were performed by faculty doing voluntary service.
      • Our IRB had no annual budget. That is not to say that the university didn't spend money; it did, on things like a submission system and a small stipend for the chair, but financing was a hidden issue over which the IRB had no authority.
      • Our administration only allowed for inadequate course release for the chair, granting only a single course release for an entire academic year.
      • Our administration only granted a woefully inadequate stipend for the chair in comparison to the additional work requirements.
      • This isn't necessarily about resources (perhaps as human or social capital), but our university was willing to allow a faculty member without tenure (i.e. moi) to serve as chair, a problem for many reasons that I won't take up here.
  • Cultural
    • Academics as a group generally neither respect the need for IRB's nor understand how they function. I think this is partly a function of the type of person who selects into the academe, but mostly, the blame here lies with a failure of graduate programs working with human subjects to adequately train future faculty in the history and ethics of human subjects research. (This hostile culture unarguably exacerbates the interpersonal issues below.)
  • Interpersonal
    • There are difficult people in the world, and they are over-represented among academics. There are many among us with inflated yet fragile egos.
      • internal
        • Though the decided minority, IRB members themselves can be surprisingly quite difficult. One can often ignore outside agitators, but it's not as easy to deal with internal contention.
      • external
        • Every institution has a handful of problem children who are consistently and predictably difficult to deal with. I recall an email exchange with one researcher who complained that the review of his protocol was taking too long, and this was only a week after he had submitted his application. When I politely informed him that our IRB had a turnaround time that was on par with peer IRB's, he scolded me and said that our goal should be to be better than our peers. It is said that you can please all of the people some of the time and some of the people all of the time, but in my experience, there are some people who refuse to be pleased in this life.

28 August 2013

A Guiding IRB Principle

Just a quick post today. This IRB chairship is really dominating what passed for discretionary time in previous semesters.

The sole and primary purpose of the IRB is to protect human subjects. Full stop.

An important secondary concern, though, should always be for the IRB to ensure the legitimacy of its own authority within the university by not inflicting undue burdens on researchers that slow the review process unnecessarily. If researchers perceive the IRB as a bureaucratic roadblock instead of as an ethical champion, they are less likely to abide by the IRB and its policies, and this has the potential to directly constrain that sole, primary purpose: to protect human subjects.

I don't pretend yet to have an answer to this conundrum, but I think it's a place to begin any discussion.

15 August 2013

More on RELTRAD

Some further thoughts on my post yesterday about Darren Sherkat's critique of the RELTRAD scheme. Before I even begin entertaining work on a statistical defense of the scheme, I think it's worth taking a step back and thinking about basic research design and what we can learn from that about the study of religion. The first step is conceptualization: what concept are we trying to use/understand? One of the advances put forward with the RELTRAD scheme is that it is primarily a measure of belonging, where previous and competing schemes conflate beliefs with belonging.[1] The second step is operationalization: how can we faithfully measure this concept? Ideally, we ask people to which specific congregation they belong. Short of that, we ask people to which denomination they belong. We could not, however, include variables for each denomination in our statistical models, let alone variables for each congregation, so we need a way to categorize, to collapse and simplify. To stay true to our conceptualization of religious affiliation, RELTRAD collapses these affiliations into several traditions. These traditions are based on the qualitative work of historians of religion who have identified continuities for formal religious institutions (i.e. churches and denominations) and informal religious movements (i.e. sects and NRM's) as well as on the qualitative work of theologians who have identified several strains of formal scholastic theologies and informal "everyday" theologies.

To put it simply, RELTRAD attempts to measure a social reality; there is qualitative evidence that these groups are things, and distinct things at that. Previous and competing schemes arguably fabricate the categories that they measure; there is little evidence that there is in lived social reality such a thing as a "religious liberal" or a "liberal Protestant" or a "sectarian Protestant."
We have conjured these categories for the sake of predictive power in our models completely detached from their own reality. In essence, these schemes come dangerously close to selecting on the dependent variable. And, this is the big deal: we err if we evaluate our operationalization, that is our measures, solely on how well they predict some outcome. The first concern should be whether our measures actually measure something about the social world. We assume sociologically that well-operationalized concepts by definition should be predictive of outcomes so predictiveness does indeed become indirect evidence that we have good measures; predictiveness by itself, however, is not evidence that we have quality measures.[2] Based on qualitative evidence and on the logic of research design, RELTRAD offers the best scheme currently available to social researchers paying attention to religious belonging.

In passing, it's also worth noting that not all Black Protestants are black. In fact, in the cumulative GSS dataset, about 4% of Black Protestants are categorized as non-black.

--
[1] - Yes, RELTRAD does require some imputation using attendance and race measures for datasets that have less-than-perfect denominational or congregational measures (e.g. the GSS). The core of the measures, however, are affiliation.

[2] - Although there is plenty of evidence for the continued social relevance of religious belonging, one could imagine a time when the RELTRAD measures would stop being significant predictors. This would not necessarily mean that the RELTRAD measures were poor or deficient but, instead, that religious affiliation itself had become less socially salient, which would be a major sociological finding.

14 August 2013

Sherkat and RELTRAD


In his most recent blog post, Darren Sherkat critiques what has become known as the Steensland scheme of religion (though the authors of the scheme call it the Religious Traditions scheme, or RELTRAD for short). I met Darren several years ago at a Southern Sociological Society meeting where we were both presenting in the same session. The paper I was presenting with my undergraduate student co-author, a first-time conference attendee, analyzed changes of attitudes over time about homosexuality in the RELTRAD categories using the cumulative GSS data. There, his major critique of our paper was that people should stop using the Religious Traditions scheme because there was more variation within religious groups than there was between religious groups, an argument that he again implies in his blog post. I think that Sherkat's critique is wrong. First, this simply isn't borne out in statistical analysis, both in the original paper and in many other analyses using the scheme. (More on this in a future post.) Second, there are the categories themselves. Sherkat primarily takes issue with Black Protestantism as a category, which he characterizes as being based on the racist assumption that all black people are the same. By that logic, we should never include "black" as a dummy right-hand variable in any of our models either, because this would be an assumption that all black people are the same; nor should we include "Republican," because there is a lot of variation within political party ID. Moreover, Sherkat mistakes Black Protestantism to be an individual characteristic when it is in actuality a measure of institutional membership. One is not so much Black Protestant as one belongs to a Black Protestant congregation. It is difficult to understand how one could deny the historical reality of a Black Protestant tradition in the United States that is socially distinct from other religious traditions.
(See Lincoln and Mamiya's The Black Church in the African-American Experience for more on this.)


--
In the interest of full disclosure, Brian Steensland, who, among others, developed the RELTRAD scheme, was one of my dissertation committee members.

01 August 2013

IRB Chair Extraordinaire

So, today I begin my tenure as the chair of our college's IRB. I've written before on IRB's (here and here). Orgtheory has quite a bit on IRB's that's worth checking out as well. In general, I think it is safe to say that social scientists, and likely researchers of virtually all ilks, do an unfair amount of kvetching and gnashing of teeth about IRB's and human subjects committees. I've started brainstorming about ways to alleviate the causes of the frustration that many of us, including me, experience with the IRB process. So far, I can see two places for improvement. First, most reviews that spend a prolonged amount of time in the machine do so because researchers inadvertently leave out requirements (e.g. supervisor approvals), fail to attach important documents (e.g. informed consent forms), or fail to provide enough detail about their protocol. A little more attention to detail at the front end can dramatically reduce the time to approval. Second, reviewers need to be regularly reminded to be mindful of the level of risk involved in proposed research. Clarification and revision are often unnecessary when it is clear that the protocol will only pose minimal risks to human subjects. We reviewers can often and easily lose sight of our purpose in the haze of our own institutional bureaucracy. I'm going to try to make these points of emphasis early in my stint as chair. I'll report back on how it all goes.

30 July 2013

Clean at Last? 2013 Tour de France Prediction Results

In my recent post on le Tour de France, I predicted an average speed of 41.741 kph (25.936 mph) for the winner this summer based on regression analysis of the winning speeds over all 99 previous races. If you hadn't heard yet, Chris Froome won the yellow jersey this year with an average speed of 40.545 kph (25.193 mph), which is below the 95% confidence interval of my prediction. Although I can't say this with any statistical significance yet, it seems that speeds have slowed since 2005. In fact, average speeds over the last seven races have been below what the model predicts. Why would a trend that has been rather consistent over the last 110 years apparently reverse? Well, this is precisely what one would predict if the racers had suddenly gotten clean. There have been a number of reasons to expect average speeds for professional bicycle races to increase steadily, including technological (e.g. carbon fiber) and biomedical (e.g. better designed training and recovery regimens) advances. Bridging tech and medicine are the so-called doping strategies employed by riders like Lance Armstrong. If we agree that the human body--even a body belonging to a genetically gifted professional athlete--has limits, then extraordinary means like doping simply stretch those biological limits a wee bit. When abandoned, the bodies tend back toward their ordinary constraints. I'll be keeping an eye on these trends for the next few years.
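The prediction-interval logic behind that comparison can be sketched in a few lines. The series below is a synthetic stand-in for the winning-speed data, not the author's actual dataset: a linear upward trend plus noise, fit by OLS, with a large-sample 95% prediction interval for the next race.

```python
import numpy as np

# Toy stand-in for the Tour winning-speed series (NOT the real data):
# a linear upward trend plus noise over the 1903-2012 editions.
rng = np.random.default_rng(2)
year = np.arange(1903, 2013).astype(float)
speed = 25 + 0.16 * (year - 1903) + rng.normal(scale=1.0, size=year.size)

n = year.size
A = np.column_stack([np.ones(n), year])
beta, *_ = np.linalg.lstsq(A, speed, rcond=None)
resid = speed - A @ beta
s = np.sqrt(resid @ resid / (n - 2))          # residual standard error

# 95% prediction interval for a new year (large-n normal approx, z = 1.96)
x_new = 2013.0
y_hat = beta[0] + beta[1] * x_new
xbar = year.mean()
se_pred = s * np.sqrt(1 + 1/n + (x_new - xbar)**2 / ((year - xbar)**2).sum())
lo, hi = y_hat - 1.96 * se_pred, y_hat + 1.96 * se_pred
print(f"predicted {y_hat:.2f} kph, 95% PI [{lo:.2f}, {hi:.2f}]")
```

An observed winning speed below `lo` falls outside the interval, which is the sense in which the 2013 result sat below the model's 95% band and hinted that the long-running trend had broken.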

26 July 2013

Colors, Qualia, and Sociology

I think the following vlog post is a good starting point for lots of sociological conversation.



I had been unaware of the philosophical concept "qualia," but I think it poses a number of questions for those of us who argue for the scientific-ness of qualitative methods. (For the record, I do still believe that qualitative research is science.) Qualia also have a lot of implications for empathy, multiculturalism, violence, and sociality in general.

11 June 2013

Little 500 Prediction Results

I wrote a couple months ago about the Little 500. How did my prediction pan out? My model suggested a winning time for this year's men's race of 2:02:32.6. The actual winning time was 2:07:51, which falls outside of the 95% confidence interval for my predicted value. You can read more about the race here. We'll see if my prediction for le Tour de France does any better.

06 June 2013

Experiencing Music

Quick thought before I'm off for weekend travel. (No post likely tomorrow.) It's a bit of a cliché, but bands are often criticized for not being able to capture the energy of their live performances on their studio albums. Could it be that it's not the band but the audience? Could it be that the experience of the music, rather than its performance, is what varies? Think about the difference in settings.

live venue
  • light show
  • high sound pressure levels (i.e. loud volume)
  • social setting
  • alcohol/drugs
recorded music
  • typically asocial
  • distractions (e.g. traffic if in car)
More on this in a future post.