I teach a variety of research courses to my doctoral students: a qualitative course that emphasizes interviewing, a statistics course that emphasizes SPSS, and a mixed methods course that emphasizes how to complete your dissertation before you are ready to retire.
For some time now I have been advising my students to combine survey research with interviews. They start out thinking I am crazy: why do two dissertations when one is hard enough? But I am not crazy, at least not about this.
The main virtue of mixed methods research is that it will help you tell a story when it comes time to turn your raw data into a polished report. Most graduate students underestimate how difficult and time-consuming this is. They imagine that once they have collected their data, whether survey results or interviews, they are done with the heavy lifting. Now all they have to do is write up their results. This view of the process ignores some difficulties that are likely to arise.
If you are doing quantitative research and the statistical analyses support your hypotheses, you are home free. But what can you do if this is not the case? You can, I suppose, write something like this: “The hypotheses seemed reasonable; unfortunately, they were not confirmed.”
Most dissertation committees will accept this as an honest conclusion, and you will probably get your degree, but what a sad end to such a long investment. You will have put at least a year of concentrated effort into this project; surely you want to be able to say something more to yourself than, “Oh well, better luck next time.” Mixed methods research can help you salvage a more interesting story, even if all your quantitative analyses are statistically insignificant. I will show you how in a moment.
Qualitative research presents a different problem. Here, of course, there are no statistical analyses, so the issue of statistically insignificant results does not arise. Instead, the main difficulty comes in organizing a mountain of interview transcripts into a coherent story. One common approach (by no means the only possibility) is to organize your raw material into some sort of typology. (If you are not sure what a typology is, try to remember the different types of students in your high school: the jocks, the nerds, the artsy kids, the stoners, and so on.)
In much the same fashion, you might use your interview material to describe different types of business executives, high school principals, emergency room patients, and so on. The difficulty lies in figuring out a typology that captures something important and will persuade your readers.
Here is where the quantitative portion of a mixed methods study can be useful. You can use your participants’ scores on your survey to organize them into a typology, and you can do this even if you have no statistically significant results. Here is an example to illustrate what I have in mind; this is from a dissertation I am currently supervising.
The topic of this dissertation is: Why do (or don’t) professors provide accommodations to students with learning disabilities? Professors are required by law to provide these students with accommodations, but of course, some do more and some do less. Why is this? The study adopted a mixed methods approach: a survey and a focused interview with a carefully selected sub-sample of participants.
The survey included several scales, of which two are most important for this discussion. One was a measure of professors’ Attitudes toward accommodations. Those with a positive attitude agree that accommodations are necessary and helpful to students with disabilities, fair to other students, and not too much of a burden for the professor.
Those with a negative attitude say that accommodations are not necessary or helpful, not fair to other students, and far too much of a burden for the professor. The other main scale measured Action: what accommodations do professors actually provide? Participants who scored high on Action might give their students with learning disabilities more time on a test, extra advice, different kinds of assignments, and so on; those who scored low on Action would, of course, do less of this.
The results were surprising.
It seems obvious that there would be a strong correlation between these two scales: professors with positive attitudes toward accommodations should be more likely to take action that is consistent with their beliefs.
In fact, however, the correlation was minimal. Apparently, many professors who claim to favor accommodations do not actually provide them, and some professors who do provide accommodations are actually fairly skeptical about them. If this had been a purely quantitative study, this result would have been a disaster. If even the most obvious of this student’s hypotheses turns out to be wrong, what can she possibly do to rescue her project? (There were other hypotheses, which I will not discuss here.)
Fortunately, this student had anticipated this outcome; she had a back-up plan. Using the two scales together, she divided her participants into four types. The “Committed” were those who scored above average in both Attitude and Action. The “Skeptically Resistant” were those who scored below average in both Attitude and Action. The “Well-Intentioned” were those who scored above average in Attitude but below average in Action. Finally, the “Compliant” were those who scored below average in Attitude but nevertheless above average in Action.
|  | Action (above average) | Action (below average) |
| --- | --- | --- |
| Attitude (above average) | Committed | Well-Intentioned |
| Attitude (below average) | Compliant | Skeptically Resistant |
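For readers who like to see the mechanics spelled out, here is a minimal sketch of how such a typology can be constructed. The scores, variable names, and cutoff rule are hypothetical (I use a median split here, which guarantees equal-sized halves; the dissertation described above used the scale averages):

```python
# Sketch: sorting survey participants into the four types
# using a median split on two hypothetical scale scores.
from statistics import median

def classify(attitude, action, attitude_cutoff, action_cutoff):
    """Assign one of the four typology labels from the two cutoffs."""
    if attitude >= attitude_cutoff:
        return "Committed" if action >= action_cutoff else "Well-Intentioned"
    return "Compliant" if action >= action_cutoff else "Skeptically Resistant"

# Hypothetical (Attitude, Action) scores for five participants
scores = [(4.5, 4.2), (2.1, 1.8), (4.8, 2.0), (1.9, 4.1), (3.6, 3.9)]

att_cut = median(s[0] for s in scores)  # median Attitude score
act_cut = median(s[1] for s in scores)  # median Action score

for att, act in scores:
    print(f"Attitude={att}, Action={act} -> "
          f"{classify(att, act, att_cut, act_cut)}")
```

The interesting participants, as we will see, are the ones who land in the two off-diagonal cells.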
She then used her survey to identify a small number of professors of each type, and interviewed them. In these interviews she invited them to describe, in greater detail than surveys allow, what they think of accommodations and what they actually do. She also asked them to describe how they would react to a handful of carefully constructed, hypothetical scenarios. She is now in the midst of doing those interviews.
I want to point out how this mixed methods approach turned a potential disaster into a likely success. The problematic cases were the well-intentioned professors (those who do not provide accommodations despite claiming to favor them) and the compliant professors (those who do provide accommodations despite their skepticism).
From a statistical point of view, these are the cases that upset the logical prediction, but of course, these are also precisely the cases that are most interesting. Why do people act contrary to their professed beliefs? Her interviews will, in all likelihood, shed light on this apparent paradox.
This student’s experience is far from atypical. Many graduate students sail into their quantitative dissertations confident that the data will support their hypotheses; in reality, this is all too often not the case. As Niels Bohr once remarked, “Prediction is very difficult, especially about the future.” A good deal of what we take for granted turns out to be at best very weakly supported by whatever data we can find. This is not the exception to the rule; it is the rule.
As for those who embark on qualitative research, they too may be excessively confident. Students often promise in their proposals that “themes will emerge from the data.” This is wishful thinking: themes are no more likely to emerge spontaneously from your interview transcripts than is a house likely to “emerge” spontaneously from a pile of bricks.
You need some idea how to organize the raw material, whether that material is bricks or quotes selected from your interview transcripts, and survey scores can provide you with that organizational skeleton. No matter what you choose to measure, half of your participants will fall above the median and half below, so a median split always gives you groups to compare.
Pick the measures that seem most interesting to you, use them to organize a typology, then use that typology to locate an interesting assortment of participants to interview. This is how quantitative scores can help you organize a story out of your interviews even if none of your quantitative analyses turn out to be statistically significant.
Now to return, briefly, to the question of time and effort: Isn’t this a more time-consuming approach? Not really. Much of the time that you spend on your dissertation will go into writing the Problem chapter and the Literature Review; these chapters require no duplication of effort if you adopt a mixed methods approach.
If you are already planning a qualitative study, mixed methods may save you some time, because you can probably do less interviewing.
In our program, we ask students to do 20-30 hours of interviewing for a purely qualitative study, and 10-15 hours for one that uses mixed methods. That difference usually saves them a month or two of interviewing, transcribing, and coding: easily enough time to construct a survey and analyze the responses.
If you are planning a purely quantitative study, adding some interviews will indeed take more time, but not that much more.
The most important point is that this mixed methods approach will help you find a story when your statistical predictions fail (as they are very likely to do) and when no themes “emerge,” all by themselves, from that pile of interview transcripts heaped on your desk. It may seem like more work now, but it may save your bacon down the road.