The research supporting most strategic planning in colleges and universities relies on a dual thrust: scanning external conditions ("what should we be doing?") and evaluating internal operations ("how well do we do what we do?"). This duality has characterized higher education strategic planning since it was effectively described by Keller (1983).
For American community (two-year) colleges, this planning methodology means conducting elaborate "scans" of the external environment--trends in community or service area demographics and socioeconomics, competitors, and local cultural change--along with the usual evaluation of internal operations (see, for example, Knoell and McIntyre and McIntyre [2004, 2008a]).
This work relies largely on quantitative rather than qualitative metrics and tends to be light on input from the college's clients: students and community constituents (trustees, students' parents, alumni, and others) who are touched by the institution's multipart mission.
Moreover, most data about students and their behavior are quantitative (yield, enrollment, retention, and the like) and/or come from student "satisfaction" surveys that rely on a Likert-type scale response. Surveys by firms like Noel-Levitz and the National Survey of Student Engagement (NSSE) are popular examples. The accountability movement has produced a plethora of such surveys, leading to what Lipka (2011) describes as survey "fatigue" and resulting in low response rates (less than one-third) and often incomplete responses.
These surveys also pose other problems. An overall Likert-scale response value of 3.5 (when 5 = "highly" and 1 = "not at all" satisfied) supports few useful inferences unless benchmarked against a different student group, the same group at an earlier point in time, or students at another (hopefully comparable) institution. Even then, such results offer little insight into the real cause of a condition or problem (if there is one) and much less into a possible solution.
To gain input from other clients in the community, colleges typically employ advisory committees. However, by statute or regulation, these groups most often deliberate on narrow issues, such as specific career training programs, generating little evidence that can inform major planning strategies.
This article builds on the idea that standard strategic planning should be supplemented by qualitative research--in this case, two techniques commonly used in marketing and the social sciences: focus groups and ethnographic research. After briefly describing the two methods, the author describes his use of these tools at two community colleges:
* Focus groups in 2009 at College A in the Midwest
* Ethnographic research in 2006 on students at College B in the Southwest
Focus groups began to be used soon after World War II to evaluate radio programming and have since evolved for use in social and health research and in marketing of all kinds. They are a form of interviewing that relies on interaction within the group to allow participants to question each other and reconsider their own understandings of the topic(s) under discussion (Lindlof and Taylor 2002). The group leader (researcher) can directly question and follow up on participants' observations, probing a topic if necessary (e.g., "say more" is a typical segue).
A focus group is described by Merton and Kendall (1946) as consisting of individuals who have experience with or an opinion on the topic(s) under examination and who are subject to a predetermined set of questions. Or, as defined by Krueger and Casey (2000, p. 5), a focus group is "a carefully planned series of discussions designed to obtain perceptions on a defined area of interest in a permissive, non-threatening environment." Thus, focus groups seek to uncover attitudes, feelings, experiences, and beliefs not easily elicited by observation, one-on-one interviews, or questionnaire surveys. …