The Denominator as the "Target"

Abstract

Various analyses have used the transfer rate as a performance indicator for community colleges, but the question of what constitutes an appropriate denominator in the transfer-rate equation remains a point of contention. This article examines the potential drawbacks of using student-reported educational goals to determine which students are included in the denominator and notes how the behavioral signal approach--based on the courses students take and complete--may be a more appropriate alternative. In addition, the article discusses the prospects of employing a "transfer opportunity diagnosis" based on multiple indicators.

Keywords

transfer rates, research methods, student goals, use of information

**********

In the fervor to measure the performance of community colleges, various analysts have proposed and applied rates of success, such as transfer rates and graduation rates. The transfer mission in particular receives plenty of attention because of its role in economic mobility, and the transfer rate has, therefore, emerged as a particularly salient performance indicator (Cohen, 2005). These transfer rates generally represent a proportion of a target population that successfully enrolls at a 4-year college. This seems quite simple. The transfer rate is just this:

(those in the target population who transfer)/(the target population)
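
For concreteness, here is a minimal sketch of this calculation in Python; the function name and the counts are hypothetical illustrations, not a standard:

```python
def transfer_rate(transfers: int, target_population: int) -> float:
    """Transfer rate: (those in the target population who transfer)
    divided by (the target population)."""
    if target_population <= 0:
        raise ValueError("target population must be positive")
    return transfers / target_population

# Hypothetical counts: 350 transfers out of a target population of 1,000.
print(transfer_rate(350, 1000))  # 0.35
```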

The numerator of this rate can cause headaches because it requires some way to track a community college student after he or she has left the community college. However, over time, institutions have discovered a generally effective way to count this numerator, especially with the help of data matches made possible by the National Student Clearinghouse (Boughan, 2001; Schoenecker & Reeves, 2008). Yes, there can be some undercount of the people who transfer, because students sometimes do not give the community college adequate or accurate identifying data, and some 4-year colleges and 2-year colleges fail to submit student enrollment data to the organization that performs a data match to help us count transfers. There has been some discussion about how many units a student must complete at a community college prior to enrollment at a 4-year college in order for us to count a student as a vertical transfer. But we generally do not have that much controversy about what kind of student history qualifies as a transfer.

On the other hand, the denominator of this rate, the target population, seems to have frustrated the community college research community for the past few decades (Banks, 1990; Spicer & Armstrong, 1996). This is important because the denominator really controls whom we can include in the numerator. Researchers have noted different ways to define the denominator, the target population, and substantially different transfer rates have been calculated as a result of these different definitions (Horn & Lew, 2007). Thus, the definition of the target population in higher education poses a special problem. For example, the transfer rate is quite different from the success rate for heart surgeries. The denominator for heart surgeries is unequivocally the count of those patients who receive heart surgery at a given hospital. Heart surgery is a specific and tangible procedure (with generally tangible outcomes, although we sometimes need to wait a while for a valid evaluation). The denominator here results from a joint decision (consent of the patient and the judgment of his or her doctors) that relies partly on client desires (the patient's desire to be healthy) and partly on expert judgment (clinical decision making based on diagnostic information about the patient's condition and the prospect of benefit from surgery). That does not sound much like the various definitions used to place community college students into the target population of transfer.

The Self-Reported Educational Goal

What is the procedure involving transfer that admits students to the target population for the community college transfer mission? Is a student's answer to a question about his or her educational goal the appropriate signal to admit that person to the target population? Intuitively, if a student says that his or her educational goal is transfer, then doesn't that provide an adequate "signal" to admit that student into the target population? The following discussion reviews a set of factors that would make self-reported educational goals an overestimation of the target population for transfer. Figure 1 below graphs these factors.

Unfortunately, the student's selection of transfer as a goal may not communicate a clear signal for a number of reasons, including his or her haphazard response to a questionnaire asking about educational goals. Community college staff members and officials have long asserted that many of their students put little cognitive effort into their response to the question of educational goal. If some students put very little cognitive effort into their answer, can their choice of transfer (or even some other educational goal) have much meaning? When a casual choice occurs, can that student have much commitment to carrying out the effort needed to realize that goal? If students do not view the answer to this question as a "high-stakes" decision, for which they may be accountable, how much credence should researchers give to these self-reports of educational goals?

[FIGURE 1 OMITTED]

This brings up a further point. Would students choose their educational goal more carefully if it meant that someone would hold them accountable for that decision? Presently, students usually suffer no consequences if they mark the educational goal question haphazardly--it's nothing like signing a contract for a major purchase or for a commitment to work. However, students who mark their responses haphazardly can skew the denominator of a transfer rate, and this can have consequences for their community college if transfer rates become a critical measure of success for the college.

The likelihood of overestimating the population with a goal of transfer rises substantially if the answer sheet lists transfer as one of the first goals seen by the student--a well-known bias in questionnaires dubbed the primacy effect (Groves et al., 2004). For California, this bias may produce overestimation on a broad basis. In 2009, 94 of the 110 community colleges in the state used the computer application known as CCCApply for registering students (C. McKenzie, personal communication, June 9, 2009). In those colleges that used CCCApply, 80% to 100% of the applications by new students occurred through this computer application (CCCApply Project Center, n.d.). The first two response options (among the 15 provided) for CCCApply's question on educational goal are, "Obtain an associate degree and transfer to a 4-year institution," and "Transfer to a 4-year institution without an associate degree."

The response bias known as the social desirability effect (Groves et al., 2004; Weisberg, 2005) could well exacerbate the skewing of the target population by motivating some students to mark the socially desirable goal of transfer (and the baccalaureate degree) when they know that they have little intent to transfer. This is different from the haphazard responses I have described above because this scenario involves the conscious choice of a goal that the student does not have. In the social desirability effect, the respondent chooses to misreport an educational goal because he or she believes that his or her true choice would fail to satisfy perceived social norms about the universal value of the 4-year degree.

A concept related to social desirability is subjective expected utility (SEU). In SEU, respondents rationally weigh the personal gains or losses that they incur if they provide a specific answer to a question (Esser, 1993; Tourangeau, Rips, & Rasinski, 2000). False reporting of educational goals, according to the SEU theory, would occur if students believed that they could gain (or avoid loss) personally by reporting a transfer goal (in lieu of some other true goal of theirs). If students believed that they could improve their chances for assistance (e.g., financial aid) or for preferred course scheduling, this could motivate false reports of a transfer goal.

An unclear (noisy) signal does not have to come from haphazard questionnaire completion or other forms of response bias. A possible inflation in the count of students who intend to transfer can result from certain qualities of a respondent's interest in transfer as a goal. Sometimes new students really do harbor reservations and ambivalence about their educational goal. But a single survey question that forces the student to choose one option will not let analysts or researchers give that rather tenuous choice of transfer any less weight than the choice of a student who staunchly embraces the transfer goal. Wouldn't it be great if we also asked the student to mark on a scale how much confidence he or she has in his or her selection of an educational goal? Without such qualifying information, we tend to treat all students who mark the "transfer" choice as equal in attitude and commitment, an assumption that is unlikely to hold.

Students' educational goals may be unstable over time. Many students change their educational goals after that initial entrance period (Bailey, Jenkins, & Leinbach, 2005). But how many colleges monitor the student's revised educational goal and update their databases with the revised data? Students may eventually convert their goal of transfer to a goal of an associate's degree, and these decisions may stem more from personal circumstances (such as family obligations or job situations) than from any effect of the community college. But should we still count this student as part of the denominator if he or she no longer considers himself or herself part of the target population? We have to recognize that some student attitudes and interests have short lives.

Finally, let's imagine a scenario in which many high school students with very poor secondary school performance develop unrealistic expectations for a bachelor's degree because of a strong high school campaign (or any campaign by some community organization for that matter) or because of high school counseling. Let us further imagine that these very underprepared students register for classes at a local community college and they all choose the educational goal of transfer. That community college may see a "nosedive" in its transfer rate because its denominator now includes a set of students with a very low probability for the transfer outcome. If the campaign to promote bachelor's degrees among very underprepared students occurred outside of the control of the community college, then the community college will suffer the consequences of an outside entity's actions. (If the college actually sponsored this promotion, then it may rethink the wisdom of the campaign if the college were to suffer consequences from a depressed transfer rate.) This issue of unrealistic goals for college is not far-fetched in itself, as long as community colleges have relatively open admission policies (Rosenbaum, 2001).

Implications of Self-Reported Goal for the Denominator

So far, I have only discussed how student-reported educational goals may distort the denominator of the transfer rate. I often like to play a what-if game that assumes that someday we will obtain perfect information about student goals so that we can flag specific students who make up the denominator and should transfer (with all of the political implications of that word "should"). If a college becomes accountable for the number of students who declare themselves as transfer bound, then would it make sense for the government to fund each college adequately to support the mission of transferring a specific volume of declared transfer-bound students? In this line of thinking, community colleges should receive funding for transfer-related college workload that would match the number of students who signaled the goal of transfer. For example, for every 100 students with the transfer goal, the college should be funded to provide x number of transfer counselor hours, and so on. From an accountability perspective, adequate funding of an operation is assumed to exist; otherwise, the public is basically asking a college administration to achieve something that it cannot accomplish because it has inadequate control or resources. That is inequitable and very counterproductive.

There is some irony here. In which public sector programs would we see a target population defined by the simple questionnaire check-off (with large potential for response error or bias) made by a person? There are probably a few out there (but I could not think of one at this time). The point is that success rates of public programs conventionally use specific rules and conditions to qualify individuals as part of a target population. Rarely does a person's response to one survey question provide sufficient qualification by itself.

Answers?

I have raised quite a few questions here. So it's time for some answers. A behavioral signal of some duration for the student goal of transfer, rather than a possibly transient attitudinal signal, would allow researchers to create a reasonably accurate head count for the denominator. The completion of some amount of credit coursework, such as 12 units, would be one simple example of a behavioral signal. It essentially represents the student's level of commitment to transfer. We could enhance this basic behavioral signal by including another condition, such as a student's attempt at a transferable course in English or mathematics (Bahr, Hom, & Perry, 2005). The conditions could obviously multiply, but further enhancements may encounter missing data problems or offer us too small a gain in information in relation to the complexity they add. But regardless of the exact definition of this behavioral signal, completed units or courses predict transfer quite well, and such data avoid the shortcomings of student self-reporting (Hagedorn, Cypers, & Lester, 2008; Hagedorn & Kress, 2008).
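
As an illustration, here is a minimal sketch of such a decision rule in Python; the 12-unit threshold and the English or mathematics condition follow the examples above, while the function and field names are hypothetical:

```python
def in_transfer_denominator(units_completed: float,
                            attempted_transferable_english_or_math: bool,
                            enhanced_rule: bool = True) -> bool:
    """Behavioral-signal decision rule for the transfer-rate denominator."""
    meets_units = units_completed >= 12  # basic signal: 12 completed units
    if not enhanced_rule:
        return meets_units
    # Enhanced signal: also require an attempt at a transferable
    # English or mathematics course (Bahr, Hom, & Perry, 2005).
    return meets_units and attempted_transferable_english_or_math

# Hypothetical student records:
print(in_transfer_denominator(15.0, True))                      # True
print(in_transfer_denominator(15.0, False))                     # False under the enhanced rule
print(in_transfer_denominator(9.0, True, enhanced_rule=False))  # False: fewer than 12 units
```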

Some states and researchers use the behavioral signal method to count the denominator in their transfer rates (Boughan & Clagett, 2008; Chancellor's Office, California Community Colleges, 2008; Cohen & Sanchez, 1997; Roksa & Calcagno, 2008). Some researchers use the behavioral signal along with the self-reported goal to calculate a transfer rate (Moore & Shulock, 2007), and others report multiple transfer rates that show the variation related to course-taking behavior (Horn & Lew, 2007; Sengupta & Jepsen, 2006). The behavioral signal probably works well for counting the denominator for a state-level measure and for counting the denominators for the transfer rates of a set of community colleges (perhaps for between-college comparison). The qualifier of completed credits per student has appeal because it accurately identifies each student at each college who demonstrates the characteristics of someone in the target population. Completing credits (and perhaps attempting a college-level English or mathematics course) would effectively identify those students in a state who plan to transfer versus those students who do not plan to transfer. If different colleges collect the student-reported educational goal differently, posing the so-called instrumentation threat to validity (Campbell & Stanley, 1963), then the quality of the denominators will vary between colleges--making between-college analysis invalid.

The criticism of the behavioral signal for a target population largely seems to focus on the notion that this qualifier excludes some of the students who should transfer but do not have enough success in college to amass 12 credits. These critics argue that the behavioral signal method inflates community college transfer rates. However, the concept of should transfer has a multidimensional nature that complicates the design of a performance indicator (a rate of success) for the transfer function at a community college. Is should transfer a social ethic or community value in the sense of "ought to transfer"? In this case, do we judge who should transfer on the grounds of social justice? That is, is a member of an underrepresented minority automatically part of this target population? Or do we think any individual who has behaved conscientiously throughout his or her secondary education and exhibited desirable community values should transfer? These two versions of the should transfer concept reflect moral or social rules that people tend to consider from time to time.

And to take a macro viewpoint for the transfer mission, what about considering criteria such as how many slots are open for transfer students at public 4-year institutions and how many total baccalaureate degree completers a community or state needs? If a state's public 4-year colleges only have openings for 1,000 transfers in a given year, then that state should transfer only 1,000 students that year. What is the point of having any denominator in the case of limited capacity at public 4-year colleges? Here the numerator cannot increase no matter how much we increase the denominator. We are just measuring the growth in the deficit of 4-year college capacity (or the volume of "unprocessed" cases) for transfer as our transfer rate declines. In fact, some researchers would argue that the attention of policy makers really should include the functioning of the 4-year colleges when we think about transfer from community colleges, placing less emphasis on specific community college transfer rates in policy analysis (Ehrenberg & Smith, 2004; Long & Kurlaender, 2008; Roksa & Keith, 2008).

If our viewpoint happens to be that of the labor economist or the economic planner (Reed, 2008), then the number for should transfer becomes the expected need for baccalaureate degrees in future time periods--a perspective that, again, really depends critically on a volume rather than a rate. For the future labor supply viewpoint, we truly need a count of baccalaureate completers (an effectiveness measure) rather than a rate (an efficiency measure).

The Predictive Connotation

A different dimension of the should transfer concept is the predictive connotation. That is, which student has a high probability to achieve transfer? If we created a statistical model (or a mental rule of thumb for that matter) of who has a high probability of transferring, then the aggregate of those high-probability individuals would constitute our target population, our denominator. The problem we face with a high-probability rule is where to set the cutoff point for defining "high probability of transfer." Is it 50%? Is it 80%? What does society deem to be a proper cutoff point here? Who has the political desire and clout to set such a cutoff? In essence, the setting of a cutoff point presents us with a political and community judgment that must precede our formulation of a performance indicator in this context. In a way, this question is like "What is a good unemployment rate for this state?"
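
To make the cutoff idea concrete, here is a minimal sketch in Python, assuming we already have a predicted probability of transfer for each student; the probabilities and the 0.50 cutoff are hypothetical:

```python
# Hypothetical predicted probabilities of transfer for four students,
# e.g., fitted values from a logistic regression on historical records.
predicted = {"A": 0.85, "B": 0.55, "C": 0.30, "D": 0.10}

CUTOFF = 0.50  # the cutoff is a policy judgment, not a statistical one

denominator = [sid for sid, p in predicted.items() if p >= CUTOFF]
print(denominator)  # ['A', 'B'] -- the target population under this cutoff
```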

On the other hand, if the focus of should transfer is on efficiency in transfer (not social equity per se), then the should-transfer rule would place in the denominator only that group of students who have a high probability to transfer. This concept of an efficient target population receives full discussion in Schuck and Zeckhauser (2006). In this viewpoint, a community college should focus its scarce transfer-related resources on those students who have a reasonable chance of transfer and not on those students who have very little chance of transfer. Unless the public supports the mission of transferring students who have very low chances of transfer, it seems counterproductive to include such students in a performance indicator that is supposed to shape the administrative behavior of a community college. The efficiency argument supports the use of a behavioral signal to define the target population because students who have begun to demonstrate the characteristics that improve the chances for transfer may represent the most efficient use of public funds that were intended to promote overall transfer volumes. So, oddly enough, we have sort of come "full circle" in our discussion. While critics may argue that a behavioral signal method for defining the transfer-rate denominator would inflate the performance of a community college--possibly inflating its measurement of efficiency--the behavioral signal method may actually do a better job of improving a college's efficiency at transfer by focusing both the college's efforts and our evaluation of those efforts.

Back to the Heart Surgery Example

Perhaps the consideration of the benefits from accurate diagnosis for heart surgery can illustrate this efficiency concept, as it relates to the public good. If medical doctors have a good model about patient health, then they can predict which patients will have a high probability of good health as a result of heart surgery (not despite the heart surgery). Proper medical testing (part of the model of health) will indicate whether or not the patient has a heart problem and whether or not the patient on the whole will be better off going through heart surgery. Note that the success of heart surgery does not rest solely on the surgical skills of the surgeons in the operating room. If the patient had some other undetected life-threatening disease, the patient may not really benefit from heart surgery. In fact, such a patient may actually suffer a decline in health from heart surgery because hospital staff erred about the patient's overall health needs and situation. If a patient still suffers a decline in health despite or because of the heart surgery, we have created two "losses" so to speak. First, we did not cure the patient (and possibly may have harmed him or her), and second, we consumed the medical resources that we could have used productively for a properly diagnosed patient.

For community colleges, the incoming student parallels the patient in many important ways. We need to make the correct diagnosis of that student's educational "health" and future (their desired and expected end states). If community colleges misclassify a student as a transfer student (and students may themselves contribute to such misclassification), we may witness the two losses of failing the student and diverting some scarce resources away from students who could have benefitted. So the definition of our target population, our denominator, could benefit if it could correctly diagnose the students who are likely to succeed at transfer.

In addition, to test and improve our diagnostic techniques for student "treatments," we could use some common measures applied in health research such as sensitivity and specificity (Hunink et al., 2001; Selvin, 1996). In this effort we would use a classification table such as the one below in Figure 2 to evaluate the diagnostic quality of a specific indicator such as the self-reported educational goal. Cells "a" and "d" in Figure 2 indicate correct diagnoses of students. Cell "c" indicates the frequency of false negatives (i.e., omission of students from the target population when they should be included). Cell "b" indicates the frequency of false positives (i.e., inclusion of students for the target population when they should be excluded). Cells "c" and "b" reflect absolute measures of underestimation and overestimation, respectively. Sensitivity is a relative measure because it equals the ratio of a/(a+c). Specificity is also a relative measure because it equals the ratio of d/(b+d).

Both sensitivity and specificity can take values between 0 and 1, inclusive, so that both measures retain their interpretation across populations of different sizes. High values (at or close to 1) are preferable. A value of 1 for sensitivity means that the diagnostic technique has perfectly identified all of those in the population who truly need treatment (are likely to transfer). This ideal result will only occur if cell "c" equals 0 (or if cell "b" equals 0 in the case of specificity), but such a result will rarely occur in the real world. A diagnostic technique with high sensitivity (some value near 1) has high success in identifying students who are likely to transfer, and a diagnostic technique with high specificity (also near 1) has high success in identifying students who are unlikely to transfer. In general, a college would benefit from the use of a diagnostic technique that is both highly sensitive and highly specific in order to deliver counseling and other services efficiently.
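
A minimal sketch of these two measures in Python, using the cell labels from Figure 2 (the counts are hypothetical):

```python
def sensitivity(a: int, c: int) -> float:
    """a / (a + c): the share of truly transfer-bound students whom the
    diagnostic correctly admits to the target population."""
    return a / (a + c)

def specificity(b: int, d: int) -> float:
    """d / (b + d): the share of students who are truly not transfer
    bound whom the diagnostic correctly excludes."""
    return d / (b + d)

# Hypothetical cell counts from a Figure 2-style classification table:
# a and d are correct diagnoses; b is false positives; c is false negatives.
a, b, c, d = 400, 150, 100, 350
print(sensitivity(a, c))  # 0.8
print(specificity(b, d))  # 0.7
```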

Value From the Student-Reported Goal

The use of the behavioral signal as a decision rule for defining the target population does not necessarily mean that the student's own attitude has no place in this discussion. Some researchers (Leigh & Gill, 2003) seem to have effectively employed student-reported educational goal (two survey questions) in the data from the National Longitudinal Survey of Youth, a data source designed for rigorous research. Similarly, Bailey et al. (2005) used student intent data from the Beginning Postsecondary Students Longitudinal Study (another national sample survey designed for rigorous research) in their work. However, these national surveys have insufficient sample sizes for specific states to support state-level analyses.

At the state level of analysis, if researchers could determine that a survey response by students at their entry point to the community colleges can serve as a valid and stable indicator, over time and across institutions, of a true intent to transfer, then this self-reported signal may help us define the target population. However, even if this self-reported signal for state-level analysis were valid and stable, is this the policy direction we want to take? That is, should we evaluate the transfer performance of community colleges largely upon what a student wishes, regardless of how realistic those wishes may be (from the standpoint of how realistic an individual's chances are and from the standpoint of the resources a community college has to raise the chances of success for the very underprepared matriculant)?

[FIGURE 2 OMITTED]

The Treated Versus At-Risk Answer

Mohr (1995) explained the utility of two different ways of defining the target population. He described the broadest denominator that a public program may have to serve as the population at risk. A narrower denominator is the population treated (or served). The population at risk denotes a set of people who presumably would benefit from treatment or service. The population treated denotes a set of people whom a particular program has actually served. The population at risk will generally exceed the population treated because most programs will lack sufficient resources (or knowledge of who is at risk) to treat 100% of the population at risk. In a sense, the population at risk quantifies a level of funding that a program would need to meet 100% of a public need. On the other hand, if we use the population treated as a denominator and the number of positive outcomes among those treated as the numerator, we have a treatment effectiveness rate. The population treated is the appropriate denominator for evaluating a program's efficiency in the sense that we should not judge a program's efficiency on the basis of any outcomes for individuals who did not receive its treatment. If we replace population treated with the population at risk as our denominator (assuming that the numerator is a subset of the population at risk), then we essentially produce an indicator of a community's success rate at addressing a public need. But that rate (positive outcomes/population at risk) does not reflect the actual efficiency of the program because it now includes individuals whom the program did not treat. Green and Lewis (1986) offered a framework much like Mohr's explanation, but they applied it to efforts for health education and health promotion.

The connection to transfer rates comes about in this way. Students who meet the behavioral signal largely reflect the population treated. Students who accurately declare transfer as a goal may be assumed to largely reflect the population at risk (or those who need the transfer service). When we use the behavioral signal to define the population treated by community colleges, we basically provide a valid measure of efficiency (disregarding for a moment the 4-year college capacity factor). This is why a 12-unit condition, at a minimum, makes sense as a fair performance indicator of community college transfer. If a student takes fewer than 12 units at a community college, it seems appropriate to exclude that student from the population treated, considering that participation below 12 units would not constitute treatment or service rendered. When we compare the rate using the behavioral signal to the rate using the self-reported goal of transfer, we simply measure the coverage gap that community colleges experience; this coverage gap is analogous to a resource gap. That is, if the rate of success with a population at risk is at 0.20 while the success rate for the population treated is at 0.40, the implication is that society would benefit if we increased funding to cover the 60% (or 1.00 minus 0.40) that did not obtain the benefit. In this viewpoint, a treated versus at-risk dichotomy reconciles the utility of these two different transfer rates; they serve different purposes. The astute reader will note that an analysis could use any number of measurements aside from the student-reported goal to define a population at risk; the student-reported educational goal does not hold a monopoly on that role.
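
A small worked computation of the two rates in Python, under the simplifying assumption that all successful transfers occur within the treated group; the counts are hypothetical, chosen to reproduce the rates above:

```python
# Hypothetical counts for one cohort at one college.
population_at_risk = 1000   # e.g., students who declared a transfer goal
population_treated = 500    # e.g., students who completed 12 or more units
transfers = 200             # positive outcomes among the treated

rate_at_risk = transfers / population_at_risk    # 0.20: community-need view
rate_treated = transfers / population_treated    # 0.40: program-efficiency view
print(rate_at_risk, rate_treated)                # 0.2 0.4
```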

Rossi, Freeman, and Lipsey (1999) also described the population at risk as the group with a need for a specific service, but they added the concept of program demand (the set of individuals who desire a service or are willing to participate in a program). Rossi et al. made the important distinction between program demand and population at risk because these two populations will overlap but not be identical. In the case of transfer, the group that expresses the desire to pursue transfer is not precisely the same set of people who constitute the population in need (or at risk). Defining the program demand (i.e., with the student-reported transfer goal) as the same as the population at risk will tend to result in both false positives and false negatives (these details relate back to the table in Figure 2).

Visionary Stuff

The area of student attitudes and intentions deserves much more attention and analysis if community colleges are expected to do a better job in helping their students. Many analyses of community college transfer within a single state seem to neglect this aspect because they cannot afford the cost or time to collect attitudinal data from students. But omission of such data for policy analysis does not diminish the relevance of such hard-to-get data; such omissions tend to indicate decisions about study feasibility rather than judgments about the value of attitudinal data. Perhaps an ambitious doctoral candidate could get a very nice dissertation out of doing some field research on this topic. If a set of student surveys (by different doctoral students or different institutional researchers) at different community colleges were to take place, a subsequent meta-analysis (Cook, Cooper, & Cordray, 1994; Cooper & Hedges, 1994) could leverage these small-sample studies to estimate relationships of a broad nature (i.e., at a state level or a national level). Practically speaking, such research could help community college policy makers in areas beyond transfer, such as career-technical education and basic skills (developmental education).

Even a small, focused study on what students really intend to communicate with existing data collection instruments (even if the instrument is just one question) could help institutional researchers gauge how much weight they should allot to the self-report of educational goals (or if a much wider sample analysis would be useful). In this focused effort, a verbal protocol-cognitive process study (Beatty & Willis, 2007; Sudman, Bradburn, & Schwarz, 1996) of a sample of new students could really enlighten us. Study results could possibly even lead to improvements in the existing process for collecting data on student goals.

In the long run, one policy alternative may be the careful assessment of new entrants to each community college. This assessment would include a multiple indicator approach in which the matriculation office gathers and analyzes the motivational dimensions of the prospective student (with a well-designed, tested, and carefully administered survey instrument); the level of educational preparation of the prospective student (with secondary grades and transcripts and any test or assessment scores); and the economic circumstances of the prospective students (with an evaluation of their dependents, their costs, and their financial resources and income, among other things). The matriculation staff person (or some counselor) would synthesize these multiple indicators of the chances for transfer success to decide whether the prospective student should go into our target population of people who should transfer. If data for this student were to change, then the college would revisit its classification of the student. Perhaps some ingenious researcher could partially automate this process by creating a computer program that would help a college identify the prospective student as a member of the denominator. Call this process transfer opportunity diagnosis, if you will.
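
Since the discussion above imagines a computer program that partially automates this synthesis, a deliberately simplified sketch of what such a program might look like follows; every field, weight, and threshold here is a hypothetical placeholder rather than a validated instrument:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    motivation_score: float   # from a tested survey instrument, scaled 0-1
    preparation_score: float  # from secondary transcripts and assessments, scaled 0-1
    resource_score: float     # from economic circumstances, scaled 0-1

def transfer_opportunity_diagnosis(app: Applicant,
                                   weights=(0.4, 0.4, 0.2),
                                   threshold: float = 0.6) -> bool:
    """Synthesize multiple indicators into a provisional classification
    for the transfer target population. The weights and threshold are
    illustrative; a real tool would be validated empirically and the
    classification revisited as a student's data change. Demographic
    variables are deliberately excluded, as the text prescribes."""
    composite = (weights[0] * app.motivation_score
                 + weights[1] * app.preparation_score
                 + weights[2] * app.resource_score)
    return composite >= threshold

# A hypothetical new entrant:
print(transfer_opportunity_diagnosis(Applicant(0.8, 0.7, 0.5)))  # True (0.70 >= 0.6)
```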

Ideally, this transfer opportunity diagnosis would equip a counselor (funding permitting) with an objective tool to advise new students. The diagnosis would help new students make informed choices about their educational goals, promoting student self-directed behavior and commitment to a specific educational path and goal. The diagnosis would exclude the consideration of demographic variables because they are not factors that students (or anyone else) can modify or improve (whereas motivation, personal interests, attitudes, and levels of educational achievement are malleable). This alternative plan would simultaneously help incoming students and enable the college to accumulate valuable student-level data that researchers could mine or test for evaluation research--a capacity that many community colleges currently lack. Consequently, the plan could immediately benefit each student who completes the diagnosis, and analysis of the accumulated database would eventually benefit entire programs (or all students) at the college by making rigorous evaluations feasible.

Of course, some observers may see risk in this transfer opportunity diagnosis. The diagnosis conceivably may contribute to the tracking (or sorting) of students (Oakes, 1985) such that colleges may limit the potential of individual students for academic achievement and socioeconomic mobility. This effect would depend on the consequences of the diagnosis. If the diagnosis were to act as a mandatory, high-stakes test or hurdle for a student's academic future, then we would need to recognize any potential negative effects of tracking or sorting. Basically, if the diagnosis were to directly limit the educational opportunity of students, it would need to handle the potential for disparate impact. In such situations, colleges will probably need to prove the validity of the diagnostic process. If we only use the diagnosis as input for an institutional performance indicator or as part of an institutional needs assessment to support institutional funding and strategic planning, then we largely avoid the risks of tracking or sorting students and of disparate impact. If the diagnosis process has only an advisory effect (i.e., no administrative limitation on a student's ability to take a course for which he or she is otherwise eligible), then we also diminish the risks of tracking and disparate impact. Finally, if the transfer opportunity diagnosis occurs as a voluntary, low-stakes, academic advising program for students, students will generally have no incentive to invalidate the diagnosis (and degrade our student database) by falsely reporting personal attitudes to avoid a required educational path that they dislike.

The transfer opportunity diagnosis concept falls into the visionary category because it does not seem feasible in the near future. A policy to implement transfer opportunity diagnoses would require additional funding for the community colleges, a scenario that currently seems far-fetched to say the least. But if this tool materializes, we would have a very valid denominator to evaluate community college transfer performance (barring capacity issues at the public 4-year institutions) and a process that may enable community colleges to help students consummate their commitment to a chosen path.

Conclusion

Townsend (2002) put the above discussion about the denominator (our target population) in a very useful perspective for us: "To choose a particular numerator and denominator, policymakers, institutional leaders, and researchers need to clarify the purpose behind determining transfer rates." It follows that we should use the behavioral signal to define the target population if our purpose is to measure the efficiency in transfer at one or more community colleges. If our purpose is to see how much the community still needs in terms of a given service, then we can use a different indicator to define our denominator (such as a student's self-report of his or her goal) to estimate the proportion of the population at risk that still could use service.

Practically speaking, nothing really stops a researcher or a higher education agency or body from turning the existence of multiple definitions for the target population from a problem (i.e., the lack of consensus for one definition) into a useful approach to analysis. Researchers and policy makers could use multiple transfer rates to cover different populations. For example, Rossi et al. (1999) stated, "Estimates of target populations and their characteristics may be made at several levels of disaggregation and ... allow one to estimate the target populations that can be reached by tailoring a project to specific age cohorts" (p. 145). Although multiple target populations may increase the perceived complexity of performance in transfer within a set of community colleges and expand the size of reports (and the associated workload on institutional researchers) to the legislature, this could set aside the issue of "dueling denominators" while contributing to the design of interventions.

Strategically speaking, the concept of one universally accepted transfer rate is not all that necessary for meaningful research into transfer, especially if we focus on specific target populations in our communities and execute proper statistical analyses. For example, Surette (2001) examined the gender differences in transfer using data from the National Longitudinal Survey of Youth, and Bailey and Weininger (2002) tested for immigrant advantage or disadvantage in transfer. Each study produced useful findings for policy but neither study locked into a standard transfer rate definition (or emphasized the transfer rate for that matter). The meaningful results of these studies came from the statistical models that predicted transfer (and their delineation of statistically related factors)--not from the simple presentation of transfer rates by themselves. Furthermore, any analysis of transfer could advance our ability to improve educational achievement by measuring changes in transfer rates over time or across space (or across institutions) as long as (a) that one analysis had a consistent definition of the transfer rate within it, and (b) that one analysis used accurate measurements of hypothesized causal or moderating factors.

On the other hand, the pursuit of accountability by oversight bodies and administrators creates a demand for one universally accepted transfer rate. Accountability initiatives usually rely on standard (i.e., consistently formatted) performance indicators and the reporting of a few basic numbers. This largely stems from the need to communicate performance levels to lay audiences (voters and legislators) who have little time or technical background to appreciate a statistical model or multiple measures the way researchers do (Caplan, 1977). For the goal of accountability, the transfer rate and its denominator will continue to deserve the close attention of researchers and analysts so that a reported transfer rate will match the specific objective (community need or program efficiency) that policy makers want to address. These critical audiences will force researchers and institutions to produce a simple indicator and to use a meaningful target population (denominator) for that indicator.

DOI: 10.1177/0091552109348043

Author's Note

This article reflects only the personal opinion of the author and not necessarily the positions of any organization or public agency. The author acknowledges helpful comments from Alice van Ommeren, Research Program Specialist, and Catharine Liddicoat, Specialist: Information Systems & Analysis, both of the Chancellor's Office, California Community Colleges.

Declaration of Conflicting Interests

The author declared no potential conflicts of interests with respect to the authorship and/or publication of this article.

Funding

The author received no financial support for the research and/or authorship of this article.

References

Bahr, P. R., Hom, W., & Perry, P. (2005). College transfer performance: A methodology for equitable measurement and comparison. Journal of Applied Research in the Community College, 13, 73-87.

Bailey, T., Jenkins, D., & Leinbach, T. (2005, June). Is student success labeled institutional failure? (CCRC Working Paper No. 1). New York: Columbia University.

Bailey, T., & Weininger, E. B. (2002). Performance, graduation, and transfer of immigrants and natives in City University of New York community colleges. Educational Evaluation and Policy Analysis, 24, 359-377.

Banks, D. L. (1990). Why a consistent definition of transfer? An ERIC review. Community College Review, 18, 47-53.

Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71, 287-311.

Boughan, K. (2001). Closing the transfer data gap: Using National Student Clearinghouse data in community college outcomes research. Journal of Applied Research in the Community College, 8, 107-116.

Boughan, K., & Clagett, C. (2008). Degree progress measures for community colleges: Analyzing the Maryland model. Journal of Applied Research in the Community College, 15, 150-162.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Caplan, N. (1977). A minimal set of conditions necessary for the utilization of social science knowledge in policy formulation at the national level. In C. H. Weiss (Ed.), Using social research in public policy making (pp. 183-197). Lexington, MA: Lexington Books.

CCCApply Project Center. (n.d.). Retrieved June 9, 2009, from http://www.cccnext.net/cccapply/

Chancellor's Office, California Community Colleges. (2008, March). Focus on results: Accountability reporting for the community colleges. Retrieved March 22, 2009, from the Technology, Research and Information Systems Web site: http://www.cccco.edu/Portals/4/TRIS/research/ARCC/arcc_2008_final.pdf

Cohen, A. M. (2005). The future of transfer. Journal of Applied Research in the Community College, 12, 85-91.

Cohen, A. M., & Sanchez, J. R. (1997). The transfer rate: A model of consistency. Community College Journal, 68(2), 24-26.

Cook, T. D., Cooper, H., & Cordray, D. S. (1994). Meta-analysis for explanation: A casebook. New York: Russell Sage Foundation.

Cooper, H., & Hedges, L. V. (1994). The handbook of research synthesis. New York: Russell Sage Foundation.

Ehrenberg, R. G., & Smith, C. L. (2004). Analyzing the success of student transitions from 2- to 4-year institutions within a state. Economics of Education Review, 23, 11-28.

Esser, H. (1993). Response set: Habit, frame or rational choice? In D. Krebs & P. Schmidt (Eds.), New directions in attitude measurement (pp. 293-314). New York: Walter de Gruyter.

Green, L. W., & Lewis, F. M. (1986). Measurement and evaluation in health education and health promotion. Palo Alto, CA: Mayfield.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. New York: John Wiley.

Hagedorn, L. S., Cypers, S., & Lester, J. (2008). Looking in the review mirror: Factors affecting transfer for urban community college students. Community College Journal of Research and Practice, 32, 643-664.

Hagedorn, L. S., & Kress, A. M. (2008). Using transcripts in analyses: Directions and opportunities. In T. H. Bers (Ed.), Student tracking in the community college (New Directions for Community Colleges, Vol. 143, pp. 7-17). San Francisco: Jossey-Bass.

Horn, L., & Lew, S. (2007). California community college transfer rates: Who is counted makes a difference. Berkeley, CA: MPR Associates.

Hunink, M., Glasziou, P., Siegel, J., Weeks, J., Pliskin, J., Elstein, A., et al. (2001). Decision making in health and medicine: Integrating evidence and values. New York: Cambridge University Press.

Leigh, D. E., & Gill, A. M. (2003). Do community colleges really divert students from earning bachelor's degrees? Economics of Education Review, 22, 23-30.

Long, B. T., & Kurlaender, M. (2008). Do community colleges provide a viable pathway to a baccalaureate degree? (Working Paper No. 14367). Cambridge, MA: National Bureau of Economic Research.

Mohr, L. B. (1995). Impact analysis for program evaluation. Thousand Oaks, CA: SAGE.

Moore, C., & Shulock, N. (2007). Beyond the open door: Increasing student success in the California community colleges. Sacramento, CA: California State University. Retrieved August 11, 2009, from http://www.csus.edu/ihe/PDFs/R Beyond_Open_Door_08-07.pdf.

Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven, CT: Yale University Press.

Reed, D. (2008). California's future workforce: Will there be enough college graduates? San Francisco: Public Policy Institute of California.

Roksa, J., & Calcagno, J. C. (2008). Making the transition to four-year institutions: Academic preparation and transfer (CCRC Working Paper No. 13). New York: Community College Research Center. Retrieved March 22, 2009, from http://ccrc.tc.columbia.edu/ContentByType.asp?t=l

Roksa, J., & Keith, B. (2008). Credits, time, and attainment: Articulation policies and success after transfer. Educational Evaluation and Policy Analysis, 30, 236-254.

Rosenbaum, J. (2001). Beyond college for all. New York: Russell Sage Foundation.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach. Thousand Oaks, CA: SAGE.

Schoenecker, C., & Reeves, R. (2008). The National Student Clearinghouse: The largest current student tracking database. In T. H. Bers (Ed.), Student tracking in the community college (New Directions for Community Colleges, Vol. 143, pp. 47-57). San Francisco: Jossey-Bass.

Schuck, P. H., & Zeckhauser, R. J. (2006). Targeting in social programs. Washington, DC: Brookings Institution.

Selvin, S. (1996). Statistical analysis of epidemiologic data. New York: Oxford University Press.

Sengupta, R., & Jepsen, C. (2006, November). California's community college students. California counts: Demographic trends and profiles, 8(2). Retrieved August 11, 2009, from http://www.ppic.org/content/pubs/cacounts/CC_1106RSCC.pdf.

Spicer, S. L., & Armstrong, W. B. (1996). Transfer: The elusive denominator. In T. Rifkin (Ed.), Transfer and articulation: Improving policies to meet new needs (New Directions for Community Colleges, Vol. 96, pp. 45-54). San Francisco: Jossey-Bass.

Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco: Jossey-Bass.

Surette, B. (2001). Transfer from two-year to four-year college: An analysis of gender differences. Economics of Education Review, 20, 151-163.

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. New York: Cambridge University Press.

Townsend, B. K. (2002). Transfer rates: A problematic criterion for measuring the community college. In T. H. Bers & H. D. Calhoun (Eds.), Next steps for the community college (New Directions for Community Colleges, Vol. 117, pp. 13-23). San Francisco: Jossey-Bass.

Weisberg, H. F. (2005). The total survey error approach. Chicago: University of Chicago Press.

Willard C. Hom (1)

(1) Chancellor's Office, California Community Colleges, Sacramento

Bio

Willard C. Hom is the director of research and planning, Chancellor's Office, California Community Colleges. He is also the president of the National Community College Council for Research and Planning.
