Federal policymakers and accrediting bodies are devoting increased attention to one of the primary indicators of effective education: graduation rates. Because graduation is an outcome, options for managing it are limited and generally after the fact. The effective management of graduation rates requires managing the processes that contribute incrementally to retention. To think otherwise is to believe in the possibility of managing effects rather than causes.
A growing number of companies are offering tools to improve retention. Many of these companies offer technology solutions based on predictive analytics. While specific methods differ, the goal is to assign a crude coefficient representing each student's probability of persisting. This is not our approach.
It's About the Relationship
The human processes that go into persistence and drops form a complicated mosaic. Big data analysis and predictive modeling have done little to illuminate these processes and, on balance, have distracted our attention with a black box that tags winners and losers in the game of higher education. As measurement scientists who have too much fun running regressions and neural networks, we have no objection to statistically driven black boxes. Our objection centers on the fact that such modeling ignores the dynamics of retention in favor of probabilistic labeling. While we think it is important to make statistical assessments of the risk associated with student attributes and behaviors, and to use that information to shape interventions, the utility of this approach stops there. Other than by chance, no predictive system will get your staff in front of an at-risk student at the exact moment an intervention is required to retain the student.

There is a way, however, and technology can help. Because only the student knows when he or she has begun to think seriously about dropping out, the institution must have established a relationship and a structure by which the student can reach out for assistance and feel comfortable doing so. Email exhortations and warm greetings from the president will not accomplish this. Students reach out, and retention follows, only when they have an authentic relationship with a retention counselor. This presumes, of course, that you have developed genuinely helpful services to offer the student who reaches out.
The following process measures can help managers ensure that retention staff members are doing the things required to establish and maintain authentic relationships with their students. These relationships will improve performance on the incremental retention metrics summarized below, as well as on the final retention-to-graduation outcome measure.
Average Outbound Contact Frequency
Retention counselor activity in support of relationships with students should be measured as an average daily volume of outbound contact attempts. These attempts can occur in any medium (phone, email, text) and must be directed at the students assigned to that counselor. For each institution, technology environment, and program, there is an appropriate target number.
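As a minimal sketch of how this average might be computed from a contact log, assuming hypothetical record fields (counselor, student, date, medium) that would vary with your student information system:

```python
from collections import defaultdict
from datetime import date

# Hypothetical outbound contact-attempt records; field names are illustrative.
contact_attempts = [
    {"counselor": "A. Rivera", "student": 1001, "date": date(2024, 9, 3), "medium": "phone"},
    {"counselor": "A. Rivera", "student": 1002, "date": date(2024, 9, 3), "medium": "email"},
    {"counselor": "A. Rivera", "student": 1003, "date": date(2024, 9, 4), "medium": "text"},
    {"counselor": "B. Chen",   "student": 2001, "date": date(2024, 9, 3), "medium": "phone"},
]

def average_daily_outbound(attempts, working_days):
    """Average outbound contact attempts per working day, by counselor."""
    totals = defaultdict(int)
    for attempt in attempts:
        totals[attempt["counselor"]] += 1
    return {counselor: count / working_days for counselor, count in totals.items()}

# For example, over a 20-working-day month:
print(average_daily_outbound(contact_attempts, working_days=20))
```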
Counselor Name Recognition
This metric is the single best proxy measure for the effectiveness of these relationships. On a regular schedule, researchers (ideally independent ones) call a random sample of students and ask whether they know the name of their retention counselor. Knowing the name (or knowing exactly how to find it) counts as a positive result; having the counselor's contact information in their smartphone is a doubly positive finding. This metric should be tracked and reported monthly and should reach plateau benchmarks appropriate for your context.
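A rough sketch of how the monthly recognition rate might be tallied from the calling results follows; the three response codes are our assumption, not a prescribed survey instrument:

```python
# Hypothetical coded responses from a monthly random-sample calling effort.
responses = [
    "knows_name", "knows_name", "knows_where_to_find",
    "does_not_know", "knows_name", "does_not_know",
]

# Knowing the name, or knowing exactly where to find it, counts as positive.
positive = {"knows_name", "knows_where_to_find"}
recognition_rate = sum(r in positive for r in responses) / len(responses)

print(f"Counselor name recognition: {recognition_rate:.0%}")
```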
Initial Experience Assessment
Schools that are focused on retaining students are attuned to their students' goals and expectations. Expectation management is an important service provided by good enrollment and retention professionals. The lack of it is characteristic of substandard professionals in these roles. Schools should systematically seek feedback from students during their first term on a variety of topics including how the school handled the transition from prospect to student, how their early experience matched or did not match their expectations, and how their early engagement with their retention counselors has progressed.
Completed Goal-Setting Meetings
Schools that organize their support services around students talk to their students about their educational goals. They capture this information near the beginning of the student's tenure and update it annually, checking in to determine whether the student believes he or she is progressing towards those goals. One metric for gauging whether counselors are having these deeper discussions is the number of completed goal-setting meetings during the term, measured against a standard based on workload and assigned student count.
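If a standard of one completed goal-setting meeting per assigned student per term were adopted (an assumption for illustration, not a prescribed workload rule), the comparison might look like this sketch:

```python
# Hypothetical per-counselor figures for the term.
assigned_students = {"A. Rivera": 120, "B. Chen": 95}
completed_meetings = {"A. Rivera": 98, "B. Chen": 91}

# Assumed standard: one goal-setting meeting per assigned student per term.
for counselor, assigned in assigned_students.items():
    done = completed_meetings.get(counselor, 0)
    print(f"{counselor}: {done}/{assigned} meetings ({done / assigned:.0%} of standard)")
```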
Resolved & Open Cases
Implementing a case management approach to student issues can be a powerful tool for retention counselors. It gathers information about the types of issues students confront while helping counselors organize their days and prioritize actions. The cases become the home for interaction records related to each issue so that the solution can be reconstructed later. Metrics for managing such a system would focus on the percentage of total cases successfully resolved, the number of cases currently open, and the average time to resolution. It is important to note that a case management system also aggregates the issues confronted by students and staff; periodic review of this data may suggest needed policy or training changes.
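The three case metrics could be derived from a simple case table, as in the following sketch; the field names and the definition of a resolved case are assumptions rather than a reference implementation:

```python
from datetime import date

# Hypothetical case records from a student-issue case management system.
cases = [
    {"opened": date(2024, 9, 2),  "closed": date(2024, 9, 6),  "resolved": True},
    {"opened": date(2024, 9, 5),  "closed": date(2024, 9, 19), "resolved": True},
    {"opened": date(2024, 9, 10), "closed": None,              "resolved": False},
]

resolved = [c for c in cases if c["resolved"]]
open_cases = [c for c in cases if c["closed"] is None]

pct_resolved = len(resolved) / len(cases)
avg_days_to_resolve = sum((c["closed"] - c["opened"]).days for c in resolved) / len(resolved)

print(f"Resolved: {pct_resolved:.0%}, open: {len(open_cases)}, "
      f"average days to resolution: {avg_days_to_resolve:.1f}")
```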
Dropped Student Interviews
Finally, effective schools gather information from students who leave in order to understand how their retention services failed. This information gathering is best done via a directed telephone interview rather than an online or telephone survey. This information is collected in real time and collated and reported periodically.
Incremental Retention Metrics
Schools effective at managing their retention and graduation rates focus on cohort retention rates. This means that every student is assigned to a start cohort based on the term he or she started attending the school. Cohort membership never changes, and other key factors in the retention calculation, such as transfer status, credits at entry, full-time attendance, and program, become categories of analysis for the start cohort. Students enter the start cohort when they persist past the 100% refund date in their first term. These schools then track and analyze how these students move through their degree programs. Important incremental progress points in this journey become the key measures of retention.
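One way to picture the bookkeeping is the sketch below, which admits a student to an immutable start cohort only after he or she persists past the 100% refund date of the first term and carries the other attributes along as analysis categories; the record layout is an assumption for illustration:

```python
# Hypothetical student records; a student enters the start cohort only after
# persisting past the 100% refund date of his or her first term.
students = [
    {"id": 1001, "first_term": "2024FA", "past_refund_date": True,
     "transfer": False, "full_time": True,  "program": "BSN"},
    {"id": 1002, "first_term": "2024FA", "past_refund_date": False,
     "transfer": True,  "full_time": False, "program": "BBA"},
]

# Cohort membership is fixed at entry; the other attributes become
# categories of analysis, never reasons to move a student between cohorts.
start_cohorts = {}
for s in students:
    if s["past_refund_date"]:
        start_cohorts.setdefault(s["first_term"], []).append(s)

print({term: len(members) for term, members in start_cohorts.items()})
```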
The devil is in the details when it comes to defining and managing the evolution of cohorts. Bad decisions in setting up cohorts, or in managing their migration through drops, adds, and program switches, can produce findings that are useless or even misleading.
Cohort Persistence to Second Term
This measures the percentage of start-cohort students who persist past the 100% refund date in their second consecutive term. Note that some students will stop out and return for a second term later, but this metric focuses on consecutive advancement. Again, important details must be managed to make this metric useful.
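Continuing the cohort sketch above, the rate might be computed as the share of start-cohort members flagged as persisting past the 100% refund date of the immediately following term; the field name is again an assumption:

```python
# Hypothetical start-cohort records with a flag for persisting past the
# 100% refund date of the second consecutive term.
cohort_2024fa = [
    {"id": 1001, "persisted_term2": True},
    {"id": 1003, "persisted_term2": True},
    {"id": 1004, "persisted_term2": False},  # stop-out; may return in a later term
]

rate = sum(s["persisted_term2"] for s in cohort_2024fa) / len(cohort_2024fa)
print(f"Cohort persistence to second term: {rate:.0%}")
```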
Cohort Persistence to Break-even
In order to use this metric, schools need to know their financial break-even point: the number of courses or credits a student must complete before tuition revenue offsets the cost of enrollment. This is an important measurement point because, until the student reaches it, the school would have been better off financially (and so, likely, would the student) had it declined to enroll the student and paid him to go elsewhere.
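As a simplified illustration of the break-even arithmetic (the dollar figures are invented, and a real model would reflect your institution's cost structure):

```python
import math

# Hypothetical per-student figures.
cost_to_enroll = 2400.0         # recruiting, onboarding, and fixed service costs
net_revenue_per_credit = 300.0  # tuition revenue net of direct instructional cost

# Break-even: credits a student must complete before cumulative net revenue
# covers the cost of enrolling him or her.
break_even_credits = math.ceil(cost_to_enroll / net_revenue_per_credit)
print(f"Break-even at {break_even_credits} credits")  # 8 credits in this example
```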
Cohort Persistence to Second Year
Many schools already measure this rate as fall-to-fall retention. While there is nothing magical about persisting for one year, it is a popular benchmark that allows comparison across institutions, and the fact that the student has likely renewed his or her financial aid successfully is a good indicator of continuing persistence. Analyzing the overall persistence pattern of a variety of start cohorts as they advance towards graduation will suggest other meaningful timeframes to track as metrics. These waypoints vary by program type and length, as well as by other institutional nuances.
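Looking across several start cohorts, even a simple persistence curve like the sketch below can show where attrition concentrates and therefore which waypoints deserve their own metrics; the term-by-term counts are invented:

```python
# Hypothetical counts of students still enrolled at the start of each term,
# by start cohort.
cohort_counts = {
    "2023FA": [500, 410, 360, 330, 310],
    "2024FA": [520, 440, 395],
}

# Percent of the original cohort persisting to each successive term.
for cohort, counts in cohort_counts.items():
    curve = [round(100 * n / counts[0]) for n in counts]
    print(cohort, curve)  # e.g. 2023FA -> [100, 82, 72, 66, 62]
```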
Conclusion
We view federal, regional, and state attention to outcomes, particularly graduation rates, as a positive change. Unfortunately, the federal definition of retention is flawed by its reliance on first-time, full-time students, a group that represents an already small and still shrinking proportion of most college populations. Moreover, federal policymakers especially need to understand that retention and graduation are events that take their meaning in programs within institutions, not in institutions themselves. Retention can only be managed effectively at the program level. The incremental, cohort-based process measures broadly outlined here provide metrics that managers can influence, and the process metrics tied to creating relationships with students empower managers to take control of this vital process.
When we have worked with schools to implement and manage a metrics system focused on student retention, we have found widely varying conditions related to staffing, technological support, and student culture. These variables resist simple solutions and automated approaches. Retaining students through authentic relationships requires focused work from staff and managers.