Student Exchanges Hit Record High. According to the Open Doors Report on International Educational Exchange, the number of international students at U.S. colleges and universities and the number of American students studying abroad are at record highs. In 2012-13, 820,000 foreign students attended American higher ed institutions, an increase of 55,000 (7.2 percent) from the previous year. Chinese undergraduates exhibited the biggest increase, 26 percent, bringing the total number of Chinese students studying in the U.S. (undergraduates and graduates) to 235,000. In 2011-12 (the most recent year for which data are available), 283,000 American students went abroad for credit-bearing university courses, up 3.4 percent from the prior year. Among the institutions hosting the most international students, the UW ranked 14th in the country.
New Studies Cast Doubt on Effectiveness of State Performance-based Funding. Now that economies are recovering from the Great Recession, state legislators across the country have been hurrying to adopt systems that link state funding for higher education to student outcomes like degree production and completion rates. However, several research papers presented at the annual meeting of the Association for the Study of Higher Education question the effectiveness of these “performance-based funding” systems. See Inside Higher Ed for a summary of the findings.
College Completion Rates See Little Improvement. College-completion rates remained largely unchanged this year, according to the National Student Clearinghouse Research Center. Of the first-time students who entered college in fall 2007, 54.2 percent earned a degree or certificate within six years—up 0.1 percentage points from the 2006 cohort. In the public sector, completion rates rose by 1.3 percentage points for students who started at public four-years and by 1.1 percentage points for those who began at public two-years. Unlike the federal government’s college-completion measure, the center tracks part-time students and students who transfer to a different college, sector, or state. Only 22 percent of part-time students earned credentials within six years, compared with 76 percent of those enrolled full time. The research center will issue its full report next month.
University of Michigan’s Shared Services Strategy Faces Opposition. The University of Michigan is the latest campus to implement “shared services,” a cost-saving strategy that has academic departments rely on centralized staff, rather than department-level staffers. Theoretically, employees in the central pool could become more specialized, and thus more efficient, than departments’ jack-of-all-trades staff. Administrators at Michigan hoped to save $17 million by moving 275 staffers from their campus offices to a single building on the edge of town. However, not only are faculty and students speaking out in opposition, the plan is no longer expected to save nearly as much as once hoped and may barely break even in the short term. Read more at Inside Higher Ed.
On Monday, the U.S. Education Department (ED) began formal negotiations on the draft language of a proposed new “gainful employment” rule. The rule, originally published in 2011, was designed to enforce a requirement of the Higher Education Act that career education programs—non-degree programs at all colleges and most degree programs at for-profit colleges—“prepare students for gainful employment” in order to participate in federal student aid programs. The rule was meant to discourage these programs from misusing federal aid dollars and leaving students with debt burdens they are unable to repay. However, in 2012 a federal judge rejected major provisions of the rule, requiring that ED rethink its strategy.
Here’s a summary of the changes:
- The proposed rule applies to programs with as few as 10 students, whereas the old rule counted only career-focused programs with 30 or more students. Because of this change, ED estimates that the new rule could cover 11,359 programs at for-profit and nonprofit colleges—nearly twice as many as the old rule covered—and that 974 of those programs (9 percent) could fail to meet the proposed standards.
- The draft regulation omits loan-repayment as a criterion for federal student aid eligibility. The old rule severed federal aid to programs where too few students were repaying their loans or where graduates’ debt-to-earnings and debt-to-discretionary-income ratios were too high. The new rule removes the loan repayment standards, which the courts deemed “arbitrary and capricious,” and relies only on the latter two measures.
- Debt-to-earnings calculations would be based only on students who receive federal aid, rather than students who complete the program. The old calculations were based on all students who completed the program, whereas the proposed calculations are based on any students who receive federal student loans and Pell Grants, regardless of whether they complete the program. As the rule is designed to ensure that federal aid is used effectively, this seems a more appropriate approach.
- Schools would have fewer chances to improve their performance before losing federal aid eligibility. Under the previous rule, programs that failed the measures in 3 out of any 4 years would be ineligible for federal student aid. However, the new rule only lets programs fail in 2 out of any 3 years before they lose eligibility.
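To make the two remaining debt-measure tests concrete, here is a minimal sketch of how a program's eligibility check might work. The 12 percent and 30 percent thresholds reflect the limits published in the 2011 rule, but the function name, the poverty-line figure, and the sample numbers below are illustrative assumptions, not language from the proposed regulation.

```python
def passes_debt_tests(annual_loan_payment, annual_earnings, poverty_line=11_490):
    """Return True if a program passes either debt-burden test.

    Illustrative sketch: a program passes if typical annual loan payments
    are at most 12% of annual earnings OR at most 30% of discretionary
    income (earnings above 150% of the poverty line). The default
    poverty_line figure is an assumed placeholder.
    """
    debt_to_earnings = annual_loan_payment / annual_earnings
    discretionary_income = annual_earnings - 1.5 * poverty_line
    if discretionary_income <= 0:
        debt_to_discretionary = float("inf")  # no discretionary income to repay from
    else:
        debt_to_discretionary = annual_loan_payment / discretionary_income
    return debt_to_earnings <= 0.12 or debt_to_discretionary <= 0.30

# A program whose typical graduate pays $3,000/year on $40,000 of earnings:
print(passes_debt_tests(3_000, 40_000))  # 3000/40000 = 7.5% → True (passes)
```

Because the rule uses an OR between the two ratios, a program only fails when it misses both thresholds.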
For details, see a comparison of the two versions prepared by the Education Department. Please continue to follow our blog as well as the Federal Relations blog for updates on this topic.
Although there are many types of financial aid, it is typically awarded on the basis of either need or merit. Need-based aid is largely a result of a federal calculation and is somewhat predictable: to ensure access, students with more financial need receive more financial aid of various forms. And, although there is no universal definition of merit aid, it traditionally describes scholarship money used to attract top academic achievers. However, Kevin Carey, director of education policy at the New America Foundation, asserts in a recent commentary for The Chronicle that a significant portion of merit aid is actually used to attract “academically marginal students with wealthy parents.”
Carey cites evidence of this trend. A 2011 U.S. Department of Education study found that of the full-time students at four-year institutions who received “merit” aid in 2007-08, almost 20 percent had entered college with a combined SAT score of less than 700 and 45 percent had scored below 1000 (out of a possible 1600). The study also shows that although the percentage of private college students receiving need-based aid showed a slight decline from 1995 to 2007 (going from 43 to 42 percent), the proportion receiving “merit” aid nearly doubled during that time span (from 24 to 44 percent). At public universities, the percentage of students getting need-based aid increased from 13 to 16 percent, but the growth in merit aid outpaced it, going from 8 to 18 percent. Thankfully, as discussed in a previous post, a group of private-college presidents has been calling on its peers to limit the amount of financial aid awarded on criteria other than need.
The National Association of State Student Grant and Aid Programs’ (NASSGAP) Annual Survey Report on State-Sponsored Student Financial Aid and Brookings’ Beyond Need and Merit: Strengthening State Grant Programs provide corroborating evidence that merit aid is becoming more prevalent while need-based aid is diminishing. However, neither discusses the academic strength of the students receiving merit aid.
So why is this happening? If a college offers generous scholarships and financial aid packages to an affluent family, it may entice that family to choose the school. Even though the family’s son or daughter may be a low academic achiever with a decent chance of dropping out, it is still lucrative for the school to attract those students. Noel-Levitz, a higher ed consulting firm, revealed that one of its client colleges was able to generate over $10,000 more per low-achieving student than it could per top-achieving student.
Carey hopes that as taxpayers, the news media, and affiliates of universities become aware of this trend, their vigilance will keep institutions in check.
Last Wednesday, eight Democratic senators sent a letter to the U.S. Department of Education (ED) asking Education Secretary Arne Duncan to investigate strategies that some for-profit colleges allegedly use to falsely lower their cohort default rates (CDRs)—the rate at which student borrowers default on federal loans. Institutions with high CDRs can face penalties including a loss of eligibility for federal student aid programs.
The letter cites a recent Senate Committee report, which presents evidence that for-profits routinely use two tactics in particular to manipulate CDRs:
- “Encouraging or even harassing borrowers” into forbearances or deferments, which can delay default until after the period for which CDRs are typically reported; and
- Manipulating campus and program categorizations in a way that makes their default rates artificially low.
The senators argue that “for-profit schools should not be able to use administrative smoke and mirrors to circumvent regulations that protect students and taxpayers, and the department should take action to prevent these tactics.” Some for-profits have admitted to using such strategies to “manage” their CDRs, but they deny that doing so conflicts with their students’ best interests.
For-profits consistently average higher default rates than all other higher education sectors. Of the students who began repaying loans in 2009, 22.7 percent of those at for-profits defaulted within three years, compared with 11 percent of students at public institutions and 7.5 percent at private nonprofits. In contrast, the UW’s three-year CDR was an impressively low 3.1 percent.
Comparing for-profits’ two-year CDRs with newly reported three-year CDRs reveals a major, and potentially damning, discrepancy: fifty percent more students from for-profits defaulted in the three-year time frame than in the two-year time frame. The senators say this “raises serious questions about how widespread the use of such tactics may be across the sector.”
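A simple arithmetic sketch shows why the length of the tracking window matters, and how a forbearance that pushes a default past that window lowers the reported rate. The borrower counts below are invented for illustration; the official CDR calculation involves many additional rules.

```python
# Illustrative only: a CDR is the share of a cohort of borrowers entering
# repayment who default within the tracking window. Delaying a default past
# the window (e.g., via forbearance) removes it from the reported figure.

def cohort_default_rate(defaults_in_window, borrowers_entering_repayment):
    return defaults_in_window / borrowers_entering_repayment

borrowers = 1_000           # hypothetical cohort entering repayment
defaults_within_2yr = 150   # invented counts, mirroring the 50% gap
defaults_within_3yr = 225   # cited by the senators

print(f"{cohort_default_rate(defaults_within_2yr, borrowers):.1%}")  # 15.0%
print(f"{cohort_default_rate(defaults_within_3yr, borrowers):.1%}")  # 22.5%
```

Under the old two-year measure, the 75 borrowers who defaulted in year three would simply never appear in the reported rate.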
ED has yet to respond to the senators’ letter.
The Wisconsin Education Approval Board, which oversees all for-profit colleges located in the state and any online-learning programs offered to its residents, may require that those institutions achieve specific performance standards in order to operate within Wisconsin. Specifically, that board is proposing to require that at least 60 percent of a college’s students complete their studies within a certain time-frame and at least 60 percent of its graduates have jobs. Public universities and private nonprofit colleges are not under the board’s jurisdiction and would therefore be exempt from the requirements.
The board already collects and publishes data on its institutions. According to those reports, average completion rates fell from 82 to 59 percent over the last six years and the percentage of graduates who were employed during a given year dropped from 44 to 22 percent (in the same time frame).
The Chronicle reports that the board is basing its standards on what they believe “Wisconsin consumers would find ethical, responsible, and acceptable for institutions choosing to enroll them.” However, for-profit colleges have already submitted letters to the board arguing that the proposed standards are “arbitrary and should not be broadly applied to a diverse set of programs, which often enroll underserved populations.”
While the federal government’s “gainful employment” rule is similar to Wisconsin’s proposal, it is unusual to see a state attempt this type of regulatory system. Some states have increased their requirements for online and for-profit institutions—but Wisconsin’s proposal is especially aggressive. For-profits that wish to operate in Washington must receive authorization from the Washington Student Achievement Council, which considers institutions’ “financial stability, business practices, academic programs, and faculty qualifications”—but does not yet hold them to specific graduation or employment standards.
On Wednesday, Wisconsin’s board voted unanimously to postpone a final decision until a team made up of board members, representatives from colleges and universities, and state legislators can review the proposal more thoroughly. The team is scheduled to make recommendations to the board in June 2013.
Here is a quick look at some recent happenings in the world of higher education:
- The College Scorecard confuses students and lacks desired information, says a report released today by the Center for American Progress (CAP). The College Scorecard, which President Obama proposed last February, is an online tool to help students compare colleges’ costs, completion rates, average student-loan debt, and more. The CAP asked focus groups of college-bound high-school students for their opinions on the scorecard’s design, content, and overall effectiveness. Student responses indicated that they did not understand the scorecard’s purpose; they would like the ability to customize the scorecard according to their interests; they want more information on student-loan debt; and they would prefer seeing four-year graduation rates, rather than six-year rates. The CAP report includes recommendations for improving the readability and usability of not just the scorecard, but of government disclosures in general.
- The U.S. House of Representatives passed the STEM Jobs Act on Friday by a 245 to 139 vote. The bill would eliminate the “diversity visa program,” which currently distributes 55,000 visas per year to people from countries with low rates of immigration to the U.S. Those visas would instead go to foreign graduates from U.S. universities who earn advanced degrees in science, technology, engineering or mathematics (STEM). Proponents of the Republican-backed bill say it would keep “highly trained, in-demand” workers in the U.S., boosting the nation’s economy and preserving its global competitiveness. While the White House and most Democrats support the expansion of STEM visas, they oppose the bill’s attempt to eliminate the diversity visa program. Consequently, the measure is unlikely to pass the Democrat-controlled Senate.
- The overlapping agendas of Texas, Florida, and Wisconsin governors could signal a new Republican approach to higher education policy, says Inside Higher Ed. The three governors agree on cost-cutting strategies such as requiring some colleges to offer $10,000 bachelor’s degrees; limiting tuition increases at flagship institutions; linking institutions’ graduation rates to state appropriations; and letting performance indicators, such as student evaluations, determine faculty salaries. Although the governors’ proposed reforms appeal to some voters, “actions taken by all three have been sharply criticized not only by faculty members and higher education leaders in their states, but also by national leaders, who view the erosion of state funding and increased restrictions on what institutions can do as a breach of the traditional relationship between state lawmakers and public colleges and universities.”
A few weeks ago, the National Research Council’s Panel on Measuring Higher Education Productivity published its 192-page report on Improving Measurement of Productivity in Higher Education, marking the culmination of a three-year, $900,000 effort funded by the Lumina Foundation and involving 15 higher education policy experts nationwide.
In explaining the need for a new productivity measure, the Panel made several key observations:
It’s all about incentives: Institutional behavior is dynamic and directly related to the incentives embedded within measurement systems. As such, policymakers must ensure that the incentives in the measurement system genuinely support the behaviors that society wants from higher education institutions and are structured so that measured performance is the result of authentic success rather than manipulative behaviors.
Costs and productivity are two different issues: Focusing on reducing the cost of credit hours or credentials invites the obvious solutions: substitute cheap teachers for expensive ones, increase class sizes, and eliminate departments that serve small numbers of students unless they somehow offset their costs. In contrast, focusing on productivity assesses whether changes in strategy are producing more quality-adjusted output (credit hours or credentials) per quality-adjusted unit of input (faculty, equipment, laboratory space, etc.).
Using accountability measures without context is akin to reading a legend without looking at the map: Different types of institutions have different objectives, so the productivity of a research university cannot be compared to that of a liberal arts or community college, not least because they serve very different student populations with different abilities, goals, and aspirations. The panel notes that among the most important contextual variables to control for when comparing productivity measures are institutional selectivity, program mix, size, and student demographics.
The Panel also contributed a thorough documentation of the difficulties involved in defining productivity in higher education. From time to time, it is helpful to remind ourselves that, while it may be “possible to count and assign value to goods such as cars and carrots because they are tangible and sold in markets, it is harder to tabulate abstractions like knowledge and health because they are neither tangible nor sold in markets.” The diversity of outputs institutions produce, the myriad inputs used in their activities, quality change over time, and quality variation across institutions and systems all contribute to the complexity of the task.
Despite these difficulties, the Panel concluded that the higher education policy arena would be better served if it used a measure of productivity whose limitations were clearly documented than if it used no measure of productivity at all. It proposed a basic productivity metric measuring the instructional activities of a college or university: a simple ratio of outputs over inputs for a given period. Its preferred measure of output was the sum of credit hours produced, adjusted to reflect the added value that credit hours gain when they form a completed degree. Its measure of input was a combination of labor (faculty, staff) and non-labor (buildings and grounds, materials, and supplies) factors of production used for instruction, adjusted to allow for comparability. The Panel was careful to link all components of its formula to readily available data published in the Integrated Postsecondary Education Data System (IPEDS) so that its suggested measure may easily be calculated and used. It also specified how improvements to the IPEDS data structure might help produce more complete productivity measures.
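The Panel's basic metric can be sketched as a simple ratio of adjusted outputs to adjusted inputs. In the sketch below, the size of the "sheepskin" bonus per completed degree and the average cost used to convert non-labor dollars into labor-equivalent units are placeholder assumptions, not the report's calibrated values.

```python
# A hedged sketch of the Panel's basic instructional productivity ratio:
# output is credit hours plus a bonus for completed degrees (reflecting the
# added value of a finished credential); input combines labor FTE with
# non-labor spending converted into FTE-equivalents. Parameter values are
# illustrative assumptions only.

def instructional_productivity(credit_hours, degrees_completed,
                               labor_fte, nonlabor_spending,
                               sheepskin_bonus=30, avg_fte_cost=60_000):
    # Output: each completed degree is credited extra "bonus" credit hours.
    adjusted_output = credit_hours + sheepskin_bonus * degrees_completed
    # Input: non-labor dollars are expressed as labor-FTE-equivalents.
    adjusted_input = labor_fte + nonlabor_spending / avg_fte_cost
    return adjusted_output / adjusted_input

# Hypothetical campus: 500,000 credit hours, 8,000 degrees, 1,200 FTE of
# labor, and $30M in non-labor instructional spending.
ratio = instructional_productivity(500_000, 8_000, 1_200, 30_000_000)
print(round(ratio, 1))  # adjusted credit hours per FTE-equivalent → 435.3
```

Because every term maps to a quantity reported in IPEDS, a ratio like this could in principle be computed for any institution from published data, which is exactly the portability the Panel was after.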
The key limitation in the Panel’s proposal – fully acknowledged in the report – is that it does not account for the quality of inputs or outputs. As the Panel notes, when attention is overwhelmingly focused on quantitative metrics, there is a high risk that a numeric goal will be pursued at the expense of quality. There is also a risk that quantitative metrics will be compared across institutions without paying heed to differences in the quality of input or output. The report summarizes some of the work that has been done to help track quality, but concludes that the state of research is not advanced enough to allow any quality weighting factors to be included in its productivity formula.
While readers may lament the Panel’s relegation of measures of quality to further research, especially given the time and resources invested in its effort, the report remains a very useful tool in understanding the issues involved in assessing productivity in higher education and provides valuable food for thought for policymakers and administrators alike.