On Monday, The Equity Line posted the following piece about how the U.S. compares to the other World Cup countries in terms of degree attainment.
More Than Just a Game: Degree Attainment Around the World (Cup)
Posted on June 16, 2014 by Kaylé Barnes and Joseph Yeado
“Defying commentators, critics, and prognosticators, the U.S. has already performed quite well against the other nations competing for the 2014 World Cup. Yes, the competition on the field only started last Thursday and the Yanks have yet to kick things off today, but the U.S. is beating most of the competition in another competition: college attainment.
Among the 32 teams competing in Brazil, the United States ranks third for the percentage of adults with a 2-year or 4-year college degree.
It may look like America has trounced the competition, but there are two important facts that put these figures into perspective.
In 1990 the United States soccer team qualified for its first World Cup after a 40-year drought. Though it failed to win a game and was sent home, the U.S. was ranked first in the world in four-year degree attainment among young adults. Since that time, our men’s national soccer team has steadily improved, but our college attainment rates have not. The United States now ranks 11th among developed nations for young adults with college degrees.
The U.S. may compare favorably to other World Cup countries, but the data still mean that only 2 in 5 adults have some kind of a college degree. In fact, just 59 percent of students at a 4-year college will earn a bachelor’s degree in six years – not to mention that black and Latino students complete at even lower rates (40 percent and 52 percent, respectively). Ranking well relative to other countries doesn’t mean much when we are leaving so many of our students behind.
Third place is not good enough. More important to our country’s well-being than winning the World Cup is whether we have an educated population prepared to face the challenges of the new global economy. Higher education leaders and policymakers should look to the example of the colleges and universities across the country that are leading the way to improve student success and proving that low graduation rates are not inevitable.
The expectations of American soccer supporters have risen steadily since 1990, and millions are tuning in to watch our boys play in Brazil. It’s time that we raise our expectations for college attainment and for equity in attainment levels.
Only then can the United States realize its gooooooaaaaals of being first in the world on the fútbol pitch and in degrees.”
As The Seattle Times reported today, “For the first time ever, three Washington colleges swept the nation in their respective size categories for having the most participants in the Peace Corps.” The UW topped the large-schools list with 107 volunteers (tied with the University of Florida), Western Washington University led the medium-schools list with 73 volunteers, and Gonzaga University came in first on the small-schools list with 24.
Carrie Hessler-Radelet, acting director of the Peace Corps, traveled from D.C. to attend a news conference here on campus today. She personally congratulated the three schools on their rankings and said the achievement reflects Washington’s dedication to innovation and helping the poor.
The UW also topped the large-schools list between 2007 and 2010.
In 2009, the National Research Council received a request from Congress for a “report that examines the health and competitiveness of America’s research universities vis-à-vis their counterparts elsewhere in the world”.
Responding to the request, the NRC assembled a 22-member panel of university and business leaders and charged it with identifying the “top ten actions that Congress, the federal government, state governments, research universities, and others could take to assure the ability of the American research university to maintain the excellence in research and doctoral education needed to help the United States compete, prosper, and achieve national goals for health, energy, the environment, and security in the global community of the 21st century”.
The panel released its final report last week under the title Research Universities and the Future of America: Ten Breakthrough Actions Vital to Our Nation’s Prosperity and Security. The following were the strongest themes:
- State and federal governments must increase their investment in research universities, allow these institutions more autonomy and agility, and reduce their regulatory burden: The panel identified the state and federal governments as the key actors in the strategy it proposed; indeed, seven of its ten recommendations were primarily aimed at them. In one of its more ambitious statements, the panel recommended that states should strive to restore and maintain per-student funding for higher education to the mean level for the 15-year period 1987-2002, adjusted for inflation. In Washington, this translates into recommending a per-FTE funding increase of between 70% and 80%. The panel acknowledged that this could be difficult to implement in the near term given current state budget challenges and shifting state priorities, but nevertheless stressed that “any loss of world-class quality for America’s public research institutions seriously damages national prosperity, security, and quality of life.”
- Strengthen the role of business and industry in the research partnership: The panel recommended that tax incentives be put in place to encourage businesses to invest in partnerships with universities both to produce new research and to define new graduate degree programs. It also encouraged business leaders and philanthropists to help increase the participation and success of women and underrepresented minorities in science, technology, engineering and mathematics (STEM).
- Research universities should strive to increase their cost-effectiveness and productivity: The panel recommended that universities should “strive to contain the cost escalation of all ongoing activities […] to the inflation rate or lower through improved efficiency and productivity”. However, it made no mention of the difficulties raised in the previous NRC report on productivity concerning the impact of cost-reduction measures on quality.
The panel’s recommendations are not novel: they have already been made by multiple parties in the higher education sector over the last few years. However, given the weight of the signatures on the report, this document may prove useful in raising the profile of higher education in upcoming budget battles both at the state and federal level.
A few weeks ago, the National Research Council’s Panel on Measuring Higher Education Productivity published its 192-page report on Improving Measurement of Productivity in Higher Education, marking the culmination of a three-year, $900,000 effort funded by the Lumina Foundation and involving 15 higher education policy experts nationwide.
In explaining the need for a new productivity measure, the Panel made several key observations:
It’s all about incentives: Institutional behavior is dynamic and directly related to the incentives embedded within measurement systems. As such, policymakers must ensure that the incentives in the measurement system genuinely support the behaviors that society wants from higher education institutions and are structured so that measured performance is the result of authentic success rather than manipulative behaviors.
Costs and productivity are two different issues: Focusing on reducing the cost of credit hours or credentials invites the obvious solutions: substitute cheap teachers for expensive ones, increase class sizes, and eliminate departments that serve small numbers of students unless they somehow offset their costs. In contrast, focusing on productivity assesses whether changes in strategy are producing more quality-adjusted output (credit hours or credentials) per quality-adjusted unit of input (faculty, equipment, laboratory space, etc.).
Using accountability measures without context is akin to reading a legend without looking at the map: Different types of institutions have different objectives, so the productivity of a research university cannot be compared to that of a liberal arts college or a community college, not least because they serve very different student populations who have different abilities, goals, and aspirations. The panel notes that among the most important contextual variables that must be controlled for when comparing productivity measures are institutional selectivity, program mix, size, and student demographics.
The Panel also contributed a thorough documentation of the difficulties involved in defining productivity in higher education. From time to time, it is helpful to remind ourselves that, while it may be “possible to count and assign value to goods such as cars and carrots because they are tangible and sold in markets, it is harder to tabulate abstractions like knowledge and health because they are neither tangible nor sold in markets”. The diversity of outputs produced by institutions, the myriad inputs used in their activities, quality change over time, and quality variation across institutions and systems all contribute to the complexity of the task.
Despite these difficulties, the Panel concluded that the higher education policy arena would be better served if it used a measure of productivity whose limitations were clearly documented than if it used no measure of productivity at all. It proposed a basic productivity metric measuring the instructional activities of a college or university: a simple ratio of outputs over inputs for a given period. Its preferred measure of output was the sum of credit hours produced, adjusted to reflect the added value that credit hours gain when they form a completed degree. Its measure of input was a combination of labor (faculty, staff) and non-labor (buildings and grounds, materials, and supplies) factors of production used for instruction, adjusted to allow for comparability. The Panel was careful to link all components of its formula to readily available data published in the Integrated Postsecondary Education Data System (IPEDS) so that its suggested measure may easily be calculated and used. It also specified how improvements to the IPEDS data structure might help produce more complete productivity measures.
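The shape of the Panel’s basic metric can be illustrated with a short sketch. Everything below is hypothetical: the function name, the input figures, and in particular the degree-completion bonus weight are illustrative assumptions, not values or field names taken from the report or from IPEDS.

```python
# Hypothetical sketch of a basic instructional productivity ratio in the
# spirit of the Panel's proposal: output is credit hours, with each
# completed degree given extra credit-hour value; input is labor plus
# non-labor instructional resources, here expressed in comparable
# full-time-equivalent (FTE) units. All numbers and the bonus weight
# are illustrative assumptions, not figures from the report.

def instructional_productivity(credit_hours, degrees_completed,
                               labor_fte, nonlabor_fte_equiv,
                               degree_bonus_hours=30):
    """Ratio of degree-adjusted output to combined input for one period.

    degree_bonus_hours is the extra credit-hour value assigned to each
    completed degree -- an assumed adjustment parameter, not one the
    Panel prescribes.
    """
    adjusted_output = credit_hours + degrees_completed * degree_bonus_hours
    total_input = labor_fte + nonlabor_fte_equiv
    return adjusted_output / total_input

# Two hypothetical institutions with identical inputs: B produces fewer
# raw credit hours but more completions, so its adjusted ratio is higher.
inst_a = instructional_productivity(credit_hours=300_000, degrees_completed=5_000,
                                    labor_fte=900, nonlabor_fte_equiv=300)
inst_b = instructional_productivity(credit_hours=280_000, degrees_completed=6_500,
                                    labor_fte=900, nonlabor_fte_equiv=300)
```

The sketch shows why the degree adjustment matters: without it, an institution could raise measured output simply by accumulating credit hours that never lead to a credential.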
The key limitation in the Panel’s proposal – fully acknowledged in the report – is that it does not account for the quality of inputs or outputs. As the Panel notes, when attention is overwhelmingly focused on quantitative metrics, there is a high risk that a numeric goal will be pursued at the expense of quality. There is also a risk that quantitative metrics will be compared across institutions without paying heed to differences in the quality of input or output. The report summarizes some of the work that has been done to help track quality, but concludes that the state of research is not advanced enough to allow any quality weighting factors to be included in its productivity formula.
While readers may lament the Panel’s relegation of measures of quality to further research, especially given the time and resources invested in its effort, the report remains a very useful tool in understanding the issues involved in assessing productivity in higher education and provides valuable food for thought for policymakers and administrators alike.