It may have a great reputation - shame about the education

Higher fees should reflect an institution's quality, rather than status, so we should start measuring it, argues Graham Gibbs

September 30, 2010

One way or another, universities will be allowed to charge students more. The remaining political issue is how to decide how much each university can charge, in a way that reflects some notion of "value for money".

One approach to this is to allow reputation to determine everything. If an institution has the best reputation it will attract the best students, who will achieve the best degree results. Employers will be attracted by the reputation and students' career earnings will reflect their market value.

This model operates in the US, where the inevitable cycle of growing reputation for those at the top has led to very substantial tuition fees for institutions with recognisable brand names, and to league tables based almost entirely on reputation. However, it causes huge problems for the system. Institutions focus on trying to improve their reputation, for example by investing in research, instead of trying to improve their educational quality. This would not matter if they were the same thing - but the evidence shows that they are largely unrelated. If we are to have a funding system based on differential fees then we must have indicators of quality that have some validity, in that they tell us about educational effectiveness. This will motivate institutions to improve their effectiveness to justify higher fees.

Input variables, such as resources, do not predict outcomes, such as degree results and employability, as much as you would expect. And the modest extent to which input variables do predict outcomes results largely from reputation. Input variables are even worse at predicting educational gains - the difference between students at the start and on graduation - than they are at predicting outcomes. Outcome measures such as employability tell us little about the institution, other than about its reputation and the quality of students it can attract, and so outcome measures are also not as helpful as one might hope as indicators of quality. What can predict educational gains are process measures - what institutions do with whatever students they have, using whatever resources are available.

Thirty years of research has identified which process variables best predict educational gains. They include: class size; cohort size; who does the teaching; the volume, promptness and usefulness of feedback on student work; the extent of close contact with academics; and the extent of collaborative learning - along with the extent of student engagement that results from these variables. Key aspects of engagement include how much time students devote to their studies and the extent to which they take a deep approach (attempting to understand) or a surface approach (attempting only to reproduce). All these variables are measurable. Institutions that have improved in these process variables have been shown to increase student engagement and increase learning gains, without increasing resources.

The Quality Assurance Agency does not ask institutions to provide information about these educational characteristics. This has allowed institutions to increase class sizes, cut feedback, use cheap and inexperienced teachers and cut the size of programmes in terms of learning hours, often in order to pursue increased reputation. And they have got away with it. The National Student Survey and the National Union of Students' student satisfaction ratings tell us little about the extent to which the key educational processes are employed. Where potentially useful data exist (such as data on student effort), they are held in unconnected databases (at the Higher Education Policy Institute, the Higher Education Statistics Agency and the Higher Education Academy), which makes pooling and analysing them difficult. The only attempts to pull data together, in league tables, include few or sometimes no valid process measures, and these rankings have little value as teaching performance indicators.

Furthermore, departments differ hugely within institutions. Some institutions have the highest-ranked teaching department nationally in one subject and the lowest-ranked in another - with the same resources and within the same institutional quality system. This is because the subjects use different educational processes. The US researcher who has done most to identify and publicise which indicators of teaching quality we should pay attention to, Ernest Pascarella, has concluded that what students experience within their course counts substantially more than differences between institutions. Potential students need to have access to indicators of the educational processes in the programmes they are applying for, rather than having to rely on university rankings that average across different programmes and largely reflect reputation.

If we want fees to reflect educational quality rather than reputation, then we should use indicators of what are known to be effective educational processes.
