Student standards in economics in Australia


In this article, Peter Abelson reports on a survey of student standards in Australian universities conducted by the Economic Society of Australia. The full report on the survey, including the survey questionnaire, can be found on the Society website: www.ecosoc.org.au. Peter Abelson is Secretary of the Economic Society of Australia and a Professor in the Department of Economics, Macquarie University, Sydney.

Since 1990, the number of students in Australian universities has doubled to nearly a million. One fifth of these students are from overseas. Over the same period, staff numbers increased by only about 15 per cent. There is widespread concern that the growth in enrolments has come at the expense of the quality of education. In 2001, after a lengthy inquiry, an Australian Senate Committee found ‘strong evidence to demonstrate that many subject disciplines in many universities had experienced declining standards in recent years’.

Commerce faculties are under special pressure because they are attractive to students and may generate surpluses that support other parts of universities. Within economics departments, there is concern that standards have been lowered in order to maintain enrolments and to match apparently easier subjects.

Against this background, in September 2003 the Central Council of the Economic Society of Australia (ESA) resolved to conduct a survey into student standards in economics courses in universities. The survey had three main aims: to determine

• the standards of work achieved by students of economics in Australian universities;
• the main factors that influence these standards; and
• policies for maintaining or improving standards in economics.

The Society sent the questionnaire to Heads of Economics Departments at the 29 Australian universities that run economics courses. We received 22 responses, including two from large campuses of one university that were treated as separate responses. The responses came from 6 of the 8 older metropolitan universities, 10 newer metropolitan universities, and 5 non-metropolitan universities.

In this article, I briefly describe the survey and the main results. I then discuss some issues in conducting such a survey and some major policy issues relating to student standards.

The survey questionnaire
University students and standards are heterogeneous. Many first year undergraduate students do economics courses as part of other degrees. On the other hand, most third year economics students intend to graduate with an economics qualification. Postgraduate students may study for a coursework Masters or for a PhD. ESA designed the survey to elicit answers principally about standards for first and third year undergraduate students and for masters by coursework students. Respondents were also asked to provide information on Honours and PhD students.

As discussed below, defining standards is a central issue. For the purpose of the survey, respondents were asked to use the following guidelines.

• Very good - a high distinction or distinction standard of work, 75 or more out of 100
• Good - a credit standard of work, 65-74 out of 100
• Satisfactory - work worth 50-64 out of 100
• Poor - work worth 40-49 out of 100
• Very poor - work generally below 40 out of 100

Respondents were asked to judge the percentage of students in each of these five categories and whether standards had changed over the last 10 years.

To elicit information on factors influencing standards, the questionnaire sought responses on nine potential factors (for example, entry standards, linguistic ability, faculty resources and so on). Turning to policies, the questionnaire provided ten possible policies for each student group (including various accreditation and review procedures). Throughout the survey, respondents were invited to provide additional comments as they considered appropriate.

Confidentiality and anonymity were critical features of the survey. Respondents were told that their responses would be viewed only by the President, Secretary and Administrator of Central Council and three independent university professors, who would review the draft report. No individuals or institutions are identified in the report. Before publication, all respondents were sent the draft report to ensure that they were satisfied with it.

Summary of survey results
Respondents reported a broad distribution of standards. Seventeen of the 20 respondents reported that 30 per cent or more of their first year students are good or very good (credit grade students or higher). On the other hand, eleven departments reported that 30 per cent or more of their first year students are poor or very poor (students likely to fail their courses). Results for third year students were similar, but with slightly fewer poor students. Assessments of students undertaking masters by coursework were more mixed, with respondents reporting a variety of experiences. Table 1 provides summary statistics on current standards. The percentages are the means of the estimates provided by all respondents.
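To illustrate how such summary figures can be derived, the following is a minimal sketch of averaging each respondent's percentage estimates by category, in the spirit of the Table 1 statistics. The figures and structure here are hypothetical; they are not the Society's actual data or processing.

```python
# Illustrative sketch: mean of respondents' percentage estimates per category.
# The numbers below are invented for demonstration only.
from statistics import mean

CATEGORIES = ["Very good", "Good", "Satisfactory", "Poor", "Very poor"]

# Each respondent estimates the percentage of first year students in each category.
responses = [
    {"Very good": 10, "Good": 25, "Satisfactory": 35, "Poor": 20, "Very poor": 10},
    {"Very good": 15, "Good": 20, "Satisfactory": 30, "Poor": 25, "Very poor": 10},
    {"Very good": 5,  "Good": 25, "Satisfactory": 40, "Poor": 20, "Very poor": 10},
]

# Summary statistic: the mean of the estimates across all respondents.
summary = {cat: mean(r[cat] for r in responses) for cat in CATEGORIES}
print(summary)
```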

Table 2 summarises responses on changes in standards over the last ten years. Thirteen respondents considered that standards in first year courses have fallen, compared with only three who considered that they have risen. For third year courses, eight respondents considered that standards have fallen, whereas only four judged that they have risen.

On the other hand, of the eight respondents who commented on masters coursework courses, four judged that standards have risen, compared with two who judged that they have fallen. Also, most respondents considered that the standards of Honours and PhD students have been maintained.

Importantly, respondents were asked to state whether their assessments were based on experience or evidence. Most judgments were based on experience. Where evidence was cited, it related mostly to an assessment of results over time. Few respondents cited other evidence. The issue of evidence is crucial to the debate on standards. I return to this point below.

Factors determining student standards
Table 3 summarises the responses on factors affecting standards of undergraduate students. For first year students, some two-thirds of the respondents considered that high student-staff ratios, the poor English of international students, and competition with other subjects had an important or very important impact in lowering standards. Qualitative responses indicated particular concern that competition from business studies was lowering standards. Other factors of major concern were low entry standards of international and local students and low student work hours. Similar factors were rated important for third year undergraduate students.

Views on the standards of masters coursework students were more mixed and there were fewer responses. The responses indicate some concern about entry standards and English standards, but it was not possible to generalise from the small number of responses.

Policy options and practices
A theme of the responses was that each institution needs to do the things that best reflect the backgrounds and objectives of its particular students. Although there was some support for external reviews of programs, there was little support for external accreditation or exams.

Table 4 shows the numbers of respondents citing policies for maintaining or raising student standards. As would be expected, the preferred policies reflect respondents’ judgments on the determinants of standards. Most respondents considered that lower student-staff ratios, higher English standards, and higher entry standards for international and local students are important or very important.

Entry standards and English language requirements are again an issue for masters students, although the sample of respondents is small. Again, there was little support for external reviews of any kind.

Issues in a survey of student standards
Many issues arise in conducting a survey of student standards. Some are practical, such as how to define economics students, how to measure student-staff ratios, and indeed how to ask clear questions. I discuss such issues in Abelson (2004). Here I focus on more fundamental questions: what is quality? What incentives do department heads face when responding to such surveys? How can we tell whether the responses are honest and accurate?

The concept of student and subject quality is multi-dimensional and not necessarily clear or agreed. One survey respondent argued that research and writing skills have fallen but that quantitative and memorisation skills have risen. Another claimed that students benefit from getting less economics but more practical business studies in their courses. Two other respondents argued that their department’s institutional approach to economics was more useful to students than a conventional neo-classical approach. Such arguments strongly underscored the view that decentralised approaches to student standards are appropriate and desirable.

Whatever definition of quality is adopted, there remains the question of what counts as evidence of quality. How do universities know what standards are achieved? As I have noted, few respondents cited evidence on standards. In an attempt to obtain objective measures of student standards, the survey sought data on texts used in core first and second year courses. However, several respondents were unable to provide information about texts used 10 years ago, so no conclusions could be drawn from the responses. More fundamentally, as one respondent observed, the ‘real issue is what sorts of questions do we ask and what sorts of answers are we “satisfied” with?’ While data on texts could be useful, conclusions on standards would require an in-depth examination of course materials.

In my own department I have conducted two surveys to try to understand standards. The first was a survey of student work hours, which found that the median workload for a standard university course was only 50 per cent of what the university nominally requires. In the second, I administered two vocabulary tests, set for me by the Linguistics Department in my university, as adjuncts to multiple-choice economics tests. A high proportion of the students were likely to fail the course on the basis of poor vocabulary alone. More such tests could provide important material on student standards.

Without such tests, can we rely on the responses of heads of departments with regard to standards? Many academics (like their universities) have a personal financial stake in greater student numbers regardless of quality. Heads of departments are appointed inter alia to promote their department’s reputation and financial interests. They may be reluctant to note potential negatives in performance.

In these circumstances, confidentiality (and confidence in confidentiality) is essential. One respondent, who expressed explicit concern privately to me about the possible views of his Vice-Chancellor, made his response conditional on the report not analysing differences between the types of university responding. Our survey process was designed to assure confidentiality and anonymity. This significantly reduced strategic responses, but may not have eliminated them entirely.

Finally, how can the survey agency determine whether responses are accurate and honest? It may request evidence, but this often does not exist. One test of accuracy, though not of bias, is the internal consistency of responses; in our survey, responses on the causes of standards and on policies were consistent. A possible test for bias would be to look for responses that appear inconsistent with external data, for example departments with low entry standards reporting high achieved standards. However, this requires the survey agency to vet responses, which is difficult and inconsistent with a professional society’s relationship with its members. The survey agency cannot discount certain responses because they appear inconsistent with external information.

Policy issues relating to student standards
For many economists, policies are required only when there are problems that markets cannot fix and when, following Adam Smith, the cure is better than the disease. A few survey respondents argued that standards have improved. It is indeed possible that, although average standards may have fallen as numbers increase, most students nevertheless achieve similar or better educational standards than previously. However, most survey respondents considered that standards have fallen for many students, and that this is a matter for concern.

Does the market supply efficient standards? One respondent argued, ‘let the market rule. Avoid credentialism and the temptation to centralise.’ This seems to place too much faith in the effectiveness of market mechanisms and signals in the regulated university sector in Australia. Commenting on the variety of standards in masters programs, another respondent observed that ‘the market is currently pretty poorly informed about these differences (in masters programs) as often are the students themselves.’

It is questionable whether consumers recognise the differential qualities of degrees and whether this in turn influences the behaviour of university administrations, staff and students. Prices for courses in Australian universities are similar and send limited signals to students. While local employers may have a fair idea of the value of many degrees, overseas employers may not. In any case, many overseas students prize an Australian degree as a potential migration ticket for which the standard of the degree is largely immaterial. Many university administrations appear motivated more by revenue maximisation than by quality objectives. Given price controls on degrees, revenue is maximised by increasing throughput. Surplus is generated by skimping on inputs. Information failures combined with the public good characteristics of education mean, I believe, that in the current Australian framework decentralised revenue-maximising institutions and market forces are unlikely to produce efficient student standards.

Turning to policy, four main causes of low standards and the related policy responses are taken up here.

1. Low entry standards, including poor English - raise entry standards.
2. Lack of resources to deal with these issues - increase resources.
3. Low student inputs - require more student work.
4. Low passing standards - raise grade standards.

Most survey respondents considered that higher entry standards would be desirable. Many cited improved English language standards for international students as important or very important. However, respondents recognised that raising entry standards would often run counter to university policies and that academics have little control over general entry standards. One respondent noted that he has argued for a ‘university-run language test, but this has been regarded as undermining the university’s competitive position’. More pertinently, raising entry standards, and thus possibly reducing student numbers, could reduce departmental revenue, salaries and jobs.

Not surprisingly, most survey respondents considered that reducing student-staff ratios is critical to standards. Weak students often need more assistance than stronger ones do. The variance in student standards means that stronger students bear the cost of lower standards unless they are provided with differentiated services. Most economics departments are attempting to maintain standards in various ways (more emphasis on teaching, student mentoring, web-based services and so on). However, it seems that technical improvements cannot fully substitute for the decline in resources per student.

In recent years, student participation in university work has declined markedly. In response, universities could foster a work culture by making obligations clear to students before they start their university education and continuously thereafter, possibly in the form of a quasi contract. Currently, university marketing encourages students to enter the university with little idea of the work involved. University administrations provide few explicit upfront work requirements to students. Students are permitted to enrol as full-time students when they are really part-time students. It is not hard to see why: a policy that set explicit work standards for students would run counter to a university’s revenue-maximising objective, which requires a permissive attitude to student work habits.

Course grades provide another signalling opportunity. Indeed, it could be argued that if grade standards are appropriate and known, there is no need to attempt to influence student inputs. But this appears unrealistic. If a university accepts students with low entry standards and short working weeks, it cannot set grades inconsistent with this. This may be why respondents to the survey did not place a high priority on raising failure rates. Unfortunately, grading is another area where incentive structures are often perverse, with academic salaries or even jobs tied to student numbers. In some departments, individual salary supplementation is related to student assessments. It is hard to believe that grades awarded in these conditions are not influenced by salary incentives.

External accreditation, reviews, and exams
The ESA survey canvassed three forms of external review: formal accreditation of courses or degrees, external reviews of courses or degrees, and external exams. There was some support for external reviews but little support for external accreditation or exams. This reflects the status quo. Departments have more control over reviews than they would have over accreditation or external exams. Reviews are typically based on terms of reference set by the host university and are constrained to review courses subject to the objectives of that university.

The ESA has long opposed any form of accreditation for a variety of reasons. These include that accreditation is anti-competitive; that it either sets standards too high and excludes people or sets them too low and is meaningless; that it may define economics too narrowly; or simply that it is too hard to achieve. Some respondents to the survey argued strongly that accreditation would not recognise the diversity of student needs and academic approaches to teaching economics and that any form of central control would be a major error.

Opponents of external exams argue likewise that there is a need for differentiation of product and plurality of process. They are often concerned that the tests will be based on a neo-classical model of economics that they believe is irrelevant for many students. Hard external tests would allegedly discourage entry to economics. Simple ones would be inappropriate for better students. In my view, there are potential benefits of external exams as signals and as sources of beneficial competition. But few of my colleagues appear to agree with this.

Conclusions
On balance, the standards of undergraduates appear to have declined. There was insufficient evidence to draw conclusions about graduate work. In general, more evidence on what is happening is needed.

The prime causes of the decline in standards appear to be high student-staff ratios, poor English standards, competition with other subjects, and a declining student culture of university work. It may be observed that these findings could have been expected. However, the issues are not well documented and there has been little action on many of them. Keeping them on, or putting them onto, the policy agenda seems a useful exercise.

The main policy theme of the responses was that each institution should do the things that best reflect the needs of its particular students. Most respondents considered that lower student-staff ratios, higher English standards, and higher entry standards for international and local students are important. Although there was some support for external reviews of programs, there was little support for external accreditation or exams.


References

Abelson, P., 2004, ‘Surveying University Student Standards in Economics’, Economic Education Conference, Adelaide.

Australian Senate Committee, 2001, Universities in Crisis, Senate Employment, Workplace Relations, Small Business, and Education Committee, Australian Parliament, Canberra, www.aph.gov.au/senate/eet_ctte/public_uni/report/contents.htm

Economic Society of Australia, 2004, A Survey of Student Standards in Economics in Australian Universities in 2003, Economic Society of Australia, Sydney, www.ecosoc.org.au