Issue 12 / 4 April 2011

The selection of medical students is an important issue for the community and the medical profession because of the high costs involved in medical education and the need for graduates to be good doctors.

Since the 1970s, the method of selecting medical students in Australia has evolved from the use of purely academic criteria based on secondary school matriculation results to the use of interviews that assess personal characteristics, and more recently to tests of aptitude — the Undergraduate Medicine and Health Sciences Admission Test (UMAT) and Graduate Australian Medical School Admissions Test (GAMSAT).

The imperative for change in the selection process has been the perceived need for doctors to provide academically and clinically appropriate medical care in a professional and humane manner that is appropriate for the society in which they work. Additionally, reliance on results achieved in high school has been held to introduce considerable socioeconomic bias.

While some of the steps taken to reduce socioeconomic disadvantage are transparent, such as pathways designed to improve access of Indigenous and rural students to medical school, the rationale supporting the use of interviews and aptitude tests has not been well articulated.

These methods add complexity and cost to the selection process, and their evaluation is a priority. The MJA has published comment and research on the selection of medical students over many years, including papers on the role of the interview, the effect of coaching and the role of the GAMSAT.

Unsurprisingly, there seems to be agreement that academic ability is a good predictor of completing medical school, but the minimum level of academic ability required is not certain. Studies assessing other selection methods suggest that the additional benefit conferred by the interview may be small and that of the GAMSAT may be negligible.

In the latest issue of the MJA, Wilkinson and colleagues contribute to this debate with the first peer-reviewed data on the predictive validity of the UMAT for medical students’ academic performance.

The paper shares the limitations of other studies in this area — it is a correlational study (and thus cannot prove causation), it assesses outcomes in a high-performing, highly selected cohort of students (“range restriction”) who might all be expected to perform well in medical school, and it does not evaluate the clinical performance of students after graduation.

The finding that there is only weak correlation between UMAT results and performance in medical school makes it vital that research into selection processes continues.

It remains to be seen whether the UMAT predicts clinical performance and contribution to the medical profession and the health of the community, but early indications seem to suggest it has little to offer.

Dr Annette Katelaris is the editor of the Medical Journal of Australia.

This article is reproduced from the MJA with permission.


2 thoughts on “Annette Katelaris: What’s the matter with UMAT?”

  1. CE says:

    Hold on.
    Exactly what are we hoping the UMAT and interview will predict?
    The UMAT isn’t supposed to select students who will get the highest marks in undergraduate exams. It’s supposed to select students who will become the best doctors. Until someone defines what this means it’s not easy to evaluate the UMAT’s success.
    If we’re looking for people who will finish the course rather than drop out, the interview is excellent because it allows you to check that (a) the candidate has acceptable English language proficiency and (b) the candidate understands the job and has decided they want it.
    Nobody suggests that the UMAT and interview select students who perform best in academic tests; that’s why an acceptable academic record is required in addition to the interview and UMAT.

  2. KS says:

    I was a very good but not outstanding student in high school in the ’80s, and as such failed to get into medicine based on marks alone. After finishing an undergraduate degree in science, again with good but not stellar results, I credit the psychometric test and subsequent interview process at the medical school I applied to for my place there and subsequent career as a medical specialist. So I’m certainly not denigrating this ‘alternative’ method of entry, and I agree that entry based on marks alone carries significant bias towards the socioeconomically advantaged.

    However, one thing Dr Katelaris doesn’t mention in her article, probably for good reasons, is the introduction of significant cultural and racial bias by the interview process. I reached the inescapable conclusion that I was probably unfairly advantaged by my Anglo-Celtic name, Australian accent and manners, and good verbal English skills in the interview process, where the medical and lay members of the panel are as susceptible to initial impressions and prejudices as the rest of us. What else can you think, when looking at the lists of students in a course where entry is determined solely by academic achievement, and noting that 90% of the names on the list are of Asian origin? And then comparing that to the proportion of students with Asian names and appearances in the medical school I went to (much, much lower).

    The cynical might say that the interview process and ‘alternative’ means of entry were introduced in the ’80s and ’90s when the clever sons and daughters of medical families were failing to win places at med school in a much more competitive environment. So I’m not terribly surprised to read that the UMAT isn’t doing a much better job of predicting undergraduate performance than matriculation results.
