New national data suggest that the “average” dose of Medicare-subsidised mental health services provided by psychologists is around five sessions per year — raising uncomfortable questions about whether publicly funded care is adequately dosed, relationally safe, and fit for purpose.
Health services are costly, and they don’t always lead to good outcomes. Medicine has begun to confront this directly through the lens of diagnostic safety. A recent InSight+ commentary estimates that each year around 140 000 Australians experience a diagnostic error, with about 21 000 people seriously harmed and roughly 4 000 deaths attributed to these failures. Diagnostic errors are estimated to cost $44.2 billion a year — about 17.5% of health-care spending — and roughly 80% of this harm is considered preventable.
Psychological services sit in a different evidentiary landscape. Medicare data can tell us how many sessions were delivered and what was spent, but not what diagnosis was made, whether it was accurate, or whether the psychological service provided actually addressed the person’s difficulties in a meaningful way.
That absence means the diagnostic-error calculus medicine is now attempting cannot be replicated in any straightforward way for psychological services. Yet it doesn’t remove the parallel safety question: are publicly funded psychological services delivered at a dose and in a form likely to be effective, or are we funding a substantial volume of care that is under-dosed and potentially iatrogenic?

Why dose and attendance matter
Across psychological practice, the therapeutic alliance is widely recognised as central to change. Bruce Wampold’s contextual/common-factors model makes the point most directly: effective therapy works through a credible meaning-framework, a collaborative bond, and shared therapeutic tasks that mobilise hope and agency.
There is a large body of evidence that, regardless of the therapeutic approach adopted by the individual therapist, the single most significant factor predicting response to psychological therapy is the therapeutic alliance. The alliance provides a context in which meaning is negotiated, affect is held and transformed and, from a relational-psychodynamic sensibility, ruptures are repaired and the self is extended within an intersubjective field. Empirically, this is not parochial to one school: large meta-analyses show the alliance to be a robust, pan-theoretical predictor of outcome across diagnoses and settings.
In that context, attendance becomes clinically significant. The number of sessions a person returns for is not only about cost; it is one behavioural trace of whether a workable alliance and shared purpose are established between the patient and the psychologist and whether this relationship is sustained.
When therapy proceeds, the person repeatedly re-enters a co-constructed therapeutic field. Early disengagement from therapy — particularly while still distressed — may be associated with an unresolved rupture, poor fit, or the absence of sufficient relational safety for the work to continue. Early dropout can therefore be treated as a relational outcome in its own right, and a plausible marker of absent benefit or latent harm.
What the Medicare numbers show
National Medicare utilisation patterns sharpen these concerns. The Actuaries Institute’s synthesis of Better Access data reports that, in the year to June 2025:
- about 500 000 patients received clinical psychologist treatment items (≈ 2.7 million services; mean ≈ 5.4 sessions per patient); and
- around 700 000 patients received other psychologist treatment items (≈ 3.2 million services; mean ≈ 4.6 sessions per patient).
That is, the average Medicare-subsidised psychological service dose is about five sessions per year. That may be sufficient for some brief, lower-severity episodes. But for many moderate-to-severe, chronic, or complex presentations, five sessions is below the range in which reliable improvement and recovery are most probable. Dose–response studies suggest that around half of patients show reliable improvement somewhere between about 8 and 18 sessions, and that 70–75% of patients who begin in the clinical range typically require 20–25 or more sessions to recover or show clinically significant change, usually delivered on a weekly basis. If the therapeutic alliance is a key mechanism, the question is not simply “how many sessions were funded?”, but “how many sessions are sufficient for the alliance to become therapeutically workable?”
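For readers who want the arithmetic laid out, the sketch below simply reproduces the per-patient means from the Better Access totals cited above and computes the blended average across both item groups. It is a back-of-envelope check using only the rounded figures already quoted, not an analysis of the underlying data.

```python
# Rough arithmetic behind the "about five sessions" figure, using the
# Better Access totals cited above (year to June 2025).

groups = {
    "clinical psychologist items": {"patients": 500_000, "services": 2_700_000},
    "other psychologist items": {"patients": 700_000, "services": 3_200_000},
}

for name, g in groups.items():
    print(f"{name}: mean {g['services'] / g['patients']:.1f} sessions per patient")

# Blended average across both groups; compare with the roughly 8-18 sessions
# associated with reliable improvement for about half of patients.
blended = sum(g["services"] for g in groups.values()) / sum(g["patients"] for g in groups.values())
print(f"blended average: {blended:.1f} sessions per patient per year")
```

On these figures the blended average sits just under five sessions per year, well below the lower end of the dose–response ranges cited above.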
There is an argument that the Medicare model encourages less intensive therapy, as patients seek to spread their subsidised sessions out over time. There is evidence to suggest that when “weekly” therapy drifts to fortnightly or more sporadic contact, improvement is slower and less robust than with weekly or higher-frequency care.
A large spend, limited visibility
Psychologists account for the largest share of Medicare mental-health expenditure, with about $715 million in benefits paid for psychologist services in 2023–24 — around 47% of all Medicare mental-health benefits. Total Medicare rebates for all mental-health services were about $1.5 billion in that year.
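As a quick consistency check, the share follows directly from the rounded figures just cited:

```python
# Psychologist share of Medicare mental-health benefits, 2023-24, using the
# rounded figures cited above (illustrative consistency check only).
psychologist_benefits = 715_000_000            # about $715 million
total_mental_health_benefits = 1_500_000_000   # about $1.5 billion
print(f"psychologist share: {psychologist_benefits / total_mental_health_benefits:.1%}")  # 47.7%
```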
In other words, we are investing heavily in psychological services while lacking basic visibility on whether the funded “dose” is routinely sufficient, and whether patterns of attendance signal under-servicing in particular population groups.
Symptom scales are not safety measures
One might respond that outcomes are monitored through symptom measures. In practice, Australian clinicians commonly use the Kessler Psychological Distress Scale (K10/K10+) and the Depression Anxiety Stress Scales (DASS).
These are useful indices of distress, but their limitations are structural. They measure symptom severity and nonspecific distress, not relational process, alliance quality, rupture-repair, or therapeutic safety.
Scores can fall because circumstances change or affect settles, while the therapeutic relationship remains thin or precarious. Conversely, meaningful relational work can temporarily intensify distress before it reorganises.
If psychological therapy is fundamentally intersubjective, symptom scales alone are an incomplete — and sometimes misleading — index of effectiveness and safety.
Towards a relational safety lens
Taken together, the national picture invites a hard but necessary inference. We have a system in which the average publicly funded dose of psychological treatment is about five sessions, delivered at high aggregate cost, and we should not assume that this care is effective or safe by default.
Low attendance may reflect good brief care for some. But for many others it plausibly represents less-than-optimal care, persistent moderate-to-severe difficulties that have not been adequately addressed, or relational failure that carries its own risks, including discouragement, reinforced hopelessness, or the impression that “therapy doesn’t work”.
Medicine is learning what becomes visible once a system decides to measure preventable harm. Psychology does not yet have an equivalent safety metric, but the relatively low attendance pattern is a cause for concern and has implications for the training of psychologists and the monitoring of service delivery.
In the meantime, session-frequency patterns — which are transparent to clinicians and clinic staff — offer a defensible relational proxy. They are not a substitute for richer process-based measures, but they can help services benchmark locally, ask harder questions about who is receiving under-dosed care, and give governments an early-warning signal while we work towards more clinically meaningful indicators of the quality and safety of psychological services.
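To make local benchmarking concrete, the sketch below shows one minimal way a service might screen its own attendance records; the eight-session threshold and the episode counts are hypothetical illustrations loosely drawn from the dose–response ranges discussed earlier, not a validated standard.

```python
# Hypothetical local attendance benchmark: flag treatment episodes that ended
# with fewer sessions than an assumed minimum dose. The threshold and the data
# below are illustrative assumptions only.

MIN_DOSE = 8  # assumed lower bound, loosely based on the dose-response ranges above

episodes = {  # patient identifier -> sessions attended this year (made-up data)
    "A": 3, "B": 12, "C": 5, "D": 21, "E": 4, "F": 9,
}

under_dosed = {pid: n for pid, n in episodes.items() if n < MIN_DOSE}
rate = len(under_dosed) / len(episodes)
print(f"{rate:.0%} of episodes below {MIN_DOSE} sessions: {sorted(under_dosed)}")
```

Any threshold of this kind would need to be set locally and read alongside clinical context, since brief attendance can also reflect appropriately brief care.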
Adjunct Professor Robert Schweitzer (QUT) is a clinical and counselling psychologist in clinical practice and Course Convenor of the INSIGHT Psychodynamic Psychotherapy Registrar Program based in Queensland.
The author does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated.
