
Trial of the Primary Care Practice Improvement Tool: building organisational performance in Australian general practice and primary health care

As the cornerstone of health care, primary care should be effective, efficient and responsive to the needs of patients, families and communities.1 The Australian primary care system currently has significant opportunities to co-create approaches to quality improvement (QI) and practice redesign in ways that could fundamentally improve health care in Australia. To ensure these efforts are successful, there is a need to build and sustain the ability of primary care practices to engage in QI activities in a systematic, continuous and effective way.2 Primary Health Networks (PHNs) are a central component of the government’s primary health care reforms and have a number of roles in improving the quality of primary care. They will work closely with general practices and other primary care services in planning and supporting primary care teams to adopt QI initiatives, with the overall aim of improving care delivery and health outcomes.3,4

The Australian Commission on Safety and Quality in Health Care has recently developed a national set of practice-level indicators of safety and quality for primary health care. These indicators are designed for voluntary inclusion in QI strategies at the local practice or service level and are intended for local use by organisations and individuals providing primary health care services.5 The Australian Safety and Quality Framework for Health Care was also developed, which sets out the actions needed to achieve safe and high-quality care for all Australians.6 To respond effectively to these, primary care practices will need to be equipped with both organisational development and change management approaches. In response, the Primary Care Practice Improvement Tool (PC-PIT), an organisational performance improvement tool, was co-created for Australian primary care.7

The co-creation methods involved combining results from a systematic literature review8 and pilot study9 with cyclical feedback from partners and end users (general practices), using a variety of engagement platforms (interviews, face-to-face meetings, formal presentations, workshops and webinar discussions).10 The systematic literature review identified 13 elements integral to high-quality practice performance, which was defined as “systems, structures and processes aimed to enable the delivery of good quality patient care” but which do not necessarily include clinical processes.11 During discussion sessions, stakeholders and end users identified a list of desired attributes for a performance improvement process; namely, that it should be simple, accessible online, enable a “whole-of-practice” approach, be an internal process facilitated by practice managers or nurses without the need for extensive external facilitation, have additional support resources, be no or low cost and fit with existing QI or practice support programs.

The tool resulting from this combination of processes, the PC-PIT, was initially piloted with six high-functioning practices.9 In this study, we trialled the refined PC-PIT approach nationwide with three objectives: (i) to document and describe the use of the online PC-PIT in practice; (ii) to validate the PC-PIT independent practice visit objective indicators; and (iii) to identify the perceived needs (eg, resources, professional development) to support practice managers as leaders of organisational improvement in general practice.

Methods

We conducted the national PC-PIT trial from March to December 2015, with volunteer general practices from a range of Australian primary care settings. Practices were invited to participate using newsletter information sheets distributed by partner and stakeholder organisations and through national webinars and conference workshops. We used a mixed-methods approach, collecting both quantitative and qualitative data. A full description of the trial protocol is available in Appendix 1. Ethics approval for the study was granted by the University of Queensland Behavioural and Social Sciences Ethical Review Committee (approval number 201000924).

The PC-PIT trial consisted of two parts. In Part 1, practice staff at all participating practices completed the online PC-PIT, with each practice staff member giving a subjective assessment of how they perceived their practice met (or did not meet) the best practice definition of each of the 13 PC-PIT elements, using a 1–5 Likert rating scale.

For Part 2, we selected a purposeful sample of the primary care services to represent a range of practice sizes, business models and geographic locations. Two external raters conducted an independent practice visit to each selected practice, during which they assessed the subjective practice assessment from Part 1 against objective indicators of the same 13 PC-PIT elements, as supported by documented practice evidence. Each rater completed a separate evidence assessment form and rated a set of objective indicators for each PC-PIT element (using a 1–5 Likert scale) by reviewing the documented practice evidence, working with both practice managers and practice nurses to identify and cite that evidence. Box 1 provides a summary of the Likert ratings and how they are interpreted in the context of the PC-PIT elements.

Data were gathered from a variety of sources during the independent practice visits, including interviews with practice managers and practice nurses, background materials and documented evidence such as meeting minutes, policy and procedure manuals, and communications books. Well documented strategies to enhance trustworthiness and rigour were incorporated in the qualitative phase of the study design; namely, that the two raters cited the sources, which allowed the triangulation of information and the review and confirmation of findings.12,13 The in-depth, semi-structured interviews with practice managers and practice nurses explored their involvement in QI and also their perceptions of resources and support needed to facilitate their role in performance improvement. A proforma was used to guide the interview discussions and all interviews were recorded, either manually or using a digital recorder. Interviews were then transcribed, and participants were provided with the opportunity to review and edit their responses.

The staff ratings for the PC-PIT elements from Part 1 were aggregated to a median practice score for each element and then compared with the ratings from the objective indicators in Part 2, with the two sets of ratings displayed in side-by-side spider diagrams. These data were presented to each practice in an individual PC-PIT report, providing a profile of practice performance against each of the 13 elements that set staff perceptions alongside documented practice evidence. Appendix 2 provides two de-identified examples of completed PC-PIT reports, including spider diagrams, as provided to a higher- and a lower-scoring practice. Practice managers used their PC-PIT reports to lead staff discussions to identify specific improvements to be made, strategies to achieve them, a time frame, measures of success and the staff member(s) responsible. This was then formalised using the Plan, Do, Study, Act (PDSA) approach.
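As an illustration of this aggregation and reporting step, the sketch below (Python; the element names, ratings and plotting choices are hypothetical assumptions, not the study's code or data) computes a median practice score per element from individual staff ratings and draws side-by-side spider diagrams for the staff and independent practice visit profiles.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical PC-PIT elements and ratings (illustrative only, not trial data)
elements = ["Patient-centred care", "Leadership", "Organisational\nmanagement",
            "Clinical governance", "Team-based care"]
staff_ratings = [            # one row per staff member, one column per element
    [4, 3, 3, 4, 3],
    [5, 4, 3, 4, 2],
    [4, 4, 2, 5, 3],
    [3, 4, 4, 4, 3],
]
ipv_ratings = [4, 4, 2, 4, 2]   # objective indicator ratings from the practice visit

staff_medians = np.median(np.array(staff_ratings), axis=0)

# Angles for the radar axes; repeat the first point so each polygon closes
angles = np.linspace(0, 2 * np.pi, len(elements), endpoint=False)
angles = np.concatenate([angles, angles[:1]])

fig, axes = plt.subplots(1, 2, subplot_kw={"projection": "polar"}, figsize=(10, 5))
for ax, values, title in [
    (axes[0], staff_medians, "Staff perception (median)"),
    (axes[1], ipv_ratings, "Independent practice visit"),
]:
    closed = np.concatenate([values, values[:1]])
    ax.plot(angles, closed)
    ax.fill(angles, closed, alpha=0.2)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(elements, fontsize=8)
    ax.set_ylim(0, 5)
    ax.set_title(title)

plt.tight_layout()
plt.show()
```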

Data analysis

Quantitative data were entered into a Microsoft Excel 2013 spreadsheet, then imported into SPSS version 21.0 (SPSS); data were analysed using SPSS and Excel. A key outcome measure was the degree of concordance between Rater 1 and Rater 2 during the independent practice visits, measured using concordance and κ statistics. Standard integer weights were used, as described by Fleiss.14
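For readers less familiar with weighted κ, the following minimal sketch (hypothetical ratings, not trial data) shows how agreement between two raters on a 1–5 Likert scale can be computed with linear weights, which correspond to the standard integer weights described by Fleiss.

```python
# Minimal sketch of linearly weighted kappa for two raters on a 1-5 Likert scale.
# The ratings below are illustrative placeholders, not data from the trial.
from sklearn.metrics import cohen_kappa_score

rater1 = [4, 3, 5, 2, 4, 4, 3, 5, 4, 2]   # hypothetical ratings, one per practice
rater2 = [4, 3, 4, 2, 4, 5, 3, 5, 4, 3]

# Linear weights penalise disagreements in proportion to their distance on the
# scale, matching the "standard integer weights" of Fleiss.
kappa = cohen_kappa_score(rater1, rater2, labels=[1, 2, 3, 4, 5], weights="linear")
print(f"Linearly weighted kappa: {kappa:.2f}")
```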

A total of 32 in-depth, semi-structured interviews were held (with 19 practice managers and 13 practice nurses). Transcribed interviews were coded by one member of the research team (L C) using NVivo 10 (QSR International). Codes were reviewed for duplication and clarity. We used thematic analysis to identify and classify recurrent patterns and themes. Interviews focused on aspects such as the background training and current role of the practice manager and practice nurse, QI experience of the individual interviewees and the most recent QI undertaken in the practice.

Results

A total of 45 general practices participated in Part 1 of the PC-PIT trial. At the time of writing, complete datasets were available for 34 of the 45 practices. These practices represented a range of geographic locations (urban, regional and rural areas), although most were urban and regional practices, as well as a range of practice sizes (< 2, 2–< 5, 5–< 10 and 10 or more full-time-equivalent general practitioners) and business models (privately owned, partnerships and corporate). Ten of the 34 practices described both undertaking internal QI activities, such as PDSA cycles, and involvement in externally run QI activities, such as Medicare Local programs, National Prescribing Service activities and the Australian Primary Care Collaboratives. The remaining practices described either internal QI activities or involvement in external improvement programs. One practice was newly established and had not undertaken any improvement activities within the past 12 months. The practice managers came from a variety of backgrounds, including business management, nursing and allied health. Appendix 3 details the characteristics of the 34 participating practices.

Of these trial practices, 20 were selected for the independent practice visits and qualitative interviews in Part 2, and complete datasets were available for 19 practices. One practice was excluded because competing commitments of the practice manager prevented the raters from conducting the independent practice visit within the study time frame.

Assessment of practice performance against the PC-PIT

A total of 310 online PC-PIT forms were completed in Part 1 by practice staff, comprising 19 practice managers, 95 GPs, 56 practice nurses, 109 administration and reception staff, 25 allied health (including pharmacy) staff and six “other” staff (primarily in business development, finance or information management roles). Using the combined online PC-PIT element ratings, the independent practice visit ratings and interviews with practice managers and practice nurses, three specific practice types were identified among the 19 practices from Part 2, each with a distinct way of using the PC-PIT. Rather than being discrete, these three types represented key points along a continuum of organisational performance, from lower-scoring to higher-scoring practices.

First practice type

The three lowest-scoring practices appeared to have separate and uncoordinated clinical and practice management processes. This was evidenced by uncoordinated clinical governance and organisational management activities and the incomplete translation of clinical and management processes into formalised policies and protocols that were clearly known and understood by all staff (both clinical and administrative).

Second practice type

A further three practices had a primary focus on clinical governance, with organisational management as a supporting basis. In this model, practice managers had limited or no autonomy in relation to organisational changes within the practice. This was illustrated by a lack of cited documented evidence (and therefore lower scores) on the PC-PIT element of organisational management, including key indicators such as evidence of staff role descriptions, performance appraisals, internal QI activities and the use of information such as data reports, formal meetings and discussion to improve the internal function of the practice. The practice manager generally worked in a supporting role to the GP(s), but there was limited evidence of communication and coordination between clinical and organisational management.

Third practice type

The five highest-scoring practices recognised the equal importance of organisational and clinical management in supporting the ongoing operation of the practice as a whole, demonstrated by high ratings in both the independent practice visit and staff online PC-PIT forms. Documented evidence of meeting minutes and previous PDSA processes and outcomes showed that management processes were constantly reviewed in a combined approach by clinical and administrative–management staff and readjusted to facilitate patient care. These practices demonstrated close communication and shared decision making in relation to continuous QI, championed by an autonomous practice manager who worked closely with a defined clinical leader. They were also more likely to have a history of involvement in a range of external continuous QI programs.

The remaining eight practices fell along the continuum, with most toward the lower-scoring end. These practices were generally characterised by positive staff perceptions of the 13 PC-PIT elements but a lack of documented supporting evidence, particularly on the use of practice data in making ongoing improvements to their organisational processes and in reviewing and using performance results. Box 2 provides examples of the three practice types, the median PC-PIT element scores given by the staff and the raters, illustrative interview quotes and the evidence cited during the independent evidence assessments.

Agreement between raters: independent practice visits

Overall, there was complete agreement between the two raters in 11 of the 19 general practices. Rater 1 scored higher for 11 PC-PIT elements and lower for one. The mean difference was 0.10. Box 3 presents the agreement between raters and the κ statistic for each element. The element with the lowest κ (0.43) was team-based care. For this element, the two raters agreed in 11 of the 19 practices; Rater 1 scored higher than Rater 2 in seven practices and lower in one. When this element was excluded from the overall κ coefficient, the χ2 test for homogeneity was 14.66 (P = 0.20).
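The pooling of element-level κ values into the overall coefficient reported in Box 3 can be sketched as follows (an approximation using inverse-variance weighting and the χ2 homogeneity test described by Fleiss; because the published κ and SE values are rounded, the output reproduces the Box 3 footnote figures only approximately).

```python
# Sketch of pooling element-level kappas (Box 3) into an overall kappa and
# testing homogeneity across elements, following the approach described by Fleiss.
import numpy as np
from scipy.stats import chi2

def pool_kappas(kappas, ses):
    """Return (pooled kappa, pooled SE, chi-squared, degrees of freedom, P value)."""
    kappas, ses = np.asarray(kappas, float), np.asarray(ses, float)
    weights = 1.0 / ses**2                              # inverse-variance weights
    pooled = np.sum(weights * kappas) / np.sum(weights)
    pooled_se = 1.0 / np.sqrt(np.sum(weights))
    chi_sq = np.sum(weights * (kappas - pooled) ** 2)   # homogeneity statistic
    df = len(kappas) - 1
    return pooled, pooled_se, chi_sq, df, chi2.sf(chi_sq, df)

# Element-level kappa values and standard errors as published in Box 3
kappas = [0.78, 0.86, 0.64, 0.65, 0.43, 0.54, 0.59, 0.68, 0.87, 0.91, 0.85, 0.86, 0.93]
ses    = [0.12, 0.09, 0.14, 0.14, 0.15, 0.14, 0.17, 0.14, 0.12, 0.09, 0.08, 0.08, 0.06]

pooled, pooled_se, chi_sq, df, p = pool_kappas(kappas, ses)
print(f"Overall kappa {pooled:.2f} (SE {pooled_se:.3f}); "
      f"homogeneity chi-squared {chi_sq:.2f}, df {df}, P {p:.3f}")
```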

To identify reasons for key discrepancies by practice and by element, the raters reviewed their evidence-based assessment forms and discussed possible reasons for the discrepancies. The discrepancy in Practice 10 was due to circumstances that required the raters to interview different informants and cite separate documentation in relation to the PC-PIT elements. Poor concordance between the ratings for the element of team-based care reflected a lack of formally documented policies and procedures available to practice managers, while additional undocumented information could be provided by practice nurses. Rater 1 scored this verbal (but undocumented) information, while Rater 2 did not (Box 3).

In terms of practices, Rater 1 scored more elements higher than Rater 2 in six practices, especially for Practice 10, where there was agreement on only one element (Appendix 4).

Resource and support needs of practice managers

During the in-depth interviews, four key themes were identified in relation to practice managers' perceived resource and support needs (Appendix 5). Most practice managers were not familiar with internal organisational development tools other than those previously developed by the former Divisions of General Practice, and most of these tools had been neither trialled nor validated in general practice settings. Only one practice manager described using a formalised approach to organisational development (Six Sigma), which had recently been adapted for use in general practice but required extensive external facilitation to complete. Practice managers perceived benefits in having additional supporting tools relating to the PC-PIT elements, and also identified strategies such as online forums or email updates, based on the PC-PIT elements of high-performing practices, that might focus on sharing key problems and solutions for organisational performance improvement.

Discussion

This national trial showed that the PC-PIT can be a useful organisational performance tool in a variety of general practice settings.

As health care delivery becomes more complex and technology-driven, the organisational context in which QI initiatives take place is increasingly recognised as a crucial determinant of their effectiveness.16,17 Contextual elements have been described as the "adaptive reserves" of a practice; that is, those features that represent a practice's internal capability.18 They include culture, leadership, collaboration and teamwork, data and information tools, improvement skills, incentives and time allocation, which general practices should address to support a context of continuing QI.19

The establishment of the PHNs and the release of the consultation paper for the review of the Performance and Accountability Framework indicators20 illustrate the integration of aspects of QI across the health reform strategy. Although as yet incomplete, the national Performance and Accountability Framework objectives relating to quality focus on outputs related to safety, responsiveness (based on measures of patient experience), capability and capacity.20,21 Following this, the proposed national PHN evaluation framework lists continuous QI activities, outputs and outcomes (such as accreditation and the use of data for practice improvement) related to the provision of practice support and to the identification of high-priority practices (those requiring targeted support to build their capacity to engage in QI).

The need to develop and strengthen managers' skills also involves the development of management processes for motivation, supervision, control and action, and support at an organisational level.22 Practice managers may be responsible for large and fluctuating numbers of staff, high yearly financial turnover and the ongoing facilitation and management of change; many do so with limited timely access to appropriate ongoing training and validated support resources. While there is an undeniable need to focus on the role of GPs in QI, the elements relating to organisational improvement are also the domain of operational managers.

Although the independent practice visit was conducted by two external raters in this trial, we anticipate that this aspect will become part of the PC-PIT as a wholly internal assessment process. However, future testing of the PC-PIT will seek to further address the discrepancies relating to the involvement of different practice staff members (ie, practice managers v practice nurses) in the use of the evidence-based assessment forms. The objective indicators for the element of team-based care have also been refined and clarified to include additional meeting minutes and documentation accessible to either practice managers or practice nurses, to ensure that all available evidence is taken into account during the assessment.

Two of our partner organisations, the Royal Australian College of General Practitioners and the PHNs, identified a key benefit of the PC-PIT as the ability to identify the lower-scoring practices and more effectively engage them in organisational improvement activities, allowing for more targeted interventions that are relevant to the capacities of the individual practices. Thus, there are two aspects to the future sustainability of the PC-PIT: (i) embedding the PC-PIT approach into existing QI frameworks, and (ii) further research into the role of the PC-PIT in supporting performance improvement in primary health care. The PC-PIT approach will be further developed to include a suite of high-quality, validated and free-to-access resources that complement the use of the tool.23,24

Limitations

Although this trial was conducted with volunteer practices, every effort was made to ensure that a range of geographic locations and practice sizes was incorporated in the inter-rater comparison. However, urban and regional practices were over-represented; many rural practices were unable to commit because of perceptions of the time required to complete improvement activities. This is an area that may be of interest to the newly established PHNs, given their formal role as facilitators and supporters of practice engagement in QI. We also acknowledge the lack of consumer involvement during the trial phase of the PC-PIT. Further work to refine and embed the PC-PIT in existing QI programs will seek to involve the Consumers Health Forum of Australia as a key partner in the process, with emphasis on the role of consumer feedback as an embedded feature of external validation.

In relation to the inter-rater comparison, the calculation of the aggregate value of κ over the 13 PC-PIT elements assumes that the κ values are independent, which is unlikely. The lack of independence, however, is unlikely to affect the aggregate value but might increase the standard error to a small degree.

Conclusion

With the renewed focus on the importance of organisational aspects of practice in relation to quality care delivery, the time is right to adopt a standardised, internally led approach to improving practice performance, designed for the dynamic context of primary care. Work with our key partners and end users is ongoing, with the aim of further trialling and embedding the PC-PIT within existing QI initiatives.

Box 1 –
Interpretation of the staff and independent practice visit (IPV) ratings for the PC-PIT elements

Staff rating* 1–3 (perception); IPV rating* 1–3 (objective indicators)
What it means: Staff perceive the practice does not at all meet (rating 1) or only partially meets (ratings 2–3) best practice definition of element. IPV indicates practice does not at all meet (rating 1) or only partially meets (ratings 2–3) best practice definition of element.
What it indicates: Improvement needed. Recognised by staff and demonstrated by objective indicators.

Staff rating* 4–5 (perception); IPV rating* 4–5 (objective indicators)
What it means: Staff perceive the practice entirely meets (rating 5) or almost entirely meets (rating 4) best practice definition of element. IPV indicates practice entirely meets (rating 5) or almost entirely meets (rating 4) best practice definition of element.
What it indicates: No or limited improvement needed at this time. Focus is on monitoring and sustaining best practice function.

Staff rating* 1–3 (perception); IPV rating* 4–5 (objective indicators)
What it means: Staff perceive the practice does not at all meet (rating 1) or only partially meets (ratings 2–3) best practice definition of element. IPV indicates practice entirely meets (rating 5) or almost entirely meets (rating 4) best practice definition of element.
What it indicates: Improvements needed. Indication that the best practice processes evidenced in the practice documentation (policy and protocols) are not embedded in practice workflow and/or are unknown by practice staff.

Staff rating* 4–5 (perception); IPV rating* 1–3 (objective indicators)
What it means: Staff perceive the practice entirely meets (rating 5) or almost entirely meets (rating 4) best practice definition of element. IPV indicates practice does not at all meet (rating 1) or only partially meets (ratings 2–3) best practice definition of element.
What it indicates: Improvements needed. Indication that the best practice processes perceived by staff are not evidenced in practice documentation (policy or protocols).


PC-PIT = Primary Care Practice Improvement Tool. * 1–5 Likert scale.
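For readers implementing PC-PIT reporting, the interpretation matrix in Box 1 reduces to a simple two-way classification of the staff and IPV rating bands. The sketch below is one possible encoding (an assumption for illustration, not part of the published tool).

```python
def interpret(staff_rating: float, ipv_rating: float) -> str:
    """Map a median staff rating and an IPV rating (both on a 1-5 Likert scale)
    to the Box 1 interpretation for a PC-PIT element."""
    staff_meets = staff_rating >= 4    # 4-5: element (almost) entirely met, as perceived by staff
    ipv_meets = ipv_rating >= 4        # 4-5: element (almost) entirely met on objective indicators

    if staff_meets and ipv_meets:
        return ("No or limited improvement needed at this time; "
                "focus on monitoring and sustaining best practice function.")
    if not staff_meets and not ipv_meets:
        return "Improvement needed; recognised by staff and demonstrated by objective indicators."
    if ipv_meets:   # documented processes exist but staff perceive the element as unmet
        return ("Improvements needed: documented best practice processes are not embedded "
                "in practice workflow and/or are unknown by practice staff.")
    return ("Improvements needed: processes perceived by staff are not evidenced "
            "in practice documentation (policy or protocols).")

# Example: a staff median of 4 but an IPV rating of 2 flags a documentation gap
print(interpret(4, 2))
```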

Box 2 –
Practice types and illustrative interview quotes from the independent practice visits (IPVs)

First practice type: Separate clinical and organisational management processes; lack of coordinated approach

PC-PIT element: Governance — organisational management (staff rating* 3; IPV rating* 2)
Examples from qualitative interviews: "We have separate but … regular admin meetings; just no joint meetings with the GPs. I can't make any changes here, I'm not allowed to really … and so there's just no way to do it … I don't even know when [the GPs] are planning leave. We don't know who is following up any urgent pathology or other results, we don't know if we should be offering patients appointments with other GPs so we can't even tell [patients] when their GP will be back … and I don't know what to tell the reception staff to do … I developed up this flow table, which shows what we have to do, it can go in our manual but we're not doing it in practice. We need to sort this out — it's part of our 2015 accreditation but there's just no motivation" (practice manager)
IPV sources: Policy and procedures manuals; practice manager interview; agenda and meeting minutes (administrative and clinical meetings)
Improvements identified: Developing (i) a staff leave recording system, and (ii) a formalised GP buddy system using established protocol developed by practice manager and GP, following accreditation requirements

PC-PIT element: Performance — performance results (staff rating* 4; IPV rating* 2)
Examples from qualitative interviews: "We have a PenCat report on our type 2 diabetes patients — it shows the number of patients and treatment information … I send it to the GP and registrars to help with our service delivery" (practice manager). A review of the report by the IPV raters showed the data were incorrect. There was a significant underestimate of current active type 2 diabetes patients. A further review of patient data showed this was primarily due to a lack of consistent diagnosis information recorded for patients with type 2 diabetes. The practice manager was not aware the report was incorrect. "There aren't standard approaches to data entry — for clinical data into our patient files; we have a lot of registrars that come and go … they enter things the way they want — we haven't got a standard way of entering information. I think we could develop a standard system for the common things like diabetes, a session for new registrars and have a reminder sheet … I haven't spoken with the practice manager about it … we don't really get together to discuss problems" (practice nurse)
IPV sources: Practice software and PenCat report on patients with active type 2 diabetes; practice manager and practice nurse interviews
Improvements identified: Practice manager to undertake further training in the use and interpretation of the PenCat tool and reports; practice manager and practice nurse to develop protocols to guide clinical data entry for visiting registrars; role of the practice nurse in data cleaning to be defined and formalised, with initial focus on patients with type 2 diabetes

Second practice type: Primary focus on clinical governance; organisational management is basis for clinical support

PC-PIT element: Change management — incentives (staff rating* 3; IPV rating* 5)
Examples from qualitative interviews: "There are a range of incentives that are available … they're mostly for the GPs but there are some for the staff … maybe [the staff] don't really know about them … or maybe we don't update them and tell them … it's sort of something I guess we need to keep track of" (practice manager). In reviewing the available evidence, the IPV raters found there were policies concerning paid leave and financial support for staff to attend training and conferences, but it was clear from the median staff score that the staff were unaware of the available incentives. While these incentives may have been part of documented practice policy, they were not a part of daily workflow or staff performance review
IPV sources: Human resource manuals; policy and procedures manuals; meeting minutes; position descriptions; practice nurse, practice manager and GP interviews
Improvements identified: Developing quarterly news sheet for staff outlining upcoming professional development opportunities approved by practice and method of applying for support to attend; review of existing protocols to support staff education and training in practice

Third practice type: Clinical and organisational management equally important; coordinated and consultative approach to patient care and practice management

PC-PIT element: All elements (staff rating* 4; IPV rating* 4)
Examples from qualitative interviews: "Our principal GP here and myself are talking now … we want to work together on looking at our patients with type 2 diabetes, especially the organisational side of recall and follow-up, with our nurses and admin staff … we think it would be good to see how changes made to the management of our recall and follow-up systems result in better HbA1cs and other outcomes for our patients" (practice manager)
IPV sources: Human resource manuals; policy and procedures manuals; data printout (patients with active type 2 diabetes); meeting minutes; position descriptions; communication book; practice nurse and practice manager interviews
Improvements identified: Initial focus on reviewing current recall and follow-up procedures; working to identify appropriate methods to link


GP = general practitioner. PC-PIT = Primary Care Practice Improvement Tool. * Median rating; 1–5 Likert scale.

Box 3 –
Agreement between raters, by PC-PIT element

Element (sub-element): number of practices for which Rater 1 scored higher / raters agreed / Rater 2 scored higher; κ statistic (95% CI)*; SE

Patient-centred care: 2 / 16 / 1; κ = 0.78 (0.54–1.00); SE = 0.12
Leadership: 1 / 17 / 1; κ = 0.86 (0.68–1.00); SE = 0.09
Governance — organisational management: 4 / 14 / 1; κ = 0.64 (0.35–0.93); SE = 0.14
Governance — clinical governance: 3 / 14 / 2; κ = 0.65 (0.38–0.92); SE = 0.14
Communication — team-based care: 7 / 11 / 1; κ = 0.43 (0.14–0.73); SE = 0.15
Communication — availability of information for patients: 4 / 14 / 1; κ = 0.54 (0.27–0.83); SE = 0.14
Communication — availability of information for staff: 4 / 14 / 1; κ = 0.59 (0.26–0.92); SE = 0.17
Change management — readiness for change: 3 / 15 / 1; κ = 0.68 (0.39–0.96); SE = 0.14
Change management — education and training: 0 / 18 / 1; κ = 0.87 (0.63–1.00); SE = 0.12
Change management — incentives: 1 / 18 / 0; κ = 0.91 (0.74–1.00); SE = 0.09
Performance — process improvement: 2 / 16 / 1; κ = 0.85 (0.69–1.00); SE = 0.08
Performance — performance results: 3 / 16 / 0; κ = 0.86 (0.70–1.00); SE = 0.08
Information and IT: 1 / 18 / 0; κ = 0.93 (0.81–1.00); SE = 0.06


IT = information technology. PC-PIT = Primary Care Practice Improvement Tool. SE = standard error. * κ statistic and associated 95% CI: κ > 0.8 represents almost perfect agreement beyond chance; 0.60 < κ ≤ 0.80 represents substantial agreement; 0.40 < κ ≤ 0.60 represents moderate agreement; 0.20 < κ ≤ 0.40 represents fair agreement; 0.00 ≤ κ ≤ 0.20 represents slight agreement; and κ < 0.00 represents no agreement beyond chance.15 Overall κ coefficient: 0.82 (95% CI, 0.76–0.87); SE, 0.0287. Test for homogeneity: χ2 = 21.34; df = 12; P = 0.046.

Quality tools and resources to support organisational improvement integral to high-quality primary care: a systematic review of published and grey literature

There is growing awareness of the need to improve quality in health care, including in primary care.14 In Australia, this is witnessed by the National Primary Health Care Strategy, with its focus on the importance of quality as a foundation and driver of change.2 There is also an ongoing push for the development of new indicators for performance improvement, quality and safety benchmarking, and change management approaches and strategies for quality improvement (QI) in primary care.2

There are diverse terms and definitions used for QI,5 and varying QI strategies involving structured processes that include assessment, refinement, evaluation and adoption of processes used by individuals, teams, an organisation or a health system, with the aim to enhance some aspect of quality and achieve measurable improvements.6,7 These can include simple tools, such as flow charts and checklists; more complex multiple-method tools, such as re-engineering; and frameworks, such as the Plan, Do, Study, Act (PDSA) and audit cycles.8 These strategies have yielded modest change and are often not sustained over time.8,9 There is increasing evidence that QI initiatives that are locally owned and delivered, team-focused, formative and flexible and involve interorganisational collaboration and networking are more sustainable and yield better outcomes.10,11 The primary care practice team has a responsibility for QI as part of clinical and organisational governance, and team members are encouraged to collaboratively engage in QI activities in areas that will improve the safety or quality of patient health care.12–16 Primary care practices that embrace a QI culture and support QI initiatives are likely to have better health outcomes, better care delivery and better professional development.1,7,17,18

There is currently no single tool available to Australian general practices that combines traditional areas of clinical governance and less widely used aspects of organisational performance.19 In response, the Primary Care Practice Improvement Tool (PC-PIT)11 was co-created by a range of stakeholders using various engagement platforms, including ongoing cyclical feedback from partners and end users. The result is an organisational performance tool tailored to Australian primary care.11 The PC-PIT includes seven key elements integral to high-quality practice performance: patient-centred and community-focused care; leadership; governance; communication; change management; a culture of performance; and information and information technology. Results from the pilot study and the trial of the PC-PIT indicate that this tool offers an appropriate and acceptable approach to internal QI in general practice.11,20 The findings also showed that additional QI tools and resources are necessary to support the seven elements in the PC-PIT.20 Therefore, we aimed to undertake a systematic review of the international published and grey literature to identify existing primary care QI tools and resources that could support organisational improvement related to the seven elements in the PC-PIT. The identified tools and resources were then included in the next phase of the study, which used a Delphi approach to assess their relevance and utility for use in Australian primary care and to complement the PC-PIT.21

Methods

We undertook a systematic review of published and grey literature to identify existing online QI tools and resources to be included in a Delphi study assessing their relevance and utility in Australian general practice.21

Search strategy

In March 2014, we searched the electronic databases CINAHL, Embase and PubMed for articles published between January 2004 and December 2013, using the search strategy outlined in Table 1 of Appendix 1. We searched for articles where the search terms appeared in the title, abstract or subject headings, and limited results to those published in the English language. All searches were designed and conducted in collaboration with an experienced search librarian. We imposed no restrictions on the type or method of QI tool or resource and included any simple tools, multiple-method tools or frameworks that can be used by an individual in the practice, teams in the practice or the whole organisation to improve any aspect of organisational quality related to any of the seven elements in the PC-PIT.

In March–April 2014, we also conducted a comprehensive search of grey literature for documents dated between 1992 and 2012.22,23 This included an iterative manual search of the electronic database GreyNet International (http://www.greynet.org) and relevant government and non-government websites (Appendix 2). We consulted experts in primary care and QI to ensure key electronic databases, organisation websites and online repositories were included in the search. Searches were also conducted using Google Advanced Search (http://www.google.com/advanced_search) and repositories such as OpenGrey (http://www.opengrey.eu), WorldCat (http://www.worldcat.org) and OpenDOAR (http://www.opendoar.org).

For all relevant tools and resources identified through the grey literature search, we also searched in the research databases CINAHL, Embase and PubMed, as well as Google Scholar, for evidence of their use in practice. Search terms used in the grey literature search are shown in Table 2 of Appendix 1.

Finally, we reviewed the bibliographies of all identified relevant studies, reports, websites, databases, tools and resources to identify any additional QI tools and resources for inclusion. All additional tools and resources identified through this snowballing process underwent the screening and assessment process.

Selection of studies, tools and resources

All citations were imported into a bibliographic database (EndNote, version X7). To be included in the review, identified citations, tools and resources had to meet the following eligibility criteria: (1) purpose of the tool or resource is QI; (2) tool or resource is used in the primary care setting or has potential for use in primary care; (3) tool or resource addresses at least one of the seven elements integral to high-quality primary care practice; and (4) tool or resource is available and in the English language.

The initial screening process involved two reviewers (S U and T J) screening the titles and abstracts of published citations and any articles, reports, tools or resources identified through the grey literature, and categorising them as “relevant” or “not relevant” according to the review objective. The full texts of all tools and resources deemed relevant were sought and reviewed by two independent reviewers with expertise in primary care QI (S U and T J) to further assess their relevance according to the eligibility criteria.

There is no single well established assessment or scoring instrument suited for QI tools and resources that covers the broad range of tools and resources included in this review. Therefore, we developed a four-criteria appraisal framework from common sets of criteria proposed for assessing a range of QI tools, resources and initiatives, such as guidelines, instruments, programs and web-based resources (Box 1).24–30 All identified tools and resources that met the eligibility criteria were evaluated for their accessibility (ie, able to be accessed online and at no cost), relevance, utility and comprehensiveness using this four-criteria appraisal framework. Two reviewers (S U and L C) independently gave each tool or resource a score out of 8 using the criteria. Tools or resources with a score of 7–8 were rated as the “best” and passed on to the Delphi study21 for further assessment. Tools and resources rated less than 7 were rejected and not included in further assessment. The reviewers compared their ratings, and any discrepancies were resolved through discussion.
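As a minimal sketch of the appraisal logic just described (the class and field names are illustrative assumptions, not part of the review protocol), the accessibility gate and the 8-point score with its 7–8 retention threshold could be encoded as follows.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appraisal:
    readily_available: bool        # Criterion 1a: easy to access online
    free_of_charge: bool           # Criterion 1b: accessible at no cost
    relevance_points: int          # Criterion 2: 0-2
    utility_points: int            # Criterion 3: 0-3
    comprehensiveness_points: int  # Criterion 4: 0-3

    def score(self) -> Optional[int]:
        # Tools failing either accessibility item are rejected before scoring
        if not (self.readily_available and self.free_of_charge):
            return None
        return self.relevance_points + self.utility_points + self.comprehensiveness_points

    def retained(self) -> bool:
        # Only tools scoring 7-8 out of 8 were passed on to the Delphi study
        s = self.score()
        return s is not None and s >= 7

# Example: a freely available tool scoring 2 + 3 + 2 = 7 would be retained
print(Appraisal(True, True, 2, 3, 2).retained())  # True
```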

Data extraction and synthesis

We created a data extraction template using Microsoft Excel to assist in systematically extracting information about the tools and resources that met the eligibility criteria. A content analysis approach was used to explore each tool or resource to collate the following information: name of the tool or resource, year and country of development, author, name of the organisation that provided access to the tool or resource and its URL, accessibility information or problems, a brief overview of each tool or resource, the QI element(s) it addresses and any supporting evidence (published or unpublished data). If accessible, a copy of the tool or resource was downloaded into the bibliographic database. Any supporting evidence (studies, reports and any other data) on the use of the tool or resource in primary care was also added to the bibliographic database.

Results

The database search yielded 1900 citations after duplicate records were removed (Box 2). After reviewing the titles and abstracts for relevance to the review objective, the total was reduced to 249 articles. Of these, 140 did not meet eligibility criteria and were excluded, leaving 109 articles. Most excluded citations did not meet the eligibility criteria because the tools or resources were not used in primary care settings. From the 109 citations, 76 QI tools or resources were identified (Appendix 3).

The level of empirical evidence for each tool or resource varied substantially — some, such as the PDSA, had numerous studies supporting their use in primary care,31–35 whereas others, such as the Organisational Capability Questionnaire,36 had only been taken to pilot stage. Of the 76 tools and resources identified in the published literature, 37 were rejected because of accessibility problems. Of the remaining 39 tools and resources, 19 scored less than 7 on the four-criteria appraisal and were rejected due to problems related to utility (n = 10), relevance (n = 3) and comprehensiveness (n = 6). This left 20 that were classified as the best tools and resources (Appendix 4).

Through the grey literature search, we identified 186 tools or resources that met the eligibility criteria (Appendix 5). Of these, 12 were rejected because of accessibility problems. A further ten tools or resources were duplicates and also excluded. Of the remaining 164 tools and resources, 131 scored less than 7 on the four-criteria appraisal and were rejected due to problems related to comprehensiveness (n = 99), utility (n = 16) and relevance (n = 16). This left 33 tools or resources identified as the best from the grey literature (Appendix 4).

Of the total 53 best tools and resources identified through published and grey literature, 13 were from Australia and the remainder were from the United Kingdom (n = 14), United States (n = 14), Canada (n = 4), New Zealand (n = 4) and Europe (n = 4). There was significant overlap of the PC-PIT elements covered by the best tools and resources, with most tools relevant to two or more elements integral to high-performing practices. Of the 53 identified tools and resources, 34 predominantly addressed performance, 20 governance, 19 patient-centred care, 15 change management, nine leadership, nine communication, and six information and information technology (Appendix 4).

Discussion

In an effort to strengthen primary care practices, and thereby strengthen the broader health care system, many providers, delivery systems and other organisations are supporting the use of QI initiatives to improve the performance of practices.37 There are currently no published data regarding the available QI tools and resources for Australian primary care. In this review, we identified and synthesised existing primary care QI tools and resources from the international published and grey literature that are relevant to the seven elements integral to high-quality primary care practice,19 which are specifically covered by the PC-PIT.11 Our findings provide data on QI tools and resources that can be used to support QI initiatives in primary care, including complementing and optimising the value of the PC-PIT.

Given the complexity of health care, developing, implementing and assessing QI initiatives is a dynamic, evolving and challenging area.38 This review illustrates the wide range of primary care QI tools and resources that are available. There is substantial variability in the accessibility, comprehensiveness and utility of tools and resources for primary care, as well as the evidence for their use. Many tools and resources require extensive (and often costly) external facilitation, which adds further complexity and limitations to their application in general practice settings.

Variability in evidence

There is a gap in the published literature on QI tools and resources in primary care settings, and the available literature is of varying quality.39,40 This is partly due to the complexities involved in reviewing a heterogeneous set of interventions that are applied in a varying set of contexts.41 This lack of scientific literature has somewhat inhibited the acceptance of QI methods in health care.38 With new approaches, tools and resources being introduced at a rapid pace and disseminated through the World Wide Web, there is some debate about the most effective QI tools and resources for use in the health care setting.7 Although new studies are emerging,38 there is a need for more rigorous evaluations of different QI tools and resources in primary care settings.39,42

Comprehensiveness of tools and resources

There are many approaches and strategies that can be used to improve the quality of primary care practices. These improvement strategies are generally divided into two types: improvement focusing on clinical areas and improvement focusing on quality from a management perspective.6 Although the two may share common themes, they are often seen as discrete parallel activities. For example, the NPS MedicineWise Clinical e-Audits are used to facilitate clinical QI by assisting GPs to review their prescribing practices,43 while the Advanced Access and Efficiency Workbook for Primary Care focuses on improving the organisational quality of the practice to enable patients to see their doctor when they need to.44 Some tools and resources, such as Lean,45 Six Sigma46 and the Manchester Patient Safety Framework,47 are based on theoretical frameworks, whereas others, such as the Canning Data Extraction Tools48 or the eCHAT (electronic case-finding and help assessment tool), are more pragmatic.49 Some tools and resources, such as the PDSA, Six Sigma and Significant Event Analysis, are well known.6,50 Other less well recognised tools and resources range from the simple, such as the Organisational Capability Questionnaire,36 to the more comprehensive, incorporating a range of other supporting tools, such as the UK’s National Health Service (NHS) clinical engagement resources51 and the NHS Scotland Quality Improvement Hub.52

Due to the complexity of primary care practice and the dynamic process of QI, several QI tools and resources could be used in conjunction with each other, or one after another, to yield successful outcomes; for instance, beginning with root-cause analysis, then using either Six Sigma or PDSA to implement a change in processes.38 Another example is the use of tools and resources for improving chronic illness care, such as using the Primary Care Resources and Supports for Chronic Disease Self Management53 in conjunction with the Assessment of Chronic Illness Care,54 with the former focused on self-management support and the latter on improved patient and staff competency in self-management processes.

Accessibility and utility of tools and resources

It can be challenging to engage practices in QI initiatives because primary care clinicians and staff often feel intense time pressures; have competing priorities; lack a culture and leadership that support change; lack resources, capability and capacity; and may fear the perceived costs of undertaking QI.7,17,18,55 Therefore, ease of access and utility are important factors in optimising the acceptance and adoption of QI initiatives in primary care practices.7,18 In line with the literature, the main reasons tools and resources were rejected in this review were that they rated poorly with regard to their comprehensiveness (42%), accessibility (19%), utility (10%) (ie, too complicated, contained difficult language, too time-consuming or required extensive facilitation) and relevance to primary care (8%).

QI efforts need to be substantially more efficient and easy to access and must reduce the burden on practices to maximise their adoption in primary care settings.17 Recognising this need, some health care organisations provide comprehensive online libraries of quality and service improvement tools and resources that are readily accessible and free of charge.56–58 Nonetheless, it is often difficult for busy practitioners to navigate through multiple websites to obtain the right tools or resources for QI. Therefore, a better option would be a suite of QI tools and resources that is embedded into existing quality frameworks.

Support and incentives for quality improvement

Practices need to be supported and incentivised to adopt a QI culture and engage in continuous QI initiatives.7,18 Even the most determined practices are likely to require help in developing their QI capacity, such as skills to identify areas for improvement, knowledge and understanding of QI approaches, how to use data for QI, planning and making changes, and tracking performance over time.37 This demands the commitment of practice leadership and staff to dedicate time and resources to QI activities.37,38 Practices will also require external support, such as technical assistance, learning activities and tools and resources provided by organisations to assist practices undertaking QI initiatives.37

Public and private health care sectors around the world are now linking service quality with provider payment. Both the UK and the US provide financial incentives to some health care providers for adopting improved quality practices. Using a “pay for performance” system can drive and support practices to adopt QI initiatives to improve the quality of their practice and patient outcomes.59 In Australia, the Primary Health Care Advisory Group recently considered new payment mechanisms to better support the primary care system to drive safe and high-quality care.60

Limitations

Our review has several limitations. First, the exclusion of non-English-language literature may have omitted some relevant tools and resources. However, non-English tools and resources could not have been used in Australian primary care without being translated, which was not feasible within the scope of the study. Second, QI initiatives (including tools and resources) are poorly indexed in bibliographic databases.39 As such, we employed broad search strategies that used free text and Medical Subject Headings (MeSH) to optimise our search strategy. While we also included grey literature to capture tools and resources, an exhaustive search was not undertaken due to time constraints. Other studies have reported similar challenges.61,62 In response to this, we consulted with experts in the area to ensure that the key relevant electronic databases, organisation websites and online repositories were not missed in the search. Finally, the four-criteria appraisal framework and the method of rating the tools and resources were subjective and potentially biased, and we did not perform a sensitivity analysis to test the robustness of the assumptions. Hence, caution is required when interpreting the classification and rating of each tool or resource. To address these limitations and increase reliability, the two reviewers who assigned the ratings discussed, checked and agreed on the scoring.

Conclusions

The necessity for QI initiatives permeates health care37,59,63 and presents opportunities to fundamentally improve health in Australia. Engaging primary care practices in QI and practice redesign activities allows them to work toward achieving improved quality, better health and improved patient and provider experiences, as well as reducing the ongoing costs of care.37,64 To ensure these efforts have a positive impact, there is a need to build and sustain the ability of primary care practices to engage in QI initiatives in a continuous and effective way. To foster QI capacity in Australian health care, we have identified tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in improving the quality of their practice, to achieve improved health outcomes. Following this review, a Delphi study was undertaken to evaluate the 53 best tools and resources to assess their relevance and utility in Australian general practice; the results are published elsewhere in this supplement.21

Box 1 –
Criteria for assessing the accessibility, relevance, utility and comprehensiveness of identified tools and resources24–30

Each tool or resource was given a total score out of 8. Those with a score of 7–8 were rated as the “best” and passed on to the Delphi study for further assessment.21

  1. Accessibility of tool (yes/no; if yes to both items, tool or resource is assessed on Criteria 2–4)

    • Readily available (easy to access)
    • Accessible free of charge
  2. Relevance to primary care (2 points, one point for each item)

    • Supports organisational improvement related to the seven elements of the PC-PIT (patient-centred and community-focused care; leadership; governance; communication; change management; a culture of performance; information and information technology) integral to high-quality primary care practice
    • Complements the PC-PIT
  3. Utility (3 points, one point for each item)

    • Ease of use in primary care (structure and layout easy to follow, appropriate language, and feasible [not too time-consuming to use in general practice])
    • Can be used by all practice staff
    • Requires minimal training and support to use (does not require extensive external facilitation)
  4. Comprehensiveness (3 points, one point for each item)

    • Best available content (completeness, coverage, scope, currency of content related to the quality improvement element/s)
    • From a reputable source
    • Has supporting data (research or reports) demonstrating use in practice or potential use in primary care

PC-PIT = Primary Care Practice Improvement Tool.

Box 2 –
Flow diagram outlining selection process for tools and resources

[Correspondence] HIV moments and pre-exposure prophylaxis

The PROUD study (Jan 2, p 53)1 recently reported confirmatory evidence that oral tenofovir disoproxil fumarate–emtricitabine pre-exposure prophylaxis (PrEP) protects men who have sex with men against HIV acquisition. The study showed unexpectedly high HIV incidence (9·0 infections per 100 person-years) in men who asked for PrEP and who were asked to defer. The HIV incidence in this group was three times what was expected on the basis of epidemic trends. This finding is consistent with our observations that people at higher risk for HIV infection were more likely to seek PrEP services, stay in care, and be adherent.

Blaming individual doctors for medical errors doesn’t help anyone


In Australia, estimates suggest that undesired harmful effects from medication or other interventions such as surgery, known as "adverse events", occur in around 17% of hospital admissions. This results in up to 18,000 unnecessary deaths and 50,000 temporarily or permanently disabled patients each year.

Over 50% of adverse events are the result of medical error. Harms are physical, financial and psychological. Adverse events mean patients need to stay in hospital longer, have more treatment and incur financial loss.

Adverse events are the result of errors and violations (deviations from prescribed practice) by health-care professionals. Although errors and violations are the direct and most obvious causes of adverse events, the causes we can control are the working conditions and organisational systems that lead people to make mistakes.

When the pace of work is too fast, health professionals can become distracted and feel under pressure. When supervisors turn a blind eye to non-compliance, teams aren't functioning well, equipment is unavailable or opportunities for training are rare, the willingness and ability of staff to perform reliably are reduced.

Safety cannot be assured by identifying the individuals who make an error. Safety can only be assured by creating conditions in which people can perform well.

Blame is unhelpful

Finding someone to blame and dealing with this person by assuming they are uniquely incompetent (a person-centred approach) is a comforting strategy for those managing risk and for society at large. Much less satisfying is the notion that the majority of health professionals, in the same situation, would make the same mistake and that perhaps the situation, not the professional, is to blame.

The human tendency to blame others’ mistakes on their personal characteristics (ability, personality, attitudes) is even stronger when the outcome of the mistake is more severe (such as a patient’s life being shortened). This makes it difficult to move away from blame even when there is no compelling evidence “person-centred” strategies reduce error rates.

A person-centred approach also exacerbates the feelings of guilt, shame and anxiety that plague health-care professionals in the aftermath of error. These negative emotions, in turn, can lead to denial, avoidance and a failure to learn about the causes of the error. The possibility of putting preventive strategies in place is then limited.

The health-care professional may feel defensive, and this doesn’t help patients or their families learn the truth about what has happened and, in many cases, compounds the distress they feel. Blame means a lost opportunity for learning and can be detrimental to open and honest patient-professional discussions.

What are the alternatives?

Although we can and should focus efforts to reduce the number of medical errors made, errors are inevitable and so we also need to prepare for them more effectively. Both health professionals and patients need better support.

For health professionals, building psychological resilience at an individual and team level may help. Psychological resilience is defined as an individual’s ability to adapt to stress and adversity; to be positive, optimistic and to learn from mistakes. Not everyone is equally resilient and this is where being part of a team or being able to access social support from others is important.

So, what can we do in health care to promote resilience? At present, there is no definitive answer to this question; there is little research evidence available and even fewer recommendations.

In the United States, rapid response teams have been established in acute hospital settings to provide individuals with the support they need after an error. Support is offered either informally within the unit, through trained peer supporters within the hospital or via referral to professional guidance.

Training staff in emotional resilience is one approach that has been reported as successful among nurses transitioning from being students to staff. Mentoring for physicians has also been promoted as a strategy to enhance individual resilience and reduce burnout and stress. Neither approach has yet been evaluated at sufficient scale.

Minimising power differences between team members is important in encouraging people, no matter what their professional status, to speak up, ask questions and check understanding.

Training health-care leaders to invite and appreciate contributions from all team members may provide a basis for greater equality and openness in health-care teams.

When things go wrong in health care, blame is a common but unhelpful response. What we need now are evidence-based strategies that support staff and organisations to use adverse events as an impetus for change.

Reema Harrison, Lecturer & Research Fellow: Patient Safety, University of Sydney and Rebecca Lawton, Professor, Psychology of Healthcare, University of Leeds

This article was originally published on The Conversation. Read the original article.

Other doctorportal blogs

Old but not forgotten: Antibiotic allergies in General Medicine (the AGM Study)

The prevalence of antibiotic allergy labels (AAL) has been estimated to be 10–20%.1,2 AALs have been shown to have a significant impact on the use of antimicrobial drugs, including their appropriateness, and on microbiological outcomes for patients.3,4 Many reported antibiotic allergies are, in fact, drug intolerances or side effects, or non-recent “unknown” reactions of questionable clinical significance. Incorrect classification of patient AALs is exacerbated by variations in clinicians’ knowledge about antibiotic allergies and the recording of allergies in electronic medical records.5–7 The prevalence of AALs in particular subgroups, such as the elderly, remains unknown; the same applies to the accuracy of AAL descriptions and their impact on antimicrobial stewardship. While models of antibiotic allergy care have been proposed8,9 and protocols for oral re-challenge in patients with “low risk allergies” successfully employed,10 the feasibility of a risk-stratified direct oral re-challenge approach remains ill defined. In this multicentre, cross-sectional study of general medical inpatients, we assessed the prevalence of AALs, their impact on prescribing practices, the accuracy of their recording, and the feasibility of an oral antibiotic re-challenge study.

Methods

Study design, setting and population

Austin Health and Alfred Health are tertiary referral centres located in north-eastern and central Melbourne respectively. This was a multicentre, cross-sectional study of general medical inpatients admitted between 18 May 2015 and 5 June 2015; those admitted to an intensive care unit (ICU), emergency unit or short stay unit were excluded from analysis.

At 08:00 (Monday to Friday) during the study period, a list of all general medical inpatients was generated. Baseline demographics, comorbidities (age-adjusted Charlson comorbidity index11), infection diagnoses, and inpatient antibiotic medications (name, route, frequency) were recorded. Patients with an AAL were identified from drug charts, medical admission notes, or electronic medical records (EMRs). A patient questionnaire was administered to clarify AAL history (Appendix), followed by correlation of the responses with allergy descriptions in the patient’s drug chart, EMR and medical admission record. To maintain consistency, this questionnaire was administered by pharmacy and medical staff trained at each site. Patients with a history of dementia or delirium who were unable to provide informed consent were excluded only from the patient questionnaire component of the study. A hypothetical oral antibiotic re-challenge in a supervised setting was offered to patients with a low risk allergy phenotype (Appendix).

Definitions

An AAL was defined as any reported antibiotic allergy or adverse drug reaction (ADR) recorded in the allergy section of the EMR, drug chart, or medical admission note. AALs were classified as either type A or type B ADRs according to previously published definitions (Box 1):12,13

  • type A: non-immune-mediated ADR consistent with a known drug side effect (eg, gastrointestinal upset);

  • type B: immune-mediated reactions consistent with an IgE-mediated (eg, angioedema, anaphylaxis, or urticaria = type B-I) or a T cell-mediated response (type B-IV):

    • Type B-IV: delayed benign maculopapular exanthema (MPE);

    • Type B-IV* (life-threatening in nature): severe cutaneous adverse reactions (SCAR),14 erythema multiforme (EM), fixed drug eruption (FDE), serum sickness, and antibiotic-induced haemolytic anaemia.

Study investigators JAT and AKA categorised AALs; if consensus could not be reached, a third investigator (LG) was recruited to adjudicate.

An AAL was defined as a “low risk phenotype” if it was consistent with a non-immune-mediated reaction (type A), delayed benign MPE without mucosal involvement that had occurred more than 10 years earlier (type B-IV), or an unknown reaction that had occurred more than 10 years earlier. Unknown reactions in patients who could not recall when the reaction had occurred were also classified as low risk phenotypes. All low risk phenotypes were ADRs that did not require hospitalisation. A “moderate risk phenotype” included an MPE or unknown reaction that had occurred within the past 10 years. A “high risk phenotype” was defined as any ADR reflecting an immediate reaction (type B-I) or non-MPE delayed hypersensitivity (type B-IV*).
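These definitions reduce to a small set of decision rules. The Python sketch below is purely illustrative: the study did not publish code, and the data structure, field names and the handling of edge cases (hospitalised reactions, unrecallable MPE timing) are our assumptions, flagged in the comments.

# Illustrative only; the AGM study did not publish analysis code. Field names
# and the handling of edge cases are assumptions noted inline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AllergyLabel:
    adr_type: str                          # "A", "B-I", "B-IV", "B-IV*" or "unknown"
    years_since_reaction: Optional[float]  # None if the patient cannot recall the timing
    mucosal_involvement: bool = False
    required_hospitalisation: bool = False

def risk_phenotype(label: AllergyLabel) -> str:
    """Apply the low/moderate/high risk definitions given in the Methods."""
    # High risk: immediate (type B-I) or severe delayed (type B-IV*) reactions.
    if label.adr_type in ("B-I", "B-IV*"):
        return "high"
    # Low risk phenotypes were ADRs that did not require hospitalisation;
    # hospitalised non-severe reactions are treated here as moderate (assumption).
    if label.required_hospitalisation:
        return "moderate"
    # Type A: non-immune-mediated side effects are low risk.
    if label.adr_type == "A":
        return "low"
    if label.adr_type == "B-IV":
        # Benign MPE without mucosal involvement, more than 10 years ago, is low risk.
        remote = (label.years_since_reaction is not None
                  and label.years_since_reaction > 10)
        if remote and not label.mucosal_involvement:
            return "low"
        # Within 10 years, mucosal involvement, or unrecallable timing (not
        # specified in the paper for MPE): treated here as moderate.
        return "moderate"
    # Unknown reactions: low risk if more than 10 years ago or the timing
    # cannot be recalled; moderate if within the past 10 years.
    if label.years_since_reaction is None or label.years_since_reaction > 10:
        return "low"
    return "moderate"

# Example: an unknown reaction the patient cannot date is classified as low risk.
assert risk_phenotype(AllergyLabel("unknown", None)) == "low"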

AAL mismatch was defined as non-concordance between a patient’s self-reported description of an antibiotic ADR in the questionnaire and the recorded description in any of the medical record platforms (drug charts, medical admission notes, EMR). Infection diagnosis was classified according to Centers for Disease Control/National Healthcare Safety definitions.15
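As a minimal illustration of the mismatch definition (the study correlated records manually; the helper and its crude text normalisation below are ours, not part of the study protocol):

# Hypothetical helper; the study assessed concordance manually, not with code.
def aal_mismatch(self_report, recorded_descriptions):
    """Return True if the patient's questionnaire description does not agree with
    the description recorded on any platform (drug chart, admission note, EMR)."""
    def norm(text):
        return " ".join(text.lower().split()) if text else ""
    reported = norm(self_report)
    return all(norm(recorded) != reported for recorded in recorded_descriptions)

# Example: a self-reported "rash with amoxicillin" against a drug chart, admission
# note and EMR that all record "anaphylaxis" would be counted as a mismatch.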

Statistical analysis

Statistical analyses were performed in Stata 12.0 (StataCorp). Variables of interest were compared between the AAL and no antibiotic allergy label (NAAL) groups. Categorical variables were compared with χ2 tests, and continuous variables with the Wilcoxon rank sum test. P < 0.05 (two-sided) was deemed statistically significant.
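As a rough illustration of these tests (the study used Stata 12.0; the SciPy calls below are a substitution, and the continuous data are invented placeholders), the ceftriaxone comparison reported in the Results can be reproduced from the published counts:

# Illustrative re-analysis in Python rather than the Stata 12.0 used in the study.
from scipy.stats import chi2_contingency, ranksums

# Categorical comparison (chi-squared test): ceftriaxone prescribing in
# 29 of 89 AAL patients v 74 of 368 NAAL patients (reported as P = 0.02).
table = [[29, 89 - 29],    # AAL group: prescribed, not prescribed
         [74, 368 - 74]]   # NAAL group: prescribed, not prescribed
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, two-sided P = {p_value:.3f}")

# Continuous comparison (Wilcoxon rank sum test), eg, age by group.
# These vectors are placeholders; the study used the actual patient ages.
aal_ages = [82, 74, 87, 79, 85, 90, 77]
naal_ages = [80, 71, 88, 76, 83, 69, 92]
statistic, p_age = ranksums(aal_ages, naal_ages)
significant = p_age < 0.05    # two-sided threshold used in the study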

Ethics approval

The human research ethics committees of both Austin (LNR/15/Austin/93) and Alfred Health (project 184/15) approved the study.

Results

Antibiotic allergy label description and classification

The baseline patient demographics for the AAL and NAAL groups are shown in Box 2. Of the 453 patients initially identified, 107 (24%) had an AAL. A total of 160 individual AALs were recorded: 27 were type A (17%), 26 were type B-I (16%), 45 were type B-IV (28%), and 62 were of unknown type (39%) (Box 3). Sixteen of the type B-IV reactions (35%) were consistent with more severe phenotypes (type B-IV*). When the time frame criterion (more than 10 years v 10 years or less since the index reaction) was applied to phenotype definitions, this translated to 63% low risk (101 of 160), 4% moderate risk (7 of 160), and 32% high risk (52 of 160) phenotypes. The antibiotics implicated in AALs and their ADR classifications are summarised in Box 3; 34% of reactions were to simple penicillins, 13% to sulfonamide antimicrobials, and 11% to cephalosporins. Three AAL patients (2.8%) were referred to an allergy specialist for assessment (one with type A, two with type B-I reactions). No recorded AALs were associated with admission to an ICU, while eight either ended or occurred during the index hospital admission (two type A, five type B-I, and one type B-IV).

Antibiotic use

Ceftriaxone was prescribed more frequently for patients with AALs (29 of 89 [32%]) than for those in the NAAL group (74 of 368 [20%]; P = 0.02); flucloxacillin was prescribed less frequently (0 v 21 of 368 [5.7%]; P = 0.02). The rate of prescription of other restricted antibiotics, including carbapenems, monobactams, quinolones, glycopeptides and lincosamides, was low in both groups (Box 4).

Antibiotic cross-reactivity

Seventy patients had a documented reaction to a penicillin (a total of 72 penicillin AALs: 55 to penicillin V or G, eight to aminopenicillins, nine to anti-staphylococcal penicillins), including two patients with two separate penicillin allergy labels to members of different β-lactam classes. Of these, 23 (32.9%) were prescribed and tolerated cephalosporins (Box 5). Of the 55 patients with a penicillin V/G AAL, β-lactam antibiotics were prescribed for 19 patients (34%); one patient received aminopenicillins (1.8%), four first generation cephalosporins (7%), two second generation cephalosporins (3.6%), and 12 received third generation cephalosporins (21.8%). Conversely, 18 patients had documented ADRs to cephalosporins, with a total of 19 AALs (14 to first generation, one to second generation, two to third generation cephalosporins, and two to cephalosporins of unknown generation). Of these, five patients (27.8%) were again prescribed cephalosporins without any reaction, and a further five (27.8%) tolerated any penicillin (Box 5).

Eight patients with AALs (7%) were administered an antibiotic from the same antibiotic class. No adverse events were noted in any of the patients inadvertently re-challenged. Eighty-six AAL patients (77%) reported a history of taking any antibiotic after their index ADR event. Thirteen patients (12%) believed they had previously received an antibiotic to which they were considered allergic, 62 had not (58%), and 32 were unsure (30%).

Recording of AALs

Almost all AALs (156 of 160 [98%]) were documented in medication charts, but only 115 (72%) were documented in admission notes and 81 (51%) in the EMR. Twenty-five per cent of patients had an AAL mismatch. No patients received the exact antibiotic recorded in the AAL.

Hypothetical oral antibiotic re-challenge

Fifty-eight AAL patients (54%) were willing to undergo a hypothetical oral antibiotic re-challenge in a supervised environment, of whom 28 (48%) had a low risk phenotype, seven a moderate risk phenotype (12%), and 23 a high risk phenotype (40%). If patients had received and tolerated an antibiotic to which they were previously considered allergic, they were more likely to accept a hypothetical re-challenge than those who had not (9 of 12 [75%] v 3 of 12 [25%]; P = 0.04).

Discussion

Our expanding geriatric population remains the major user of antibiotics in both community and hospital settings.16 An accumulation of AALs, reflecting both genuine (immune-mediated) allergies and drug side effects or intolerances, follows years of antibiotic prescribing. This is reflected in the high AAL prevalence (24%) in our cohort of older Australian general medical inpatients, notably higher than the national average (18%) and closer to that reported for immune-compromised patients (20–23%).4,17

To understand the high prevalence of AALs and the predominance of low risk phenotypes in our study group requires an understanding of “penicillin past”, as many AALs are confounded by the impurity of early penicillin formulations and later penicillin contamination of cephalosporin products.18,19 Re-examining non-recent AALs of general medical inpatients is therefore potentially both a high yield and a low risk task, considering the low pre-test probability of a persistent genuine penicillin allergy.20–22 While the definition of a low risk allergy phenotype is hypothetical, it is based upon findings that indicate the loss of allergy reactivity over time,20,21,23 the low rate of adverse responses to challenges in patients with mild delayed hypersensitivities,20,22,23 and the safety of oral challenge in patients with similar phenotypes.24

The high rate of type A, non-severe MPE and non-recent unknown reactions in our patients (74% of all AALs; 63% low risk phenotypes) provides a large group of labels that could be re-examined, while the higher use of antibiotics targeted by antimicrobial stewardship programs (eg, ceftriaxone) in AAL patients provides an impetus for change. The increased use of restricted antibiotics (eg, ceftriaxone and fluoroquinolones) and the reduced use of simple penicillins (eg, flucloxacillin) in patients with an AAL were marked. The effects of AALs on antibiotic prescribing have been described in large hospital cohorts and in specialist subgroups (eg, cancer patients).3,4 Associations between AALs and inferior patient outcomes, higher hospital costs and microbiological resistance have also been recently noted.2–4,8,17,25 Re-examining AALs in older patients from an antimicrobial stewardship viewpoint is therefore essential, particularly in an era when multidrug-resistant (MDR) organisms are being isolated more frequently in Australia.26 The association of third generation cephalosporins and fluoroquinolones with MDR organisms and with Clostridium difficile infection further supports the need for re-examining AALs, especially in those with easily resolved non-genuine allergies.27–30

The high rate of potential patient acceptance of an oral re-challenge (54%), especially by those with low risk phenotypes (48%), suggests that this should be explored in prospective studies. The idea of an antibiotic allergy re-challenge of low risk phenotypes is a practical extension of the work by Blumenthal and colleagues,24 who found a sevenfold increase in β-lactam uptake and a low rate of adverse reactions. Another group found that oral re-challenge was safe in children with a history of delayed allergy.23 These are both important advances; while skin-prick allergy testing is sensitive for immediate penicillin hypersensitivity, skin testing (delayed intradermal and patch) lacks sensitivity for delayed hypersensitivities.8,22,31 Incident-free accidental re-challenge with the culprit antibiotic or a drug from a similar class had occurred in some of our patients, adding further support for exploring this approach. A structured oral re-challenge strategy is attractive, as skin-prick testing is potentially expensive and inaccessible for most people.8

Analysing the high rate of AAL mismatch may be a more pragmatic, low-cost approach: not only were AALs absent from a number of medical records, the EMR description of the AAL often differed from the patient’s own report. Incorrect and absent AALs have been raised as a drug safety concern in other centres.6,7,10 Education programs aimed at improving clinicians’ (pharmacy and medical) understanding of allergy pathogenesis could also assist antibiotic prescribing in the presence of AALs.5,10 Questioning the patient and their relatives about the allergy history, and reviewing blood investigations taken at the time of the ADR for evidence of end organ dysfunction or eosinophilia, may also improve the accuracy of phenotyping and severity assessment. Many accumulated childhood allergies reflect the infectious syndrome for which the implicated antibiotic was prescribed, rather than an immunologically mediated drug hypersensitivity.21,23 Referral to allergy specialists at the time of drug hypersensitivity may also reduce over-labelling.

That a clinician questionnaire about antibiotic prescribing attitudes was not administered is a limitation of this study, as was the inability to obtain AAL information from all patients (eg, because of dementia or delirium) or to further clarify “unknown” reactions. Some AAL descriptions are also likely to be affected by recall bias; however, this reflects real world attitudes and prescribing in the presence of AALs. While the prevalence of AALs in younger patients is probably lower than found in this study, the distribution of genuine, non-genuine and low risk allergies may well be the same. In a group of paediatric patients with an AAL for β-lactam antibiotics following non-immediate mild cutaneous reactions without systemic symptoms, none experienced severe reactions after undergoing oral re-challenge.23

Conclusion

AALs were highly prevalent in our older inpatients, with a significant proportion involving non-genuine allergies (eg, drug side effects) and low risk phenotypes. Most patients were willing to undergo a supervised oral re-challenge if their allergy was deemed low risk. AALs were sometimes associated with inadvertent same-class re-challenges, facilitated by poor allergy documentation, without ill effect. AALs were also associated with increased prescribing of ceftriaxone and fluoroquinolones, antibiotics commonly restricted by antimicrobial stewardship programs. These findings support a mandate to assess AALs in the interests of appropriate antibiotic use and drug safety. Prospective studies incorporating AAL assessment into antimicrobial stewardship and clinical practice are required.

Box 1 –
Classification of reported antibiotic allergy labels into adverse drug reaction groups12,13


EM = erythema multiforme; FDE = fixed drug eruption; MPE = maculopapular exanthema; SCAR = severe cutaneous adverse reactions (includes Stevens–Johnson syndrome, toxic epidermal necrolysis, drug rash with eosinophilia and systemic symptoms, and acute generalised exanthematous pustulosis). * These adverse reactions are classified as type B-IV* in this study, denoting their potentially life-threatening nature.

Box 2 –
Baseline demographics for patients with and without antibiotic allergy labels

Characteristic | Antibiotic allergy label group | No antibiotic allergy label group | P
Number | 107 | 346 |
Median age [IQR], years | 82 [74–87] | 80 [71–88] | 0.32
Sex, men* | 38 (36%) | 194 (56%) | < 0.001
Immunosuppressed | 25 (23%) | 29 (8%) | < 0.001
Median age-adjusted Charlson Comorbidity Index score [IQR] | 6 [4–7] | 6 [4–7] | 0.17
Ethnicity | | | 0.38
  European | 106 (99%) | 334 (97%) |
  African | 0 | 2 (1%) |
  Asian | 1 (1%) | 10 (3%) |
Infection diagnosis | 50 (47%) | 140 (41%) | 0.25
Infections (205 patients) | 56 | 151 | 0.002
  Cardiovascular system | 0 | 2 (1%) |
  Central nervous system | 1 (2%) | 3 (2%) |
  Gastrointestinal | 9 (16%) | 9 (6%) |
  Eyes, ears, nose and throat | 0 | 3 (2%) |
  Upper respiratory tract | 7 (13%) | 30 (20%) |
  Lower respiratory tract (including pneumonia) | 12 (21%) | 54 (36%) |
  Skin and soft tissue | 7 (13%) | 14 (9%) |
  Urinary system | 11 (20%) | 21 (14%) |
  Pyrexia (no source) | 3 (5%) | 4 (3%) |
  Sepsis (unspecified) | 5 (9%) | 8 (5%) |
  Other | 0 | 2 (1%) |
Received antibiotics | 45 (42%) | 162 (46%) | 0.43

* There were a total of 232 men and 221 women in the study.

Box 3 –
Spectrum of implicated antibiotics linked with reported antibiotic allergy labels according to adverse drug reaction classification

Implicated antibiotics | Type A | Type B-I | Type B-IV | Type B-IV* | Unknown | Total
All antibiotics | 27 (17%) | 26 (16%) | 29 (18%) | 16 (10%) | 62 (39%) | 160
Simple penicillins* | 7 (26%) | 14 (54%) | 16 (55%) | 4 (25%) | 14 (23%) | 55 (34%)
Aminopenicillins† | 1 (4%) | 2 (8%) | 2 (7%) | 1 (6%) | 2 (3%) | 8 (5%)
Anti-staphylococcal penicillins‡ | 0 | 0 | 1 (3%) | 5 (31%) | 3 (5%) | 9 (6%)
Cephalosporins | 3 (11%) | 1 (4%) | 1 (3%) | 2 (13%) | 11 (18%) | 18 (11%)
Carbapenems§ | 0 | 0 | 0 | 0 | 1 (2%) | 1 (0.6%)
Monobactam | 0 | 0 | 0 | 0 | 0 | 0
Fluoroquinolones | 2 (7%) | 0 | 2 (7%) | 0 | 3 (5%) | 7 (4%)
Glycopeptides | 0 | 0 | 1 (3%) | 1 (6%) | 1 (2%) | 3 (2%)
Lincosamides | 0 | 0 | 1 (3%) | 0 | 2 (3%) | 3 (2%)
Tetracyclines | 4 (15%) | 1 (4%) | 0 | 1 (6%) | 5 (8%) | 11 (7%)
Macrolides | 1 (4%) | 2 (8%) | 1 (3%) | 1 (6%) | 6 (10%) | 11 (7%)
Aminoglycosides | 0 | 0 | 1 (3%) | 0 | 0 | 1 (0.6%)
Sulfonamides¶ | 4 (15%) | 4 (15%) | 3 (10%) | 1 (6%) | 9 (15%) | 21 (13%)
Others | 5 (19%) | 2 (8%) | 0 | 0 | 5 (8%) | 12 (8%)

All percentages are column percentages, except for the “all antibiotics” row. * Benzylpenicillin, phenoxymethylpenicillin, benzathine penicillin. † Amoxicillin, amoxicillin–clavulanate, ampicillin. ‡ Flucloxacillin, dicloxacillin, piperacillin–tazobactam, ticarcillin–clavulanate. § Meropenem, imipenem, ertapenem. ¶ Trimethoprim–sulfamethoxazole, sulfadiazine.

Box 4 –
Antibiotic use in patients with and without an antibiotic allergy label

Antibiotic class prescribed | Antibiotic allergy label group | No antibiotic allergy label group | P
Total number of patients | 89 | 368 |
β-Lactam penicillins | 14 (16%) | 120 (35%) | 0.02
  Simple penicillins* | 4 (5%) | 32 (9%) | 0.27
  Aminopenicillins† | 8 (9%) | 52 (14%) | 0.22
  Anti-staphylococcal penicillins‡ | 2 (2%) | 36 (10%) | 0.02
Carbapenems§ | 2 (2%) | 5 (1%) | 0.63
Cephalosporins (first/second generation) | 8 (9%) | 20 (5%) | 0.22
Cephalosporins (third or later generation) | 29 (33%) | 82 (22%) | 0.05
Monobactam | 0 | 0 | NA
Fluoroquinolones | 5 (6%) | 6 (2%) | 0.04
Glycopeptides | 3 (3%) | 12 (3%) | 1
Tetracyclines | 6 (7%) | 46 (13%) | 0.14
Lincosamides | 0 | 0 | NA
Others | 26 (29%) | 109 (30%) | 1

NA = not applicable. * Benzylpenicillin, phenoxymethylpenicillin, benzathine penicillin. † Amoxicillin, amoxicillin–clavulanate, ampicillin. ‡ Flucloxacillin, dicloxacillin, piperacillin–tazobactam, ticarcillin–clavulanate. § Meropenem, imipenem, ertapenem. Some patients received more than one antibiotic.

Box 5 –
Antibiotic use in patients with penicillin and cephalosporin antibiotic allergy labels


Patients with documented allergy to penicillins* (n = 70)

Antibiotics prescribed:
  Any antibiotics | 28 (40%)
  More than one class of antibiotic | 31 (44%)
  Culprit group penicillins† | 1 (1.4%)
  Non-culprit group penicillins | 2 (2.9%)
  First generation cephalosporins | 4 (5.7%)
  Second generation cephalosporins | 2 (2.9%)
  Third generation cephalosporins | 17 (24%)
  Carbapenems | 2 (2.9%)
  Fluoroquinolones | 4 (5.7%)
  Glycopeptides | 2 (2.9%)
  Aminoglycosides | 2 (2.9%)
  Lincosamides | 0

Patients with documented allergy to cephalosporins (n = 18)

Antibiotics prescribed:
  Any antibiotics | 10 (56%)
  More than one class of antibiotic | 7 (39%)
  Culprit generation cephalosporins | 1 (5.6%)
  Non-culprit generation cephalosporins | 3 (17%)
  Other | 1 (5.6%)
  Any penicillins* | 5 (28%)
  Carbapenems | 1 (5.6%)
  Fluoroquinolones | 1 (5.6%)
  Glycopeptides | 1 (5.6%)
  Aminoglycosides | 1 (5.6%)
  Lincosamides | 0

* Penicillins (benzylpenicillin, phenoxymethylpenicillin, benzathine penicillin); aminopenicillins (amoxicillin, amoxicillin–clavulanate, ampicillin), and anti-staphylococcal penicillins (flucloxacillin, dicloxacillin, ticarcillin–clavulanate and piperacillin–tazobactam). † Prescription of culprit group penicillin: received any penicillin from the same group as that to which the patient is allergic. This patient had a documented allergy to an unknown generation of cephalosporin, and received ceftriaxone.

[Correspondence] Health equity for LGBTQ people through education

We applaud The Lancet Editors (Jan 9, p 95)1 for drawing attention to new initiatives to improve the health and wellbeing of lesbian, gay, bisexual, transgender, and queer (LGBTQ) people worldwide. Many challenges remain, but the US Department of Health and Human Services report presents a strategy for change that could inform the efforts of other nations. However, one important aspect is missing from the worldwide conversation on addressing the health needs of LGBTQ people: educating ourselves.

[Editorial] UK PrEP decision re-ignites HIV activism

On March 21, NHS England announced that, contrary to expectation, it will not proceed with a scale-up of pre-exposure prophylaxis (PrEP) for prevention of HIV infection among at-risk populations. Saying it was “not responsible for commissioning HIV prevention services”, the agency effectively dropped plans to hold public consultations and instead tepidly said it would work with local health authorities to consider how to make the anti-HIV drugs more widely available. Few other details were offered.

National talks on remote area nurse safety

Improvements in the security of remote area nurses have been put off to a future meeting of Federal, State and Territory health ministers.

In a statement issued following a meeting with remote health service operators and representatives, Rural Health Minister Fiona Nash said there had been “a number of worthy, original and thoughtful ideas”, which she would carefully consider and raise with her State and Territory counterparts “over the coming weeks”.

The meeting was convened in the wake of the fatal attack on Gayle Woodford, 56, who was working as a nurse in the remote Fregon community in the Anangu Pitjantjatjara Yankunytjatjara (APY) lands of north-west South Australia. A 34-year-old man, Dudley Davey, has been charged with her murder.

The murder has ignited a campaign for improved security for nurses working in remote areas, including calls for the abolition of single-nurse posts and new rules requiring health workers attending call-outs and emergencies to operate in pairs. As at 8 April, almost 130,000 people had signed a petition calling for the changes.

The sector also faces the threat of a mass walkout of staff. A survey of 800 regional nurses cited by the Adelaide Advertiser indicates 42 per cent would quit if single nurse posts are retained.

The fatal attack on Ms Woodford is but the latest in a series of incidents and assaults on remote area nurses. A University of South Australia study of 349 such nurses, undertaken in 2008, found almost 29 per cent had experienced physical violence, and 66 per cent had felt concerned for their safety.

The study found that there had been a drop in violence against nurses since 1995, coinciding with a reduction in the number of single nurse posts.

Senator Nash paid tribute to health workers in remote areas and acknowledged that they faced “unique and difficult challenges”, but held back from endorsing any particular course of action to improve security.

Part of the problem she faces is that the ability of Federal and State governments to act to improve health worker safety is constrained because remote area health services are independently run, often by Aboriginal communities.

Senator Nash said she would respect the independence of service operators.

“Whilst the Federal Government funds many of these remote services, they are, in fact, independently run, as they should be,” she said. “I will not break Australia’s long-standing multi-partisan commitment to Indigenous self-determination by telling these health providers how to run their services.

“Remote health services do the work on the ground and they know best, so I will be asking them for their ideas on this important issue.”

Adrian Rollins

[Comment] Lean economies and innovation in mental health systems

Poor access to mental health care is widely reported, although it differs according to sociopolitical and economic contexts. In emerging economies, including Brazil, Russia, India, China, and South Africa (BRICS), there has been increased public investment in recent years, but rapid economic growth in these countries has now slowed. Precarious global transitions affect both the burden of mental health problems and demand for services. Innovations prompted by these transitions, in both high-income and low-income countries, could help meet population needs during times of economic shock, whether scarcity or affluence.

Military should get annual check up

Australian Defence Force personnel would undergo annual mental health checks under plans backed by the AMA to tackle rates of depression, post-traumatic stress disorder and suicidal thoughts in the military.

A parliamentary committee inquiring into the mental health of soldiers, sailors and air force personnel found that, although in the short term they were no more prone to mental health problems than the broader community, the nature of their work meant the types of problems they experienced were not the same.

The 2010 ADF Mental Health Prevalence and Wellbeing Study found that 22 per cent of Defence personnel had experienced a mental disorder in the previous 12 months, roughly similar to the rate found in a sample of the general community, while almost 7 per cent had suffered multiple problems.

But although, in the short term, the prevalence of problems was approximately the same, over their lifetime, ADF personnel were found to be more at risk of mental health problems.

Military personnel were found to be less prone to alcohol abuse, but they were more likely to suffer depression, and to think about and plan suicide. The most common mental health problem, however, was anxiety, particularly post-traumatic stress disorder.

AMA President Professor Brian Owler said this reflected the particular characteristics of their work, including experiences during deployment overseas and long absences from family and support networks.

Professor Owler said a recommendation from the Foreign Affairs, Defence and Trade References Committee for annual mental health screening was a welcome proposal.

“Annual screening would help ensure that mental health problems are identified at a much earlier stage, would support early intervention, and lead to much better mental health outcomes for affected personnel,” the AMA President said.

He also endorsed the Committee’s call for a unique identifier number for veterans linked to their service and medical records.

In 2013, the Federal Government gave in-principle support to a similar idea put forward by the Joint Standing Committee on Foreign Affairs, Defence and Trade, but Professor Owler said there appeared to have been little progress made on it since.

“A unique or universal identifier could help improve health outcomes for these patients,” Professor Owler said.

The AMA President said it would support the transition of personnel out of Defence Force-funded health services into those provided by the Department of Veterans’ Affairs or the mainstream health system, and would enable tracking of the health of former ADF personnel over time, which was critical to research.

He said there was strong support for the idea among veterans’ groups, and called on the Government and bureaucracy to fast-track the initiative.

Adrian Rollins