
Modern challenges in acute coronary syndrome

Despite a growing evidence base, gaps in knowledge and practice leave room for improvement in the treatment of acute coronary syndrome

A few decades ago, there was still controversy about the relative importance of interruption of blood flow versus myocardial tissue oxygen demand in causing myocardial infarction.1,2 It is now universally accepted that coronary thrombosis at the site of an unstable atherosclerotic plaque is the usual cause of coronary occlusion,3 and that unstable angina, non-ST-elevation myocardial infarction (NSTEMI) and ST-elevation myocardial infarction (STEMI) together comprise the clinical complex now called acute coronary syndrome (ACS).

An important observation from the investigations at the start of the “reperfusion era” was the recognition that STEMI and NSTEMI, while both due to coronary thrombosis, had quite different presentations and natural histories.4 Important differences between the pathophysiology of STEMI and NSTEMI determine the focus of treatment. In STEMI, the complete occlusion of the coronary vessel initiates a cascade of myocardial necrosis, which can be prevented by early reperfusion with percutaneous coronary intervention or fibrinolytic therapy.5 In NSTEMI, the less complete occlusion of the coronary vessel means there is less immediate urgency to salvage myocardium, and the initial focus is on antithrombotic therapy to limit the size and instability of the thrombosis in the coronary artery. In this situation, the size, shape and location of the coronary thrombosis are highly variable. The patient’s clinical course can be unpredictable, and progression to STEMI is a pervading concern. In patients with NSTEMI who are at high risk, an early invasive approach has been shown to be superior to a conservative approach,6 but the optimal timing of this remains controversial.7 These major advances in understanding this symptom complex have driven quantum shifts in management approaches and greatly improved outcomes for patients who have suffered a heart attack. However, it remains a condition which can be unpredictable and, despite the best of modern treatments, can still be lethal. As ACS is a symptom of underlying coronary heart disease, long-term management is often more important than the acute phase. This supplement focuses on the many challenges in managing ACS.

The first two articles in this supplement deal with managing the acute stage of ACS. The many valuable guidelines on this topic,8–12 not reiterated in detail in the supplement, all concur on the basics of modern therapy. The use of potent antithrombotic agents is central to tackling the coronary thrombosis, albeit with an increased risk of bleeding. While controversies continue over the ideal duration of antiplatelet therapy, the evidence to support routine early and post-hospital use of potent antiplatelet agents is overwhelming. Statin therapy is also central to the management of the acute episode and for long-term management, irrespective of the low-density lipoprotein cholesterol level at the time of the episode. The roles of β-adrenergic blockers and inhibitors of the renin–angiotensin–aldosterone system remain important, but these agents are perhaps better targeted to patients at higher risk. The guidelines, while sometimes exhaustingly complete, do not cover all aspects of management.

In the first article in the supplement, Brieger focuses on the identification of patients with ACS who are at high risk (https://www.mja.com.au/doi/10.5694/mja14.01249). He argues that routine risk stratification as soon as possible after presentation will determine the clinical pathway, and that this practice should be embedded in the hospital system — it is too important to leave to ad hoc and potentially unreliable clinical judgement. This is a challenging change in approach for the hospital system, but one bound to be fruitful in reducing decision time when early revascularisation is needed, and in avoiding unnecessary intervention when it is not.

Next, McQuillan and Thompson review the limited evidence to guide management in four important subgroups: female, older, diabetic and Indigenous patients (https://www.mja.com.au/doi/10.5694/mja14.01248). These subgroups have been underrepresented in clinical trials, in contrast with the rich and detailed evidence base that guides the care of most other patients with ACS. There is also evidence that these subgroups are at particular risk, yet clinical decisions must often be based on extrapolation from clinical trial results without absolute certainty that the evidence is applicable.

The other articles in the supplement deal with the challenges in caring for post-ACS patients at the time of discharge from hospital and handover to the general practitioner. This transition can cause confusion for the patient and frustration for the GP, who may be faced with patients returning to the practice with major changes in their management incompletely documented, and with uncertainty about how best to access the services available to them.

Redfern and Briffa use data from three registries to describe common shortfalls in the transition from hospital to primary care (https://www.mja.com.au/doi/10.5694/mja14.01156). The challenges in improving access to effective secondary prevention are concisely summarised, with positive guidance on how to improve secondary prevention in primary care: raising awareness of the need for lifelong secondary prevention, better integrating and using existing services, considering registry data for monitoring and quality assurance, and embracing new technologies such as automated texting reminders to patients, as already outlined at a summit on this topic last year.13

Thompson and colleagues summarise the extensive evidence base for ideal post-hospital therapy (https://www.mja.com.au/doi/10.5694/mja14.01155), focusing on the 50% of patients who do not receive coronary intervention or revascularisation at the time of their acute episode.14 The extensive collaboration on clinical trials and registries that has gone into developing the rich evidence base is a source of pride in modern cardiology, but many gaps in evidence remain.

Thakkar and Chow reassert the truism that drugs do not work in patients who do not take them (https://www.mja.com.au/doi/10.5694/mja14.01157); there is evidence that non-adherence among post-ACS patients is common and associated with adverse outcomes.15 Their review summarises strategies to improve adherence to prescribed medications, and touches on the future possibility of a polypill that combines evidence-based therapies in a single formulation to improve adherence.

Finally, Vickery and Thompson take the GP’s perspective in managing the post-ACS patient and describe eight common challenges that GPs face in this setting (https://www.mja.com.au/doi/10.5694/mja14.01250). The need for courteous, detailed communication between the hospital and primary care is highlighted.

The common theme of each article in this supplement is that progress has been impressive, but much remains to be done to continue the improvements in understanding and to translate the knowledge we already have into further improvements in outcomes. The disturbing evidence from recent Australian nationwide surveys that the application of proven evidence-based therapies remains less than optimal16 presents a major challenge in the modern management of ACS.

Splenic rupture: a rare complication of infectious mononucleosis

A 28-year-old man presented to the emergency department with acute left upper quadrant tenderness and postural hypotension. He reported having had fever and cervical tenderness for 1 week before his presentation.

Blood tests showed an elevated white cell count with reactive lymphocytosis. A test for infectious mononucleosis heterophile antibody was positive, consistent with recent infection.

A contrast scan of the abdomen showed splenomegaly with subcapsular haematoma.

Splenic rupture after infectious mononucleosis is rare (incidence, 0.1%–0.5%), but can have disastrous consequences if overlooked.1,2

Artefactual “in-vitro coagulopathy” in a patient with non-Hodgkin lymphoma and lower gastrointestinal bleeding

Clinical record

An 80-year-old man presented in August 2012 with lower gastrointestinal bleeding 3 days after a routine colonoscopy and polypectomy as a day procedure. Five sessile polyps had been excised from the right colon (one from the caecum, one from the hepatic flexure and three from the transverse colon); the largest polyp measured 8 mm. Additionally, a 20 mm sessile polyp was excised from the rectum by means of chromogel elevation and endoscopic mucosal resection.1 A resultant defect in the muscularis propria was closed with five endoclips. The patient was well after the procedure and was discharged that evening. Subsequent histopathological examination showed that the polyps were tubular or tubulovillous adenomas without dysplasia.

Three days after the initial polypectomy, the patient was woken from sleep by abdominal discomfort and an episode of passing about 500 mL of fresh blood from the rectum. On presenting to the emergency department, he continued to bleed, although at a slower rate. He was asymptomatic and haemodynamically stable. His haemoglobin level was 115 g/L (reference interval [RI], 130–180 g/L) on presentation, and dropped to 98 g/L after a second large bleed 8 hours later. His blood urea nitrogen level on admission was elevated (12.6 mmol/L; RI, 3.0–8.0 mmol/L), consistent with a gastrointestinal bleed.

The patient had multiple medical comorbid conditions, including non-Hodgkin lymphoma (NHL). He had relapsed with NHL a month earlier with a rise in his IgM paraprotein levels to 5.0 g/L (RI, 0.0 g/L) after 8 years in remission. He also had hypertension, ischaemic heart disease, Paget disease and prostate cancer. Previous surgery included repair of type B aortic dissection and a splenectomy. His medications on admission included aspirin, simvastatin, amlodipine, metoprolol and frusemide.

Initial results of blood tests (Table) showed an apparent coagulopathy, with a prolonged prothrombin time (PT) and activated partial thromboplastin time (APTT), an international normalised ratio (INR) of 2.0, but a normal thrombin time. The haematology team was consulted to assess and, if necessary, haemostatically correct the apparent coagulopathy before endoscopic assessment.2,3

The differential diagnoses of concurrently prolonged APTT and PT are concomitant warfarin and heparin therapy, supratherapeutic warfarinisation or heparinisation, inherited clotting factor deficiencies, acquired clotting factor deficiencies (secondary to disseminated intravascular coagulation, liver disease or vitamin K deficiency) and acquired inhibitors of clotting factors.4 The patient was not receiving anticoagulation therapy. He had no previous history of bleeding or liver disease, and his fibrinogen levels were high (5.6 g/L; RI, 2.0–4.3 g/L), arguing against an acquired or hereditary clotting factor deficiency. Most importantly, mixing studies (which combine patient plasma with donor plasma that contains a normal concentration of clotting factors) did not correct the apparent coagulopathy. This is in keeping with the presence of either a clotting factor inhibitor or antiphospholipid antibodies.4

Further investigations showed moderately positive lupus anticoagulant levels and no specific factor VIII or IX inhibitor. Lymphoproliferative disorders are well known to cause abnormalities of coagulation studies and, in particular, can cause lupus anticoagulant activity.5 It was concluded that the apparent coagulopathy was an in-vitro phenomenon caused by an interaction between lupus anticoagulant and the reagents used in the coagulation studies. In the absence of any clinical features of antiphospholipid syndrome, it was considered that there was no need for active treatment (which would have consisted of anticoagulant therapy) in response to the lupus anticoagulant finding.6

After prompt resolution of the possible coagulopathy issue by laboratory staff and haematologists (ie, establishing that there was indeed no in-vivo coagulopathy), the patient underwent urgent flexible sigmoidoscopy for ongoing rectal bleeding that evening, 17 hours after presentation. He was found to have active bleeding from the complicated rectal polypectomy site. Haemostasis was achieved with thermocoagulation and clipping. He recovered well and was discharged home 48 hours later having experienced no further bleeding and maintaining stable haemoglobin levels.

Results of the patient’s coagulation studies on admission

Study | Value | Reference interval
Prothrombin time (PT) | 25 seconds | 11–18 seconds
Activated partial thromboplastin time (APTT) | 75 seconds | 25–36 seconds
International normalised ratio | 2.0 | na
Mixing study | No correction of prolonged APTT and PT | na
Thrombin time | 16 seconds | 10–17 seconds

na = not applicable.
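For readers unfamiliar with how the INR of 2.0 relates to the prolonged PT shown above, the INR is derived from the patient's PT, the laboratory's mean normal PT and the international sensitivity index (ISI) of the thromboplastin reagent. The following is an illustrative calculation only; the mean normal PT of 12.5 seconds and the ISI of 1.0 are assumed values, as the laboratory's actual calibration figures are not reported in this case.

$$\text{INR} = \left(\frac{\text{PT}_{\text{patient}}}{\text{PT}_{\text{mean normal}}}\right)^{\text{ISI}} = \left(\frac{25}{12.5}\right)^{1.0} = 2.0$$

Under these assumed values, the result matches the INR reported in the table, illustrating that the INR is simply a standardised expression of the prolonged PT rather than an independent abnormality.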

Bleeding after polypectomy is not uncommon, occurring in about 2% of patients overall, and in up to 7% of patients after an endoscopic mucosal resection involving a muscularis propria defect.1,7 The main predictors of bleeding include the type and size of the polyp (sessile polyps larger than 20 mm confer a high risk), anticoagulation or antiplatelet therapy and multiple comorbid illnesses.1,8 The management of bleeding after polypectomy consists of resuscitating the patient, correcting any coagulopathy and then performing a colonoscopy or urgent computed tomography angiography.3 Despite remaining haemodynamically stable, our patient was considered to be at high risk because of his multiple comorbid illnesses, persistent bleeding, anaemia, aspirin use and elevated blood urea nitrogen test result.1,8 In the absence of the haemostatic anomaly, he would have undergone urgent endoscopic intervention within 2 hours of presenting to the emergency department. By contrast, the flexible sigmoidoscopy was delayed for more than 10 hours to allow further investigation of the apparent coagulopathy.

In this patient’s case, the initial blood workup — with mixing studies not correcting the anomalies — suggested the presence of either a clotting factor inhibitor or lupus anticoagulant.4 Differentiating between these was crucial to the patient’s care in both the short and the long term, and very challenging to achieve. If a clotting factor inhibitor had been found, the patient would have required high-level and expensive haemostatic support, potentially including the use of activated factor VII. On the other hand, the finding of a moderate level of lupus anticoagulant would not be a cause for active intervention, because the patient did not have any clinical features of antiphospholipid syndrome.6

Lessons from practice

  • Abnormalities in coagulation studies may not represent an in-vivo coagulopathy.
  • If abnormal coagulation results are not corrected by mixing studies, the main differential diagnoses are the presence of a clotting factor inhibitor or lupus anticoagulant activity.
  • Differentiating between these is essential, as one is a true coagulopathy that might require high-level haemostatic support and the other represents a pro-thrombotic state.
  • The diagnosis of lupus anticoagulant activity is complex, especially in a clinical scenario with circulating serum paraprotein, and many laboratories misinterpret these cases as clotting factor inhibitors.

Distinguishing between lupus anticoagulant activity and the presence of a clotting factor inhibitor was particularly difficult because of the high levels of paraprotein in the patient’s serum.9,10 In a recent study of 93 members enrolled in the lupus anticoagulant module of the Special Haemostasis Program of the Royal College of Pathologists of Australasia Quality Assurance Program, only 58% of laboratories correctly diagnosed the presence of lupus anticoagulant in a sample that was strongly positive for lupus anticoagulant and that also had a high titre of paraprotein.9 Many laboratories instead incorrectly identified specific factor inhibitors. If this had happened in our case, it would have led to efforts to reverse the apparent coagulopathy.

This case is an example of a rare haematological diagnostic problem, described only in case reports, leading to delayed definitive therapy in a bleeding patient. It highlights the importance of a sound knowledge of the science underlying the investigations that we order. Physicians must understand the distinction between abnormal results and actual abnormality. Early involvement of the haematology team prevented us from attempting unnecessary reversal of a suspected coagulopathy. Attempting to do so would have: (i) further delayed the definitive treatment of this patient; (ii) exposed the patient to needless intervention and possible adverse events; (iii) been potentially very expensive (had activated factor VII been contemplated); and (iv) likely failed to correct the in-vitro defect in any case.

Taking the inferior out of inferior vena cava filter follow-up

To the Editor: Follow-up of inferior vena cava (IVC) filters after insertion is a task that is variably successful. This was highlighted by a recent article describing poor removal rates of IVC filters at our institution between 2007 and 2009.1 Since that time, the interventional radiology (IR) department has established a filter database and clinic with the aim of improving IVC filter monitoring and removal. This radiology-driven initiative has been integrated into the standard interventional procedures and has proven extremely effective. Based on the success of this program, we advocate strongly that IVC filter follow-up should be the responsibility of those who provide the insertion service.

Between 2011 and 2013, all 87 IVC filters inserted by IR at St Vincent’s Hospital Melbourne have been accounted for; 44 have been removed, 33 were planned not for removal (eg, patients with a poor prognosis or contraindications), and 10 are awaiting removal or review in our IR clinic. The average indwelling time for IVC filters inserted between 2011 and 2013 was 23 weeks (range, 1–130 weeks), with 71/87 filters remaining in situ for less than 6 months. The average time in situ fell from 35 weeks in 2011 to 14 weeks in 2013. These results contrast with 2008–2010 data from our institution, where only 16/68 IVC filters were removed or planned for removal, 19/68 were planned not for removal or the patient was deceased, and 33/68 were lost to follow-up (Box).

Concerns about inadequate IVC filter removal and follow-up have been reported in the literature, particularly as the use of filters has increased.2,3 Potential complications associated with “forgotten” IVC filters include thrombogenesis, caval perforation, and filter fracture and migration. However, the important role of IR in tracking and removing lost filters is becoming widely recognised, with a number of studies outlining improvements in IVC filter follow-up that have resulted from radiology-led initiatives.4,5 Our data support the conclusion that IR departments, through easily implemented strategies, can and should take responsibility for tracking and removing all IVC filters they insert.

Inferior vena cava filter outcomes at St Vincent’s Hospital Melbourne

Re-treating bleeding hereditary haemorrhagic telangiectasia with bevacizumab

To the Editor: Symptomatic bleeding in patients with hereditary haemorrhagic telangiectasia (HHT) has been reported to respond to bevacizumab treatment. In this Journal in March 2011, Cruikshank and Chern described the successful treatment of gastric HHT with bevacizumab.1 Here, we present the result of a rechallenge with bevacizumab in the same patient.

Following 18 months of being haemorrhage-free after therapy with bevacizumab, the 72-year-old man returned to the clinic with an increased frequency of epistaxis and upper gastrointestinal bleeding. Multiple facial and hand telangiectasias were observed. Endoscopy revealed two HHT lesions and evidence of residual gastric antral vascular ectasia (Box).

The patient was rechallenged with six cycles of intravenous bevacizumab therapy, delivered fortnightly at a dose of 5 mg/kg, and completed in May 2012. He tolerated the treatment well and reported no adverse effects; specifically, there was no hypertension or proteinuria. He reported Grade 1 lethargy, which was likely due to anaemia, as he had a haemoglobin level of 89 g/L. His anaemia resolved during the course of the treatment.

A follow-up endoscopy after the bevacizumab therapy showed complete resolution of the HHT lesions and a few residual hyperplastic polyps. The patient has reported no further gastrointestinal bleeding or epistaxis to date. We continue to monitor him, and will consider further bevacizumab treatment if needed.

Despite there being little evidence for the repeated use of intravenous bevacizumab therapy in patients with HHT, our case suggests that bevacizumab rechallenge is effective and safe. The time until relapse in patients with HHT after a dose regimen of bevacizumab therapy comparable to the one we used has been reported to range from 3 months to a year.2–4 In our patient, the initial bevacizumab therapy was successful in suppressing the HHT for up to 18 months.

Area of telangiectasia and residual gastric antral vascular ectasia in the patient’s stomach before the bevacizumab rechallenge

Research using autologous cord blood — time for a policy change

It is now well established that type 1 diabetes is a chronic, multifactorial disease that results from autoimmune-mediated destruction of pancreatic β cells. However, no intervention has successfully prevented the disease to date. Recently, reinfusion of autologous umbilical cord blood has been proposed as a novel preventive therapy and is the focus of an Australian Phase I trial, the Cord Reinfusion in Diabetes (CoRD) pilot study (https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=363694). However, the use of publicly stored cord blood for research in Australia is currently limited by policy that restricts its use to recognised indications, including allogeneic haematopoietic stem cell transplantation for oncological, haematological, genetic and immunological disorders. There are also specific ethical issues associated with the collection and storage of cord blood, including storage (public v private), informed consent (from whom, when and how?), ownership (does it belong to the child or the parent?), access (exclusive autologous v allogeneic use) and the principle of beneficence.1

A substantial body of research in recent years has been directed towards prevention of type 1 diabetes. Primary prevention has largely targeted putative environmental risk factors such as early introduction of docosahexaenoic acid or dietary cow milk protein.2 The latter is being investigated in the international randomised controlled Trial to Reduce the Incidence of Type 1 Diabetes in the Genetically at Risk (TRIGR).3 In contrast, secondary prevention trials have targeted immunomodulation through interventions such as nicotinamide and oral or parenteral insulin. While previous trials have failed to demonstrate significant therapeutic benefit,2 and evidence-based guidelines state that no interventions are recommended for use in clinical practice to delay or prevent the onset of type 1 diabetes,4 the role of intranasal insulin is under investigation through the Type 1 Diabetes Prevention Trial.5 Nevertheless, there is a compelling argument to explore novel approaches to the prevention of type 1 diabetes.

An alternative preventive strategy is to modify the regulatory components of the immune system — in particular, Foxp3+ regulatory T cells (Tregs). There is emerging evidence in animal models and in humans to suggest that the loss of normal immunological self-tolerance in type 1 diabetes, a crucial step in its pathogenesis, may be attributable to the failure of Tregs.6 The specific Treg abnormalities involved in type 1 diabetes are yet to be fully elucidated, but may include defects in Treg number and function, and increased resistance to regulation by effector T cells.6

Umbilical cord blood is rich in Tregs, which become functionally suppressive on antigen stimulation,7 and is also a source of haematopoietic and pluripotent stem cells.8 Thus, there is a strong scientific rationale behind the potential for cord blood to prevent or delay the onset of type 1 diabetes. The CoRD trial will examine whether autologous cord blood infusion can prevent type 1 diabetes in high-risk children with serum antibodies to multiple β-cell antigens. In parallel, the trial will study the immunological effects of cord blood infusion. This is the first time such a trial will be undertaken for the prevention of type 1 diabetes in humans, although studies have used autologous cord blood after the onset of type 1 diabetes. In a Phase I trial involving 24 children (median age, 5.1 years) with type 1 diabetes,9 infusion of autologous cord blood after a median diabetes duration of 3 months was associated with a transient increase in total and naive Tregs at 6 and 9 months, respectively. No adverse events were observed. Nevertheless, the intervention did not preserve β-cell function, as C-peptide levels decreased after infusion. However, the time after diagnosis at which the infusion was given may have been important; a rapid loss of β-cell mass has been frequently observed, and this decline may have occurred before the infusion was given. In a pilot study of 15 individuals (median age, 29 years) with type 1 diabetes,10 circulating lymphocytes were cocultured with allogeneic cord blood-derived stem cells and subsequently reintroduced into the circulation. There was a significant improvement in mean fasting C-peptide levels and a reduction in glycated haemoglobin levels and daily insulin requirements, in parallel with an increase in Tregs. The procedure was well tolerated, with no adverse events. These two studies suggest that cord blood may increase the frequency of Tregs in people with type 1 diabetes and may therefore induce immune tolerance. Whether cord blood has the same effect among people with prediabetes is unknown.

There are several fundamental methodological issues that must be addressed in the development of trials such as CoRD, which involve autologous cord blood. Studies that have demonstrated either no or minimal adverse effects in the use of autologous cord blood have involved small study samples.9,10 While the safety of autologous cord blood may also be inferred from the known safety of allogeneic cord blood, further data are required, particularly in the paediatric population. Rates of microbial contamination are low (< 5% in privately banked samples), although such samples are generally not suitable for use. In addition, samples with low total nucleated cell counts may be ineffective;11 however, private banks specify a lower limit of 10⁸ total nucleated cells for storage, thereby reducing the likelihood of inadequate samples being collected.

Despite the clear need for well designed trials to examine the specific therapeutic applications of cord blood, there are important differences in the ways in which public and private banks collect, store and provide access to cord blood, which could affect potential research. Public banks store donated cord blood units for allogeneic use, with around 3000 units stored per year in Australia (about 1% of live births). The rate of collection in public banks is dependent on available funding and only a few hospitals participate in collection nationally. In contrast, private banks provide storage for personal and familial use, for a fee. The storage rate in private banks is around 4000 units per year (Mark Kirkland, Cord Blood Bank Director, Cell Care Australia, personal communication), and growth is estimated at 12%–15% per annum.12 Globally, over a million cord blood units have been stored in private banks. Nevertheless, the chance of using a privately stored cord blood sample is less than 0.01%.13 Although a number of potential therapeutic indications for autologous cord blood have been proposed — such as cerebral palsy, hypoxic–ischaemic encephalopathy,14 congenital hydrocephalus and stroke15 — there are few published data. The number of published clinical trials using autologous cord blood is limited; however, there are 14 ongoing trials registered on ClinicalTrials.gov using both publicly and privately stored cord blood.16

The expansion of cord blood trials, along with high consumer demand for storage, places pressure on regulatory bodies to develop and adapt policies to meet these needs. Although the regulatory framework surrounding cord blood banking in Australia has undergone significant development, issues remain regarding access to publicly donated cord blood. In particular, there is no clear guideline that addresses degifting and use of publicly stored cord blood for autologous reinfusion beyond recognised indications. At present, the use of publicly banked cord blood is essentially limited to well researched and established applications, particularly for haematopoietic reconstitution, and does not extend to research purposes (Anthony Montague, National Cord Blood Network Operations Manager, Australian Bone Marrow Donor Registry, personal communication). These processes are, however, currently under review. Beyond being a policy issue, this raises deeper ethical questions regarding the rights of public donors to access their donated cord blood and equity between public donors and those who privately bank cord blood, particularly as the private industry continues to expand.17,18

While the future applicability of cord blood-based therapeutics, including prevention of type 1 diabetes, is at present unclear, this is an emerging area of research. An evidence base is clearly needed in response to the burgeoning interest in the community for storage of cord blood. However, important questions regarding the storage and use of publicly donated cord blood remain unanswered. Should cord blood banks be permitted to degift altruistically donated samples to enable participation in research? Will novel therapeutic uses for cord blood lead to changes to public cord blood banking policy? Given the likelihood of future cord blood-based clinical trials, the existing framework of cord blood banking policy must be reviewed to meet the needs that will be posed by such research, which may lead the way to expanding novel uses of cord blood.

Ethical and legal issues raised by cord blood banking — the challenges of the new bioeconomy

While human tissue has always had cultural, educational and scientific value, it is only in the past 40 years that tissue economies have begun to emerge.1 These economies have been driven by advances in technology that rapidly transformed our perception of human tissue from a useless by-product of treatment into a valuable and powerful source of wealth. Umbilical cord blood banking is an excellent example of this trend — what once was waste is now considered to be a wonder product that is collected, stored and traded in private and public markets.2 As a consequence, cord blood stem cells have become a focus of public, medical and scientific contest because of their uncertain ontological status (whether they are simply adult stem cells or share properties of both embryonic and adult stem cells), and because they have proven value as a source of haematopoietic stem cells for transplantation and more speculative value as a source of autologous stem cells for regenerative medicine.

Therapy and research: proof and potential

The benefits of cord blood stem cells in haematopoietic stem cell transplantation (HSCT) are clear. They are easily (non-invasively) collected and stored, they provide a rapidly available source of stem cells for HSCT, they extend the number of patients for whom HSCT is an option (because of their immunological immaturity), and they have been shown to give clinical outcomes equivalent to HSCT using bone marrow or peripheral blood stem cells.3 The benefits of cord blood transplantation rely primarily on the altruistic donation of cord blood around the world to an international network of more than 54 public cord blood banks, which collect, process, store and supply cord blood units (CBUs) for transplantation in Australia and internationally.

In contrast, the banking of autologous cord blood in private cord blood banks is based on the hope that it may, at some point in the future, be of therapeutic benefit in treating or curing any form of chronic or degenerative disease in its donor or in others. At present, this is entirely speculative. The likelihood that the infant, or other family members, will need access to their cord blood haematopoietic stem cells for established therapeutic indications is extremely low. For the most part, such use would be largely unnecessary, as most patients who need an autologous HSCT have haematopoietic stem cells collected from their peripheral blood or bone marrow when required.4 Despite the absence of indications for this use of cord blood, many parents elect to bank their child’s cord blood as a form of biological insurance, in the belief that such benefits may be recognised in the future — or at least to satisfy themselves that they have done everything in their power to guarantee their child’s wellbeing. There are now 225 private cord blood banks worldwide, storing nearly 1 million CBUs, which far exceeds the estimated 600 000 CBUs that are stored in publicly accessible cord blood banks.5

The emergence and rapid growth of private cord blood banks provides a significant challenge to the viability of public cord blood banks — a challenge that is likely to grow if evidence accumulates regarding cord blood stem cells’ utility in regenerative medicine.6 This challenge is both philosophical and practical. Philosophically, transplantation programs can only meet defined need and optimise equity in health care if sufficient numbers of citizens value social solidarity and make an altruistic decision to donate cord blood in the public interest. Practically, cord blood banks need to have a sufficient number of CBUs to maintain human leukocyte antigen variability, so that HSCT with sufficiently high cell doses and optimal human leukocyte antigen matching is available to those who need it.7,8

What makes these challenges so difficult is that they reflect fundamental moral questions regarding the design and delivery of health care systems, the rights and responsibilities of citizens in liberal democracies, the commitments that we have to generational fairness, and questions concerning ownership of human tissue. While some of these ethical and legal uncertainties are specific to cord blood (due, in part, to its origins in pregnancy), others are a feature of the challenges that arise in regulating emerging tissue economies.

In Australia, cord blood banking has emerged in a regulatory vacuum, and so has come to be governed by a patchwork of common law and legislative principles, including the doctrine of informed consent, legal and ethical notions of donation, and therapeutic goods law.9 This lack of direct regulation has undoubtedly created uncertainty regarding the status and management of cord blood and the obligations of both public and private cord blood banks.

Problems with consent: whose blood is being banked for whom?

Standard models of tissue donation assume that the process of informed consent must involve the person from whom the tissue is being taken (in living donation), or their representative (in some types of posthumous donation). Such an assumption is problematic in cord blood banking because it is not clear whether the mother or the child is the donor. As a practical matter, most collection occurs after consent from the mother either directly or on behalf of the child — but it is not clear, legally or ethically, that this is the most appropriate way of gaining consent. While consent has not yet been a subject of disagreement in this area, it is not hard to envision, in the modern era of the blended family, that there is potential for competing family claims should the therapeutic potential of cord blood materialise. And if situations arise where competing claims are made over stored cord blood, the nature of consent to storage and issues regarding proprietary interests (discussed below) will take central place in the resolution of any conflict.

Cord blood is genetically identical to the child but, depending on the method of collection and type of birth, may be collected from the mother or child, or from the cord when neither child nor mother is connected to it. There seems to be no dominant ethical or religious position as to who the donor is.10 Nor does the law provide any direct answer, as it views the placenta/umbilicus as being part of the mother for some purposes and part of the child for others.2 If one seeks an answer via genetics, one might argue that the child has a better claim, but it might be equally argued that other family members, including the father and siblings (both present and future), have claim to the cord blood, given the shared nature of the genetic information contained within it and its potential therapeutic uses.

We argue that these complexities require the consent process to be inclusive and representative of the different family interests in the cord blood. Thus, rather than relying purely on the birth mother to give consent, the process should, where possible, include the mother’s parenting partner. Further, it is our view that the consent given by the parents should expressly recognise that they are also consenting on behalf of the child.

Questions have also been raised regarding whether informed consent to private banking is a realistic possibility, given the extremely low likelihood of requiring one’s own stem cells for autologous transplantation later in life and the inevitably coercive nature of any decision that rests on assumptions about the best interests of children. It has been claimed that private banks in the United States have capitalised on the chance that families will overestimate the true likelihood of needing stored cord blood.11 Indeed, some would argue that advertising these services is exploitative, given the degree to which expectant parents are open to consider anything that may benefit their child (irrespective of the realistic expectation of that benefit) and the low likelihood that the cord blood will ever be used. However, there is no evidence that Australian private cord blood banks have engaged in any form of misleading or deceptive conduct in relation to their services. Indeed, our own view of Australian private cord blood banks is that their advertising seems rather modest, raising notions of insurance and explicitly acknowledging the low chance of the family ever needing to access the cord blood. In any event, the strictures of the Australian Consumer Law would be likely to prevent misleading and deceptive advertising.

Cord blood banking and property rights

As tissue economies have emerged, the common law of property has changed to recognise that people have property rights over their human tissue. Traditionally, the common law refused to recognise property rights in human tissue, unless the tissue had been preserved through some work or skill.12–14 This rule gave the property rights to whoever provided the labour or whoever paid for it to be done, which in the cord blood context gave rights to the public banks and to the purchasers of the services of the private banks. However, in the past 2 years, courts in both the United Kingdom and Australia have begun to recognise property rights in tissue that are not dependent on the work and skill exception, giving rise to rights to its possession, use, bailment (a property relationship), and protection from negligent storage.15–17 These cases indicate the potential for donors to have rights to deal with their tissue under both contract and tort law.

This broader recognition of property rights is a challenge to those who see human tissue donation as a form of gift that is devoid of proprietary rights. Indeed, the very notion of a gift in law is a property relationship where property passes hands without payment. Gifts can be given without conditions attached, but property law also recognises that gifts can be made conditionally in ways that preserve some rights of control and access for the donor.

Property law may be very useful in regulating cord blood banking because it creates a language for understanding conditional donation. Property laws may help to explain how cord blood could be gifted to a public bank on the condition that the donor parent(s) have the option to withdraw donated cord blood should the donor child or their sibling require the cord blood for their own medical use, and on the condition that the family are contacted before the cord blood is used in treatment or research (which is standard practice in some public banks). Property law also provides a model for understanding how, in the private banking context, a relative (such as a grandparent) could pay for the banking on the grounds that it is made available to a range of family members through a form of discretionary trust.

We believe that the current practice in the private banking industry has already effectively adopted property forms. The contracts for storage are bailments. The contracts also treat the cord blood as being held on behalf of an individual child or on behalf of the family group. This idea of holding property for the benefit of another is clearly a trust, where the legal title (held normally by the parents) is exercised for the benefit of the child (or family group). Once the child turns 18 years of age, most Australian contracts state that the property then passes to the (now adult) child, which is again a classic trust mechanism.

In public banking, there appears to be more reticence to adopt property language, arguably because of its non-commercial focus. We argue that a public bank works just like a charitable trust — holding valuable property for the public benefit in pursuit of the charitable aims of improved health care. The usefulness of charitable trust laws is that they create a framework for donation and use of the cord blood.

While we recognise that property law is not a panacea for all regulatory woes, it does have the potential to provide tools for unpacking cord blood banking, as it provides a well established mechanism for recognising rights and resolving disputes between different interest holders.

Private, public or hybrid banking?

The complex issues surrounding cord blood banking and donation are not simply a function of the uncertain legal status of cord blood; they are also a function of the structure and function of public and private banks. Traditionally, public banks have stored donated cord blood for public use in allogeneic transplant programs and some (limited) types of research, while private banks have focused on storage for autologous treatment for the child donor. However, the line between public and private banking is becoming increasingly blurred.5 Many private banks now offer cord blood to matched family members of the child and are becoming increasingly involved in research projects. Public cord blood banks have generally acceded to requests to release donated CBUs for autologous transplantation or for use in related donor transplantation. But although there has been some convergence of practice, requests to use CBUs donated to public banks (to support allogeneic transplantation programs) for research purposes are more challenging.18

While those with opposing perspectives often build walls to protect their interests, this is unlikely to provide the best approach to accommodating both public and personal interest in cord blood. Indeed, the tensions that currently exist are likely to increase if the indications for autologous transplantation expand, or if evidence emerges that cord blood stem cells may have a therapeutic role outside of transplantation — be that in regenerative medicine or in immunoprotection against non-communicable diseases. As Han and Craig point out in this issue of the Journal,18 it is becoming increasingly clear that policy is needed to deal with requests for the release of cord blood for therapeutic and research purposes from public banks. Serious consideration needs to be given to hybrid models of banking that offer both public donation and private banking or make privately stored CBUs available to the public system should they be needed for transplantation.5

Conclusion

Cord blood banking is a challenging area to regulate. It throws up unique problems of consent and is complicated by a dual system of public and private banking. The emergence of property rights in human tissue is a further complication, but one that we feel will ultimately provide a means of untangling some of these difficult questions and resolving the disputes that will inevitably arise concerning this valuable resource. Regardless of the way forward, it is clear that any regulation should be developed in an open and transparent manner. Regulation should be drafted in light of the concerns of both public and private banking and should be based on well informed public debate.19 We welcome efforts by AusCord and the Australian Bone Marrow Donor Registry to address these issues and believe that there is much to be gained from a broader public discussion about the goals of medicine and research and the importance of community and social solidarity.

International treatment guidelines for anaemia in chronic kidney disease — what has changed?

Balancing the risks and benefits of erythropoietin-stimulating agents and iron therapy

One in nine Australians has chronic kidney disease (CKD),1 although the condition may often not be recognised in primary care.2 There are five stages of CKD, ranging from Stage 1, in which patients have normal renal function but urinary abnormalities, structural abnormalities or genetic traits pointing to kidney disease, through to Stage 5, in which patients have end-stage disease.1

Although anaemia in patients with CKD is multifactorial in origin, it is primarily associated with relative erythropoietin production deficiency3 as the glomerular filtration rate (GFR) falls. Once the estimated GFR trends below 60 mL/min/1.73 m2 (Stage 3a CKD), erythropoietin production by the kidneys falls, and anaemia may develop.

The history of anaemia management in CKD and associated clinical practice guidelines has been one of contradiction and perceived industry influence.4 The publication in August 2012 by the Kidney Disease: Improving Global Outcomes (KDIGO) group (an international collaboration of nephrologists) of a guideline that updates and informs clinical practice in this area5 was therefore welcome.

The KDIGO guideline contains 47 recommendations with varying grades of evidence (Box).5 It is noteworthy that the conclusions, recommendations and ungraded suggestions for clinical practice in the KDIGO guideline are largely consistent with those currently provided in the Kidney Health Australia–Caring for Australasians with Renal Impairment (KHA-CARI) guidelines.6,7 Of particular note, the KDIGO guideline takes into account the importance of balancing the risks and benefits of erythropoietin-stimulating agents (ESAs) and iron therapy.

A key aspect of the KDIGO guideline is that it recommends more caution in the use of erythropoietin. The cloning of human erythropoietin 30 years ago was an important breakthrough that led to the clinical development of recombinant human erythropoietin and thus to the use of ESAs for treating anaemia in patients with CKD. Numerous observational studies suggested that patients with lower targeted haemoglobin levels had worse outcomes than those with higher targets.4 This conclusion was supported by the clinical consequences of anaemia (including fatigue, exercise intolerance, cognitive impairment and exacerbation of cardiovascular disease)3 and led clinicians to virtually demand ESAs for the treatment of CKD-associated anaemia. It was therefore ironic that the first randomised controlled trials to evaluate the outcomes of higher versus lower haemoglobin targets, using recombinant human erythropoietin, found the reverse: negative effects (increased cardiovascular events and mortality) in patients randomly assigned to the higher targets.8,9

The evidence now seems to suggest that haemoglobin target levels between 100 and 115 g/L should be the aim for patients with CKD, and certainly not levels > 130 g/L,10 which have the potential to cause harm. In fact, the KDIGO guideline reaffirms the very strong (Grade 1A) KHA-CARI recommendation of not targeting haemoglobin concentrations > 130 g/L.7,10 There are also newer cautionary recommendations, not covered by the KHA-CARI guidelines, regarding ESA use in patients with active malignancy (1B), a history of stroke (1B) or a history of malignancy (2C), where there is greater potential for harm.

The recommendations in the KDIGO guideline relating to the use of iron supplementation are similar to those in the KHA-CARI guidelines.6 They include the advice that oral iron supplementation is inadequate compared with injectable iron. This is problematic for most general practitioners, as intravenous iron cannot be administered in general practice because of the need for monitoring; however, intravenous iron therapy may sometimes be arranged for the patient at a local hospital. Further, the KDIGO guideline recommends the use of iron indices to help guide therapy, taking into account the risk of infection from excess iron and suboptimal ESA responsiveness.

The KDIGO guideline also covers the use of red cell transfusion in patients with CKD,5 again emphasising the importance of balancing risks and benefits. For example, for patients who may undergo future organ transplantation, red cell transfusion carries a risk of sensitisation to various tissue-type antigens; and, for all patients, excessive blood transfusion can lead to iron overload. Symptom change (eg, a decrease in fatigue) in an individual patient, rather than a haemoglobin concentration threshold, is a better measure of benefit.

Importantly, the first section of the KDIGO guideline considers the diagnosis and evaluation of anaemia in later stages of CKD (Stage 3a and beyond), where anaemia is most common.3 The recommendations, while not graded, provide protocol-type approaches to the frequency of testing and a rational approach to diagnosis that is relevant and appropriate in the Australian context. As GPs frequently manage patients with Stage 3a and 3b CKD, when CKD-associated anaemia is most likely to develop, it is expected that the KDIGO guideline will assist them in investigating and excluding other reversible causes of anaemia in the earlier stages of CKD.

In summary, for GPs and other non-nephrologist clinicians, the key KDIGO guideline recommendations are to: (i) consider CKD in any diagnostic workup of anaemia; (ii) recognise that injectable (rather than oral) iron is the first-line treatment for CKD-associated anaemia; (iii) consider ESA use only when the haemoglobin level falls to about 100 g/L, and refer the patient to a nephrologist at that time; and (iv) be appropriately cautious with the use of transfusions (blood and platelets) if transplantation potential exists.

Number of KDIGO guideline recommendations,5 by grade of evidence*

KDIGO = Kidney Disease: Improving Global Outcomes. * Using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system.

Vitamin B12 and folate tests: the ongoing need to determine appropriate use and public funding

It’s not as simple as new for old: we need to follow a process for “disinvestment” in existing medical procedures, services and technologies

Criteria have been developed for assessing the safety, effectiveness and cost-effectiveness of new and emerging health interventions, but additional challenges exist in identifying opportunities for reducing the use of existing health technologies or procedures that are potentially overused, (cost-)ineffective or unsafe.1 Criteria have been proposed to flag technologies that might warrant further investigation under quality improvement programs.1 These criteria are: new evidence becomes available; there is geographical variation in use; variation in care between providers is present; the technology has evolved and differs markedly from the original; there exists a temporal trend in the volume of use; public interest or controversy is present; consultation with health care workers and funders raises concerns; new technology has displaced old technology; there is evidence of leakage (use beyond the restriction or indication); the technology or intervention is a “legacy item” that has never been assessed for cost-effectiveness; use is not in accordance with clinical guidelines; or the technology is nominated by clinical groups.

After such a nomination was made by members of the clinical laboratory community regarding vitamin B12 and folate tests, we sought to determine whether these tests met other criteria. We hope that this article will encourage debate and discussion about the appropriate use of these tests.

Testing for vitamin B12 and folate deficiency

Diagnosing vitamin B12 and folate deficiencies is difficult. The symptoms are diverse (such as malaise, fatigue and neurological symptoms), as are the signs (including megaloblastic anaemia and cognitive impairments). Defining target conditions is, therefore, also difficult. Tests include a full blood count and blood film examination, serum B12, serum folate and red-cell folate (RCF) assays, as well as examination of metabolic markers such as methylmalonic acid (MMA) and homocysteine (Hcy). Untreated vitamin B12 deficiencies may cause serious health problems, including permanent neurological damage (which may occur with low serum B12 levels without haematological changes). Maternal folate deficiencies have been associated with neural tube defects in infants. Potential vitamin B12 or folate deficiencies therefore need to be appropriately investigated and managed.

New evidence

The utility of a diagnostic test is influenced in part by its precision (the ability of a test to faithfully reproduce its own result) and its diagnostic accuracy (ability to discriminate between a patient with a target condition and a healthy patient). Evidence suggests serum B12 tests have poor discriminative ability in many situations, and debate is ongoing over which folate assay is most useful.

The only systematic review and meta-analysis of the diagnostic accuracy of serum B12 tests (conducted by members of our group) suggested that these tests often misclassify individuals as either B12 deficient or B12 replete.2 These findings are consistent with other reports in the literature. One recent report states:

Both false negative and false positive values are common (occurring in up to 50% of tests) with the use of the laboratory-reported lower limit of the normal range as a cutoff point for deficiency.3

And further:

There is often poor agreement when samples are assayed by different laboratories or with the use of different methods.3

Widespread CBLA (competitive-binding luminescence assay) malfunction has also been noted, with assay failure rates of 22% to 35%4 (interference due to intrinsic factor antibodies may explain some of this variation). While a critical overview has suggested that “falsely normal cobalamin concentrations are infrequent in patients with clinically expressed deficiency”, the author notes challenges in diagnosing subclinical deficiency5 (mild metabolic abnormalities without clinical signs or symptoms). Assessment of this evidence base is complicated by the lack of a universally accepted gold standard and by target conditions that are difficult to define, variable clinical presentations and variable cut-off values used to define deficiency.
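To illustrate why misclassification rates of this magnitude undermine the value of a serum B12 result, consider a simple worked example using the standard formula for positive predictive value (PPV). The sensitivity, specificity and prevalence figures below are hypothetical round numbers chosen for illustration only; they are not estimates drawn from the studies cited above.

$$\text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}$$

Assuming, for example, a sensitivity of 0.90, a specificity of 0.60 and a true deficiency prevalence of 5% in the tested population:

$$\text{PPV} = \frac{0.90 \times 0.05}{(0.90 \times 0.05) + (0.40 \times 0.95)} = \frac{0.045}{0.425} \approx 0.11$$

Under these assumptions, only about one in nine “deficient” results would reflect true deficiency, which is why poor discriminative ability translates directly into frequent misclassification in routine practice.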

For investigating folate status, RCF assays are thought to be less susceptible to short-term dietary intake than are assays for serum folate. However, it has been reported that:

The red cell folate assay is more complex to perform than the serum folate assay and requires more steps in sample handling before analysis, and this may be one of the reasons why the precision of the red cell folate assay is less than that of the serum folate assay.6

As discussion continues over which folate test is preferable, new evidence related to the prevalence of folate deficiencies in countries with mandatory food fortification has shifted the focus toward whether there is a need to perform any folate investigations in these jurisdictions. In Australia, mandatory fortification of wheat flour with folic acid was introduced in September 2009.7 Prevalence estimates from a sample of inpatients and outpatients suggested that folate deficiency stood at 0.5% in April 2010, an 85% reduction in absolute numbers since April 2009.7 While there is currently no evidence to suggest that the prevalence of megaloblastic anaemia caused by folate deficiency has been reduced, the low frequency of low serum folate and RCF test results in countries where there is mandatory fortification of grain products with folic acid supports the perspective that “there is no longer any justification in ordering folate assays to evaluate the folate status of the patients”.8

Technology development

Over time, multiple technologies for analysing vitamin B12 status have become available, including assays for measuring holotranscobalamin (holoTC, the bioavailable form of vitamin B12), as well as metabolic markers such as MMA and Hcy.3,5 However, like all tests, these are imperfect: holoTC is expensive, not routinely available, reliant on poorly defined serum B12 reference ranges, and yet to be confirmed as superior to the serum B12 assay.5 Hcy measurement is subject to artefactual increases due to collection practices, and reference ranges are variable. The availability of MMA tests is restricted to some clinical and research laboratories. As a result, the optimal procedure for measuring vitamin B12 is unclear. As noted above, while a number of approaches exist for assessing folate status, there is currently no consensus on the most appropriate laboratory investigation process.

Temporal, geographical and provider variations

Australian Medicare utilisation data have shown substantial growth in the use of item 66602, which relates to the combined use of serum B12 and folate tests. Between the financial years 2000–01 and 2009–10, use increased from 1082 services per 100 000 population to 7243 services per 100 000 population (21.78% average annual growth rate).9 Over the same period, spending on pathology services overall grew at an average annual rate of 6.3%.
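As a rough check on the scale of this growth, the minimal sketch below computes the compound annual growth implied by the two endpoint figures. It assumes nine yearly intervals between 2000–01 and 2009–10; the published average annual growth rate was presumably calculated from the full year-by-year series, which is not reproduced here, so the two figures will not match exactly.

```python
# Rough check only: compound annual growth implied by the endpoint figures
# quoted above for MBS item 66602 (services per 100 000 population).
services_2000_01 = 1082
services_2009_10 = 7243
yearly_intervals = 9  # 2000-01 through 2009-10

implied_cagr = (services_2009_10 / services_2000_01) ** (1 / yearly_intervals) - 1
print(f"implied compound annual growth: {implied_cagr:.1%}")  # about 23.5%

# The published 21.78% figure presumably averages year-on-year growth rates
# over the full series; for comparison, overall pathology spending grew at
# about 6.3% per year over the same period.
```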

Geographical variation was also present, with the number of services reimbursed for item 66602 ranging from 1962 per 100 000 population in the Northern Territory to 8658 per 100 000 population in the Australian Capital Territory in 2009–10.9 While some of this variation may be due to demographic differences and populations known to have access to fewer health services (eg, Indigenous Australians), the substantial temporal and geographical differences in use raise further questions about the appropriate use of these tests and whether they are being underused or overused.

Guidelines

Guidelines related to the use of vitamin B12 and folate tests vary widely in their recommendations. While some recommend B12 and folate tests as screening tools in commonly encountered illnesses such as dementia, others suggest restricting testing to patients who have already undergone pretest investigations (such as full blood examinations; however, we note that neurological damage may occur in patients with low serum B12 levels and without haematological changes).10,11 Guidelines may differ on key recommendations, such as the preferred first-line investigation for establishing folate status, while some question the utility of folate investigations at all in jurisdictions where food is fortified with folate.12–14

Leakage

With wide variability in guideline recommendations, and with few appearing to consider the diagnostic accuracy of B12 or folate tests, determining the extent to which services have “leaked” beyond their clinical indications is difficult. Possible leakage is evidenced by the use of serum B12 tests among patients presenting with weakness and tiredness, which is not supported by any available guidelines.15 A large study of general practitioners indicated that their use of serum B12 tests for these presentations increased by 105% between April 2000 – March 2002 and April 2006 – March 2008.15

Discussion

Tests for investigating patients’ vitamin B12 and folate status have become widely used in clinical practice. Yet existing evidence suggests that the diagnostic accuracy of serum B12 tests is difficult to determine and may be highly variable. While other tests are available for investigating suspected B12 and folate deficiency (such as holoTC, MMA and Hcy), the diagnostic accuracy of these tests is also contested. Challenges in examining the diagnostic accuracy of serum B12 tests include highly variable clinical presentations, lack of a gold standard and inconsistent cut-off values used to define deficiency. While it remains under debate whether the serum or red-cell folate test is more useful for investigating folate status, mandatory folate fortification in Australia may call into question any use of these tests.

Temporal variation in use and geographical differences in how these tests are employed are both evident in Australian data. Moreover, available clinical guidelines are highly inconsistent in their recommendations. Collectively, the issues of test accuracy, wide variability in test use, and inconsistent guideline recommendations suggest that the use of vitamin B12 and folate tests is an area with much scope for quality improvement.

To improve the use of these tests, further assessment is needed that examines the complexity associated with clinical decision making and the various factors influencing why doctors request these tests. The decision to request an investigation such as a B12 or folate test may be driven by a range of factors, including ease of use, cost, absence of significant patient risk, the perceived need to respond to patient requests, lack of appreciation of the diagnostic accuracy of the tests, or ready availability of results.16 Understanding how these factors influence the use of B12 and folate tests may best be achieved through direct consultation with general practitioners, pathologists, specialists and consumers, and is a critical step in advancing the assessment of these tests.

Vitamin B12 and folate tests: interpret with care

Clinicians need to consider analytical issues when requesting and interpreting these tests

Vitamin B12 and folate tests are useful for identifying patients with a deficiency. In this issue of the Journal, Willis and colleagues highlight some of the limitations of serum vitamin B12 assays.1 They also emphasise the uncertainty regarding whether red-cell or serum folate should be the preferred first-line test for folate status. The issues underlying some of the data presented require elaboration.

Vitamin B12 assays have an interpretative grey zone in the region of low-normal and mildly low results. Outside the grey zone, the tests show good performance characteristics: a cut-off of 221 pmol/L has a sensitivity of 99%2 and a cut-off of 123 pmol/L has a specificity of 95%.3 Optimal decision points may vary between methods, but results above 220 pmol/L generally rule out deficiency, while results below 125 pmol/L “rule in” deficiency. Between these limits, misclassification may occur if results are interpreted in a binary manner as simply above or below the lower limit of normal (typically about 150 pmol/L). One study used a binary interpretative approach in a cohort of patients with low-normal or low vitamin B12 concentrations (< 221 pmol/L).4 It is the results of this study that have led to the claims of the extraordinary misclassification rate of 50%, quoted by Willis et al. In contrast, the appropriate response to vitamin B12 results of 125–220 pmol/L in patients clinically suspected of deficiency is further testing, using, for example, metabolic markers.
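The interpretive zones described above amount to a simple decision rule, summarised in the sketch below. This is illustrative only: it uses the thresholds quoted in this editorial, and because optimal decision points vary between assay methods it is not a substitute for method-specific cut-offs or clinical judgement.

```python
def interpret_serum_b12(result_pmol_per_l: float) -> str:
    """Illustrative summary of the grey-zone approach described above.

    Thresholds are those quoted in this editorial (rule out above 220 pmol/L,
    rule in below 125 pmol/L). Optimal decision points vary between assay
    methods, so this is a sketch, not a clinical algorithm.
    """
    if result_pmol_per_l > 220:
        return "Deficiency unlikely (rule out)"
    if result_pmol_per_l < 125:
        return "Consistent with deficiency (rule in)"
    # Grey zone: a binary reading against the lower limit of normal
    # (about 150 pmol/L) risks misclassification; further testing with
    # metabolic markers is suggested instead.
    return "Indeterminate: consider metabolic markers (eg, MMA, homocysteine)"

# Example: a low-normal result of 160 pmol/L is treated as indeterminate
# rather than "normal" simply because it sits above 150 pmol/L.
print(interpret_serum_b12(160))
```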

Clinicians also need to be alert to interference in vitamin B12 assays from intrinsic factor antibodies. The interference is sporadic and the same sample may give false results in one assay but not another. An investigation of 23 samples from patients with clinically overt pernicious anaemia (15 positive and eight negative for intrinsic factor antibodies) showed false-normal results for five to eight of the samples, depending on the assay used.5 It is from this report of highly selected samples that claims of “assay failure rates of 22% to 35%” are made.1 This overstates the frequency of the error among non-selected requests to laboratories, which has been estimated as closer to 1 in 3000 requests.6 The interference appears to be limited to samples from overtly deficient patients and therefore needs to be considered when vitamin B12 results are normal or high in patients with clinically evident deficiency.

In most clinical contexts, assessment of folate status is valid with either red-cell or serum folate. Red-cell folate has the theoretical advantage of providing a longer-term assessment of folate status. This is offset in practice by additional variation in red-cell folate measurements due to sample pretreatment factors and the binding of folate to deoxyhaemoglobin, factors that do not influence serum folate results. A systematic review recently assessed the performance of red-cell versus serum folate for identifying deficiency.7 It found that neither was clearly superior, although serum folate more frequently showed higher correlation with homocysteine as a functional marker of deficiency.

Tests for folate status remain relevant among populations in countries where staple foods are fortified with folate. The introduction of mandatory fortification of wheat flour used for breadmaking has reduced the prevalence of deficiency in Australia. In most patients with macrocytic anaemia it may therefore be appropriate to use folate testing as a second-line investigation, after more common causes have been excluded. However, deficiency may still be seen, particularly in those not regularly consuming bread or other grain-based products, such as those with coeliac disease or alcohol dependence.

Vitamin B12 and folate assays are widely available and inexpensive investigations. They identify vitamin deficiencies that have serious consequences if untreated. Recognition of the grey zone in vitamin B12 interpretation limits misclassification; however, clinicians must also remain alert to sporadic assay interference in overtly deficient patients. Folate status may be assessed with either red-cell or serum folate. As the faster and less expensive test to perform, serum folate appears to offer the best combination of test cost and clinical information.