
How and why the brain and the gut talk to each other

 

It’s widely recognised that emotions can directly affect stomach function. As early as 1915, the influential physiologist Walter Cannon noted that stomach function changes in frightened animals. The same is true for humans: people who experience a lot of stress often report diarrhoea or stomach pain.
We now know this is because the brain communicates with the gastrointestinal system. A whole ecosystem comprising 100 trillion bacteria living in our bowels is an active participant in this brain-gut chat.

Recent discoveries around this relationship have made us consider using talk therapy and antidepressants as possible treatments for symptoms of chronic gut problems. The aim is to interfere with the conversation between the two organs by telling the brain to repair the faulty bowel.

Our research found talk therapy can improve depression and the quality of life of patients with gastrointestinal conditions. Antidepressants may also have a beneficial effect on both the course of a bowel disease and accompanying anxiety and depression.

What are gastrointestinal conditions?

Gastrointestinal conditions are incredibly common. About 20% of adults and adolescents suffer from irritable bowel syndrome (IBS), a disorder where abdominal discomfort or pain go hand-in-hand with changes in bowel habits. These could involve chronic diarrhoea and constipation, or a mixture of the two.

IBS is a so-called functional disorder, because while its symptoms are debilitating, there are no visible pathological changes in the bowel. So it is diagnosed based on symptoms rather than specific diagnostic tests or procedures.

People with chronic gut conditions can experience severe pain that affects their quality of life.

This is in contrast to inflammatory bowel disease (IBD), a condition where the immune system reacts in an exaggerated manner to normal gut bacteria. Inflammatory bowel disease is associated with bleeding, diarrhoea, weight loss and anaemia (often due to iron deficiency), and can be a cause of death. It’s called an organic bowel disease because we can see clear pathological changes caused by inflammation to the bowel lining.

Subtypes of inflammatory bowel disease are Crohn’s disease and ulcerative colitis. Around five million people worldwide, and more than 75,000 in Australia, live with the condition.

People with bowel conditions may need to use the toilet 20 to 30 times a day. They also suffer pain that can affect their family and social lives, education, careers and ability to travel. Many experience anxiety and depression in response to the way the illness changes their life. But studies also suggest those with anxiety and depression are more likely to develop bowel disorders. This is important evidence of brain-gut interactions.

How the brain speaks with the gut

The brain and gut speak to each other constantly through a network of neural, hormonal and immunological messages. But this healthy communication can be disturbed when we are stressed or when chronic inflammation develops in our guts.

Stress can influence the type of bacteria inhabiting the gut, making our bowel flora less diverse and possibly more attractive to harmful bacteria. It can also increase inflammation in the bowel, and vulnerability to infection.

Chronic intestinal inflammation may lower our sensitivity to positive emotions. When we become sick with conditions like inflammatory bowel disease, our brains become rewired through a process called neuroplasticity, which changes the connections between nerve cells.

Anxiety and depression are common in people suffering chronic bowel problems. Approximately 20% of those living with inflammatory bowel disease report feeling anxious or blue for extended periods of time. When their disease flares, this rate may exceed 60%.

Interestingly, in a recent large study where we observed 2,007 people living with inflammatory bowel disease over nine years, we found a strong association between symptoms of depression or anxiety and disease activity over time. So, anxiety and depression are likely to make the symptoms of inflammatory bowel disease worse long-term.

It makes sense then to offer psychological treatment to those with chronic gut problems. But would such a treatment also benefit their gut health?




Inflammatory bowel disease

Our recent study combined data from 14 trials and 1,196 participants to examine the effects of talk therapy for inflammatory bowel disease. We showed that talk therapy – particularly cognitive behavioural therapy (CBT), which is focused on teaching people to identify and modify unhelpful thinking styles and problematic behaviours – might have short-term beneficial effects on depression and quality of life in people with inflammatory bowel disease.

But we did not observe any improvements in the bowel disease activity. This could be for several reasons. Inflammatory bowel disease is hard to treat even with strong anti-inflammatory drugs such as steroids, so talk therapy may not be strong enough.

Talk therapy may only help when it’s offered to people experiencing a flare-up of their disease. Most of the studies included in our review involved people in remission, so we don’t know whether talk therapy could help those experiencing flares.

On the other hand, in our latest review of 15 studies, we showed antidepressants had a positive impact on inflammatory bowel disease as well as anxiety and depression. It’s important to note the studies in this review were few and largely observational, which means they showed associations between symptoms and antidepressant use rather than proving antidepressants caused a decrease in symptoms.

Studies show talk therapy improves the symptoms of irritable bowel syndrome.

Irritable bowel syndrome

When it comes to irritable bowel syndrome, the studies are more conclusive. According to a meta-analysis combining 32 trials, both talk therapy and antidepressants improve bowel symptoms in the disorder. A recent update to this meta-analysis, including 48 trials, further confirmed this result.

The studies showed symptoms such as diarrhoea and constipation improved in 56% of those who took antidepressants, compared to 35% in the group who received a placebo. Abdominal pain significantly improved in around 52% of those who took antidepressants, compared to 27% of those in the placebo group.

Symptoms also improved in around 48% of patients receiving psychological therapies, compared with nearly 24% in the control group, who received another intervention such as usual management. IBS symptoms improved in 59% of people who had cognitive behavioural therapy, compared to 36% in the control group.
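As a rough guide to how response rates like these translate into clinical impact, the absolute risk reduction and the number needed to treat can be derived from the percentages above. The short Python sketch below is illustrative only: the function name is our own, and the calculation assumes the reported proportions can be compared directly.

```python
# Illustrative helper: absolute risk reduction (ARR) and number needed
# to treat (NNT), computed from the response rates quoted in the text.

def arr_and_nnt(treated_rate: float, control_rate: float) -> tuple[float, float]:
    """Return (ARR, NNT) given response proportions in each group."""
    arr = treated_rate - control_rate  # absolute risk reduction
    return arr, 1.0 / arr              # NNT = 1 / ARR

# Antidepressants vs placebo for bowel symptoms: 56% vs 35%
arr, nnt = arr_and_nnt(0.56, 0.35)
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")  # ARR = 0.21, NNT = 4.8

# Psychological therapy vs control: 48% vs 24%
arr, nnt = arr_and_nnt(0.48, 0.24)
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")  # ARR = 0.24, NNT = 4.2
```

In other words, on these figures roughly five patients would need treatment for one additional patient to improve, which is in line with many accepted medical therapies.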

Stress management and relaxation were found to be ineffective. Interestingly, hypnotherapy improved bowel symptoms in 45% of participants, compared to 23% of those receiving a control therapy.

What now?

Better studies exploring the role of talk therapy and antidepressants for symptoms of inflammatory bowel disease need to be conducted. We should know in a few years which patients are likely to benefit.

In the meantime, there is enough evidence for doctors to consider referring patients with irritable bowel syndrome for talk therapy and antidepressants.

Antonina Mikocka-Walus, Senior Lecturer in Health Psychology, Deakin University

This article was originally published on The Conversation. Read the original article.

Dementia study debunks exercise theory

 

Look at any of the multitude of articles of the past few years on how to avoid dementia and you’ll almost certainly read that exercise delays onset. Not so, according to the most recent research, published this week in the BMJ.

The 28-year study followed over 10,000 middle-aged British civil servants, noting at seven-year intervals whether participants were doing the “recommended” amount of exercise, defined as moderate or vigorous physical activity for 2.5 or more hours per week.

Surprisingly, the researchers found no correlation between how much exercise a participant did and whether they experienced cognitive decline over the study period, identified through a battery of cognitive tests, along with dementia diagnoses from hospital and mental health services.

The finding runs counter to several recent meta-analyses of observational studies which concluded that physical activity is neuroprotective in cognitive decline and dementia risk.

What the researchers did find was that in participants who eventually developed dementia, a decline in physical activity started around nine years before diagnosis.

This finding could be key to why previous observational studies have found a correlation between exercise and dementia risk, say the French researchers from the Centre for Research in Epidemiology and Population Health in Paris.

It’s now well known that brain changes start happening many years before dementia symptoms become apparent, and a decrease in physical activity is probably part of the cascade of changes in this preclinical phase of dementia, the researchers say.

The upshot is that findings of a lower risk of dementia with exercise may be attributable to reverse causation – in other words, decline in physical activity is due to the dementia, and not the other way around.

The researchers say that two problems with some of the earlier observational studies were that their duration was too short and their participants were too old. This made them more liable to be confounded by participants with preclinical dementia, who for that reason had lower levels of physical exercise.

They also point out a difference between observational and randomised trials, with the latter less likely to find a protective effect with exercise.

The recommendation of exercise for the prevention of dementia has already become enshrined in a number of international guidelines, including in Australia.


Art and Medicine

By Dr Jim Chambliss

It is often said that a picture speaks a thousand words.

Contemporary medical technology provides incredibly intricate pictures of external and internal human anatomy.

However, technology does not communicate holistic representations of the social, behavioural and psychosocial impacts associated with illness and the healing process.

Studies have shown that increased reliance on reports from expensive laboratory tests, radiology and specialised diagnostic technology has resulted in inadequate physical examination skills, declining patient empathy and less effective doctor/patient communication.

Continuing professional development workshops that explore and promote the value of art expression in developing observation skills, human sensitivity and relevant healthcare insights are being presented at the National Gallery of Victoria’s exhibition of original works by Vincent van Gogh. The workshops commenced in May this year and continue until July 8.

The program will incorporate empirical research to illustrate the way neuropsychological conditions can influence art and creativity. The objectives of the workshops are to:

 • advance understanding of the impact of medical, psychological and social issues on the health and wellbeing of all people;

 • promote deeper empathy and compassion among a wide variety of professionals;

 • enhance visual observation and communication skills; and

 • heighten creative thinking.

Over the last 15 years, the observation and discussion of visual art has emerged in medical education as an effective approach to improving visual observation skills, patient communication and empathy.

Pilot studies using visual art to teach visual diagnostic skills and communication were so effective that more than 48 of the top medical schools in the USA now integrate visual arts into their curricula, and professional development courses are conducted in many of the most prestigious art galleries and hospitals.

The work of Vincent van Gogh profoundly illustrates what it means to be uniquely human when his neurological characteristics, behavioural changes and creative expression are viewed from an educated, respectful and empathic perspective.

The exact cause of a possible brain injury, psychological illness and/or epilepsy of van Gogh is unknown.

Numerous prominent neurologists have speculated that Vincent suffered a brain lesion at birth or in childhood, while others contend that absinthe consumption caused his seizures.

Two doctors – Felix Rey and Théophile Peyron – diagnosed van Gogh with epilepsy during his lifetime.

Paul-Ferdinand Gachet also treated van Gogh for epilepsy, depression and mania until the artist’s death in 1890 at the age of 37.

After the epilepsy diagnosis by Dr Rey, van Gogh stated in a letter to his brother Theo, dated 28 January 1889: “I well knew that one could break one’s arms and legs before, and that then afterwards that could get better but I didn’t know that one could break one’s brain and that afterwards that got better too.”

Vincent did not, by any account, demonstrate artistic genius in his youth. He started painting at the age of 28 in 1881.

In fact, his erratic line quality, compositional skills and sloppiness with paint were judged in his February 1886 examinations at the Royal Academy of Fine Arts, Antwerp, to be worthy of demotion to the beginners’ painting class. Many of his drawings and paintings were copies of others’ art, while his sketches in drawing class showed remarkably different characteristics.

Increased symptoms of epilepsy and exposure to seizure triggers (absinthe and sleep deprivation) ran parallel with van Gogh’s most innovative artistic techniques and inspirations during his years in Paris from 1886 to 1888.

These symptoms intensified, accompanied by breathtaking innovation, following his move to Arles, France, in 1888 and his further decline in mental and physical health.

In Paris he was exposed to the works of many of the most famous impressionist and post-impressionist painters, yet much of his new technique and imagery was distinctly innovative in detail, without traceable influences from others.

While in Paris, his work transitioned from drab, sombre, realistic images to vibrant colours and bold lines.

His ebb-and-flow of creative activity and episodes of seizures, depression and mania were at their most intense in the last two years of his life when he produced the greatest number of paintings.

His works are among the most emotionally and monetarily valued of all time. Vincent’s painting of Dr Gachet (1890) in a melancholy pose with digitalis flowers – used in the treatment of epilepsy at that time – sold for $US82.5 million in May, 1990, which at the time set a new record price for a painting bought at auction.

Healthcare professionals and art historians have written from many perspectives about other medical and/or psychological conditions that may have affected van Gogh’s art and life, with theories involving bipolar disorder, migraines, Ménière’s disease, syphilis, schizophrenia, alcoholism, emotional trauma and the lay concept of ‘madness’.

What has been missing, as a basis for resolving disputes over which mental or medical condition(s) significantly affected his life, is a comprehensive understanding of how epilepsy or mental illness can influence art and possibly enhance creativity, grounded in insights from a large group of contemporary artists.

Following a brain injury and acquired epilepsy I gained personal insight into what may have affected the brain, mind and creativity of van Gogh and others who experience neurological and/or psychological conditions.

The experience opened my eyes to the medical, cognitive, behavioural and social aspects of two of the most complex and widely misunderstood human conditions.

Despite having no prior experience or recognisable talent, I discovered that my brain injury/epilepsy had sparked a creative mindset that resulted in a passion for producing award-winning visual art.

I enrolled in art classes and began to recognise common topics, styles and characteristics in the art of contemporary and famous artists who are speculated or known to have had epilepsy, such as Vincent van Gogh, Lewis Carroll, Edward Lear and Giorgio de Chirico.

Curiosity for solving the complex puzzle of how epilepsy could influence art led me to pursue a Masters in Visual Art which included a full course exclusively about Vincent van Gogh.

I subsequently obtained the world’s first dual PhD combining Visual Arts, Medicine and Art Curation at the University of Melbourne.

The PhD Creative Sparks: Epilepsy and enhanced creativity in visual arts (2014) was based on the visual, written and verbal insights from more than 100 contemporary artists with epilepsy and provided:

 • objective and subjective proof that epilepsy can sometimes enhance creativity – supported by brain imaging illustrating how that can occur;

 • a comprehensive inventory of the signature traits of neurological and psychological conditions that have significant interpretive value in healthcare practice and consideration in art history;

 • the largest collection of images of the visual narratives from people with epilepsy;

 • comparative data to distinguish epilepsy from other medical and mental conditions; and

 • the Creative Sparks Art Collection and Website – artandepilepsy.com.

Interest in these research discoveries and art exhibitions provided opportunities for me to deliver presentations at national and international universities, hospitals and conferences. Melbourne University Medical School sponsored an innovative series of workshops to teach neurology and empathy through an intriguing new approach.

 Jim Chambliss has a dual PhD in Creative Arts and Medicine and has explored the ways epilepsy and other health conditions can influence art and enhance creativity.

Information about his Art and Medicine Workshops involving Vincent van Gogh can be obtained by visiting artforinsight.com or artandepilepsy.com

 

[Perspectives] Brain Diaries: two hemispheres of interest

“Understanding the brain and its diseases is one of the key challenges of the 21st century”, said Professor of Clinical Neurology Christopher Kennard at the launch of Oxford University Museum of Natural History’s Brain Diaries. “I’ve said that is like climbing Everest, but I don’t even think we’ve got to base camp”, Kennard explained, citing the growing “problem of dementia: the longer we live, the more likely we are to develop Alzheimer’s”. Incorporating research from more than 50 neuroscientists, Brain Diaries explores the passage of a healthy brain from conception to old age.

Reducing the burden of neurological disease and mental illness

The key to finding solutions for brain disorders is cooperation and collaboration, from the laboratory to the clinic

Australia is challenged by the rising economic and social costs of neurological disease and mental illness, which together account for one-third of the total disease burden in Australia.1 The financial cost of these disorders — about $45.5 billion annually14 — does not take into account the emotional impact and social isolation they cause. Many are chronic conditions with limited options for even ameliorative treatment, so that research into finding new approaches to their management is urgently needed. Translation of research into improved clinical practice, however, requires a continuum of process, including basic research, application of research findings, clinical trials, and implementation. Involving both basic researchers and clinicians in this process is crucial to its success. The Australasian Neuroscience Society (ANS; www.ans.org.au) recognises this need both by representing neuroscientists and clinicians in Australia and New Zealand active in neuroscience and mental health research, and by acting as a conduit for clinicians to interact more closely with researchers to achieve their shared goals.

This issue of the MJA highlights examples of current progress in the neuroscience of neurological disease and mental health conditions. As discussed by Koblar and colleagues,5 restoring brain function in people who have had a stroke or incurred other damage to the central nervous system remains an area of unmet need. Australian researchers play significant roles in international efforts to develop regenerative neurology; for example, the 2017 Australian of the Year, Professor Alan Mackay-Sim, was recognised for his work in developing stem cell therapies for people with spinal cord injuries. Australians have long played an important role in developing devices for restoring central nervous system function. For instance, the cochlear implant, invented by Professor Graeme Clark and colleagues at the University of Melbourne in 1978, has restored hearing to nearly 350 000 individuals across the world with sensorineural hearing dysfunction. Australians continue to operate at the cutting edge of the development of devices at the brain–computer interface, such as those described in this issue by Rosenfeld and colleagues.6

The burden of neurodegenerative disorders is rising as the Australian population ages. Dharmadasa and her co-authors7 review advances in the treatment of motor neurone disease, including three ongoing Australian clinical trials of potentially neuroprotective therapies; that is, of interventions that aim to slow the progress of the disease, not just provide symptomatic relief.

2017 promises to be an exciting year for accelerating progress in understanding the human brain. Major research projects seeking to deepen our understanding of its function and to translate this understanding into practical therapies are underway in the United States, Europe, Japan, and China, and the number of participating countries is rapidly expanding.8 Australia itself has a national brain project; developed by the Australian Brain Alliance and coordinated by the Australian Academy of Science, it is a collaboration of 28 organisations (including ANS) involved in brain research.9 The Australian Brain Project aims to understand how the brain encodes, stores and retrieves information, and its goals will be the focus of a proposal to be presented to the federal government in 2018. The Australian Brain Alliance also participated in an historic meeting at Rockefeller University (New York) in September 2016 with the goal of promoting collaboration and cooperation between large scale brain research projects around the world.10

The fundamental brain functions investigated by the members of ANS and the Australian Brain Project are intrinsic to our humanity, and they are often compromised by neurological disease and mental illness. Comprehensive understanding of these processes, and of precisely how and why they are disrupted in disease states, will provide us with new opportunities for improving diagnostics and developing more effective therapies that enhance the lives of the many Australians burdened by these disorders.

Strategic lacunar infarction

A 74-year-old right-handed man with homonymous hemianopia from an occipital stroke presented with an abrupt behaviour change. A neuropsychological examination revealed severe cognitive impairment, apathy, amnesia and paraphasia, without sensory or motor deficit. A head computed tomography scan showed no new lesions. A diffusion-weighted magnetic resonance imaging scan of the head performed 10 days after the disease onset showed a strategic lacunar infarction in the left genu of the internal capsule (Figure, A, arrow). A computed tomography scan repeated 45 days after the disease onset showed the lacunar infarction (Figure, B, white arrow) and an old left occipital stroke (Figure, B, red arrow). The symptoms were unchanged at an 8-month follow-up visit. This condition severs the connection of the anterior and inferior thalamic peduncles with the cingulate gyrus, amygdala, and prefrontal, orbitofrontal, insular, temporal and frontal cortex, and manifests as an abrupt behaviour change.1

Figure

Undetected and underserved: the untold story of patients who had a minor stroke

Equity of access is particularly concerning for minor stroke

Medical advances, such as stroke units, improved primary and secondary stroke prevention, and hyperacute treatments have revolutionised acute stroke management.1 The lessening of stroke severity as a result of such ground-breaking initiatives has, however, led to a larger proportion of individuals returning to community living following minor strokes2 (ie, with minimal motor deficits or no obvious sensory abnormality). In this article, we review current literature to identify the potential difficulties experienced following a minor stroke.

Individuals who survive a more severe stroke often undergo extensive multidisciplinary rehabilitation in an inpatient setting. By contrast, patients who have a minor stroke are likely to be discharged home early, often with limited referrals to services beyond their general practitioner.3 This is despite increasing evidence that survivors of minor stroke may have persisting stroke-related impairments that require rehabilitation.4 These “hidden” impairments may not become apparent until after discharge, when the patient attempts to resume their usual daily activities.2,4 Edwards and colleagues4 found that despite full independence with personal activities of daily living, 87% of patients who had a minor stroke reported residual difficulties with mobility, concentration, and participation in social activities and physically demanding leisure activities such as golf. These persisting subtle impairments may cause social and economic disruption for the individual and their family; moreover, because they are difficult to identify in the hospital setting, coordination between primary and secondary care may be poor, especially if the patient is deemed fully independent at discharge. When the impairments are detected at a later stage, rehabilitation or support services may not be accessible, potentially rendering the patient “lost” in the health care system.

Equity of access is particularly concerning for minor stroke. In regional Australia, there may be no hospital or community rehabilitation services available,5 with patients at home dependent on the Medicare rebate for access to private allied health services within the current Chronic Disease Management (CDM) program.6 Women, who are more likely to be discharged to residential care, face further access challenges.7 Compounding this is evidence suggesting that not all patients who may benefit from inpatient rehabilitation are appropriately identified,1 which is concerning given the “hidden” nature of many impairments resulting from minor stroke.

A systematic review by Tellier and Rochette2 revealed that patients who have had a minor stroke often have impairments that span the domains of physical status, emotional health, cognition and social participation. The combined effect of these impairments may be an inability to fully resume valued activities, leading to reduced quality of life.2 Studies have shown that between one- and two-thirds of minor stroke survivors have compromised social participation outcomes.2,4 Edwards and colleagues4 found that 62% of patients who had a mild stroke had difficulty returning to employment or volunteer work, while 36% had reduced social activity 6 months after the stroke. Since about 30% of strokes occur in individuals under 65 years of age,8 these figures are particularly troubling. It is worth noting, however, that participants in the study by Edwards and colleagues4 had experienced a single ischaemic stroke and had a mean age of 64.74 years (range = 20–97 years). Therefore, as only about half of the participants4 in the study fell into the young stroke category, it is unknown how accurately these figures reflect the return to work status specifically of younger patients who had a minor stroke.

The 2014 National Stroke Foundation Rehabilitation audit9 found that less than 40% of patients who had a stroke received a psychological assessment before discharge. Formal neuropsychological assessment is expensive and not available in many areas and so inpatients rarely receive one, even if experiencing obvious impairments, such as aphasia or pronounced memory deficits. For people who have had a minor stroke, impairments are even less obvious and may manifest as a diverse range of milder cognitive problems, including attentional neglect or reduced processing speed. A neuropsychological assessment could identify these deficits and their impact on functioning and make recommendations for compensatory strategies or adjustments to reduce this impact.

Mental health problems, in particular depression, are prevalent regardless of stroke severity, with 25–29% of patients who have had a minor stroke reporting depression in the first year.10,11 Early and late onset post-stroke depression has been associated with disability and poor physical and mental health at 1 year,11 and with a reduced likelihood of driving a vehicle, participating in sports or recreational activities and interpersonal relationships at 6 months after the stroke.12 It is encouraging that improvement of depression within the first year after the stroke has been associated with better functional outcomes and quality of life.10 This highlights the need to regularly monitor patients after a minor stroke to identify and treat depression as soon as possible. Despite apparent good recovery, depression is a risk and some patients require referral to services, medication and psychological support in a coordinated manner.

As with most patients who have had a stroke, patients who have had a minor stroke are usually unable to drive for a period of time, relying instead on public transport, family members or unapproved driving for transport to medical appointments and other destinations. Research has found that one in four young survivors of stroke (aged 18–65 years) return to driving within 1 month after the stroke, despite recommendations to the contrary.13 Drivers who have had a minor stroke perform significantly worse on complex tasks, with greater cognitive load (eg, turning across oncoming traffic and bus following), and make twice the number of driving errors compared with control subjects.14 In addition to the detrimental influence of spatial, visual and cognitive impairments, the risk of seizure contributes to the moratorium on driving after a stroke. Premature return to driving may reflect poor compliance with advice, which is perceived as inconvenient and perhaps not fully explained to patients. Providing patients who have had a minor stroke with education about driving restrictions and alternative transport options and ongoing monitoring of driving fitness should be part of primary health care.

Patients who have had a minor stroke are also at risk of hospital re-admissions due to other medical conditions. For example, patients who have had a minor stroke have a heightened risk of experiencing a subsequent cardiovascular event.15 They may also have an array of concomitant medical conditions, including diabetes mellitus, atrial fibrillation and congestive cardiac failure,15 and may benefit from a coordinated approach to manage these comorbidities and prevent hospital re-admission.

Six months after a minor stroke, patients do significantly less high-intensity physical activity than before the stroke, and despite the benefits of physical activity for future stroke prevention, they tend not to take up new high-intensity activities.12 Indeed, Kono and colleagues16 found that higher levels of exercise, in the form of daily step counts, were associated with a reduced risk of new vascular events following minor stroke. Patients who have had a minor stroke and are living in the community may benefit from education about secondary stroke prevention. A GP-led, multifaceted, target-based approach to secondary stroke prevention may be ideal for this population, especially given that a combination of medications (eg, aspirin, a statin and an antihypertensive agent), exercise and dietary modifications has been found to produce a cumulative relative risk reduction for stroke of 80%.17
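To see how a combination of interventions could plausibly yield a cumulative relative risk reduction (RRR) of around 80%, note that independent relative risk reductions combine multiplicatively on the residual risk, not additively. The Python sketch below is illustrative only: the individual RRR values are assumed for demonstration and are not taken from the cited study.

```python
# Illustrative sketch: independent relative risk reductions (RRRs)
# each scale the *remaining* risk, so they do not simply add up.

def combined_rrr(rrrs: list[float]) -> float:
    """Cumulative RRR when each intervention acts independently."""
    residual = 1.0
    for rrr in rrrs:
        residual *= (1.0 - rrr)  # each intervention scales the residual risk
    return 1.0 - residual

# Assumed values, e.g. aspirin ~25%, statin ~30%,
# antihypertensive ~30%, exercise/diet ~30%:
print(f"{combined_rrr([0.25, 0.30, 0.30, 0.30]):.0%}")  # prints "74%"
```

Adding the assumed values naively would suggest a 115% reduction, which is impossible; the multiplicative model shows how several modest interventions can realistically approach, but not exceed, a cumulative reduction of the order reported.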

Conclusion

In summary, minor stroke is a chronic health condition with long term impairment and disability.2 Residual impairments and comorbidities often require the involvement of multiple health care providers, the need for which may not always be evident at the time of stroke. Community-living patients who have had a minor stroke may currently be managed through initiatives such as the CDM program. However, access to CDM items can be problematic and, because minor stroke appears mild, these items are likely to be overlooked. The CDM program provides five sessions per calendar year across a range of allied health services, such as speech pathology, occupational therapy, psychology and physiotherapy; the Medicare rebate may cover the total cost, depending on whether the provider accepts the Medicare benefit as full payment for the service. Five sessions are often inadequate for patients with more complex needs, but may be sufficient for patients who have had a minor stroke, and would therefore be a good use of existing resources. We therefore need to audit existing strategies in primary care to uncover which processes are working well and which require attention. This is particularly pertinent given the creation of new government initiatives, including the National Disability Insurance Scheme (for which, however, patients who have had a minor stroke look unlikely to be eligible) and Primary Health Networks within the Health Care Home framework.

A GP-led approach that coordinates a range of primary and allied health professionals close to the home of patients who have had a minor stroke may be the ideal way to meet the needs of this population and prevent costly re-admissions to hospital, while simultaneously maximising quality of life. To ensure that community-dwelling patients who have had a minor stroke and have unmet needs are not missed, we need a coordinated, integrated primary health care response that detects and manages impairments and activity restrictions as they arise, along with medical comorbidity management and self-management support. At a minimum, we need to ensure that all patients who have had a minor stroke, regardless of their geographic location, have improved access to formal neuropsychological assessment, falls prevention, exercise programs and more extensive Medicare-based allied health funding if required. The key to this is auditing existing programs and investigating the relevance of new government initiatives as they arise for these patients, while also improving the communication between hospitals and primary health care services. Further study of the unmet needs and mechanisms for ensuring access for all patients who have had a stroke is also vital.

Regenerative neurology: meeting the need of patients with disability after stroke

If regenerative neurology restores function, it will meet a huge unmet need and change dogma

Treatment of stroke in the acute phase has come a long way with the development of paramedic, emergency department and stroke team pathways for hyperacute assessment and management with intravenous thrombolysis, endovascular clot retrieval and hemicraniectomy. Acute stroke units reduce mortality and morbidity by up to 20% or more.1 An estimated 80% of stroke patients survive for one year after stroke, with the large majority being left with chronic disability.2 In Australia and many other countries around the world, stroke is the leading cause of adult disability.3 It is estimated that up to 450 000 Australians have disability after stroke.4,5

The only intervention currently available to stroke survivors is rehabilitation. Increasing evidence suggests that rehabilitation complements the natural functional recovery process that can often continue for months or years after stroke.6 However, there are persisting gaps in our understanding of the basic biological pathways that drive post-stroke recovery, and these pose challenges in applying evidence-based rehabilitation strategies in the real world. This becomes especially critical as patients often need a combination of rehabilitation strategies that cater for their specific disability and complement their potential for long-term recovery. These are often required beyond the period for which rehabilitation services are currently made available due to resource constraints.7 So where does that leave us in 2017?

Regenerative neurology or stem cell therapy may provide an answer to this unmet need by potentially restoring neurological function in an individualised manner. Many stem cell researchers and clinicians hold the view that the field of regenerative medicine may have as large an impact on humanity as antibiotics.8

Basics of stem cells

Stem cells are unique in possessing two qualities — the capacity for self-renewal and the potential for multilineage differentiation. If a stem cell is pluripotent, it can give rise to cells derived from all three germ layers (ectoderm, mesoderm and endoderm) that differentiate into different tissues during embryonic development. On the other hand, a multipotent stem cell tends to generate limited cell types, often relevant to the organ from which the stem cell was derived — for example, haematopoietic stem cells (HSCs) tend to generate blood and immune cell types. Embryonic stem cells isolated from the very early embryo are pluripotent while adult somatic stem cells derived from adult organs, such as mesenchymal stem cells from bone marrow, are multipotent, similar to HSCs.

A significant clinical limitation to the therapeutic use of embryonic stem cells is their potential to form tumours, such as teratomas, which contain multiple cell types from the different embryonic lineages (hair, bone, teeth, heart muscle and so on).9 In contrast, to date, multipotent cells such as mesenchymal stem cells are considered safer, with animal studies reporting no increase in tumorigenicity.10

In 2006, Yamanaka (2012 Nobel Laureate in Physiology or Medicine) showed that somatic cells (skin fibroblasts) could be genetically reprogrammed by the introduction of four genes (known as the Yamanaka factors) to produce pluripotent cells similar to embryonic stem cells.11 This third type of stem cell is termed an induced pluripotent stem cell (iPSC). This discovery has radically transformed stem cell research and proffers the concept of personalised regenerative medicine. Early clinical trials have already started deriving iPSCs from an individual's fibroblasts for autologous (self-)treatment or personalised medicine.12 The findings of preclinical studies in stroke models have provided encouraging evidence of potential for neuroregeneration and useful insights into potential future applicability.13-15

Chronic stroke and local injection

Last year was an exciting one for stem cell therapy in stroke patients. Two high impact publications documented early phase clinical studies of two different multipotent stem cell products, SB623 and CTX0E03. Both are genetically modified stem cell types, one isolated from fetal brain tissue16 and the other from adult bone marrow.17 Two independent research teams from reputable institutions in the United Kingdom and United States performed these studies with industry funding (ReNeuron and SanBio, respectively).

This research examined two key questions in relation to study design:

  • Is it potentially useful to treat stroke survivors in the chronic phase when their disability has plateaued, sometimes as long as 3 to 4 years after stroke?

  • Is intracerebral implantation of stem cells a feasible route of administration?

Published preclinical and preliminary clinical data indicate that the design of the studies was valid, although research opinion is often divided as to optimum timing and route of administration of cell transplantation.9

Why was stem cell therapy not administered in the acute phase after stroke in these studies? There may be a number of clinically pragmatic answers to this question — in the acute phase, patients may be too medically unstable to undergo neurosurgery. Moreover, patients are often still showing rapid improvement, so it would be problematic to measure any benefit above that of optimum acute stroke unit care, when disability has not yet plateaued.18

Why was a neurosurgical implantation chosen? “Functional neurosurgery” is a fast-developing specialty and these neurosurgeons routinely implant electrodes for deep brain stimulation to treat Parkinson disease. Thus they have the expertise to inject, via a narrow bore cannula, deposits of stem cells into multiple sites within the human brain. One benefit to the patient of intracerebral implantation is that the cells remain within the brain and can be imaged non-invasively.19 An alternative route of administration used in earlier clinical studies was intravenous injection.20 Initially, this approach was considered safer than intracerebral implantation, but it is now appreciated that there is a theoretical risk of distant tumorigenicity: stem cells injected intravenously may deposit widely throughout a number of organs (eg, lung and liver) and may interact with presymptomatic tumours.20

Is it safe?

Early phase clinical trials characteristically involve small numbers of patients to minimise the number at risk if there is a serious treatment-related adverse event. In the two studies described above,16,17 27 patients were followed for 12 months after treatment, which is a generally accepted timeframe. The studies stated that no adverse event directly attributable to the stem cell therapy was found. However, the neurosurgical procedure of creating a burr hole and entering the brain to administer the cells did result in appreciable anticipated adverse events (ie, haematoma, headache and other symptoms related to the consequent reduction of intracranial pressure). It is noteworthy that both studies will continue surveillance of all patients after 12 months to detect any longer term adverse events.

We propose an alternate perspective with respect to the claims that no stem cell-related adverse events occurred. Stem cells implanted into the brain are known from preclinical data to differentiate into neural cells and probably integrate within the brain.9 In theory, this cellular behaviour has the potential to form an epileptogenic focus. A small number of patients in each of the two high impact studies16,17 were reported to have seizures. With this limited clinical dataset, it cannot be concluded whether their seizures arose from the neurosurgical procedure, as suggested in the publications,16,17 or were related to the stem cells. We propose that larger phase 2/3 studies should incorporate electroencephalography investigations to better understand the association between seizures and intracerebral stem cell implantation.

The data from these two early phase clinical studies support the feasibility and safety of intracerebral implantation of stem cells in patients with chronic disability after stroke. Both studies used an escalating dose design. Cell doses of up to 10 million SB623 cells and 20 million CTX0E03 cells may be used in future larger phase 2 studies.

So: does it work?

This question will not be answered with any degree of certainty for a number of years as we await the results from large, multicentre, multinational, double-blind, randomised controlled clinical trials. While preclinical data from animal studies suggest an overall functional improvement of 40.6%, the extrapolation of these findings to human stroke pathophysiology is limited by: (i) species-specific differences; and (ii) the fact that controlled induction of cerebral ischaemic lesions in animals is not fully representative of the heterogeneous lesion load seen with human stroke.9

Early clinical studies enrolled a heterogeneous mix of patient groups. Most of these studies were open label and single arm and thus not designed to answer the question of efficacy. Therefore, at present, it is difficult to postulate any differential benefit for specific patient or stroke subgroups.18 From a mechanistic perspective, there are a number of theories from preclinical data on how stem cell therapy may decrease post-stroke disability (Box), with neuroplasticity considered to be an important factor.21

An aspect of immense practical relevance is that standardised rehabilitation was not provided to participants in these studies. There is an ongoing debate about the potential confounding effect of rehabilitation on functional and structural outcomes. However, rehabilitation is accepted as a standard of care to optimise natural recovery, and guidelines for stem cell research such as Stem Cell Therapy as an Emerging Paradigm for Stroke (STEPS)22 recommend its inclusion in trial design. Stroke clinicians will know from everyday experience that significant improvement in neurological function many years after an ischaemic stroke is rarely observed. The two studies described above16,17 are very important in the field of regenerative neurology in that both found an associated improvement in function in the chronic phase of stroke among patients with different areas of stroke-induced injury. In light of the emerging evidence for long-term potential to relearn that can be harnessed by rehabilitation, stem cell implantation along with targeted and protracted rehabilitation could have a synergistic and biologically plausible impact on post-stroke recovery.

It is of fundamental interest that both studies described changes on magnetic resonance imaging (MRI) of the human brain after treatment. It was suggested that these MRI findings may not be explained by the neurosurgical procedure alone.17 These preliminary findings may present an opportunity for reverse translational research, from the clinic back into the research laboratory, to gain a better understanding of how changes in the human brain may occur after stem cell therapy.

At this juncture of stem cell research in stroke, there are three important points to be considered:

  • The preclinical and early clinical data which suggest that stem cell therapy may be helpful are becoming encouragingly robust.23

  • The preponderance of failed translation efforts from preclinical to clinical therapeutics in stroke highlights that continued exercise of scientific rigor is critical.

  • Ongoing stem cell tourism across the world and in Australia to reach centres that operate for financial gain without regard to research integrity or patient safety poses a significant danger to the credibility of this field.24

The current regulatory framework in Australia for oversight of cellular therapies has significant gaps in scope as well as implementation. It is a matter of urgency that our politicians and regulatory authorities collaborate with their counterparts in the US, European Union, Japan and other regions where innovative approaches are being implemented to develop the field while creating adequate safeguards to protect patient interests.25,26

Exciting scientific research is that in which the questions raised outweigh the answers. We suggest the quest to fulfil the unmet need for treating disability after stroke has taken a step forward.

Box – Putative mechanisms of action of stem cells in stroke*


* Adapted with permission from Nagpal et al.21

Clot retrieval and acute stroke care

Resource distribution in stroke care must be rational and evidence-based, not driven by media coverage

There has been a sudden upsurge of interest in the availability of endovascular clot retrieval (ECR) for stroke treatment (Box). Recent newspaper articles published in New South Wales1,2 highlight the potential benefits as well as the financial and logistical challenges of providing around the clock ECR services in a vast country like Australia.

Many well designed randomised trials have demonstrated the efficacy and safety of ECR,3 but the evidence of benefit, although dramatic in some cases, is confined to patients whose ECR procedure begins within 6 hours of symptom onset,4 with or without intravenous thrombolysis. However, before 24/7 ECR services can operate in Australia, individual health services need to examine the challenges of transporting eligible stroke patients from the emergency departments wherever they may be — remote, regional or urban — to a comprehensive stroke centre within the required time window. This potentially means covering vast distances — several hundreds of kilometres in some cases — in a matter of hours. A detailed analysis then needs to demonstrate that the benefits of a rapid ECR service justify funding over many other competing health care needs. Victoria, the third smallest Australian health jurisdiction (after the Australian Capital Territory and Tasmania) with the second highest population,5 has set an example by starting a statewide 24/7 ECR service. However, it is unclear at this stage whether a similar service would be feasible in larger states with lower population densities, given the similar challenges for ECR service provision in the United States and Canada.6 Currently, there are no 24/7 statewide ECR pathways in New South Wales. In the Australian Capital Territory, the health service is establishing a 24/7 ECR service in 2017, although many hurdles still remain.

However, even within stroke care, there are other, perhaps more immediate, needs. Currently, there are large gaps in our ability to deliver basics of stroke care, particularly in regional Australia. For example, a 2015 national audit of acute stroke services coordinated by the Stroke Foundation showed that only 67% of patients were admitted into a stroke unit,7 even though patients admitted to stroke units with any type of stroke, ischaemic and haemorrhagic, are more likely to be alive, living at home and independent 1 year after their stroke.8

Another area that could be improved is availability of advanced imaging — currently, as a result of lack of expertise and local department policy, too many stroke patients, even in urban centres, do not benefit from multimodal imaging such as computed tomography (CT) angiogram or CT perfusion scans. These scans can demonstrate the presence and location of a clot within the cerebral vasculature, as well as the size of the penumbra (the area of the brain at risk of infarction without urgent revascularisation), to identify patients who may benefit from aggressive and more invasive treatments such as ECR.9

Further, intravenous thrombolysis — a proven10 and more easily accessible therapy than ECR — is currently not an option for many stroke patients who would be eligible. The national thrombolysis rate is currently languishing at 7%, unchanged from 2011.7 Some work has already been done to improve this by establishing 24/7 acute stroke teams and forming regional acute stroke networks where a large hospital provides thrombolysis expertise and support for the regional and rural hospitals. Setting up a statewide ECR service without widely available capacity to optimally assess, image, treat and transport stroke patients risks squandering precious resources without clear benefits for the majority who do not live in the immediate vicinity of an ECR centre.

Delivering a world class stroke service is complex; the constantly advancing evidence base means that the entire service is always playing catch up. It is important to foster a provision and funding model for rational, holistic and flexible stroke services that puts patient outcomes first and covers all aspects of stroke care — not just the acute reperfusion therapies of intravenous thrombolysis and ECR but also stroke unit care and specialist neurorehabilitation. There is much work to be done, given that only one in 87 stroke units qualifies as a comprehensive stroke service,7 only 40% of stroke units routinely utilise established guidelines, care plans and protocols,7 and one in three patients is discharged from hospital without any preventive medications.7 Focusing the discussion about stroke care only on the availability of ECR — a necessary but complex and costly intervention which will benefit only a small proportion of stroke patients — diverts resources from wider and more fundamental needs in stroke care and does not serve the best interests of our patients.

Box – Endovascular clot retrieval


Cerebral angiograms showing (A) occlusion of the proximal right middle cerebral artery (arrow), and (B) recanalisation of the same artery after clot retrieval (arrow). Images courtesy of Dr Shivendra Lallo, Canberra Hospital, ACT.

Neurobionics and the brain–computer interface: current applications and future horizons

Neurobionics is the science of directly integrating electronics with the nervous system to repair or substitute impaired functions. The brain–computer interface (BCI) is the linkage of the brain to computers through scalp, subdural or intracortical electrodes (Box 1). Development of neurobionic technologies requires interdisciplinary collaboration between specialists in medicine, science, engineering and information technology, and large multidisciplinary teams are needed to translate the findings of high performance BCIs from animals to humans.1

Neurobionics evolved out of Brindley and Lewin’s work in the 1960s, in which electrodes were placed over the cerebral cortex of a blind woman.2-4 Wireless stimulation of the electrodes induced phosphenes — spots of light appearing in the visual fields. This was followed in the 1970s by the work of Dobelle and colleagues, who provided electrical input to electrodes placed on the visual cortex of blind individuals via a camera mounted on spectacle frames.2-4 The cochlear implant, also developed in the 1960s and 1970s, is now a commercially successful 22-channel prosthesis for restoring hearing in deaf people with intact auditory nerves.5 To aid those who have lost their auditory nerves, the direct brainstem cochlear nucleus multi-electrode prosthesis was subsequently developed.6

The field of neurobionics has advanced rapidly because of the need to provide bionic engineering solutions to the many disabled US veterans from the Iraq and Afghanistan wars who have lost limbs and, in some cases, vision. The United States Defense Advanced Research Projects Agency (DARPA) has focused on funding this research in the past decade.7

Through media reports about courageous individuals who have undergone this pioneering surgery, disabled people and their families are becoming more aware of the promise of neurobionics. In this review, we aim to inform medical professionals of the rapid progress in this field, along with the ethical challenges that have arisen. We performed a search on PubMed using the terms “brain computer interface”, “brain machine interface”, “cochlear implants”, “vision prostheses” and “deep brain stimulators”. We then conducted a further search based on the reference lists of these initial articles. We limited articles to those published in the past 10 years, together with those describing the first instances of brain–machine interfaces.

Electrode design and placement

Neurobionics has been increasing in scope and complexity because of innovative electrode design, miniaturisation of electronic circuitry and manufacture, improvements in wireless technology and increasing computing power. Using computers and advanced signal processing, neuroscientists are learning to decipher the complex patterns of electrical activity in the human brain via these implanted electrodes. Multiple electrodes can be placed on or within different regions of the cerebral cortex, or deep within the subcortical nuclei. These electrodes transmit computer-generated electrical signals to the brain or, conversely, receive, record and interpret electrical signals from this region of the brain.

Microelectrodes that penetrate the cortical tissue offer the highest fidelity signals in terms of spatial and temporal resolution, but they are also the most invasive (Box 2, A).8 These electrodes can be positioned within tens of micrometres of neurons, allowing the recording of both action potential spikes (the output) of individual neurons and the summed synaptic input of neurons in the form of the local field potential.9 Spiking activity has the highest temporal and spatial resolution of all the neural signals, with action potentials occurring in the order of milliseconds. In contrast, the local field potential integrates information over about 100 μm, with a temporal resolution of tens to hundreds of milliseconds.

Electrocorticography (ECoG), using electrodes placed in the subdural space (on the cortical surface), and electroencephalography (EEG), using scalp electrodes, are also being used to detect cortical waveforms for signal processing by advanced computer algorithms (Box 2, C, D). Although these methods are less invasive than using penetrating microelectrodes, they cannot record individual neuron action potentials, instead measuring an averaged voltage waveform over populations of thousands of neurons. In general, the further the electrodes are from the brain, the safer the implantation procedure, but at the cost of a lower signal-to-noise ratio and fewer control signals that can be decoded (ie, there is more background noise). Therefore, ECoG recordings, being closer to the brain, typically have higher spatial and temporal resolution than is achievable with EEG.8 As EEG electrodes are separated from the brain by the skull and scalp, the recordings have low fidelity and a low signal-to-noise ratio. For stimulation, subdural electrodes require higher voltages to activate neurons than intracortical electrodes and are less precise for both stimulation and recording. Transcranial magnetic stimulation can be used to stimulate populations of neurons, but it is a crude technique compared with invasive microelectrode techniques.10

Currently, implanted devices have an electrical plug connection through the skull and scalp, with attached cables. This is clearly not a viable solution for long term implantation. The challenge for engineers has been to develop the next generation of implantable wireless microelectronic devices with a large number of electrodes that have a long duration of functionality. Wireless interfaces are beginning to emerge.3,1113

Applications for brain–computer interfaces

Motor interfaces

The aim of the motor BCI has been to help paralysed patients and amputees gain motor control using, respectively, a robot and a prosthetic upper limb. Non-human primates with electrodes implanted in the motor cortex were able, with training, to control robotic arms through a closed loop brain–machine interface.14 Hochberg and colleagues were the first to place a 96-electrode array in the primary motor cortex of a tetraplegic patient and connect this to a computer cursor. The patient could then open emails, operate various devices (such as a television) and perform rudimentary movements with a robotic arm.15 For tetraplegic patients with a BCI, improved control of the position of a cursor on a computer screen was obtained by controlling its velocity and through advanced signal processing.16 These signal processing techniques find relationships between changes in the neural signals and the intended movements of the patient.17,18

Reach, grasp and more complex movements have been achieved with a neurally controlled robotic arm in tetraplegic patients.19,20 These tasks are significantly more difficult than simple movements as they require decoding of up to 15 independent signals to allow a person to perform everyday tasks, and up to 27 signals for a full range of movements.21,22 To date, the best BCI devices provide fewer than ten independent signals. The patient requires a period of training with the BCI to achieve optimal control over the robotic arm. More complex motor imagery, including imagined goals and trajectories and types of movement, has been recorded in the human posterior parietal cortex. Decoding this imagery could provide higher levels of control of neural prostheses.23 More recently, a quadriplegic patient was able to move his fingers to grasp, manipulate and release objects in real time, using a BCI connected to cutaneous electrodes on his forearms that activated the underlying muscles.24

The challenge with all these motor cortex electrode interfaces is to convert them to wireless devices. This has recently been achieved in a monkey with a brain–spinal cord interface, enabling restoration of movement in its paralysed leg,25 and in a paralysed patient with amyotrophic lateral sclerosis, enabling control of a computer typing program.11

These examples of BCIs have primarily used penetrating microelectrodes, which, despite offering the highest fidelity signal, suffer from signal loss over months to years due to peri-electrode gliosis.26 This scarring reduces electrical conduction and the resulting signal change can require daily or even hourly recalibration of the algorithms used to extract information.18 This makes BCIs difficult to use while unsupervised and hinders wider clinical application, including use outside a laboratory setting.

A recently developed, less invasive means of electrode interface with the motor cortex is the stent-electrode recording array (“stentrode”) (Box 2, B).27 This is a stent embedded with recording electrodes that is placed into the sagittal venous sinus (situated near the motor cortex) using interventional neuroradiology techniques. This avoids the need for a craniotomy to implant the electrodes, but there are many technical challenges to overcome before human trials of the stentrode can commence.

Lower-limb robotic exoskeleton devices that enable paraplegic patients to stand and walk have generated much excitement and anticipation. BCIs using scalp EEG electrodes are unlikely to provide control of movement beyond activating simple robotic walking algorithms in the exoskeleton, such as “walk forward” or “walk to the right”. Higher degrees of complex movement control of the exoskeleton with a BCI would require intracranial electrode placement.28 Robotic exoskeleton devices are currently cumbersome and expensive.

Sensory interfaces

Fine control of grasping and manipulation of the hand depends on tactile feedback. No commercial solution for providing artificial tactile feedback is available. Although early primate studies have produced artificial perceptions through electrical stimulation of the somatosensory cortex, stimulation can detrimentally interfere with the neural recordings.29 Optogenetics — the ability to make neurons light-sensitive — has been proposed to overcome this.30 Sensorised thimbles have been placed on the fingers of the upper limb myoelectric prosthesis to provide vibratory sensory feedback to a cuff on the arm, to inform the individual when contact with an object is made and then broken. Five amputees have trialled this, with resulting enhancement of their fine control and manipulation of objects, particularly for fragile objects.31 Sensory feedback relayed to the peripheral nerves and ultimately to the sensory cortex may provide more precise prosthetic control.32

Eight people with chronic paraplegia who used immersive virtual reality training over 12 months saw remarkable improvements in sensory and motor function. The training involved an EEG-based BCI that activated an exoskeleton for ambulation and visual–tactile feedback to the skin on the forearms. This is the first demonstration in animals or humans of long term BCI training improving neurological function, which is hypothesised to result from both spinal cord and cortical plasticity.33

The success of the cochlear prosthesis in restoring hearing to totally deaf individuals has also demonstrated how “plastic” the brain is in learning to interpret electrical signals from the sound-processing computer. The recipient learns to discern, identify and synthesise the various sounds.

The development of bionic vision devices has mainly focused on the retina, but electrical connectivity of these electrode arrays depends on the recipient having intact neural elements. Two retinal implants are commercially available.3 Retinitis pigmentosa has been the main indication. Early trials of retinal implants are commencing for patients with age-related macular degeneration. However, there are many blind people who will not be able to have retinal implants because they have lost the retinal neurons or optic pathways. Placing electrodes directly in the visual cortex bypasses all the afferent visual pathways.

It has been demonstrated that electrical stimulation of the human visual cortex produces discrete reproducible phosphenes. Several groups have been developing cortical microelectrode implants to be placed into the primary visual cortex. Since 2009, the Monash Vision Group has been developing a wireless cortical bionic vision device for people with acquired bilateral blindness (Box 3). Photographic images from a digital camera are processed by a pocket computer, which transforms the images into the relevant contours and shapes and into patterns of electrical stimulation that are transmitted wirelessly to the electrodes implanted in the visual cortex (Box 3, B). The aim is for the recipient to be able to navigate, identify objects and possibly read large print. Facial recognition is not offered because the number of electrodes will not deliver sufficient resolution.2 A first-in-human trial is planned for late 2017.2,34
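The image-processing step can be illustrated with a minimal sketch: a grayscale camera frame is averaged down to a coarse grid, one cell per electrode, and thresholded into an on/off stimulation pattern. The grid size and threshold here are illustrative assumptions, not the Monash Vision Group's actual processing pipeline.

```python
import numpy as np

def image_to_stimulation_map(image, grid=(7, 6), threshold=0.5):
    """Reduce a grayscale image (pixel values 0-1) to a coarse on/off
    stimulation pattern matching a small electrode grid."""
    rows, cols = grid
    h, w = image.shape
    # Average the image over rectangular blocks, one block per electrode
    block = image[: (h // rows) * rows, : (w // cols) * cols]
    block = block.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    # Electrodes whose region is bright enough are switched on
    return (block > threshold).astype(int)

# A bright vertical bar in a dark field maps to two "on" electrode columns
img = np.zeros((70, 60))
img[:, 20:40] = 1.0
pattern = image_to_stimulation_map(img)
print(pattern)
```

The low electrode count is why the device targets contours and large shapes rather than fine detail such as faces: each electrode can contribute at most one phosphene.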

The lateral geniculate nucleus of the thalamus is an alternative site for implantation of bionic vision devices. Further technical development of the design, manufacture and placement of multiple brain microelectrodes in this small deep brain structure is needed before this could be applied in humans.35

Memory restoration and enhancement

The same concepts and technologies used to record and stimulate the brain in motor and sensory prostheses can also be applied to deeper brain structures. For example, the fornix is an important brain structure for memory function. A human safety study of bilateral deep brain stimulation of the fornix has been conducted in 42 patients with mild, probable Alzheimer disease (ADvance trial), and this study will now proceed to a randomised controlled trial.36 This technique involves deep brain stimulation without direct feedback from neural recording.

A more definitive approach to memory augmentation would be to place a multi-electrode prosthesis directly into the hippocampus. Electrical mimicry of encoded patterns of memory about a task transmitted from trained donor rats to untrained recipient rats resulted in enhanced task performance in the recipients.37,38 This technology has been applied to the prefrontal cortex of non-human primates.39 Although human application is futuristic, this research is advancing rapidly. A start-up company was formed in 2016 to develop this prosthetic memory implant into a clinic-ready device for people with Alzheimer disease.40 The challenge in applying these therapies to Alzheimer disease and other forms of dementia will be to intervene before excessive neuronal loss has occurred.

Seizure detection and mitigation

Many patients with severe epilepsy do not achieve adequate control of seizures with medication. Deep brain electrical stimulation, using electrodes placed in the basal ganglia, is a treatment option for patients with medically refractory generalised epilepsy.41 Methods to detect the early onset of epileptic seizures using cortical recording and stimulation (to probe for excitability) are evolving rapidly.42 A hybrid neuroprosthesis, which combines electrical detection of seizures with an implanted anti-epileptic drug delivery system, is also being developed.43,44
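One classic, computationally cheap feature used in electrographic seizure detection is line length, which rises sharply during high-amplitude, high-frequency activity. The sketch below is a generic illustration of the principle, not any specific implanted device's algorithm; the window size, threshold and simulated signal are assumptions.

```python
import math

def line_length(window):
    """Sum of absolute sample-to-sample differences: a cheap feature
    that rises sharply during seizure-like activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def detect_seizure(signal, window_size=50, threshold=25.0):
    """Return the start index of the first window whose line length
    exceeds the threshold, or None if no window does."""
    for start in range(0, len(signal) - window_size + 1, window_size):
        if line_length(signal[start:start + window_size]) > threshold:
            return start
    return None

# Quiet background activity followed by a high-amplitude oscillation
background = [0.1 * math.sin(0.2 * i) for i in range(200)]
seizure = [2.0 * math.sin(2.0 * i) for i in range(100)]
onset = detect_seizure(background + seizure)
print(onset)  # 200
```

An implanted detector would run such a feature continuously on each channel and, on crossing the threshold, trigger responsive stimulation or, in the hybrid neuroprosthesis described above, drug delivery.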

Parkinson disease and other movement disorders

Deep brain stimulation in the basal ganglia is an effective treatment for Parkinson disease and other movement disorders.45 This type of BCI includes a four-electrode system implanted in the basal ganglia, on one or both sides, which is connected to a pulse generator implanted in the chest wall. This device can be reprogrammed wirelessly. Novel electrodes with many more electrode contacts and a recording capacity are being developed. Such feedback-controlled or closed-loop stimulation will require a fully implanted BCI, so that the deep brain stimulation is adaptive and can better modulate control of the movement disorder from minute to minute. More selective directional and steerable deep brain stimulation, with the electrical current delivered in one direction from the active electrodes rather than circumferentially, is being developed. The aim is to provide more precise stimulation of the target neurons, with less unwanted stimulation of surrounding areas and therefore fewer side effects.46
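The closed-loop principle can be illustrated with a toy proportional controller that adjusts stimulation amplitude to hold a recorded biomarker (for example, pathological beta-band power in Parkinson disease) at a target level. The gain, target, safety limits and the assumed response of beta power to stimulation are purely illustrative.

```python
def closed_loop_step(beta_power, amplitude, target=1.0, gain=0.5,
                     amp_min=0.0, amp_max=3.0):
    """One update of a proportional controller: raise stimulation when
    the recorded biomarker is above target, lower it when below, and
    clamp to the device's safe output range."""
    error = beta_power - target
    amplitude += gain * error
    return max(amp_min, min(amp_max, amplitude))

# Simulate a (hypothetical) brain in which stimulation suppresses
# beta power proportionally; the controller settles near equilibrium.
amp = 0.0
for _ in range(20):
    beta = 2.0 - 0.5 * amp  # assumed biomarker response to stimulation
    amp = closed_loop_step(beta, amp)
print(round(amp, 3))
```

Adaptive stimulation of this kind aims to deliver current only as needed, which may reduce both side effects and battery drain compared with continuous open-loop stimulation.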

Technical challenges and future directions

Biocompatibility of materials, electrode design to minimise peri-electrode gliosis and electrode corrosion, and loss of insulation integrity are key engineering challenges in developing BCIs.47 Electrode carriers must be hermetically sealed to prevent ingress of body fluids. Smaller, more compact electronic components and improved wireless interfaces will be required. Electronic interfaces with larger numbers of neurons will necessitate new electrode design, but also more powerful computers and advanced signal processing to allow significant use time without recalibration of algorithms.

Advances in nanoscience and wireless and battery technology will likely have an increasing impact on BCIs. Novel electrode designs using materials such as carbon nanotubes and other nanomaterials, electrodes with anti-inflammatory coatings or mechanically flexible electrodes to minimise micromotion may have greater longevity than standard, rigid, platinum–iridium brain electrodes.48 Electrodes that record from neural networks in three dimensions have been achieved experimentally using injectable mesh electronics with tissue-like mechanical properties.49 Optogenetic techniques activate selected neuronal populations by directing light onto neurons that have been genetically engineered with light-sensitive proteins. There are clearly many hurdles to overcome before this technology is available in humans, but microscale wireless optoelectronic devices are working in mice.50

Populating the brain with nanobots that create a wireless interface may eventually enable direct electronic interface with “the cloud”. Although this is currently science fiction, the early stages of development of this type of technology have been explored in mice, using intravenous administration of 10 μg of magnetoelectric particles that enter the brain and modify brain activity by coupling intrinsic neural activity with external magnetic fields.51

Also in development is the electrical connection of more than one brain region to a central control hub — using multiple electrodes with both stimulation and recording capabilities — for integration of data and neuromodulation. This may result in more nuanced treatments for psychiatric illness (such as depression, post-traumatic stress disorder and obsessive compulsive disorder), movement disorders, epilepsy and possibly dementia.

Ethical and practical considerations

Implantable BCI devices are in an early phase of development, with most first-in-human studies describing only a single patient. However, the performance of these devices is rapidly improving and, as they become wireless, the next step will be to implant BCIs in larger numbers of patients in multicentre trials.

The prime purpose of neurobionic devices is to help people with disabilities. However, there will be pressure in the future for bionic enhancement of normal cognitive, memory, sensory or motor function using BCIs. Memory augmentation, cognitive enhancement, infrared vision and exoskeletal enhancement of physical performance will all likely be achievable.

The introduction of this technology generates many ethical challenges, including:

  • appreciation of the risk–benefit ratio;

  • provision of adequate and balanced information for the recipient to give informed consent;

  • affordability in relation to the fair and equitable use of the scarce health dollar;

  • inequality of patient access to implants, particularly affecting those in poorer countries;

  • undue influence on physicians and scientists by commercial interests; and

  • the ability to achieve unfair physical or cognitive advantage with the technology, such as enhancing disabled athletes’ performance using exoskeleton devices, military application with the creation of an enhanced “super” soldier, or using a BCI as the ultimate lie detector.52

The introduction of these devices into clinical practice should therefore not proceed unchecked. As the technology transitions from clinical trial to the marketplace, training courses and mentoring will be needed for the surgeons who are implanting these devices. Any new human application of the BCI should be initially tested for safety and efficacy in experimental animal models. After receiving ethics committee approval for human application, the technology should be thoroughly evaluated in well conducted clinical trials with clear protocols and strict inclusion criteria.53

One question requiring consideration is whether sham surgery should be used to try to eliminate a placebo effect from the implantation of a new BCI device. Inclusion of a sham surgery control group in randomised controlled trials of surgical procedures has rarely been undertaken,54 and previous trials involving sham surgery have generated much controversy.55–57 Sham surgery trials undertaken for Parkinson disease have involved placing a stereotactic frame on the patient and drilling of burr holes but not implanting embryonic cells or gene therapy.58–60 We do not believe sham surgery would be applicable for BCI surgery, for several reasons. First, each trial usually involves only one or a few participants; there are not sufficient numbers for a randomised controlled trial. Second, the BCI patients can serve as their own controls because the devices can be inactivated. Finally, although sham controls may be justified if there is likely to be a significant placebo effect from the operation, this is not the case in BCI recipients, who have major neurological deficits such as blindness or paralysis.

Clinical application of a commercial BCI will require regulatory approval for an active implantable medical device, rather than approval as a therapy. It is also important for researchers to ask the potential recipients of this new technology how they feel about it and how it is likely to affect their lives if they volunteer to receive it.61 This can modify the plans of the researchers and the design of the technology. The need for craniotomy, with its attendant risks, may deter some potential users from accepting this technology.

As the current intracortical electrode interfaces may not function for more than a few years because of electrode or device failure, managing unrealistic patient and family expectations is essential. Trial participants will also require ongoing care and monitoring, which should be built into any trial budget. International BCI standards will need to be developed so that there is uniformity in the way this technology is introduced and evaluated.

Conclusions

BCI research and its application in humans is a rapidly advancing field of interdisciplinary research in medicine, neuroscience and engineering. The goal of these devices is to improve the level of function and quality of life for people with paralysis, spinal cord injury, amputation, acquired blindness, deafness, memory deficits and other neurological disorders. The capability to enhance normal motor, sensory or cognitive function is also emerging and will require careful regulation and control. Further technical development of BCIs, clinical trials and regulatory approval will be required before there is widespread introduction of these devices into clinical practice.

Box 1 –
Schematic overview of the major components of brain–computer interfaces


Common to all devices are electrodes that can interface at different scales with the neurons in the brain. For output-type interfaces (green arrows), the brain signals are amplified and a computer decodes control signals from them. These decoded signals are then used to control devices that can interact with the world, such as computer cursors or robotic limbs. For input-type interfaces (red arrows), such as vision or auditory prostheses, a sensor captures the relevant input, which a computer translates into stimulation parameters that are sent to the brain via an electrode interface. EEG = electroencephalography. LFP = local field potential.
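For an output-type interface, the decoding step is, at its simplest, a linear mapping from recorded firing rates to effector commands such as cursor velocity. The sketch below uses hypothetical weights, channel count and baseline rate; real decoders are calibrated to each recipient and are typically far more sophisticated.

```python
import numpy as np

# Hypothetical decoding weights: each of 4 recorded channels
# contributes linearly to the 2-D cursor velocity (vx, vy).
W = np.array([[ 0.5,  0.0],
              [-0.5,  0.0],
              [ 0.0,  0.5],
              [ 0.0, -0.5]])

def decode_velocity(firing_rates, baseline=10.0):
    """Map baseline-subtracted firing rates (spikes/s) on each channel
    to a cursor velocity via a fixed linear decoder."""
    return (np.asarray(firing_rates) - baseline) @ W

# Channel 0 fires above baseline, channel 1 below: cursor moves right
v = decode_velocity([14.0, 6.0, 10.0, 10.0])
print(v)  # [4. 0.]
```

This also illustrates why recalibration is a recurring practical problem: the weights are tied to the particular neurons being recorded, and signal drift at the electrode interface degrades the mapping over time.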

Box 2 –
Electrodes of different scales that can be used to record neural activity for brain–computer interfaces


A: The most invasive method of recording neural activity, which produces the best signal quality, requires penetrating microelectrodes, such as this Utah array (Blackrock Microsystems), with 100 electrodes with a spacing of 400 μm. Wires connected to each electrode (bundled to the right of the image) need to be percutaneously connected to the outside world. B: Electrodes placed on an intravascular stent with (inset) a close-up image of a few electrodes (750 μm diameter). C: A 128-channel, non-invasive electroencephalography cap. After the cap is applied to the scalp, conductive gel is injected into each electrode to ensure electrical contact. D: An example of a planar array that can be placed in the subdural space to record electrocorticography signals. The platinum electrodes (350 μm diameter circles) are embedded in silicone.

Box 3 –
An example of a fully implantable brain–computer interface


A: The Monash Vision Group cortical vision prosthesis, which consists of an array of penetrating microelectrodes (metallic spikes) connected through a ceramic casing to electronics that are capable of delivering electrical stimulation and receiving wireless power and control signals. B: A close-up of a single electrode with a 150 μm diameter. The bright band is the conductive ring electrode, where electrical charge is delivered. Electrodes are spaced 1 mm apart.