Friday, July 12, 2013

The Brain Processes Complex Stimuli More Cumulatively Than We Thought

The finding represents a new view of how the brain creates internal representations of the visual world. "We are excited to see if this novel view will dominate the wider consensus," said senior author Dr. Miyashita, who is also Professor of Physiology at the University of Tokyo's School of Medicine, "and also about the potential impact of our new computational principle on a wide range of views on human cognitive abilities."

The brain recalls the patterns and objects we observe by developing distinct neuronal representations that go along with them, the same way it recalls memories. Scientists have long hypothesized that these neuronal representations emerge through a hierarchical process confined to the same cortical region in which the representations are first processed. Because the brain perceives and recognizes the external world through these internal images, any new information about how this takes place has the power to inform our understanding of related functions, including knowledge acquisition and memory. However, studies attempting to uncover the functional hierarchy involved in the cortical processing of visual stimuli have tried to characterize it by analyzing the activity of single nerve cells, whose activity is not necessarily correlated with that of nearby neurons, leaving these analyses incomplete.

In a new study appearing in the 12 July issue of the journal Science, lead author Toshiyuki Hirabayashi and colleagues focus not on single neurons but on the relationship between neuron pairs, testing the possibility that the representation of an object in a single brain region emerges in a hierarchically lower brain area. "I became interested in this work," said Dr. Hirabayashi, "because I was impressed by the elaborate neuronal circuitry in the early visual system, which is well studied, and I wanted to explore the circuitry underlying higher-order visual processing, which is not yet fully understood." 
Hirabayashi and colleagues analyzed nerve cell pairs in cortical areas TE and 36, the latter of which is hierarchically higher, in two adult macaques. After these animals viewed six sets of paired stimuli over several months to learn to associate related objects (a process that can give rise to pair-coding neurons in the brain), the researchers recorded neuron responses in areas TE and 36 of both animals as they again performed this task. The neurons exhibited pair association, but not where the researchers would have expected. "The most surprising result," said senior author Dr. Yasushi Miyashita, "was that the neuronal circuit that generated pair-association was found only in area TE, not in area 36." Indeed, based on previous studies, which indicated that the number of pair-coding neurons in area TE is much smaller than in area 36, the researchers would have expected the opposite.

During their study, Miyashita and other team members observed that in region TE of the macaque cortex, unit 1 neurons (or source neurons) provided input to unit 2 neurons (or target neurons), which -- unlike unit 1 neurons -- responded to both members of a stimulus pair. "The representations generated in area TE did not reflect a mere random fluctuation of response patterns," explained Dr. Miyashita, "but rather, they emerged as a result of circuit processing inherent to that area of the brain." In area 36, meanwhile, members of neuron pairs behaved differently; on average, unit 1 as well as unit 2 neurons responded to both members of a stimulus pair. Neurons in area 36 received input from area TE, but only from its unit 2 neurons. Taken together, these findings led the authors to hypothesize a hierarchical relationship between regions TE and 36, in which paired associations first established in the former region are propagated to the latter. Here, area 36 represents the next level of a so-called feed-forward hierarchy. 
The work by Hirabayashi and colleagues suggests that the detailed representations of objects commonly observed in the brain are attained not by a buildup of representations within a single area, but by the emergence of these representations in a hierarchically prior area and their subsequent transfer to the brain region that follows. There, they become sufficiently prevalent for the brain to register. The work also reveals that the brain activity involved in recreating visual stimuli emerges in a hierarchically lower brain area than previously thought.

Moving forward, the Japanese research team plans to expand upon this research, continuing to contribute to studies worldwide that aim to give scientists the best possible tools with which to obtain a dynamic picture of the brain. As a next step, the team hopes to further elucidate interactions between the various cortical microcircuits that operate in memory encoding. Dr. Miyashita has conjectured that these microcircuits are manipulated by a global brain network; using the results of this latest study, he and colleagues are poised to evaluate this assumption further. "It will also be important to weave the neuronal circuit mechanisms into a unified framework," said Dr. Hirabayashi, "and to examine the effects of learning on these circuit organizations." Equipped with their new view of cortical processing, the team also hopes to trace the causal chain of memory retrieval across different areas of the cortex. "I am excited by the recent development of genetic tools that will allow us to do this," said Dr. Miyashita. A better understanding of how object representations pass from one area of the brain to the next will shed even greater light on elusive aspects of this hierarchical organ.

Tuesday, May 14, 2013

Brain Frontal Lobes Not Sole Center of Human Intelligence, Comparative Research Suggests

May 13, 2013 — Human intelligence cannot be explained by the size of the brain's frontal lobes, say researchers. Research into the comparative size of the frontal lobes in humans and other species has determined that they are not -- as previously thought -- disproportionately enlarged relative to other areas of the brain, according to the most accurate and conclusive study of this area of the brain.

It concludes that the size of our frontal lobes cannot solely account for humans' superior cognitive abilities.

The study by Durham and Reading universities suggests that supposedly more 'primitive' areas, such as the cerebellum, were equally important in the expansion of the human brain. These areas may therefore play unexpectedly important roles in human cognition and its disorders, such as autism and dyslexia, say the researchers.

The study is published in the Proceedings of the National Academy of Sciences (PNAS) today.

The frontal lobes are an area in the brain of mammals located at the front of each cerebral hemisphere, and are thought to be critical for advanced intelligence.

Lead author Professor Robert Barton from the Department of Anthropology at Durham University, said: "Probably the most widespread assumption about how the human brain evolved is that size increase was concentrated in the frontal lobes.
"It has been thought that frontal lobe expansion was particularly crucial to the development of modern human behaviour, thought and language, and that it is our bulging frontal lobes that truly make us human. We show that this is untrue: human frontal lobes are exactly the size expected for a non-human brain scaled up to human size.

"This means that areas traditionally considered to be more primitive were just as important during our evolution. These other areas should now get more attention. In fact there is already some evidence that damage to the cerebellum, for example, is a factor in disorders such as autism and dyslexia."

The scientists argue that many of our high-level abilities are carried out by more extensive brain networks linking many different areas of the brain. They suggest it may be the structure of these extended networks more than the size of any isolated brain region that is critical for cognitive functioning.

Previously, various studies have been conducted to try to establish whether humans' frontal lobes are disproportionately enlarged compared to their size in other primates such as apes and monkeys. These have resulted in a confused picture, with the use of different methods and measurements leading to inconsistent findings.

Wednesday, May 8, 2013

Using Anticholinergics for as Few as 60 Days Causes Memory Problems in Older Adults

May 7, 2013 — Research from the Regenstrief Institute, the Indiana University Center for Aging Research and Wishard-Eskenazi Health on medications commonly taken by older adults has found that drugs with strong anticholinergic effects cause cognitive impairment when taken continuously for as few as 60 days. A similar impact can be seen with 90 days of continuous use when taking multiple drugs with weak anticholinergic effect.

The study of 3,690 older adults is among the first to explore how length of use of this group of drugs affects the brain. The study is available online in advance of publication in a print issue of Alzheimer's & Dementia, the journal of the Alzheimer's Association. The research was funded by a grant (R24MH080827) from the National Institute on Aging.

Anticholinergic drugs block acetylcholine, a nervous system neurotransmitter. Drugs with anticholinergic effects are sold over the counter and by prescription. Older adults commonly use over-the-counter drugs with anticholinergic effects as sleep aids and to relieve bladder leakage. Drugs with anticholinergic effects are frequently prescribed for many chronic diseases including hypertension, cardiovascular disease and chronic obstructive pulmonary disease.

A list of drugs noting their anticholinergic burden can be found on the Aging Brain Care website.

The Regenstrief Institute, IU Center for Aging Research and Wishard-Eskenazi Health researchers reported that continuously taking strong anticholinergics, like many sleeping pills or antihistamines, for only 60 days caused memory problems and other indicators of mild cognitive impairment. Taking multiple drugs with weaker anticholinergic effects, such as many common over-the-counter digestive aids, had a negative impact on cognition in 90 days.

"We found that a high anticholinergic burden -- either from one or multiple drugs -- plus two to three months of continuous exposure to that high burden approximately doubled the risk of developing cognitive impairment," said Noll Campbell, Pharm.D., study co-author and Regenstrief Institute investigator. "Millions of older adults are taking sleeping pills or prescription drugs year after year that may be impacting their organizational abilities and memory."
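The "burden" arithmetic Dr. Campbell describes can be sketched in a few lines. This is a toy illustration only: the drug names and per-drug scores below are hypothetical stand-ins for a validated clinical instrument, not the study's actual scoring.

```python
# Toy illustration of cumulative anticholinergic burden scoring.
# Per-drug scores (1 = weak, 3 = strong) are hypothetical examples.
ACB_SCORES = {
    "diphenhydramine": 3,  # strong anticholinergic (e.g., many sleep aids)
    "oxybutynin": 3,       # strong (bladder leakage)
    "ranitidine": 1,       # weak (digestive aid)
    "loratadine": 1,       # weak (antihistamine)
}

def total_burden(regimen):
    """Sum the anticholinergic scores of all drugs a patient takes."""
    return sum(ACB_SCORES.get(drug, 0) for drug in regimen)

def high_burden(regimen, threshold=3):
    """One strong drug, or several weak ones, can reach the same high burden."""
    return total_burden(regimen) >= threshold

# A single strong drug crosses the threshold; two weak ones do not,
# but adding a third drug can.
print(high_burden(["diphenhydramine"]))           # True
print(high_burden(["ranitidine", "loratadine"]))  # False
```

The point of the sketch is that burden is cumulative across a regimen, which mirrors the study's finding that multiple weak anticholinergics can match the effect of one strong drug.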

Dr. Campbell is also an IU Center for Aging Research scientist, a research assistant professor in the Department of Pharmacy Practice, Purdue University College of Pharmacy, and a clinical pharmacy specialist in geriatrics with Wishard-Eskenazi Health Services.

"While the link between anticholinergics and cognitive impairment has been reported by our group and others, the cumulative burden of anticholinergics was rather unexpected, as was the lack of a clear association between anticholinergic burden and dementia," said Regenstrief Institute investigator Malaz Boustani, M.D., MPH. Dr. Boustani, the senior author of the study, is also associate director of the IU Center for Aging Research and an associate professor of medicine at IU School of Medicine. He sees patients at the Healthy Aging Brain Center at Wishard-Eskenazi Health.

"The fact that taking anticholinergics is linked with mild cognitive impairment, involving memory loss without functional disability, but not with Alzheimer's disease and other dementing disorders, gives hope. Our research efforts will now focus on whether anticholinergic-induced cognitive impairment may be reversible," Dr. Boustani said.


Turning Alzheimer's Fuzzy Signals Into High Definition

May 7, 2013 — Scientists at the Virginia Tech Carilion Research Institute have discovered how the predominant class of Alzheimer's pharmaceuticals might sharpen the brain's performance.

One factor even more important than the size of a television screen is the quality of the signal it displays. Having a life-sized projection of Harry Potter dodging a Bludger in a Quidditch match is of little use if the details are lost to pixelation.

The importance of transmitting clear signals, however, is not confined to the airwaves. The same creed applies to the electrical impulses navigating a human brain. Now, new research has shown that one of the few drugs approved for the treatment of Alzheimer's disease helps patients by clearing up the signals coming in from the outside world.

The discovery was made by a team of researchers led by Rosalyn Moran, an assistant professor at the Virginia Tech Carilion Research Institute. Her study indicates that cholinesterase inhibitors -- a class of drugs that stop the breakdown of the neurotransmitter acetylcholine -- allow signals to enter the brain with more precision and less background noise.

"Increasing the levels of acetylcholine appears to turn your fuzzy, old analog TV signal into a shiny, new, high-definition one," said Moran, who holds an appointment as an assistant professor in the Virginia Tech College of Engineering. "And the drug does this in the sensory cortices. These are the workhorses of the brain, the gatekeepers, not the more sophisticated processing regions -- such as the prefrontal cortex -- where one may have expected the drugs to have their most prominent effect."

Alzheimer's disease affects more than 35 million people worldwide -- a number expected to double every 20 years, leading to more than 115 million cases by 2050. Of the five pharmaceuticals approved to treat the disease by the U.S. Food and Drug Administration, four are cholinesterase inhibitors. Although it is clear that the drugs increase the amount of acetylcholine in the brain, why this improves Alzheimer's symptoms has been unknown. If scientists understood the mechanisms and pathways responsible for improvement, they might be able to tailor better drugs to combat the disease, which costs more than $200 billion annually in the United States alone.

In the new study, Moran recruited 13 healthy young adults and gave them doses of galantamine, one of the cholinesterase inhibitors commonly prescribed to Alzheimer's patients. Two electroencephalogram (EEG) recordings were made -- one with the drug and one without -- as the participants listened to a series of modulating tones while focusing on a simple concentration task.

The researchers were looking for differences in neural activity between the two drug states in response to surprising changes in the sound patterns that the participants were hearing.

The scientists compared the results with computer models built on the Free Energy Principle, a leading Bayesian brain theory that describes the basic rules of neuronal communication and explains the creation of complex networks.

The theory hypothesizes that neurons seek to reduce uncertainty, which can be modeled and calculated using free energy. Connecting tens of thousands of neurons behaving in this manner produces the probability machine that we call a brain.
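As a rough intuition for how this uncertainty reduction might relate to the drug's effect, one can write the core move of such theories as a precision-weighted prediction-error update. The sketch below is a drastic simplification invented for illustration, not the study's actual model: assigning higher precision to a sensory channel (as acetylcholine is proposed to do) lets observations correct the internal estimate faster.

```python
# A drastically simplified sketch of precision-weighted prediction error.
# Higher sensory precision lets noisy observations correct the brain's
# internal estimate faster; all numbers here are illustrative.
def update_belief(belief, observation, precision, learning_rate=0.1):
    """Move the internal estimate toward the observation,
    scaled by how much the sensory channel is trusted."""
    prediction_error = observation - belief
    return belief + learning_rate * precision * prediction_error

def settle(observations, precision, belief=0.0):
    """Run the update over a stream of observations."""
    for obs in observations:
        belief = update_belief(belief, obs, precision)
    return belief

signal = [1.0] * 20  # a steady external signal
low = settle(signal, precision=0.2)   # "fuzzy" channel
high = settle(signal, precision=1.0)  # "high-definition" channel
print(high > low)  # the high-precision estimate tracks the signal more closely
```

After 20 steps the high-precision estimate sits much nearer the true signal, which is the "fuzzy analog to high definition" intuition in miniature.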

Moran and her colleagues compiled 10 computer simulations based on the different effects that the drugs could have on the brain. The model that best fit the results revealed that the low-level stages early in the brain's processing stream were the ones benefiting from the drugs, producing clearer, more precise signals.

"When people take these drugs you can imagine the brain bathed in them," Moran said. "But what we found is that the drugs don't have broad-stroke impacts on brain activity. Instead, they are working very specifically at the cortex's entry points, gating the signals coming into the network in the first place."


Tuesday, April 9, 2013

Distorted Thinking in Gambling Addiction: What Are the Cognitive and Neural Mechanisms?

Apr. 8, 2013 — Fascinating new studies into brain activity and behavioural responses have highlighted the overlap between pathological gambling and drug addiction. The research, which is presented at the British Neuroscience Association Festival of Neuroscience (BNA2013), has implications for both the treatment and prevention of problem gambling.

Dr Luke Clark, a senior lecturer at the University of Cambridge (UK), told the meeting that neurocognitive tests of impulsivity and compulsivity, and also positron emission tomography (PET) imaging of the brain have started to show how gambling becomes addictive in pathological gamblers -- people whose gambling habit has spiralled out of control and become a problem.

"Around 70% of the British population will gamble occasionally, but for some of these people, it will become a problem," he said. "Our work has been seeking to understand the changes in decision-making that happen in people with gambling problems. It represents the first large scale study of individuals seeking treatment for gambling problems in the UK, at a time when this disorder is being re-classified alongside drug addiction as the first 'behavioural addiction'. Given the unique legislation around gambling from country to country, it is vital that we understand gambling at a national level. For example, 40% of the problem gamblers at the National Problem Gambling Clinic report that the game they have a problem with is roulette on Fixed Odds Betting Terminals; this kind of gambling machine is peculiar to the British gambling landscape."

In a collaboration between the University of Cambridge and Dr Henrietta Bowden-Jones, director of the UK's only specialist gambling clinic in the Central and North West London NHS Trust, Dr Clark and his colleagues compared the brains and behaviours of 86 male pathological gamblers with those of 45 healthy men without a gambling problem.

"We approach gambling within the framework of addiction, where we think that problematic gambling arises from a combination of individual risk factors, such as genetics, and features of the games themselves. To study individual factors, we have been testing gamblers at the National Problem Gambling Clinic on neurocognitive tests of impulsivity and compulsivity, and we have also measured their dopamine levels using PET imaging," said Dr Clark.

The tests showed that problem gamblers had increased impulsivity, similar to people with alcohol and drug addictions, but there was less evidence of compulsivity. Levels of dopamine -- a neurotransmitter involved in signalling between nerve cells and which is implicated in drug addiction -- showed differences in the more impulsive gamblers.

"Previous PET research has shown that people with drug addiction have reduced dopamine receptors. We predicted the same effect in pathological gamblers, but we did not see any group differences between the pathological gamblers and healthy men. Nevertheless, the problem gamblers do show some individual differences in their dopamine function, related to their levels of impulsivity: more impulsive gamblers showed fewer dopamine receptors," said Dr Clark. "These studies highlight the overlap between pathological gambling and drug addiction.

"To study the properties of the games themselves and how they relate to problem gambling, we have focussed on two psychological distortions that occur across many forms of gambling: 'near-miss' outcomes (where a loss looks similar or 'close' to a jackpot win) and the 'gambler's fallacy' (for example, believing that a run of heads means that a tail is 'due', in a game of chance). In one important discovery, we were the first lab to show that gambling 'near-misses' recruit brain regions that overlap with those recruited in gambling 'wins'. These responses may cause 'near-misses' to maintain gambling play despite their objective status as losses."
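The gambler's fallacy is easy to check numerically: in a fair game, a run of heads leaves the chance of a tail unchanged. A quick simulation, purely illustrative and not part of the study:

```python
import random

# Simulate fair coin flips and check whether a tail is any more likely
# after a run of three heads -- the gambler's fallacy says it should be.
random.seed(42)
flips = [random.choice("HT") for _ in range(200_000)]

# Collect every outcome that immediately follows three heads in a row.
after_run = [flips[i] for i in range(3, len(flips))
             if flips[i - 3:i] == ["H", "H", "H"]]
tail_rate = after_run.count("T") / len(after_run)
print(round(tail_rate, 2))  # ~0.5: the run of heads changes nothing
```

The tail rate stays at roughly one half, so a tail is never "due", which is exactly the expectancy the problem gamblers distort.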

Dr Clark said that these findings had implications for both prevention and treatment. "Gambling distortions like the 'near-miss' effect may be amenable to both psychological therapies for problem gambling, and also by drug treatments that may act on the underlying brain systems. By understanding the styles of thinking that characterise the problem gambler, we may also be able to improve education about gambling in teenagers and young adults, to reduce the number of people developing a gambling problem."

The researchers also found a striking demonstration of the underlying brain regions that are involved in gambling when they studied the gambling behaviour of patients who had experienced brain injury due to a tumour or stroke.

"We have seen that two gambling distortions -- the 'gambler's fallacy' and the 'near-miss' effect -- that are evident in the general population, and which appear to be increased in problem gamblers, are actually abolished in patients with damage to the insula region of the brain," he said. "This suggests that in the healthy brain, the insula may be a critical area in generating these distorted expectancies during gambling play, and that interventions to reduce insula activity may have treatment potential.

"The insula is quite a mysterious part of the brain, tucked deep inside the lateral fissure. It is important in processing pain and, more broadly, in representing the state of the body in the brain, and it is striking that gambling is a very visceral, exciting activity. Our ongoing neuroimaging work will look at the relationship between responses in the insula and the body during our gambling tests."

Future work will investigate the styles of thinking that are in evidence when the problem gamblers at the National Problem Gambling Clinic play the simplified games the researchers have developed. "This is the first study to directly look at whether these biases are more pronounced in problem gamblers. We are also starting to recruit the siblings of problem gamblers (those who do not have a gambling problem themselves) in order to look at underlying vulnerability factors," concluded Dr Clark.

This research is funded by grants from the UK's Medical Research Council, and involves further collaboration with researchers at Imperial College London and the University of Oxford.



Non-Invasive Mapping Helps to Localize Language Centers Before Brain Surgery

Apr. 8, 2013 — A new functional magnetic resonance imaging (fMRI) technique may provide neurosurgeons with a non-invasive tool to help in mapping critical areas of the brain before surgery, reports a study in the April issue of Neurosurgery, official journal of the Congress of Neurological Surgeons.


Evaluating brain fMRI responses to a "single, short auditory language task" can reliably localize critical language areas of the brain -- in healthy people as well as patients requiring brain surgery for epilepsy or tumors, according to the new research by Melanie Genetti, PhD, and colleagues of Geneva University Hospitals, Switzerland.

Brief fMRI Task for Functional Brain Mapping
The researchers designed and evaluated a quick and simple fMRI task for use in functional brain mapping. Functional MRI can show brain activity in response to stimuli (in contrast to conventional brain MRI, which shows anatomy only). Before neurosurgery for severe epilepsy or brain tumors, functional brain mapping provides essential information on the location of critical brain areas governing speech and other functions.

The standard approach to brain mapping is direct electrocortical stimulation (ECS) -- recording brain activity from electrodes placed on the brain surface. However, this requires several hours of testing and may not be applicable in all patients. Previous studies have compared fMRI techniques with ECS, but mainly for determining the side of language function (lateralization) rather than the precise location (localization).

The new fMRI task was developed and evaluated in 28 healthy volunteers and in 35 patients undergoing surgery for brain tumors or epilepsy. The test used a brief, eight-minute auditory language stimulus in which the patients heard a series of sense and nonsense sentences.

Functional MRI scans were obtained to localize the brain areas activated by the language task -- activated areas would "light up," reflecting increased oxygenation. A subgroup of patients also underwent ECS, the results of which were compared to fMRI.

Non-invasive Test Accurately Localizes Critical Brain Areas

Based on responses to the language stimulus, fMRI showed activation of the anterior and posterior (front and rear) language areas of the brain in about 90 percent of subjects -- neurosurgery patients as well as healthy volunteers. Functional MRI activation was weaker and the language centers more spread-out in the patient group. These differences may have reflected brain adaptations to slow-growing tumors or longstanding epilepsy.

Five of the epilepsy patients also underwent ECS using brain electrodes, the results of which agreed well with the fMRI findings. Two patients had temporary problems with language function after surgery. In both cases, the deficits were related to surgery or complications (bleeding) in the language area identified by fMRI.

Functional brain mapping is important for planning for complex neurosurgery procedures. It provides a guide for the neurosurgeon to navigate safely to the tumor or other diseased area, while avoiding damage to critical areas of the brain. An accurate, non-invasive approach to brain mapping would provide a valuable alternative to the time-consuming ECS procedure.

"The proposed fast fMRI language protocol reliably localized the most relevant language areas in individual subjects," Dr. Genetti and colleagues conclude. In its current state, the new test probably isn't suitable as the only approach to planning surgery -- too many areas "light up" with fMRI, which may limit the surgeon's ability to perform more extensive surgery with necessary confidence. The researchers add, "Rather than a substitute, our current fMRI protocol can be considered as a valuable complementary tool that can reliably guide ECS in the surgical planning of epileptogenic foci and of brain tumors."



Monday, March 25, 2013

Spatial Memory: Mapping Blank Spots in the Cheeseboard Maze


Mar. 21, 2013 — IST Austria Professor Jozsef Csicsvari, together with collaborators, has succeeded in uncovering processes by which the formation of spatial memory is manifested in a map representation.

During learning, novel information is transformed into memory through the processing and encoding of information in neural circuits. In a recent publication in Neuron, IST Austria Professor Jozsef Csicsvari, together with his collaborator David Dupret at the University of Oxford, and Joseph O'Neill, postdoc in Csicsvari's group, uncovered a novel role for inhibitory interneurons in the rat hippocampus during the formation of spatial memory.

During spatial learning, space is represented in the hippocampus through plastic changes in the connections between neurons. Jozsef Csicsvari and his collaborators investigate spatial learning in rats using the cheeseboard maze apparatus. This apparatus contains many holes, some of which are selected to hide food in order to test spatial memory. During learning trials, animals learn where the rewards are located, and after a period of sleep, the researchers test whether the animal can recall these reward locations. In previous work, they and others have shown that memory of space is encoded in the hippocampus through changes in the firing of excitatory pyramidal cells, the so-called "place cells."

A place cell fires when the animal arrives at a particular location. Normally, place cells always fire at the same place in an environment; however, during spatial learning the place of their firing can change to encode where the reward is found, forming memory maps.

In their new publication, the researchers investigated the timescale of map formation, showing that during spatial learning, pyramidal neuron maps representing previous and new reward locations "flicker," with both firing patterns occurring. At first, old and new maps fluctuate, as the animal is unsure whether the location change is transient or long-lasting. At a later stage, the new map, and with it the relevant new information, dominates.
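One way to picture this flickering, purely as an illustration, is to score each moment's population firing vector against templates for the old and new maps and report whichever correlates better. All the vectors below are invented toy data; real analyses work on recorded spike trains.

```python
import numpy as np

# Toy sketch of map "flickering": score a momentary population firing
# vector against old-map and new-map templates and report which wins.
rng = np.random.default_rng(0)
old_map = rng.random(50)  # template firing rates, old reward layout
new_map = rng.random(50)  # template firing rates, new reward layout

def expressed_map(firing):
    """Return 'old' or 'new' depending on which template the
    current population vector correlates with more strongly."""
    r_old = np.corrcoef(firing, old_map)[0, 1]
    r_new = np.corrcoef(firing, new_map)[0, 1]
    return "old" if r_old > r_new else "new"

# Early in learning the activity resembles the old map; later, the new one.
noisy = lambda m: m + rng.normal(0, 0.1, m.size)
print(expressed_map(noisy(old_map)), expressed_map(noisy(new_map)))
```

Flickering, in this caricature, is simply the classifier's answer alternating between "old" and "new" from one moment to the next before settling on "new".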

The scientists also investigated the contribution of inhibitory interneuron circuits to learning. They show that these interneurons, which are extensively interconnected with pyramidal cells, change their firing rates during map formation and flickering: some interneurons fire more often when the new pyramidal map fires, while others fire less often with the new map. These changes in interneuron firing were observed only during learning, not during sleep or recall. The scientists also show that the changes in firing rate are due to map-specific changes in the connections between pyramidal cells and interneurons. When a pyramidal cell is part of a new map, the strengthening of its connection with an interneuron causes an increase in the firing of that interneuron. Conversely, when a pyramidal cell is not part of a new map, the weakening of the connection causes a decrease in the interneuron's firing rate. Both the increase and the decrease in firing rate can be beneficial for learning, allowing the regulation of plasticity between pyramidal cells and controlling the timing of their firing.

The new research therefore shows that it is not only excitatory neurons that modify their behaviour and exhibit plastic connection changes during learning, but also the inhibitory interneuron circuits. The researchers suggest that inhibitory interneurons could be involved in map selection -- helping one map dominate and take over during learning, so that the relevant information is encoded.


Sunday, March 3, 2013

Changes in Patterns of Brain Activity Predict Fear Memory Formation

Mar. 1, 2013 — Psychologists at the University of Amsterdam (UvA) have discovered that changes in patterns of brain activity during fearful experiences predict whether a long-term fear memory is formed. The research results have recently been published in the scientific journal Nature Neuroscience.

Researchers Renee Visser MSc, Dr Steven Scholte, Tinka Beemsterboer MSc and Prof. Merel Kindt discovered that they can predict future fear memories by looking at patterns of brain activity during fearful experiences. Until now, there was no way of predicting fear memory. It was also unclear whether the selection of information to be stored in long-term memory occurs at the time of fear learning or after the event.

Picture predicts pain stimulus
During magnetic resonance imaging (MRI) of the brain, participants saw neutral pictures of faces and houses, some of which were followed by a small electric shock. In this way, the participants formed fear memories: they showed fear responses when shown the pictures that had been paired with shocks. This fear response can be measured in the brain, but is also evident from increased pupil dilation when someone sees the picture. After a few weeks, the participants returned to the lab and were shown the same images, and brain activity and pupil diameter were once again measured. The extent to which the pupil dilated on seeing the images that had previously been followed by a shock was taken as an expression of the previously formed fear memory.


Pattern Analysis
To analyse the fMRI data, the researchers used Multi-Voxel Pattern Analysis (MVPA), which examines spatial patterns of brain activity. By correlating the patterns evoked by different stimulus presentations, it is possible to measure the extent to which two stimuli share a neural representation. Images that have nothing in common, such as houses and faces, showed increasing neural pattern similarity when they predicted danger, but not when they did not; this convergence was accompanied by stronger fear responses. The extent to which this occurs is an indication of fear memory formation: the stronger the response during learning, the stronger the fear response in the long term.
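The core of the MVPA computation, correlating voxel patterns across stimulus presentations, can be sketched briefly. The voxel patterns below are invented toy data, not the study's recordings; real analyses operate on preprocessed fMRI responses.

```python
import numpy as np

# Toy multi-voxel pattern similarity, the core operation of MVPA.
# Each vector is one stimulus's (invented) voxel activity pattern.
rng = np.random.default_rng(7)
shared = rng.random(100)                     # a common "danger" component
house_ctrl = rng.random(100)                 # house, never shocked
face_ctrl = rng.random(100)                  # face, never shocked
house_cs = shared + rng.normal(0, 0.2, 100)  # house that predicts shock
face_cs = shared + rng.normal(0, 0.2, 100)   # face that predicts shock

def pattern_similarity(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Patterns of shock-predicting stimuli converge; control patterns do not.
print(pattern_similarity(house_cs, face_cs) >
      pattern_similarity(house_ctrl, face_ctrl))  # True
```

In the toy data the shock-predicting house and face share a common component, so their patterns correlate strongly, while the control patterns stay near zero correlation, which is the "increasing neural pattern similarity" effect in miniature.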

These findings may lead to greater insights into the formation of emotional memory. As a result, it is possible to conduct experimental research into the mechanisms that strengthen, weaken or even erase fear memory in a more direct fashion, without having to wait until the fear memory is expressed.

The research is part of the Vici project of Prof. Merel Kindt, which is funded by the Netherlands Organisation for Scientific Research (NWO).



Friday, February 15, 2013

Biological Aging, Seen in Women With Alzheimer's Risk Factor, Blocked by Hormone Therapy


Feb. 13, 2013 — Healthy menopausal women carrying a well-known genetic risk factor for Alzheimer's disease showed measurable signs of accelerated biological aging, a new study has found.
However, in carriers who started hormone therapy at menopause and remained on that therapy, this acceleration was absent, the researchers said. Hormone therapy for non-carriers of the risk factor, a gene variant called ApoE4, had no protective effect on their biological aging.

"This shows that ApoE4 is contributing to aging at the cellular level well before any outward symptoms of decline become apparent," said Natalie Rasgon, MD, PhD, professor of psychiatry and behavioral sciences at the Stanford University School of Medicine and director of the Stanford Center for Neuroscience in Women's Health. "Yet, estrogen appears to have a protective effect for middle-aged women who are carrying this genetic risk factor."

All people carry two copies of a gene called ApoE. (One copy is inherited from each parent). Like genes for eye or hair color, ApoE comes in more than one version. Some 15 to 20 percent of Americans carry at least one copy of ApoE4, a version that puts them at substantially increased risk for late-onset Alzheimer's disease in comparison with people who are not ApoE4 carriers.

Rasgon is the senior author of a study involving 70 relatively well-educated, high-functioning women. It was published online Feb. 13 in PLOS ONE. First author Emily Jacobs, PhD, is a postdoctoral fellow at Harvard Medical School. When the work took place, Jacobs was associated with the lab of another study co-author, Elissa Epel, PhD, associate professor of psychiatry at the University of California-San Francisco.

"We know from numerous studies that ApoE4 is a major genetic risk factor for cognitive decline, Alzheimer's disease and early mortality," Jacobs said. "We wanted to see whether an accelerated rate of biological aging explained this risk."

Another co-author of the study is Elizabeth Blackburn, PhD, professor of biochemistry and biophysics at UCSF, who won the Nobel Prize in 2009 for her work elucidating the mechanism by which intracellular features called telomeres act as biological clocks.

Telomeres are repeated sequences of alternating chemical units of DNA that cap the ends of each chromosome in every cell of all living creatures from fungi to humans. Their function is analogous to that of the plastic caps ringing the ends of a shoelace: They stabilize chromosomes, keeping them from unraveling and preventing other damage, too. But telomeres themselves are not perfectly stable. Cell division, along with bouts of oxidative stress or inflammation, causes them to shorten. If they reach a point at which chromosomal integrity is challenged, this could give rise to cancer or other malfunctions in the cell housing the challenged chromosomes. Evolution has engineered protective mechanisms into such cells so that they die or, at least, lose their ability to divide further. But this evolutionary emergency brake has its downside: It contributes to the slow but steady deterioration that manifests visibly in our aging skin and, less visibly, in all the other bodily organs.

Using telomere shortening as an index of biological aging, the investigators drew blood samples from almost 70 healthy women, most of them between the ages of 45 and 65, who had been on hormone therapy since menopause. These women were randomly divided into two groups. One group remained on hormones, while the second group discontinued therapy.

Blood samples from the volunteers were taken when they first entered the study and again two years later. Jacobs, Rasgon and their colleagues separated white blood cells from each sample, extracted the cells' DNA and measured the length of each woman's telomeres at both time points. Then they calculated the change in telomere length that had taken place over the two-year period.

"Telomere length is relatively easy to measure in blood cells, and it's an emerging marker of biological aging," said Jacobs. "It predicts the incidence of age-related diseases and mortality."

Among the many other assessments the researchers made on these women was their ApoE status. They found that ApoE4 carriers' telomeres were six times as likely as those of non-carriers to undergo significant shortening within the two-year study window. On average, the telomeres of ApoE4 carriers had shortened by an amount that, based on other studies of healthy women, would be expected to take roughly a decade.

However, hormone therapy effectively zeroed out ApoE4's negative influence on telomere length over time. Carriers who remained on this regimen showed no evidence of telomere shortening.

"Our take-home findings from this study were, first, that ApoE4 carriers are at greater risk of biological aging, which is associated with negative health outcomes and, second, that if you were a postmenopausal ApoE4 carrier, being on estrogen therapy was a good thing for telomere length, an established measure of biological aging at the cellular level," Rasgon said. "This brings us a step closer to being able to identify which women will benefit the most from estrogen replacement therapy."

In 2002, one arm of a large-scale longitudinal trial of women examining hormone therapy was halted due to an unexpected increase in adverse cardiovascular events among women on the therapy. The ensuing publicity resulted in women abandoning the regimen in droves. But the trial subjects among whom these ill effects occurred were women who had begun estrogen treatment years after reaching menopause. Subsequent studies have demonstrated that women who start treatment at menopause or soon afterward may experience some benefit.

Rasgon noted that in addition to timing and ApoE status, the type of estrogen formulation used may prove to be an important determinant of hormone therapy's health impact. She said she expects to publish other work soon concerning the differential effects of different formulations.

Rasgon's graduate student Heather Kenna was another Stanford co-author of the study, which was funded by National Institutes of Health grants (AG22008, RR-00070) and the Robert Wood Johnson Foundation Health and Society Scholars Program.

Thursday, January 24, 2013

Parkinson's Treatment Can Trigger Creativity: Patients Treated With Dopamine-Enhancing Drugs Are Developing Artistic Talents

Jan. 14, 2013 — Parkinson's experts across the world have been reporting a remarkable phenomenon -- many patients treated with drugs to increase the activity of dopamine in the brain as a therapy for motor symptoms such as tremors and muscle rigidity are developing new creative talents, including painting, sculpting, writing, and more.


Prof. Rivka Inzelberg of Tel Aviv University's Sackler Faculty of Medicine first noticed the trend in her own Sheba Medical Center clinic when the usual holiday presents from patients -- typically chocolates or similar gifts -- took a surprising turn. "Instead, patients started bringing us art they had made themselves," she says.

Inspired by the discovery, Prof. Inzelberg sought out evidence of this rise in creativity in current medical literature. Bringing together case studies from around the world, she examined the details of each patient to uncover a common underlying factor -- all were being treated with either synthetic precursors of dopamine or dopamine receptor agonists, which increase the amount of dopamine activity in the brain by stimulating receptors. Her report will be published in the journal Behavioral Neuroscience.

Giving in to artistic impulse
Dopamine is involved in several neurological systems, explains Prof. Inzelberg. Its main purpose is to aid in the transmission of motor commands, which is why a lack of dopamine in Parkinson's patients is associated with tremors and difficulty coordinating their movements.

But it's also involved in the brain's "reward system" -- the satisfaction or happiness we experience from an accomplishment. This is the system which Prof. Inzelberg predicts is associated with increasing creativity. Dopamine and artistry have long been connected, she points out, citing the example of Vincent Van Gogh, who suffered from psychosis. It's possible that his creativity was the result of this psychosis, which is thought to be caused by spontaneous spiking of dopamine levels in the brain.

There are seemingly no limits to the types of artistic work for which patients develop talents, observes Prof. Inzelberg. Cases include an architect who began to draw and paint human figures after treatment, and a patient who, after treatment, became a prize-winning poet though he had never been involved in the arts before.

It's possible that these patients are expressing latent talents they never had the courage to demonstrate before, she suggests. Dopamine-inducing therapies are also connected to a loss of impulse control, and sometimes result in behaviors like excessive gambling or obsessional hobbies. An increase in artistic drive could be linked to this lowering of inhibitions, allowing patients to embrace their creativity. Some patients have even reported a connection between their artistic sensibilities and medication dose, noting that they feel they can create more freely when the dose is higher.


Therapeutic value
Prof. Inzelberg believes that such artistic expressions have promising therapeutic potential, both psychologically and physiologically. Her patients report being happier when they are busy with their art, and have noted that motor handicaps can lessen significantly. One such patient is usually wheelchair-bound or dependent on a walker, but creates intricate wooden sculptures that have been displayed in galleries. External stimuli can sometimes bypass motor issues and foster normal movement, she explains. Similar types of art therapy are already used for dementia and stroke patients to help mitigate the loss of verbal communication skills, for example.


The next step is to characterize the patients who become more creative under treatment by comparing them to patients who do not experience a growth in artistic output. "We want to screen patients under treatment for creativity and impulsivity to see if we can identify what is unique in those who do become more creative," says Prof. Inzelberg. She also believes that such research could provide valuable insights into creativity in healthy populations.