Monday, August 15, 2016

Science Proves Chocolate Cake is a Health Food

Is chocolate cake the new breakfast of champions? We sure hope so. (Photo: Africa Studio/Shutterstock)

Chocolate cake for breakfast? Research says it's good for both your brain and your waistline

We all know breakfast is the most important meal of the day. Here's why it should also be the sweetest.
By Jaime Bender  |  Tuesday, February 23, 2016
File this one under "studies we would definitely volunteer for": new research says eating chocolate regularly can actually improve brain function.
Yes, that sweet, sticky treat you seem to crave at the most inopportune times is now being associated with a host of cognitive benefits, including memory and abstract reasoning. It's all part of a long-term, large-scale study out of Syracuse University in New York that measured the effects of chocolate consumption on 968 people aged 23 to 98, without changing their overall dietary habits.
"Habitual chocolate intake was related to cognitive performance, measured with an extensive battery of neuropsychological tests," the researchers wrote. "More frequent chocolate consumption was significantly associated with better performance on [these tests]."
We're willing to bet that's not the first time you've heard about a study touting the benefits of chocolate on your health. A few years ago, researchers at Tel Aviv University in Israel reported that eating chocolate in the morning – yes, every morning – was found to help people lose weight, despite long-held beliefs that chocolate is one of those occasional splurge foods that dieters must resist in order to achieve their weight-loss goals.
The biggest takeaway of this research, according to study leader Dr. Daniela Jakubowicz, is that eating a higher-calorie breakfast reduces cravings throughout the day and prevents late-night snacking.
"When you wake up, your brain needs energy immediately," said Jakubowicz, whose book "The Big Breakfast Diet" became a bestseller. "This is the time of the day when your body converts food into energy. Later in the day, when you eat, your body and brain are still in high-alert mode, saving the energy from food as fat reserve. This is how you gain weight even eating less."
So what kind of breakfast does she suggest? Breakfast with dessert, of course. Jakubowicz said that in her study, people who were given a 600-calorie breakfast that included dessert as well as proteins and carbohydrates lost more weight than people who were given a 300-calorie breakfast but ate more later in the day.
What is it about chocolate that's so beneficial? Experts point to flavonoids, compounds commonly found in plant-based foods that make up as much as 20 percent of the compounds present in cocoa beans. High levels of flavonoids are also found in tea, red wine and fruits such as grapes and apples.
So next time you're thinking about that chocolate cake looking all lonesome on your counter, sleep on it – and indulge in the morning. Your brain – and your waistline – might thank you.
___________________
SNL, apparently a science program ahead of its time:

Monday, May 9, 2016

Media and Science via John Oliver


A question: why isn't science making sure the message is accurate? A more fundamental question: is science, and the questions it asks, actively shaped by media? Oliver's question about whether science is BS, with its answer of "clearly no," is really only a bit of conjecture. There must be a BS continuum, after all.

Saturday, April 9, 2016

"After all, that’s how science works, isn’t it?"

The Sugar Conspiracy
In 1972, a British scientist sounded the alarm that sugar – and not fat – was the greatest danger to our health. But his findings were ridiculed and his reputation ruined. How did the world’s top nutrition scientists get it so wrong for so long?
by Ian Leslie Thursday 7 April 2016 from The Guardian
Robert Lustig is a paediatric endocrinologist at the University of California who specialises in the treatment of childhood obesity. A 90-minute talk he gave in 2009, titled Sugar: The Bitter Truth, has now been viewed more than six million times on YouTube. In it, Lustig argues forcefully that fructose, a form of sugar ubiquitous in modern diets, is a “poison” culpable for America’s obesity epidemic.
A year or so before the video was posted, Lustig gave a similar talk to a conference of biochemists in Adelaide, Australia. Afterwards, a scientist in the audience approached him. Surely, the man said, you’ve read Yudkin. Lustig shook his head. John Yudkin, said the scientist, was a British professor of nutrition who had sounded the alarm on sugar back in 1972, in a book called Pure, White and Deadly.
“If only a small fraction of what we know about the effects of sugar were to be revealed in relation to any other material used as a food additive,” wrote Yudkin, “that material would promptly be banned.” The book did well, but Yudkin paid a high price for it. Prominent nutritionists combined with the food industry to destroy his reputation, and his career never recovered. He died, in 1995, a disappointed, largely forgotten man.
Perhaps the Australian scientist intended a friendly warning. Lustig was certainly putting his academic reputation at risk when he embarked on a high-profile campaign against sugar. But, unlike Yudkin, Lustig is backed by a prevailing wind. We read almost every week of new research into the deleterious effects of sugar on our bodies. In the US, the latest edition of the government’s official dietary guidelines includes a cap on sugar consumption. In the UK, the chancellor George Osborne has announced a new tax on sugary drinks. Sugar has become dietary enemy number one.
This represents a dramatic shift in priority. For at least the last three decades, the dietary arch-villain has been saturated fat. When Yudkin was conducting his research into the effects of sugar, in the 1960s, a new nutritional orthodoxy was in the process of asserting itself. Its central tenet was that a healthy diet is a low-fat diet. Yudkin led a diminishing band of dissenters who believed that sugar, not fat, was the more likely cause of maladies such as obesity, heart disease and diabetes. But by the time he wrote his book, the commanding heights of the field had been seized by proponents of the fat hypothesis. Yudkin found himself fighting a rearguard action, and he was defeated.
Not just defeated, in fact, but buried. When Lustig returned to California, he searched for Pure, White and Deadly in bookstores and online, to no avail. Eventually, he tracked down a copy after submitting a request to his university library. On reading Yudkin’s introduction, he felt a shock of recognition.
“Holy crap,” Lustig thought. “This guy got there 35 years before me.”

In 1980, after long consultation with some of America’s most senior nutrition scientists, the US government issued its first Dietary Guidelines. The guidelines shaped the diets of hundreds of millions of people. Doctors base their advice on them, food companies develop products to comply with them. Their influence extends beyond the US. In 1983, the UK government issued advice that closely followed the American example.
The most prominent recommendation of both governments was to cut back on saturated fats and cholesterol (this was the first time that the public had been advised to eat less of something, rather than enough of everything). Consumers dutifully obeyed. We replaced steak and sausages with pasta and rice, butter with margarine and vegetable oils, eggs with muesli, and milk with low-fat milk or orange juice. But instead of becoming healthier, we grew fatter and sicker.
Look at a graph of postwar obesity rates and it becomes clear that something changed after 1980. In the US, the line rises very gradually until, in the early 1980s, it takes off like an aeroplane. Just 12% of Americans were obese in 1950, 15% in 1980, 35% by 2000. In the UK, the line is flat for decades until the mid-1980s, at which point it also turns towards the sky. Only 6% of Britons were obese in 1980. In the next 20 years that figure more than trebled. Today, two thirds of Britons are either obese or overweight, making this the fattest country in the EU. Type 2 diabetes, closely related to obesity, has risen in tandem in both countries.
At best, we can conclude that the official guidelines did not achieve their objective; at worst, they led to a decades-long health catastrophe. Naturally, then, a search for culprits has ensued. Scientists are conventionally apolitical figures, but these days, nutrition researchers write editorials and books that resemble liberal activist tracts, fizzing with righteous denunciations of “big sugar” and fast food. Nobody could have predicted, it is said, how the food manufacturers would respond to the injunction against fat – selling us low-fat yoghurts bulked up with sugar, and cakes infused with liver-corroding trans fats.
Nutrition scientists are angry with the press for distorting their findings, politicians for failing to heed them, and the rest of us for overeating and under-exercising. In short, everyone – business, media, politicians, consumers – is to blame. Everyone, that is, except scientists.
But it was not impossible to foresee that the vilification of fat might be an error. Energy from food comes to us in three forms: fat, carbohydrate, and protein. Since the proportion of energy we get from protein tends to stay stable, whatever our diet, a low-fat diet effectively means a high-carbohydrate diet. The most versatile and palatable carbohydrate is sugar, which John Yudkin had already circled in red. In 1974, the UK medical journal, the Lancet, sounded a warning about the possible consequences of recommending reductions in dietary fat: “The cure should not be worse than the disease.”
Still, it would be reasonable to assume that Yudkin lost this argument simply because, by 1980, more evidence had accumulated against fat than against sugar.
After all, that’s how science works, isn’t it?

If, as seems increasingly likely, the nutritional advice on which we have relied for 40 years was profoundly flawed, this is not a mistake that can be laid at the door of corporate ogres. Nor can it be passed off as innocuous scientific error. What happened to John Yudkin belies that interpretation. It suggests instead that this is something the scientists did to themselves – and, consequently, to us.
We tend to think of heretics as contrarians, individuals with a compulsion to flout conventional wisdom. But sometimes a heretic is simply a mainstream thinker who stays facing the same way while everyone around him turns 180 degrees. When, in 1957, John Yudkin first floated his hypothesis that sugar was a hazard to public health, it was taken seriously, as was its proponent. By the time Yudkin retired, 14 years later, both theory and author had been marginalised and derided. Only now is Yudkin’s work being returned, posthumously, to the scientific mainstream.
These sharp fluctuations in Yudkin’s stock have had little to do with the scientific method, and a lot to do with the unscientific way in which the field of nutrition has conducted itself over the years. This story, which has begun to emerge in the past decade, has been brought to public attention largely by sceptical outsiders rather than eminent nutritionists. In her painstakingly researched book, The Big Fat Surprise, the journalist Nina Teicholz traces the history of the proposition that saturated fats cause heart disease, and reveals the remarkable extent to which its progress from controversial theory to accepted truth was driven, not by new evidence, but by the influence of a few powerful personalities, one in particular.
Teicholz’s book also describes how an establishment of senior nutrition scientists, at once insecure about its medical authority and vigilant for threats to it, consistently exaggerated the case for low-fat diets, while turning its guns on those who offered evidence or argument to the contrary. John Yudkin was only its first and most eminent victim.
Today, as nutritionists struggle to comprehend a health disaster they did not predict and may have precipitated, the field is undergoing a painful period of re-evaluation. It is edging away from prohibitions on cholesterol and fat, and hardening its warnings on sugar, without going so far as to perform a reverse turn. But its senior members still retain a collective instinct to malign those who challenge its tattered conventional wisdom too loudly, as Teicholz is now discovering.

To understand how we arrived at this point, we need to go back almost to the beginning of modern nutrition science.
On 23 September 1955, US President Dwight Eisenhower suffered a heart attack. Rather than pretend it hadn’t happened, Eisenhower insisted on making details of his illness public. The next day, his chief physician, Dr Paul Dudley White, gave a press conference at which he instructed Americans on how to avoid heart disease: stop smoking, and cut down on fat and cholesterol. In a follow-up article, White cited the research of a nutritionist at the University of Minnesota, Ancel Keys.
Heart disease, which had been a relative rarity in the 1920s, was now felling middle-aged men at a frightening rate, and Americans were casting around for cause and cure. Ancel Keys provided an answer: the “diet-heart hypothesis” (for simplicity’s sake, I am calling it the “fat hypothesis”). This is the idea, now familiar, that an excess of saturated fats in the diet, from red meat, cheese, butter, and eggs, raises cholesterol, which congeals on the inside of coronary arteries, causing them to harden and narrow, until the flow of blood is staunched and the heart seizes up.
Ancel Keys was brilliant, charismatic, and combative. A friendly colleague at the University of Minnesota described him as “direct to the point of bluntness, critical to the point of skewering”; others were less charitable. He exuded conviction at a time when confidence was most welcome. The president, the physician and the scientist formed a reassuring chain of male authority, and the notion that fatty foods were unhealthy started to take hold with doctors, and the public. (Eisenhower himself cut saturated fats and cholesterol from his diet altogether, right up until his death, in 1969, from heart disease.)
Many scientists, especially British ones, remained sceptical. The most prominent doubter was John Yudkin, then the UK’s leading nutritionist. When Yudkin looked at the data on heart disease, he was struck by its correlation with the consumption of sugar, not fat. He carried out a series of laboratory experiments on animals and humans, and observed, as others had before him, that sugar is processed in the liver, where it turns to fat, before entering the bloodstream.
He noted, too, that while humans have always been carnivorous, carbohydrates only became a major component of their diet 10,000 years ago, with the advent of mass agriculture. Sugar – a pure carbohydrate, with all fibre and nutrition stripped out – has been part of western diets for just 300 years; in evolutionary terms, it is as if we have, just this second, taken our first dose of it. Saturated fats, by contrast, are so intimately bound up with our evolution that they are abundantly present in breast milk. To Yudkin’s thinking, it seemed more likely to be the recent innovation, rather than the prehistoric staple, making us sick.
John Yudkin was born in 1910, in the East End of London. His parents were Russian Jews who settled in England after fleeing the pogroms of 1905. Yudkin’s father died when he was six, and his mother brought up her five sons in poverty. By way of a scholarship to a local grammar school in Hackney, Yudkin made it to Cambridge. He studied biochemistry and physiology, before taking up medicine. After serving in the Royal Army Medical Corps during the second world war, Yudkin was made a professor at Queen Elizabeth College in London, where he built a department of nutrition science with an international reputation.
Ancel Keys was intensely aware that Yudkin’s sugar hypothesis posed an alternative to his own. If Yudkin published a paper, Keys would excoriate it, and him. He called Yudkin’s theory “a mountain of nonsense”, and accused him of issuing “propaganda” for the meat and dairy industries. “Yudkin and his commercial backers are not deterred by the facts,” he said. “They continue to sing the same discredited tune.” Yudkin never responded in kind. He was a mild-mannered man, unskilled in the art of political combat.
That made him vulnerable to attack, and not just from Keys. The British Sugar Bureau dismissed Yudkin’s claims about sugar as “emotional assertions”; the World Sugar Research Organisation called his book “science fiction”. In his prose, Yudkin is fastidiously precise and undemonstrative, as he was in person. Only occasionally does he hint at how it must have felt to have his life’s work besmirched, as when he asks the reader, “Can you wonder that one sometimes becomes quite despondent about whether it is worthwhile trying to do scientific research in matters of health?”
Throughout the 1960s, Keys accumulated institutional power. He secured places for himself and his allies on the boards of the most influential bodies in American healthcare, including the American Heart Association and the National Institutes of Health. From these strongholds, they directed funds to like-minded researchers, and issued authoritative advice to the nation. “People should know the facts,” Keys told Time magazine. “Then if they want to eat themselves to death, let them.”
This apparent certainty was unwarranted: even some supporters of the fat hypothesis admitted that the evidence for it was still inconclusive. But Keys held a trump card. From 1958 to 1964, he and his fellow researchers gathered data on the diets, lifestyles and health of 12,770 middle-aged men, in Italy, Greece, Yugoslavia, Finland, the Netherlands, Japan and the United States. The Seven Countries Study was finally published as a 211-page monograph in 1970. It showed a correlation between intake of saturated fats and deaths from heart disease, just as Keys had predicted. The scientific debate swung decisively behind the fat hypothesis.
Keys was the original big data guy (a contemporary remarked: “Every time you question this man Keys, he says, ‘I’ve got 5,000 cases. How many do you have?’”). Despite its monumental stature, however, the Seven Countries Study, which was the basis for a cascade of subsequent papers by its original authors, was a rickety construction. There was no objective basis for the countries chosen by Keys, and it is hard to avoid the conclusion that he picked only those he suspected would support his hypothesis. After all, it is quite something to choose seven nations in Europe and leave out France and what was then West Germany, but then, Keys already knew that the French and Germans had relatively low rates of heart disease, despite living on a diet rich in saturated fats.
The study’s biggest limitation was inherent to its method. Epidemiological research involves the collection of data on people’s behaviour and health, and a search for patterns. The method was originally developed to study infection; Keys and his successors adapted it to the study of chronic diseases, which, unlike most infections, take decades to develop and are entangled with hundreds of dietary and lifestyle factors, effectively impossible to separate.
To reliably identify causes, as opposed to correlations, a higher standard of evidence is required: the controlled trial. In its simplest form: recruit a group of subjects, and assign half of them a diet for, say, 15 years. At the end of the trial, assess the health of those in the intervention group, versus the control group. This method is also problematic: it is virtually impossible to closely supervise the diets of large groups of people. But a properly conducted trial is the only way to conclude with any confidence that X is responsible for Y.
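To make that distinction concrete, here is a minimal sketch in Python, with entirely invented numbers, of why the two methods can disagree. It gives "disease risk" one true cause (smoking), gives diet no effect at all, and then lets diet choice track smoking in the observational data, the way real dietary and lifestyle factors entangle. The observational comparison reports a gap that looks causal; the randomised comparison does not.

```python
import random

random.seed(1)

# Invented numbers throughout: "risk" has one true cause (smoking),
# and the high-fat diet has no effect at all.
def risk(smokes: bool) -> float:
    return random.gauss(10, 2) + (5 if smokes else 0)

def mean(xs):
    return sum(xs) / len(xs)

# Observational study: diet is chosen, not assigned, and here the choice
# happens to track smoking (a confounder), as lifestyle factors do.
obs = []
for _ in range(10_000):
    smokes = random.random() < 0.5
    high_fat = random.random() < (0.8 if smokes else 0.2)
    obs.append((high_fat, risk(smokes)))
print("observational gap:",
      round(mean([r for hf, r in obs if hf]) - mean([r for hf, r in obs if not hf]), 2))

# Controlled trial: diet is assigned by coin flip, so smoking is balanced
# between the groups and the spurious gap disappears.
trial = []
for _ in range(10_000):
    smokes = random.random() < 0.5
    high_fat = random.random() < 0.5
    trial.append((high_fat, risk(smokes)))
print("randomised gap:   ",
      round(mean([r for hf, r in trial if hf]) - mean([r for hf, r in trial if not hf]), 2))
```

The observational pass reports a gap of about 3 points that is entirely an artifact of the confound; randomisation makes it vanish.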
Although Keys had shown a correlation between heart disease and saturated fat, he had not excluded the possibility that heart disease was being caused by something else. Years later, the Seven Countries study’s lead Italian researcher, Alessandro Menotti, went back to the data, and found that the food that correlated most closely with deaths from heart disease was not saturated fat, but sugar.
By then it was too late. The Seven Countries study had become canonical, and the fat hypothesis was enshrined in official advice. The congressional committee responsible for the original Dietary Guidelines was chaired by Senator George McGovern. It took most of its evidence from America’s nutritional elite: men from a handful of prestigious universities, most of whom knew or worked with each other, all of whom agreed that fat was the problem – an assumption that McGovern and his fellow senators never seriously questioned. Only occasionally were they asked to reconsider. In 1973, John Yudkin was called from London to testify before the committee, and presented his alternative theory of heart disease.
A bemused McGovern asked Yudkin if he was really suggesting that a high fat intake was not a problem, and that cholesterol presented no danger.
“I believe both those things,” replied Yudkin.
“That is exactly the opposite of what my doctor told me,” said McGovern.

In a 2015 paper titled Does Science Advance One Funeral at a Time?, a team of scholars at the National Bureau of Economic Research sought an empirical basis for a remark made by the physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
The researchers identified more than 12,000 “elite” scientists from different fields. The criteria for elite status included funding, number of publications, and whether they were members of the National Academies of Science or the Institute of Medicine. Searching obituaries, the team found 452 who had died before retirement. They then looked to see what happened to the fields from which these celebrated scientists had unexpectedly departed, by analysing publishing patterns.
What they found confirmed the truth of Planck’s maxim. Junior researchers who had worked closely with the elite scientists, authoring papers with them, published less. At the same time, there was a marked increase in papers by newcomers to the field, who were less likely to cite the work of the deceased eminence. The articles by these newcomers were substantive and influential, attracting a high number of citations. They moved the whole field along.
A scientist is part of what the Polish philosopher of science Ludwik Fleck called a “thought collective”: a group of people exchanging ideas in a mutually comprehensible idiom. The group, suggested Fleck, inevitably develops a mind of its own, as the individuals in it converge on a way of communicating, thinking and feeling.
This makes scientific inquiry prone to the eternal rules of human social life: deference to the charismatic, herding towards majority opinion, punishment for deviance, and intense discomfort with admitting to error. Of course, such tendencies are precisely what the scientific method was invented to correct for, and over the long run, it does a good job of it. In the long run, however, we’re all dead, quite possibly sooner than we would be if we hadn’t been following a diet based on poor advice.

In a series of densely argued articles and books, including Why We Get Fat (2010), the science writer Gary Taubes has assembled a critique of contemporary nutrition science, powerful enough to compel the field to listen. One of his contributions has been to uncover a body of research conducted by German and Austrian scientists before the second world war, which had been overlooked by the Americans who reinvented the field in the 1950s. The Europeans were practising physicians and experts in the metabolic system. The Americans were more likely to be epidemiologists, labouring in relative ignorance of biochemistry and endocrinology (the study of hormones). This led to some of the foundational mistakes of modern nutrition.
The rise and slow fall of cholesterol’s infamy is a case in point. After it was discovered inside the arteries of men who had suffered heart attacks, public health officials, advised by scientists, put eggs, whose yolks are rich in cholesterol, on the danger list. But it is a biological error to confuse what a person puts in their mouth with what it becomes after it is swallowed. The human body, far from being a passive vessel for whatever we choose to fill it with, is a busy chemical plant, transforming and redistributing the energy it receives. Its governing principle is homeostasis, or the maintenance of energy equilibrium (when exercise heats us up, sweat cools us down). Cholesterol, present in all of our cells, is created by the liver. Biochemists had long known that the more cholesterol you eat, the less your liver produces.
Unsurprisingly, then, repeated attempts to prove a correlation between dietary cholesterol and blood cholesterol failed. For the vast majority of people, eating two or three, or 25 eggs a day, does not significantly raise cholesterol levels. One of the most nutrient-dense, versatile and delicious foods we have was needlessly stigmatised. The health authorities have spent the last few years slowly backing away from this mistake, presumably in the hope that if no sudden movements are made, nobody will notice. In a sense, they have succeeded: a survey carried out in 2014 by Credit Suisse found that 54% of US doctors believe that dietary cholesterol raises blood cholesterol.
To his credit, Ancel Keys realised early on that dietary cholesterol was not a problem. But in order to sustain his assertion that cholesterol causes heart attacks, he needed to identify an agent that raises its levels in the blood – he landed on saturated fats. In the 30 years after Eisenhower’s heart attack, trial after trial failed to conclusively bear out the association he claimed to have identified in the Seven Countries study.
The nutritional establishment wasn’t greatly discomfited by the absence of definitive proof, but by 1993 it found that it couldn’t evade another criticism: while a low-fat diet had been recommended to women, it had never been tested on them (a fact that is astonishing only if you are not a nutrition scientist). The National Heart, Lung and Blood Institute decided to go all in, commissioning the largest controlled trial of diets ever undertaken. As well as addressing the other half of the population, the Women’s Health Initiative was expected to obliterate any lingering doubts about the ill-effects of fat.
It did nothing of the sort. At the end of the trial, it was found that women on the low-fat diet were no less likely than the control group to contract cancer or heart disease. This caused much consternation. The study’s principal researcher, unwilling to accept the implications of his own findings, remarked: “We are scratching our heads over some of these results.” A consensus quickly formed that the study – meticulously planned, lavishly funded, overseen by impressively credentialed researchers – must have been so flawed as to be meaningless. The field moved on, or rather did not.
In 2008, researchers from Oxford University undertook a Europe-wide study of the causes of heart disease. Its data shows an inverse correlation between saturated fat and heart disease, across the continent. France, the country with the highest intake of saturated fat, has the lowest rate of heart disease; Ukraine, the country with the lowest intake of saturated fat, has the highest. When the British obesity researcher Zoë Harcombe performed an analysis of the data on cholesterol levels for 192 countries around the world, she found that lower cholesterol correlated with higher rates of death from heart disease.
In the last 10 years, a theory that had somehow held up unsupported for nearly half a century has been rejected by several comprehensive evidence reviews, even as it staggers on, zombie-like, in our dietary guidelines and medical advice.
The UN’s Food and Agriculture Organisation, in a 2008 analysis of all studies of the low-fat diet, found “no probable or convincing evidence” that a high level of dietary fat causes heart disease or cancer. Another landmark review, published in 2010 in the journal of the American Society for Nutrition and authored by, among others, Ronald Krauss, a highly respected researcher and physician at the University of California, stated “there is no significant evidence for concluding that dietary saturated fat is associated with an increased risk of CHD or CVD [coronary heart disease and cardiovascular disease]”.
Many nutritionists refused to accept these conclusions. The journal that published Krauss’s review, wary of outrage among its readers, prefaced it with a rebuttal by a former right-hand man of Ancel Keys, which implied that since Krauss’s findings contradicted every national and international dietary recommendation, they must be flawed. The circular logic is symptomatic of a field with an unusually high propensity for ignoring evidence that does not fit its conventional wisdom.
Gary Taubes is a physicist by background. “In physics,” he told me, “You look for the anomalous result. Then you have something to explain. In nutrition, the game is to confirm what you and your predecessors have always believed.” As one nutritionist explained to Nina Teicholz, with delicate understatement: “Scientists believe that saturated fat is bad for you, and there is a good deal of reluctance toward accepting evidence to the contrary.”
When obesity started to become recognised as a problem in western societies, it too was blamed on saturated fats. It was not difficult to persuade the public that if we eat fat, we will be fat (this is a trick of the language: we call an overweight person “fat”; we don’t describe a person with a muscular body as “proteiny”). The scientific rationale was also pleasingly simple: a gramme of fat has twice as many calories as a gramme of protein or carbohydrate, and we can all grasp the idea that if a person takes in more calories than she expends in physical activity, the surplus ends up as fat.
Simple does not mean right, of course. It’s difficult to square this theory with the dramatic rise in obesity since 1980, or with much other evidence. In America, average calorific intake increased by just a sixth over that period. In the UK, it actually fell. There has been no commensurate decline in physical activity, in either country – in the UK, exercise levels have increased over the last 20 years. Obesity is a problem in some of the poorest parts of the world, even among communities in which food is scarce. Controlled trials have repeatedly failed to show that people lose weight on low-fat or low-calorie diets, over the long-term.
Those prewar European researchers would have regarded the idea that obesity results from “excess calories” as laughably simplistic. Biochemists and endocrinologists are more likely to think of obesity as a hormonal disorder, triggered by the kinds of foods we started eating a lot more of when we cut back on fat: easily digestible starches and sugars. In his new book, Always Hungry, David Ludwig, an endocrinologist and professor of pediatrics at Harvard Medical School, calls this the “Insulin-Carbohydrate” model of obesity. According to this model, an excess of refined carbohydrates interferes with the self-balancing equilibrium of the metabolic system.
Far from being an inert dumping ground for excess calories, fat tissue operates as a reserve energy supply for the body. Its calories are called upon when glucose is running low – that is, between meals, or during fasts and famines. Fat takes instruction from insulin, the hormone responsible for regulating blood sugar. Refined carbohydrates break down at speed into glucose in the blood, prompting the pancreas to produce insulin. When insulin levels rise, fat tissue gets a signal to suck energy out of the blood, and to stop releasing it. So when insulin stays high for unnaturally long, a person gains weight, gets hungrier, and feels fatigued. Then we blame them for it. But, as Gary Taubes puts it, obese people are not fat because they are overeating and sedentary – they are overeating and sedentary because they are fat, or getting fatter.
Ludwig makes clear, as Taubes does, that this is not a new theory – John Yudkin would have recognised it – but an old one that has been galvanised by new evidence. What he does not mention is the role that supporters of the fat hypothesis have played, historically, in demolishing the credibility of those who proposed it.
In 1972, the same year Yudkin published Pure, White and Deadly, a Cornell-trained cardiologist called Robert Atkins published Dr Atkins’ Diet Revolution. Their arguments shared a premise – that carbohydrates are more dangerous to our health than fat – though they differed in particulars. Yudkin focused on the evils of one carbohydrate in particular, and didn’t explicitly recommend a high-fat diet. Atkins argued that a high-fat, low-carbohydrate diet was the only viable route to weight loss.
Perhaps the most important difference between the two books was tone. Yudkin’s was cool, polite and reasonable, which reflected his temperament, and the fact that he saw himself as a scientist first and a clinician second. Atkins, resolutely a practitioner rather than an academic, was unbound by gentlemanly conventions. He declared himself furious that he had been “duped” by medical scientists. Unsurprisingly, this attack enraged the nutritional establishment, which hit back hard. Atkins was labelled a fraud, and his diet a “fad”. It was a successful campaign: even today, Atkins’s name brings with it the odour of quackery.
A “fad” implies something new-fangled. But low-carbohydrate, high-fat diets had been popular for well over a century before Atkins, and were, until the 1960s, a method of weight loss endorsed by mainstream science. By the start of the 1970s, that had changed. Researchers interested in the effects of sugar and complex carbohydrates on obesity only had to look at what had happened to the most senior nutritionist in the UK to see that pursuing such a line of inquiry was a terrible career move.
John Yudkin’s scientific reputation had been all but sunk. He found himself uninvited from international conferences on nutrition. Research journals refused his papers. He was talked about by fellow scientists as an eccentric, a lone obsessive. Eventually, he became a scare story. Sheldon Reiser, one of the few researchers to continue working on the effects of refined carbohydrates and sugar through the 1970s, told Gary Taubes in 2011: “Yudkin was so discredited. He was ridiculed in a way. And anybody else who said something bad about sucrose [sugar], they’d say, ‘He’s just like Yudkin.’”
If Yudkin was ridiculed, Atkins was a hate figure. Only in the last few years has it become acceptable to study the effects of Atkins-type diets. In 2014, in a trial funded by the US National Institutes of Health, 150 men and women were assigned a diet for one year which limited either the amount of fat or carbs they could eat, but not the calories. By the end of the year, the people on the low carbohydrate, high fat diet had lost about 8lb more on average than the low-fat group. They were also more likely to lose weight from fat tissue; the low-fat group lost some weight too, but it came from the muscles. The NIH study is the latest of more than 50 similar studies, which together suggest that low-carbohydrate diets are better than low-fat diets for achieving weight loss and controlling type 2 diabetes. As a body of evidence, it is far from conclusive, but it is as consistent as any in the literature.
The 2015 edition of the US Dietary Guidelines (they are revised every five years) makes no reference to any of this new research, because the scientists who advised the committee – the most eminent and well-connected nutritionists in the country – neglected to include a discussion of it in their report. It is a gaping omission, inexplicable in scientific terms, but entirely explicable in terms of the politics of nutrition science. If you are seeking to protect your authority, why draw attention to evidence that seems to contradict the assertions on which that authority is founded? Allow a thread like that to be pulled, and a great unravelling might begin.
It may already have done. Last December, the scientists responsible for the report received a humiliating rebuke from Congress, which passed a measure proposing a review of the way the advice informing the guidelines is compiled. It referred to “questions … about the scientific integrity of the process”. The scientists reacted angrily, accusing the politicians of being in thrall to the meat and dairy industries (given how many of the scientists depend on research funding from food and pharmaceutical companies, this might be characterised as audacious).
Some scientists agree with the politicians. David McCarron, a research associate at the Department of Nutrition at the University of California-Davis, told the Washington Post: “There’s a lot of stuff in the guidelines that was right 40 years ago but that has been disproved. Unfortunately, sometimes, the scientific community doesn’t like to backtrack.” Steven Nissen, chairman of cardiovascular medicine at the Cleveland Clinic, was blunter, calling the new guidelines “an evidence-free zone”.
The congressional review has come about partly because of Nina Teicholz. Since her book was published, in 2014, Teicholz has become an advocate for better dietary guidelines. She is on the board of the Nutrition Coalition, a body funded by the philanthropists John and Laura Arnold, the stated purpose of which is to help ensure that nutrition policy is grounded in good science.
In September last year she wrote an article for the BMJ (formerly the British Medical Journal), which makes the case for the inadequacy of the scientific advice that underpins the Dietary Guidelines. The response of the nutrition establishment was ferocious: 173 scientists – some of whom were on the advisory panel, and many of whose work had been critiqued in Teicholz’s book – signed a letter to the BMJ, demanding it retract the piece.
Publishing a rejoinder to an article is one thing; requesting its erasure is another, conventionally reserved for cases involving fraudulent data. As a consultant oncologist for the NHS, Santhanam Sundar, pointed out in a response to the letter on the BMJ website: “Scientific discussion helps to advance science. Calls for retraction, particularly from those in eminent positions, are unscientific and frankly disturbing.”
The letter lists “11 errors”, which on close reading turn out to range from the trivial to the entirely specious. I spoke to several of the scientists who signed the letter. They were happy to condemn the article in general terms, but when I asked them to name just one of the supposed errors in it, not one of them was able to. One admitted he had not read it. Another told me she had signed the letter because the BMJ should not have published an article that was not peer reviewed (it was peer reviewed). Meir Stampfer, a Harvard epidemiologist, asserted that Teicholz’s work is “riddled with errors”, while declining to discuss them with me.
Reticent as they were to discuss the substance of the piece, the scientists were noticeably keener to comment on its author. I was frequently and insistently reminded that Teicholz is a journalist, and not a scientist, and that she had a book to sell, as if this settled the argument. David Katz, of Yale, one of the members of the advisory panel, and an indefatigable defender of the orthodoxies, told me that Teicholz’s work “reeks of conflict of interest” without specifying what those conflicts were. (Dr Katz is the author of four diet books.)
Dr Katz does not pretend that his field has been right on everything – he admitted to changing his own mind, for example, on dietary cholesterol. But he returned again and again to the subject of Teicholz’s character. “Nina is shockingly unprofessional … I have been in rooms filled with the who’s who of nutrition and I have never seen such unanimous revulsion as when Miss Teicholz’s name comes up. She is an animal unlike anything I’ve ever seen before.” Despite requests, he cited no examples of her unprofessional behaviour. (The vitriol poured over Teicholz is rarely dispensed to Gary Taubes, though they make fundamentally similar arguments.)
In March this year, Teicholz was invited to participate in a panel discussion on nutrition science at the National Food Policy conference, in Washington DC, only to be promptly disinvited, after her fellow panelists made it clear that they would not share a platform with her. The organisers replaced her with the CEO of the Alliance for Potato Research and Education.

One of the scientists who called for the retraction of Nina Teicholz’s BMJ article, who requested that our conversation be off the record, complained that the rise of social media has created a “problem of authority” for nutrition science. “Any voice, however mad, can gain ground,” he told me.
It is a familiar complaint. By opening the gates of publishing to all, the internet has flattened hierarchies everywhere they exist. We no longer live in a world in which elites of accredited experts are able to dominate conversations about complex or contested matters. Politicians cannot rely on the aura of office to persuade, newspapers struggle to assert the superior integrity of their stories. It is not clear that this change is, overall, a boon for the public realm. But in areas where experts have a track record of getting it wrong, it is hard to see how it could be worse. If ever there was a case that an information democracy, even a very messy one, is preferable to an information oligarchy, then the history of nutrition advice is it.
In the past, we only had two sources of nutritional authority: our doctor and government officials. It was a system that worked well as long as the doctors and officials were informed by good science. But what happens if that cannot be relied on?
The nutritional establishment has proved itself, over the years, skilled at ad hominem takedowns, but it is harder for them to do to Robert Lustig or Nina Teicholz what they once did to John Yudkin. Harder, too, to deflect or smother the charge that the promotion of low-fat diets was a 40-year fad, with disastrous outcomes, conceived of, authorised, and policed by nutritionists.
Professor John Yudkin retired from his post at Queen Elizabeth College in 1971, to write Pure, White and Deadly. The college reneged on a promise to allow him to continue to use its research facilities. It had hired a fully committed supporter of the fat hypothesis to replace him, and it was no longer deemed politic to have a prominent opponent of it on the premises. The man who had built the college’s nutrition department from scratch was forced to ask a solicitor to intervene. Eventually, a small room in a separate building was found for Yudkin.
When I asked Lustig why he was the first researcher in years to focus on the dangers of sugar, he answered: “John Yudkin. They took him down so severely – so severely – that nobody wanted to attempt it on their own.”
Ian Leslie, the author of Curious: the Desire to Know and Why Your Future Depends On It, is a regular contributor to the Long Read. Twitter: @mrianleslie

Wednesday, March 30, 2016

Return to blogging.



In a discussion of the article "Caught Red-Handed - Exxon Has Been Funding Climate Change Denial For 30+ Years," I said this piece of insolence:

The problem (that to me is never understood) is that scientists are roundly funded by corporations. Science should not be trusted as the purveyor of truth. Science should be suspect whenever it says it has the truth (no matter whether you agree politically or not) because the foundation of science is in questioning assumed truths. So many past scientific "truths" have fallen over time that it is stunning to me what blind faith people continue to have in science as truth rather than as a method of testing for falsehood... which is all it appears to me to be.


Of course, then I erased it. This was Facebook, for heaven's sake.

It is time I returned to blogging. This presidential election could make anyone believe the real world was not so real. 

I will continue my essays against "truth" whether it be religious or scientific here, and hope to revive my other blogs as well. My project to create one blog of edited blog entries from all of them can wait a while longer.

Sorry I have been away.





Friday, March 11, 2016

How can so many scientists have been wrong?




Everything Is Crumbling

An influential psychological theory, borne out in hundreds of experiments, may have just been debunked. How can so many scientists have been so wrong?




(Photo illustration: Lisa Larson-Walker)
Nearly 20 years ago, psychologists Roy Baumeister and Dianne Tice, a married couple at Case Western Reserve University, devised a foundational experiment on self-control. “Chocolate chip cookies were baked in the room in a small oven,” they wrote in a paper that has been cited more than 3,000 times. “As a result, the laboratory was filled with the delicious aroma of fresh chocolate and baking.”



By Daniel Engber, a columnist for Slate

In the history of psychology, there has never been a more important chocolate-y aroma.
Here’s how that experiment worked. Baumeister and Tice stacked their fresh-baked cookies on a plate, beside a bowl of red and white radishes, and brought in a parade of student volunteers. They told some of the students to hang out for a while unattended, eating only from the bowl of radishes, while another group ate only cookies. Afterward, each volunteer tried to solve a puzzle, one that was designed to be impossible to complete.
Baumeister and Tice timed the students in the puzzle task, to see how long it took them to give up. They found that the ones who’d eaten chocolate chip cookies kept working on the puzzle for 19 minutes, on average—about as long as people in a control condition who hadn’t snacked at all. The group of kids who noshed on radishes flubbed the puzzle test. They lasted just eight minutes before they quit in frustration.
The authors called this effect “ego depletion” and said it revealed a fundamental fact about the human mind: We all have a limited supply of willpower, and it decreases with overuse. Eating a radish when you’re surrounded by fresh-baked cookies represents an epic feat of self-denial, and one that really wears you out. Willpower, argued Baumeister and Tice, draws down mental energy—it’s a muscle that can be exercised to exhaustion.
That simple idea—perhaps intuitive for nonscientists, but revolutionary in the field—turned into a research juggernaut. In the years that followed, Baumeister and Tice’s lab, as well as dozens of others, published scores of studies using similar procedures. First, the scientists would deplete subjects’ willpower with a task that requires self-control: don’t eat the chocolate chip cookies; watch this sad movie but don’t react at all. Then, a few minutes later, they’d test them with a puzzle, a game, or something else that requires mental effort.
Psychologists discovered that lots of different tasks could drain a person’s energy and leave them cognitively depleted. Poverty-stricken day laborers in rural India might wear themselves out simply by deciding whether to purchase a bar of soap. Dogs might waste their willpower by holding back from eating chow. White people might lose mental strength when they tried to talk about racial politics with a black scientist. In 2010, a group of researchers led by Martin Hagger put out a meta-analysis of the field—a study of published studies—to find out whether this sort of research could be trusted. Using data from 83 studies and 198 separate experiments, Hagger’s team confirmed the main result. “Ego depletion” seemed to be a real and reliable phenomenon.
In 2011, Baumeister and John Tierney of the New York Times published a science-cum-self-help book based around this research. Their best-seller, Willpower: Rediscovering the Greatest Human Strength, advised readers on how the science of ego depletion could be put to use. A glass of lemonade that’s been sweetened with real sugar, they said, could help replenish someone’s inner store of self-control. And if willpower works like a muscle, then regular exercise could boost its strength. You could literally build character, Baumeister said in an interview with the Templeton Foundation, a religiously inclined science-funding organization that has given him about $1 million in grants. By that point, he told the Atlantic, the effects that he’d first begun to study in the late 1990s were established fact: “They’ve been replicated and extended in many different laboratories, so I am confident they are real,” he said.
But that story is about to change. A paper now in press, and due to publish next month in the journal Perspectives on Psychological Science, describes a massive effort to reproduce the main effect that underlies this work. Comprising more than 2,000 subjects tested at two dozen different labs on several continents, the study found exactly nothing. A zero effect for ego depletion: no sign that the human will works as it’s been described, or that these hundreds of studies amount to very much at all.
This isn’t the first time that an idea in psychology has been challenged—not by a long shot. A “reproducibility crisis” in psychology, and in many other fields, has now been well-established. A study out last summer tried to replicate 100 psychology experiments one-for-one and found that just 40 percent of those replications were successful. A critique of that study appeared just last week, claiming that the original authors made statistical errors—but that critique has itself been attacked for misconstruing facts, ignoring evidence, and indulging in some wishful thinking.
For scientists and science journalists, this back and forth is worrying. We’d like to think that a published study has more than even odds of being true. The new study of ego depletion has much higher stakes: Instead of warning us that any single piece of research might be unreliable, the new paper casts a shadow on a fully-formed research literature. Or, to put it another way: It takes aim not at the single paper but at the Big Idea.
Baumeister’s theory of willpower, and his clever means of testing it, have been borne out again and again in empirical studies. The effect has been recreated in hundreds of different ways, and the underlying concept has been verified via meta-analysis. It’s not some crazy new idea, wobbling on a pile of flimsy data; it’s a sturdy edifice of knowledge, built over many years from solid bricks.
And yet, it now appears that ego depletion could be completely bogus, that its foundation might be made of rotted-out materials. That means an entire field of study—and significant portions of certain scientists’ careers—could be resting on a false premise. If something this well-established could fall apart, then what’s next? That’s not just worrying. It’s terrifying.
* * *
Evan Carter was among the first to spot some weaknesses in the ego depletion literature. As a graduate student at the University of Miami, Carter set out to recreate the lemonade effect, first described in 2007, whereby the consumption of a sugary drink staves off the loss of willpower. “I was collecting as many subjects as I could, and we ended up having one of the largest samples in the ego-depletion literature,” Carter told me. But for all his efforts, he couldn’t make the study work. “I figured that I had gotten some bad intel on how to do these experiments,” he said.



To figure out what went wrong, Carter reviewed the 2010 meta-analysis—the study using data from 83 studies and 198 experiments. The closer he looked at the paper, though, the less he believed in its conclusions. First, the meta-analysis included only published studies, which meant the data would be subject to a standard bias in favor of positive results. Second, it included studies with contradictory or counterintuitive measures of self-control. One study, for example, suggested that depleted subjects would give more money to charity while another said depleted subjects would spend less time helping a stranger. When he and his adviser, Michael McCullough, reanalyzed the 2010 paper’s data using state-of-the-art analytic methods, they found no effect. For a second paper published last year, Carter and McCullough completed a second meta-analysis that included different studies, including 48 experiments that had never been published. Again, they found “very little evidence” of a real effect.
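The first of those objections, publication bias, is easy to see in a toy simulation. The sketch below is my own illustration of the statistical point, not Carter and McCullough's method: it runs 200 small studies of an effect that is truly zero, "publishes" only the positive results that happen to clear p < 0.05, and then averages the published effect sizes the way a naive meta-analysis would.

```python
import math
import random

random.seed(7)

def study(n=20, true_effect=0.0):
    """One small two-group study; returns (observed effect, two-sided p)."""
    a = [random.gauss(true_effect, 1) for _ in range(n)]
    b = [random.gauss(0.0, 1) for _ in range(n)]
    d = sum(a) / n - sum(b) / n              # observed difference in means
    z = d / math.sqrt(2 / n)                 # z statistic (sd is 1 by construction)
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p-value
    return d, p

studies = [study() for _ in range(200)]
published = [d for d, p in studies if p < 0.05 and d > 0]  # the file drawer keeps the rest

print("mean effect, all 200 studies: %+.2f" % (sum(d for d, _ in studies) / 200))
print("studies 'published':          %d" % len(published))
if published:
    print("meta-analysis of published:   %+.2f" % (sum(published) / len(published)))
```

The full record averages out near zero, but the published slice shows a healthy-looking effect, which is exactly what a meta-analysis restricted to published studies would inherit.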
“All of a sudden it felt like everything was crumbling,” says Carter, now 31 years old and not yet in a tenure-track position. “I basically lost my compass. Normally I could say, all right there have been 100 published studies on this, so I can feel good about it, I can feel confident. And then that just went away.”
Not everyone believed Carter and McCullough’s reappraisal of the field. The fancy methods they used to correct for publication bias were new, and not yet fully tested. Several prominent researchers in the field called their findings premature.
But by this point, there were other signs of problems in the literature. The lemonade effect, for one, seemed implausible on its face: There’s no way the brain could use enough glucose, and so quickly, that drinking a glass of lemonade would make a difference. What’s more, several labs were able to produce the same result—restoration of self-control—by having people swish the lemonade around their mouths and spit it out instead of drinking it. Other labs discovered that a subject’s beliefs and mindset could also affect whether and how her willpower was depleted.
These criticisms weren’t fatal in themselves. It could be that willpower is a finite resource, but one that we expend according to our motivations. After all, that’s how money works: A person’s buying habits might encompass lots of different factors, including how much cash she’s holding and how she feels about her finances. But given these larger questions about the nature of willpower as well as the meta-analysis debate, the whole body of research began to seem suspicious.



(Photo illustration: Lisa Larson-Walker)
In October 2014, the Association for Psychological Science announced it would try to resolve some of this uncertainty. APS would create a “Registered Replication Report”—a planned-out set of experiments, conducted by many different labs, in the hopes of testing a single study that represents an important research idea. Martin Hagger, who wrote the original 2010 meta-analysis, would serve as lead author on the project. Roy Baumeister would consult on methodology.
The replication team had to choose the specific form of its experiment: Which of the hundreds of ego-depletion studies would they try to replicate? Baumeister suggested some of his favorite experimental designs, but most turned out to be unworkable. The replication team needed tasks that could be reliably repeated in many different labs. The chocolate-chip-cookie experiment, for example, would never work. What if one lab burned the cookies? That would ruin everything!
With Baumeister’s counsel, Hagger’s team settled on a 2014 paper from researchers at the University of Michigan. That study used a standard self-control task. Subjects watched as simple words flashed on a screen: level, trouble, plastic, business, and so on. They were asked to hit a key if the word contained the letter e, but only if it was not within two spaces of another vowel (i.e., they had to hit the key for trouble but withhold their button-press for level and business). In the original study, this exercise of self-control produced a strong depletion effect. The subjects performed markedly worse on a follow-up test, also done on the computer.
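The rule is fiddly enough to be worth spelling out. Here is one plausible reading of it in Python, reconstructed from the description above rather than from the Michigan team's actual stimulus code: press the key only if the word contains an e that has no other vowel within two letters of it.

```python
VOWELS = set("aeiou")

def should_press(word: str) -> bool:
    """Press the key iff the word contains an 'e' with no other vowel
    within two letters of it (one reading of the replication task's rule)."""
    w = word.lower()
    for i, ch in enumerate(w):
        if ch != "e":
            continue
        window = [w[j] for j in range(max(0, i - 2), min(len(w), i + 3)) if j != i]
        if not any(c in VOWELS for c in window):
            return True     # this 'e' stands alone: respond
    return False            # no qualifying 'e': withhold the button-press

# The article's examples: respond to "trouble", withhold for "level" and "business".
for word in ["level", "trouble", "plastic", "business"]:
    print(word, should_press(word))
```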
The replication team ran that same experiment at 24 different labs, including ones that translated the letter e task into Dutch, German, French, and Indonesian. Just two of the research groups produced a significant, positive effect, says study co-author Michael Inzlicht of the University of Toronto. (One appeared to find a negative effect, a reverse-depletion.) Taken all together, the experiments showed no signs whatsoever of Baumeister and Tice’s original effect.



What, exactly, does that mean? At the very least, it tells us that one specific task—the letter e game—doesn’t sap a subject’s willpower, or else that the follow-up test did not adequately measure that depletion. Indeed, that’s how Baumeister himself sees the project. “I feel bad that people went through all this work all over the world and did this study and found a whole bunch of nothing,” he told me earlier this week, in a phone call from Australia. He still believes ego depletion is real. The tasks had failed, not the Big Idea.
In his lab, Baumeister told me, the letter e task would have been handled differently. First, he’d train his subjects to pick out all the words containing e, until that became an ingrained habit. Only then would he add the second rule, about ignoring words with e’s and nearby vowels. That version of the task requires much more self-control, he says.
Second, he’d have his subjects do the task with pen and paper, instead of on a computer. It might take more self-control, he suggested, to withhold a gross movement of the arm than to stifle a tap of the finger on a keyboard.
If the replication showed us anything, Baumeister says, it’s that the field has gotten hung up on computer-based investigations. “In the olden days there was a craft to running an experiment. You worked with people, and got them into the right psychological state and then measured the consequences. There’s a wish now to have everything be automated so it can be done quickly and easily online.” These days, he continues, there’s less and less actual behavior in the science of behavior. “It’s just sitting at a computer and doing readings.”
I’m more inclined than Baumeister to see this replication failure as something truly momentous. Let’s say it’s true the tasks were wrong, and that ego depletion, as it’s been described, is a real thing. If that’s the case, then the study clearly shows that the effect is not as sturdy as it seemed. One of the idea’s major selling points is its flexibility: Ego depletion applied not just to experiments involving chocolate chip cookies and radishes, but to those involving word games, conversations between white people and black people, decisions on whether to purchase soap, and even the behavior of dogs. In fact, the incredible range of the effect has often been cited in its favor. How could so many studies, performed in so many different ways, have all been wrong?
Yet now we know that ego depletion might be very fragile. It might be so sensitive to how a test is run that switching from a pen and paper to a keyboard and screen would be enough to make it disappear. If that’s the case, then why should we trust all those other variations on the theme? If that’s the case, then the Big Idea has shrunk to something very small.
The diminution of the Big Idea isn’t easy to accept, even for those willing to concede that there are major problems in their field. An ego depletion optimist might acknowledge that psychology studies tend to be too small to demonstrate a real effect, or that scientists like to futz around with their statistics until the answers come out right. (None of this implies deliberate fraud; just that sloppy standards prevail.) Still, the optimist would say, it seems unlikely that such mistakes would propagate so thoroughly throughout a single literature, and that so many noisy, spurious results could line up quite so perfectly. If all these successes came about by random chance, then it’s a miracle that they’re so consistent.
And here’s the pessimist’s counterargument: It’s easy to imagine how one bad result could lead directly to another. Ego depletion is such a bold, pervasive theory that you can test it in a thousand different ways. Instead of baking up a tray of chocolate chip cookies, you can tempt your students with an overflowing bowl of M&Ms. Instead of having subjects talk to people of another race, you can ask them to recall a time that they were victimized by racism. Different versions of the standard paradigm all produce the same effect—that’s the nature of the Big Idea. That means you can tweak the concept however you want, and however many times you need, until you’ve stumbled on a version that seems to give a positive result. But then your replication of the concept won’t always mean you have a real result. It will only show that you’ve tried a lot of different methods—that you had the willpower to stick with your hypothesis until you found an experiment that worked.
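The pessimist's arithmetic is easy to check. The following sketch, an illustration with invented parameters rather than a model of any real lab, gives each of 1,000 imaginary research programs 20 tries at an effect that is truly zero, and counts how many programs land at least one "significant" result at p < 0.05.

```python
import math
import random

random.seed(3)

def null_experiment(n=30) -> float:
    """Two-sided p-value for a two-group comparison with no true effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 1 - math.erf(abs(z) / math.sqrt(2))

VARIANTS = 20      # cookies vs M&Ms, conversations vs recollections, and so on
PROGRAMS = 1_000   # imaginary research programs, each free to keep tweaking

hits = sum(
    any(null_experiment() < 0.05 for _ in range(VARIANTS))
    for _ in range(PROGRAMS)
)
print("programs with at least one 'working' variant: %.0f%%" % (100 * hits / PROGRAMS))
# Analytically: 1 - 0.95**20, about 64 percent, even though every effect is zero.
```

Even with nothing to find, roughly two-thirds of the programs stumble on a publishable variant.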
Taken at face value, the new Registered Replication Report doesn’t invalidate everything we thought we knew about willpower. A person’s self-control can lapse, of course. We just don’t know exactly when or why. It might even be the case that Baumeister has it exactly right—that people hold a reservoir of mental strength that drains each time we use it. But the two-task method that he and Tice invented 20 years ago now appears to be in doubt. As a result, an entire literature has been rendered suspect.
“At some point we have to start over and say, This is Year One,” says Inzlicht, referring not just to the sum total of ego depletion research, but to how he sometimes feels about the entire field of social psychology.
All the old methods are in doubt. Even meta-analyses, which were once thought to yield a gold standard for evaluating bodies of research, now seem somewhat worthless. “Meta-analyses are fucked,” Inzlicht warned me. If you analyze 200 lousy studies, you’ll get a lousy answer in the end. It’s garbage in, garbage out.
Baumeister, for his part, intends to launch his own replication effort, using methods that he thinks will work. “We try to do straight, honest work, and now we have to go to square one—just to make a point that was made 20 years ago. … It’s easier to publish stuff that tears something down than it is to build something up,” he told me wearily. “It’s not an enjoyable time. It’s not much fun.”
If it’s not much fun for the people whose life’s work has been called into question, neither does it hearten skeptics in the field. “I’m in a dark place,” Inzlicht wrote on his blog earlier this week. “I feel like the ground is moving from underneath me and I no longer know what is real and what is not.”