Here’s the follow-up interview with Epidemiology Monitor. This one wasn’t available online, so what we have here is the version that the editor sent me pre-publication, which has been sitting in my files ever since. Its absence from Google suggests the possibility that it never actually made it to publication. When EpiMonitor published a book in 2000 — Epidemiology Wit & Wisdom – The Best of Epidemiology Monitor — and included the first interview, the editor(s) referred to it as “perhaps the most notorious interview we have conducted over the years.” So maybe they came to their senses and chose never to publish this one and compound the sin. Among other insights, I had an interesting idea back in 2005: fine institutions and researchers some large sum — say $50,000 — every time their research made it into the newspapers. This would be an incentive to actually write up the articles in such a way as to play down the findings and to do everything possible to prevent press releases and the media coverage that follows from them. It wouldn’t necessarily improve the quality of the science that these epidemiologists were (and seem still to be) doing, but it might lessen the collateral damage. This interview was done three years into my work on Good Calories, Bad Calories and two years before that book was published along with my re-assessment of epidemiology as an “Unhealthy Science” in The New York Times Magazine. I’ve included a few notes and fixes in brackets.
On The State of Epidemiology A Decade After Publication of “Epidemiology Faces Its Limits”–A Re-Interview With Science Writer Gary Taubes
EpiMonitor: The paper you published in Science in 1995 received a lot of attention in the epidemiology community. Would you like to comment on the attention the paper received?
Taubes: Actually what I find interesting about it is that it’s received so much more attention than either of the big articles that I did afterwards questioning the controversy over salt and blood pressure and the piece in which I discuss the controversy over fat and heart disease. Both of those were looking at the end result of the epidemiologic research. I would have expected them to get at least as much attention as the Science article on epidemiology, and they didn’t. This makes me think that people are more likely to cite your work when you’re discussing the methodology itself than when you’re arguing that the methodology was used to establish orthodox wisdom that may not be correct.
EpiMonitor: It’s an interesting question–what actions are taken from some of these articles? I did not become aware of many substantive actions that were taken to address the points that you made back in 1995.
Taubes: The ultimate point I was making is that the problem in epidemiology is really in the nature of the practice of epidemiology itself. I don’t actually know what could be done other than to somehow separate the results from the way the public accepts them.
Breast Cancer and Low-Fat Diets Article in the NY Times
Taubes: Here’s an example: Did you see the article that made the front page in The New York Times yesterday on breast cancer and low fat diet?
EpiMonitor: Yes, but I didn’t finish reading it.
Taubes: The point is this isn’t even an observational study, it’s a clinical trial, which should be more reliable (although it’s hard to tell from the newspaper accounts whether it was randomized), and yet the odds of the interpretation being dead wrong are probably still close to fifty percent. It’s a coin toss. One reason I say this is that the [Los Angeles Biomedical Research Institute] investigators who did the study conclude that low-fat diets reduce the risk of breast cancer, or at least the recurrence of breast cancer, but they have no mechanism to explain why it would work. They speculate that it might work by lowering insulin levels, but you don’t lower insulin levels by putting people on low-fat diets. You do that by putting them on low-calorie or low-carbohydrate diets. My guess is that when we get a chance to read it closely, we’ll find that the diet trial had so many biases and confounders that you wouldn’t be able to rule out alternative interpretations of the data that may be entirely different from the one that hit the front page of the New York Times. And you can argue that peer review might improve the situation, but this was a study that made the front page of the most influential newspaper in the country without benefit of peer review; it was announced at a conference in Florida, and the results were considered so newsworthy that the lack of peer review was deemed by the media to be irrelevant. And unlike some similar result in cosmology or high-energy physics, this one will directly affect behavior and health.
EpiMonitor: I do remember a quote in the story from a doctor to the effect that he has changed his mind and was going to be recommending this to his patients.
Taubes: Yes, and just by virtue of putting it on the front page of the New York Times. You know, if you’ve had breast cancer or you’ve got breast cancer running in your family, you’re going to start doing low-fat diets. I mean this will affect how people eat, and yet even the researchers themselves are saying that all they did was generate a hypothesis that now has to be tested. The other subtext to the article is that nobody has ever been able to show prior to this that low-fat diets have any effect on cancer. For instance, there was a large fiber trial funded by the National Cancer Institute to look at the effect of a very similar diet (low fat, high in fruits, vegetables, and fiber) on colon cancer, or colon polyps. It showed there was no effect whatsoever. The subtext is a series of trials that have failed. You finally have one trial that’s a success and it’s on the front page of the New York Times. Yet if you were to do a meta-analysis, you’d probably come out with no effect whatsoever.
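The meta-analysis remark can be made concrete with a quick sketch. The following is a toy fixed-effect (inverse-variance) pooling; the trials, effect sizes, and standard errors are entirely invented for illustration and are not taken from any of the studies discussed here:

```python
import math

# Hypothetical trial results as (effect estimate, standard error).
# A run of near-null results plus one modest "success."
trials = [
    (0.02, 0.05),   # small positive effect, wide error
    (-0.01, 0.04),  # essentially null
    (0.10, 0.06),   # the one headline-making trial
    (-0.03, 0.05),  # essentially null
]

def fixed_effect_pool(results):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se in results]
    pooled = sum(w * est for (est, _), w in zip(results, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

est, se = fixed_effect_pool(trials)
print(f"pooled effect = {est:.3f} +/- {se:.3f}")
```

With these invented numbers the approximate 95% interval (est ± 1.96·se) comfortably straddles zero: one headline trial pooled with several null ones gives the "no effect whatsoever" outcome described above.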
EpiMonitor: Was it just one trial they were reporting yesterday?
Taubes: Yes, one trial. A strange one at that, because there are 900-odd people in the diet group and 1,500-odd people in the control group. Again, maybe it was randomized, but it’s hard to imagine why they would choose that kind of randomization. The problem with all these studies is that when you have a diet group versus a usual-care group, the diet group gets extensive nutritional counseling. The usual-care group is just sent home to eat as they normally would, or, if they do get counseling, they get less. It’s hard to give nutritional counseling without talking about the evils of dietary fat in our society. So it’s highly unlikely that they gave the same level of nutritional counseling, and so you’re going to have an intervention effect from that kind of study. These people in the intervention group will almost assuredly change their diet in ways other than lowering fat. Plus there is also the issue of what you replace the fat with. Do you replace it with fruits and vegetables? Do you replace it with starches?
EpiMonitor: You’re basically saying that in such a study you’re not doing just one intervention. It’s not like taking a drug.
On Causality
Taubes: Exactly. It’s not like taking a drug. The interesting line in the New York Times article, which was written by two reasonably intelligent people, was that they said “the best way” to establish cause and effect was to do a clinical controlled trial. Again, in the ten years since I wrote that epidemiology story I think the major conclusion I’ve come to is that the only way to establish a cause and effect relationship is to do a clinical controlled trial. There is no other way. And that’s what science is all about, establishing cause and effect at a very fundamental level. And it can only be done if you can assure that you have only one variable that is different between the groups you’re comparing and that can only be done with a randomized clinical controlled trial and sometimes not even then.
EpiMonitor: Epidemiologists would argue that in the absence of a clinical trial, in the absence of that kind of evidence, it is possible to aggregate evidence from different sources including animal studies and epidemiologic studies. And if you can satisfy a series of criteria on causality, then you can reliably infer the existence of a cause and effect relationship.
Taubes: Yes, but the hormone replacement therapy story told us that you can have the biggest epidemiologic observational studies in the world, that you can have animal research that supports it, that you can have 30 years of anecdotal evidence, and you can put together a compelling story, and yet it can be dead wrong. This is the danger again. You’re asking me what I’ve learned. To me the hormone replacement therapy story was tremendously revealing because it tells you how easy it is to get the wrong answer. Also, once you believe something, once you have a preconception that something’s true, then investigators will now alter their behavior in that context. They will create animal models that support the preconception. It’s just the way the human brain works. So with this sort of accumulation of evidence, the catch is always that you don’t know what you’re leaving out. That’s the issue.
One of the things I discussed in the Science story in 1995 that I would change today concerns the issue of confounders, the issue of biases, and the issue of not being able to measure exposure. To me now, the overwhelming issue is confounders. Because even if you can measure exposure well, you simply don’t know what it’s associated with or what else you’re not measuring. You can’t know that. There are an infinite number of possibilities. Again, that’s what the hormone replacement therapy story told you, because that was something where it is relatively easy to measure exposure. It’s a drug. You know whether people are taking the drug or not. But what you can’t measure is what you don’t know. The confounders–
EpiMonitor: They can come back to bite you.
Taubes: Yes. It’s true that the epidemiologists will always defend what they do because, first of all, as they always tell me, this is the best they can do. But that still doesn’t answer the important question: how do you know you’re not leaving something out? Because people are changing their behavior in response to what you’re doing, it’s a very dangerous game to be playing. Again, if the interpretation of the hormone replacement trial from the Women’s Health Initiative is correct, then there are women who are dead today because of epidemiology. I don’t know if I’d want that on my track record. Once you have it on your track record, how do you establish any credibility in the future? And continuing to insist you’re right in the face of such a problem is not sufficient.
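The unknown-confounder problem can be illustrated with a toy simulation (all numbers invented): a hidden variable that nobody measures drives both the "exposure" and the "outcome," so an observational analysis sees a robust association even though neither one causes the other.

```python
import random
import statistics

random.seed(42)

# A hidden confounder -- say, general health-consciousness -- that the
# hypothetical study never measures.
n = 10_000
confounder = [random.gauss(0.0, 1.0) for _ in range(n)]

# Exposure (e.g. taking some therapy) and outcome (e.g. heart health)
# are BOTH driven by the confounder, but not by each other.
exposure = [c + random.gauss(0.0, 1.0) for c in confounder]
outcome = [c + random.gauss(0.0, 1.0) for c in confounder]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

print(f"exposure-outcome correlation: {corr(exposure, outcome):.2f}")
```

By construction the true causal effect of exposure on outcome is exactly zero, yet the correlation comes out around 0.5. Adjusting for measured covariates cannot help here, because the confounder is, by assumption, never measured.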
Activities Since 1995
EpiMonitor: It has been ten years since your article “Epidemiology Faces Its Limits” appeared in Science in July 1995. Could you say a little bit about what you have been up to in the last ten years?
Taubes: After I did the epidemiology story, I got into this whole field of public health. My specialty, my obsession, is controversial science and how difficult it is to get the right answer. Public health had these controversies. The first one that I stepped into was salt and blood pressure. I got involved in a very simple way. I was asked by Science to write a short one-page story on the results of the DASH trial, the Dietary Approaches to Stop Hypertension trial. The DASH trial used a low-fat, high-dairy diet that didn’t manipulate salt intake in any way, but seemed to lower blood pressure almost as much as drug therapy would. I started doing interviews and I got to one former president of the American Society of Hypertension [and the American Heart Association] who will go unnamed, who said that she couldn’t speak to me because they would have their funding cut off from the National Heart, Lung, and Blood Institute. Then I spoke to another fellow who said to me that there is no controversy over salt regardless of what anyone may have told me. The fact that this person was telling me there was not a controversy before it even crossed my mind that the results of the DASH trial were controversial (it was not a salt reduction trial) strongly suggested that there indeed was a controversy. So I called my editors at Science and told them that when I was done with the DASH story I was going to look into this non-existent-controversy controversy. I spent roughly a year of my life reporting this story on salt and blood pressure and coming to the conclusion that not only was it one of the most vitriolic controversies on which I’d ever had the pleasure to report, but the evidence supporting the hypothesis that salt played some causative role in hypertension was uncompelling to say the least. It was all mostly observational epidemiology.
The results were marginal and some of the data manipulation that went on to make them appear less marginal would have constituted scientific misconduct in some of the harder sciences, as far as I’m concerned.
EpiMonitor: So you did that. That took a year?
Taubes: That took a year. In the course of doing that, one of the scientists who took credit for getting Americans to eat less salt also told me that he took credit for getting Americans to eat less fat and fewer eggs. I called my editor and I said, “When I’m done with this salt story, I’m going to do a story on fat. I don’t know what the story is. But I know that if this fellow is involved, there is a story there, because he is a terrible scientist.”
My defense here, by the way, is that after 15 years of writing about controversial science, of living in laboratories where people were discovering non-existent fundamental particles and things like that, I’m pretty aware of what it takes to be a good scientist. I think I can judge a good scientist just by talking to him or her because of the way they discuss their data or how they interpret it. More than anything, they will use so many caveats and speak so skeptically that you can pretty quickly tell that this person understands what it means to be a scientist and acts accordingly. When such caveats and skepticism are lacking, you can be pretty sure that they don’t.
EpiMonitor: You mean someone who doesn’t use a lot of caveats or doesn’t use a lot of qualifiers is probably a poor scientist?
Taubes: Right, these are people who aren’t particularly skeptical of their own experiments, of their own data, and people like that will get the answer that they want to get. It’s one reason why Francis Bacon more or less invented the scientific method: to prevent us from seeing what we want to see in our data. If you’re not highly skeptical, you’re almost guaranteed to do it, because that’s what human nature is; that’s just the way we’re programmed.
So anyway, I then spent roughly a year of my life working on this fat story for Science, which came out in 2001. I then spent another year working on another story that was something of an investigation into scientific misconduct.
EpiMonitor: Where did that appear?
Taubes: That appeared in Science, although by the time it appeared the words misconduct or fraud or anything like that had been expunged from the article. I think I spent perhaps a year fighting with my editors. I can’t even mention what it was about because of the obvious legal issues involved. My editors at Science were, by that point, getting highly sensitive to the possibility of lawsuits, and for good reason. When I finally finished that, I decided that I would pursue what I considered the second half of the Science story on fat. Once you’ve concluded that the data (again, almost exclusively observational epidemiologic data) supporting the idea that fat causes heart disease, or even that saturated fat causes heart disease, are very weak, you’re left with this possibility that low-fat diets, which are high-carbohydrate diets, actually lead to obesity or weight gain. So I reported that story for the New York Times Magazine. It ended up on the cover in July of 2002. It was a controversial article; it was perceived as being a kind of defense of the Atkins Diet, which in a sense it was. It got me, for the first time in my life, a significant book advance that I thought would allow me to write a book covering everything I had covered to date in all these articles. I’ve been working on that ever since.
Working on a new book
EpiMonitor: So this book you’re working on is not about fat or salt but about bad science. Is it a broader topic?
Taubes: If I had my say, which I probably won’t, it would be called “The Alternative Hypothesis”. And the alternative hypothesis is that obesity and all the chronic diseases of civilization are caused not by overeating, sedentary behavior, and calorically dense high-fat foods, which is what the World Health Organization will tell you these days, but by refined carbohydrates. I don’t know what form the book is going to take by the time it’s published, because at the moment the draft is running to eight hundred or nine hundred pages and I can’t imagine my editors publishing that. But yes, it has become an examination of the fat story, the salt story, and the entire field of observational epidemiology, because that’s the basis for all these conclusions. It discusses all the public health issues involved.
EpiMonitor: If the salt and fat stories are erected on observational studies, is this book going to become another big black eye for epidemiology comparable to hormone replacement therapy?
Taubes: Well let me put it this way. It would be, except that there are no clinical trials out there to prove that this alternative hypothesis of refined carbohydrates and disease is right or wrong.
EpiMonitor: It has not been “calibrated” as you might phrase it. Is that correct?
Taubes: Yes, it has not been calibrated.
EpiMonitor: Is it correct to say that over the last decade since you wrote “Epidemiology Faces Its Limits” you have been quite deeply involved with continuing to look at public health and epidemiologic data?
Taubes: Yes, very much so. I’ve gone from examining the methodology itself to examining the fruits of that methodology, and I’ve never left that. When I finish the book, one thing I want to do is go back and write an updated version of my Science story, based on what I now believe is the lesson of the hormone replacement therapy story. I also want to try to get it across to my colleagues in the press and to the epidemiologists again that there are perhaps fatal flaws in their endeavor. It would indeed be harmless if the press and the public weren’t interested. But they are interested, and there’s a reasonable possibility that this entire endeavor of risk-factor epidemiology has done far more harm than good.
EpiMonitor: That’s a very, very provocative statement.
Bad Science Yesterday vs Today
EpiMonitor: As you take an overview today of the occurrence of bad science, what’s your impression about whether there is as much of it as there was when you wrote your article ten years ago? Is there more or less or about the same amount of bad science?
Taubes: Well, it’s funny, I don’t pay a great deal of attention these days. There seems to be less of the cancer-anxiety stuff going back and forth. This is just my own gut feeling from reading the papers. There seems to be such an obsession with diet now. Again, this could be my own focus. I could be mistaken. But I don’t seem to see the “hair dyes lead to cancer” stories. There was a “cell phones lead to brain cancer” story for a while. I think people just like their cell phones too much, so that one vanished surprisingly quickly. I thought that story would linger for years.
EpiMonitor: In rereading your Science article I noticed that you introduced the story by stating that there was a kind of outbreak of anxiety associated with conflicting results. That was the trigger for your story. If we say there was an outbreak or an epidemic of bad science at the time in 1995 that was partly responsible for your writing the article, what do you think has happened to that epidemic of bad science since 1995?
Taubes: Well, let me rephrase. What I said was that there was an on-going epidemic of anxiety. What I think has happened since then is that the media started focusing on the so-called obesity epidemic. I think the obesity epidemic started hitting the papers in ‘93 or ‘94. But it really started getting publicity around ‘98, and I think that has pretty much overwhelmed everything. Everything became so focused on diet that a lot of these other anxieties to some extent faded into the background; electromagnetic fields and cancer, for instance, was one that I had used as an example.
EpiMonitor: Anyway your take is that the epidemic of anxiety is not worse. It may have shifted to concerns about obesity, but again admittedly this is not something anyone can measure in any serious way.
Taubes: There may be a possibility that there are fewer of these silly studies going on than the ones we were peppered with in the 1980s. In the Science article, I did this box that ticked everyone off because it was a list of all these exposures that had been linked to cancer. My dream would be that there is less of that. There are these people who just do these terrible case-control studies and come to some meaningless conclusion about whatever they studied, and the next thing you know, it’s in the papers. But again, I haven’t seen as much of this. Now, it could be that I’m not paying attention, or that maybe the health reporters got tired of writing this stuff up, or maybe the epidemiologists are actually doing less of it. I’ll tell you one thing that has changed on the other side: the Internet. It has created this enormous demand for daily if not hourly news stories on health. So it seems as though it’s almost impossible to do a study on anything related to diet or health that will not end up in the lay press, because of the Internet.
EpiMonitor: So public interest in health news has increased you think?
Taubes: Well, certainly the number of outlets for it has increased, along with the demand to have something new every day on your website. Think about it: just go to CNN.com and there is a health section, and there has got to be something new in that health section every day. So there is this sort of extraordinary demand now for the latest study, so that people can write it up and then give advice. Along with this you’re also seeing websites that say here is a study and here is how you should perceive it; here’s how you should respond to it based on what kind of trial it was. But there is still, I think, almost no awareness, again exemplified by the low-fat-diet breast cancer story we discussed, of how unreliable these results are and why.
Thoughts on the Press
EpiMonitor: Well let’s talk then about the different “culprits” in your 1995 article. What I remember from reading that article is that the media, medical journals, research institutions, the public, and epidemiologists were condemned to play the roles they were playing because the incentives in the system were such that it was in the interest of each to sensationalize things. That was discouraging because you had to think that part of the solution was creating new and different incentives for each party, which is never easy. But let me ask you going through these one at a time—Do you see things getting better, worse, or the same with the press?
Taubes: I think I just answered that in my last answer. I think it’s worse in that there is more of it. There is a greater demand. It may be better on the level that people are slightly more attuned to possible shortcomings; the coverage has become slightly more sophisticated, at least in some outlets.
I mentioned the story in the New York Times that low-fat diets reduce the risk of breast cancer. The part of the story on the front page discussed what the results were. Then when you turn to the inside to follow the story, you find out that it’s full of caveats, and they do a wonderful job of quoting people saying why they don’t believe it. Then they end with the advice that there is still no reason not to recommend a low-fat diet, and that it can’t possibly hurt. Then they have a sidebar with an example of a low-fat diet. So what they’ve basically said is: here is the result. It may or may not be true. Even the principal investigator acknowledges that it may or may not be true. I’m sure he would put the odds at better than 50%, but I wouldn’t. And then they’re telling you to eat a low-fat diet to avoid breast cancer. It’s a guarantee that some people, at least, will come away from that story intending to change their diets. So despite everything, the fact that it shows up in the paper, and on the front page of the paper, serves to negate all the caveats acknowledging that the interpretation of the data (which is the headline, of course) is as likely to be wrong as right. The only way to get around that is to say, in effect, in the second paragraph that this may or may not be true. And to point out that there are indeed negative consequences to going on low-fat diets, which happen to have been known for 40 years. Low-fat diets are high-carbohydrate diets, and high-carbohydrate diets will increase the risk of syndrome X, or metabolic syndrome. They will raise triglyceride levels. Triglycerides are a risk factor for heart disease. They also lower HDL levels. They lead to hyperinsulinemia and insulin resistance, etc.
EpiMonitor: And so the press doesn’t go that far.
Taubes: The press– they don’t have the space. If they go that far you end up with such a confusing picture that there is no story. And even if the reporters are smart enough to realize that there is no story, or that the story is so ambiguous that it should be played on page 23 at the bottom of the page, the fact that other papers played the story prominently dictates how you have to play it. In this case, the Washington Post actually ran the story the day before the Times did and the Post ran it on the front page. Big news. So the Times can’t ignore it. In an ideal world, if I were the newspapers, maybe I would dedicate a page to the latest health news with the entire slant being why the latest result is unreliable and should not be believed. That would be more consistent with scientific thinking, in any case. If someone is selling you a used car, you want to know what’s wrong with the car, not what the seller tells you is right with it.
EpiMonitor: You try to make news out of…
Taubes: Well, you just accept the fact that it’s going to be news and then you try to buffer it.
EpiMonitor: What you say then is that the better way to serve your readers is not to tell them why this might be true–
Taubes: But to tell them why it’s probably not.
EpiMonitor: Why it’s probably not. And since most of the time that’s more helpful than why it might be true, that’s the better service.
Taubes: That’s a good way to put it, yes. Again, it brings us back to the fundamental problem which is the science itself.
Thoughts About the Public
EpiMonitor: Let’s talk about the public for a moment. We talked about the press. What do you think as far as the public is concerned? They were one of the culprits in 1995 probably because they have an appetite or desire to learn about new breakthroughs.
Taubes: Well, I didn’t blame them, actually. I quoted, I think, Marcia Angell and Jerome Kassirer blaming the press and the public. You know, the public wants to know how to improve their lives. If they didn’t care, we’d all still be smoking! You know, nobody would be reading this stuff.
EpiMonitor: Say that again I didn’t catch your point.
The Quagmire
Taubes: I mean if the public didn’t care – the public being all of us – we’d still be smoking cigarettes and engaging in other seriously unhealthy but enjoyable habits. The problem is we do care. We want to live longer. We want to know how to live healthy lives. We want to be able to tell our children how to eat healthy. I don’t think you can blame the public. The problem is you can teach them to be highly skeptical, which they’re becoming. But there’s a problem even with that: first of all, some of these studies are going to be right. The cigarette studies were right, but then the tobacco industry plays on the skepticism by insisting that we should be skeptical about those studies as well. That’s what the industry has insisted for 40 years. And because they can pay researchers to generate data suggesting cigarettes are harmless, they can argue that we should be skeptical of the studies that say they are certainly not harmless. So now we’ve got a problem. Some of these studies are right. As soon as you come out with a study that says product X will cause disease Y, the makers of product X will fund studies to provide the opposite conclusion. They will fund studies to show that the substitute for product X will also cause disease Y. It becomes such a quagmire, and there’s so much bad science. There are so many studies and so many conflicting results, and that’s a direct result of basic, fundamental problems with the science of epidemiology.
EpiMonitor: So we’re in a quagmire.
Taubes: So we are in a quagmire. Why are we in a quagmire? This should be your next question.
EpiMonitor: Well I think it’s because everybody’s interested in getting to a better place, understanding what’s going on, and doing something about it.
Taubes: But it’s also that we have a tool that is absolutely incapable of establishing cause and effect.
EpiMonitor: Right, but the tool is capable of producing positive beneficial results. That’s what counts.
Taubes: Except you don’t know which ones are positive and you end up taking votes. You may or may not have quoted me saying this in the first interview. But if you didn’t, this is one of the issues I always raised when I lectured about this afterwards. When I was writing my first book, I lived at a physics laboratory for nine months, and I watched these very smart physicists discover these [nonexistent] fundamental particles. In this very hard science, one of the unwritten rules was put forth by this very smart fellow named Wolfgang Panofsky.
EpiMonitor: Yes, you did quote him in our first interview.
Taubes: What Panofsky said is that if you throw money at an effect and it doesn’t get bigger, that means it is not really there. In epidemiology, if you throw money at an effect and it doesn’t get bigger, then you do a meta-analysis. But if Panofsky’s law is right, then it holds for epidemiology as well. Just the fact of having to do the meta-analysis tells you that you’re dealing with a non-existent phenomenon. I still, to this day, do not know how to get around that. That’s the paradox. Until some epidemiologist can explain to me a way to get around it, I’m going to be extremely skeptical of the endeavor. Because that’s one of the things we said in the Science story originally. You know, we talked about the problem of exposure. You figure out how to measure exposure better. When you do that, the effect should get bigger. So if you’ve got 20 years of studies and the effect has not gotten bigger, then that’s telling you that it’s not there. You can do all the epidemiology you want; you can do all the meta-analyses, you can do more studies, but it’s all meaningless. The fact that the effect has not gotten bigger tells you it’s not there, or that it’s beyond the limits of your science to establish that it is. If I were running this world – and I’m sure there are a lot of people by now who would consider that the equivalent of giving the keys of the insane asylum to one of the madmen – I would make it illegal to continue publishing studies on this kind of association. The only exception would be if someone comes up with an entirely new, foolproof method to measure exposure. And even then, I would make it illegal for the press to write up the result until it had been confirmed at least twice, using this new method of measuring exposure.
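Panofsky's rule of thumb lends itself to a small simulation (a sketch with invented numbers, not a model of any real study): a genuine effect estimated from bigger and bigger samples settles at its true size, while a non-existent effect's estimate just shrinks toward zero.

```python
import random
import statistics

random.seed(0)

def estimate_effect(true_effect, n, noise=1.0):
    """Estimated mean difference between an exposed and an unexposed group of size n."""
    exposed = [random.gauss(true_effect, noise) for _ in range(n)]
    control = [random.gauss(0.0, noise) for _ in range(n)]
    return statistics.mean(exposed) - statistics.mean(control)

# "Throwing money" at the question: larger and larger studies.
for n in (50, 500, 5000):
    real = estimate_effect(0.3, n)   # a genuine effect of size 0.3
    null = estimate_effect(0.0, n)   # no effect at all
    print(f"n={n:5d}  real-effect estimate={real:+.3f}  null-effect estimate={null:+.3f}")
```

On a typical run, the real-effect column stabilizes near 0.3 as n grows while the null column collapses toward zero: more money sharpens a real effect and fades a phantom one, which is the asymmetry Panofsky's law exploits.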
EpiMonitor: You mean you’d stop publishing studies in a particular area until you could do it in an improved way.
Taubes: Until you can do it in a way that some group of very skeptical learned experts can say this could possibly change the way we do the science. Let’s say, for instance, we now have a biological marker of exposure that we didn’t have before. We have confidence in this biologic marker. But even then we’re still going to be stuck with the problem of the confounders, of what other factors might associate with the marker, that we never imagined we would have to worry about. And so we’re still left with the unknown confounder problem. How do we get around that?
Thoughts on Epidemiology
EpiMonitor: Let me ask you the key question. From the point of view of epidemiologists, what’s your sense about the problem that you put your finger on in 1995: are epidemiologists doing better, worse, or the same?
Taubes: Well again my problem with commenting on that is that most of what I’ve been concentrating on in my later work has been historical. On one hand, I don’t see how they can do much better because I don’t see how they can avoid doing poorly. The problem is fundamental to the science, no matter how it’s practiced. My gut feeling again would be that there is probably just as much bad science out there as ever because the demand has increased. I believe I quoted Dimitrios Trichopoulos to this effect ten years ago. He said then that the demand had gotten bigger. So there is more of the chaff that gets out there for every kernel of wheat, and certainly the demand has increased dramatically since the Science article ten years ago. So if epidemiologists have gotten better, I would be surprised, and I would like somebody to tell me how they can get around the problems that we discussed.
EpiMonitor: Somewhere I have read, I don’t know if it was in your material or what, this idea of taking graduated steps in releasing work, where you start by doing everything you can think of to identify what you did wrong, then if you cannot find anything, you graduate to the next step and talk to people maybe at conferences or seminars. Then–
Taubes: Right, I’ve definitely said that in the past.
EpiMonitor: Only when you’ve gone through all these hurdles do you consider publication of the paper.
Taubes: The point of the publication of the paper is so that you get more people to tell you where you probably screwed up because the chances of discovering anything new are infinitesimal compared to the chances of discovering something incorrect. This is just because there’s an infinite number of wrong answers for every right one. So every step of the way, from planning the very first experiment to the moment of publication is based on the assumption that you have almost assuredly screwed up and you simply haven’t figured out how yet.
EpiMonitor: In my experience, epidemiologists don’t think that way.
Taubes: They don’t. And the reason is that with epidemiology it is so easy to point out potential flaws that nothing would ever get published if the field had even the barest minimum of skepticism. So everyone says we’re just going to do the best we can. But the best we can is pathological. It leads you to this world in which you can’t believe anything. So this is the fundamental problem with the discipline; it’s the fundamental problem with a science that cannot establish cause and effect. It’s not a science. Everyone gets a vote, and that’s why the field has been dominated over the past 30 years by the idea of achieving consensus of opinion. If 70% of the people, whether epidemiologists, or public health types, or nutritionists, or whatever, believe something, we can tell the public that it’s probably right. But 90% of the people believe in ghosts and ESP and stuff like that. It’s true that 90% of the scientists don’t. But the whole history of science is basically the history of things being proven wrong that most of the people in the field believe are right. Again I don’t know how to do it other than to say randomized clinical controlled trials will give you a reasonable idea that the thing you’re testing has an effect or doesn’t. Unfortunately, it won’t tell you whether you tested the wrong thing. And it won’t tell you whether the effect would have gone away or become negative if you had only run the trial longer.
EpiMonitor: Epidemiologists are not going to go out of business. The demand for it is there because public health questions can be looked at with this tool. The tool sometimes produces very useful results.
Taubes: But how do you know?
Example of an Epidemiology Success
EpiMonitor: Well sometimes you can know because you can do things that work, like with SIDS. You can change the baby’s sleeping position and you know you’re right because the SIDS rate is cut in half.
[Eric Boodman at STAT just wrote an interesting assessment of the state of the SIDS science circa 2019 that’s worth a read.]
Taubes: Ok, so you do a trial. In SIDS it may not have been a randomized controlled trial.
EpiMonitor: Right, but that’s not a trial. At that point you can believe observational studies and conclusions from observational studies–
Taubes: What if SIDS was caused by a virus?
EpiMonitor: What?
Taubes: I’m talking off the top of my head now. But let’s just say SIDS was caused by a virus, and the virus burnt itself out. So now you’ve got half the number of cases because–
EpiMonitor: Well you don’t know with 100% certainty. That’s true. But at this point if you’ve got a 50% reduction, you’re going to draw an inference. It’s a reasonable thing to do, particularly if kids are dying and you don’t know why.
Taubes: If you have a mechanism, but this has always been one of my complaints about this stuff. The diet story is an interesting parallel because one of the arguments that I’ll make is that people don’t need a randomized controlled trial to tell them if a diet works for them. They can go on the diet and if they lose weight then it doesn’t matter what an RCT shows. What’s interesting is you have the community saying, well we can’t advocate these low-carbohydrate diets because we don’t know if they work and we don’t know they work because we haven’t done clinical controlled trials. My point is that I want a clinical controlled trial to know if the diet will lead to heart disease or shorten my life. But I can tell myself whether or not I lose weight on it. I don’t need a clinical controlled trial to tell me this is the best diet for me if it does work. If it doesn’t work, I certainly don’t need a clinical controlled trial to tell me that. And I don’t want a clinical controlled trial to tell me this diet is no better than a low-fat diet if I go on the diet and I find that, in my case, at least, I can lose weight easier on a low-carb diet despite what the study says. So the studies are mostly irrelevant in the case of weight loss. I can get an answer with an N of 1. So now what about SIDS? You say, ok, you turn the baby over, you’re right, how can that possibly be bad for the baby? So if we get a reduction in SIDS death, this must be good. But most issues aren’t that clear cut. There might always be a trade-off, for instance. I mean for all we know again, maybe autism is more frequent when babies sleep on their backs. Maybe that’s why babies sleep on their stomach preferentially. I don’t know. And if SIDS is caused by a virus, you now blamed it on something else instead. It’s like convicting an innocent man for a crime. The problem isn’t only that an innocent man goes to prison, but that the guilty party is still out there, and might still be dangerous.
The Issue of Unintended Consequences
EpiMonitor: But that’s another issue, the question of unintended consequences. The fact remains you have cut the rate of SIDS dramatically and on top of that you don’t need to understand the mechanism to achieve the positive result. Would it really have been reasonable in the 1950s and 1960s, when scientists developed the live polio vaccine, to ask questions about whether there was some kind of undetected virus in these cell cultures that we were using, as we later learned there was? You don’t know what you don’t know. At some point society has to make a decision that we’ve got a technology, we’ve got a new development. We don’t know all the unintended consequences. What are the potential benefits of using this? If the benefits are dramatic, you take your chances.
Taubes: Yes, if the benefits are dramatic. The problem is once society has made that decision, it is a very rare society that can ever go back and say it was wrong. Like I said, the way I got onto this book I’m doing now is in part because when I was reporting the fat story for Science this fellow at NIH said to me, “it’s just weird– when we told people in 1984 to go on low fat diets, we assumed they would lose weight because these are less dense calories. Lo and behold, it’s 15 years later and we’re in the middle of an obesity epidemic.” It stuck with me. There’s an unintended consequence for you.
EpiMonitor: That was the whole point of your New York Times Magazine article.
Taubes: That was the point. That’s what prompted that story basically. I wanted to find out– was there truth to that?
More On Solutions
EpiMonitor: Well Gary, I want to get back to solutions.
Taubes: After the piece came out– I don’t know if I told you this in the Epidemiology Monitor ten years ago. But after I had the piece about epidemiology in Science I got invited to lecture at the Harvard School of Public Health, which I thought was a real plum. But the reality was that one of the guys who invited me also consulted for industry. This particular industry thought it had gotten a bad rap due to bad epidemiology, and I think this Harvard fellow was hoping that if he paid me $1,000 to lecture, I would also be induced to write about this industry and right this particular wrong. I won’t mention any names. But what I said then I still believe, which is that if we took all the epidemiologists out of the School of Public Health and sent them out to take guns away from gang members in the ghetto, at least we would know that they would be doing something good with their lives. We might lose a few of them, but the upside might be worth it.
EpiMonitor: I am not sure I want to print that.
Taubes: Well I did say it there. They were a bit shocked, but– I’ll leave it up to you. What I am talking about is a particular type of epidemiology. I’m talking about observational risk factor epidemiology. I’m not talking about infectious disease epidemiology.
EpiMonitor: But let’s make a couple of assumptions here for a moment. First, that these studies are not going to stop, that public interest in these topics is real, and that the tool can be useful. I don’t think there is going to be an end to the use of epidemiologic techniques in these kinds of studies to try to look at questions and try to get helpful answers. So if you accept that, what are the most important things that you can say from what you’ve learned? Is it a human nature problem? Is it a technique problem? Is it a professional society problem? What could be done? We’re not going to change human nature. But maybe there are safeguards that can be introduced. I think this is a real challenge to you because after all, why do you do what you do?
Taubes: I find it fascinating, that’s all. Because it’s the same problems that come up in our everyday life. How do you decide what is reliable knowledge? There’s a story about Thomas Huxley that’s relevant to this. He was in correspondence with a religious scholar and the fellow wanted to know why he was an atheist and Huxley said, in effect, because there just wasn’t enough reason to believe otherwise. You know, he said, when I talk about the law of inverse squares I know what I’m talking about and I know I can rest my convictions on it. I’m fascinated with these questions: how do we know what to believe? What out there is firm enough to rest our convictions on? The advice I could give is, let’s start with the source. The source is the epidemiologist. I want to know when they do a study why they think I should believe it is true. I don’t want epidemiologists to just tire me out by giving me possibilities why it could be true. If they can’t tell me why there is a very good reason that this is so, and that it’s a solid hypothesis that’s worth spending my money on to test with a clinical trial, then I don’t want them to publish a paper. I don’t want them to– I just don’t want to hear from them. I don’t want the epidemiologist to move forward in his career. Science is about establishing cause and effect, and if you can’t answer all my very critical questions with solid and reliable answers, then I don’t want to hear from you. Even if you’re right, you’re not ready to publish yet. And I don’t care if you have to do this to get funding, or you have to do this to get tenure. That’s irrelevant. If you can’t defend your result in front of an audience of experts who are trying their very best to tear it down, then I don’t want to hear from you. Because what you’re doing is not science and it’s not going to give me reliable knowledge.
EpiMonitor: You’re focusing on reliability at the expense of utility.
Taubes: You know it’s got to start there. It can’t just be, well, maybe it’s right and maybe it’s wrong, or maybe we missed something but it’s important if it’s right. That’s simply not good enough. There’s something I recently read that’s relevant here. It was an article by Hans Krebs, as in the Krebs cycle that we all learn about in high school biology, and Krebs was talking about the making of a scientist. It takes having a good mentor, he said, and your mentor doesn’t just teach you about good laboratory techniques and how to set up an experiment or pose a theory or publish a paper, but about having the mindset of a scientist. And this mindset is that of someone who will stop at nothing to learn the truth, and will publish nothing that can’t be defended against all known criticisms.
EpiMonitor: That’s not a useful standard to set in public health. What you are saying is ironic because if you know anything about epidemiologists compared to other scientists, it is that they are highly skeptical.
Taubes: That’s what they tell me. But the problem is this. With other scientists, nobody cares. If they’re wrong about the origins of the big bang, I don’t change my life accordingly. If supersymmetry particles exist or not, or if string theory is right or wrong, I don’t change my diet because of it. If somebody says they’ve made a new discovery in some nanometer scale optical laser technology, it doesn’t affect my life one iota. And so there is time for science to test their hypothesis or confirm their experimental results in these fields; there is time to see if it’s reproducible or refute it. In other words, there are all these checks and balances that then go to work, and they simply don’t exist in risk factor epidemiology and this preventive medicine business.
EpiMonitor: Because you need the information today to run your life.
Taubes: That’s the implication. If we don’t need it today, then why bother publishing anything? Why does a scientist… what is the motivation of these epidemiologists who do these small case control studies, other than to generate hypotheses? What are they thinking when they publish those papers? So why publish? And why, for Heaven’s sakes, do they talk to the newspapers afterward? What are they thinking?
EpiMonitor: I think the motivation is to move forward towards disease control and disease prevention.
Taubes: Yet, we have no idea if what they’re publishing is true.
EpiMonitor: No, not 100%, but…
Taubes: So the real way to do it would be to say in a real science, if you’ll pardon me, it would be to say— now ok, I’ve generated my hypothesis. Now I’m going to test it. How can I test it? When I test it then I’m going to publish it. But what happens with epidemiology is because you cannot establish cause and effect, people publish hypotheses.
EpiMonitor: Yes, but there’s a difference between effects or results you observe that you hadn’t anticipated and those you did anticipate or hypothesize about. You can have a very specific hypothesis about an exposure, for example. That’s what you’re looking for. You may be testing the effect of oral contraceptives on breast cancer or whatever; I mean that’s a very specific thing. You’re going into it ahead of time and so it’s not as if you find some unexpected result.
Taubes: Even with oral contraceptives and breast cancer. We’ve been looking at it for twenty years. Some people say yeah, some people say no. All you can do is a clinical trial. If you do a clinical trial, you can answer the question. Otherwise, how do you get closure? Look at breast implants and autoimmune disorders. I mean look at how much damage was done by lousy studies and anecdotal evidence. Okay, maybe nobody cares that it’s Dow, a chemical company, that paid the price. We don’t care if we put a few of them out of business. I mean nobody I know is working for them, but this is not a victimless crime.
EpiMonitor: Right, false positives have potentially very negative effects. There is no question about that. They can have very negative effects. So nobody wants to generate false positives.
Taubes: But the history of science tells you that most positives coming out of most studies will be false positives. That’s just the way the universe is.
EpiMonitor: Epidemiologists rarely reach causal conclusions based on one positive study.
Taubes: I hate to do this. I feel again it’s a strange situation to be in, where we’re looking at something that has such an enormous effect on our culture and the way we live our lives, yet it is fundamentally flawed. To the point that I cannot argue with you about ways to do it better, because the technology is flawed, because you cannot establish cause and effect.
EpiMonitor: Well then, you have a fundamental issue with epidemiology. You don’t consider epidemiology to be contributory in a positive way.
Taubes: Let’s pick the Nurses’ Health Study at Harvard as an example. This is probably one of the better, bigger, more expensive, longer running studies. We will find that we can divide its results into those where it got the wrong answer – hormone replacement therapy, for example – and cases in which we still don’t know. Now that’s a bad track record. It’s a bad record. It could turn out that in 90% of those cases in which we don’t know, they are actually right, but unless we do a clinical controlled trial, we don’t know.
EpiMonitor: Again though, some of these results can lead to policies that can lead to positive changes. That’s another way of knowing.
Taubes: But some of those policies led to changes that could have led to negative outcomes.
EpiMonitor: But that part of it, that’s a different point. That’s the unintended consequences issue.
Taubes: I know, but that’s science, that’s life. Life is unintended consequences.
EpiMonitor: No, life is not just unintended consequences. Sometimes there are predictable consequences we can manipulate. If you don’t want to take a chance on something that looks positive because there might be unintended consequences–
Taubes: I know but you’re talking to someone who spent the last ten years of his life writing about policies that were institutionalized based on this kind of data and that are probably wrong and may have done a massive amount of harm.
EpiMonitor: Ok, so your–
Taubes: All I can do is hope that I’m wrong. I may be wrong. It will be a hell of a lot easier to say that I’m wrong than to say that the entire country of public health authorities, epidemiologists, nutritionists, dieticians, and everyone who spent two hours in a nutrition class in the past 30 years is wrong. So chances are–
EpiMonitor: But there are two issues there. One is being wrong because you got the wrong answer. Another is wrong because your answer was correct but overall with unintended consequences factored into it, it wasn’t a good decision.
Taubes: But the only way to know about unintended consequences is to do a randomized controlled trial.
EpiMonitor: Ok, so what do you do if you can’t do that?
Taubes: Then you can’t believe anything. You cannot act on premature data. You cannot act on hypotheses without taking the risk– again, remember that what we’re talking about here is preventive medicine. We’re talking about taking healthy people and telling them what to do so they can live longer. That’s a fundamentally different world than what doctors are used to dealing with. A fundamentally different situation than taking sick people and saying I have to act because you’ll die for sure if I don’t. These are healthy people. You don’t have to act. They are healthy now. So again to use a phrase that David Sackett, a Canadian epidemiologist, used in connection with the hormone replacement therapy incident, what we’re faced with is the “disastrous inadequacy of lesser evidence.”
EpiMonitor: Disastrous what?
Taubes: Disastrous inadequacy of lesser evidence. He was talking about the hormone replacement therapy story, but he was using it in terms of all these decisions. He made the point that we’re dealing with healthy people and we’re telling them how to change their lives so theoretically they will live longer. And we’re doing it in a way that we’re as likely to get it wrong as we are to get it right. So they’re as likely to shorten their life by taking our advice as lengthen it.
EpiMonitor: You’re saying it’s not a high enough standard?
Taubes: It’s not a high enough standard. The only standard by which we can do it is one of a clinical controlled trial. Like I said, the worst that will happen there is we’ll get a false negative. But we have to live with that. The same way the justice system chooses to live with a system that will let guilty parties go rather than hang innocent people or incarcerate innocent people. Science has to live with its limitations as well. The limitations of epidemiology are so huge. I think that’s what the ultimate argument is. I’m saying that the limitations are so huge that we need a culture that says we’re not going to believe anything except the clinical controlled trial. There are obviously benefits to epidemiology. It took epidemiology to get the country and the government and the cigarette industry to face up to what people knew all along. Okay, so there is a benefit of epidemiology. But if you smoked, I mean you may not have known for sure that you were going to get lung cancer from smoking – apparently a lot of people didn’t – but they were called coffin nails for a reason. And that is a huge effect. That was obvious. What we’re talking about now are the not huge effects.
Solutions
EpiMonitor: I’m going to try one more time to get you to talk about solutions. Remember that the question of unintended consequences doesn’t get resolved with randomized controlled trials either.
Taubes: It does if you run it long enough. This is a problem with drugs like statins. Say they decide some 25-year-old male has high cholesterol. They are going to put him on a drug for 50 years that they’ve only got 10 years of follow-up on. That’s a problem. Again you just assume that maybe 10 years is long enough to identify any long-term negative side effects, but what if it’s not? Remember this 25-year-old with high cholesterol isn’t necessarily going to get a heart attack. He just has an increased risk.
EpiMonitor: But the clinical trial limitation is a limitation of resources and time.
Taubes: But again, you’re telling healthy people how to change their lives. Take this alternative hypothesis of refined carbohydrates and disease, for example. Ideally, I publish the book and the experts say (in my fantasies), “gee, Gary, this is compelling, what do we have to do to test this?” And I would say, “20,000 people are needed, 10,000 on each side. Run the trial out for ten years. You should see effects. And if you don’t, then it can’t possibly make that much difference to any single individual, because if it doesn’t make enough of a difference to 10,000, then it’s probably mildly irrelevant and anyone who wants to do it can flip a coin and decide on their own.” But if somebody says to me, “Can you be sure this will help?” all I can say is “I don’t know. Nobody’s done the tests. I live by this diet, but, who knows, I could have a heart attack tomorrow.” All I can do is live by what I now believe to be fairly compelling evidence. I think the evidence is sufficiently compelling that I am willing to risk my reputation on it by writing the book and rest my convictions on it when it comes to my own diet.
EpiMonitor: That’s another problem. Now you’re talking about what you find to be true on a population basis and whether it’s true on an individual basis.
Taubes: I talk about that in the book because a lot of these issues we’re discussing are ones in which the public health authorities chose to make decisions and give advice based on the notion that it would make a difference on a population-wide basis, but would be effectively irrelevant to the individual. But in order to get the population to change — and the British epidemiologist Geoffrey Rose talked about this in a couple of famous lectures he gave in the 1980s — you have to make people think it will make a difference for them individually. It’s fascinating logic but it’s disingenuous, to say the least.
EpiMonitor: That’s true for some people but not for everyone. You don’t know which ones it’s true for and which ones it is not.
Taubes: That’s the problem. The epidemiologist will say to me –well it’s no different than seatbelts. The odds are very good that we’ll both spend our entire life not benefiting one iota from wearing a seat belt, but they tell us to wear them just in case. And their logic, in this case, is reasonable. But that’s a different issue. If I have a car accident, and I live through it, I can be fairly confident that the seatbelt made a difference. I can be fairly confident of the effect I got out of wearing the seatbelt. If I change my diet and then have a heart attack or my breast cancer recurs or whatever, I have no idea whether changing the diet made it more likely to happen or less likely. I might live to be 90 if I eat a low-fat diet, and I may live to be 70 if I eat a low-fat diet, and I don’t know. And if I do live to be 90 on my low-fat diet, I’ll never know if I would have lived to be 95 had I eaten a high-fat diet.
EpiMonitor: But it may not always be important for people to know individually if we know it on a population basis.
Taubes: Well, but we’re also making an assumption that this will benefit some people, but not others. Everyone’s different. What if it benefits some and harms some? So you’re willing to decide that some advice is worth giving if 70 percent of the population benefits and yet 30 percent are harmed? I’m not willing to make that kind of bargain, personally.
EpiMonitor: The benefit-risk ratio is often very much more favorable than 70-30. Also, one needs to look at the nature of the benefits and harms. But it is often correct that interventions are not perfectly safe and some trade-off is being made to obtain the benefits.
Taubes: What do you do about that?
EpiMonitor: Well, that’s where you get into the values of public health and–
Taubes: Yes, religion, faith and–
EpiMonitor: No, but the greatest good for the greatest number, a value which drives public health–
Taubes: Yes, well all of this is relatively new. Remember this is a relatively new science, certainly post-World War II.
EpiMonitor: If you and I talk again in ten years on the 20th anniversary of your article, what do you think you’re going to tell me on the 20th anniversary about the state of epidemiology and misleading results?
Taubes: I’ll be telling you I don’t know much about it anymore because I’ve been living in Pago Pago since my book came out.
EpiMonitor: You have vanished to Pago Pago?
Taubes: You know, that I wasn’t allowed to have lunch with anyone in this town again. Okay, I’m just kidding. I hope. Anyway, what I’ve been saying is that there is a fundamental problem with the discipline, with the science, with the technology, with the equipment. I can’t see how it can change. I don’t know what the answer is. I know better statistical manipulation of the data, or fancier computers, isn’t going to solve it.
JASON as a Solution
EpiMonitor: I don’t know why we should let you off the hook on that because you know that’s how you ended the interview last time.
Taubes: Okay, how about this. Here’s what I would do. Because I know there are people out there who are a lot smarter than I am. One of the joys of science writing is you get to meet very, very smart people. The kind of people who make you say, boy, if I had half a brain like that, I’d really be able to get this stuff right. So I would get the 20 best scientists that I know. It wouldn’t matter what field they were in, but I can guarantee that the field of epidemiology would not be well represented, because I’m looking for the 20 best scientists and my implication here is that risk factor epidemiology is not really a science. I would have them deconstruct this whole issue, ok. I would find the 20 best scientists in the country, the most rigorous, the most relentlessly skeptical, and I would have them sit down and work this out. Have them spend six months or however long it takes. And then I would let them tell us what to do. I would delegate authority to people whose intelligence and track record are such that I would trust them to think for me, in this case.
Actually there is a Department of Defense group called JASON. JASON is an interesting group. I’ve written about them in the past. They are a group of mostly physicists. Some chemists, mathematicians. The only way to get into JASON is to be invited in by JASON scientists. It’s like an exclusive club. So theoretically it’s the very best scientists in physics, chemistry, math, probably some biology now. Last time I looked, there were half a dozen to a dozen Nobel Prize winners in JASON. JASON meets every summer in San Diego. And the Departments of Defense and Energy provide a list of topics that they would like to have JASON think about for them. Topics that they need JASON’s input on. So, I would take JASON and I would say, figure out this epidemiology problem. Is there a way to assure that the benefits are greater than the risks? Is there a way to come to reliable conclusions about cause and effect? You know, look at the way the science works. Look at the effect it has on society. Tell us, is there something we can do because this fellow Taubes thinks it’s hopeless and we don’t know how to convince him – or ourselves, in all honesty – that it’s not. And now that I think about it, this isn’t a bad idea. JASON might actually be intrigued by it.
EpiMonitor: But then that still leaves the question …
Taubes: 10,000 epidemiologists are going to say—why do physicists think they’re so smart? And the answer is, because they were smart enough to stay out of epidemiology and because they know what science is and how it has to be done, and that may not be the case with the 10,000 epidemiologists.
Alternative Hypothesis
EpiMonitor: Yes, that’s going to be one reaction. But if you are so hopeless, why do you write a book?
Taubes: When I wanted to write the book to begin with I just wanted to address the issues. Now I find weirdly enough that I’ve been seduced by a hypothesis. I think the hypothesis is compelling.
EpiMonitor: Which is?
Taubes: Which is this alternative hypothesis that refined carbohydrates and sugars are the cause of obesity and most chronic diseases, or at least that they increase the risk dramatically. I’m surprised. Again there is always a very great danger that I might have been seduced by my hypothesis just like every other lousy scientist out there. I’m accumulating positive evidence and ignoring negative evidence.
EpiMonitor: You have succumbed to the journalist’s version of pathological science! Yes, to go back to our earlier interview in the Epidemiology Monitor in 1996 and to borrow a familiar metaphor, you’re building a different cathedral!
Taubes: I’m building a different cathedral.
On the other hand, it’s a cathedral that can be relatively easily tested. I had this fight with my brother, who is a professor of mathematics at Harvard, and he’s read some of the book in draft. He found it pretty persuasive and he was pretty impressed, but he kept saying you have to give us tests of your hypothesis. What good is a hypothesis that doesn’t have tests? My response was that, well, at least it’s a hypothesis that fits the data, as opposed to the current hypothesis that doesn’t. But actually there are very simple tests. The argument I would make would be very simple to test.
EpiMonitor: But this is an issue about fat or the carbohydrate hypothesis. What we’ve been talking about is a judgment about epidemiology.
Taubes: Like I said, I don’t know. Everything I know about science says epidemiology is fundamentally flawed.
EpiMonitor: So what are you going to write in your next Science article? What’s going to be the title of that article?
Taubes: Well it’s going to try and convince people to back out of the quagmire. That’s what I’m going to make the argument over. With the help of the press and the journalists and perhaps with the help of the epidemiologists and the public, we can slowly back ourselves out of the swamp and get back onto firm ground.
[Note circa 2019: I did not write another Science article, but I did write a cover story for The New York Times Magazine on epidemiology. It was called “Do We Really Know What Makes Us Healthy” and I think it is worth reading. Worth noting, though, is that while the 1995 Science article is still cited regularly, that 2007 NYT Magazine article is pretty much ignored, either because it’s not in the peer-reviewed literature, or because epidemiologists have been reluctant to discuss an article that argued that they were practicing, quite literally, a pseudo-science. Or maybe it’s just not very good, although I think it is (for what that’s worth).]
EpiMonitor: How do we do that? Not on a specific issue?
Taubes: Well most of it is just being excruciatingly and brutally honest about what these studies do and do not tell you. My spur-of-the-moment idea is that press reports of the latest study should spell out the caveats and why it’s almost assuredly wrong, why it shouldn’t be believed.
EpiMonitor: So we should not seek to change the science, although we can try to introduce certain safeguards to be more careful and more rigorous. But if we communicate about it differently, then we don’t create the same kind of problems until perhaps there is a body of evidence?
Taubes: Ok, so then what you have to do is have the NIH impose penalties on every institution that puts out a press release on the latest study that appears in the latest epidemiology journal. Say, each institution would lose $50,000 in funding for each press release. Then the institutions themselves can decide if it’s worth it.
EpiMonitor: Well then the answer is that we need to be more judicious about who we give grants to.
Taubes: Well that’s certainly the case, anyway, but the problem is again that the least reliable studies are the cheapest ones to do. (Although I don’t want to let the expensive studies like the Nurses’ Health Study off the hook by saying that.) Say you’re getting a Ph.D. in risk factor epidemiology. So you go ask some senior researcher if you can dredge some large health study for some data. So now you can come up with a hypothesis. And now you publish the hypothesis as though it’s a meaningful result. It’s going to get published. The senior researcher’s name is going to be on it. It’s going to be in the New York Times. It’s still just a hypothesis. But that’s not how we’re going to perceive it. Again, I’m sorry. Every time you make it into the newspaper you should get your funding cut.
EpiMonitor: That’s a creative solution!
Taubes: That will also teach them to start phrasing their articles and press releases in such a way as to stay out of the papers instead of trying to get in. Again epidemiology would be a harmless pursuit if the public wasn’t plugged so directly into it.
EpiMonitor: Well then maybe that’s an important point and a direction for change.
Taubes: Yes, but again, I don’t know.
EpiMonitor: Well, you don’t want to stop generating ideas and possibilities if you have a real need for new health interventions. So if you had some way to better triage those results.
Taubes: Well that’s true. I mean that’s what you could do.
EpiMonitor: You could put a gag order on the publication, so to speak. We could control the dissemination of the results for a longer period of time and allow basically a longer gestation period. Instead of having to meet one standard before you could go public, now you might have to meet a higher standard. Again, that would be something that would have to be discussed. But we would still need to determine the point at which people would be prepared to go public with an association?
Taubes: That’s the thing. If you could keep this stuff hidden. But you can’t keep it hidden in this day and age. It will end up with bloggers, who will accuse it of being suppressed as part of a conspiracy to keep the truth from the public. And then there’s always the question of who decides whether it’s reliable enough to be revealed.
EpiMonitor: Right. That’s the other problem. Everybody wants to have all the information. We don’t trust any authority to make decisions or judgments for us. We want the information so we can judge for ourselves. But as you pointed out, if the true situation is often a quagmire, what good is it to let out all the information?
Taubes: That’s the thing. If you could educate the public and you could educate the bloggers, you’ll still end up dealing with a lunatic fringe. But even then. I mean look, I’m now writing a book saying the lunatic fringe was right or probably right. It’s a weird situation. You can go out tomorrow and find 10 journalists who will say, yes, Gary was a very smart guy until he went crazy. But, you can say, ok, we’re going to accept a certain amount of lunacy because we have to.
EpiMonitor: I think you were educated in a somewhat different culture. The skeptical culture of physicists that you talk about in the earlier interview and this one. Epidemiologists don’t have that same kind of culture in which people go to such extreme lengths to check their results and to avoid being wrong. Epidemiologists are both scientists and public health professionals seeking to control or prevent disease. Sometimes you have to act with less than perfect information if you want to make a difference in health.
Taubes: But it’s not the skeptical culture of physicists. It’s the skeptical culture of science. You remove the skepticism, you remove the science. That’s what science is about. If you publish something you’re supposed to be able to defend it. If you make a mistake and publish a sloppy paper, or you publish a paper you can’t defend, or you don’t believe it enough to devote your life and your money to it, then you’re not a scientist and you shouldn’t be working in the field. You can become a doctor instead, or you can sell shoes. But I mean if you want to do science then you’ve got to live up to the standards. If you don’t, you shouldn’t be involved. I used to joke with my science writer friends that there should be something called scientific malconduct, as opposed to misconduct. So you publish a sloppy study, you get one warning, like steroids in baseball: 200 days you can’t practice. Then you’re allowed back in, and if you do it again, you’re gone! That would be the penalty for publishing sloppy work or overinterpreting your data. If you publish something, you should believe it’s true and you should be able to defend it against the equivalent of an inquisition. What else can I say?
EpiMonitor: I don’t know enough about other sciences but I’m not so sure that there is such a high degree of certainty that attaches to all the other kinds of experimental work.
Taubes: Your bar as epidemiologists has to be set higher because you have a much higher impact.
EpiMonitor: Ok, that’s an interesting comment. So epidemiologists are no better or worse than other people.
Taubes: No, I’m not saying that. They probably are worse because of the methodology, like I said. But I don’t have to concern myself with bad theoretical physicists or lousy paleontologists because they don’t have any health impact.
EpiMonitor: Okay, so the other scientists don’t rise to the level of certainty that you’re demanding for epidemiology. You’re demanding it of epidemiology because what we do affects people’s lives. Whereas you’re not demanding it of other scientists because what they do doesn’t affect people as directly.
Fundamental Conflict of Interest
Taubes: Yes, but I’m also willing to bet that in other fields, because there is not such an effect on people’s lives, because there is not this warping of motivations, because the public and the money don’t warp the field so much, you probably have much more rigorous research to begin with. Because there is much less benefit from publishing bad science. What’s the good of doing some chemistry experiment and publishing something that may or may not be true? Sure, you might want to get a Ph.D. to be a working chemist, but if your standards are so low that you don’t care whether it’s true, what’s the good of getting the Ph.D.? Theoretically, you’re in that business to do good science. You’re going to get the Ph.D. if you do good science. So there is no real motivation to do anything less. Okay, you’re always going to find people who do it, who get perverted; who think the motivation is to become a professor or to graduate, and it doesn’t matter whether you’re doing good science or not. But for the most part, scientists go through the trouble of becoming scientists because they want to do good science. That’s what it’s all about. Unfortunately, here there are so many other motivations – the public health issues and the money and the press, basically just the desire to help people. It’s one of the themes that keeps recurring in my book – this conflict between the desire to help the public and the requirements of good science.
EpiMonitor: I don’t want to put words in your mouth, but I think there is an important point here or at least one that helps me to understand differently what you consider to be the fatal flaw in epidemiology. There is a fundamental conflict of interest between the motivation for good science and the motivation for why we’re doing the science; that we’re fundamentally applied in our science, that is, we’re instrumentally motivated in a way that perhaps other scientists are not. That causes us not to have the rigor that you think the scientific method demands.
Taubes: It’s not that it causes you not to have the rigor. I believe that the science inherently does not have the rigor. But I don’t think it would have become such a massive enterprise.
EpiMonitor: Yes, but it would not have accomplished what it has accomplished with the obsessive focus on rigor that you call for.
Taubes: You have such a motivation to do good with it, to have an impact, that you have ignored the fact that the tools themselves are flawed and you’ve allowed the endeavor to continue to grow. In one sense it’s like the spread of religion, because you tell people what they want to hear. It doesn’t matter if it’s true or not, because you end up with these huge institutions for the same reasons. These other sciences, it’s not about telling people what they want to hear. So they have less motivation. If you have a tool that doesn’t do a reliable job of establishing cause and effect, you just throw out the tool. So I think this desire to do good is a wonderful desire, but it’s not wonderful enough if it means that you’ve taken this flawed tool and created a massive enterprise with it. And it’s an enterprise that may do as much harm as benefit.
EpiMonitor: These are provocative thoughts. I think it’s important that we continue to push for ideas about solutions to these concerns.
Taubes: You’ve given me an idea because I have friends in JASON from my physics days and my cold fusion days and I think I’m going to talk to some of them and see if there is anything they can do.
EpiMonitor: Getting people to say you have put your finger on a real problem is an accomplishment. If you could get people to acknowledge that then at some point people would have to address it.
Taubes: Well again–
EpiMonitor: I’m not sure that was accomplished last time with your article and our interview. I think you got people talking with your Science article in 1995, but I’m not sure that people acknowledged there was a problem. Personally, I am not aware that the Science article changed the practice of epidemiology in the last decade.
Taubes: My fantasy with the book I’m writing is that, right or wrong, I should end up in a Congressional hearing saying here is the problem. The problem is that I’ve just given you a compelling argument that nutritionists and public health experts and epidemiologists have spent the last fifty years screwing up mightily, and you don’t know if it’s right or wrong, and neither do I and the science is incapable of telling us.
END OF INTERVIEW