SCIENCE CORNER: 2011 ARCHIVES

Recently, I attended the “Animal Models and Their Value in Predicting Drug Efficacy and Toxicity” Conference sponsored by the New York Academy of Sciences.  The 2-day meeting, held in New York City on September 15-16, 2011, brought together international clinical and basic science investigators to discuss the predictive value of various animal models.  I learned a great deal about the current state of animal models, and what I learned really concerned me.

- Dr. Pam Osenkowski, Director of Science Programs

I am always surprised by the confidence with which scientists report data from studies using animal models, and by how convincing they seem when they suggest their findings may have human relevance.  As a former researcher, I am well aware of the scientific reasons why it is dangerous to extrapolate findings from animal studies to people.  So I have always wondered how scientists who use animal models really feel about their data, and whether they, too, worry about the relevance of their model systems.  By attending this Animal Models conference, I got the inside scoop on how scientists really feel about animal models, and I was quite shocked.  And I was not the only one!  Jimmy Bell, a scientist from Imperial College London who spoke at the meeting, began his talk by exclaiming, “Let me say how frightened I am to use animal models after yesterday afternoon,” referring to day one of the conference, in which significant issues with animal models were discussed.

The conference began with a great keynote speech from Dr. Ann Jacqueline Hunter, Founder of OI Pharma Partners, Ltd.  Her talk focused on whether animal models of human disease have helped or hindered drug discovery.  One of the most impactful parts of her talk came when she asked the audience if we are “deluding ourselves” by believing we have good models of human disease with genetically modified animals, and cautioned scientists not to over-interpret data from animal studies.  Dr. Hunter was not the only speaker who questioned the animal model.  A number of individuals spoke of the inability of animal models to completely recapitulate the human phenotype of disease or risk.  Several talks noted that genetic differences between animals and people can give rise to different mechanisms of drug effects across species.  Even when speakers described “good” animal models -- ones that mimicked some aspect of the human condition -- other researchers in the audience would point out that other breeds of the same animal did not respond the same way, and would remind the speakers that the human recapitulation they were observing might have occurred through totally different mechanisms in the animal.

While the issues above are indeed concerning, they were matters with which I was already familiar.  However, I was made aware of additional problems at the meeting that came as more of a surprise to me.  These issues concerned the quality of scientific experiments using animal models.  I was shocked to learn that not all scientists adhere to the policy of “blinding” in animal studies, in which certain information that may bias the outcome of a study is withheld until after the experiment is conducted and the data analyzed.  I once worked in a lab where a researcher did not want to be blinded when performing an experiment, and the results of that experiment came out just the way he thought they should.  When other lab mates insisted the experiment be blinded, suddenly the data no longer matched what was expected.  Blinding is critical to quality experiments.  I also learned that something as simple as mouse diet is not as well controlled in studies as it could be, further complicating matters.  For instance, in “high fat” versus “normal” mouse chow, fat content is not the only variable -- there are large differences in salt levels as well.  And mouse chow is also high in soy, leading to unusually high estrogen levels in male mice.  These variables can influence experimental findings and complicate data analysis in model systems that are already inherently flawed.

Many researchers were concerned that “negative data” are not brought to the light of day, as scientific journals are not interested in publishing such findings.  Unfortunately, this practice causes researchers to repeat failed experiments, and it prevents the scientific community and the general public alike from knowing just how many animal experiments have really failed.

I left the conference feeling very troubled.  I don’t understand why this kind of science continues when all of those involved realize that there are serious flaws with the model systems in place.  And I’m sure this conference informed scientists of additional flaws in their model systems that they may not have even known about before.  These scientists will inevitably return to their labs and continue carrying out animal experiments, because they have no motivation or incentive to change their ways.  There is something terribly wrong with this passive scenario.  This kind of science simply should not continue.  Funding agencies have the power and the ability to intervene and demand that scientists identify better models with more human relevance, and it is time they did so; our health and well-being are at stake.  Think of how far cell phones and computers have come in the last decade -- the workers in those fields were presented with the challenge of designing better machines and devices, and they did!  I know that if scientists were given the same push to identify more human-relevant model systems, they, too, would come through.

 
53 West Jackson Blvd., Suite 1552
Chicago, IL 60604
(800) 888-NAVS or (312) 427-6065
Fax: (312) 427-6524
navs@navs.org
© 2013 National Anti-Vivisection Society is a
501(c)3 non-profit organization