CleanGredients recently interviewed Jennifer McPartland of the Environmental Defense Fund (EDF) to talk about her work assessing hazards associated with preservatives, what EDF’s approach means for smart innovation in chemistry, and the future of toxicology. (This interview has been edited for clarity and brevity.)
CleanGredients: Why did you choose to focus on preservatives?
Jennifer: We wanted to look at a functional class of chemicals used in personal care products, as people come into direct contact with these products, as well as a class that has received a fair amount of regulatory and market scrutiny. Preservatives met all of these criteria. We also reached out to an array of experts from the government, private, and nonprofit sectors for their input. Almost unanimously these experts recommended we focus on preservatives.
We conducted a market review of preservatives commonly used in personal care products and consulted a group of personal care product companies and preservative suppliers to select our final set of 16 preservatives for the project.
CleanGredients: Why did you select the GreenScreen® methodology to review the hazards associated with the preservatives you assessed?
Jennifer: It’s a well-established methodology that has gained traction in both the public and private sectors. GreenScreen captures a fairly comprehensive set of human health and environmental endpoints and explicitly indicates where data gaps exist. Importantly, the method is publicly available online, so anyone can understand the process assessors use to assign low, moderate, and high hazard scores.
GreenScreens are typically used to identify which chemicals may be more or less toxic within a functional class as part of a chemical selection process. Our project put a different spin on GreenScreen: we wanted to demonstrate how the method can be applied to help set design criteria during chemical innovation.
CleanGredients: For those chemicals that had higher hazard scores, were there certain endpoints that were consistent hotspots?
Jennifer: Skin sensitization, skin irritation, and acute and chronic aquatic toxicity were common among the higher-scoring chemicals. Going into the project we didn’t have any preconceived notions about whether we would see hazard trends. It only became apparent through this work that these particular endpoints often scored moderate to very high hazard across the preservatives we evaluated.
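To make that kind of endpoint-level analysis concrete, here is a minimal sketch in Python of how GreenScreen-style scores might be tabulated to surface hotspots. The preservative names, scores, and thresholds are invented placeholders, not values from the EDF report.

```python
# Minimal sketch: tabulating GreenScreen-style hazard scores to find
# endpoint "hotspots". All names and scores are illustrative
# placeholders, not values from the EDF report.

# Ordinal scale roughly mirroring GreenScreen's low/moderate/high bands;
# None marks a data gap, which GreenScreen flags explicitly.
SCORE_RANK = {"L": 0, "M": 1, "H": 2, "vH": 3}

scores = {
    "preservative_A": {"skin_sensitization": "H",  "acute_aquatic_tox": "vH", "carcinogenicity": "L"},
    "preservative_B": {"skin_sensitization": "M",  "acute_aquatic_tox": "H",  "carcinogenicity": None},
    "preservative_C": {"skin_sensitization": "vH", "acute_aquatic_tox": "M",  "carcinogenicity": "L"},
}

def hotspots(scores, min_rank=1, min_fraction=0.5):
    """Return endpoints scoring moderate-or-higher in at least
    `min_fraction` of the chemicals that have data for them."""
    endpoints = {e for per_chem in scores.values() for e in per_chem}
    flagged = {}
    for ep in sorted(endpoints):
        vals = [per_chem.get(ep) for per_chem in scores.values()]
        with_data = [v for v in vals if v is not None]
        if not with_data:
            continue  # pure data gap: nothing to conclude
        frac = sum(SCORE_RANK[v] >= min_rank for v in with_data) / len(with_data)
        if frac >= min_fraction:
            flagged[ep] = frac
    return flagged

print(hotspots(scores))
# -> {'acute_aquatic_tox': 1.0, 'skin_sensitization': 1.0}
```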
CleanGredients: Were you able to note anything at a molecular level that allowed other chemicals you evaluated to avoid those hazards?
Jennifer: The molecular biologist in me really wishes we could have done that level of analysis, but time and resources didn’t allow it. EPA’s Center for Computational Toxicology is working to tackle key questions like this one. Namely, can we identify certain structural features of chemicals that drive toxicity? EPA scientists have been running large numbers of cell-based tests, or assays, on thousands of chemicals and are at a point where they can start trying to identify relationships between the structural features of chemicals and observed toxic effects. That’s the next frontier for a project like ours: once you identify a hazard trend within a functional class, can you figure out how to design out that hazard using structural information?
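As a rough illustration of what that frontier looks like in practice, the sketch below relates molecular fingerprints to a hazard label with an off-the-shelf classifier. The SMILES strings and labels are invented placeholders; this is a generic structure-activity sketch, not EPA’s actual pipeline.

```python
# Rough illustration of structure-activity pattern recognition: relate
# molecular fingerprints to a hazard label. SMILES and labels are
# invented placeholders; this is not EPA's actual pipeline.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Toy training set of (SMILES, sensitizer? 1/0) -- placeholder data.
data = [
    ("CC(=O)Oc1ccccc1C(=O)O", 0),   # aspirin-like scaffold
    ("O=Cc1ccccc1", 1),             # benzaldehyde-like scaffold
    ("CCO", 0),                     # ethanol
    ("O=C1OC(=O)c2ccccc12", 1),     # anhydride-like scaffold
]

def fingerprint(smiles, n_bits=1024):
    """Morgan (circular) fingerprint as a 0/1 feature vector."""
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

X = [fingerprint(s) for s, _ in data]
y = [label for _, label in data]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Which fingerprint bits (i.e., substructures) most drive the prediction?
top_bits = sorted(enumerate(model.feature_importances_),
                  key=lambda t: t[1], reverse=True)[:5]
print(top_bits)
```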
CleanGredients: That type of pattern recognition could be a bridge to help synthesis chemists in the lab get insight into what they could be doing differently.
One thing you weren’t able to evaluate as part of the study was the efficacy of the preservatives. For some audiences, performance is an important consideration to weigh against toxicity.
Jennifer: Yes, I agree. From the outset we realized how important performance was in supplementing the toxicological evaluations we were doing. Although we weren’t able to do de novo efficacy testing of the preservatives we evaluated, in the report we provide information from the literature on what types of microbes the preservatives are effective against, as well as formulation compatibility considerations. For example, some preservatives might be effective against gram-positive or gram-negative bacteria but can’t be used in a formulation that’s at pH 4. While we think ingredient hazard needs to be a priority consideration in formulation decisions, there are other important considerations, such as performance and how much of an ingredient is required to achieve its function. So you can’t limit yourself to considering only hazard when thinking through product formulation, but hazard has to be treated with at least equal importance to other considerations. It should not be treated as an afterthought.
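As a sketch of how those non-hazard constraints can sit alongside hazard scores in a single screening step, the snippet below filters candidate preservatives by efficacy spectrum, formulation pH, and a hazard ceiling. All entries are illustrative placeholders, not data from the report.

```python
# Sketch: screening preservative candidates on hazard, efficacy
# spectrum, and formulation compatibility together. Entries are
# illustrative placeholders, not data from the report.
candidates = [
    {"name": "preservative_X", "hazard": "L",  "spectrum": {"gram_pos", "gram_neg"},          "ph_range": (3.0, 6.0)},
    {"name": "preservative_Y", "hazard": "M",  "spectrum": {"gram_pos", "fungi"},             "ph_range": (5.0, 8.0)},
    {"name": "preservative_Z", "hazard": "vH", "spectrum": {"gram_pos", "gram_neg", "fungi"}, "ph_range": (2.0, 9.0)},
]

def screen(candidates, needed_spectrum, formulation_ph, max_hazard="M"):
    """Keep candidates that cover the needed microbes, tolerate the
    formulation pH, and stay at or below the hazard ceiling."""
    order = ["L", "M", "H", "vH"]
    keep = []
    for c in candidates:
        lo, hi = c["ph_range"]
        if (needed_spectrum <= c["spectrum"]
                and lo <= formulation_ph <= hi
                and order.index(c["hazard"]) <= order.index(max_hazard)):
            keep.append(c["name"])
    return keep

# A pH 4 formulation needing gram-positive and gram-negative coverage:
print(screen(candidates, {"gram_pos", "gram_neg"}, formulation_ph=4.0))
# -> ['preservative_X']  (Y fails on pH and spectrum; Z fails on hazard)
```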
CleanGredients: The definition of quality or suitability has to include hazard characteristics as well, not just performance, cost, or aesthetics. But acknowledging that performance and cost are also part of the puzzle is important from the perspective of industry.
Jennifer: It was really valuable to have companies involved in the project because it provided an opportunity for us to have real conversations about our perspective on these types of key issues as well as theirs.
CleanGredients: Were there any other big wins with the suppliers and brands that were on your advisory committee?
Jennifer: The consistent, structured approach and presentation of our analyses allowed some of the companies to notice aspects of preservatives that they hadn’t recognized before. That’s a big win, as it demonstrates the value of the approach we applied. The distribution of the report internally by at least some of the companies was another big win. A primary purpose of the whole project was to encourage businesses to conduct these types of analyses in their R&D efforts.
CleanGredients: What other feedback did you receive on the report?
Jennifer: Many of the preservatives that we evaluated were data-rich. A preservative supplier reached out and asked how we would apply this approach in a data-poor space. That question is a very salient and important one. We would argue data must be generated to fill data gaps.
In the future I believe we’ll be seeing much greater use of predictive chemical testing approaches, like the ones I mentioned earlier, that are able to generate large volumes of data more quickly and inexpensively. Today, there are a lot of questions around how biologically comprehensive the assays are, how reliable they are, how sensitive they are – and there’s no lack of scientific opinion and debate. But the debate is part of the process; we need to work through the scientific difficulties and challenges to strengthen these newer approaches over time.
CleanGredients: Do these assays create data for each of the endpoints that toxicologists typically look at, or is the output in a different form?
Jennifer: This is part of the paradigm shift. Toxicologists are accustomed to reviewing data for apical endpoints: a tumor is there or it is not there; the uterus weighs more or it doesn’t. These are effects at the anatomical level. In contrast, the newer predictive approaches measure molecular-level perturbations: for example, whether a chemical binds to a protein, or whether a cell divides more rapidly as a result of exposure to a chemical. Part of the reason we describe the assays as predictive is that they predict the apical effect you’d ultimately see if you were testing the chemical in a whole animal or observing an effect in a human or ecological population. How you take molecular-level information and apply it in chemical hazard or risk assessment is a work in progress; it’s a real challenge to figure out how to apply these new data streams in a scientifically credible way.
CleanGredients: Let’s say that you draw certain conclusions about a chemical based on the assay results. Do you then have to do in vitro or in vivo testing to see what the impact is on particular organs or physical functions?
Jennifer: One of the main drivers for these newer predictive approaches is to avoid always having to use animal models. Folks working to advance these newer methods are taking sets of really well-characterized chemicals and putting them through batteries of different types of predictive approaches in order to assess the extent to which these predictive methods or a combination of them correlate with what is known about the well-characterized chemicals. In other words, you validate an assay or prediction model against chemicals with known toxicity until you reach a point where you’re confident in using that assay or prediction model to assess a chemistry that’s less well-characterized or not characterized at all.
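A compressed sketch of that validation loop: score how well an assay-based model recovers the known toxicity of well-characterized reference chemicals, and only then apply it to an uncharacterized chemistry. Everything below is schematic placeholder data.

```python
# Schematic sketch of validating a predictive model against reference
# chemicals with known toxicity before applying it to unknowns.
# All data are invented placeholders.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Rows: reference chemicals; columns: responses in a battery of
# predictive assays (e.g., receptor binding, cell proliferation).
assay_readouts = [
    [0.9, 0.1, 0.8],
    [0.2, 0.0, 0.1],
    [0.8, 0.3, 0.7],
    [0.1, 0.1, 0.2],
    [0.7, 0.2, 0.9],
    [0.0, 0.0, 0.1],
]
known_toxic = [1, 0, 1, 0, 1, 0]  # established from prior whole-animal data

model = LogisticRegression()
# Cross-validated accuracy on the well-characterized set: the
# confidence-building step before trusting the model on unknowns.
scores = cross_val_score(model, assay_readouts, known_toxic, cv=3)
print("validation accuracy:", scores.mean())

# Only once validation looks acceptable would the model be applied
# to a chemical with no prior characterization:
model.fit(assay_readouts, known_toxic)
print("prediction for uncharacterized chemical:",
      model.predict([[0.85, 0.25, 0.75]]))
```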
CleanGredients: Another challenge is the availability of toxicological data to users after it is generated. In your report, you propose a chemicals assessment clearinghouse to share toxicological information about chemistries used in commerce. How would this type of clearinghouse help get safer products to market?
Jennifer: The significant business interest we received in response to our project, which looked just at a small subset of chemistries, is a clear signal of the demand for this type of analysis and information. A clearinghouse has the potential to empower businesses to make smart, informed decisions about chemicals and products they make or sell. It could also lessen the individual workload of companies separately working to develop chemical health and safety information while promoting consistency in the understanding of chemical hazards across businesses.
There are certainly many considerations to building a credible clearinghouse that relate to cost sharing, data sharing, and ensuring robust analysis; however, I think the desire for much more chemical hazard and risk information in the marketplace is not going away. In the near future somebody’s going to figure out how to make this happen because the demand signal is so strong and businesses are engaged. It’s no small feat, but I think it’s time for a disruption of how we’ve traditionally been doing things.