Supposedly, you check the ingredients list of your product against a comedogenicity list. If it has highly comedogenic ingredients, it will cause pimples; if it doesn't, it won't. It's simple, systematic and foolproof, right? Unfortunately, it's not quite that simple…
This post goes into the science behind these comedogenicity ratings, and how exactly you should use them. It also comes in video form!
Click here for the video, scroll down for the rest of the blog post…
What does comedogenicity mean?
Comedogenicity is the tendency of an ingredient or product to clog pores. Ingredients are ranked on a scale:
- 0 – Completely non-comedogenic
- 1 – Slightly comedogenic
- 2-3 – Moderately comedogenic
- 4-5 – Severely comedogenic
The numbers in comedogenicity scales come from studies performed by academics and published in peer-reviewed journals – this usually means they're somewhat reliable and valid. However, as with many other skincare claims "supported by the literature", problems emerge when you dig deeper!
What’s wrong with the comedogenicity scale?
The problem is that the studies that produced the comedogenicity ratings don’t reflect real-world usage, for a number of reasons:
Tests aren’t done in real-world conditions
In an ideal world, we’d test every single product on every single person’s face, and develop a definitive comedogenicity rating list based on that. But this would be impossible – it would cost too much, there are too many products, and getting a lot of people to only use the one product and not change their daily routine for weeks or months at a time would be a mammoth task.
Instead, what’s used in most scientific studies is a model – a situation that mimics the real world, but is simpler to carry out and control. Think crash test dummies, dyed samples of hair, pouring blue liquid onto sanitary pads, patch testing potential allergens on your arm, testing bikes on a race track.
Most of the time these models work pretty well, but sometimes they don’t reflect the real world situation, so their results can’t be applied to everyday life (they have low external validity). In the case of comedogenicity ratings, the models don’t fare so well.
The most common test – the rabbit ear test – is flawed
The most common test for comedogenicity is the rabbit ear test, pioneered in cosmetics testing by two famous dermatologists, Albert Kligman and James Fulton, in the 1970s. It involves applying a substance to the inner ear of a rabbit, then waiting a few weeks to see if any clogged pores form. Because rabbit ears are more sensitive than human skin, they react to comedogenic products faster, which makes testing more convenient.
Unfortunately, this also means there are lots of false positives, where ingredients that are non-comedogenic in humans are found to be comedogenic in the hypersensitive rabbit model. Additionally, in the original tests, the scientists didn't realise that rabbit ears have naturally enlarged pores. Some results counted these as acne, leading to even more false positives.
Related post: Purging vs Breakouts: When to Ditch Your Skincare
The most famous false positive is petroleum jelly (petrolatum or Vaseline), which was corrected in the late 1980s, but this was debated until the mid-1990s – that’s why the myth that Vaseline and oily products cause pimples is still so pervasive. This wasn’t the first time the rabbit ear tests were questioned – conflicting results were commonplace, and comedogenicity lists frequently disagreed with each other (and still do).
Related post: Is Mineral Oil Dangerous?
More recently, in 2007, dermatologists Mirshahpanah and Maibach went so far as to say:
“[the rabbit ear] model is unable to accurately depict the acnegenic potential of chemical compounds, and is therefore only valuable for distinguishing absolute negatives.” – Mirshahpanah and Maibach, 2007
Tests on human subjects are also flawed
If rabbit ears don’t reflect what happens on human skin, then the obvious solution is to test on humans, right? Yes…but there are problems there too!