One of the fundamental advantages of the regulated Cannabis market in Washington is that the state has required that products sold at Retail Access Points have been made from Cannabis and/or intermediate products that have undergone both Quality Assurance (QA) and Potency testing.
Quality Assurance Testing
Products that pass the required series of QA tests are less likely to contain unacceptable levels of (depending on the product) moisture, foreign matter, pathogenic micro-organisms, residual solvents and, as of Aug. 31, mycotoxins. The LCB also reserves the right to test any product for heavy metals and/or pesticides, but such testing is not required for “non-medical-grade” products.
Products certified as “medical-grade” are of even higher quality in that they have been shown to not exceed allowed levels of heavy metals, non-restricted pesticides, and mycotoxins (which have been assessed in “medical-grade” products since the inception of the regulated medical market last year).
Products passing all required QA tests are considered to be of higher quality in that potentially harmful and/or noxious substances are not present beyond the levels the LCB considers acceptable. They are, effectively, presumed to offer a higher level of product safety and to be less likely to cause harm.
There is little question that QA testing, when done well, increases the quality of product available to consumers who source their product from regulated Retail Access Points. QA testing, when done well, increases the safety of products available to consumers, whether they be recreators or Patients. All products available in Retail Access Points today have passed all required QA tests.
QA testing, when not done well, accomplishes neither of these things. That is likely why the LCB attempts to ensure that it IS done well. How successful they have been in those efforts is less clear.
Potency testing (PT), on the other hand, is not conducted in a pass/fail manner. PT tests for the levels of four Cannabinoids as either a percentage of total weight or, in the case of edibles and drinkables, as the number of milligrams present per unit dose.
All products are required to undergo PT which, under current rules, consists of reporting on levels of delta-9 THC, delta-9 THCA, CBD and CBDA (and the resulting “totally decarbed” delta-9 THC and “totally decarbed” CBD). These are not the only cannabinoids present in Cannabis. They are, however, considered by the LCB (and DOH) to be among the more important ones for predicting the intoxicating (delta-9 THC) and the “medicinal” (CBD) potential of the product.
PT, when done well, yields important information to consumers. Often unable to smell (let alone taste) the product before purchase, Cannabis consumers in Washington rely heavily on the package label for information regarding the strength of the product (in either a “recreational” or “medical” sense).
All products available in Retail Access Points today are required to contain a label which reports both “fully-decarbed” delta-9 THC and “fully-decarbed” CBD. The LCB also allows the reporting of any other cannabinoids and/or terpenes for which testing has been conducted and for which test results can be made available to the consumer, upon request, at the point of sale. Product labels vary substantially, but a common additional “cannabinoid” number included is for “Total” or “Total Cannabinoids”. This number is often reported as the total of “fully-decarbed” THC and “fully decarbed” CBD.
There is an active debate regarding whether the PT testing and reporting requirements in place today actually convey useful information to consumers. One major theme in this debate is that THC and CBD alone (and THC in isolation especially) are not a good proxy for the degree to which Cannabis will either get one “high” or help one medically. Advocates of this notion tend to point to the “entourage effect”, in which numerous components of Cannabis working in concert impact the potency of products. I will not deal with this theme today, other than to say it tends to be held by folks that know a great deal about the plant and about products derived from it (i.e., they are likely correct).
One other major theme, and the one I hope to shed further evidentiary light on, is that there is so much variability across the labs in their PT numbers that what they report is virtually useless. While there is considerable cross-lab variability in the PT numbers being reported, the conclusion that this makes the results virtually useless is a bit harsh.
Assessment of cross-lab Variability in Potency Reporting
You may have heard stories about how the cannabinoid levels measured at the top of the plant can differ dramatically from those same cannabinoid levels measured in the middle of the same plant. The cannabinoid levels measured can differ based on the time of day at which the flower was harvested. The cannabinoid levels measured can vary because of lots of things. I am going to ignore this source of variability in reported cannabinoid levels for now.
You may also have heard stories about how the cannabinoid levels measured by different labs can differ dramatically. The following charts (and table) summarize an assessment of cross-lab variability in reported PT results — but first some methodological details.
Hidden within the traceability database are many instances in which product sharing a common heritage (e.g. product sourced from the same farm at about the same time from the same “batch”) is tested by more than one lab. This presents a wonderful opportunity to statistically assess the variability across labs in the PT numbers they generate.
The following summarizes work on over 150 instances in which samples taken from the same “batch” were tested by two different labs. For the farmer to pay for two tests when only one is required, there must be some value to the farmer in doing multiple tests on the same “batch”. In doing so, the farmer likely wants to know if the results reported by Lab A are reasonably close to those reported by Lab B or if Lab A consistently reports higher or lower numbers than does Lab B.
My primary assumption underlying the analysis below is that when a farmer sends two samples out to different labs for testing, they will do their absolute best to ensure that the two samples are as equivalent as possible.
Based on that assumption, each situation in which 2 labs test flower from the same batch is analogous to a coin flip. If a coin is “fair”, it will tend to come up heads ½ of the time and tails ½ of the time. If a coin is “biased”, it will tend to come up heads (or tails) more often than expected.
Similarly, when two labs test “identical” product, one lab should report PT numbers higher than the other lab about ½ of the time and PT numbers lower than the other lab about ½ of the time.
The following five charts summarize the results displayed by the 5 labs that had participated in at least 20 instances of testing flower sourced from the same “batch” as flower tested by one other lab. I limited this analysis to FLOWER to minimize variability attributable to processing.
For each of these same-batch-duplicate-test situations, I calculated the fully decarbed “Total” of THC and CBD combined (specifically, Total = THC + (0.877 × THCA) + CBD + (0.877 × CBDA)).
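The decarb arithmetic above can be sketched as a small function (the input percentages below are hypothetical illustration values, not real lab results; 0.877 is the molar-mass ratio that converts the acid forms to their neutral forms):

```python
def decarbed_total(thc, thca, cbd, cbda):
    """Fully decarboxylated THC + CBD total, in percentage points.

    0.877 is the molecular-weight ratio converting the acid forms
    (THCA, CBDA) to their neutral, active forms (THC, CBD).
    """
    return thc + 0.877 * thca + cbd + 0.877 * cbda

# Hypothetical flower sample: 1.2% THC, 21.0% THCA, 0.1% CBD, 0.4% CBDA
total = decarbed_total(1.2, 21.0, 0.1, 0.4)
# total is roughly 20.07 percentage points
```

This is the same “Total” a consumer would compare across two labs testing flower from the same batch.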
Once “Total” was calculated for each of the two tests, a difference score was computed for each lab (the “Target Lab”) by subtracting the other lab’s value from the Target Lab’s value for product taken from the same “batch”.
If there were no systematic difference in the PT numbers reported across labs, the distribution of difference scores should be equally distributed on both sides of zero. There should be as many instances, on average, where the difference is positive (where the target lab reports numbers HIGHER than the other lab) as when the difference is negative (where the target lab reports numbers LOWER than the other lab).
The five charts below display the distribution of difference scores from the perspective of each of the 5 labs participating in the largest number of such “duplicate PT tests” on flower samples.
One thing to look for is the degree of symmetry around the zero point (marked by the vertical green line) for each lab. For each lab, are there about an equal number of times that their results were higher (to the right of the green line) as there were when their results were lower (to the left of the green line)?
If that is the case, that lab’s “coin” would appear to be “fair” or “unbiased”.
If, however, there are noticeably more samples on one side or the other of the green line on a given chart, that lab’s “coin” would appear to be “biased”.
One other thing worth looking at is the sheer size of the differences. Quite a few differences approach (and, in a handful of extreme cases, exceed) 10 percentage points between labs.
10 percentage point differences from product that (remember my assumption) the farmer went to great efforts to ensure were as similar/equivalent/homogenous as possible.
Now, for some statistics. The following table summarizes a 2-tailed SIGN TEST applied to the data for each of these 5 labs. The null hypothesis is that the lab is unbiased (relative to the others) with respect to PT reporting. The null hypothesis assumes an equal number of positive and negative difference scores.
As you can see, one of the labs achieved statistical significance at a level where p < .005. This suggests that at least one of the coins being used by one of the labs is not “fair”. Perhaps of even more interest is the fact that 3 of the 5 labs showed differences of 10 or more percentage points compared to other labs TESTING THE SAME PRODUCT. When you consider that the average “Total” potency being reported is around 18%, that is a very large discrepancy, and it confirms a number of comments made by folks involved in the debate regarding variability across the labs. This degree of discrepancy, and the degree of cross-lab variability it implies, is unacceptable.
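A two-tailed sign test like the one applied here can be reproduced with a simple binomial calculation. The lab counts below are made up for illustration; the null hypothesis is that positive and negative difference scores are equally likely (a “fair coin”):

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    """Two-tailed sign test p-value: the probability, under a fair
    coin, of a split at least as lopsided as (n_pos, n_neg).
    Ties (zero differences) are assumed to have been dropped."""
    n = n_pos + n_neg
    k = min(n_pos, n_neg)
    # probability of k or fewer "successes" out of n fair flips
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    # double the tail for a two-sided test, capped at 1.0
    return min(1.0, 2 * tail)

# Hypothetical lab: in 24 duplicate tests it reported higher
# numbers than the other lab 20 times and lower numbers 4 times.
p = sign_test_p(20, 4)   # well under .005 -- the coin looks biased
```

A perfectly balanced split (say 12 higher, 12 lower) yields p = 1.0, i.e., no evidence of bias at all.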
So What? It’s just Potency.
While the risks associated with inadequate QA testing are fairly self-evident (product unfit for market makes its way to market when QA tests are not done well), the linkage between potency tests not done well and public safety is less obvious.
In rolling out a state-legal Cannabis market, some of the consumers purchasing within that market will be new to Cannabis. Such users will likely take note of the reported potency levels for product they have purchased. They will consume that product and they will, eventually, form a base knowledge of how much they can consume before becoming too impaired to do such things as drive.
Think for a moment about a new Cannabis consumer that has consistently sought out product advertising 28% Total Cannabinoids (or more) on its label. That consumer eventually comes to believe that they are just fine to drive if they limit themselves to no more than (for example) two hits before subsequently driving.
That new Cannabis consumer’s efforts to learn how to use responsibly are somewhat derailed if, upon finding themselves in possession of product labelled at a much lower level of Total Cannabinoids (say 15%), they make the reasonable inference that they should be able to take FOUR hits and still be OK to drive.
The numbers presented in the charts above suggest that this new Cannabis user may, actually, be consuming almost twice the amount of Cannabinoids as usual before going out to drive.
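The arithmetic behind that “almost twice” can be made explicit. The hit counts and label percentages are the illustrative numbers from the scenario above; the 10-point label error is the size of discrepancy observed in the charts:

```python
# Baseline habit: two hits of product labelled (and actually) 28% Total.
baseline = 2 * 28          # 56 "percentage-point hits" of cannabinoids

# New product labelled 15%, so the consumer doubles up to four hits...
assumed = 4 * 15           # 60 -- roughly the same dose as the baseline

# ...but if the label under-reports by 10 points (product is really 25%):
actual = 4 * 25            # 100 "percentage-point hits"
ratio = actual / baseline  # about 1.8x the consumer's usual dose
```

Under these assumptions the consumer believes they are taking their usual dose while actually taking nearly double it.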
We all know that people driving with “too much” alcohol in their system are dangerous — to others as well as to themselves. It is reasonable to expect that people driving with “too much” cannabinoids in their system (or, specifically, more cannabinoids in their system than they expect to be there) might also pose an increased risk to not only themselves, but to others on the roadways and sidewalks of Washington.
The lack of consistent, repeatable potency testing across labs that is summarized above is both unacceptable and dangerous.
Given this, I would encourage the WSLCB to expand the “secret shopper” program that they started earlier this year. Buying product off the shelves of retail access points and having the WSDA lab in Yakima conduct PT testing and then comparing those results to the ones reported by the certified labs serving this market is a great idea. They should increase the number of such tests and they should institute a properly-designed sampling regimen that allows them to efficiently find labs whose processes are failing to yield consistently accurate results.
I have heard rumors that one of the labs has organized a subset of the certified testing labs to do a “round-robin” test of a small number of product samples in order to assess cross-lab variability. Great idea, but it suffers from the same problem as the RJ-Lee annual (or semi-annual) inspections — the labs know they are being evaluated and are, presumably, giving their very best during such evaluations.
What is important is to have Proficiency, Reliability, Accuracy, Goodness, Unbiasedness and Empiricism be the norm in lab practices each and every day as they contribute their expertise and technical prowess toward the goal of minimizing dangers to public health and safety associated with this new industry. It is important that we hold all certified labs to the standard of being P.R.A.G.U.E.
Cannabis seems to be a relatively safe product. Let’s all demand that it be kept that way and, where possible, offer up solutions that will ensure that it is.
One such solution would be to have the LCB begin enforcing against retail access points that are not able to produce, upon customer request, the Certificates of Analysis (CoAs) that the labs produce for each tested sample. This is much more likely to happen if consumers begin routinely asking for this information at the point of purchase. Processors are REQUIRED to include a CoA with shipments to Retail. Retailers are REQUIRED to be able to produce CoAs upon customer request. If they are unable to, take your business elsewhere, but also COMPLAIN TO THE STORE MANAGER and COMPLAIN ABOUT THE STORE TO THE LCB.
If the LCB were to receive a few thousand consumer complaints naming specific stores that are unable or unwilling to do this, I suspect they might begin re-prioritizing their efforts. If the stores that cannot or will not do this today begin losing business, I suspect many of them would begin complying with this requirement quickly.
The net result would be that you would be more likely to know who tested the product you are considering. That is a first step toward holding all of the labs accountable to a better standard.