Comments on: Taming THC Inflation: Is There a Silver Bullet?
http://cannabisandsocialpolicy.org/taming-thc-inflation-silver-bullet/
Academic, Policy and Industry Evolution

By: SteveO | Sat, 13 May 2017 00:50:31 +0000
http://cannabisandsocialpolicy.org/taming-thc-inflation-silver-bullet/#comment-25076

Hi Dominic,

What a great article! I think you have a very interesting suggestion for addressing the Accuracy problem – if I may paraphrase, simply normalize results within each lab. But I think there is a bigger problem here that I don’t see much discussion of, probably because the data needed to do any real analysis is lacking, but the problem is real. And that problem is Precision.

You mention a couple of times how the labs’ results can be Precise, but not necessarily Accurate. For those who don’t know the difference, here is a link to a graphic that lays it out pretty clearly:

https://www.google.com/imgres?imgurl=http://cdn.antarcticglaciers.org/wp-content/uploads/2013/11/precision_accuracy.png&imgrefurl=http://www.antarcticglaciers.org/glacial-geology/dating-glacial-sediments-2/precision-and-accuracy-glacial-geology/&h=1363&w=2040&tbnid=d92uiG4-wAF2aM:&tbnh=140&tbnw=211&usg=__5_SC4MoAraOOU7FSsHT6e0OAS2k=&vet=10ahUKEwjSw9iUx-vTAhUXwWMKHXvGDmgQ9QEIKzAA..i&docid=DP3vEoGBlFj3NM&sa=X&ved=0ahUKEwjSw9iUx-vTAhUXwWMKHXvGDmgQ9QEIKzAA

Simplifying a bit, Precision is repeat-ability. It is a measure of how consistent the results are, as opposed to whether they are actually on the mark or not. Suppose we have a sample that we somehow know is exactly 20%, and it gets tested 5 times, with results of 14%, 15%, 15%, 16%, and 15%. These results are not very Accurate (they’re off by roughly 25% of the known value), but they ARE fairly precise, repeatable, and consistent, differing by +/- 1% from each other.

The degree of Precision is usually expressed in terms of digits, or orders of magnitude. For example, most college statistics classes will teach that 15% (two digits) is less precise than 15.0% (three digits), which in turn is less precise than 15.00% (four digits). Why is that? The lower number of digits implies a greater variability, which means less measured repeat-ability. For example, contrast the numbers above with a different sample set that tested at 15.01%, 15.00%, 15.00%, 14.99%, and 15.01%. The first data set is 15% +/- 1%, while the second data set is 15% +/- 0.01%. They’re both 15% (not very Accurate), but the second data set is 100x more Precise.
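
To make that concrete, here is a quick Python sketch using the made-up sample values above (not real lab data) that computes the mean and spread of each data set:

from statistics import mean, stdev

# Hypothetical test results for a sample whose true potency is 20%.
low_precision  = [14.0, 15.0, 15.0, 16.0, 15.0]       # ~15% +/- 1%
high_precision = [15.01, 15.00, 15.00, 14.99, 15.01]  # ~15% +/- 0.01%
true_value = 20.0

for name, results in [("low precision", low_precision),
                      ("high precision", high_precision)]:
    m, s = mean(results), stdev(results)
    print(f"{name}: mean = {m:.2f}%, std dev = {s:.3f}%, "
          f"error vs true value = {true_value - m:.2f} points")

# Both sets miss the true 20% by about the same amount (poor Accuracy),
# but the second set clusters roughly 100x more tightly (much better Precision).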

I would argue this Precision is what our labs are missing (both across labs and within labs). Unfortunately, we may not be capturing the data necessary to examine Precision, and the fact that labs report 4 digits of Precision while not seeming able to reliably re-produce results with more than 1 digit of Precision is, to me, the bigger problem.

Please give some thought as to how we might be able to examine repeat-ability and Precision both within a lab, and across labs. I’m guessing that it would involve collecting a lot more data than we do today, so that we could somehow know that any given group of samples came from the same “thing”, and thus *should* be repeatable, as opposed to from different “things” that rightly ought to be different. Then we could group those samples, and take a standard deviation, or something like that.
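
For what it’s worth, if we ever did capture which results came from the same “thing”, the analysis could be as simple as something like this (the lab names, lot IDs, and numbers below are made up purely for illustration):

from collections import defaultdict
from statistics import mean, stdev

# (lab, lot_id, reported_thc_pct) -- replicate tests of the same lot
results = [
    ("Lab A", "lot-001", 18.2), ("Lab A", "lot-001", 19.1),
    ("Lab A", "lot-001", 17.6), ("Lab B", "lot-001", 22.4),
    ("Lab B", "lot-001", 21.9), ("Lab B", "lot-002", 14.8),
    ("Lab B", "lot-002", 15.1),
]

groups = defaultdict(list)
for lab, lot, thc in results:
    groups[(lab, lot)].append(thc)          # within-lab repeat-ability
    groups[("all labs", lot)].append(thc)   # across-lab reproducibility

for key, vals in sorted(groups.items()):
    if len(vals) >= 2:
        print(f"{key}: n={len(vals)}, mean={mean(vals):.1f}%, "
              f"std dev={stdev(vals):.2f}%")

The per-group standard deviations are exactly the “degrees of Precision” we could then compare against the number of digits the labs actually print on the label.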

But until we have repeat-ability, and degrees of Precision that match our reporting, I’m not sure how much Accuracy really matters. If we have two samples from the exact same flower, same location, same timing, same lab, and they test at 13% and 27%, how reliable are either of those numbers? Even normalizing won’t fix this.

By: Doc O'Zee | Thu, 11 May 2017 12:10:56 +0000
http://cannabisandsocialpolicy.org/taming-thc-inflation-silver-bullet/#comment-25065

THC, as a metric, is w-a-y overrated. I generally ignore those numbers as absolutes and simply view them as “relative to.” The terpene profile is what it’s all about for me. THC specificity is just one more stupid thing WA does.

By: corvad | Fri, 05 May 2017 17:49:03 +0000
http://cannabisandsocialpolicy.org/taming-thc-inflation-silver-bullet/#comment-25056

In reply to Nick Mosely.

Thank you Nick, this is amazing feedback, the details of which really improve the suggestion considerably! I absolutely concur and would add what you say here as canonical improvements to the “silver bullet”!

By: Nick Mosely | Fri, 05 May 2017 17:25:58 +0000
http://cannabisandsocialpolicy.org/taming-thc-inflation-silver-bullet/#comment-25055

Great article, well written once again by Dominic. I remember when you mentioned this idea to me a month or so ago. It’s an elegant solution. Very unfortunate that it is necessary at all, but it would work without any intervention by the labs. To keep parity with current results, the scale could be 0-30 instead of 0-100; that would leave most samples at most labs relatively unaffected by the change, which would be easier for consumers to grapple with. IMO, a rolling 100 to get your percentile bins is not enough. Some labs do 100+ samples in a day, so they could bias their results one day but not the next. And especially during harvest season, one producer can sometimes provide 100 samples to a lab, in which case the producer would be compared only to him/herself, instead of to their peers, as intended. You need an n of at least 1000 in my opinion, and most of the labs already have a flower n that large.
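
For anyone curious what that might look like mechanically, here is a rough sketch of the percentile idea with those refinements folded in: rank each new flower result against the lab’s own last N results (N of at least 1000 rather than 100) and rescale onto a 0-30 scale. Everything here is purely illustrative, not anybody’s actual implementation:

from bisect import bisect_left, insort
from collections import deque

WINDOW = 1000     # minimum n suggested above
SCALE_MAX = 30.0  # report on a 0-30 scale instead of 0-100

class LabNormalizer:
    """Tracks one lab's recent flower results and converts a raw THC%
    into that lab's own percentile, rescaled to 0-30."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)  # raw THC% in arrival order
        self.ranked = []                    # same values, kept sorted

    def report(self, raw_thc_pct):
        # Percentile rank of this result among the lab's recent results.
        if self.ranked:
            pct_rank = bisect_left(self.ranked, raw_thc_pct) / len(self.ranked)
        else:
            pct_rank = 0.5  # no history yet; arbitrary midpoint

        # Maintain the rolling window of the last WINDOW results.
        if len(self.recent) == WINDOW:
            self.ranked.remove(self.recent[0])  # drop the oldest value
        self.recent.append(raw_thc_pct)
        insort(self.ranked, raw_thc_pct)

        return pct_rank * SCALE_MAX  # normalized score, 0-30

A lab that habitually reports inflated raw numbers would still end up with roughly the same distribution of 0-30 scores as an honest lab, which is the whole point of the normalization.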

In fact, this paradigm would encourage unscrupulous labs to withhold results that are abnormally high (quite the reverse of their current temptations), as high values would have the effect of skewing all other results to be lower. To cheat under this paradigm, the lab would have to test a whole bunch of low-THC trim and call it flower in order to skew the rest of their results up. Doesn’t make cheating impossible, but would make it much, much more difficult. And – of course – this correction method would only work well with flower, where the population of samples is roughly the same between labs. Extracts vary quite widely in concentration between processors and between extraction methods and even between runs, so extracts would be more difficult to pin down this way.

I’ll bring it up with the packaging and labeling committee at the LCB. I fear I already know the LCB’s response: “traceability system constraint.” Their software isn’t ready for this kind of complexity. Keep up the good work. This issue is a huge burden to the industry and a solution is desperately needed. It’s a black eye for all of us.
