Keegan Skeate committed on
Commit 2b99de3 • 1 Parent(s): 33c4691

Refactored get results for MCR Labs and SC Labs

.gitignore ADDED
@@ -0,0 +1,8 @@
+ # Ignore environment variables.
+ *.env
+
+ # Ignore temporary files.
+ *tmp
+
+ # Ignore PDFs.
+ *pdfs
LICENSE ADDED
@@ -0,0 +1,395 @@
+ Attribution 4.0 International
+
+ =======================================================================
+
+ Creative Commons Corporation ("Creative Commons") is not a law firm and
+ does not provide legal services or legal advice. Distribution of
+ Creative Commons public licenses does not create a lawyer-client or
+ other relationship. Creative Commons makes its licenses and related
+ information available on an "as-is" basis. Creative Commons gives no
+ warranties regarding its licenses, any material licensed under their
+ terms and conditions, or any related information. Creative Commons
+ disclaims all liability for damages resulting from their use to the
+ fullest extent possible.
+
+ Using Creative Commons Public Licenses
+
+ Creative Commons public licenses provide a standard set of terms and
+ conditions that creators and other rights holders may use to share
+ original works of authorship and other material subject to copyright
+ and certain other rights specified in the public license below. The
+ following considerations are for informational purposes only, are not
+ exhaustive, and do not form part of our licenses.
+
+ Considerations for licensors: Our public licenses are
+ intended for use by those authorized to give the public
+ permission to use material in ways otherwise restricted by
+ copyright and certain other rights. Our licenses are
+ irrevocable. Licensors should read and understand the terms
+ and conditions of the license they choose before applying it.
+ Licensors should also secure all rights necessary before
+ applying our licenses so that the public can reuse the
+ material as expected. Licensors should clearly mark any
+ material not subject to the license. This includes other CC-
+ licensed material, or material used under an exception or
+ limitation to copyright. More considerations for licensors:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ Considerations for the public: By using one of our public
+ licenses, a licensor grants the public permission to use the
+ licensed material under specified terms and conditions. If
+ the licensor's permission is not necessary for any reason--for
+ example, because of any applicable exception or limitation to
+ copyright--then that use is not regulated by the license. Our
+ licenses grant only permissions under copyright and certain
+ other rights that a licensor has authority to grant. Use of
+ the licensed material may still be restricted for other
+ reasons, including because others have copyright or other
+ rights in the material. A licensor may make special requests,
+ such as asking that all changes be marked or described.
+ Although not required by our licenses, you are encouraged to
+ respect those requests where reasonable. More considerations
+ for the public:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+ =======================================================================
+
+ Creative Commons Attribution 4.0 International Public License
+
+ By exercising the Licensed Rights (defined below), You accept and agree
+ to be bound by the terms and conditions of this Creative Commons
+ Attribution 4.0 International Public License ("Public License"). To the
+ extent this Public License may be interpreted as a contract, You are
+ granted the Licensed Rights in consideration of Your acceptance of
+ these terms and conditions, and the Licensor grants You such rights in
+ consideration of benefits the Licensor receives from making the
+ Licensed Material available under these terms and conditions.
+
+
+ Section 1 -- Definitions.
+
+ a. Adapted Material means material subject to Copyright and Similar
+ Rights that is derived from or based upon the Licensed Material
+ and in which the Licensed Material is translated, altered,
+ arranged, transformed, or otherwise modified in a manner requiring
+ permission under the Copyright and Similar Rights held by the
+ Licensor. For purposes of this Public License, where the Licensed
+ Material is a musical work, performance, or sound recording,
+ Adapted Material is always produced where the Licensed Material is
+ synched in timed relation with a moving image.
+
+ b. Adapter's License means the license You apply to Your Copyright
+ and Similar Rights in Your contributions to Adapted Material in
+ accordance with the terms and conditions of this Public License.
+
+ c. Copyright and Similar Rights means copyright and/or similar rights
+ closely related to copyright including, without limitation,
+ performance, broadcast, sound recording, and Sui Generis Database
+ Rights, without regard to how the rights are labeled or
+ categorized. For purposes of this Public License, the rights
+ specified in Section 2(b)(1)-(2) are not Copyright and Similar
+ Rights.
+
+ d. Effective Technological Measures means those measures that, in the
+ absence of proper authority, may not be circumvented under laws
+ fulfilling obligations under Article 11 of the WIPO Copyright
+ Treaty adopted on December 20, 1996, and/or similar international
+ agreements.
+
+ e. Exceptions and Limitations means fair use, fair dealing, and/or
+ any other exception or limitation to Copyright and Similar Rights
+ that applies to Your use of the Licensed Material.
+
+ f. Licensed Material means the artistic or literary work, database,
+ or other material to which the Licensor applied this Public
+ License.
+
+ g. Licensed Rights means the rights granted to You subject to the
+ terms and conditions of this Public License, which are limited to
+ all Copyright and Similar Rights that apply to Your use of the
+ Licensed Material and that the Licensor has authority to license.
+
+ h. Licensor means the individual(s) or entity(ies) granting rights
+ under this Public License.
+
+ i. Share means to provide material to the public by any means or
+ process that requires permission under the Licensed Rights, such
+ as reproduction, public display, public performance, distribution,
+ dissemination, communication, or importation, and to make material
+ available to the public including in ways that members of the
+ public may access the material from a place and at a time
+ individually chosen by them.
+
+ j. Sui Generis Database Rights means rights other than copyright
+ resulting from Directive 96/9/EC of the European Parliament and of
+ the Council of 11 March 1996 on the legal protection of databases,
+ as amended and/or succeeded, as well as other essentially
+ equivalent rights anywhere in the world.
+
+ k. You means the individual or entity exercising the Licensed Rights
+ under this Public License. Your has a corresponding meaning.
+
+
+ Section 2 -- Scope.
+
+ a. License grant.
+
+ 1. Subject to the terms and conditions of this Public License,
+ the Licensor hereby grants You a worldwide, royalty-free,
+ non-sublicensable, non-exclusive, irrevocable license to
+ exercise the Licensed Rights in the Licensed Material to:
+
+ a. reproduce and Share the Licensed Material, in whole or
+ in part; and
+
+ b. produce, reproduce, and Share Adapted Material.
+
+ 2. Exceptions and Limitations. For the avoidance of doubt, where
+ Exceptions and Limitations apply to Your use, this Public
+ License does not apply, and You do not need to comply with
+ its terms and conditions.
+
+ 3. Term. The term of this Public License is specified in Section
+ 6(a).
+
+ 4. Media and formats; technical modifications allowed. The
+ Licensor authorizes You to exercise the Licensed Rights in
+ all media and formats whether now known or hereafter created,
+ and to make technical modifications necessary to do so. The
+ Licensor waives and/or agrees not to assert any right or
+ authority to forbid You from making technical modifications
+ necessary to exercise the Licensed Rights, including
+ technical modifications necessary to circumvent Effective
+ Technological Measures. For purposes of this Public License,
+ simply making modifications authorized by this Section 2(a)
+ (4) never produces Adapted Material.
+
+ 5. Downstream recipients.
+
+ a. Offer from the Licensor -- Licensed Material. Every
+ recipient of the Licensed Material automatically
+ receives an offer from the Licensor to exercise the
+ Licensed Rights under the terms and conditions of this
+ Public License.
+
+ b. No downstream restrictions. You may not offer or impose
+ any additional or different terms or conditions on, or
+ apply any Effective Technological Measures to, the
+ Licensed Material if doing so restricts exercise of the
+ Licensed Rights by any recipient of the Licensed
+ Material.
+
+ 6. No endorsement. Nothing in this Public License constitutes or
+ may be construed as permission to assert or imply that You
+ are, or that Your use of the Licensed Material is, connected
+ with, or sponsored, endorsed, or granted official status by,
+ the Licensor or others designated to receive attribution as
+ provided in Section 3(a)(1)(A)(i).
+
+ b. Other rights.
+
+ 1. Moral rights, such as the right of integrity, are not
+ licensed under this Public License, nor are publicity,
+ privacy, and/or other similar personality rights; however, to
+ the extent possible, the Licensor waives and/or agrees not to
+ assert any such rights held by the Licensor to the limited
+ extent necessary to allow You to exercise the Licensed
+ Rights, but not otherwise.
+
+ 2. Patent and trademark rights are not licensed under this
+ Public License.
+
+ 3. To the extent possible, the Licensor waives any right to
+ collect royalties from You for the exercise of the Licensed
+ Rights, whether directly or through a collecting society
+ under any voluntary or waivable statutory or compulsory
+ licensing scheme. In all other cases the Licensor expressly
+ reserves any right to collect such royalties.
+
+
+ Section 3 -- License Conditions.
+
+ Your exercise of the Licensed Rights is expressly made subject to the
+ following conditions.
+
+ a. Attribution.
+
+ 1. If You Share the Licensed Material (including in modified
+ form), You must:
+
+ a. retain the following if it is supplied by the Licensor
+ with the Licensed Material:
+
+ i. identification of the creator(s) of the Licensed
+ Material and any others designated to receive
+ attribution, in any reasonable manner requested by
+ the Licensor (including by pseudonym if
+ designated);
+
+ ii. a copyright notice;
+
+ iii. a notice that refers to this Public License;
+
+ iv. a notice that refers to the disclaimer of
+ warranties;
+
+ v. a URI or hyperlink to the Licensed Material to the
+ extent reasonably practicable;
+
+ b. indicate if You modified the Licensed Material and
+ retain an indication of any previous modifications; and
+
+ c. indicate the Licensed Material is licensed under this
+ Public License, and include the text of, or the URI or
+ hyperlink to, this Public License.
+
+ 2. You may satisfy the conditions in Section 3(a)(1) in any
+ reasonable manner based on the medium, means, and context in
+ which You Share the Licensed Material. For example, it may be
+ reasonable to satisfy the conditions by providing a URI or
+ hyperlink to a resource that includes the required
+ information.
+
+ 3. If requested by the Licensor, You must remove any of the
+ information required by Section 3(a)(1)(A) to the extent
+ reasonably practicable.
+
+ 4. If You Share Adapted Material You produce, the Adapter's
+ License You apply must not prevent recipients of the Adapted
+ Material from complying with this Public License.
+
+
+ Section 4 -- Sui Generis Database Rights.
+
+ Where the Licensed Rights include Sui Generis Database Rights that
+ apply to Your use of the Licensed Material:
+
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right
+ to extract, reuse, reproduce, and Share all or a substantial
+ portion of the contents of the database;
+
+ b. if You include all or a substantial portion of the database
+ contents in a database in which You have Sui Generis Database
+ Rights, then the database in which You have Sui Generis Database
+ Rights (but not its individual contents) is Adapted Material; and
+
+ c. You must comply with the conditions in Section 3(a) if You Share
+ all or a substantial portion of the contents of the database.
+
+ For the avoidance of doubt, this Section 4 supplements and does not
+ replace Your obligations under this Public License where the Licensed
+ Rights include other Copyright and Similar Rights.
+
+
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
+
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
+ EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
+ AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
+ ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
+ IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
+ WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
+ PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
+ ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
+ KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
+ ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
+
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
+ TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
+ NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
+ INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
+ COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
+ USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
+ ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
+ DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
+ IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
+
+ c. The disclaimer of warranties and limitation of liability provided
+ above shall be interpreted in a manner that, to the extent
+ possible, most closely approximates an absolute disclaimer and
+ waiver of all liability.
+
+
+ Section 6 -- Term and Termination.
+
+ a. This Public License applies for the term of the Copyright and
+ Similar Rights licensed here. However, if You fail to comply with
+ this Public License, then Your rights under this Public License
+ terminate automatically.
+
+ b. Where Your right to use the Licensed Material has terminated under
+ Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided
+ it is cured within 30 days of Your discovery of the
+ violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any
+ right the Licensor may have to seek remedies for Your violations
+ of this Public License.
+
+ c. For the avoidance of doubt, the Licensor may also offer the
+ Licensed Material under separate terms or conditions or stop
+ distributing the Licensed Material at any time; however, doing so
+ will not terminate this Public License.
+
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
+ License.
+
+
+ Section 7 -- Other Terms and Conditions.
+
+ a. The Licensor shall not be bound by any additional or different
+ terms or conditions communicated by You unless expressly agreed.
+
+ b. Any arrangements, understandings, or agreements regarding the
+ Licensed Material not stated herein are separate from and
+ independent of the terms and conditions of this Public License.
+
+
+ Section 8 -- Interpretation.
+
+ a. For the avoidance of doubt, this Public License does not, and
+ shall not be interpreted to, reduce, limit, restrict, or impose
+ conditions on any use of the Licensed Material that could lawfully
+ be made without permission under this Public License.
+
+ b. To the extent possible, if any provision of this Public License is
+ deemed unenforceable, it shall be automatically reformed to the
+ minimum extent necessary to make it enforceable. If the provision
+ cannot be reformed, it shall be severed from this Public License
+ without affecting the enforceability of the remaining terms and
+ conditions.
+
+ c. No term or condition of this Public License will be waived and no
+ failure to comply consented to unless expressly agreed to by the
+ Licensor.
+
+ d. Nothing in this Public License constitutes or may be interpreted
+ as a limitation upon, or waiver of, any privileges and immunities
+ that apply to the Licensor or You, including from the legal
+ processes of any jurisdiction or authority.
+
+
+ =======================================================================
+
+ Creative Commons is not a party to its public
+ licenses. Notwithstanding, Creative Commons may elect to apply one of
+ its public licenses to material it publishes and in those instances
+ will be considered the "Licensor." The text of the Creative Commons
+ public licenses is dedicated to the public domain under the CC0 Public
+ Domain Dedication. Except for the limited purpose of indicating that
+ material is shared under a Creative Commons public license or as
+ otherwise permitted by the Creative Commons policies published at
+ creativecommons.org/policies, Creative Commons does not authorize the
+ use of the trademark "Creative Commons" or any other trademark or logo
+ of Creative Commons without its prior written consent including,
+ without limitation, in connection with any unauthorized modifications
+ to any of its public licenses or any other arrangements,
+ understandings, or agreements concerning use of licensed material. For
+ the avoidance of doubt, this paragraph does not form part of the
+ public licenses.
+
+ Creative Commons may be contacted at creativecommons.org.
README.md CHANGED
@@ -18,6 +18,10 @@ tags:
  
  # Cannabis Tests, Curated by Cannlytics
  
+ <div style="margin-top:1rem; margin-bottom: 1rem;">
+ <img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Fdatasets%2Fcannabis_tests%2Fcannabis_tests_curated_by_cannlytics.png?alt=media&token=22e4d1da-6b30-4c3f-9ff7-1954ac2739b2">
+ </div>
+
  ## Table of Contents
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
@@ -29,6 +33,7 @@ tags:
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
+ - [Data Collection and Normalization](#data-collection-and-normalization)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
@@ -48,18 +53,19 @@ tags:
  
  ### Dataset Summary
  
- This dataset is a collection of public cannabis lab test results parsed by `CoADoc`, a certificate of analysis (COA) parsing tool.
+ This dataset is a collection of public cannabis lab test results parsed by [`CoADoc`](https://github.com/cannlytics/cannlytics/tree/main/cannlytics/data/coas), a certificate of analysis (COA) parsing tool.
  
  ## Dataset Structure
  
  The dataset is partitioned into the various sources of lab results.
  
- | Source | Observations |
- |--------|--------------|
- | Raw Gardens | 2,667 |
- | MCR Labs | Coming soon! |
- | PSI Labs | Coming soon! |
- | SC Labs | Coming soon! |
+ | Subset | Source | Observations |
+ |--------|--------|--------------|
+ | `rawgarden` | Raw Gardens | 2,667 |
+ | `mcrlabs` | MCR Labs | Coming soon! |
+ | `psilabs` | PSI Labs | Coming soon! |
+ | `sclabs` | SC Labs | Coming soon! |
+ | `washington` | Washington State | Coming soon! |
  
  ### Data Instances
  
@@ -123,8 +129,10 @@ Below is a non-exhaustive list of fields, used to standardize the various data t
  | `total_thc` | 14.00 | The analytical total of THC and THCA. |
  | `total_cbd` | 0.20 | The analytical total of CBD and CBDA. |
  | `total_terpenes` | 0.42 | The sum of all terpenes measured. |
- | `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `date_tested`. |
- | `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. |
+ | `results_hash` | "{sha256-hash}" | An HMAC of the sample's `results` JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
+ | `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `results`. |
+ | `sample_hash` | "{sha256-hash}" | An HMAC of the entire sample JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
+ <!-- | `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. | -->
  
  Each result can contain the following fields.
  
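For readers unfamiliar with HMACs, the two hash fields added above can be thought of as keyed digests of the JSON payloads. A minimal sketch of how such a field could be computed, assuming SHA-256 and sorted-key JSON serialization (the repository's exact serialization and key handling may differ):

```py
# Illustrative sketch of an HMAC field such as `sample_hash`.
# The serialization order and key handling are assumptions.
import hashlib
import hmac
import json

def create_hash(data: dict, key: str = 'cannlytics.eth') -> str:
    """Create an HMAC-SHA256 hex digest of a JSON-serializable payload."""
    message = json.dumps(data, sort_keys=True).encode()
    return hmac.new(key.encode(), message, hashlib.sha256).hexdigest()

sample = {'producer': 'Raw Garden', 'product_name': 'Pink Lemonade', 'results': []}
print(create_hash(sample))  # A 64-character hex digest.
```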
@@ -148,15 +156,17 @@ The data is split into `details`, `results`, and `values` data. Configurations f
  ```py
  from cannlytics.data.coas import CoADoc
  from datasets import load_dataset
+ import pandas as pd
  
  # Download Raw Garden lab result details.
- dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden')
+ repo = 'cannlytics/cannabis_tests'
+ dataset = load_dataset(repo, 'rawgarden')
  details = dataset['details']
  
  # Save the data locally with "Details", "Results", and "Values" worksheets.
  outfile = 'details.xlsx'
  parser = CoADoc()
- parser.save(details, outfile)
+ parser.save(details.to_pandas(), outfile)
  
  # Read the values.
  values = pd.read_excel(outfile, sheet_name='Values')
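Once the remaining subsets are published, the same loading pattern should extend to them. A tentative sketch, assuming the configuration names from the table above become available:

```py
# Tentative sketch: load each subset by its configuration name.
# Assumes all subsets listed in the table above have been published.
from datasets import load_dataset

subsets = ['rawgarden', 'mcrlabs', 'psilabs', 'sclabs', 'washington']
for subset in subsets:
    dataset = load_dataset('cannlytics/cannabis_tests', subset)
    print(subset, list(dataset.keys()))
```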
@@ -181,15 +191,44 @@ Certificates of analysis (CoAs) are abundant for cannabis cultivators, processor
  | PSI Labs Test Results | <https://results.psilabs.org/test-results/> |
  | Raw Garden Test Results | <https://rawgarden.farm/lab-results/> |
  | SC Labs Test Results | <https://client.sclabs.com/> |
+ | Washington State Lab Test Results | <https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd> |
+
+ #### Data Collection and Normalization
+
+ You can recreate the dataset using the open-source algorithms in the repository. First, clone the repository:
+
+ ```
+ git clone https://huggingface.co/datasets/cannlytics/cannabis_tests
+ ```
+
+ You can then install the algorithms' Python (3.9+) requirements:
+
+ ```
+ cd cannabis_tests
+ pip install -r requirements.txt
+ ```
+
+ Then you can run all of the data-collection algorithms:
+
+ ```
+ python algorithms/main.py
+ ```
+
+ Or you can run each algorithm individually. For example:
+
+ ```
+ python algorithms/get_results_mcrlabs.py
+ ```
  
- #### Initial Data Collection and Normalization
+ In the `algorithms` directory, you can find the data-collection scripts described in the table below.
  
- | Algorithm | URL |
- |-----------|-----|
- | MCR Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_mcr_labs_data> |
- | PSI Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_psi_labs_data> |
- | SC Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_sc_labs_data> |
- | Raw Garden Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_rawgarden_data> |
+ | Algorithm | Organization | Description |
+ |-----------|---------------|-------------|
+ | `get_results_mcrlabs.py` | MCR Labs | Get lab results published by MCR Labs. |
+ | `get_results_psilabs.py` | PSI Labs | Get historic lab results published by PSI Labs. |
+ | `get_results_rawgarden.py` | Raw Garden | Get lab results Raw Garden publishes for their products. |
+ | `get_results_sclabs.py` | SC Labs | Get lab results published by SC Labs. |
+ | `get_results_washington.py` | Washington State | Get historic lab results obtained through a FOIA request in Washington State. |
  
  ### Personal and Sensitive Information
  
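A runner such as `algorithms/main.py` could simply execute each collection script in turn; a hypothetical sketch (the actual contents of `main.py` may differ):

```py
# Hypothetical sketch of a runner like `algorithms/main.py`:
# execute each data-collection script in sequence.
import subprocess

SCRIPTS = [
    'get_results_mcrlabs.py',
    'get_results_psilabs.py',
    'get_results_rawgarden.py',
    'get_results_sclabs.py',
    'get_results_washington.py',
]

for script in SCRIPTS:
    print(f'Running {script}...')
    subprocess.run(['python', f'algorithms/{script}'], check=True)
```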
@@ -265,4 +304,4 @@ Please cite the following if you use the code examples in your research:
  
  ### Contributions
  
- Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
+ Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
algorithms/algorithm_constants.py ADDED
@@ -0,0 +1,903 @@
+ """
+ Cannabis Tests | Algorithm Constants
+ Copyright (c) 2022 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: 1/18/2022
+ Updated: 9/16/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+ """
+
+ SC_LABS_PRODUCER_IDS = [
+     '6',
+     '23',
+     '214',
+     '257',
+     '325',
+     '365',
+     '546',
+     '936',
+     '971',
+     '1064',
+     '1212',
+     '1303',
+     '1360',
+     '1503',
+     '1523',
+     '1739',
+     '1811',
+     '1822',
+     '1995',
+     '2243',
+     '2411',
+     '2619',
+     '2728',
+     '2798',
+     '2821',
+     '2850',
+     '2884',
+     '3146',
+     '3153',
+     '3193',
+     '3430',
+     '3448',
+     '3506',
+     '3785',
+     '3798',
+     '3905',
+     '3926',
+     '4069',
+     '4085',
+     '4200',
+     '4252',
+     '4287',
+     '4446',
+     '4512',
+     '4614',
+     '4702',
+     '5029',
+     '5071',
+     '5096',
+     '5139',
+     '5164',
+     '5282',
+     '5505',
+     '5560',
+     '5615',
+     '5950',
+     '6071',
+     '6109',
+     '6112',
+     '6145',
+     '6272',
+     '6331',
+     '6340',
+     '6358',
+     '6399',
+     '6437',
+     '6756',
+     '6762',
+     '6771',
+     '6791',
+     '6815',
+     '6873',
+     '6882',
+     '6887',
+     '6900',
+     '6913',
+     '6933',
+     '7005',
+     '7034',
+     '7065',
+     '7066',
+     '7102',
+     '7112',
+     '7118',
+     '7131',
+     '7132',
+     '7134',
+     '7139',
+     '7147',
+     '7149',
+     '7159',
+     '7169',
+     '7172',
+     '7176',
+     '7195',
+     '7198',
+     '7218',
+     '7221',
+     '7228',
+     '7233',
+     '7249',
+     '7250',
+     '7253',
+     '7275',
+     '7277',
+     '7284',
+     '7303',
+     '7329',
+     '7337',
+     '7346',
+     '7349',
+     '7382',
+     '7393',
+     '7396',
+     '7406',
+     '7414',
+     '7428',
+     '7454',
+     '7472',
+     '7481',
+     '7486',
+     '7503',
+     '7509',
+     '7510',
+     '7524',
+     '7544',
+     '7569',
+     '7589',
+     '7675',
+     '7885',
+     '7939',
+     '7948',
+     '7955',
+     '7959',
+     '7984',
+     '8013',
+     '8027',
+     '8042',
+     '8079',
+     '8082',
+     '8099',
+     '8101',
+     '8104',
+     '8121',
+     '8143',
+     '8156',
+     '8168',
+     '8193',
+     '8269',
+     '8278',
+     '8285',
+     '8381',
+     '8490',
+     '8497',
+     '8516',
+     '8647',
+     '8661',
+     '8676',
+     '8710',
+     '8719',
+     '8724',
+     '8732',
+     '8776',
+     '8778',
+     '8782',
+     '8791',
+     '8809',
+     '8836',
+     '8838',
+     '8839',
+     '8856',
+     '8917',
+     '8923',
+     '8940',
+     '8954',
+     '8992',
+     '9002',
+     '9013',
+     '9071',
+     '9104',
+     '9115',
+     '9147',
+     '9176',
+     '9206',
+     '9216',
+     '9220',
+     '9281',
+     '9292',
+     '9325',
+     '9346',
+     '9370',
+     '9372',
+     '9393',
+     '9420',
+     '9431',
+     '9438',
+     '9460',
+     '9473',
+     '9476',
+     '9484',
+     '9515',
+     '9516',
+     '9536',
+     '9575',
+     '9583',
+     '9584',
+     '9589',
+     '9609',
+     '9647',
+     '9689',
+     '9709',
+     '9715',
+     '9716',
+     '9725',
+     '9726',
+     '9736',
+     '9742',
+     '9745',
+     '9746',
+     '9753',
+     '9787',
+     '9796',
+     '9802',
+     '9805',
+     '9811',
+     '9848',
+     '9856',
+     '9861',
+     '9863',
+     '9872',
+     '9895',
+     '9907',
+     '9912',
+     '9923',
+     '9940',
+     '9958',
+     '9959',
+     '9965',
+     '9982',
+     '9984',
+     '10006',
+     '10014',
+     '10019',
+     '10020',
+     '10022',
+     '10033',
+     '10074',
+     '10085',
+     '10140',
+     '10145',
+     '10164',
+     '10169',
+     '10180',
+     '10197',
+     '10221',
+     '10252',
+     '10254',
+     '10265',
+     '10276',
+     '10293',
+     '10300',
+     '10307',
+     '10316',
+     '10357',
+     '10366',
+     '10376',
+     '10382',
+     '10388',
+     '10394',
+     '10405',
+     '10415',
+     '10446',
+     '10447',
+     '10474',
+     '10477',
+     '10478',
+     '10481',
+     '10482',
+     '10487',
+     '10505',
+     '10513',
+     '10519',
+     '10543',
+     '10553',
+     '10570',
+     '10573',
+     '10590',
+     '10598',
+     '10639',
+     '10644',
+     '10651',
+     '10679',
+     '10683',
+     '10685',
+     '10727',
+     '10767',
+     '10773',
+     '10783',
+     '10793',
+     '10813',
+     '10815',
+     '10830',
+     '10833',
+     '10886',
+     '10905',
+     '10915',
+     '10922',
+     '10924',
+     '10934',
+     '10998',
+     '11006',
+     '11022',
+     '11031',
+     '11033',
+     '11043',
+     '11059',
+     '11067',
+     '11073',
+     '11078',
+     '11083',
+     '11084',
+     '11086',
+     '11088',
+     '11095',
+     '11098',
+     '11119',
+     '11167',
+     '11185',
+     '11195',
+     '11198',
+     '11226',
+     '11232',
+     '11236',
+     '11237',
+     '11248',
+     '11251',
+     '11256',
+     '11259',
+     '11260',
+     '11269',
+     '11273',
+     '11288',
+     '11297',
+     '11301',
+     '11327',
+     '11344',
+     '11368',
+     '11382',
+     '11387',
+     '11399',
+     '11409',
+     '11413',
+     '11424',
+     '11433',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Lab result fields.
+ #------------------------------------------------------------------------------
+
+ lab_result_fields = {
+     'global_id': 'string',
+     'mme_id': 'string',
+     'intermediate_type': 'category',
+     'status': 'category',
+     'global_for_inventory_id': 'string',
+     'cannabinoid_status': 'category',
+     'cannabinoid_cbc_percent': 'float',
+     'cannabinoid_cbc_mg_g': 'float',
+     'cannabinoid_cbd_percent': 'float',
+     'cannabinoid_cbd_mg_g': 'float',
+     'cannabinoid_cbda_percent': 'float',
+     'cannabinoid_cbda_mg_g': 'float',
+     'cannabinoid_cbdv_percent': 'float',
+     'cannabinoid_cbdv_mg_g': 'float',
+     'cannabinoid_cbg_percent': 'float',
+     'cannabinoid_cbg_mg_g': 'float',
+     'cannabinoid_cbga_percent': 'float',
+     'cannabinoid_cbga_mg_g': 'float',
+     'cannabinoid_cbn_percent': 'float',
+     'cannabinoid_cbn_mg_g': 'float',
+     'cannabinoid_d8_thc_percent': 'float',
+     'cannabinoid_d8_thc_mg_g': 'float',
+     'cannabinoid_d9_thca_percent': 'float',
+     'cannabinoid_d9_thca_mg_g': 'float',
+     'cannabinoid_d9_thc_percent': 'float',
+     'cannabinoid_d9_thc_mg_g': 'float',
+     'cannabinoid_thcv_percent': 'float',
+     'cannabinoid_thcv_mg_g': 'float',
+     'solvent_status': 'category',
+     'solvent_acetone_ppm': 'float',
+     'solvent_benzene_ppm': 'float',
+     'solvent_butanes_ppm': 'float',
+     'solvent_chloroform_ppm': 'float',
+     'solvent_cyclohexane_ppm': 'float',
+     'solvent_dichloromethane_ppm': 'float',
+     'solvent_ethyl_acetate_ppm': 'float',
+     'solvent_heptane_ppm': 'float',
+     'solvent_hexanes_ppm': 'float',
+     'solvent_isopropanol_ppm': 'float',
+     'solvent_methanol_ppm': 'float',
+     'solvent_pentanes_ppm': 'float',
+     'solvent_propane_ppm': 'float',
+     'solvent_toluene_ppm': 'float',
+     'solvent_xylene_ppm': 'float',
+     'foreign_matter': 'bool',
+     'foreign_matter_stems': 'float',
+     'foreign_matter_seeds': 'float',
+     'microbial_status': 'category',
+     'microbial_bile_tolerant_cfu_g': 'float',
+     'microbial_pathogenic_e_coli_cfu_g': 'float',
+     'microbial_salmonella_cfu_g': 'float',
+     'moisture_content_percent': 'float',
+     'moisture_content_water_activity_rate': 'float',
+     'mycotoxin_status': 'category',
+     'mycotoxin_aflatoxins_ppb': 'float',
+     'mycotoxin_ochratoxin_ppb': 'float',
+     'thc_percent': 'float',
+     'notes': 'string',
+     'testing_status': 'category',
+     'type': 'category',
+     'inventory_id': 'string',
+     'batch_id': 'string',
+     'parent_lab_result_id': 'string',
+     'og_parent_lab_result_id': 'string',
+     'copied_from_lab_id': 'string',
+     'external_id': 'string',
+     'lab_user_id': 'string',
+     'user_id': 'string',
+     'cannabinoid_editor': 'string',
+     'microbial_editor': 'string',
+     'mycotoxin_editor': 'string',
+     'solvent_editor': 'string',
+ }
+
+ lab_result_date_fields = [
+     'created_at',
+     'deleted_at',
+     'updated_at',
+     'received_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Licensees fields.
+ #------------------------------------------------------------------------------
+
+ licensee_fields = {
+     'global_id': 'string',
+     'name': 'string',
+     'type': 'string',
+     'code': 'string',
+     'address1': 'string',
+     'address2': 'string',
+     'city': 'string',
+     'state_code': 'string',
+     'postal_code': 'string',
+     'country_code': 'string',
+     'phone': 'string',
+     'external_id': 'string',
+     'certificate_number': 'string',
+     'is_live': 'bool',
+     'suspended': 'bool',
+ }
+
+ licensee_date_fields = [
+     'created_at', # No records if issued before 2018-02-21.
+     'updated_at',
+     'deleted_at',
+     'expired_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Inventories fields.
+ #------------------------------------------------------------------------------
+
+ inventory_fields = {
+     'global_id': 'string',
+     'strain_id': 'string',
+     'inventory_type_id': 'string',
+     'qty': 'float',
+     'uom': 'string',
+     'mme_id': 'string',
+     'user_id': 'string',
+     'external_id': 'string',
+     'area_id': 'string',
+     'batch_id': 'string',
+     'lab_result_id': 'string',
+     'lab_retest_id': 'string',
+     'is_initial_inventory': 'bool',
+     'created_by_mme_id': 'string',
+     'additives': 'string',
+     'serving_num': 'float',
+     'sent_for_testing': 'bool',
+     'medically_compliant': 'string',
+     'legacy_id': 'string',
+     'lab_results_attested': 'int',
+     'global_original_id': 'string',
+ }
+
+ inventory_date_fields = [
+     'created_at', # No records if issued before 2018-02-21.
+     'updated_at',
+     'deleted_at',
+     'inventory_created_at',
+     'inventory_packaged_at',
+     'lab_results_date',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Inventory type fields.
+ #------------------------------------------------------------------------------
+
+ inventory_type_fields = {
+     'global_id': 'string',
+     'mme_id': 'string',
+     'user_id': 'string',
+     'external_id': 'string',
+     'uom': 'string',
+     'name': 'string',
+     'intermediate_type': 'string',
+ }
+
+ inventory_type_date_fields = [
+     'created_at',
+     'updated_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Strain fields.
+ #------------------------------------------------------------------------------
+
+ strain_fields = {
+     'mme_id': 'string',
+     'user_id': 'string',
+     'global_id': 'string',
+     'external_id': 'string',
+     'name': 'string',
+ }
+ strain_date_fields = [
+     'created_at',
+     'updated_at',
+     'deleted_at',
+ ]
+
+
+ #------------------------------------------------------------------------------
+ # Sales fields.
+ # TODO: Parse Sales_0, Sales_1, Sales_2
+ #------------------------------------------------------------------------------
+
+ sales_fields = {
+     'global_id': 'string',
+     'external_id': 'string',
+     'type': 'string', # wholesale or retail_recreational
+     'price_total': 'float',
+     'status': 'string',
+     'mme_id': 'string',
+     'user_id': 'string',
+     'area_id': 'string',
+     'sold_by_user_id': 'string',
+ }
+ sales_date_fields = [
+     'created_at',
+     'updated_at',
+     'sold_at',
+     'deleted_at',
+ ]
+
+
+ #------------------------------------------------------------------------------
+ # Sales Items fields.
+ # TODO: Parse SalesItems_0, SalesItems_1, SalesItems_2, SalesItems_3
+ #------------------------------------------------------------------------------
+
+ sales_items_fields = {
+     'global_id': 'string',
+     'mme_id': 'string',
+     'user_id': 'string',
+     'sale_id': 'string',
+     'batch_id': 'string',
+     'inventory_id': 'string',
+     'external_id': 'string',
+     'qty': 'float',
+     'uom': 'string',
+     'unit_price': 'float',
+     'price_total': 'float',
+     'name': 'string',
+ }
+ sales_items_date_fields = [
+     'created_at',
+     'updated_at',
+     'sold_at',
+     'use_by_date',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Batches fields.
+ # TODO: Parse Batches_0
+ #------------------------------------------------------------------------------
+
+ batches_fields = {
+     'external_id': 'string',
+     'num_plants': 'float',
+     'status': 'string',
+     'qty_harvest': 'float',
+     'uom': 'string',
+     'is_parent_batch': 'int',
+     'is_child_batch': 'int',
+     'type': 'string',
+     'harvest_stage': 'string',
+     'qty_accumulated_waste': 'float',
+     'qty_packaged_flower': 'float',
+     'qty_packaged_by_product': 'float',
+     'origin': 'string',
+     'source': 'string',
+     'qty_cure': 'float',
+     'plant_stage': 'string',
+     'flower_dry_weight': 'float',
+     'waste': 'float',
+     'other_waste': 'float',
+     'flower_waste': 'float',
+     'other_dry_weight': 'float',
+     'flower_wet_weight': 'float',
+     'other_wet_weight': 'float',
+     'global_id': 'string',
+     'global_area_id': 'string',
+     'area_name': 'string',
+     'global_mme_id': 'string',
+     'mme_name': 'string',
+     'mme_code': 'string',
+     'global_user_id': 'string',
+     'global_strain_id': 'string',
+     'strain_name': 'string',
+     'global_mother_plant_id': 'string',
+     'global_flower_area_id': 'string',
+     'global_other_area_id': 'string',
+ }
+ batches_date_fields = [
+     'created_at',
+     'updated_at',
+     'planted_at',
+     'harvested_at',
+     'batch_created_at',
+     'deleted_at',
+     'est_harvest_at',
+     'packaged_completed_at',
+     'harvested_end_at',
+ ]
+
+
+ #------------------------------------------------------------------------------
+ # Taxes fields.
+ # TODO: Parse Taxes_0
+ #------------------------------------------------------------------------------
+
+ taxes_fields = {
+
+ }
+ taxes_date_fields = [
+
+ ]
+
+ #------------------------------------------------------------------------------
+ # Areas fields.
+ #------------------------------------------------------------------------------
+
+ areas_fields = {
+     'external_id': 'string',
+     'name': 'string',
+     'type': 'string',
+     'is_quarantine_area': 'bool',
+     'global_id': 'string',
+ }
+ areas_date_fields = [
+     'created_at',
+     'updated_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Inventory Transfer Items fields.
+ # TODO: Parse InventoryTransferItems_0
+ #------------------------------------------------------------------------------
+
+ inventory_transfer_items_fields = {
+     'external_id': 'string',
+     'is_sample': 'int',
+     'sample_type': 'string',
+     'product_sample_type': 'string',
+     'description': 'string',
+     'qty': 'float',
+     'price': 'float',
+     'uom': 'string',
+     'received_qty': 'float',
+     'retest': 'int',
+     'global_id': 'string',
+     'is_for_extraction': 'int',
+     'propagation_source': 'string',
+     'inventory_name': 'string',
+     'intermediate_type': 'string',
+     'strain_name': 'string',
+     'global_mme_id': 'string',
+     'global_user_id': 'string',
+     'global_batch_id': 'string',
+     'global_plant_id': 'string',
+     'global_inventory_id': 'string',
+     'global_lab_result_id': 'string',
+     'global_received_area_id': 'string',
+     'global_received_strain_id': 'string',
+     'global_inventory_transfer_id': 'string',
+     'global_received_batch_id': 'string',
+     'global_received_inventory_id': 'string',
+     'global_received_plant_id': 'string',
+     'global_received_mme_id': 'string',
+     'global_received_mme_user_id': 'string',
+     'global_customer_id': 'string',
+     'global_inventory_type_id': 'string',
+     # Optional: Match with inventory type fields
+     # "created_at": "09/11/2018 07:39am",
+     # "updated_at": "09/12/2018 03:55am",
+     # "external_id": "123425",
+     # "name": "Charlotte's Web Pre-Packs - 3.5gm",
+     # "description": "",
+     # "storage_instructions": "",
+     # "ingredients": "",
+     # "type": "end_product",
+     # "allergens": "",
+     # "contains": "",
+     # "used_butane": 0,
+     # "net_weight": "2",
+     # "packed_qty": null,
+     # "cost": "0.00",
+     # "value": "0.00",
+     # "serving_num": 1,
+     # "serving_size": 0,
+     # "uom": "ea",
+     # "total_marijuana_in_grams": "0.000000",
+     # "total_marijuana_in_mcg": null,
+     # "deleted_at": null,
+     # "intermediate_type": "usable_marijuana",
+     # "global_id": "WAG12.TY3DE",
+     # "global_original_id": null,
+     # "weight_per_unit_in_grams": "0.00"
+     # "global_mme_id": "WASTATE1.MM30",
+     # "global_user_id": "WASTATE1.US1I",
+     # "global_strain_id": null
+ }
+ inventory_transfer_items_date_fields = [
+     'created_at',
+     'updated_at',
+     'received_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Inventory Transfers fields.
+ # TODO: Parse InventoryTransfers_0
+ #------------------------------------------------------------------------------
+
+ inventory_transfers_fields = {
+     'number_of_edits': 'int',
+     'external_id': 'string',
+     'void': 'int',
+     'multi_stop': 'int',
+     'route': 'string',
+     'stops': 'string',
+     'vehicle_description': 'string',
+     'vehicle_year': 'string',
+     'vehicle_color': 'string',
+     'vehicle_vin': 'string',
+     'vehicle_license_plate': 'string',
+     'notes': 'string',
+     'transfer_manifest': 'string',
+     'manifest_type': 'string',
+     'status': 'string',
+     'type': 'string',
+     'transfer_type': 'string',
+     'global_id': 'string',
+     'test_for_terpenes': 'int',
+     'transporter_name1': 'string',
+     'transporter_name2': 'string',
+     'global_mme_id': 'string',
+     'global_user_id': 'string',
+     'global_from_mme_id': 'string',
+     'global_to_mme_id': 'string',
+     'global_from_user_id': 'string',
+     'global_to_user_id': 'string',
+     'global_from_customer_id': 'string',
+     'global_to_customer_id': 'string',
+     'global_transporter_user_id': 'string',
+ }
+ inventory_transfers_date_fields = [
+     'created_at',
+     'updated_at',
+     'hold_starts_at',
+     'hold_ends_at',
+     'transferred_at',
+     'est_departed_at',
+     'est_arrival_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Disposals fields.
+ # Optional: Parse Disposals_0
+ #------------------------------------------------------------------------------
+
+ disposals_fields = {
+     'external_id': 'string',
+     'whole_plant': 'string',
+     'reason': 'string',
+     'method': 'string',
+     'phase': 'string',
+     'type': 'string',
+     'qty': 'float',
+     'uom': 'string',
+     'source': 'string',
+     'disposal_cert': 'string',
+     'global_id': 'string',
+     'global_mme_id': 'string',
+     'global_user_id': 'string',
+     'global_batch_id': 'string',
+     'global_area_id': 'string',
+     'global_plant_id': 'string',
+     'global_inventory_id': 'string',
+ }
+ disposals_date_fields = [
+     'created_at',
+     'updated_at',
+     'hold_starts_at',
+     'hold_ends_at',
+     'disposal_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Inventory Adjustments fields.
+ # Optional: Parse InventoryAdjustments_0, InventoryAdjustments_1, InventoryAdjustments_2
+ #------------------------------------------------------------------------------
+
+ inventory_adjustments_fields = {
+     'external_id': 'string',
+     'qty': 'float',
+     'uom': 'string',
+     'reason': 'string',
+     'memo': 'string',
+     'global_id': 'string',
+     'global_mme_id': 'string',
+     'global_user_id': 'string',
+     'global_inventory_id': 'string',
+     'global_adjusted_by_user_id': 'string',
+ }
+ inventory_adjustments_date_fields = [
+     'created_at',
+     'updated_at',
+     'adjusted_at',
+     'deleted_at',
+ ]
+
+ #------------------------------------------------------------------------------
+ # Plants fields.
+ #------------------------------------------------------------------------------
+
+ plants_fields = {
+     'global_id': 'string',
+     'mme_id': 'string',
+     'user_id': 'string',
+     'external_id': 'string',
+     'inventory_id': 'string',
+     'batch_id': 'string',
+     'area_id': 'string',
+     'mother_plant_id': 'string',
+     'is_initial_inventory': 'string',
+     'origin': 'string',
+     'stage': 'string',
+     'strain_id': 'string',
+     'is_mother': 'string',
+     'last_moved_at': 'string',
+     'plant_harvested_end_at': 'string',
+     'legacy_id': 'string',
+ }
+ plants_date_fields = [
+     'created_at',
+     'deleted_at',
+     'updated_at',
+     'plant_created_at',
+     'plant_harvested_at',
+     'plant_harvested_end_at'
+ ]
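The field maps above pair naturally with pandas readers: the `*_fields` dicts supply `dtype` and the `*_date_fields` lists supply `parse_dates`. A minimal sketch, assuming a raw lab-results export named `LabResults_0.csv` (the filename is illustrative, not a repository path):

```py
# Minimal sketch: read a raw export using the field maps above.
# `LabResults_0.csv` is an illustrative filename, not a repository path,
# and assumes the file contains all of the listed columns.
import pandas as pd

data = pd.read_csv(
    'LabResults_0.csv',
    dtype=lab_result_fields,
    parse_dates=lab_result_date_fields,
    usecols=list(lab_result_fields.keys()) + lab_result_date_fields,
)
```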
algorithms/algorithm_utils.py ADDED
@@ -0,0 +1,240 @@
1
+ """
2
+ Cannabis Tests | Utility Functions
3
+ Copyright (c) 2021-2022 Cannlytics
4
+
5
+ Authors:
6
+ Keegan Skeate <https://github.com/keeganskeate>
7
+ Candace O'Sullivan-Sutherland <https://github.com/candy-o>
8
+ Created: 10/27/2021
9
+ Updated: 9/16/2022
10
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
11
+ """
12
+ # Standard imports.
13
+ from datetime import datetime
14
+ import re
15
+ from typing import Any, List, Optional, Tuple
16
+
17
+ # External imports.
18
+ import pandas as pd
19
+ from pandas import DataFrame, Series, to_datetime
20
+ from pandas.tseries.offsets import MonthEnd
21
+
22
+
23
+ def end_of_month(value: datetime) -> str:
24
+ """Format a datetime as an ISO formatted date at the end of the month.
25
+ Args:
26
+ value (datetime): A datetime value to transform into an ISO date.
27
+ Returns:
28
+ (str): An ISO formatted date.
29
+ """
30
+ month = value.month
31
+ if month < 10:
32
+ month = f'0{month}'
33
+ year = value.year
34
+ day = value + MonthEnd(0)
35
+ return f'{year}-{month}-{day.day}'
+
+
+ def end_of_year(value: datetime) -> str:
+     """Format a datetime as an ISO formatted date at the end of the year.
+     Args:
+         value (datetime): A datetime value to transform into an ISO date.
+     Returns:
+         (str): An ISO formatted date.
+     """
+     return f'{value.year}-12-31'
+
+
+ def end_of_period_timeseries(data: DataFrame, period: Optional[str] = 'M') -> DataFrame:
+     """Convert a DataFrame from beginning-of-the-period to
+     end-of-the-period timeseries.
+     Args:
+         data (DataFrame): The DataFrame whose timestamps to adjust.
+         period (str): The period of the time series, monthly "M" by default.
+     Returns:
+         (DataFrame): The adjusted DataFrame, with end-of-the-period timestamps.
+     """
+     data.index = data.index.to_period(period).to_timestamp(period)
+     return data
+
+
+ # def forecast_arima(
+ #         model: Any,
+ #         forecast_horizon: Any,
+ #         exogenous: Optional[Any] = None,
+ #     ) -> Tuple[Any]:
+ #     """Format an auto-ARIMA model forecast as a time series.
+ #     Args:
+ #         model (ARIMA): A pmdarima auto-ARIMA model.
+ #         forecast_horizon (DatetimeIndex): A series of dates.
+ #         exogenous (DataFrame): An optional DataFrame of exogenous variables.
+ #     Returns:
+ #         forecast (Series): The forecast series with forecast horizon index.
+ #         conf (Array): A 2xN array of lower and upper confidence bounds.
+ #     """
+ #     periods = len(forecast_horizon)
+ #     forecast, conf = model.predict(
+ #         n_periods=periods,
+ #         return_conf_int=True,
+ #         X=exogenous,
+ #     )
+ #     forecast = Series(forecast)
+ #     forecast.index = forecast_horizon
+ #     return forecast, conf
+
+
+ def format_billions(value: float, pos: Optional[int] = None) -> str: #pylint: disable=unused-argument
+     """The two args are the value and tick position."""
+     return '%1.1fB' % (value * 1e-9)
+
+
+ def format_millions(value: float, pos: Optional[int] = None) -> str: #pylint: disable=unused-argument
+     """The two args are the value and tick position."""
+     return '%1.1fM' % (value * 1e-6)
+
+
+ def format_thousands(value: float, pos: Optional[int] = None) -> str: #pylint: disable=unused-argument
+     """The two args are the value and tick position."""
+     return '%1.0fK' % (value * 1e-3)
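# Note: The `format_*` helpers match the (value, position) signature that
# matplotlib tick formatters expect. A minimal sketch, assuming matplotlib
# is installed and `ax` is an existing axes object:
# from matplotlib.ticker import FuncFormatter
# ax.yaxis.set_major_formatter(FuncFormatter(format_millions))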
+
+
+ def get_blocks(files, size=65536):
+     """Get a block of a file by the given size."""
+     while True:
+         block = files.read(size)
+         if not block: break
+         yield block
+
+
+ def get_number_of_lines(file_name, encoding='utf-16', errors='ignore'):
+     """
+     Read the number of lines in a large file.
+     Credit: glglgl, SU3 <https://stackoverflow.com/a/9631635/5021266>
+     License: CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0/>
+     """
+     with open(file_name, 'r', encoding=encoding, errors=errors) as f:
+         count = sum(bl.count('\n') for bl in get_blocks(f))
+         print('Number of rows:', count)
+         return count
+
+
+ def reverse_dataframe(data: DataFrame) -> DataFrame:
+     """Reverse the ordering of a DataFrame.
+     Args:
+         data (DataFrame): A DataFrame to re-order.
+     Returns:
+         (DataFrame): The re-ordered DataFrame.
+     """
+     return data[::-1].reset_index(drop=True)
+
+
+ def set_training_period(series: Series, date_start: str, date_end: str) -> Series:
+     """Helper function to restrict a series to the desired
+     training time period.
+     Args:
+         series (Series): The series to clean.
+         date_start (str): An ISO date to mark the beginning of the training period.
+         date_end (str): An ISO date to mark the end of the training period.
+     Returns:
+         (Series): The series restricted to the desired time period.
+     """
+     return series.loc[
+         (series.index >= to_datetime(date_start)) &
+         (series.index < to_datetime(date_end))
+     ]
+
+
+ def sorted_nicely(unsorted_list: List[str]) -> List[str]:
+     """Sort the given iterable in the way that humans expect.
+     Credit: Mark Byers <https://stackoverflow.com/a/2669120/5021266>
+     License: CC BY-SA 2.5 <https://creativecommons.org/licenses/by-sa/2.5/>
+     """
+     convert = lambda text: int(text) if text.isdigit() else text
+     alpha = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
+     return sorted(unsorted_list, key=alpha)
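# e.g. Standard string sorting puts 'sample_10' before 'sample_2';
# `sorted_nicely` keeps the numeric order a human expects:
# >>> sorted_nicely(['sample_10', 'sample_2'])
# ['sample_2', 'sample_10']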
+
+
+ def rmerge(left, right, **kwargs):
+     """Perform a merge using pandas with optional removal of overlapping
+     column names not associated with the join.
+
+     Though I suspect this does not adhere to the spirit of pandas merge
+     command, I find it useful because re-executing IPython notebook cells
+     containing a merge command does not result in the replacement of existing
+     columns if the name of the resulting DataFrame is the same as one of the
+     two merged DataFrames, i.e. data = pd.merge(data, new_dataframe). I prefer
+     this command over pandas df.combine_first() method because it has more
+     flexible join options.
+
+     The column removal is controlled by the 'replace' flag which is
+     'left' (default) or 'right' to remove overlapping columns in either the
+     left or right DataFrame. If 'replace' is set to None, the default
+     pandas behavior will be used. All other parameters are the same
+     as the pandas merge command.
+
+     Author: Michelle Gill
+     Source: https://gist.github.com/mlgill/11334821
+
+     Examples
+     --------
+     >>> left       >>> right
+        a  b   c       a  c   d
+     0  1  4   9    0  1  7  13
+     1  2  5  10    1  2  8  14
+     2  3  6  11    2  3  9  15
+     3  4  7  12
+
+     >>> rmerge(left, right, on='a')
+        a  b  c   d
+     0  1  4  7  13
+     1  2  5  8  14
+     2  3  6  9  15
+
+     >>> rmerge(left, right, on='a', how='left')
+        a  b    c    d
+     0  1  4    7   13
+     1  2  5    8   14
+     2  3  6    9   15
+     3  4  7  NaN  NaN
+
+     >>> rmerge(left, right, on='a', how='left', replace='right')
+        a  b   c    d
+     0  1  4   9   13
+     1  2  5  10   14
+     2  3  6  11   15
+     3  4  7  12  NaN
+
+     >>> rmerge(left, right, on='a', how='left', replace=None)
+        a  b  c_x  c_y    d
+     0  1  4    9    7   13
+     1  2  5   10    8   14
+     2  3  6   11    9   15
+     3  4  7   12  NaN  NaN
+     """
+
+     # Function to flatten lists from http://rosettacode.org/wiki/Flatten_a_list#Python
+     def flatten(lst):
+         return sum(([x] if not isinstance(x, list) else flatten(x) for x in lst), [])
+
+     # Set the default to remove overlapping columns in "left".
+     myargs = {'replace': 'left'}
+     myargs.update(kwargs)
+
+     # Remove the `replace` key from the argument dict to be sent to
+     # the pandas merge command.
+     kwargs = {k: v for k, v in myargs.items() if k != 'replace'}
+
+     if myargs['replace'] is not None:
+
+         # Generate a list of overlapping column names not associated with the join.
+         skipcols = set(flatten([v for k, v in myargs.items() if k in ['on', 'left_on', 'right_on']]))
+         leftcols = set(left.columns)
+         rightcols = set(right.columns)
+         dropcols = list((leftcols & rightcols).difference(skipcols))
+
+         # Remove the overlapping column names from the appropriate DataFrame.
+         if myargs['replace'].lower() == 'left':
+             left = left.copy().drop(dropcols, axis=1)
+         elif myargs['replace'].lower() == 'right':
+             right = right.copy().drop(dropcols, axis=1)
+
+     return pd.merge(left, right, **kwargs)
algorithms/get_results_mcrlabs.py ADDED
@@ -0,0 +1,63 @@
+ """
+ Cannabis Tests | Get MCR Labs Test Result Data
+ Copyright (c) 2022-2023 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: 7/13/2022
+ Updated: 2/6/2023
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+
+ Description:
+
+     Collect all of MCR Labs' publicly published lab results.
+
+ Data Points: See `cannlytics.data.coas.mcrlabs.py`.
+
+ Data Sources:
+
+     - MCR Labs Test Results
+     URL: <https://reports.mcrlabs.com>
+
+ """
+ # Standard imports.
+ from datetime import datetime
+ import os
+
+ # External imports.
+ import pandas as pd
+
+ # Internal imports.
+ from cannlytics.data.coas.mcrlabs import get_mcr_labs_test_results
+ from cannlytics.firebase import initialize_firebase, update_documents
+ from cannlytics.utils.utils import to_excel_with_style
+
+
+ # Specify where your data lives.
+ DATA_DIR = '.datasets/lab_results/mcr_labs'
+
+ # Get all of the results!
+ all_results = get_mcr_labs_test_results(
+     starting_page=1,
+     pause=3,
+ )
+
+ # Save the results to Excel.
+ data = pd.DataFrame(all_results)
+ timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+ if not os.path.exists(DATA_DIR): os.makedirs(DATA_DIR)
+ datafile = f'{DATA_DIR}/mcr-lab-results-{timestamp}.xlsx'
+ to_excel_with_style(data, datafile)
+
+ # Prepare the data to upload to Firestore.
+ refs, updates = [], []
+ for index, obs in data.iterrows():
+     sample_id = obs['sample_id']
+     refs.append(f'public/data/lab_results/{sample_id}')
+     updates.append(obs.to_dict())
+
+ # Initialize Firebase and upload the data to Firestore!
+ database = initialize_firebase()
+ update_documents(refs, updates, database=database)
+ print('Added %i lab results to Firestore!' % len(refs))
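# Note: `update_documents` is called above with every result at once.
# For very large collections, a batched upload may be gentler on the
# database. A minimal sketch reusing the same call (the batch size of
# 420 is an arbitrary choice):
# BATCH_SIZE = 420
# for i in range(0, len(refs), BATCH_SIZE):
#     update_documents(
#         refs[i:i + BATCH_SIZE],
#         updates[i:i + BATCH_SIZE],
#         database=database,
#     )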
algorithms/get_results_psilabs.py ADDED
@@ -0,0 +1,714 @@
+ """
+ Cannabis Tests | Get PSI Labs Test Result Data
+ Copyright (c) 2022 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: July 4th, 2022
+ Updated: 9/16/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+
+ Description:
+
+     1. Archive all of the PSI Labs test results.
+
+     2. Analyze all of the PSI Labs test results, separating
+     training and testing data to use for prediction models.
+
+     3. Create and use re-usable prediction models.
+
+ Data Sources:
+
+     - PSI Labs Test Results
+     URL: <https://results.psilabs.org/test-results/>
+
+ Resources:
+
+     - ChromeDriver
+     URL: <https://chromedriver.chromium.org/home>
+
+     - Automation Cartoon
+     URL: https://xkcd.com/1319/
+
+     - Efficiency Cartoon
+     URL: https://xkcd.com/1445/
+
+     - SHA in Python
+     URL: https://www.geeksforgeeks.org/sha-in-python/
+
+     - Split / Explode a column of dictionaries into separate columns with pandas
+     URL: https://stackoverflow.com/questions/38231591/split-explode-a-column-of-dictionaries-into-separate-columns-with-pandas
+
+     - Tidyverse: Wide and Long Data Tables
+     URL: https://rstudio-education.github.io/tidyverse-cookbook/tidy.html
+
+     - Web Scraping using Selenium and Python
+     URL: <https://www.scrapingbee.com/blog/selenium-python/>
+
+ Setup:
+
+     1. Create a data folder `../../.datasets/lab_results/psi_labs/raw_data`.
+
+     2. Download ChromeDriver and put it in your `C:\Python39\Scripts` folder
+     or pass the `executable_path` to the `Service`.
+
+     3. Specify the `PAGES` that you want to collect.
+
+ """
+ # Standard imports.
+ from ast import literal_eval
+ from datetime import datetime
+ from hashlib import sha256
+ import hmac
+ from time import sleep
+
+ # External imports.
+ from cannlytics.utils.utils import snake_case
+ import pandas as pd
+
+ # Selenium imports.
+ from selenium import webdriver
+ from selenium.webdriver.chrome.options import Options
+ from selenium.webdriver.common.by import By
+ from selenium.webdriver.chrome.service import Service
+ from selenium.common.exceptions import ElementNotInteractableException, TimeoutException
+ from selenium.webdriver.support import expected_conditions as EC
+ from selenium.webdriver.support.ui import WebDriverWait
+
+
+ # Setup.
+ DATA_DIR = '../../.datasets/lab_results/raw_data/psi_labs'
+ TRAINING_DATA = '../../../.datasets/lab_results/training_data'
+
+ # API Constants
+ BASE = 'https://results.psilabs.org/test-results/?page={}'
+ PAGES = range(1, 10)  # 4921 total!
+
+ # Desired order for output columns.
+ COLUMNS = [
+     'sample_id',
+     'date_tested',
+     'analyses',
+     'producer',
+     'product_name',
+     'product_type',
+     'results',
+     'coa_urls',
+     'images',
+     'lab_results_url',
+     'date_received',
+     'method',
+     'qr_code',
+     'sample_weight',
+ ]
+
+
+ def create_sample_id(private_key, public_key, salt='') -> str:
+     """Create a hash to be used as a sample ID.
+     The standard is to use:
+         1. `private_key = producer`
+         2. `public_key = product_name`
+         3. `salt = date_tested`
+     Args:
+         private_key (str): A string to be used as the private key.
+         public_key (str): A string to be used as the public key.
+         salt (str): A string to be used as the salt, '' by default (optional).
+     Returns:
+         (str): A sample ID hash.
+     """
+     secret = bytes(private_key, 'UTF-8')
+     message = snake_case(public_key) + snake_case(salt)
+     sample_id = hmac.new(secret, message.encode(), sha256).hexdigest()
+     return sample_id
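# e.g. A minimal usage sketch following the standard above, with
# hypothetical values for the producer, product name, and date tested:
# sample_id = create_sample_id(
#     private_key='Awesome Farms',   # producer
#     public_key='Blue Dream',       # product_name
#     salt='2022-07-04',             # date_tested
# )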
+
+
+ #-----------------------------------------------------------------------
+ # Getting ALL the data.
+ #-----------------------------------------------------------------------
+
+ def get_psi_labs_test_results(driver, max_delay=5, reverse=True) -> list:
+     """Get all test results for PSI Labs.
+     Args:
+         driver (WebDriver): A Selenium Chrome WebDriver.
+         max_delay (float): The maximum number of seconds to wait for rendering (optional).
+         reverse (bool): Whether to collect in reverse order, True by default (optional).
+     Returns:
+         (list): A list of dictionaries of sample data.
+     """
+
+     # Get all the samples on the page.
+     samples = []
+     try:
+         detect = EC.presence_of_element_located((By.TAG_NAME, 'sample-card'))
+         WebDriverWait(driver, max_delay).until(detect)
+     except TimeoutException:
+         print('Failed to load page within %i seconds.' % max_delay)
+         return samples
+     cards = driver.find_elements(by=By.TAG_NAME, value='sample-card')
+     if reverse:
+         cards.reverse()
+     for card in cards:
+
+         # Begin getting sample details from the card.
+         details = card.find_element(by=By.TAG_NAME, value='md-card-title')
+
+         # Get images.
+         image_elements = details.find_elements(by=By.TAG_NAME, value='img')
+         images = []
+         for image in image_elements:
+             src = image.get_attribute('src')
+             filename = src.split('/')[-1]
+             images.append({'url': src, 'filename': filename})
+
+         # Get the product name.
+         product_name = details.find_element(by=By.CLASS_NAME, value='md-title').text
+
+         # Get the producer, date tested, and product type.
+         headers = details.find_elements(by=By.CLASS_NAME, value='md-subhead')
+         producer = headers[0].text
+         try:
+             mm, dd, yy = tuple(headers[1].text.split(': ')[-1].split('/'))
+             date_tested = f'20{yy}-{mm}-{dd}'
+         except ValueError:
+             date_tested = headers[1].text.split(': ')[-1]
+         product_type = headers[2].text.split(' ')[-1]
+
+         # Create a sample ID.
+         private_key = bytes(date_tested, 'UTF-8')
+         public_key = snake_case(product_name)
+         salt = snake_case(producer)
+         sample_id = hmac.new(private_key, (public_key + salt).encode(), sha256).hexdigest()
+
+         # Get the analyses.
+         analyses = []
+         container = details.find_element(by=By.CLASS_NAME, value='layout-row')
+         chips = container.find_elements(by=By.TAG_NAME, value='md-chip')
+         for chip in chips:
+             hidden = chip.get_attribute('aria-hidden')
+             if hidden == 'false':
+                 analyses.append(chip.text)
+
+         # Get the lab results URL.
+         links = card.find_elements(by=By.TAG_NAME, value='a')
+         lab_results_url = links[0].get_attribute('href')
+
+         # Aggregate sample data.
+         sample = {
+             'analyses': analyses,
+             'date_tested': date_tested,
+             'images': images,
+             'lab_results_url': lab_results_url,
+             'producer': producer,
+             'product_name': product_name,
+             'product_type': product_type,
+             'sample_id': sample_id,
+         }
+         samples.append(sample)
+
+     return samples
+
+
+ def get_psi_labs_test_result_details(driver, max_delay=5) -> dict:
+     """Get the test result details for a specific PSI Labs result.
+     Args:
+         driver (WebDriver): A Selenium Chrome WebDriver.
+         max_delay (float): The maximum number of seconds to wait for rendering.
+     Returns:
+         (dict): A dictionary of sample details.
+     """
+
+     # Deemed optional:
+     # Wait for elements to load, after a maximum delay of X seconds.
+     qr_code, coa_urls = None, []
+     # try:
+
+     #     # Wait for the QR code to load.
+     #     detect = EC.presence_of_element_located((By.CLASS_NAME, 'qrcode-link'))
+     #     qr_code_link = WebDriverWait(driver, max_delay).until(detect)
+
+     #     # Get the QR code.
+     #     qr_code = qr_code_link.get_attribute('href')
+
+     #     # Get CoA URLs by finding all links with `analytics-event="PDF View"`.
+     #     actions = driver.find_elements(by=By.TAG_NAME, value='a')
+     #     coa_urls = []
+     #     for action in actions:
+     #         event = action.get_attribute('analytics-event')
+     #         if event == 'PDF View':
+     #             href = action.get_attribute('href')
+     #             coa_urls.append({'filename': action.text, 'url': href})
+
+     # except TimeoutException:
+     #     print('QR Code not loaded within %i seconds.' % max_delay)
+
+
+     # Wait for the results to load.
+     try:
+         detect = EC.presence_of_element_located((By.TAG_NAME, 'ng-include'))
+         WebDriverWait(driver, max_delay).until(detect)
+     except TimeoutException:
+         print('Results not loaded within %i seconds.' % max_delay)
+
+     # Get results for each analysis.
+     results = []
+     date_received, sample_weight, method = None, None, None
+     values = ['name', 'value', 'margin_of_error']
+     analysis_cards = driver.find_elements(by=By.TAG_NAME, value='ng-include')
+     for analysis in analysis_cards:
+         try:
+             analysis.click()
+         except ElementNotInteractableException:
+             continue
+         rows = analysis.find_elements(by=By.TAG_NAME, value='tr')
+         if rows:
+             for row in rows:
+                 result = {}
+                 cells = row.find_elements(by=By.TAG_NAME, value='td')
+                 for i, cell in enumerate(cells):
+                     key = values[i]
+                     result[key] = cell.text
+                 if result:
+                     results.append(result)
+
+         # Get the last few sample details: method, sample_weight, and received_at.
+         if analysis == 'potency':
+             extra = analysis.find_element(by=By.TAG_NAME, value='md-card-content')
+             headings = extra.find_elements(by=By.TAG_NAME, value='h3')
+             mm, dd, yy = tuple(headings[0].text.split('/'))
+             date_received = f'20{yy}-{mm}-{dd}'
+             sample_weight = headings[1].text
+             method = headings[-1].text
+
+     # Aggregate sample details.
+     details = {
+         'coa_urls': coa_urls,
+         'date_received': date_received,
+         'method': method,
+         'qr_code': qr_code,
+         'results': results,
+         'sample_weight': sample_weight,
+     }
+     return details
+
+
+ # FIXME: This function doesn't work well.
+ def get_all_psi_labs_test_results(service, pages, pause=0.125, verbose=True):
+     """Get ALL of PSI Labs' test results.
+     Args:
+         service (ChromeDriver): A ChromeDriver service.
+         pages (iterable): A range of pages to get lab results from.
+         pause (float): A pause between requests to respect PSI Labs' server.
+         verbose (bool): Whether or not to print out progress, True by default (optional).
+     Returns:
+         (list): A list of collected lab results.
+     """
+
+     # Create a headless Chrome browser.
+     options = Options()
+     options.headless = True
+     options.add_argument('--window-size=1920,1200')
+     driver = webdriver.Chrome(options=options, service=service)
+
+     # Iterate over all of the pages to get all of the samples.
+     test_results = []
+     for page in pages:
+         if verbose:
+             print('Getting samples on page:', page)
+         driver.get(BASE.format(str(page)))
+         results = get_psi_labs_test_results(driver)
+         if results:
+             test_results += results
+         else:
+             print('Failed to find samples on page:', page)
+         sleep(pause)
+
+     # Get the details for each sample.
+     for i, test_result in enumerate(test_results):
+         if verbose:
+             print('Getting details for:', test_result['product_name'])
+         driver.get(test_result['lab_results_url'])
+         details = get_psi_labs_test_result_details(driver)
+         test_results[i] = {**test_result, **details}
+         sleep(pause)
+
+     # Close the browser and return the results.
+     driver.quit()
+     return test_results
+
+
+ #-----------------------------------------------------------------------
+ # Test: Data aggregation with `get_all_psi_labs_test_results`.
+ #-----------------------------------------------------------------------
+
+ # if __name__ == '__main__':
+
+ #     # Specify the full path to your chromedriver.
+ #     # You can also put your chromedriver in `C:\Python39\Scripts`.
+ #     # DRIVER_PATH = '../assets/tools/chromedriver_win32/chromedriver'
+ #     # full_driver_path = os.path.abspath(DRIVER_PATH)
+ #     start = datetime.now()
+ #     service = Service()
+
+ #     # Create a headless Chrome browser.
+ #     options = Options()
+ #     options.headless = True
+ #     options.add_argument('--window-size=1920,1200')
+ #     driver = webdriver.Chrome(options=options, service=service)
+
+ #     # Iterate over all of the pages to get all of the samples.
+ #     errors = []
+ #     test_results = []
+ #     pages = list(PAGES)
+ #     pages.reverse()
+ #     for page in pages:
+ #         print('Getting samples on page:', page)
+ #         driver.get(BASE.format(str(page)))
+ #         results = get_psi_labs_test_results(driver)
+ #         if results:
+ #             test_results += results
+ #         else:
+ #             print('Failed to find samples on page:', page)
+ #             errors.append(page)
+
+ #     # Get the details for each sample.
+ #     rows = []
+ #     samples = pd.DataFrame(test_results)
+ #     total = len(samples)
+ #     for index, values in samples.iterrows():
+ #         percent = round((index + 1) / total * 100, 2)
+ #         print('Collecting (%.2f%%) (%i/%i):' % (percent, index + 1, total), values['product_name'])
+ #         driver.get(values['lab_results_url'])
+ #         details = get_psi_labs_test_result_details(driver)
+ #         rows.append({**values.to_dict(), **details})
+
+ #     # Save the results.
+ #     data = pd.DataFrame(rows)
+ #     timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+ #     datafile = f'{DATA_DIR}/psi-lab-results-{timestamp}.xlsx'
+ #     data.to_excel(datafile, index=False)
+ #     end = datetime.now()
+ #     print('Runtime took:', end - start)
+
+ #     # Close the browser.
+ #     driver.quit()
+
+
+ #-----------------------------------------------------------------------
+ # TODO: Preprocessing the Data
+ #-----------------------------------------------------------------------
+
+ ANALYSES = {
+     'cannabinoids': ['potency', 'POT'],
+     'terpenes': ['terpene', 'TERP'],
+     'residual_solvents': ['solvent', 'RST'],
+     'pesticides': ['pesticide', 'PEST'],
+     'microbes': ['microbial', 'MICRO'],
+     'heavy_metals': ['metal', 'MET'],
+ }
+ ANALYTES = {
+     # TODO: Define all of the known analytes from the Cannlytics library.
+ }
+ DECODINGS = {
+     '<LOQ': 0,
+     '<LOD': 0,
+     'ND': 0,
+ }
+
418
+
419
+ # Read in the saved results.
420
+ datafile = f'{DATA_DIR}/aggregated-cannabis-test-results.xlsx'
421
+ data = pd.read_excel(datafile, sheet_name='psi_labs_raw_data')
422
+
423
+ # Optional: Drop rows with no analyses at this point.
424
+ drop = ['coa_urls', 'date_received', 'method', 'qr_code', 'sample_weight']
425
+ data.drop(drop, axis=1, inplace=True)
426
+
427
+ # Isolate a training sample.
428
+ sample = data.sample(100, random_state=420)
429
+
430
+
431
+ # Create both wide and long data for ease of use.
432
+ # See: https://rstudio-education.github.io/tidyverse-cookbook/tidy.html
433
+ # Normalize and clean the data. In particular, flatten:
434
+ # βœ“ `analyses`
435
+ # βœ“ `results`
436
+ # - `images`
437
+ wide_data = pd.DataFrame()
438
+ long_data = pd.DataFrame()
439
+ for index, row in sample.iterrows():
440
+ series = row.copy()
441
+ analyses = literal_eval(series['analyses'])
442
+ images = literal_eval(series['images'])
443
+ results = literal_eval(series['results'])
444
+ series.drop(['analyses', 'images', 'results'], inplace=True)
445
+
446
+ # Code analyses.
447
+ if not analyses:
448
+ continue
449
+ for analysis in analyses:
450
+ series[analysis] = 1
451
+
452
+ # Add to wide data.
453
+ wide_data = pd.concat([wide_data, pd.DataFrame([series])])
454
+
455
+ # Iterate over results, cleaning results and adding columns.
456
+ # Future work: Augment results with key, limit, and CAS.
457
+ for result in results:
458
+
459
+ # Clean the values.
460
+ analyte_name = result['name']
461
+ measurements = result['value'].split(' ')
462
+ try:
463
+ measurement = float(measurements[0])
464
+ except:
465
+ measurement = None
466
+ try:
467
+ units = measurements[1]
468
+ except:
469
+ units = None
470
+ key = snake_case(analyte_name)
471
+ try:
472
+ margin_of_error = float(result['margin_of_error'].split(' ')[-1])
473
+ except:
474
+ margin_of_error = None
475
+
476
+ # Format long data.
477
+ entry = series.copy()
478
+ entry['analyte'] = key
479
+ entry['analyte_name'] = analyte_name
480
+ entry['result'] = measurement
481
+ entry['units'] = units
482
+ entry['margin_of_error'] = margin_of_error
483
+
484
+ # Add to long data.
485
+ long_data = pd.concat([long_data, pd.DataFrame([entry])])
486
+
487
+
488
+ # Fill null observations.
489
+ wide_data = wide_data.fillna(0)
490
+
491
+ # Rename columns
492
+ analyses = {
493
+ 'POT': 'cannabinoids',
494
+ 'RST': 'residual_solvents',
495
+ 'TERP': 'terpenes',
496
+ 'PEST': 'pesticides',
497
+ 'MICRO': 'micro',
498
+ 'MET': 'heavy_metals',
499
+ }
500
+ wide_data.rename(columns=analyses, inplace=True)
501
+ long_data.rename(columns=analyses, inplace=True)
502
+
503
+
504
+ #------------------------------------------------------------------------------
505
+ # Processing the data.
506
+ #------------------------------------------------------------------------------
507
+
508
+ # Calculate totals:
509
+ # - `total_cbd`
510
+ # - `total_thc`
511
+ # - `total_terpenes`
512
+ # - `total_cannabinoids`
513
+ # - `total_cbg`
514
+ # - `total_thcv`
515
+ # - `total_cbc`
516
+ # - `total_cbdv`
517
+
518
+
519
+ # Optional: Add `status` variables for pass / fail tests.
520
+
521
+
522
+ # TODO: Augment with:
523
+ # - lab details: lab, lab_url, lab_license_number, etc.
524
+ # - lab_latitude, lab_longitude
525
+
526
+ # Future work: Calculate average results by state, county, and zip code.
527
+
528
+
529
+ #------------------------------------------------------------------------------
530
+ # Exploring the data.
531
+ #------------------------------------------------------------------------------
532
+
533
+ # Count the number of various tests.
534
+ terpenes = wide_data.loc[wide_data['terpenes'] == 1]
535
+
536
+ # Find all of the analytes.
537
+ analytes = list(long_data.analyte.unique())
538
+
539
+ # Find all of the product types.
540
+ product_types = list(long_data.product_type.unique())
541
+
542
+ # Look at cannabinoid distributions by type.
543
+ flower = long_data.loc[long_data['product_type'] == 'Flower']
544
+ flower.loc[flower['analyte'] == '9_thc']['result'].hist(bins=100)
545
+
546
+ concentrates = long_data.loc[long_data['product_type'] == 'Concentrate']
547
+ concentrates.loc[concentrates['analyte'] == '9_thc']['result'].hist(bins=100)
548
+
549
+
550
+ # Look at terpene distributions by type!
551
+ terpene = flower.loc[flower['analyte'] == 'dlimonene']
552
+ terpene['result'].hist(bins=100)
553
+
554
+ terpene = concentrates.loc[concentrates['analyte'] == 'dlimonene']
555
+ terpene['result'].hist(bins=100)
556
+
557
+
558
+ # Find the first occurrences of famous strains.
559
+ gorilla_glue = flower.loc[
560
+ (flower['product_name'].str.contains('gorilla', case=False)) |
561
+ (flower['product_name'].str.contains('glu', case=False))
562
+ ]
563
+
564
+ # Create strain fingerprints: histograms of dominant terpenes.
565
+ compound = gorilla_glue.loc[gorilla_glue['analyte'] == '9_thc']
566
+ compound['result'].hist(bins=100)
567
+
568
+
569
+ #------------------------------------------------------------------------------
570
+ # Exploring the data.
571
+ #------------------------------------------------------------------------------
572
+
573
+ # Future work: Augment results with key, limit, and CAS.
574
+
575
+ # TODO: Save the curated data, both wide and long data.
576
+
577
+
578
+ # TODO: Standardize the `analyte` names! Ideally with a re-usable function.
579
+
580
+
581
+ # TODO: Standardize `analyses`.
582
+
583
+
584
+ # TODO: Standardize the `product_type`.
585
+
586
+
587
+ # TODO: Standardize `strain_name`.
588
+
589
+
590
+ # TODO: Add any new entries to the Cannlypedia:
591
+ # - New `analyses`
592
+ # - New `analytes`
593
+ # - New `strains`
594
+ # - New `product_types`
595
+
596
+
597
+ # Optional: Create data / CoA NFTs for the lab results!!!
598
+
599
+
600
+ #------------------------------------------------------------------------------
601
+ # Exploring the data.
602
+ #------------------------------------------------------------------------------
603
+
604
+ # TODO: Count the number of lab results scraped!
605
+
606
+
607
+ # TODO: Count the number of unique data points scraped!
608
+
609
+
610
+ # TODO: Look at cannabinoid concentrations over time.
611
+
612
+
613
+ # TODO: Look at cannabinoid distributions by type.
614
+
615
+
616
+ # TODO: Look at terpene distributions by type!
617
+
618
+
619
+ #-----------------------------------------------------------------------
620
+ # Modeling the data.
621
+ #-----------------------------------------------------------------------
622
+
623
+ # TODO: Given a lab result, predict if it's in the Xth percentile.
624
+
625
+
626
+ # TODO: Use in ARIMA model to approach the question:
627
+ # Are terpene or cannabinoid concentrations increasing over time by sample type?
628
+ # - total_terpenes
629
+ # - D-limonene
630
+ # - beta-pinene
631
+ # - myrcene
632
+ # - caryophyllene
633
+ # - linalool
634
+ # - cbg
635
+ # - thcv
636
+ # - total_thc
637
+ # - total_cbd
638
+ # - total_cannabinoids
639
+
640
+
641
+ # Calculate THC to CBD ratio.
642
+
643
+
644
+ # Calculate average terpene ratios by strain:
645
+ # - beta-pinene to d-limonene ratio
646
+ # - humulene to caryophyllene
647
+ # - linalool and myrcene? (Research these!)
648
+
649
+
650
+ # Future work: Add description of why the ratio is meaningful.
651
+
652
+
653
+ # Future work: Calculator to determine the number of mg's of each
654
+ # compound are in a given unit of weight.
655
+ # E.g. How much total THC in mg is in an eighth given that it tests X%.
656
+ # mg = percent * 10 * grams
657
+ # mg_per_serving = percent * 10 * grams_per_serving (0.33 suggested?)
658
+
659
+
660
+ # TODO: Find parents and crosses of particular strains.
661
+ # E.g. Find all Jack crosses.
662
+
663
+
664
+ #-----------------------------------------------------------------------
665
+ # Training and testing the model.
666
+ #-----------------------------------------------------------------------
667
+
668
+ # TODO: Separate results after 2020 as test data.
669
+
670
+
671
+ # TODO: Estimate a large number of ARIMA models on the training data,
672
+ # use the models to predict the test data, and measure the accuracies.
673
+
674
+
675
+ # TODO: Pick the model that predicts the test data the best.
676
+
677
+
678
+ #-----------------------------------------------------------------------
679
+ # Evaluating the model.
680
+ #-----------------------------------------------------------------------
681
+
682
+ # TODO: Re-estimate the model with the entire dataset.
683
+
684
+
685
+ # TODO: Predict if cannabinoid and terpene concentrations are trending
686
+ # up or down and to what degree if so.
687
+
688
+
689
+ # TODO: Take away an insight: Is there statistical evidence that
690
+ # cannabis cultivated in Michigan is successfully being bred to, on average,
691
+ # have higher levels of cannabinoids or terpenes? If so, which compounds?
692
+
693
+
694
+ # TODO: Forecast: If the trend continues, what would cannabis look like
695
+ # in 10 years? What average cannabinoid and terpene concentration can
696
+ # we expect to see in Michigan in 2025 and 2030?
697
+
698
+
699
+
700
+ #-----------------------------------------------------------------------
701
+ # Saving the data, statistics, and model.
702
+ #-----------------------------------------------------------------------
703
+ # Note: The data, statistics, and model are only useful if we can get
704
+ # them # in front of people's eyeballs. Therefore, saving the data and
705
+ # making them available to people is arguably the most important step.
706
+ #-----------------------------------------------------------------------
707
+
708
+ # TODO: Upload the data to Firestore.
709
+
710
+
711
+ # TODO: Test getting the data and statistics through the Cannlytics API.
712
+
713
+
714
+ # TODO: Test using the statistical model through the Cannlytics API.
algorithms/{get_all_rawgarden_data.py β†’ get_results_rawgarden.py} RENAMED
@@ -1,13 +1,13 @@
  """
- Get Raw Garden Test Result Data
+ Cannabis Tests | Get Raw Garden Test Result Data
  Copyright (c) 2022 Cannlytics

  Authors:
      Keegan Skeate <https://github.com/keeganskeate>
      Candace O'Sullivan-Sutherland <https://github.com/candy-o>
  Created: 8/23/2022
- Updated: 9/13/2022
- License: <https://github.com/cannlytics/cannlytics/blob/main/LICENSE>
+ Updated: 9/22/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>

  Description:

@@ -56,13 +56,13 @@ from cannlytics.utils.constants import DEFAULT_HEADERS
  # Specify where your data lives.
  BUCKET_NAME = 'cannlytics-company.appspot.com'
  COLLECTION = 'public/data/lab_results'
- STORAGE_REF = 'data/lab_results/raw_garden'
+ STORAGE_REF = 'data/lab_results/rawgarden'

  # Create directories if they don't already exist.
  # TODO: Edit `ENV_FILE` and `DATA_DIR` as needed for your desired setup.
  ENV_FILE = '../.env'
- DATA_DIR = '.././'
- COA_DATA_DIR = f'{DATA_DIR}/raw_garden'
+ DATA_DIR = '../'
+ COA_DATA_DIR = f'{DATA_DIR}/rawgarden'
  COA_PDF_DIR = f'{COA_DATA_DIR}/pdfs'
  TEMP_PATH = f'{COA_DATA_DIR}/tmp'
  if not os.path.exists(DATA_DIR): os.makedirs(DATA_DIR)
@@ -141,6 +141,7 @@ def get_rawgarden_products(
  def download_rawgarden_coas(
      items: pd.DataFrame,
      pause: Optional[float] = 0.24,
+     replace: Optional[bool] = False,
      verbose: Optional[bool] = True,
  ) -> None:
      """Download Raw Garden product COAs to `product_subtype` folders.
@@ -149,6 +150,8 @@ def download_rawgarden_coas(
              and `lab_results_url` to download.
          pause (float): A pause to respect the server serving the PDFs,
              `0.24` seconds by default (optional).
+         replace (bool): Whether or not to replace any existing PDFs,
+             `False` by default (optional).
          verbose (bool): Whether or not to print status, `True` by
              default (optional).
      """
@@ -172,6 +175,8 @@ def download_rawgarden_coas(
          filename = url.split('/')[-1]
          folder = kebab_case(subtype)
          outfile = os.path.join(COA_PDF_DIR, folder, filename)
+         if os.path.isfile(outfile) and not replace:
+             continue
          response = requests.get(url, headers=DEFAULT_HEADERS)
          with open(outfile, 'wb') as pdf:
              pdf.write(response.content)
@@ -263,6 +268,8 @@ def parse_rawgarden_coas(
          if verbose:
              print('Parsed:', filename)

+     # TODO: Save intermittently?
+
      return parsed, unidentified


@@ -311,7 +318,7 @@ def upload_lab_results(
  #
  # 1. Finding products and their COA URLs.
  # 2. Downloading COA PDFs from their URLs.
- # 3. Using CoADoc to parse the COA PDFs (with OCR).
+ # 3. Using CoADoc to parse the COA PDFs (with OCR as needed).
  # 4. Saving the data to datafiles, Firebase Storage, and Firestore.
  #
  #-----------------------------------------------------------------------
@@ -331,7 +338,7 @@ if __name__ == '__main__':
      args = {}

      # Specify collection period.
-     DAYS_AGO = args.get('days_ago', 1)
+     DAYS_AGO = args.get('days_ago', 365)
      GET_ALL = args.get('get_all', True)

      # === Data Collection ===
@@ -357,6 +364,7 @@ if __name__ == '__main__':
      )

      # Merge the `products`'s `product_subtype` with the COA data.
+     # FIXME: Keep the URL (`lab_results_url`)!
      coa_df = rmerge(
          pd.DataFrame(coa_data),
          products,
@@ -398,20 +406,20 @@ if __name__ == '__main__':
      # === Firebase Database and Storage ===

      # Optional: Initialize Firebase.
-     initialize_firebase(ENV_FILE)
+     # initialize_firebase(ENV_FILE)

-     # Optional: Upload the lab results to Firestore.
-     upload_lab_results(
-         coa_df.to_dict(orient='records'),
-         update=True,
-         verbose=True
-     )
+     # # Optional: Upload the lab results to Firestore.
+     # upload_lab_results(
+     #     coa_df.to_dict(orient='records'),
+     #     update=True,
+     #     verbose=True
+     # )

-     # Optional: Upload datafiles to Firebase Storage.
-     storage_datafile = '/'.join([STORAGE_REF, datafile.split('/')[-1]])
-     storage_error_file = '/'.join([STORAGE_REF, error_file.split('/')[-1]])
-     upload_file(storage_datafile, datafile, bucket_name=BUCKET_NAME)
-     upload_file(storage_error_file, error_file, bucket_name=BUCKET_NAME)
+     # # Optional: Upload datafiles to Firebase Storage.
+     # storage_datafile = '/'.join([STORAGE_REF, datafile.split('/')[-1]])
+     # storage_error_file = '/'.join([STORAGE_REF, error_file.split('/')[-1]])
+     # upload_file(storage_datafile, datafile, bucket_name=BUCKET_NAME)
+     # upload_file(storage_error_file, error_file, bucket_name=BUCKET_NAME)

      # == Data Aggregation ===

@@ -422,7 +430,6 @@ if __name__ == '__main__':
      # datafiles = [
      #     f'{COA_DATA_DIR}/d7815fd2a097d06b719aadcc00233026f86076a680db63c532a11b67d7c8bc70.xlsx',
      #     f'{COA_DATA_DIR}/01880e30f092cf5739f9f2b58de705fc4c245d6859c00b50505a3a802ff7c2b2.xlsx',
-     #     f'{COA_DATA_DIR}/154de9b1992a1bfd9a07d2e52c702e8437596923f34bee43f62f3e24f042b81c.xlsx',
      # ]

      # # Create custom column order.
algorithms/get_results_sclabs.py ADDED
@@ -0,0 +1,133 @@
+ """
+ Cannabis Tests | Get SC Labs Test Result Data
+ Copyright (c) 2022-2023 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: 7/8/2022
+ Updated: 2/6/2023
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+
+ Description:
+
+     Collect all of SC Labs' publicly published lab results.
+
+ Algorithm:
+
+     1. Discover all SC Labs public clients by scanning:
+
+         https://client.sclabs.com/client/{client}/
+
+     2. Iterate over pages for each client, collecting samples until
+     the 1st sample and active page are the same:
+
+         https://client.sclabs.com/client/{client}/?page={page}
+
+     3. (a) Get the sample details for each sample found.
+        (b) Save the sample details.
+
+ Data Sources:
+
+     - SC Labs Test Results
+     URL: <https://client.sclabs.com/>
+
+ """
+ # Standard imports.
+ from datetime import datetime
+ import math
+ import os
+ from time import sleep
+
+ # External imports.
+ import pandas as pd
+
+ # Internal imports.
+ from cannlytics.data.coas.sclabs import (
+     get_sc_labs_sample_details,
+     get_sc_labs_test_results,
+ )
+ from cannlytics.firebase import initialize_firebase, update_documents
+
+ # Specify where your data lives.
+ RAW_DATA = '../../../.datasets/lab_results/raw_data/sc_labs'
+
+ # Future work: Figure out a more efficient way to find all producer IDs.
+ PAGES = range(1, 12_000)
+ PRODUCER_IDS = list(PAGES)
+ PRODUCER_IDS.reverse()
+
+ # Alternatively, uncomment to read in the known producer IDs.
+ # from algorithm_constants import SC_LABS_PRODUCER_IDS as PRODUCER_IDS
+
+ # Iterate over potential client pages and client sample pages.
+ start = datetime.now()
+ clients = []
+ errors = []
+ test_results = []
+ for _id in PRODUCER_IDS:
+     results = get_sc_labs_test_results(_id)
+     if results:
+         test_results += results
+         print('Found all samples for producer:', _id)
+         clients.append(_id)
+     sleep(3)
+
+ # Save the results, just in case.
+ data = pd.DataFrame(test_results)
+ timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+ if not os.path.exists(RAW_DATA): os.makedirs(RAW_DATA)
+ datafile = f'{RAW_DATA}/sc-lab-results-{timestamp}.xlsx'
+ data.to_excel(datafile, index=False)
+ end = datetime.now()
+ print('Sample collection took:', end - start)
+
+ # Read in the saved test results (useful for debugging).
+ start = datetime.now()
+ data = pd.read_excel(datafile)
+
+ # Get the sample details for each sample found.
+ errors = []
+ rows = []
+ subset = data.loc[data['results'].isnull()]
+ total = len(subset)
+ for index, values in subset.iterrows():
+     if not math.isnan(values['results']):
+         continue
+     percent = round((index + 1) * 100 / total, 2)
+     sample = values['lab_results_url'].split('/')[-2]
+     details = get_sc_labs_sample_details(sample)
+     rows.append({**values.to_dict(), **details})
+     if details['results']:
+         print('Results found (%.2f%%) (%i/%i):' % (percent, index + 1, total), sample)
+     else:
+         print('No results found (%.2f%%) (%i/%i):' % (percent, index + 1, total), sample)
+     sleep(3)
+
+     # Save every 500 samples just in case.
+     if index % 500 == 0 and index != 0:
+         data = pd.DataFrame(rows)
+         timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+         datafile = f'{RAW_DATA}/sc-lab-results-{timestamp}.xlsx'
+         data.to_excel(datafile, index=False)
+         print('Saved data:', datafile)
+
+ # Save the final results.
+ data = pd.DataFrame(rows)
+ timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+ datafile = f'{RAW_DATA}/sc-lab-results-{timestamp}.xlsx'
+ data.to_excel(datafile, index=False)
+ end = datetime.now()
+ print('Detail collection took:', end - start)
+
+ # Prepare the data to upload to Firestore.
+ refs, updates = [], []
+ for index, obs in data.iterrows():
+     sample_id = obs['sample_id']
+     refs.append(f'public/data/lab_results/{sample_id}')
+     updates.append(obs.to_dict())
+
+ # Initialize Firebase and upload the data to Firestore!
+ database = initialize_firebase()
+ update_documents(refs, updates, database=database)
+ print('Added %i lab results to Firestore!' % len(refs))
algorithms/get_results_sdpharmlabs.py ADDED
@@ -0,0 +1,28 @@
+ """
+ Cannabis Tests | Get SDPharmLabs Test Result Data
+ Copyright (c) 2022 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: 8/23/2022
+ Updated: 9/20/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+
+ Description:
+
+     Curate SDPharmLabs' publicly published lab results by:
+
+     1. Finding products and their COA URLs on SDPharmLabs' website.
+     2. Downloading COA PDFs from their URLs.
+     3. Using CoADoc to parse the COA PDFs (with OCR if needed).
+     4. Archiving the COA data in Firestore.
+
+ Data Source:
+
+     - SDPharmLabs
+     URL: <https://sandiego.pharmlabscannabistesting.com/>
+
+ """
+
+ base = 'https://sandiego.pharmlabscannabistesting.com/results'
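# Note: The script above is a stub so far. A minimal sketch of step 2
# of the described algorithm (downloading a COA PDF once its URL is
# known), where `coa_url` is a hypothetical placeholder:
#
# import os
# import requests
#
# def download_coa_pdf(coa_url: str, pdf_dir: str = './pdfs') -> str:
#     """Download a COA PDF to `pdf_dir` and return the saved file path."""
#     os.makedirs(pdf_dir, exist_ok=True)
#     outfile = os.path.join(pdf_dir, coa_url.split('/')[-1])
#     response = requests.get(coa_url)
#     response.raise_for_status()
#     with open(outfile, 'wb') as pdf:
#         pdf.write(response.content)
#     return outfile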
algorithms/get_results_washington_ccrs.py ADDED
@@ -0,0 +1,471 @@
+ """
2
+ Cannabis Tests | Washington State
3
+ Copyright (c) 2022 Cannlytics
4
+
5
+ Authors: Keegan Skeate <https://github.com/keeganskeate>
6
+ Created: 9/23/2022
7
+ Updated: 9/27/2022
8
+ License: <https://github.com/cannlytics/cannlytics/blob/main/LICENSE>
9
+
10
+ Description: This script augments lab result data with pertinent
11
+ licensee, inventory, inventory type, product, and strain data.
12
+
13
+ Data sources:
14
+
15
+ - WA State Traceability Data Dec. 2021 to Aug. 2022
16
+ URL: <https://lcb.app.box.com/s/gosuk65m5iinuaqxx2ef7uis9ccnzb20/folder/170118338288>
17
+
18
+ """
19
+ # Standard imports.
20
+ import gc
21
+ import json
22
+ import os
23
+
24
+ # External imports.
25
+ from dotenv import dotenv_values
26
+ import matplotlib.pyplot as plt
27
+ import pandas as pd
28
+
29
+ # Internal imports.
30
+ # from cannlytics.data.ccrs.utils import get_number_of_lines
31
+ from cannlytics.data.ccrs import CCRS
32
+ from cannlytics.data.ccrs.utils import unzip_files
33
+ from cannlytics.utils import (
34
+ camel_to_snake,
35
+ get_number_of_lines,
36
+ snake_case,
37
+ sorted_nicely,
38
+ )
39
+
40
+
41
+ DATA_DIR = 'D:\\data\\washington\\ccrs-2022-08-18'
42
+ SUB_DIR = 'CCRS PRR (8-18-22)'
43
+ ENV_FILE = '.env'
44
+
45
+
46
+ #-----------------------------------------------------------------------
47
+ # Get the data.
48
+ #-----------------------------------------------------------------------
49
+
50
+ # Extract all files.
51
+ unzip_files(DATA_DIR, extension='.zip')
52
+
53
+
54
+ #-----------------------------------------------------------------------
55
+ # Curate the data.
56
+ #-----------------------------------------------------------------------
57
+
58
+ # Get all of the datafiles.
59
+ subsets = {}
60
+ datafiles = []
61
+ for path, _, files in os.walk(DATA_DIR):
62
+ for f in files:
63
+ abs_path = os.path.join(path, f)
64
+ if f.endswith('.csv'):
65
+ datafiles.append(abs_path)
66
+
67
+ # Count the number of observations in each file.
68
+ print('| Subset | Observations |')
69
+ print('|--------|--------------|')
70
+ for f in sorted_nicely(datafiles):
71
+ datafile = f.split('\\')[-1]
72
+ name = datafile.replace('.csv', '').split('_')[0]
73
+ subset = subsets.get(name, {
74
+ 'observations': 0,
75
+ 'datafiles': [],
76
+ })
77
+ abs_path = os.path.join(DATA_DIR, f)
78
+ file_name = os.path.abspath(abs_path)
79
+ number = get_number_of_lines(file_name)
80
+ subset['observations'] += number
81
+ subset['datafiles'].append(datafile)
82
+ print(f'| `{datafile}` | `{number:,}` |')
83
+ subsets[name] = subset
84
+
85
+ # Print the total number of observations.
86
+ for key, values in subsets.items():
87
+ print(f'{key}: {values["observations"]:,}', 'observations.')
88
+
89
+ # Get the columns for each subset.
90
+ for key, values in subsets.items():
91
+ datafile = values['datafiles'][0]
92
+ name = datafile.replace('.csv', '').split('_')[0]
93
+ folder = datafile.replace('.csv', '')
94
+ abs_path = os.path.join(DATA_DIR, SUB_DIR, folder, datafile)
95
+ file_name = os.path.abspath(abs_path)
96
+ df = pd.read_csv(
97
+ file_name,
98
+ sep='\t',
99
+ encoding='utf-16',
100
+ nrows=2,
101
+ index_col=False,
102
+ low_memory=False,
103
+ )
104
+ subsets[name]['columns'] = list(df.columns)
105
+
106
+ # Count the number of data points for each subset.
107
+ for key, values in subsets.items():
108
+ number_of_cols = len(values['columns'])
109
+ data_points = values['observations'] * number_of_cols
110
+ print(f'{key}: {data_points:,}', 'data points.')
111
+
112
+
113
+ #-----------------------------------------------------------------------
114
+ # Augment license data.
115
+ #-----------------------------------------------------------------------
116
+
117
+ # Read licensee data.
118
+ # licensees = ccrs.read_licensees()
119
+ licensees = pd.read_csv(
120
+ f'{DATA_DIR}/{SUB_DIR}/Licensee_0/Licensee_0.csv',
121
+ sep='\t',
122
+ encoding='utf-16',
123
+ index_col=False,
124
+ low_memory=False,
125
+ )
126
+ licensees.columns = [camel_to_snake(x) for x in licensees.columns]
127
+
128
+ # Restrict to active licensees.
129
+ licensees = licensees.loc[licensees['license_status'] == 'Active']
130
+
131
+ # TODO: Geocode licensees.
132
+
133
+ # TODO: Figure out `license_type`.
134
+
135
+ # TODO: Save augmented licensees.
136
+
137
+
138
+ #-----------------------------------------------------------------------
139
+ # Augment strain data.
140
+ #-----------------------------------------------------------------------
141
+
142
+ # Read strain data.
143
+ strains = pd.read_csv(
144
+ f'{DATA_DIR}/{SUB_DIR}/Strains_0/Strains_0.csv',
145
+ sep='\t',
146
+ # sep=',',
147
+ encoding='utf-16',
148
+ index_col=False,
149
+ # skiprows=range(2, 901),
150
+ engine='python',
151
+ quotechar='"',
152
+ nrows=2000,
153
+ error_bad_lines=False,
154
+ )
155
+ strains.columns = [camel_to_snake(x) for x in strains.columns]
156
+
157
+ # FIXME: First 899 rows are misaligned.
158
+ strains = strains.iloc[900:]
159
+
160
+
161
+ #------------------------------------------------------------------------------
162
+ # Manage lab result data.
163
+ #------------------------------------------------------------------------------
164
+
165
+ # # Read lab results.
166
+ # lab_results = ccrs.read_lab_results()
167
+
168
+ # # Note: Sometimes "Not Tested" is a `test_value`.
169
+ # lab_results['test_value'] = pd.to_numeric(lab_results['test_value'], errors='coerce')
170
+
171
+ # # Remove lab results with `created_date` in the past.
172
+ # lab_results = lab_results.loc[lab_results['created_date'] >= pd.to_datetime(START)]
173
+
174
+ # # Identify all of the labs.
175
+ # lab_ids = list(lab_results['lab_licensee_id'].unique())
176
+
177
+ # # Trend analytes by day by lab.
178
+ # group = [pd.Grouper(key='created_date', freq='M'), 'test_name', 'lab_licensee_id']
179
+ # trending = lab_results.groupby(group, as_index=True)['test_value'].mean()
180
+
181
+ # # Visualize all analytes!!!
182
+ # tested_analytes = list(trending.index.get_level_values(1).unique())
183
+ # for analyte in tested_analytes:
184
+ # fig, ax = plt.subplots(figsize=(8, 5))
185
+ # idx = pd.IndexSlice
186
+ # for lab_id in lab_ids:
187
+ # try:
188
+ # lab_samples = trending.loc[idx[:, analyte, lab_id]]
189
+ # if len(lab_samples) > 0:
190
+ # lab_samples.plot(
191
+ # ax=ax,
192
+ # label=lab_id,
193
+ # )
194
+ # except KeyError:
195
+ # pass
196
+ # plt.legend(title='Lab ID', loc='upper right')
197
+ # plt.title(f'Average {analyte} by Lab in Washington')
198
+ # plt.show()
199
+
200
+ # # TODO: Save trending!
201
+
202
+ # # Calculate failure rate by lab.
203
+
204
+ # # TODO: Calculate failure rate by licensee.
205
+ # # fail = lab_results.loc[lab_results['LabTestStatus'] == 'Fail']
206
+
207
+ # # Get lab prices.
208
+
209
+ # # Estimate laboratory revenue.
210
+
211
+ # # Estimate laboratory market share.
212
+
213
+ # # TODO: Estimate amount spent on lab testing by licensee.
214
+
215
+
216
+ #-----------------------------------------------------------------------
+ # CCRS data exploration.
+ #-----------------------------------------------------------------------
+
+ # # Initialize a CCRS client.
+ # config = dotenv_values(ENV_FILE)
+ # os.environ['CANNLYTICS_API_KEY'] = config['CANNLYTICS_API_KEY']
+ # os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = config['GOOGLE_APPLICATION_CREDENTIALS']
+ # ccrs = CCRS(data_dir=DATA_DIR)
+
+ # # Read licensee data.
+ # licensees = ccrs.read_licensees()
+
+ # # Read areas data.
+ # areas = ccrs.read_areas()
+
+ # # Read inventory data.
+ # inventory = ccrs.read_inventory(limit=100_000)
+
+ # # Wishlist: Augment with licensee data by `licensee_id`.
+
+ # # Wishlist: Augment with strain data by `strain_id`.
+
+ # # Wishlist: Augment with product data by `product_id`.
+
+ # # Optional: Explore interesting fields:
+ # # - quantity_on_hand
+ # # - total_cost
+ # # - created_date
+
+ # # Optional: Count inventory items by date for each licensee?
+
+ # # Estimate Cost of Goods Sold (CoGS) (poor data for this metric).
+ # cogs = (inventory.initial_quantity - inventory.quantity_on_hand) * inventory.total_cost
+
+ # # Read inventory adjustment data.
+ # adjustments = ccrs.read_inventory_adjustments()
+
+ # # Wishlist: Merge inventory details.
+ # # inventory_adjustments = pd.merge()
+
+ # # Highlight imperfections in the system.
+ # lost = adjustments.loc[adjustments.inventory_adjustment_reason == 'Lost']
+ # theft = adjustments.loc[adjustments.inventory_adjustment_reason == 'Theft']
+ # seized = adjustments.loc[adjustments.inventory_adjustment_reason == 'Seizure']
+ # other = adjustments.loc[adjustments.inventory_adjustment_reason == 'Other']
+ # not_found = lost.loc[lost['adjustment_detail'].astype(str).str.contains('not found', case=False)]
+
+ # # Read plant data.
+ # plants = ccrs.read_plants()
+
+ # # Wishlist: Augment with strain data.
+ # # StrainId is missing from the strain data, and all plant StrainIds are 1...
+ # strains = ccrs.read_strains()
+
+ # # Wishlist: Augment with area data.
+ # # Area data is missing AreaId.
+
+ # # Wishlist: Augment with licensee data.
+ # # Licensee data is missing LicenseeId.
+
+ # # TODO: Calculate the number of plants by type by day, week, month, and year
+ # # for each licensee.
+ # # This may have to be done by looking at `created_date` and `harvest_date`.
+
+ # # TODO: Estimate wholesale sales by `licensee_id`.
+
+ # # Estimate the growing period.
+ # final_states = ['Harvested', 'Drying', 'Sold']
+ # harvested = plants.loc[plants.plant_state.isin(final_states)]
+ # grow_days = (harvested.harvest_date - harvested.created_date).dt.days
+ # grow_days = grow_days.loc[(grow_days > 30) & (grow_days < 365)]
+ # grow_days.describe()
+ # grow_days.hist(bins=100)
+ # plt.show()
+
+ # # TODO: Estimate a production function (yield per plant).
+
+ # # # Optional: See who is transferring plants to whom.
+ # # # InventoryPlantTransfer_0
+ # # # FromLicenseeId, ToLicenseeId, FromInventoryId, ToInventoryId, TransferDate
+
+ # # Read plant destruction data.
+ # destructions = ccrs.read_plant_destructions()
+
+ # # Look at the reasons for destruction.
+ # destructions['destruction_reason'].value_counts().plot(kind='pie')
+
+ # # Look at contaminants.
+ # mites = destructions.loc[destructions.destruction_reason == 'Mites']
+ # contaminated = destructions.loc[destructions.destruction_reason == 'Contamination']
+
+ # # Plot plants destroyed by mites per day.
+ # mites_by_day = mites.groupby('destruction_date')['plant_id'].count()
+ # mites_by_day.plot()
+ # plt.title('Number of Plants Destroyed by Mites in Washington')
+ # plt.show()
+
+ # # Plot plants destroyed by contamination per day.
+ # contaminated_by_day = contaminated.groupby('destruction_date')['plant_id'].count()
+ # contaminated_by_day.plot()
+ # plt.title('Number of Contaminated Plants in Washington')
+ # plt.show()
+
+ # # # TODO: Calculate the daily risk of plant death.
+ # # destructions_by_day = destructions.groupby('destruction_date')['plant_id'].count()
+ # # # plants_by_day =
+ # # # plant_risk =
+
+ # # Saturday Morning Statistics teaser:
+ # # Capital asset pricing model (CAPM) or...
+ # # Plant liability asset net total model (PLANTM) ;)
+
+
+ #------------------------------------------------------------------------------
+ # Manage product data.
+ #------------------------------------------------------------------------------
+
+ # # Read product data.
+ # products = ccrs.read_products(limit=100_000)
+
+ # # Look at products by day by licensee.
+ # products_by_day = products.groupby(['licensee_id', 'created_date'])['name'].count()
+
+ # # Wishlist: There is a reference to InventoryTypeId, but no inventory type data.
+
+ # # Wishlist: Match with licensee data by `licensee_id`.
+
+
+ #------------------------------------------------------------------------------
+ # Manage sales data.
+ #------------------------------------------------------------------------------
+
+ # # Read sale header data.
+ # sale_headers = ccrs.read_sale_headers()
+
+ # # Read sale detail data.
+ # sale_details = ccrs.read_sale_details()
+
+ # # Calculate the total price and total tax.
+ # sale_details['total_tax'] = sale_details['sales_tax'] + sale_details['other_tax']
+ # sale_details['total_price'] = sale_details['unit_price'] - abs(sale_details['discount']) + sale_details['total_tax']
+
+ # # Merge the sale details with the sale headers.
+ # sale_details = pd.merge(
+ #     sale_details,
+ #     sale_headers,
+ #     left_on='sale_header_id',
+ #     right_on='sale_header_id',
+ #     how='left',
+ #     validate='m:1',
+ #     suffixes=(None, '_header'),
+ # )
+
+ # # Calculate total transactions, average transaction, and total sales by retailer.
+ # transactions = sale_details.groupby(['sale_header_id', 'licensee_id'], as_index=False)
+ # transaction_amount = transactions['total_price'].sum()
+ # avg_transaction_amount = transaction_amount.groupby('licensee_id')['total_price'].mean()
+
+ # # Calculate transactions and sales by day.
+ # daily = sale_details.groupby(['sale_date', 'licensee_id'], as_index=False)
+ # daily_sales = daily['total_price'].sum()
+ # daily_transactions = daily['total_price'].count()
+ # group = ['sale_date', 'licensee_id', 'sale_header_id']
+ # daily_avg_transaction_amount = sale_details.groupby(group, as_index=False)['total_price'].mean()
+
+ # # TODO: Aggregate statistics by day and by licensee.
+
+ # # TODO: Calculate year-to-date statistics for each licensee.
+
+ # # FIXME: Figure out how to connect `sale_headers.licensee_id` with `licensees.license_number`.
+
+ # # TODO: Break down by sale type:
+ # # 'RecreationalRetail', 'RecreationalMedical', 'Wholesale'
+
+ # # TODO: Try to match `sale_items.inventory_id` to other details.
+
+ #------------------------------------------------------------------------------
+ # Manage transfer data.
+ #------------------------------------------------------------------------------
+
+ # # Read transfer data.
+ # transfers = ccrs.read_transfers()
+
+ # # TODO: Get the list of license numbers / addresses from transfers.
+
+ # # Future work: Look at the number of items, etc. for each transfer.
+
+
+ #------------------------------------------------------------------------------
+ # Future work: Augment the data.
+ #------------------------------------------------------------------------------
+
+ # Get Fed FRED data pertinent to the geographic area.
+
+ # Get Census data pertinent to the geographic area.
+
+
+ #------------------------------------------------------------------------------
+ # Future work: Estimate an ARIMAX for every variable.
+ #------------------------------------------------------------------------------
+
+ # Estimate each variable by licensee in 2022 by day, week, month, and year-end
+ # (see the sketch below):
+ # - total sales
+ # - number of transactions (Poisson model)
+ # - average transaction amount
+ # - number of failures (Poisson model)
+
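A minimal ARIMAX sketch for one such series, assuming statsmodels and hypothetical column names; this is an illustration of the planned approach, not the script's implementation:

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily data: total sales with transactions as a regressor.
index = pd.date_range('2022-01-01', periods=180, freq='D')
transactions = np.random.poisson(lam=100, size=len(index))
sales = transactions * 25 + np.random.normal(0, 50, size=len(index))
data = pd.DataFrame({'total_sales': sales, 'transactions': transactions}, index=index)

# Fit an ARIMAX(1, 0, 1) of total sales on transaction counts.
model = SARIMAX(data['total_sales'], exog=data[['transactions']], order=(1, 0, 1))
fit = model.fit(disp=False)

# Forecast 30 days ahead, assuming transactions hold at their mean.
future_exog = pd.DataFrame({'transactions': [transactions.mean()] * 30})
forecast = fit.forecast(steps=30, exog=future_exog)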
+
+
+ #------------------------------------------------------------------------------
+ # Save the data and statistics, making the data available for future use.
+ #------------------------------------------------------------------------------
+
+ # # Save all of the statistics and forecasts to a local data archive.
+ # ccrs.save(lab_results, 'D:\\data\\washington\\stats\\daily_sales.xlsx')
+
+ # # Upload all of the statistics and forecasts to make them available
+ # # through the Cannlytics API and the Cannlytics website.
+ # ccrs.upload(lab_results, 'lab_results', id_field='lab_result_id')
+
+ # # Get all of the data and statistics from the API!
+ # base = 'http://127.0.0.1:8000/api'
+ # ccrs.get('lab_results', limit=100, base=base)
+
+
+ #-----------------------------------------------------------------------
+ # Read lab results data.
+ #-----------------------------------------------------------------------
+
+ # 1. Read Leaf lab results.
+ # 2. Sort the data, removing null observations.
+ # 3. Define a lab ID for each observation and remove attested lab results.
+
+ #-----------------------------------------------------------------------
+ # Augment lab result data with inventory data.
+ #-----------------------------------------------------------------------
+
+
+ #-----------------------------------------------------------------------
+ # Augment lab result data with inventory type data.
+ #-----------------------------------------------------------------------
+
+
+ #-----------------------------------------------------------------------
+ # Augment lab result data with strain data.
+ #-----------------------------------------------------------------------
+
+
+ #-----------------------------------------------------------------------
+ # Augment lab result data with GIS data.
+ #-----------------------------------------------------------------------
+
+
+ #-----------------------------------------------------------------------
+ # Augment lab result data with the labs' licensee data.
+ #-----------------------------------------------------------------------
algorithms/get_results_washington_leaf.py ADDED
@@ -0,0 +1,490 @@
+ """
+ Cannabis Tests | Get Washington Test Result Data
+ Copyright (c) 2022 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+ Created: 1/11/2022
+ Updated: 9/16/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+
+ Description: This script augments Washington State lab results with relevant
+ fields from the licensees, inventories, inventory types, and strains datasets.
+
+ Data sources:
+
+     - WA State Traceability Data January 2018 - November 2021
+       https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd?page=1
+       https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd?page=2
+
+ Data Guide:
+
+     - Washington State Leaf Data Systems Guide
+       https://lcb.wa.gov/sites/default/files/publications/Marijuana/traceability/WALeafDataSystems_UserManual_v1.37.5_AddendumC_LicenseeUser.pdf
+
+ Data available at:
+
+     - https://cannlytics.com/data/market/augmented-washington-state-lab-results
+     - https://cannlytics.com/data/market/augmented-washington-state-licensees
+
+ """
+ # Standard imports.
+ import gc
+ import json
+
+ # External imports.
+ import pandas as pd
+
+ # Internal imports.
+ from utils import get_number_of_lines
+
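`get_number_of_lines` lives in a local `utils` module that is not part of this commit; presumably it streams a large file and counts lines without loading it into memory. A minimal sketch under that assumption:

def get_number_of_lines(file_name, encoding='utf-16', errors='ignore'):
    """Count the lines of a (potentially huge) file by streaming it
    (hypothetical sketch of the `utils` helper imported above)."""
    with open(file_name, 'r', encoding=encoding, errors=errors) as f:
        return sum(1 for _ in f)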
+ #------------------------------------------------------------------------------
+ # Read lab results data.
+ #------------------------------------------------------------------------------
+
+ def read_lab_results(
+     columns=None,
+     fields=None,
+     date_columns=None,
+     nrows=None,
+     data_dir='../.datasets',
+ ):
+     """Read the Leaf lab results shards:
+     1. Read Leaf lab results.
+     2. Sort the data, removing null observations.
+     3. Define a lab ID for each observation and remove attested lab results.
+     """
+     shards = []
+     lab_datasets = ['LabResults_0', 'LabResults_1', 'LabResults_2']
+     for dataset in lab_datasets:
+         lab_data = pd.read_csv(
+             f'{data_dir}/{dataset}.csv',
+             sep='\t',
+             encoding='utf-16',
+             usecols=columns,
+             dtype=fields,
+             parse_dates=date_columns,
+             nrows=nrows,
+         )
+         shards.append(lab_data)
+         del lab_data
+         gc.collect()
+     data = pd.concat(shards)
+     del shards
+     gc.collect()
+     data.dropna(subset=['global_id'], inplace=True)
+     # data.set_index('global_id', inplace=True)
+     data.sort_index(inplace=True)
+     data['lab_id'] = data['global_id'].map(lambda x: x[x.find('WAL'):x.find('.')])
+     data = data.loc[data.lab_id != '']
+     return data
+
+
+ #------------------------------------------------------------------------------
+ # Combine lab result data with inventory data.
+ #------------------------------------------------------------------------------
+
+ # Define the necessary lab result fields.
+ lab_result_fields = {
+     'global_id': 'string',
+     'global_for_inventory_id': 'string',
+ }
+
+ # Read the lab result fields necessary to connect with inventory data.
+ lab_results = read_lab_results(
+     columns=list(lab_result_fields.keys()),
+     fields=lab_result_fields,
+ )
+
+ # Save the initial lab results so that the chunked loop below can read them.
+ lab_results.to_csv('../.datasets/lab_results_with_ids.csv')
+
+ # Define inventory fields.
+ inventory_fields = {
+     'global_id': 'string',
+     'inventory_type_id': 'string',
+     'strain_id': 'string',
+ }
+ inventory_columns = list(inventory_fields.keys())
+
+ # Define chunking parameters.
+ # inventory_row_count = get_number_of_lines('../.datasets/Inventories_0.csv')
+ inventory_row_count = 129_920_072
+ chunk_size = 30_000_000
+ read_rows = 0
+ skiprows = None
+ datatypes = {
+     'global_id': 'string',
+     'global_for_inventory_id': 'string',
+     'lab_id': 'string',
+     'inventory_type_id': 'string',
+     'strain_id': 'string',
+ }
+
+ # Read a chunk at a time, match it with lab results, and save the data.
+ while read_rows < inventory_row_count:
+
+     # Skip the header plus all previously read rows.
+     if read_rows:
+         skiprows = [i for i in range(1, read_rows + 1)]
+
+     # 1. Open the lab results annotated with IDs so far.
+     lab_results = pd.read_csv(
+         '../.datasets/lab_results_with_ids.csv',
+         # index_col='global_id',
+         dtype=datatypes,
+     )
+
+     # 2. Read a chunk of inventories.
+     inventories = pd.read_csv(
+         '../.datasets/Inventories_0.csv',
+         sep='\t',
+         encoding='utf-16',
+         usecols=inventory_columns,
+         dtype=inventory_fields,
+         skiprows=skiprows,
+         nrows=chunk_size,
+     )
+
+     # 3. Merge the inventories with the lab results.
+     inventories.rename(columns={'global_id': 'inventory_id'}, inplace=True)
+     lab_results = pd.merge(
+         left=lab_results,
+         right=inventories,
+         how='left',
+         left_on='global_for_inventory_id',
+         right_on='inventory_id',
+     )
+
+     # Remove overlapping columns.
+     new_entries = None
+     try:
+         new_entries = lab_results[['inventory_type_id_y', 'strain_id_x']]
+         lab_results = lab_results.combine_first(new_entries)
+         lab_results.rename(columns={
+             'inventory_type_id_x': 'inventory_type_id',
+             'strain_id_x': 'strain_id',
+         }, inplace=True)
+     except KeyError:
+         pass
+     extra_columns = ['inventory_id', 'Unnamed: 0', 'inventory_type_id_y',
+                      'strain_id_y']
+     lab_results.drop(extra_columns, axis=1, inplace=True, errors='ignore')
+
+     # 4. Save the lab results augmented with IDs.
+     lab_results.to_csv('../.datasets/lab_results_with_ids.csv')
+     read_rows += chunk_size
+     print('Read:', read_rows)
+
+     # Free memory between chunks.
+     del new_entries
+     del inventories
+     gc.collect()
+
+
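As a design note, pandas can also stream the inventories file with `chunksize`, which avoids rebuilding the `skiprows` list on every pass; a sketch of the same loop in that style (same paths and fields as above):

reader = pd.read_csv(
    '../.datasets/Inventories_0.csv',
    sep='\t',
    encoding='utf-16',
    usecols=inventory_columns,
    dtype=inventory_fields,
    chunksize=30_000_000,
)
for inventories in reader:
    # Merge each chunk with the lab results and save, as in steps 3-4 above.
    inventories.rename(columns={'global_id': 'inventory_id'}, inplace=True)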
+ #------------------------------------------------------------------------------
+ # Combine lab result data with inventory type data.
+ #------------------------------------------------------------------------------
+
+ results_with_ids = pd.read_csv('../.datasets/lab_results_with_ids.csv')
+
+ # Uncomment if you do not already have inventory_type_names.csv:
+ # get only the inventory names from the inventory types data.
+ # from augment_inventory_types import augment_inventory_types
+ # augment_inventory_types()
+
+ # Get only the results with an inventory type.
+ results_with_ids = results_with_ids[~results_with_ids['inventory_type_id'].isna()]
+
+ # Read in the inventory type names.
+ inventory_type_names = pd.read_csv(
+     '../.datasets/inventory_type_names.csv',
+     # index_col='global_id',
+     dtype={
+         'global_id': 'string',
+         'inventory_name': 'string',
+     },
+ )
+
+ # Merge the lab results with the inventory type names.
+ results_with_ids = pd.merge(
+     left=results_with_ids,
+     right=inventory_type_names,
+     how='left',
+     left_on='inventory_type_id',
+     right_on='global_id',
+ )
+ results_with_ids.rename(columns={'global_id_x': 'global_id'}, inplace=True)
+ results_with_ids.drop(['global_id_y'], axis=1, inplace=True, errors='ignore')
+
+ # Save the lab results augmented with inventory names.
+ results_with_ids.to_csv('../.datasets/lab_results_with_inventory_names.csv')
+
+
+ #------------------------------------------------------------------------------
+ # Combine lab result data with strain data.
+ #------------------------------------------------------------------------------
+
+ # Define strain fields.
+ strain_fields = {
+     'global_id': 'string',
+     'name': 'string',
+ }
+ strain_columns = list(strain_fields.keys())
+
+ # Read in the strain data.
+ strains = pd.read_csv(
+     '../.datasets/Strains_0.csv',
+     sep='\t',
+     encoding='utf-16',
+     dtype=strain_fields,
+     usecols=strain_columns,
+ )
+
+ # Merge the lab results with the strain data.
+ strains.rename(columns={
+     'global_id': 'strain_id',
+     'name': 'strain_name',
+ }, inplace=True)
+ results_with_ids = pd.merge(
+     left=results_with_ids,
+     right=strains,
+     how='left',
+     left_on='strain_id',
+     right_on='strain_id',
+ )
+ results_with_ids.rename(columns={'global_id_x': 'global_id'}, inplace=True)
+ results_with_ids.drop(['global_id_y'], axis=1, inplace=True, errors='ignore')
+
+ # Save the lab results augmented with strain names.
+ results_with_ids.to_csv('../.datasets/lab_results_with_strain_names.csv')
+
+ #------------------------------------------------------------------------------
+ # Combine lab result data with geocoded licensee data.
+ #------------------------------------------------------------------------------
+
+ # Add a `code` variable to the lab results with IDs.
+ results_with_ids['code'] = results_with_ids['global_for_inventory_id'].map(
+     lambda x: x[x.find('WA'):x.find('.')]
+ ).str.replace('WA', '')
+
+ # Specify the licensee fields.
+ licensee_fields = {
+     'global_id': 'string',
+     'code': 'string',
+     'name': 'string',
+     'type': 'string',
+     'address1': 'string',
+     'address2': 'string',
+     'city': 'string',
+     'state_code': 'string',
+     'postal_code': 'string',
+ }
+ licensee_date_fields = [
+     'created_at', # No records if issued before 2018-02-21.
+ ]
+ licensee_columns = list(licensee_fields.keys()) + licensee_date_fields
+
+ # Read in the geocoded licensee data.
+ licensees = pd.read_csv(
+     # '../.datasets/Licensees_0.csv',
+     '../.datasets/geocoded_licensee_data.csv',
+     # sep='\t',
+     # encoding='utf-16',
+     usecols=licensee_columns,
+     dtype=licensee_fields,
+     parse_dates=licensee_date_fields,
+ )
+
+ # Format the licensee data.
+ licensees.rename(columns={
+     'global_id': 'mme_id',
+     'created_at': 'license_created_at',
+     'type': 'license_type',
+ }, inplace=True)
+
+ # Combine the datasets.
+ results_with_ids = pd.merge(
+     left=results_with_ids,
+     right=licensees,
+     how='left',
+     left_on='code',
+     right_on='code',
+ )
+ results_with_ids.rename(columns={'global_id_x': 'global_id'}, inplace=True)
+ results_with_ids.drop(['global_id_y'], axis=1, inplace=True, errors='ignore')
+
+ # Save the lab results augmented with licensee fields.
+ results_with_ids.to_csv('../.datasets/lab_results_with_licensee_data.csv')
+
+
+ #------------------------------------------------------------------------------
+ # TODO: Combine lab result data with the labs' licensee data.
+ #------------------------------------------------------------------------------
+
+ # Read the augmented lab results.
+ results_with_ids = pd.read_csv('../.datasets/lab_results_with_licensee_data.csv')
+
+ # TODO: Combine each lab's licensee data:
+ # lab_name
+ # lab_address1
+ # lab_address2
+ # lab_city
+ # lab_postal_code
+ # lab_phone
+ # lab_certificate_number
+ # lab_global_id
+ # lab_code
+ # lab_created_at
+
+ # TODO: Save the data augmented with the labs' licensee data.
+
+ #------------------------------------------------------------------------------
+ # Combine lab result data with the complete lab results data.
+ #------------------------------------------------------------------------------
+
+ # Read in the results with IDs.
+ results_with_ids = pd.read_csv(
+     '../.datasets/lab_results_with_licensee_data.csv',
+     dtype={
+         'global_id': 'string',
+         'global_for_inventory_id': 'string',
+         'lab_result_id': 'string',
+         'inventory_type_id': 'string',
+         'lab_id': 'string',
+         'strain_id': 'string',
+         'inventory_name': 'string',
+         'strain_name': 'string',
+         'code': 'string',
+         'mme_id': 'string',
+         'license_created_at': 'string',
+         'name': 'string',
+         'address1': 'string',
+         'address2': 'string',
+         'city': 'string',
+         'state_code': 'string',
+         'postal_code': 'string',
+         'license_type': 'string',
+         # TODO: Re-run with latitude and longitude.
+         'latitude': 'float',
+         'longitude': 'float',
+     },
+ )
+
+ # Read all of the lab result fields with any valuable data.
+ lab_result_fields = {
+     'global_id': 'string',
+     'intermediate_type': 'category',
+     'status': 'category',
+     'cannabinoid_status': 'category',
+     'cannabinoid_cbc_percent': 'float16',
+     'cannabinoid_cbc_mg_g': 'float16',
+     'cannabinoid_cbd_percent': 'float16',
+     'cannabinoid_cbd_mg_g': 'float16',
+     'cannabinoid_cbda_percent': 'float16',
+     'cannabinoid_cbda_mg_g': 'float16',
+     'cannabinoid_cbdv_percent': 'float16',
+     'cannabinoid_cbg_percent': 'float16',
+     'cannabinoid_cbg_mg_g': 'float16',
+     'cannabinoid_cbga_percent': 'float16',
+     'cannabinoid_cbga_mg_g': 'float16',
+     'cannabinoid_cbn_percent': 'float16',
+     'cannabinoid_cbn_mg_g': 'float16',
+     'cannabinoid_d8_thc_percent': 'float16',
+     'cannabinoid_d8_thc_mg_g': 'float16',
+     'cannabinoid_d9_thca_percent': 'float16',
+     'cannabinoid_d9_thca_mg_g': 'float16',
+     'cannabinoid_d9_thc_percent': 'float16',
+     'cannabinoid_d9_thc_mg_g': 'float16',
+     'cannabinoid_thcv_percent': 'float16',
+     'cannabinoid_thcv_mg_g': 'float16',
+     'solvent_status': 'category',
+     'solvent_acetone_ppm': 'float16',
+     'solvent_benzene_ppm': 'float16',
+     'solvent_butanes_ppm': 'float16',
+     'solvent_chloroform_ppm': 'float16',
+     'solvent_cyclohexane_ppm': 'float16',
+     'solvent_dichloromethane_ppm': 'float16',
+     'solvent_ethyl_acetate_ppm': 'float16',
+     'solvent_heptane_ppm': 'float16',
+     'solvent_hexanes_ppm': 'float16',
+     'solvent_isopropanol_ppm': 'float16',
+     'solvent_methanol_ppm': 'float16',
+     'solvent_pentanes_ppm': 'float16',
+     'solvent_propane_ppm': 'float16',
+     'solvent_toluene_ppm': 'float16',
+     'solvent_xylene_ppm': 'float16',
+     'foreign_matter': 'bool',
+     'foreign_matter_stems': 'float16',
+     'foreign_matter_seeds': 'float16',
+     'microbial_status': 'category',
+     'microbial_bile_tolerant_cfu_g': 'float16',
+     'microbial_pathogenic_e_coli_cfu_g': 'float16',
+     'microbial_salmonella_cfu_g': 'float16',
+     'moisture_content_percent': 'float16',
+     'moisture_content_water_activity_rate': 'float16',
+     'mycotoxin_status': 'category',
+     'mycotoxin_aflatoxins_ppb': 'float16',
+     'mycotoxin_ochratoxin_ppb': 'float16',
+     'thc_percent': 'float16',
+     'notes': 'float32',
+     'testing_status': 'category',
+     'type': 'category',
+     'external_id': 'string',
+ }
+ lab_result_date_columns = ['created_at', 'updated_at', 'received_at']
+ lab_result_columns = list(lab_result_fields.keys()) + lab_result_date_columns
+ complete_lab_results = read_lab_results(
+     columns=lab_result_columns,
+     fields=lab_result_fields,
+     date_columns=None,
+ )
+
+ # Merge the lab results with the complete lab results data.
+ complete_lab_results.rename(columns={
+     'global_id': 'lab_result_id',
+     'type': 'sample_type',
+ }, inplace=True)
+ results_with_ids = pd.merge(
+     left=results_with_ids,
+     right=complete_lab_results,
+     how='left',
+     left_on='global_id',
+     right_on='lab_result_id',
+ )
+ results_with_ids.rename(columns={'lab_id_x': 'lab_id'}, inplace=True)
+ results_with_ids.drop([
+     'Unnamed: 0',
+     'Unnamed: 0.1',
+     'global_id',
+     'lab_id_y',
+ ], axis=1, inplace=True, errors='ignore')
+
+ # TODO: Fill missing cannabinoid percent or mg/g values (see the sketch below).
+
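For the TODO above: percent and mg/g express the same mass fraction on different scales (1% = 10 mg/g), so one hedged way to fill gaps is to derive each measure from the other where only one is present:

# Sketch: fill a missing percent from mg/g and vice versa.
analytes = ['cannabinoid_cbd', 'cannabinoid_d9_thc']  # Extend as needed.
for analyte in analytes:
    percent, mg_g = f'{analyte}_percent', f'{analyte}_mg_g'
    results_with_ids[percent] = results_with_ids[percent].fillna(results_with_ids[mg_g] / 10)
    results_with_ids[mg_g] = results_with_ids[mg_g].fillna(results_with_ids[percent] * 10)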
+ # FIXME: Are missing values posing a problem?
+ # Calculate total cannabinoids.
+ cannabinoids_wa = [
+     'cannabinoid_d9_thca_percent',
+     'cannabinoid_d9_thc_percent',
+     'cannabinoid_d8_thc_percent',
+     'cannabinoid_thcv_percent',
+     'cannabinoid_cbd_percent',
+     'cannabinoid_cbda_percent',
+     'cannabinoid_cbdv_percent',
+     'cannabinoid_cbg_percent',
+     'cannabinoid_cbga_percent',
+     'cannabinoid_cbc_percent',
+     'cannabinoid_cbn_percent',
+ ]
+ results_with_ids['total_cannabinoids'] = results_with_ids[cannabinoids_wa].sum(axis=1)
+
+ # Save the complete lab results data to CSV and XLSX (JSON is blocked by the FIXME below).
+ results_with_ids.to_excel('../.datasets/lab_results_complete.xlsx')
+ results_with_ids.to_csv('../.datasets/lab_results_complete.csv')
+ # FIXME: NAType is not JSON serializable (see the sketch below).
+ # with open('../.datasets/lab_results_complete.json', 'w') as outfile:
+ #     data = results_with_ids.where(pd.notnull(results_with_ids), '')
+ #     data = json.loads(json.dumps(list(data.T.to_dict().values())))
+ #     json.dump(data, outfile)
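For the JSON FIXME above, one workaround is to let pandas handle the serialization, since `DataFrame.to_json` writes NaN/NA values as JSON null natively (a sketch, not the committed solution):

results_with_ids.to_json(
    '../.datasets/lab_results_complete.json',
    orient='records',
)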
algorithms/main.py ADDED
@@ -0,0 +1,370 @@
+ """
+ Get Cannabis Tests Data
+ Copyright (c) 2022 Cannlytics
+
+ Authors:
+     Keegan Skeate <https://github.com/keeganskeate>
+     Candace O'Sullivan-Sutherland <https://github.com/candy-o>
+ Created: 8/23/2022
+ Updated: 9/15/2022
+ License: MIT License <https://github.com/cannlytics/cannlytics/blob/main/LICENSE>
+
+ Description:
+
+     Periodically curate publicly published lab results by:
+
+     1. Finding products and their COA URLs on the web.
+     2. Downloading COA PDFs from their URLs.
+     3. Using CoADoc to parse the COA PDFs (with OCR).
+     4. Archiving the COA data in Firebase Firestore and Storage.
+
+ Data Sources:
+
+     - Raw Garden Lab Results
+       URL: <https://rawgarden.farm/lab-results/>
+
+ """
+ # # Standard imports.
+ # import base64
+ # from datetime import datetime, timedelta
+ # import os
+ # from time import sleep
+ # from typing import Any, List, Optional, Tuple
+
+ # # External imports.
+ # from bs4 import BeautifulSoup
+ # from firebase_admin import firestore, initialize_app
+ # import pandas as pd
+ # import requests
+
+ # # Internal imports.
+ # from cannlytics.data.coas import CoADoc
+ # from cannlytics.firebase import (
+ #     get_document,
+ #     initialize_firebase,
+ #     update_documents,
+ #     upload_file,
+ # )
+ # from cannlytics.utils import kebab_case, rmerge
+ # from cannlytics.utils.constants import DEFAULT_HEADERS
+
+ # # Specify where your data lives.
+ # BUCKET_NAME = 'cannlytics-company.appspot.com'
+ # COLLECTION = 'public/data/lab_results'
+ # STORAGE_REF = 'data/lab_results/raw_garden'
+
+ # # Create temporary directories.
+ # DATA_DIR = '/tmp'
+ # COA_DATA_DIR = f'{DATA_DIR}/lab_results/raw_garden'
+ # COA_PDF_DIR = f'{COA_DATA_DIR}/pdfs'
+ # TEMP_PATH = f'{COA_DATA_DIR}/tmp'
+ # if not os.path.exists(DATA_DIR): os.makedirs(DATA_DIR)
+ # if not os.path.exists(COA_DATA_DIR): os.makedirs(COA_DATA_DIR)
+ # if not os.path.exists(COA_PDF_DIR): os.makedirs(COA_PDF_DIR)
+ # if not os.path.exists(TEMP_PATH): os.makedirs(TEMP_PATH)
+
+ # # Define constants.
+ # BASE = 'https://rawgarden.farm/lab-results/'
+
+
+ # def get_rawgarden_products(
+ #     start: Optional[Any] = None,
+ #     end: Optional[Any] = None,
+ # ) -> pd.DataFrame:
+ #     """Get Raw Garden's lab results page. Then get all of the product
+ #     categories. Finally, get all product data, including: `coa_pdf`,
+ #     `lab_results_url`, `product_name`, `product_subtype`, `date_retail`.
+ #     Args:
+ #         start (str or datetime): A point in time to begin restricting
+ #             the product list by `date_retail` (optional).
+ #         end (str or datetime): A point in time to end restricting
+ #             the product list by `date_retail` (optional).
+ #     Returns:
+ #         (DataFrame): Returns a DataFrame of product data.
+ #     """
+
+ #     # Get the website.
+ #     response = requests.get(BASE, headers=DEFAULT_HEADERS)
+ #     soup = BeautifulSoup(response.content, 'html.parser')
+
+ #     # Get all of the product data listed on the website.
+ #     observations = []
+ #     categories = soup.find_all('div', attrs={'class': 'category-content'})
+ #     for category in categories:
+ #         subtype = category.find('h3').text
+ #         dates = category.findAll('h5', attrs={'class': 'result-date'})
+ #         names = category.findAll('h5')
+ #         names = [div for div in names if div.get('class') is None]
+ #         links = category.findAll('a')
+ #         for i, link in enumerate(links):
+ #             try:
+ #                 href = link.get('href')
+ #                 date = pd.to_datetime(dates[i].text)
+ #                 name = names[i].text
+ #                 if href.endswith('.pdf'):
+ #                     observations.append({
+ #                         'coa_pdf': href.split('/')[-1],
+ #                         'lab_results_url': href,
+ #                         'product_name': name,
+ #                         'product_subtype': subtype,
+ #                         'date_retail': date,
+ #                     })
+ #             except AttributeError:
+ #                 continue
+
+ #     # Restrict the observations to the desired time frame.
+ #     results = pd.DataFrame(observations)
+ #     dates = results['date_retail']
+ #     if start:
+ #         if isinstance(start, str):
+ #             latest = pd.to_datetime(start)
+ #         else:
+ #             latest = start
+ #         results = results.loc[dates >= latest]
+ #     if end:
+ #         if isinstance(end, str):
+ #             earliest = pd.to_datetime(end)
+ #         else:
+ #             earliest = end
+ #         results = results.loc[dates <= earliest]
+ #     results['date_retail'] = dates.apply(lambda x: x.isoformat()[:19])
+ #     return results
+
+
+ # def download_rawgarden_coas(
+ #     items: pd.DataFrame,
+ #     pause: Optional[float] = 0.24,
+ #     verbose: Optional[bool] = True,
+ # ) -> None:
+ #     """Download Raw Garden product COAs to `product_subtype` folders.
+ #     Args:
+ #         items: (DataFrame): A DataFrame of products with `product_subtype`
+ #             and `lab_results_url` to download.
+ #         pause (float): A pause to respect the server serving the PDFs,
+ #             `0.24` seconds by default (optional).
+ #         verbose (bool): Whether or not to print status, `True` by
+ #             default (optional).
+ #     """
+ #     if verbose:
+ #         total = len(items)
+ #         print('Downloading %i PDFs, ETA > %.2fs' % (total, total * pause))
+
+ #     # Create a folder for each of the subtypes.
+ #     subtypes = list(items['product_subtype'].unique())
+ #     for subtype in subtypes:
+ #         folder = kebab_case(subtype)
+ #         subtype_folder = f'{COA_PDF_DIR}/{folder}'
+ #         if not os.path.exists(subtype_folder):
+ #             os.makedirs(subtype_folder)
+
+ #     # Download each COA PDF from its URL to a `product_subtype` folder.
+ #     for i, row in enumerate(items.iterrows()):
+ #         item = row[1]
+ #         url = item['lab_results_url']
+ #         subtype = item['product_subtype']
+ #         filename = url.split('/')[-1]
+ #         folder = kebab_case(subtype)
+ #         outfile = os.path.join(COA_PDF_DIR, folder, filename)
+ #         response = requests.get(url, headers=DEFAULT_HEADERS)
+ #         with open(outfile, 'wb') as pdf:
+ #             pdf.write(response.content)
+ #         if verbose:
+ #             message = 'Downloaded {}/{} | {}/{}'
+ #             message = message.format(str(i + 1), str(total), folder, filename)
+ #             print(message)
+ #         sleep(pause)
+
+
+ # def parse_rawgarden_coas(
+ #     directory: str,
+ #     filenames: Optional[list] = None,
+ #     temp_path: Optional[str] = '/tmp',
+ #     verbose: Optional[bool] = True,
+ #     **kwargs,
+ # ) -> Tuple[list]:
+ #     """Parse Raw Garden lab results with CoADoc.
+ #     Args:
+ #         directory (str): The directory of files to parse.
+ #         filenames (list): A list of files to parse (optional).
+ #         temp_path (str): A temporary directory to use for any OCR (optional).
+ #         verbose (bool): Whether or not to print status, `True` by
+ #             default (optional).
+ #     Returns:
+ #         (tuple): Returns both a list of parsed and unidentified COA data.
+ #     """
+ #     parser = CoADoc()
+ #     parsed, unidentified = [], []
+ #     started = False
+ #     for path, _, files in os.walk(directory):
+ #         if verbose and not started:
+ #             started = True
+ #             if filenames:
+ #                 total = len(filenames)
+ #             else:
+ #                 total = len(files)
+ #             print('Parsing %i COAs, ETA > %.2fm' % (total, total * 25 / 60))
+ #         for filename in files:
+ #             if not filename.endswith('.pdf'):
+ #                 continue
+ #             if filenames is not None:
+ #                 if filename not in filenames:
+ #                     continue
+ #             doc = os.path.join(path, filename)
+ #             try:
+ #                 # FIXME: Make an API request to Cannlytics? Tesseract, etc.
+ #                 # are going to be too heavy for a cloud function.
+ #                 coa = parser.parse(doc, temp_path=temp_path, **kwargs)
+ #                 subtype = path.split('\\')[-1]
+ #                 coa[0]['product_subtype'] = subtype
+ #                 parsed.extend(coa)
+ #                 if verbose:
+ #                     print('Parsed:', filename)
+ #             except Exception as e:
+ #                 unidentified.append({'coa_pdf': filename})
+ #                 if verbose:
+ #                     print('Error:', filename)
+ #                     print(e)
+ #                 pass
+ #     return parsed, unidentified
+
+
+ # def upload_lab_results(
+ #     observations: List[dict],
+ #     collection: Optional[str] = None,
+ #     database: Optional[Any] = None,
+ #     update: Optional[bool] = True,
+ #     verbose: Optional[bool] = True,
+ # ) -> None:
+ #     """Upload lab results to Firestore.
+ #     Args:
+ #         observations (list): A list of lab results to upload.
+ #         collection (str): The Firestore collection where lab results live,
+ #             `'public/data/lab_results'` by default (optional).
+ #         database (Client): A Firestore database instance (optional).
+ #         update (bool): Whether or not to update existing entries, `True`
+ #             by default (optional).
+ #         verbose (bool): Whether or not to print status, `True` by
+ #             default (optional).
+ #     """
+ #     if collection is None:
+ #         collection = COLLECTION
+ #     if database is None:
+ #         database = initialize_firebase()
+ #     refs, updates = [], []
+ #     for obs in observations:
+ #         sample_id = obs['sample_id']
+ #         ref = f'{collection}/{sample_id}'
+ #         if not update:
+ #             doc = get_document(ref)
+ #             if doc is not None:
+ #                 continue
+ #         refs.append(ref)
+ #         updates.append(obs)
+ #     if updates:
+ #         if verbose:
+ #             print('Uploading %i lab results.' % len(refs))
+ #         update_documents(refs, updates, database=database)
+ #         if verbose:
+ #             print('Uploaded %i lab results.' % len(refs))
+
+
+ def main(event, context):
+     """Archive Raw Garden data on a periodic basis.
+     Triggered from a message on a Cloud Pub/Sub topic.
+     Args:
+         event (dict): Event payload.
+         context (google.cloud.functions.Context): Metadata for the event.
+     """
+     raise NotImplementedError
+
+     # # Check that the Pub/Sub message is valid.
+     # pubsub_message = base64.b64decode(event['data']).decode('utf-8')
+     # if pubsub_message != 'success':
+     #     return
+
+     # # Get the most recent Raw Garden products.
+     # DAYS_AGO = 1
+     # start = datetime.now() - timedelta(days=DAYS_AGO)
+     # products = get_rawgarden_products(start=start)
+
+     # # Download Raw Garden product COAs to `product_subtype` folders.
+     # download_rawgarden_coas(products, pause=0.24, verbose=True)
+
+     # # Parse COA PDFs with CoADoc.
+     # coa_data, unidentified_coas = parse_rawgarden_coas(
+     #     COA_PDF_DIR,
+     #     filenames=products['coa_pdf'].to_list(),
+     #     temp_path=TEMP_PATH,
+     #     verbose=True,
+     # )
+
+     # # Merge the `product_subtype` from `products` with the COA data.
+     # coa_dataframe = rmerge(
+     #     pd.DataFrame(coa_data),
+     #     products,
+     #     on='coa_pdf',
+     #     how='left',
+     #     replace='right',
+     # )
+
+     # # Optional: Save the COA data to a workbook.
+     # parser = CoADoc()
+     # timestamp = datetime.now().isoformat()[:19].replace(':', '-')
+     # datafile = f'{COA_DATA_DIR}/rawgarden-coa-data-{timestamp}.xlsx'
+     # parser.save(coa_dataframe, datafile)
+
+     # # Optional: Save the unidentified COA data.
+     # errors = [x['coa_pdf'] for x in unidentified_coas]
+     # error_file = f'{COA_DATA_DIR}/rawgarden-unidentified-coas-{timestamp}.xlsx'
+     # products.loc[products['coa_pdf'].isin(errors)].to_excel(error_file)
+
+     # # Initialize Firebase.
+     # # FIXME: Ideally use the internal initialization.
+     # try:
+     #     initialize_app()
+     # except ValueError:
+     #     pass
+     # database = firestore.client()
+
+     # # Optional: Upload the lab results to Firestore.
+     # upload_lab_results(
+     #     coa_dataframe.to_dict(orient='records'),
+     #     database=database,
+     #     update=False,
+     #     verbose=False,
+     # )
+
+     # # Optional: Upload datafiles to Firebase Storage.
+     # storage_error_file = '/'.join([STORAGE_REF, error_file.split('/')[-1]])
+     # upload_file(storage_error_file, error_file, bucket_name=BUCKET_NAME)
+
+
+ # === Test ===
+ if __name__ == '__main__':
+
+     from cannlytics.utils import encode_pdf
+     from cannlytics.utils.constants import DEFAULT_HEADERS
+     import requests
+
+     # [✓] TEST: Mock the Google Cloud Function scheduled routine.
+     # event = {'data': base64.b64encode('success'.encode())}
+     # main(event, context={})
+
+     # # [ ] TEST: Post a PDF to the Cannlytics API for parsing.
+     # # FIXME:
+     # coa_doc_api = 'https://cannlytics.com/api/data/coas'
+     # folder = 'tests/assets/coas/'
+     # filename = f'{folder}/210000525-Citrus-Slurm-Diamonds.pdf'
+     # # files = {'upload_file': open(filename, 'rb')}
+     # # values = {'lims': 'Cannalysis'}
+     # # response = requests.post(base, files=files, data=values)
+     # with open(filename, 'rb') as f:
+     #     response = requests.post(
+     #         coa_doc_api,
+     #         headers=DEFAULT_HEADERS,
+     #         files={'file': f},
+     #     )
+     # print(response.status_code)
+
+     # # Optional: Also allow for encoding of PDFs.
+     # encoded_pdf = encode_pdf(filename)
cannabis_tests.py CHANGED
@@ -6,7 +6,7 @@ Authors:
      Keegan Skeate <https://github.com/keeganskeate>
      Candace O'Sullivan-Sutherland <https://github.com/candy-o>
  Created: 9/10/2022
- Updated: 9/14/2022
+ Updated: 9/16/2022
  License: <https://github.com/cannlytics/cannlytics/blob/main/LICENSE>
  """
  import datasets
@@ -15,11 +15,11 @@ import pandas as pd
 
  # === Constants. ===
 
- _VERSION = '1.0.1'
+ _VERSION = '1.0.2'
  _HOMEPAGE = 'https://huggingface.co/datasets/cannlytics/cannabis_tests'
  _LICENSE = "https://opendatacommons.org/licenses/by/4-0/"
  _DESCRIPTION = """\
- Cannabis lab test results (https://cannlytics.com/data/tests) is a
+ Cannabis lab test results (https://cannlytics.com/data/results) is a
  dataset of curated cannabis lab test results.
  """
  _CITATION = """\
{rawgarden → data/rawgarden}/details.csv RENAMED
File without changes
{rawgarden → data/rawgarden}/results.csv RENAMED
File without changes
{rawgarden → data/rawgarden}/values.csv RENAMED
File without changes
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ # Cannabis Tests | Python Requirements
+ # Created: 9/15/2022
+ # Updated: 9/15/2022
+ beautifulsoup4==4.11.1
+ cannlytics==0.0.12
+ firebase_admin==5.3.0
+ pandas==1.4.4
test.py ADDED
@@ -0,0 +1,28 @@
+ """
+ Test Cannabis Tests Dataset
+ Copyright (c) 2022 Cannlytics
+
+ Authors: Keegan Skeate <https://github.com/keeganskeate>
+ Created: 9/16/2022
+ Updated: 9/16/2022
+ License: CC-BY 4.0 <https://huggingface.co/datasets/cannlytics/cannabis_tests/blob/main/LICENSE>
+ """
+ from cannlytics.data.coas import CoADoc
+ from datasets import load_dataset
+ import pandas as pd
+
+ # Download Raw Garden lab result details.
+ repo = 'cannlytics/cannabis_tests'
+ dataset = load_dataset(repo, 'rawgarden')
+ details = dataset['details']
+
+ # Save the data locally with "Details", "Results", and "Values" worksheets.
+ outfile = 'rawgarden.xlsx'
+ parser = CoADoc()
+ parser.save(details.to_pandas(), outfile)
+
+ # Read the values.
+ values = pd.read_excel(outfile, sheet_name='Values')
+
+ # Read the results.
+ results = pd.read_excel(outfile, sheet_name='Results')