awacke1 committed on
Commit
18b7c86
1 Parent(s): 7631188

Update app.py

Files changed (1)
  1. app.py +159 -153
app.py CHANGED
@@ -143,73 +143,6 @@ st.markdown("""
143
 
144
 
145
 
146
- st.markdown("""
147
-
148
- # Cognitive AI with Human Feedback (CAHF) [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/Cognitive-AI-Episodic-Semantic-Memory-Demo):
149
-
150
- 1. Create and use models to predict __outcomes__.
151
- 2. Predict **conditions, diseases, and opportunities** using AI with **explainability**.
152
- 3. **Cognitive AI** - Mimic how humans reason through decision-making processes.
153
- 4. **Reasoning cycles** - "Recommended for You" reasoners consider each user's personalized needs and classification to recommend products.
154
- 5. **High Acuity Reasoners** - Make decisions based on rules of **what they can and cannot do within human feedback** guidelines.
155
- - Emphasizes **explainability, transparency, and removing administrative burden** to **protocolize** and improve what staff are doing.
156
- - Vetted by SMEs, adding the value of **judgment and training**, and picking up intelligence and **skills from human feedback**.
157
- - **Alert, Recommended Action, and Clinical Terms** per entity, with vocabularies from LOINC, SNOMED, OMS, ICD10, RXNORM, SMILES, HCPCS, CPT, CQM, HL7, SDC, and FHIR.
158
- 6. A non-static, multi-agent cognitive approach uses real-time series data to identify factors predictive of outcomes.
159
- 7. Cognitive models form an ontology: computable sets and relationships stored in the ontology and then ingested by a reasoner.
160
- - Use models of the world to build predictions and recommendations, with answers that accumulate on top of what we already know.
161
- 8. Reasoners standardize workflows, making it as easy as possible to do the right thing using transfer learning and recommendation tools with questions and actions.
162
- """)
163
-
164
-
165
- st.markdown("""
166
-
167
- # 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology)
168
-
169
- ## Health Vocabularies, Systems of Coding, and Databases with Bibliographies
170
- ## __Keywords__:
171
-
172
- 1. __Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients.
173
- 2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems.
174
- 3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues.
175
- 4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily.
176
- 5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things.
177
- 6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information.
178
-
179
- 1. ## 1️⃣ National Library of Medicine's **RxNorm**:
180
- - Standardized nomenclature for clinical drugs developed by NLM
181
- - Provides links between drug names and related information such as ingredients, strengths, and dosages
182
- - **Data type: controlled vocabulary**
183
- - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html
184
- 2. ## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS):
185
- - Coding system used to identify healthcare **services, procedures, and supplies**
186
- - Includes **codes for drugs, biologicals, and other items** used in medical care
187
- - **Data type: coding system**
188
- - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo
189
- 3. ## 3️⃣ Unified Medical Language System (UMLS):
190
- - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies
191
- - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine
192
- - **Data type: controlled vocabulary**
193
- - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html
194
- 4. ## 4️⃣ PubMed:
195
- - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI)
196
- - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions**
197
- - **Data type: bibliographic database**
198
- - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/
199
- 5. ## 5️⃣ PubChem:
200
- - Database of chemical substances maintained by NCBI
201
- - Includes information about drugs, including **chemical structures, properties, and activities**
202
- - **Data type: chemical database**
203
- - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/
204
- 6. ## 6️⃣ Behavioral Health Code Terminology Sets:
205
- - Code terminology sets specific to behavioral health
206
- - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association
207
- - **Data type: coding system**
208
- - Access through respective **organizations' websites**:
209
- 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm)
210
- 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases)
211
- 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt)
212
- """)
213
 
214
  st.markdown("""
215
  1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍
@@ -291,6 +224,165 @@ st.markdown("""
291
  17. 🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco
292
  """)
293
 
294
  st.markdown("""
295
  4. # 🗣️Speech Recognition💬
296
  1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing.
@@ -366,48 +458,6 @@ st.markdown("""
366
  7. Pyplot Dice Game: https://huggingface.co/spaces/awacke1/Streamlit-Pyplot-Math-Dice-Game
367
  """)
368
 
369
-
370
- st.markdown("""
371
-
372
- ## AI For Long Question Answering and Fact Checking [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat)
373
- 1. 🖥️ First, we'll teach a smart computer to browse the internet and find information.
374
- - 🧠 It will be like having a super-smart search engine!
375
- 2. 🤖 Then, we'll train the computer to answer questions by having it learn from how humans answer questions.
376
- - 🤝 We'll teach it to imitate how people find and use information on the internet.
377
- 3. 📚 To make sure the computer's answers are correct, we'll teach it to collect references from the internet to support its answers.
378
- - 🔍 This way, it will only give answers that are true and based on facts.
379
- 4. 👨‍👩‍👧‍👦 We'll test our invention on a special set of questions that real people have asked.
380
- - 🧪 We'll make sure the computer's answers are as good as, or even better than, the answers from real people.
381
- 5. 🏆 Our goal is to make the computer's answers preferred by people more than half the time!
382
- - 🤞 If we can do that, it means the computer is really good at answering questions.
383
- """)
384
-
385
-
386
-
387
- st.markdown("""
388
- # Future of AI
389
- # Large Language Model - Human Feedback Metrics:
390
- **ROUGE** and **BLEU** are tools that help us measure how good a computer is at writing or translating sentences.
391
- ## 🩺⚕️ [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
392
- ## 🩺⚕️ [BLEU](https://huggingface.co/spaces/evaluate-metric/bleu)
393
- 1. ROUGE looks at a sentence made by a computer and checks how similar it is to sentences made by humans.
394
- 1. It tries to see if the important information is the same.
395
- 2. To do this, ROUGE looks at the groups of words that are the same in both the computer's sentence and the human's sentence.
396
- 1. The more groups of words that are the same, the higher the score.
398
- 3. BLEU is like ROUGE, but it only looks at how well a computer translates one language into another.
399
- 1. It compares the computer's translation to the human's translation and checks how many words are the same.
400
- If the scores for ROUGE or BLEU are high, it means that the computer is doing a good job.
401
- But it's also important to remember that these tools have their limits, and we need other ways to check if the computer is doing a good job.
403
- 1. **ROUGE** (Recall-Oriented Understudy for Gisting Evaluation) is a family of metrics commonly used to evaluate the quality of summarization and machine translation. ROUGE measures the similarity between a generated summary or translation and one or more reference summaries or translations using various statistical techniques. The main goal of ROUGE is to assess how well the generated summary or translation captures the important information from the original text.
404
- 2. **ROUGE** calculates the precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. Specifically, it looks for overlapping sequences of words (n-grams) between the generated and reference text, and computes precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text, recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text, and the F1-score as the harmonic mean of precision and recall. ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level.
405
- 3. **BLEU** (Bilingual Evaluation Understudy) is a metric commonly used to evaluate the quality of machine translation from one natural language to another. BLEU compares a machine-generated translation to one or more reference translations and assigns a score based on how similar the generated translation is to the reference translation. BLEU uses a modified form of precision to calculate the score.
406
- 4. **BLEU** works by comparing the n-grams in the generated translation to those in the reference translations, counting how many n-grams are in both the generated and reference translations, and then calculating a modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. BLEU also takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations.
407
- 5. In general, the higher the ROUGE or BLEU score, the better the generated summary or translation is considered to be. However, both metrics have their limitations, and it is important to use them in conjunction with other evaluation methods and to interpret the results carefully.
408
- """)
409
-
410
-
411
  st.markdown("""
412
  📊 Scoring Human Feedback Metrics with ROUGE and BLEU
413
 
@@ -451,50 +501,6 @@ Example:
451
 
452
 
453
 
454
- st.markdown("""
455
- # 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF)
456
- ## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people,
457
- - just like how you learn new things from your parents or teachers.
458
- 🎮 Let's say the computer wants to learn how to play a video game.
459
- - It might start by trying different things and seeing what happens.
460
- 👍 If it does something good, like getting a high score, it gets a reward.
461
- 👎 If it does something bad, like losing a life, it gets a punishment.
462
- 👩‍💻 Now, imagine that a person is watching the computer play the game and giving it feedback.
463
- - The person might say things like "Good job!" when the computer gets a high score
464
- - or "Oops, try again!" when it loses a life.
465
- 💡 This feedback helps the computer figure out which actions are good and which ones are bad.
466
- - The computer then uses this feedback to adjust its actions and get better at playing the game.
467
- 🤔 It might try different strategies and see which ones get the best feedback from the person.
468
- - Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others.
469
- 🚀 RLHF is a cool way for computers to learn and improve with the help of people.
470
- - Who knows, maybe one day you can teach a computer to do something amazing!
471
-
472
- # Examples
473
-
474
- ## 🩺⚕️ Hospital Visualizations
475
- 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota
476
- 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey
477
- 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth
478
- 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI
479
-
480
- ## Card Game Activity
481
- https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz
482
- https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI
483
- https://huggingface.co/spaces/awacke1/CardGameActivity
484
- https://huggingface.co/spaces/awacke1/CardGameMechanics
485
-
486
- ## Scalable Vector Graphics (SVG)
487
- https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit
488
-
489
- ## Graph Visualization
490
- https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle
491
-
492
- ## Clinical Terminology, Question Answering, Smart on FHIR
493
- https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored
494
- 🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs
495
- 🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2
496
- 🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit
497
- """)
498
 
499
  st.markdown("""
500
  # GraphViz - Knowledge Graphs as Code
 
143
 
144
 
145
 
146
 
147
  st.markdown("""
148
  1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍
 
224
  17. 🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco
225
  """)
226
 
227
+
228
+
229
+
230
+
231
+ st.markdown("""
232
+
233
+ ## AI For Long Question Answering and Fact Checking [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat)
234
+ 1. 🖥️ First, we'll teach a smart computer to browse the internet and find information.
235
+ - 🧠 It will be like having a super-smart search engine!
236
+ 2. 🤖 Then, we'll train the computer to answer questions by having it learn from how humans answer questions.
237
+ - 🤝 We'll teach it to imitate how people find and use information on the internet.
238
+ 3. 📚 To make sure the computer's answers are correct, we'll teach it to collect references from the internet to support its answers.
239
+ - 🔍 This way, it will only give answers that are true and based on facts.
240
+ 4. 👨‍👩‍👧‍👦 We'll test our invention on a special set of questions that real people have asked.
241
+ - 🧪 We'll make sure the computer's answers are as good as, or even better than, the answers from real people.
242
+ 5. 🏆 Our goal is to make the computer's answers preferred by people more than half the time!
243
+ - 🤞 If we can do that, it means the computer is really good at answering questions. (A small win-rate sketch follows this block.)
244
+ """)
245
+
246
+
247
+
248
+ st.markdown("""
249
+ # Future of AI
250
+ # Large Language Model - Human Feedback Metrics:
251
+ **ROUGE** and **BLEU** are tools that help us measure how good a computer is at writing or translating sentences.
252
+ ## 🩺⚕️ [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
253
+ ## 🩺⚕️ [BLEU](https://huggingface.co/spaces/evaluate-metric/bleu)
254
+ 1. ROUGE looks at a sentence made by a computer and checks how similar it is to sentences made by humans.
255
+ 1. It tries to see if the important information is the same.
256
+ 2. To do this, ROUGE looks at the groups of words that are the same in both the computer's sentence and the human's sentence.
257
+ 1. The more groups of words that are the same, the higher the score.
259
+ 3. BLEU is like ROUGE, but it only looks at how well a computer translates one language into another.
260
+ 1. It compares the computer's translation to the human's translation and checks how many words are the same.
261
+ If the scores for ROUGE or BLEU are high, it means that the computer is doing a good job.
262
+ But it's also important to remember that these tools have their limits, and we need other ways to check if the computer is doing a good job.
264
+ 1. **ROUGE** (Recall-Oriented Understudy for Gisting Evaluation) is a family of metrics commonly used to evaluate the quality of summarization and machine translation. ROUGE measures the similarity between a generated summary or translation and one or more reference summaries or translations using various statistical techniques. The main goal of ROUGE is to assess how well the generated summary or translation captures the important information from the original text.
265
+ 2. **ROUGE** calculates the precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. Specifically, it looks for overlapping sequences of words (n-grams) between the generated and reference text, and computes precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text, recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text, and the F1-score as the harmonic mean of precision and recall. ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level.
266
+ 3. **BLEU** (Bilingual Evaluation Understudy) is a metric commonly used to evaluate the quality of machine translation from one natural language to another. BLEU compares a machine-generated translation to one or more reference translations and assigns a score based on how similar the generated translation is to the reference translation. BLEU uses a modified form of precision to calculate the score.
267
+ 4. **BLEU** works by comparing the n-grams in the generated translation to those in the reference translations, counting how many n-grams are in both the generated and reference translations, and then calculating a modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. BLEU also takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations.
268
+ 5. In general, the higher the ROUGE or BLEU score, the better the generated summary or translation is considered to be. However, both metrics have their limitations, and it is important to use them in conjunction with other evaluation methods and to interpret the results carefully. (A toy n-gram overlap sketch follows this block.)
269
+ """)
270
+
271
+
272
+ st.markdown("""
273
+ # 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF)
274
+ ## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people,
275
+ - just like how you learn new things from your parents or teachers.
276
+ 🎮 Let's say the computer wants to learn how to play a video game.
277
+ - It might start by trying different things and seeing what happens.
278
+ 👍 If it does something good, like getting a high score, it gets a reward.
279
+ 👎 If it does something bad, like losing a life, it gets a punishment.
280
+ 👩‍💻 Now, imagine that a person is watching the computer play the game and giving it feedback.
281
+ - The person might say things like "Good job!" when the computer gets a high score
282
+ - or "Oops, try again!" when it loses a life.
283
+ 💡 This feedback helps the computer figure out which actions are good and which ones are bad.
284
+ - The computer then uses this feedback to adjust its actions and get better at playing the game.
285
+ 🤔 It might try different strategies and see which ones get the best feedback from the person.
286
+ - Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others.
287
+ 🚀 RLHF is a cool way for computers to learn and improve with the help of people.
288
+ - Who knows, maybe one day you can teach a computer to do something amazing! (A toy version of this feedback loop is sketched after this block.)
289
+
290
+ # Examples
291
+
292
+ ## 🩺⚕️ Hospital Visualizations
293
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota
294
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey
295
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth
296
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI
297
+
298
+ ## Card Game Activity
299
+ https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz
300
+ https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI
301
+ https://huggingface.co/spaces/awacke1/CardGameActivity
302
+ https://huggingface.co/spaces/awacke1/CardGameMechanics
303
+
304
+ ## Scalable Vector Graphics (SVG)
305
+ https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit
306
+
307
+ ## Graph Visualization
308
+ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle
309
+
310
+ ## Clinical Terminology, Question Answering, Smart on FHIR
311
+ https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored
312
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs
313
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2
314
+ 🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit
315
+ """)
316
+
317
+
318
+ st.markdown("""
319
+
320
+ # Cognitive AI with Human Feedback (CAHF) [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/Cognitive-AI-Episodic-Semantic-Memory-Demo):
321
+
322
+ 1. Create and use models to predict __outcomes__.
323
+ 2. Predict **conditions, diseases, and opportunities** using AI with **explainability**.
324
+ 3. **Cognitive AI** - Mimic how humans reason through decision-making processes.
325
+ 4. **Reasoning cycles** - "Recommended for You" reasoners consider each user's personalized needs and classification to recommend products.
326
+ 5. **High Acuity Reasoners** - Make decisions based on rules of **what they can and cannot do within human feedback** guidelines.
327
+ - Emphasizes **explainability, transparency, and removing administrative burden** to **protocolize** and improve what staff are doing.
328
+ - Vetted by SMEs, adding the value of **judgment and training**, and picking up intelligence and **skills from human feedback**.
329
+ - **Alert, Recommended Action, and Clinical Terms** per entity, with vocabularies from LOINC, SNOMED, OMS, ICD10, RXNORM, SMILES, HCPCS, CPT, CQM, HL7, SDC, and FHIR.
330
+ 6. A non-static, multi-agent cognitive approach uses real-time series data to identify factors predictive of outcomes.
331
+ 7. Cognitive models form an ontology: computable sets and relationships stored in the ontology and then ingested by a reasoner.
332
+ - Use models of the world to build predictions and recommendations, with answers that accumulate on top of what we already know.
333
+ 8. Reasoners standardize workflows, making it as easy as possible to do the right thing using transfer learning and recommendation tools with questions and actions. (A minimal reasoner sketch follows this block.)
334
+ """)
335
+
336
+
337
+ st.markdown("""
338
+
339
+ # 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology)
340
+
341
+ ## Health Vocabularies, Systems of Coding, and Databases with Bibliographies
342
+ ## __Keywords__:
343
+
344
+ 1. __Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients.
345
+ 2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems.
346
+ 3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues.
347
+ 4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily.
348
+ 5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things.
349
+ 6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information.
350
+
351
+ 1. ## 1️⃣ National Library of Medicine's **RxNorm**:
352
+ - Standardized nomenclature for clinical drugs developed by NLM
353
+ - Provides links between drug names and related information such as ingredients, strengths, and dosages
354
+ - **Data type: controlled vocabulary**
355
+ - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html (a programmatic lookup sketch follows this list)
356
+ 2. ## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS):
357
+ - Coding system used to identify healthcare **services, procedures, and supplies**
358
+ - Includes **codes for drugs, biologicals, and other items** used in medical care
359
+ - **Data type: coding system**
360
+ - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo
361
+ 3. ## 3️⃣ Unified Medical Language System (UMLS):
362
+ - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies
363
+ - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine
364
+ - **Data type: controlled vocabulary**
365
+ - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html
366
+ 4. ## 4️⃣ PubMed:
367
+ - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI)
368
+ - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions**
369
+ - **Data type: bibliographic database**
370
+ - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/
371
+ 5. ## 5️⃣ PubChem:
372
+ - Database of chemical substances maintained by NCBI
373
+ - Includes information about drugs, including **chemical structures, properties, and activities**
374
+ - **Data type: chemical database**
375
+ - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/
376
+ 6. ## 6️⃣ Behavioral Health Code Terminology Sets:
377
+ - Code terminology sets specific to behavioral health
378
+ - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association
379
+ - **Data type: coding system**
380
+ - Access through respective **organizations' websites**:
381
+ 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm)
382
+ 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases)
383
+ 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt)
384
+ """)
385
+
386
  st.markdown("""
387
  4. # 🗣️Speech Recognition💬
388
  1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing.
 
458
  7. Pyplot Dice Game: https://huggingface.co/spaces/awacke1/Streamlit-Pyplot-Math-Dice-Game
459
  """)
460
 
461
  st.markdown("""
462
  📊 Scoring Human Feedback Metrics with ROUGE and BLEU
463
 
 
501
 
502
 
503
 
504
 
505
  st.markdown("""
506
  # GraphViz - Knowledge Graphs as Code