base_model: LeroyDyer/SpydazWeb_AI_HumanAI_RP
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
datasets:
- ChemistryVision/Tox21-V-SMILES
# Introduction To Isomeric SMILES
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/SpydazWeb_AI_HumanAI_RP
Personally, I believe that Mistral did not do their own due diligence on these models!
Dataset: "LeroyDyer/Tox21-V-SMILES_QA_to_Base64" (ChemistryVision/Tox21-V-SMILES)
### Question:
Content:
Isomeric SMILES refers to a specialized version of the Simplified Molecular Input Line Entry System (SMILES) that includes additional specifications for isotopes and stereochemistry. This allows for a more detailed representation of chemical structures, accommodating variations in molecular configuration and isotopic composition.
Key Features of Isomeric SMILES
1. Definition and Purpose
Isomeric SMILES is designed to represent molecules with specific isotopic labels and chiral configurations. This enables chemists to uniquely identify different isomers of a compound, which can have distinct chemical properties and behaviors.
2. Isotopic Specification
In isomeric SMILES, isotopes are indicated by placing the atomic mass number before the atomic symbol within square brackets. For example, carbon-13 methane is represented as [13CH4]. This notation clearly distinguishes it from the more common carbon-12 variant.
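The bracket-atom isotope notation above can be sketched in a few lines. This is an illustrative helper only, not a full SMILES parser; the function name `isotope_of` is an assumption for this example.

```python
import re

def isotope_of(bracket_atom: str):
    """Extract the mass number from a bracket atom such as '[13CH4]'.

    Returns None when no isotope is specified (e.g. '[CH4]').
    Illustrative sketch only: a real SMILES parser handles far more syntax
    (charges, H counts, atom maps, two-letter elements, and so on).
    """
    m = re.match(r"\[(\d+)?([A-Z][a-z]?)", bracket_atom)
    if m is None or m.group(1) is None:
        return None
    return int(m.group(1))

print(isotope_of("[13CH4]"))  # carbon-13 methane -> 13
print(isotope_of("[CH4]"))    # unlabelled methane -> None
```

The optional leading digits inside the square brackets are exactly what distinguishes carbon-13 methane from ordinary methane.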
3. Chirality Representation
Chirality in molecules can be specified using the symbols "@" and "@@". The "@" symbol indicates that the arrangement of substituents around a chiral center is counterclockwise, while "@@" indicates a clockwise arrangement. This local chirality representation allows for partial specifications, which can be particularly useful when not all information about the molecule is known.
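Because "@" and "@@" denote mirror-image arrangements at a stereocentre, swapping them text-wise yields the SMILES of the enantiomer. A minimal sketch, assuming the input is already a valid SMILES string (the function name is hypothetical):

```python
def invert_chirality(smiles: str) -> str:
    """Swap '@' (counterclockwise) and '@@' (clockwise) chirality markers.

    Produces the SMILES of the mirror-image stereocentres. Purely a
    text-level transformation: it does not validate the SMILES.
    """
    # Use a placeholder so '@@' is not re-matched after replacement.
    return smiles.replace("@@", "\0").replace("@", "@@").replace("\0", "@")

# One alanine enantiomer to its mirror image:
print(invert_chirality("N[C@@H](C)C(=O)O"))  # N[C@H](C)C(=O)O
```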
4. Unique Isomeric SMILES
A unique isomeric SMILES string, referred to as an "absolute SMILES", ensures that each molecule's representation is consistent and universally understood across different platforms and databases. This uniqueness is crucial for effective communication in chemical informatics.
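In practice, such a unique representation is produced by canonicalization. A short sketch using RDKit (assuming it is installed; the helper name `canonical` is an assumption for this example): several different SMILES spellings of the same molecule collapse to one canonical string.

```python
from rdkit import Chem

def canonical(smiles: str) -> str:
    """Return RDKit's canonical SMILES for the given input SMILES."""
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

# Three different ways of writing ethanol collapse to one string:
for s in ("OCC", "C(O)C", "CCO"):
    print(s, "->", canonical(s))
```

This canonical string is what makes cross-database lookups reliable: two records describe the same compound exactly when their canonical SMILES match.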
<image> {} </image>
### Response:
{}
This model is being introduced to SMILES notation, as well as associated tasks used for chemical compounds and related structures.
Method:
First, the model is trained to accept SMILES and to identify the isomeric SMILES and the canonical SMILES. Next, images are paired with the SMILES so that the model associates the base64-encoded image with the descriptions. Finally, the model performs tasks using the images and SMILES, returning or identifying compounds and SMILES; viruses may also be described in this way. This provides an extension to the existing medical data and tasks internal to the model, such as biomedical and STEM sciences!
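The image-pairing step above amounts to base64-encoding the depiction and placing it in the `<image> ... </image>` slot of the prompt template. A minimal sketch, assuming the training sample carries raw PNG bytes; the exact field layout and helper names are assumptions, not the actual training pipeline:

```python
import base64

def image_to_base64(png_bytes: bytes) -> str:
    """Encode raw image bytes as the base64 string placed in the
    <image> ... </image> slot of the training prompt."""
    return base64.b64encode(png_bytes).decode("ascii")

def build_prompt(question: str, image_b64: str) -> str:
    # Mirrors the prompt layout shown above; field order is an assumption.
    return f"### Question:\n{question}\n<image> {image_b64} </image>\n### Response:\n"

# Tiny stand-in for the real PNG bytes of a molecule depiction:
fake_png = b"\x89PNG\r\n\x1a\n"
prompt = build_prompt("Identify the compound shown.", image_to_base64(fake_png))
print(prompt)
```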
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.