Dataset columns:

| column | type | range / values |
|:--|:--|:--|
| modelId | string | lengths 5 to 122 |
| author | string | lengths 2 to 42 |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string (categorical) | 245 values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | unknown | |
| card | string | lengths 1 to 901k |
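For orientation, the sketch below shows how a dataset with this schema could be loaded and filtered with the Hugging Face `datasets` library. The repository id `user/model-cards-dump` is a hypothetical placeholder for wherever this dump is hosted.

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual path of this dump.
ds = load_dataset("user/model-cards-dump", split="train")

# Keep only text-generation models that ship a non-empty model card.
text_gen = ds.filter(
    lambda row: row["pipeline_tag"] == "text-generation"
    and row["card"] not in (None, "", "Entry not found")
)

print(text_gen.num_rows)
print(text_gen[0]["modelId"], text_gen[0]["downloads"], text_gen[0]["likes"])
```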
yangyida/llama_3_ecc_transcript
yangyida
"2024-06-23T04:30:14Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T04:30:01Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** yangyida - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
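The card above does not include usage code. A minimal loading sketch is given below; it assumes the repository holds a transformers-compatible Llama checkpoint as the tags suggest (if it only contains a LoRA adapter, attach it to the `unsloth/llama-3-8b-bnb-4bit` base with peft instead), and the prompt is an invented example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yangyida/llama_3_ecc_transcript"

# Assumes a full transformers-compatible checkpoint is stored in the repo.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # adjust dtype/device settings to your hardware
    device_map="auto",
)

prompt = "Summarize the following transcript in two sentences:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```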
DokiQueen/Hypno
DokiQueen
"2024-06-23T04:42:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T04:40:12Z"
Entry not found
rita443/ontonotes
rita443
"2024-06-23T15:00:46Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
"2024-06-23T04:42:09Z"
--- tags: - generated_from_trainer model-index: - name: ontonotes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ontonotes This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.1 - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
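For reference, the hyperparameters listed in this card map onto a `TrainingArguments` configuration roughly as follows. This is an illustrative reconstruction, not the original training script; in particular, `lr_scheduler_warmup_steps: 0.1` is interpreted here as a warmup ratio.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="ontonotes",
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # the card reports "lr_scheduler_warmup_steps: 0.1"
    num_train_epochs=100,
)
```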
Litzy619/MIS0623T1
Litzy619
"2024-06-23T15:58:56Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-23T04:43:00Z"
Entry not found
celinehoang/bge-reranker-v2-m3-onnx
celinehoang
"2024-06-23T04:50:54Z"
0
0
transformers
[ "transformers", "onnx", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T04:43:32Z"
Entry not found
zayyannaveed/sdxl-badminton-control-lora
zayyannaveed
"2024-06-23T04:44:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T04:44:32Z"
Entry not found
Anjali17/model.py
Anjali17
"2024-06-23T04:50:19Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-06-23T04:45:52Z"
--- license: llama3 ---
advokat/LunchIsMissing
advokat
"2024-06-25T05:15:47Z"
0
2
null
[ "region:us" ]
null
"2024-06-23T04:47:20Z"
--- {} --- * Do not use quality tags (masterpiece, very aesthetic, it will make the model generate hentai) * Responds to artist styles * If you get latent patterns define background Merge process: 1. get drunk 2. copy TE from animagine 3. stick it on animagine + (cashmoney - sdxl1.0)*0.5 4. lora_0003Pony_0003Delta-ponyDiffusionV6XL:0.5 License: Purpose This license gives everyone as much permission to work with this software as possible, while protecting contributors from liability, protecting the freedom of end users, and reducing harm. Definitions In this license, “model” refers to machine learning model weights, biases, parameters, optimizer states, and any byproducts of a training or pretraining process, whether in the form of checkpoints or any other form. The term “derived model” refers to any model based on this model. The term “software” also refers to any model along with documentation or other resources provided with the software. The term “source code” refers to the preferred form of making modifications to software. It also includes any models, if applicable, but it does not include any datasets used to train a model. To “modify” also means to perform any training on a model or to combine a model with another model. Acceptance In order to receive this license, you must agree to its rules. The rules of this license are both obligations under that agreement and conditions to your license. You must not do anything with this software that triggers a rule that you cannot or will not follow. If you do not agree, then you cannot use this software in any way. Copyright Each contributor licenses you to do everything with this software that would otherwise infringe that contributor’s copyright in it. Freedom Neither this software nor any work that is combined with this software will be considered a technological protection measure under the WIPO Copyright Treaty or any similar law. Reverse engineering of this software and of any work that is combined with this software is always allowed. Notices You must ensure that everyone who gets a copy of any part of this software from you, with or without changes, also gets the text of this license along with the corresponding source code. If you modify this software and allow users to interact with it through a computer network, you must ensure they have a reasonable way to receive the corresponding source code from you, whether that is via a download link or a prominent written offer. As a special case, if you are only allowing users to interact with a derived model, then you may choose to provide a download link or written offer only for the derived model. This software, all source code, and all modifications must be provided under this license or another license that allows everything this license allows. Note that this does not give you permission to change the license for this software. Excuse If anyone notifies you in writing that you have not complied with Notices, you can keep your license by taking all practical steps to comply within 30 days after the notice. If you do not do so, your license ends immediately. Output The output of this software is not covered by this license, and no contributor claims any rights to it. Patent Each contributor licenses you to do everything with this software that would otherwise infringe any patent claims they can license or become able to license. Reliability No contributor can revoke this license. 
Alternatives You can also use any non-model parts of this software under the terms of the GNU AGPL 3.0, or any later version of that license. If you do, No Harm and No Liability still apply. Revisions The Freedom of Development Project may publish revised or new versions of the Fair AI Public License. Those new versions will be similar in spirit to this license. Unless a contributor specifies otherwise, you have the option of following the terms of any later version of this license. Your choice to follow a later version of the license will not impose additional obligations on any contributor. Even if you do choose to follow a later version, the restrictions of Prohibited Uses will still apply. Survival The provisions of No Harm and No Liability survive the end of your license. No Harm You agree that no contributor’s conduct in the creation of this software has caused you any harm. As far as the law allows, you give up your right to pursue any kind of legal claim against any contributor for actions related the creation of this software, even if those actions broke a previous agreement. Additionally, you agree not to use this model for harmful purposes, as listed in Prohibited Uses. These restrictions do not apply to non-model parts of this software. No Liability As far as the law allows, this software comes as is, without any warranty or condition, and no contributor will be liable to anyone for any damages related to this software or this license, under any kind of legal claim. Prohibited Uses You may not use this model or any derived model for the following: In any way that violates any applicable national, federal, state, local or international law or regulation; For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; To generate or disseminate verifiably false information and/or content with the purpose of harming others; To generate or disseminate personal identifiable information that can be used to harm an individual; To defame, disparage or otherwise harass others; For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories; To provide medical advice and medical results interpretation; To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
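The merge recipe near the top of this card describes an add-difference checkpoint merge followed by a LoRA application. The sketch below illustrates only the add-difference step over raw state dicts; the filenames are placeholders, and SDXL merges are normally done with dedicated merging tools rather than this bare loop.

```python
from safetensors.torch import load_file, save_file

# Placeholder filenames for illustration only.
base = load_file("animagine.safetensors")        # model A
other = load_file("cashmoney.safetensors")       # model B
ref = load_file("sd_xl_base_1.0.safetensors")    # shared ancestor C
alpha = 0.5

merged = {}
for key, tensor in base.items():
    if key in other and key in ref:
        # Add-difference merge: A + alpha * (B - C)
        merged[key] = tensor + alpha * (other[key] - ref[key])
    else:
        merged[key] = tensor

save_file(merged, "merged_add_difference.safetensors")
```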
b-fujino/LUM_13bfi_1000_B2
b-fujino
"2024-06-23T05:05:59Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T04:59:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
surya-narayanan/computer_science
surya-narayanan
"2024-06-23T05:13:44Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T04:59:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AvocadoTreeABC/VoiceOverModel
AvocadoTreeABC
"2024-06-23T05:03:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:01:01Z"
Entry not found
inflaton/Qwen2-0.5B-Instruct-MAC-lora
inflaton
"2024-06-25T18:01:44Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/qwen2-0.5b-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-23T05:01:10Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft base_model: unsloth/qwen2-0.5b-instruct-bnb-4bit --- # Uploaded model - **Developed by:** inflaton - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-0.5b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RUXHIR2828/DorothyMannine
RUXHIR2828
"2024-06-23T05:01:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:01:33Z"
Entry not found
Radmir-Gabidullin/Ariana_Grande
Radmir-Gabidullin
"2024-06-23T05:25:47Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T05:02:43Z"
--- license: openrail ---
itay-nakash/model_e4ad58a464_sweep_lucky-universe-801
itay-nakash
"2024-06-23T05:03:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:03:14Z"
Entry not found
camenduru/EvTexture-2b
camenduru
"2024-06-23T05:04:53Z"
0
2
null
[ "region:us" ]
null
"2024-06-23T05:03:58Z"
Entry not found
Anjali17/chatbot
Anjali17
"2024-06-23T05:04:27Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-06-23T05:04:27Z"
--- license: llama3 ---
itay-nakash/model_e4ad58a464_sweep_fanciful-dew-802
itay-nakash
"2024-06-23T05:04:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:04:54Z"
Entry not found
Wawaworker/mrnx
Wawaworker
"2024-06-23T05:31:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:04:59Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_colorful-dragon-803
itay-nakash
"2024-06-23T05:07:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:07:46Z"
Entry not found
Ariffiq99/e_care_COPA_xlm_roberta_base_finetuned
Ariffiq99
"2024-06-23T05:08:55Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/COPA_xlm_roberta_base_finetuned", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-06-23T05:08:25Z"
--- license: mit base_model: Ariffiq99/COPA_xlm_roberta_base_finetuned tags: - generated_from_trainer metrics: - f1 model-index: - name: e_care_COPA_xlm_roberta_base_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e_care_COPA_xlm_roberta_base_finetuned This model is a fine-tuned version of [Ariffiq99/COPA_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/COPA_xlm_roberta_base_finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6032 - F1: 0.6998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6946 | 1.0 | 933 | 0.6225 | 0.6272 | | 0.6224 | 2.0 | 1866 | 0.5700 | 0.6880 | | 0.5761 | 3.0 | 2799 | 0.5574 | 0.6989 | | 0.5266 | 4.0 | 3732 | 0.5611 | 0.7008 | | 0.4866 | 5.0 | 4665 | 0.5616 | 0.6993 | | 0.4556 | 6.0 | 5598 | 0.5774 | 0.7031 | | 0.4245 | 7.0 | 6531 | 0.6032 | 0.6998 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
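The card reports only training metrics. A minimal inference sketch for a multiple-choice checkpoint like this one could look as follows; the premise/choice pair is an invented example, and the exact pairing format used during fine-tuning is an assumption.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "Ariffiq99/e_care_COPA_xlm_roberta_base_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Pair the premise with each choice; multiple-choice models expect
# inputs of shape (batch_size, num_choices, sequence_length).
encoded = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print("Predicted choice:", choices[logits.argmax(dim=-1).item()])
```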
EmergenceAI/RAG-response-generation-model-v1
EmergenceAI
"2024-06-23T05:42:08Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "RAG", "EmergenceAI", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T05:08:54Z"
--- license: apache-2.0 tags: - RAG - EmergenceAI inference: false --- # Emergence-RAG-response-generation Emergence-RAG-response-generation is a 13b parameter decoder-style transformer model for RAG applications. It is fine-tuned from a [llama2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) base-model. This model was trained by [Emergence AI](https://www.emergence.ai/). Emergence-RAG-response-generation is part of the family of Emergence models designed specifically for use in RAG applications. Emergence-RAG-response-generation is a corpus-grounded question-answering model that grounds answers in the provided information snippets. A typical use-case is as part of a larger retrieval-based corpus-grounded dialog system. ## Model Date August 8, 2023 ## Model License Apache-2.0 ## Usage Loading model and tokenizer: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "EmergenceAI/RAG-response-generation-model-v1" device = torch.device("cuda:0") # change device id as necessary model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True) model.to(device) # move to device ``` Prompt example: ```python info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium. Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary system we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed. Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered. Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas. ''' qs = "Question:\tHow old is the Solar System?" 
prompt = tokenizer.bos_token prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given. Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property. Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string. Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property. ''' + info + qs + " Response:" ``` We recommend using newline character for stopping criterion, as follows: ```python from transformers import StoppingCriteria, StoppingCriteriaList eos_tokens = [tokenizer.eos_token,'\n'] eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens] class MultipleEOSTokensStoppingCriteria(StoppingCriteria): def __init__(self, eos_token_ids): self.eos_token_ids = set(eos_token_ids) def __call__(self, input_ids, scores) -> bool: if input_ids.shape[-1] <= 1: return False for eos_token_id in self.eos_token_ids: if eos_token_id == input_ids[0, -1].item(): return True return False # Define stopping criteria multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids) stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor]) ``` Inference: ```python inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(device) generate_ids = model.generate( **inputs, max_new_tokens=1024, temperature=0.0, num_beams=2, top_p=1, stopping_criteria=stopping_criteria ) response = tokenizer.decode(generate_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True) ``` Example output (after response processing): ```json [{"answerable": "true", "answer": "4.6 billion years"}] ```
itay-nakash/model_e4ad58a464_sweep_efficient-firefly-804
itay-nakash
"2024-06-23T05:10:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:10:00Z"
Entry not found
SAIL-UoA/deberta-v2-xxlarge-our-model-cycorp-bs-1-lr-3e-6-version1
SAIL-UoA
"2024-06-23T05:11:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:11:52Z"
Entry not found
DBangshu/Base_gemma_e5_5_2
DBangshu
"2024-06-23T05:15:38Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T05:13:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
surya-narayanan/economics
surya-narayanan
"2024-06-23T05:34:35Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T05:14:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Razer112/Amodel
Razer112
"2024-06-23T18:55:31Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T05:15:11Z"
--- license: openrail ---
donutsan/FullViewRatePredictorV1
donutsan
"2024-06-23T05:16:11Z"
0
0
transformers
[ "transformers", "safetensors", "big_bird", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T05:15:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ozch/sgmb2
ozch
"2024-06-23T05:22:54Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T05:16:11Z"
--- license: openrail ---
hangzeli/musicgen-melody-lora-punk
hangzeli
"2024-06-23T10:55:21Z"
0
0
peft
[ "peft", "safetensors", "musicgen_melody", "text-to-audio", "ylacombe/tiny-punk", "generated_from_trainer", "base_model:facebook/musicgen-melody", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
"2024-06-23T05:16:41Z"
--- license: cc-by-nc-4.0 library_name: peft tags: - text-to-audio - ylacombe/tiny-punk - generated_from_trainer base_model: facebook/musicgen-melody model-index: - name: musicgen-melody-lora-punk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # musicgen-melody-lora-punk This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the YLACOMBE/TINY-PUNK - DEFAULT dataset. It achieves the following results on the evaluation set: - Loss: 6.2333 - Clap: -0.0021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 456 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Clap | |:-------------:|:------:|:----:|:---------------:|:-------:| | 7.1679 | 2.7778 | 25 | 6.2244 | -0.0018 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.0.0+cu117 - Datasets 2.19.2 - Tokenizers 0.19.1
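The card does not include a usage snippet. A hedged loading sketch is shown below; it assumes the adapter was saved in standard PEFT format against facebook/musicgen-melody, and the text prompt is an invented example.

```python
import torch
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

base_id = "facebook/musicgen-melody"
adapter_id = "hangzeli/musicgen-melody-lora-punk"

processor = AutoProcessor.from_pretrained(base_id)
model = MusicgenMelodyForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = processor(text=["fast, aggressive punk riff"], padding=True, return_tensors="pt")
with torch.no_grad():
    audio_values = model.generate(**inputs, max_new_tokens=256)
print(audio_values.shape)  # (batch_size, num_channels, num_samples)
```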
HFatimaZahra/fnt-correction-AceGPT-v1.5-13B-Chat
HFatimaZahra
"2024-06-23T05:22:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:22:58Z"
Entry not found
EmergenceAI/RAG-response-generation-model-v2
EmergenceAI
"2024-06-23T05:55:52Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "RAG", "EmergenceAI", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T05:23:19Z"
--- license: apache-2.0 tags: - RAG - EmergenceAI inference: false --- # Emergence-RAG-response-generation Emergence-RAG-response-generation is a 13b parameter decoder-style transformer model for RAG applications. It is fine-tuned from a [llama2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) base-model. This model was trained by [Emergence AI](https://www.emergence.ai/). Emergence-RAG-response-generation is part of the family of Emergence models designed specifically for use in RAG applications. Emergence-RAG-response-generation is a corpus-grounded question-answering model that grounds answers in the provided information snippets. A typical use-case is as part of a larger retrieval-based corpus-grounded dialog system. This version is upgraded version of the [v1 model](https://huggingface.co/EmergenceAI/RAG-response-generation-model-v1) and was improved by training it with additional "negative" samples so that the model understands how to reject some of the irrelevant details in the provided context. ## Model Date December 5, 2023 ## Model License Apache-2.0 ## Usage Loading model and tokenizer: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "EmergenceAI/RAG-response-generation-model-v2" device = torch.device("cuda:0") # change device id as necessary model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True) model.to(device) # move to device ``` Prompt example: ```python info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium. Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary system we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed. Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered. Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. 
The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas. ''' qs = "Question:\tHow old is the Solar System?" prompt = tokenizer.bos_token prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given. Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property. Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string. Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property. ''' + info + qs + " Response:" ``` We recommend using newline character for stopping criterion, as follows: ```python from transformers import StoppingCriteria, StoppingCriteriaList eos_tokens = [tokenizer.eos_token,'\n'] eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens] class MultipleEOSTokensStoppingCriteria(StoppingCriteria): def __init__(self, eos_token_ids): self.eos_token_ids = set(eos_token_ids) def __call__(self, input_ids, scores) -> bool: if input_ids.shape[-1] <= 1: return False for eos_token_id in self.eos_token_ids: if eos_token_id == input_ids[0, -1].item(): return True return False # Define stopping criteria multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids) stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor]) ``` Inference: ```python inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(device) generate_ids = model.generate( **inputs, max_new_tokens=1024, temperature=0.0, num_beams=2, top_p=1, stopping_criteria=stopping_criteria ) response = tokenizer.decode(generate_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True) ``` Example output (after response processing): ```json {"answerable": "True", "answer": "The Solar System is about 4.6 billion years old."}
yueqingyou/BioQwen-0.5B-q4f16_1-mlc
yueqingyou
"2024-06-30T13:59:16Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:24:00Z"
Entry not found
Ariffiq99/e_care_COPA_xlm_roberta_large_finetuned
Ariffiq99
"2024-06-23T05:26:45Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/COPA_xlm_roberta_base_finetuned", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-06-23T05:26:15Z"
--- license: mit base_model: Ariffiq99/COPA_xlm_roberta_base_finetuned tags: - generated_from_trainer metrics: - f1 model-index: - name: e_care_COPA_xlm_roberta_large_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e_care_COPA_xlm_roberta_large_finetuned This model is a fine-tuned version of [Ariffiq99/COPA_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/COPA_xlm_roberta_base_finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5818 - F1: 0.6744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6938 | 1.0 | 933 | 0.6374 | 0.5999 | | 0.6404 | 2.0 | 1866 | 0.6066 | 0.6329 | | 0.62 | 3.0 | 2799 | 0.5958 | 0.6550 | | 0.5887 | 4.0 | 3732 | 0.5867 | 0.6659 | | 0.5705 | 5.0 | 4665 | 0.5818 | 0.6744 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
HFatimaZahra/HFatimaZahra
HFatimaZahra
"2024-06-23T05:28:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:28:12Z"
Entry not found
jschoormans/model_out_sdxl
jschoormans
"2024-06-23T05:29:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:29:58Z"
Entry not found
Amarjeet9/customgpt
Amarjeet9
"2024-06-23T05:32:57Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-06-23T05:32:57Z"
--- license: llama3 ---
wallaceblaia/whisper-large-v3-icm-novo
wallaceblaia
"2024-06-23T05:34:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:34:07Z"
Entry not found
Ariffiq99/e_care_COPA_albert_base_finetuned
Ariffiq99
"2024-06-23T05:58:59Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/COPA_Albert_Base_finetuned", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-06-23T05:34:08Z"
--- license: apache-2.0 base_model: Ariffiq99/COPA_Albert_Base_finetuned tags: - generated_from_trainer metrics: - f1 model-index: - name: e_care_COPA_albert_base_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e_care_COPA_albert_base_finetuned This model is a fine-tuned version of [Ariffiq99/COPA_Albert_Base_finetuned](https://huggingface.co/Ariffiq99/COPA_Albert_Base_finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3000 - F1: 0.7385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5456 | 1.0 | 933 | 0.4938 | 0.7418 | | 0.3548 | 2.0 | 1866 | 0.5306 | 0.7545 | | 0.1696 | 3.0 | 2799 | 0.7674 | 0.7347 | | 0.0565 | 4.0 | 3732 | 0.9047 | 0.7535 | | 0.018 | 5.0 | 4665 | 1.0986 | 0.7413 | | 0.005 | 6.0 | 5598 | 1.2507 | 0.7455 | | 0.0022 | 7.0 | 6531 | 1.3000 | 0.7385 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
surya-narayanan/engineering
surya-narayanan
"2024-06-24T22:34:28Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T05:34:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NoNameFactory/llama-3-8b-4bit-ContdPT_1_10_noEOS
NoNameFactory
"2024-06-23T05:39:57Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T05:35:07Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** NoNameFactory - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
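The card stops at the Unsloth badge; a minimal loading sketch, assuming the repository can be loaded directly through Unsloth's `FastLanguageModel` as in the upstream Unsloth examples (whether it holds merged weights or a LoRA adapter is not stated on the card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NoNameFactory/llama-3-8b-4bit-ContdPT_1_10_noEOS",  # repo id from this card
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("The continued pretraining corpus covered", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```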
jddllwqa/Qwen-Qwen1.5-0.5B-1719120921
jddllwqa
"2024-06-23T05:35:31Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-23T05:35:22Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
HFatimaZahra/fnt_correction-AceGPT-v1.5-13B-Chat
HFatimaZahra
"2024-06-23T05:36:28Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:36:28Z"
Entry not found
jddllwqa/Qwen-Qwen1.5-1.8B-1719120992
jddllwqa
"2024-06-23T05:36:38Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-06-23T05:36:32Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
HFatimaZahra/fintuning_correction-AceGPT-v1.5-13B-Chat
HFatimaZahra
"2024-06-23T05:36:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:36:47Z"
Entry not found
surya-narayanan/health
surya-narayanan
"2024-06-23T06:29:21Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T05:37:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jddllwqa/google-gemma-2b-1719121044
jddllwqa
"2024-06-23T05:37:33Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "region:us" ]
null
"2024-06-23T05:37:24Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Bryan32/ChineseDonghua
Bryan32
"2024-07-01T02:42:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:37:30Z"
Entry not found
Laquehay/Inicio
Laquehay
"2024-06-23T05:38:22Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T05:38:22Z"
--- license: apache-2.0 ---
jitele2207/ViRV30B3
jitele2207
"2024-06-23T05:43:57Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-23T05:43:57Z"
--- license: mit ---
sandvichxyz/udisen
sandvichxyz
"2024-06-23T05:58:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T05:45:29Z"
Entry not found
jddllwqa/Qwen-Qwen1.5-7B-1719121771
jddllwqa
"2024-06-23T05:49:36Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "region:us" ]
null
"2024-06-23T05:49:32Z"
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
logtd/cosmicman_comfy
logtd
"2024-06-23T05:59:36Z"
0
2
null
[ "region:us" ]
null
"2024-06-23T05:52:12Z"
[CosmicMan](https://github.com/cosmicman-cvpr2024/CosmicMan) converted to the ComfyUI model architecture
prakharprakhar/llama2_pos
prakharprakhar
"2024-06-23T06:27:21Z"
0
0
null
[ "safetensors", "license:llama2", "region:us" ]
null
"2024-06-23T05:53:03Z"
--- license: llama2 ---
mjfan1999/JasonAldean2023
mjfan1999
"2024-06-23T06:00:57Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-23T05:53:41Z"
--- license: unknown ---
yueqingyou/BioQwen-1.8B-q4f16_1-mlc
yueqingyou
"2024-06-30T13:58:37Z"
0
0
null
[ "BioQwen", "1.8B", "Biomedical", "MLC-LLM", "en", "zh", "dataset:yueqingyou/BioQwen", "license:apache-2.0", "region:us" ]
null
"2024-06-23T05:55:36Z"
--- license: apache-2.0 datasets: - yueqingyou/BioQwen language: - en - zh tags: - BioQwen - 1.8B - Biomedical - MLC-LLM --- # Model Card for BioQwen BioQwen: A Small-Parameter, High-Performance Bilingual Model for Biomedical Multi-Tasks ## Model Details ### Model Description - **Developed by:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data ### Results [More Information Needed] ## Citation [optional] **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
okkokko/sduLLM
okkokko
"2024-06-23T06:00:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:00:01Z"
Entry not found
HikariLight/Mistral-3E-DW-DS-3
HikariLight
"2024-06-23T06:03:16Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T06:02:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DBangshu/Base_gemma_e5_6_2
DBangshu
"2024-06-23T06:05:00Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T06:02:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jsoo/Llama3-re-test6
Jsoo
"2024-06-23T06:03:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:03:27Z"
Entry not found
grid7/test
grid7
"2024-06-23T06:07:11Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T06:07:11Z"
--- license: apache-2.0 ---
Ariffiq99/e_care_KUCI_albert_base_finetuned
Ariffiq99
"2024-06-23T06:32:48Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/KUCI_albert_base_Finetuned", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-06-23T06:07:16Z"
--- license: apache-2.0 base_model: Ariffiq99/KUCI_albert_base_Finetuned tags: - generated_from_trainer metrics: - f1 model-index: - name: e_care_KUCI_albert_base_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e_care_KUCI_albert_base_finetuned This model is a fine-tuned version of [Ariffiq99/KUCI_albert_base_Finetuned](https://huggingface.co/Ariffiq99/KUCI_albert_base_Finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4966 - F1: 0.7253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5723 | 1.0 | 933 | 0.5206 | 0.7201 | | 0.3965 | 2.0 | 1866 | 0.5367 | 0.7422 | | 0.218 | 3.0 | 2799 | 0.7913 | 0.7337 | | 0.0863 | 4.0 | 3732 | 1.0507 | 0.7366 | | 0.0285 | 5.0 | 4665 | 1.3223 | 0.7286 | | 0.0082 | 6.0 | 5598 | 1.4432 | 0.7248 | | 0.0029 | 7.0 | 6531 | 1.4966 | 0.7253 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
himychae/krbert-punctuation-restoration-seed
himychae
"2024-06-23T06:11:59Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-06-23T06:11:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Adeptschneider/dyu-fr-opus-v1.0
Adeptschneider
"2024-06-23T06:22:37Z"
0
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "base_model:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-23T06:12:00Z"
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_keras_callback model-index: - name: Adeptschneider/dyu-fr-opus-v1.0 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Adeptschneider/dyu-fr-opus-v1.0 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7977 - Validation Loss: 3.2686 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.2450 | 3.6414 | 0 | | 3.4962 | 3.4432 | 1 | | 3.1166 | 3.3371 | 2 | | 2.7977 | 3.2686 | 3 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
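The card does not include an inference snippet; a minimal sketch, assuming the repository's TensorFlow Marian weights load through the standard auto classes (the input sentence is illustrative only):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Adeptschneider/dyu-fr-opus-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

batch = tokenizer(["i ni ce"], return_tensors="tf")      # illustrative Dyula input
generated = model.generate(**batch, max_new_tokens=40)   # translate to French
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```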
openlynn/Soliloquy-7B-v3
openlynn
"2024-06-23T06:16:49Z"
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-06-23T06:16:49Z"
--- license: cc-by-nc-sa-4.0 ---
hishamcse/RND-MontezumaRevengeNoframeSkip-v4
hishamcse
"2024-06-25T08:17:37Z"
0
0
null
[ "reinforcement-learning", "deep-reinforcement-learning", "MontezumaRevengeNoFrameskip-v4", "RND", "CNN", "model-index", "region:us" ]
reinforcement-learning
"2024-06-23T06:18:27Z"
---
tags:
- reinforcement-learning
- deep-reinforcement-learning
- MontezumaRevengeNoFrameskip-v4
- RND
- CNN
model-index:
- name: RND-MontezumaRevengeNoframeSkip-v4
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: MontezumaRevengeNoFrameskip-v4
      type: MontezumaRevengeNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 550.00 +/- 986.15
      name: mean_reward
      verified: false
---

# **RND with CNN** Agent playing **MontezumaRevengeNoFrameskip-v4**

This is a trained model of an **RND-CNN** agent playing **MontezumaRevengeNoFrameskip-v4**.

To learn how to use this model and train your own, check this notebook on Kaggle: https://www.kaggle.com/code/syedjarullahhisham/drl-extra-personal-unit-5-rnd-montezuma-mario-bros
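The entry above describes a Random Network Distillation (RND) agent with a CNN but leaves the method to the linked notebook. As rough orientation, the sketch below shows the core RND idea: a frozen, randomly initialized target network and a trained predictor whose error on new observations serves as the intrinsic (exploration) reward. The architecture and frame shape are illustrative assumptions, not the author's exact code.

```python
# Hedged sketch of the RND intrinsic reward (not the author's implementation).
import torch
import torch.nn as nn

def small_cnn(out_dim=512):
    # Illustrative encoder for 84x84 grayscale frames; the linked notebook defines its own.
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 20 -> 9
        nn.Flatten(),
        nn.Linear(64 * 9 * 9, out_dim),
    )

target = small_cnn()      # frozen, randomly initialized network
predictor = small_cnn()   # trained to imitate the target's features
for p in target.parameters():
    p.requires_grad_(False)

def intrinsic_reward(next_obs):
    # next_obs: (batch, 1, 84, 84) normalized grayscale frames
    with torch.no_grad():
        target_feat = target(next_obs)
    pred_feat = predictor(next_obs)
    # Per-sample prediction error doubles as the exploration bonus
    # and (averaged) as the predictor's training loss.
    return ((pred_feat - target_feat) ** 2).mean(dim=1)

reward = intrinsic_reward(torch.randn(4, 1, 84, 84))  # higher = more novel
```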
caspro/mbart-large-50_Nepali_News_Summarization
caspro
"2024-06-23T06:22:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:22:33Z"
Entry not found
byos/PoePlayv1
byos
"2024-06-23T06:31:15Z"
0
0
adapter-transformers
[ "adapter-transformers", "text-to-image", "en", "dataset:NousResearch/CharacterCodex", "license:apache-2.0", "region:us" ]
text-to-image
"2024-06-23T06:25:09Z"
---
license: apache-2.0
datasets:
- NousResearch/CharacterCodex
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-to-image
---
Vk357/DistilBertDPOModel
Vk357
"2024-06-23T06:28:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:28:34Z"
Entry not found
surya-narayanan/history
surya-narayanan
"2024-06-23T13:40:39Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T06:31:12Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Solddrem/basem
Solddrem
"2024-06-23T06:31:56Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-23T06:31:56Z"
--- license: mit ---
Countigo/clip-roberta-finetuned
Countigo
"2024-06-23T06:34:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:34:10Z"
Entry not found
Artguy32/detail_tweaker_lora
Artguy32
"2024-06-23T06:36:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:35:36Z"
Entry not found
siacus/llama-2-70b-cap_v2
siacus
"2024-06-23T06:43:37Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:43:37Z"
Entry not found
fushenshen/lession_model
fushenshen
"2024-06-23T07:15:18Z"
0
1
null
[ "license:mit", "region:us" ]
null
"2024-06-23T06:45:30Z"
--- license: mit ---
ZeroZYbgp/pocket_nahida2-1.5b-lora
ZeroZYbgp
"2024-06-23T07:04:30Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-23T06:46:09Z"
# GitHub

https://github.com/ZeroZY-bgp/pocket_nahida
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_2.6bpw
SicariusSicariiStuff
"2024-06-23T06:46:39Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T06:46:39Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.0bpw
SicariusSicariiStuff
"2024-06-23T08:36:51Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T06:47:08Z"
--- license: apache-2.0 ---
itay-nakash/model_e4ad58a464_sweep_firm-surf-819
itay-nakash
"2024-06-23T06:47:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:47:27Z"
Entry not found
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw
SicariusSicariiStuff
"2024-06-23T08:32:17Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-23T06:48:25Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.0bpw
SicariusSicariiStuff
"2024-06-23T08:26:28Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T06:48:50Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.5bpw
SicariusSicariiStuff
"2024-06-23T08:42:49Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-23T06:49:11Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.0bpw
SicariusSicariiStuff
"2024-06-23T08:22:53Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T06:49:26Z"
Entry not found
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.5bpw
SicariusSicariiStuff
"2024-06-23T13:58:47Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-23T06:49:35Z"
Entry not found
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.0bpw
SicariusSicariiStuff
"2024-06-23T13:56:54Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "7-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T06:50:33Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.5bpw
SicariusSicariiStuff
"2024-06-23T13:40:31Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-23T06:50:45Z"
--- license: apache-2.0 ---
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_8.0bpw
SicariusSicariiStuff
"2024-06-23T13:18:02Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T06:51:01Z"
--- license: apache-2.0 ---
DavidMazur/Accel_3B_V1
DavidMazur
"2024-06-23T06:51:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T06:51:47Z"
Entry not found
zhiyuanyou/DepictQA2-DQ495K-QInst
zhiyuanyou
"2024-06-23T07:23:32Z"
0
0
transformers
[ "transformers", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T07:00:25Z"
---
license: apache-2.0
---

DepictQA model weights trained on the **DQ495K** dataset and the **Q-Instruct** dataset.

See https://github.com/XPixelGroup/DepictQA for details.
starnet/05-star-06-23-02
starnet
"2024-06-23T07:13:41Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-23T07:10:57Z"
Entry not found
HikariLight/Mistral-3E-DW-DS-5
HikariLight
"2024-06-23T07:25:05Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T07:24:15Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
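The "How to Get Started" section of the card above is still a placeholder. The following is a hedged starter sketch only: it assumes, based on the repository name, that this is a Mistral-style causal language model loadable with the standard Transformers auto classes, and the prompt is purely illustrative.

```python
# Hedged starter sketch; the model type is an assumption inferred from the repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HikariLight/Mistral-3E-DW-DS-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```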
anhng94/MetaMath-LoRA-bs32-200it
anhng94
"2024-06-23T07:28:20Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:28:20Z"
Entry not found
Yoka95/WF_Bot
Yoka95
"2024-06-23T07:32:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:32:36Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_solar-snowflake-820
itay-nakash
"2024-06-23T07:32:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:32:49Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_fanciful-wind-821
itay-nakash
"2024-06-23T07:36:02Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:36:02Z"
Entry not found
iamalexcaspian/LynnLoudJr-TLH
iamalexcaspian
"2024-06-23T09:34:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:36:40Z"
Entry not found
StoneTZHENG/rloo_tldr1
StoneTZHENG
"2024-06-23T07:38:09Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T07:36:41Z"
---
tags:
- generated_from_trainer
model-index:
- name: rloo_tldr1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# rloo_tldr1

This model was trained from scratch on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
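As a quick sanity check on the hyperparameters reported above (not part of the original card), the total batch sizes follow from the per-device settings, the number of devices, and gradient accumulation:

```python
# Illustrative arithmetic only: how the reported totals relate to the per-device values.
train_batch_size = 16              # per device
eval_batch_size = 8                # per device
num_devices = 2
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

assert total_train_batch_size == 512   # matches the value reported above
assert total_eval_batch_size == 16     # matches the value reported above
```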
itay-nakash/model_e4ad58a464_sweep_vibrant-cloud-822
itay-nakash
"2024-06-23T07:37:40Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:37:40Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_confused-breeze-823
itay-nakash
"2024-06-23T07:39:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:39:25Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_hardy-grass-824
itay-nakash
"2024-06-23T07:40:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T07:40:57Z"
Entry not found