modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
thomas27530/stef | thomas27530 | "2024-06-23T18:07:06Z" | 0 | 0 | flair | [
"flair",
"medical",
"automatic-speech-recognition",
"aa",
"af",
"dataset:ShareGPT4Video/ShareGPT4Video",
"arxiv:1910.09700",
"license:artistic-2.0",
"region:us"
] | automatic-speech-recognition | "2024-06-23T18:05:26Z" | ---
license: artistic-2.0
datasets:
- ShareGPT4Video/ShareGPT4Video
language:
- aa
- af
metrics:
- character
library_name: flair
pipeline_tag: automatic-speech-recognition
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sudhakar6/example-model | Sudhakar6 | "2024-06-23T18:48:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:06:02Z" |
---
license: mit
---
This is a model card README.
|
manbeast3b/KinoInferTry13 | manbeast3b | "2024-06-23T18:08:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:08:22Z" | Entry not found |
jamshidahmadov/uzbek_tts | jamshidahmadov | "2024-06-23T18:10:12Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-23T18:10:12Z" | ---
license: mit
---
|
CoderJackZhu/eames_chair_result | CoderJackZhu | "2024-06-23T18:24:00Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-23T18:12:12Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: a photo of sks eames chair
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - CoderJackZhu/eames_chair_result
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks eames chair" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (an assumption, not the authors' snippet; the card left
# this as a TODO): standard diffusers usage for a DreamBooth checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CoderJackZhu/eames_chair_result", torch_dtype=torch.float16
).to("cuda")
pipe("a photo of sks eames chair").images[0].save("eames_chair.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SAlonsoGar/AutoTrainedModelGamification | SAlonsoGar | "2024-06-23T18:15:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-06-23T18:13:16Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Stephanie-S/roberta_base | Stephanie-S | "2024-06-23T18:30:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-23T18:14:11Z" | ---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_base
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1851
- Accuracy: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
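For reference, the hyperparameters above correspond to a `TrainingArguments` configuration along these lines (a reconstruction from the list above; the original training script was not published with this card):

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="roberta_base",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```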
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2379 | 1.0 | 625 | 0.1391 | 0.9502 |
| 0.1424 | 2.0 | 1250 | 0.1851 | 0.9542 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
abo-elfath/Arabic_mbart | abo-elfath | "2024-06-23T18:28:07Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-23T18:15:01Z" | Entry not found |
blockblockblock/llama3-turbcat-instruct-8b-bpw4.8-exl2 | blockblockblock | "2024-06-23T18:17:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T18:15:19Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB -> 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (Aesir dataset). Note that the 8B is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8B is llama3; the 72B has more comprehensive Chinese support and uses chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese and 10 English-speaking doctors specializing in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support of roleplay as api usage. When roleplaying as an API or function, the model does not produce irrelevant content that's not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP projection:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
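For illustration, the two raw formats above can be assembled with small helper functions. These helpers are not part of the model release; the token strings simply follow the examples shown above.

```python
# Hypothetical helpers, shown only to make the two raw formats concrete.
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a raw llama3-format prompt (the 8b chat format)."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a raw chatml-format prompt (the 72b chat format)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

In practice, prefer the tokenizer's built-in chat template over hand-built strings when one is available.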
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
sirishgam001/videomae-finetuned-engagenet-full | sirishgam001 | "2024-06-25T04:36:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-23T18:22:18Z" | Entry not found |
bigstorm/Hermes-2-Theta-Llama-3-8B-8.0bpw-8hb-exl2 | bigstorm | "2024-06-23T18:51:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-23T18:27:13Z" | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
license: apache-2.0
---
# Exl2 Quantization 8.0BPW 8HB
- 8 head bits
- 8.0 bpw target
Enjoy! Feel free to reach out for other quants or BPW levels.
# Hermes-2 Θ Llama-3 8B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png)
## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
## Example Outputs
### Create New Mythos:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png)
### Chat with a Meta-Cognitive Entity
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png)
### Ask for a structured JSON output:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png)
# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens are added to denote the beginning and end of each turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will find the format familiar, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model first (same repo as in the inference example).
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Theta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Theta-Llama-3-8B")

messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature JSON, as the example below shows.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
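The parsing step can be sketched as follows. This is only an illustration; the official parsing and templating code lives in the Hermes-Function-Calling repo linked above.

```python
import json
import re

def parse_tool_calls(completion: str) -> list:
    """Extract the JSON payloads from <tool_call>...</tool_call> spans."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(payload) for payload in pattern.findall(completion)]

completion = (
    '<tool_call>\n'
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    '</tool_call><|im_end|>'
)
calls = parse_tool_calls(completion)
# calls == [{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}]
```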
Once you parse the tool call, call the API, get the returned values for the call, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
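As a concrete sketch, the JSON-mode system prompt can be assembled from any JSON schema like so. The schema below is a hypothetical example for illustration; in practice, derive it from a pydantic object with the `jsonmode.py` script linked above.

```python
import json

def build_json_mode_prompt(schema: dict) -> str:
    """Embed a JSON schema into the JSON-mode system prompt shown above."""
    return (
        "<|im_start|>system\n"
        "You are a helpful assistant that answers in JSON. "
        "Here's the json schema you must adhere to:\n<schema>\n"
        + json.dumps(schema)
        + "\n</schema><|im_end|>"
    )

# Hypothetical schema for illustration only.
example_schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}
system_prompt = build_json_mode_prompt(example_schema)
```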
# Benchmarks
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png)
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn

tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Theta-Llama-3-8B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)
# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Theta. LM Studio does not support function calling; for that, use our GitHub repo above. It is a GUI application that utilizes GGUF models with a llama.cpp backend, provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)
## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
  title={Hermes-2-Theta-Llama-3-8B},
  author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
``` |
jorker121/onychomycosisModels | jorker121 | "2024-06-23T20:35:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:27:56Z" | Entry not found |
Ejafa/chameleon-7b | Ejafa | "2024-06-23T19:07:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:28:45Z" | always ggez |
MichalMlodawski/open-closed-eye-detection | MichalMlodawski | "2024-06-23T18:58:06Z" | 0 | 0 | null | [
"onnx",
"safetensors",
"yolov8",
"object-detection",
"en",
"dataset:MichalMlodawski/closed-open-eyes",
"license:cc-by-nc-sa-4.0",
"region:us"
] | object-detection | "2024-06-23T18:28:57Z" | ---
license: cc-by-nc-sa-4.0
pipeline_tag: object-detection
tags:
- yolov8
- object-detection
datasets:
- MichalMlodawski/closed-open-eyes
language:
- en
---
**Links to Space:**
https://huggingface.co/spaces/MichalMlodawski/closed-open-eyes-detection
**Eval:**
| Epoch | Train Box Loss | Train Cls Loss | Train DFL Loss | Precision (B) | Recall (B) | mAP50 (B) | mAP50-95 (B) | Val Box Loss | Val Cls Loss | Val DFL Loss | LR PG0 | LR PG1 | LR PG2 |
|-------|----------------|----------------|----------------|---------------|------------|-----------|--------------|--------------|--------------|--------------|--------|--------|--------|
| 100 | 1.0201 | 0.4718 | 0.84219 | 0.95394 | 0.93356 | 0.96767 | 0.66184 | 0.98246 | 0.45574 | 0.83703 | 0.000199 | 0.000199 | 0.000199 |
**Example code to run the model:**

```python
import os
from pathlib import Path
from ultralytics import YOLO
import cv2
import logging
import argparse

def setup_logging():
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(levelname)s - %(message)s')

def process_images(model_path, test_images_path):
    try:
        # Path to the results directory
        results_path = os.path.join(test_images_path, 'result')
        # Create the results folder
        os.makedirs(results_path, exist_ok=True)
        logging.info(f'Created results directory: {results_path}')
        # Load the model
        model = YOLO(model_path)
        logging.info(f'Loaded model from: {model_path}')
        # Process images
        for img_file in Path(test_images_path).glob('*.*'):
            if img_file.suffix.lower() in ['.jpg', '.jpeg', '.png']:  # Supports JPG, JPEG, and PNG formats
                logging.info(f'Processing file: {img_file}')
                # Detect objects in the image
                results = model(img_file)
                for result in results:
                    # Get the result image with detections drawn
                    result_img = result.plot()
                    # Save the result image to the results folder
                    result_image_path = os.path.join(results_path, img_file.name)
                    cv2.imwrite(result_image_path, result_img)
                    logging.info(f'Saved result image to: {result_image_path}')
        logging.info("Image processing completed.")
    except Exception as e:
        logging.error(f'An error occurred: {e}')

def main():
    parser = argparse.ArgumentParser(description='Process images using YOLO model.')
    parser.add_argument('model_path', type=str, help='Path to the YOLO model.')
    parser.add_argument('test_images_path', type=str, help='Path to the directory containing test images.')
    args = parser.parse_args()
    setup_logging()
    process_images(args.model_path, args.test_images_path)

if __name__ == "__main__":
    main()
```

**Command to run the program:**

```
python script_name.py path/to/your/yolo_model.pt path/to/test/images
```
|
blockblockblock/llama3-turbcat-instruct-8b-bpw4.6-exl2 | blockblockblock | "2024-06-23T18:31:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T18:29:37Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB -> 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomed, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
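The PCA visualization step described here can be sketched in a few lines; this is a minimal illustration only, where the random matrix stands in for real BERT embeddings (assumed to be precomputed), and a real pipeline would likely use scikit-learn or UMAP instead:

```python
# Minimal PCA projection sketch (illustrative; real QC would start from
# actual BERT embeddings rather than random vectors).
import numpy as np

def pca_2d(embeddings):
    """Project embedding vectors onto their first two principal components."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)                        # center each dimension
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                           # shape: (n_samples, 2)

# Stand-in for real BERT embeddings:
points = pca_2d(np.random.default_rng(0).normal(size=(100, 768)))
print(points.shape)  # (100, 2)
```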
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on an umap:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
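A raw ChatML prompt like the one above can be assembled with plain string formatting; a minimal sketch (the helper name and example messages are illustrative, not part of the model's tooling):

```python
def build_chatml_prompt(system, turns):
    """Assemble a raw ChatML prompt from a system message and a list of
    (role, content) turns, ending with an open assistant turn."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # leave the assistant turn open
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "CatGPT really likes its new cat ears and ends every message with Nyan_",
    [("user", "CatA: pats CatGPT cat ears")],
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from `transformers` handles this formatting for you; the sketch just makes the token layout explicit.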
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-test-gguf | RichardErkhov | "2024-06-23T18:29:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:29:59Z" | Entry not found |
Yuah/morgana | Yuah | "2024-06-23T18:32:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:32:27Z" | Entry not found |
anjunchen/llama-3-8bi-dx-tuned_model | anjunchen | "2024-06-23T18:33:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T18:33:00Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** anjunchen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rafatsiddiqui/Meta-Llama-3-8B-SST-FineTune-16bit | rafatsiddiqui | "2024-06-24T02:18:55Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T18:39:51Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** rafatsiddiqui
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Unknown6197/res_classification | Unknown6197 | "2024-06-25T22:11:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:41:41Z" | Entry not found |
mimiklee/longt5-tglobal-large-16384-pubmed-3k_steps-finetuned-try | mimiklee | "2024-06-23T19:02:59Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:Stancld/longt5-tglobal-large-16384-pubmed-3k_steps",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-23T18:42:02Z" | ---
license: apache-2.0
base_model: Stancld/longt5-tglobal-large-16384-pubmed-3k_steps
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: longt5-tglobal-large-16384-pubmed-3k_steps-finetuned-try
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5-tglobal-large-16384-pubmed-3k_steps-finetuned-try
This model is a fine-tuned version of [Stancld/longt5-tglobal-large-16384-pubmed-3k_steps](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4908
- Rouge1: 9.8332
- Rouge2: 4.5131
- Rougel: 7.9018
- Rougelsum: 9.1249
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.6842 | 9.7994 | 4.6182 | 8.4691 | 9.1692 | 19.0 |
| No log | 2.0 | 50 | 3.5808 | 10.1729 | 4.5409 | 8.4082 | 9.3547 | 19.0 |
| No log | 3.0 | 75 | 3.5255 | 10.2028 | 4.6251 | 8.3879 | 9.4187 | 19.0 |
| No log | 4.0 | 100 | 3.4989 | 9.8481 | 4.4739 | 7.8605 | 9.1653 | 19.0 |
| No log | 5.0 | 125 | 3.4908 | 9.8332 | 4.5131 | 7.9018 | 9.1249 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
blockblockblock/llama3-turbcat-instruct-8b-bpw4.4-exl2 | blockblockblock | "2024-06-23T18:46:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T18:43:52Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB -> 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomed, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support of roleplay as api usage. When roleplaying as an API or function, the model does not produce irrelevant content that's not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after bert embedding
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on an umap:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Thirdparty dataset
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
sigaldanilov/distilbert-base-uncased-finetuned-ner | sigaldanilov | "2024-06-23T18:44:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:44:47Z" | Entry not found |
rllover123/q-FrozenLake-v1-4x4-noSlippery | rllover123 | "2024-06-23T18:46:17Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T18:46:15Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rllover123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
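Once loaded, the greedy policy can be rolled out for a few episodes to check the mean reward; a minimal sketch, assuming the pickled dict exposes the Q-table and env id under `"qtable"`/`"env_id"` keys (key names vary by artifact, so check yours):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

def evaluate(qtable, env, n_episodes=10):
    """Roll out greedy episodes and return the mean episodic reward."""
    totals = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        total, done = 0.0, False
        while not done:
            state, reward, terminated, truncated, _ = env.step(greedy_action(qtable, state))
            total += reward
            done = terminated or truncated
        totals.append(total)
    return sum(totals) / len(totals)

# e.g. evaluate(model["qtable"], gym.make(model["env_id"]))  # key names are assumptions
```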
|
JapGuy/ARIZONA | JapGuy | "2024-06-23T19:00:41Z" | 0 | 0 | null | [
"music",
"rvc",
"A R I Z O N A",
"ARIZONA",
"Zachary",
"Charles",
"model",
"audio-to-audio",
"en",
"license:openrail",
"region:us"
] | audio-to-audio | "2024-06-23T18:51:31Z" | ---
license: openrail
language:
- en
pipeline_tag: audio-to-audio
tags:
- music
- rvc
- A R I Z O N A
- ARIZONA
- Zachary
- Charles
- model
---
![image.jpg](https://www.shutterstock.com/editorial/image-editorial/OfT0gf04O3T5c6z8OTg1Nw==/zachary-charles-440nw-9776344i.jpg)
# A R I Z O N A - Zachary Charles [EN]
# 1000 Epochs - RVC V2 - rmvpe - Titan Medium
Trained on 24 minutes 04 seconds of isolated acapellas using UVR (Voc FT + Reverb HQ)
and Audacity to remove parts with double vocals and vocals from others (+Noise Gate) |
minsi2004/astro | minsi2004 | "2024-06-24T20:01:22Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T18:52:29Z" | ---
license: openrail
---
|
Amr7x/Amr | Amr7x | "2024-06-23T18:56:55Z" | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | "2024-06-23T18:56:55Z" | ---
license: afl-3.0
---
|
Feistel/output_model | Feistel | "2024-06-23T18:57:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:57:05Z" | Entry not found |
blockblockblock/llama3-turbcat-instruct-8b-bpw4.2-exl2 | blockblockblock | "2024-06-23T19:00:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T18:57:57Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB -> 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomed, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support of roleplay as api usage. When roleplaying as an API or function, the model does not produce irrelevant content that's not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after bert embedding
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on an umap:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Thirdparty dataset
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
joe-xhedi/transformer-es-en-model | joe-xhedi | "2024-06-23T19:03:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T18:59:04Z" | Refer to the following notebook:
https://colab.research.google.com/drive/10Jk9i77Pk8nVuqTnx2HDjwtNfsXzgs8D?usp=sharing
|
MurtazaNasir/fireworks-ai_firefunction-v2-h6-exl2 | MurtazaNasir | "2024-06-23T20:13:43Z" | 0 | 0 | null | [
"function-calling",
"license:llama3",
"region:us"
] | null | "2024-06-23T19:02:38Z" | ---
license: llama3
tags:
- function-calling
---
# Quantized Versions:
- [4.25bpw EXL2 Quant](https://huggingface.co/MurtazaNasir/fireworks-ai_firefunction-v2-h6-exl2/tree/4.25bpw)
# FireFunction V2: Fireworks Function Calling Model
[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>
FireFunction is a state-of-the-art function calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post). Key info and highlights:
**Comparison with other models:**
- Competitive with GPT-4o at function-calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT bench
- Significant quality improvements over FireFunction v1 across the broad range of metrics
**General info:**
🐾 Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model
🔆 Support of parallel function calling (unlike FireFunction v1) and good instruction following
💡 Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at < 10% of the cost of GPT 4o and 2x the speed
## Intended Use and Limitations
### Supported use cases
The model was tuned to perform well on a range of use cases, including:
* general instruction following
* multi-turn chat mixing vanilla messages with function calls
* single- and parallel function calling
* up to 20 function specs supported at once
* structured information extraction
The model has an 8k context window, like Llama 3.
### Out-of-Scope Use
The model was not optimized for the following use cases:
* 100+ function specs
* nested function calling
## Metrics
| Benchmark | Firefunction v1 | Firefunction v2 | Llama 3 70b Instruct | Gpt-4o |
|:-----------------------------------|:----------------|:----------------|:---------------------|:-------|
| Gorilla simple | 0.91 | 0.94 | 0.925 | 0.88 |
| Gorilla multiple_function | 0.92 | 0.91 | 0.86 | 0.91 |
| Gorilla parallel_function | 0 | 0.9 | 0.86 | 0.89 |
| Gorilla parallel_multiple_function | 0 | 0.8 | 0.615 | 0.72 |
| Nexus parallel | 0.38 | 0.53 | 0.3 | 0.47 |
| Mtbench | 0.73 | 0.84 | 0.89 | 0.93 |
| Average | 0.49 | 0.82 | 0.74 | 0.8 |
## Example Usage
See [documentation](https://readme.fireworks.ai/docs/function-calling) for more detail.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
from datetime import datetime
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")
function_spec = [
    {
        "name": "get_stock_price",
        "description": "Get the current stock price",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string",
                    "description": "The stock symbol, e.g. AAPL, GOOG"
                }
            },
            "required": [
                "symbol"
            ]
        }
    },
    {
        "name": "check_word_anagram",
        "description": "Check if two words are anagrams of each other",
        "parameters": {
            "type": "object",
            "properties": {
                "word1": {
                    "type": "string",
                    "description": "The first word"
                },
                "word2": {
                    "type": "string",
                    "description": "The second word"
                }
            },
            "required": [
                "word1",
                "word2"
            ]
        }
    }
]
functions = json.dumps(function_spec, indent=4)
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
    {'role': 'user', 'content': 'Hi, can you tell me the current stock price of google and netflix?'}
]
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
model_inputs = tokenizer.apply_chat_template(messages, functions=functions, datetime=now, return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs, max_new_tokens=128)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
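The decoded text will contain the model's function call(s) as JSON. A hedged sketch of extracting those payloads follows; the `<functioncall>` marker is an assumption here, so verify the model's actual output format against the Fireworks docs before relying on it:

```python
import json
import re

def extract_function_calls(text, marker="<functioncall>"):
    """Pull JSON function-call payloads out of generated text.
    The marker token is an assumption -- check the model's docs."""
    calls = []
    for chunk in text.split(marker)[1:]:
        match = re.search(r"\{.*\}", chunk, flags=re.DOTALL)  # greedy: grabs the full JSON object
        if match:
            calls.append(json.loads(match.group(0)))
    return calls

sample = '<functioncall> {"name": "get_stock_price", "arguments": {"symbol": "GOOG"}}'
print(extract_function_calls(sample))
```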
## Resources
* [Fireworks discord with function calling channel](https://discord.gg/mMqQxvFD9A)
* [Documentation](https://readme.fireworks.ai/docs/function-calling)
* [Demo app](https://functional-chat.vercel.app/)
* [Try in Fireworks prompt playground UI](https://fireworks.ai/models/fireworks/firefunction-v2) |
Ejafa/chameleon-30b | Ejafa | "2024-06-23T19:39:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:07:06Z" | always ggez |
alphaveneno/marian-finetuned-kde4-en-to-fr-accelerate | alphaveneno | "2024-06-25T23:00:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-23T19:07:31Z" | Entry not found |
blockblockblock/llama3-turbcat-instruct-8b-bpw4-exl2 | blockblockblock | "2024-06-23T19:14:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-23T19:11:59Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB -> 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomed, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support of roleplay as api usage. When roleplaying as an API or function, the model does not produce irrelevant content that's not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after bert embedding
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on an umap:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Thirdparty dataset
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
Dari27/Dari | Dari27 | "2024-06-23T19:12:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:12:04Z" | Entry not found |
silent666/Qwen-Qwen1.5-1.8B-1719170286 | silent666 | "2024-06-23T19:18:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-06-23T19:18:06Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Sapien1000B/gemma_2b_sql | Sapien1000B | "2024-06-23T19:20:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T19:20:16Z" | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** Sapien1000B
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SampleTheory/t5-base-finetuned-wikitext2 | SampleTheory | "2024-06-23T19:23:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:23:30Z" | Entry not found |
Renatojdkalls/Loremzosjs | Renatojdkalls | "2024-06-23T19:24:35Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T19:24:35Z" | ---
license: apache-2.0
---
|
blockblockblock/llama3-turbcat-instruct-8b-bpw3.7-exl2 | blockblockblock | "2024-06-23T19:27:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T19:25:58Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB → 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support and its format will be chatml.
# Data Generation
In addition to the specified fortifications above, the data generation process is largely the same, except for added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we ran QC using BERT embeddings projected on a UMAP:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
isabelxiao1/val-image | isabelxiao1 | "2024-06-24T20:06:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:30:12Z" | Entry not found |
manoj-huggy-face/forex-portfolio | manoj-huggy-face | "2024-06-23T19:30:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:30:24Z" | Entry not found |
inflaton/Qwen2-1.5B-Instruct-bnb-4bit-MAC-lora | inflaton | "2024-06-26T16:50:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-23T19:34:14Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
blockblockblock/llama3-turbcat-instruct-8b-bpw3.5-exl2 | blockblockblock | "2024-06-23T19:41:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T19:39:37Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB → 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support and its format will be chatml.
# Data Generation
In addition to the specified fortifications above, the data generation process is largely the same, except for added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we ran QC using BERT embeddings projected on a UMAP:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
ismailpolas/e70bf264-489b-47d2-8c46-e363f34c1178 | ismailpolas | "2024-06-23T19:39:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:39:42Z" | Entry not found |
MarOsz/wav2vec2-large-xlsr-53-polish-extra-pipeline-test | MarOsz | "2024-06-24T01:03:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T19:40:17Z" | Entry not found |
szili2011/Video2ReactionText | szili2011 | "2024-06-23T20:22:57Z" | 0 | 0 | transformers | [
"transformers",
"jax",
"dallebart",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T19:40:52Z" | inference: false
co2_eq_emissions:
emissions: <emissions_value>
source: MLCo2 Machine Learning Impact calculator
geographical_location: <location>
hardware_used: <hardware_details>
tags:
- video-to-text
license: mit
language: en
model-index:
- name: Video2ReactionText
results: []
|
ismailpolas/cf852d42-0373-404d-8bc0-34d4d6ea9e1a | ismailpolas | "2024-06-23T19:41:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:41:48Z" | Entry not found |
DeeziumNitraat/DeeziumNitraat | DeeziumNitraat | "2024-06-23T19:44:18Z" | 0 | 0 | null | [
"en",
"dataset:OpenGVLab/ShareGPT-4o",
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T19:43:29Z" | ---
license: apache-2.0
datasets:
- OpenGVLab/ShareGPT-4o
language:
- en
metrics:
- accuracy
--- |
BioMike/test_gliner | BioMike | "2024-06-23T19:43:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:43:54Z" | Entry not found |
wootzie/netts | wootzie | "2024-06-23T19:54:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:53:49Z" | Entry not found |
blockblockblock/llama3-turbcat-instruct-8b-bpw3-exl2 | blockblockblock | "2024-06-23T19:55:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-23T19:53:54Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB → 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base model choice. The chat format for the 8b is llama3; the 72b has more comprehensive Chinese support and its format will be chatml.
# Data Generation
In addition to the specified fortifications above, the data generation process is largely the same, except for added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we ran QC using BERT embeddings projected on a UMAP:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
hemchande/handposetoWord | hemchande | "2024-06-23T19:57:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T19:54:23Z" | Entry not found |
NaveenHugs/llama-3-8b-Inst-bnb-4bit-Jokes-100K_R64_lmhead_1e-4 | NaveenHugs | "2024-06-23T19:58:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T19:55:58Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** NaveenHugs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Augusto777/swinv2-finetuned-ve-Ub200 | Augusto777 | "2024-06-23T19:59:32Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T19:57:00Z" | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-finetuned-ve-Ub200
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.47058823529411764
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-finetuned-ve-Ub200
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5977
- Accuracy: 0.4706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 7.9891 | 0.0980 |
| No log | 2.0 | 13 | 7.4848 | 0.0980 |
| No log | 2.92 | 19 | 6.2378 | 0.0980 |
| No log | 4.0 | 26 | 4.8900 | 0.0980 |
| No log | 4.92 | 32 | 3.8155 | 0.0980 |
| No log | 6.0 | 39 | 2.7342 | 0.0980 |
| No log | 6.92 | 45 | 2.0612 | 0.0980 |
| No log | 8.0 | 52 | 1.5977 | 0.4706 |
| No log | 8.92 | 58 | 1.3671 | 0.4706 |
| No log | 10.0 | 65 | 1.2122 | 0.4706 |
| No log | 10.92 | 71 | 1.1823 | 0.4706 |
| No log | 12.0 | 78 | 1.1835 | 0.4706 |
| No log | 12.92 | 84 | 1.1838 | 0.4706 |
| No log | 14.0 | 91 | 1.1778 | 0.4706 |
| No log | 14.92 | 97 | 1.1769 | 0.4706 |
| 3.2267 | 16.0 | 104 | 1.1762 | 0.4706 |
| 3.2267 | 16.92 | 110 | 1.1758 | 0.4706 |
| 3.2267 | 18.0 | 117 | 1.1770 | 0.4706 |
| 3.2267 | 18.46 | 120 | 1.1771 | 0.4706 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
starnet/02-star21-06-23-02 | starnet | "2024-06-23T20:06:49Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-06-23T19:58:48Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mmommo/moo | mmommo | "2024-06-23T20:01:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:01:24Z" | Entry not found |
bigstorm/Codestral-22B-v0.1-8.0bpw-8hb-exl2 | bigstorm | "2024-06-23T20:33:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-23T20:02:03Z" | ---
language:
- code
license: other
tags:
- code
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
---
# Exl2 Quantization 8.0BPW 8HB
- 8 head bits
- 8.0 bpw target
Enjoy! Feel free to reach out for other quants, or BPW levels.
# Original Model card 🅱️elow
# Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
## Installation
It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```
Will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:
```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
fn fibonacci(n: u32) -> u32 {
match n {
0 => 0,
1 => 1,
_ => fibonacci(n - 1) + fibonacci(n - 2),
}
}
fn main() {
let n = 10;
println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```
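The loop-based alternative that the generated note alludes to avoids the exponential blow-up of the naive recursion; a minimal sketch (shown here in Python for brevity):

```python
def fibonacci(n: int) -> int:
    # Iterative Fibonacci: O(n) time, O(1) space, no redundant recursive calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # → 55
```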
### Fill-in-the-middle (FIM)
After installing `mistral_inference`, run `pip install --upgrade mistral_common` to make sure you have `mistral_common >= 1.2` installed:
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest
tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")
prefix = """def add("""
suffix = """ return sum"""
request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
middle = result.split(suffix)[0].strip()
print(middle)
```
Should give something along the following lines:
```
num1, num2):
# Add two numbers
sum = num1 + num2
# return the sum
```
## Usage with transformers library
This model is also compatible with `transformers` library, first run `pip install -U transformers` then use the snippet below to quickly get started:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore, you might be interested in further reducing the memory requirements to run the model through the optimizations offered in the HF ecosystem.
## Limitations
The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the `MNPL-0.1` license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
ramz225/First | ramz225 | "2024-06-23T20:02:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:02:39Z" | Entry not found |
Dandandooo/user-sim__Meta-Llama-3-8B-Instruct__0_no_move | Dandandooo | "2024-06-23T20:02:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:02:58Z" | Entry not found |
Augusto777/vit-base-patch16-224-ve-U13b-R | Augusto777 | "2024-06-24T15:21:32Z" | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T20:03:27Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U13b-R
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9347826086956522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13b-R
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3534
- Accuracy: 0.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3157 | 0.99 | 51 | 1.2967 | 0.3478 |
| 0.9801 | 2.0 | 103 | 0.9966 | 0.5870 |
| 0.7385 | 2.99 | 154 | 0.7600 | 0.7174 |
| 0.572 | 4.0 | 206 | 0.6425 | 0.7826 |
| 0.3646 | 4.99 | 257 | 0.7687 | 0.6957 |
| 0.3033 | 6.0 | 309 | 0.6336 | 0.7391 |
| 0.3073 | 6.99 | 360 | 0.3534 | 0.9348 |
| 0.1623 | 8.0 | 412 | 0.8559 | 0.6739 |
| 0.1079 | 8.99 | 463 | 0.9730 | 0.7391 |
| 0.2703 | 10.0 | 515 | 0.7768 | 0.8043 |
| 0.178 | 10.99 | 566 | 0.8520 | 0.7826 |
| 0.2191 | 12.0 | 618 | 1.0049 | 0.7391 |
| 0.0597 | 12.99 | 669 | 0.8334 | 0.7609 |
| 0.0881 | 14.0 | 721 | 0.9985 | 0.7609 |
| 0.1265 | 14.99 | 772 | 0.9443 | 0.8043 |
| 0.0696 | 16.0 | 824 | 0.9878 | 0.8261 |
| 0.1198 | 16.99 | 875 | 0.8784 | 0.8043 |
| 0.1484 | 18.0 | 927 | 0.9595 | 0.7609 |
| 0.2887 | 18.99 | 978 | 1.0563 | 0.8043 |
| 0.1423 | 20.0 | 1030 | 0.8550 | 0.8043 |
| 0.083 | 20.99 | 1081 | 0.9093 | 0.7826 |
| 0.0695 | 22.0 | 1133 | 1.2758 | 0.6739 |
| 0.0285 | 22.99 | 1184 | 1.0852 | 0.7609 |
| 0.0132 | 24.0 | 1236 | 1.3341 | 0.6957 |
| 0.0957 | 24.99 | 1287 | 1.1965 | 0.7391 |
| 0.0633 | 26.0 | 1339 | 1.1199 | 0.7609 |
| 0.0705 | 26.99 | 1390 | 1.0551 | 0.8043 |
| 0.0564 | 28.0 | 1442 | 1.4332 | 0.7391 |
| 0.0798 | 28.99 | 1493 | 1.3855 | 0.7391 |
| 0.0326 | 30.0 | 1545 | 1.0534 | 0.8043 |
| 0.092 | 30.99 | 1596 | 1.1745 | 0.7609 |
| 0.1243 | 32.0 | 1648 | 1.1341 | 0.8043 |
| 0.062 | 32.99 | 1699 | 1.2648 | 0.7826 |
| 0.0941 | 34.0 | 1751 | 1.1236 | 0.7826 |
| 0.0119 | 34.99 | 1802 | 1.1303 | 0.8043 |
| 0.044 | 36.0 | 1854 | 1.1848 | 0.7826 |
| 0.0073 | 36.99 | 1905 | 1.1796 | 0.7609 |
| 0.0149 | 38.0 | 1957 | 1.2491 | 0.7826 |
| 0.0194 | 38.99 | 2008 | 1.1812 | 0.7826 |
| 0.0577 | 39.61 | 2040 | 1.1777 | 0.7609 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
OreX/ComfyUI | OreX | "2024-06-23T20:56:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:07:43Z" | Entry not found |
blockblockblock/llama3-turbcat-instruct-8b-bpw2.25-exl2 | blockblockblock | "2024-06-23T20:09:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T20:08:01Z" | ---
license: llama3
---
# Turbcat 8b
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB → 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (the Aesir dataset). Note that the 8b model is based on llama3 and has limited Chinese support due to the base model choice. The chat format for 8b is llama3. The 72b model has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP:
![image/png](2.png)
The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset generated with GPT-4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being Turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
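The raw prompts above can be assembled programmatically. A minimal sketch, with the special tokens copied from the raw examples above (in practice, `tokenizer.apply_chat_template` with the model's chat template produces the same strings):

```python
def llama3_prompt(system: str, user: str, assistant_prefix: str = "") -> str:
    """Assemble a raw llama3-format prompt for the 8b model."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{assistant_prefix}"
    )


def chatml_prompt(system: str, user: str, assistant_prefix: str = "") -> str:
    """Assemble a raw chatml-format prompt for the 72b model."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant_prefix}"
    )


print(llama3_prompt(
    "CatGPT really likes its new cat ears and ends every message with Nyan_",
    "CatA: pats CatGPT cat ears",
    "CatGPT:",
))
```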
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
Yaaba/Training-Model | Yaaba | "2024-06-23T23:12:01Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-23T20:11:33Z" | ---
license: mit
---
|
maryyum/mt5-small-finetuned-pubmed | maryyum | "2024-06-23T20:17:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:17:16Z" | Entry not found |
philipp-zettl/multi-head-sequence-classification-model | philipp-zettl | "2024-06-25T20:18:27Z" | 0 | 0 | torch | [
"torch",
"safetensors",
"text-classification",
"multilingual",
"dataset:philipp-zettl/GGU-xx",
"dataset:philipp-zettl/sentiment",
"base_model:BAAI/bge-m3",
"license:mit",
"region:us"
] | text-classification | "2024-06-23T20:18:02Z" |
---
license: mit
language: multilingual
library_name: torch
tags: []
base_model: BAAI/bge-m3
datasets:
- philipp-zettl/GGU-xx
- philipp-zettl/sentiment
metrics:
- accuracy
- precision
- recall
- f1-score
model_name: Multi-Head Sequence Classification Model
pipeline_tag: text-classification
widget:
- text: "Hello, how are you?"
label: "[GGU] Greeting"
- text: "Thank you for your help"
label: "[GGU] Gratitude"
- text: "Hallo, wie geht es dir?"
label: "[GGU] Greeting (de)"
- text: "Danke dir."
label: "[GGU] Gratitude (de)"
- text: "I am not sure what you mean"
label: "[GGU] Other"
- text: "Generate me an image of a dog!"
label: "[GGU] Other"
- text: "What is the weather like today?"
label: "[GGU] Other"
- text: "Wie ist das Wetter heute?"
label: "[GGU] Other (de)"
---
# Multi-Head Sequence Classification Model
## Model description
The model is a simple sequence classification model based on hidden output layers of a pre-trained transformer model. Multiple heads are added to the output of the backbone to classify the input sequence.
### Model architecture
The backbone of the model is BAAI/bge-m3, producing 1024-dimensional embeddings.
One classification head per task (GGU: 3 labels, sentiment: 3 labels) is added on top of the backbone output to classify the input sequence.
You can find a mapping for the labels here:
**GGU**
- 0: Greeting
- 1: Gratitude
- 2: Other
**sentiment**
- 0: Positive
- 1: Negative
- 2: Neutral
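Per-head predictions can be mapped back to these labels via an argmax over the head's logits. A minimal sketch, assuming the model returns one logit vector per head (the `outputs` values below are hypothetical):

```python
# Hypothetical per-head logits, one row per input sequence
outputs = {
    "GGU": [[2.1, 0.3, -1.0]],
    "sentiment": [[-0.5, 0.2, 1.7]],
}

label_maps = {
    "GGU": {0: "Greeting", 1: "Gratitude", 2: "Other"},
    "sentiment": {0: "Positive", 1: "Negative", 2: "Neutral"},
}


def decode(outputs, label_maps):
    """Map each head's argmax logit index to its human-readable label."""
    return {
        head: label_maps[head][max(range(len(logits[0])), key=logits[0].__getitem__)]
        for head, logits in outputs.items()
    }


print(decode(outputs, label_maps))  # {'GGU': 'Greeting', 'sentiment': 'Neutral'}
```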
The joint architecture was trained using the implementation of `MultiHeadClassificationTrainer` provided in this repository.
### Use cases
Text classification and sentiment analysis.
## Model Inference
Inference code:
```python
from transformers import AutoTokenizer
from model import MultiHeadSequenceClassificationModel  # provided in this repository
import torch

model = MultiHeadSequenceClassificationModel.from_pretrained('philipp-zettl/multi-head-sequence-classification-model')
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-m3')

def predict(text):
    # Tokenize a single text and run all classification heads
    inputs = tokenizer([text], return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs
```
## Model Training
#### Confusion Matrix
**GGU**
![Confusion Matrix GGU](assets/confusion_matrix_GGU.png)
**sentiment**
![Confusion Matrix sentiment](assets/confusion_matrix_sentiment.png)
#### Training Loss
**GGU**
![Loss GGU](assets/loss_plot_GGU.png)
**sentiment**
![Loss sentiment](assets/loss_plot_sentiment.png)
### Training data
The model has been trained on the following datasets:
- [philipp-zettl/GGU-xx](https://huggingface.co/datasets/philipp-zettl/GGU-xx)
- [philipp-zettl/sentiment](https://huggingface.co/datasets/philipp-zettl/sentiment)
Training used the implementation provided by `MultiHeadClassificationTrainer`.
### Training procedure
The following code has been executed to train the model:
```python
def train_classifier():
backbone = AutoModel.from_pretrained('BAAI/bge-m3').to(torch.float16)
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-m3')
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ggu_label_map = {
0: 'Greeting',
1: 'Gratitude',
2: 'Other'
}
sentiment_label_map = {
0: 'Positive',
1: 'Negative',
2: 'Neutral'
}
num_labels = len(ggu_label_map.keys())
# HParams
dropout = 0.25
learning_rate = 3e-5
momentum = 0.9
l2_reg = 0.25
l2_loss_weight = 0.25
model_conf = {
'backbone': backbone,
'head_config': {
'GGU': num_labels,
},
'dropout': dropout,
'l2_reg': l2_reg,
}
optimizer_conf = {
'lr': learning_rate,
'momentum': momentum
}
scheduler_conf = {
'factor': 0.2,
'patience': 3,
'min_lr': 1e-8
}
train_run = 1000
trainer = MultiHeadClassificationTrainer(
model_conf=model_conf,
optimizer_conf={**optimizer_conf, 'lr': 1e-4},
scheduler_conf=scheduler_conf,
num_epochs=35,
l2_loss_weight=l2_loss_weight,
use_lr_scheduler=True,
train_run=train_run,
auto_find_batch_size=False
)
new_model, history = trainer.train(dataset_name='philipp-zettl/GGU-xx', target_heads=['GGU'])
metrics = history['metrics']
history['loss_plot'] = trainer._plot_history(**metrics)
res = trainer.eval({'GGU': ggu_label_map})
history['evaluation'] = res['GGU']
total_history = {
'GGU': deepcopy(history),
}
trainer.classifier.add_head('sentiment', 3)
trainer.auto_find_batch_size = False
new_model, history = trainer.train(dataset_name='philipp-zettl/sentiment', target_heads=['sentiment'], sample_key='text', num_epochs=10, lr=1e-4)
metrics = history['metrics']
history['loss_plot'] = trainer._plot_history(**metrics)
res = trainer.eval({'sentiment': sentiment_label_map}, sample_key='text')
history['evaluation'] = res['sentiment']
total_history['sentiment'] = deepcopy(history)
label_maps = {
'GGU': ggu_label_map,
'sentiment': sentiment_label_map,
}
return new_model, total_history, trainer, label_maps
```
### Evaluation
### Evaluation data
For model evaluation, a 20% validation split was used from the training data.
### Evaluation procedure
The model was evaluated using the `eval` method provided by the `MultiHeadClassificationTrainer` class:
```python
def _eval_model(self, dataloader, label_map, sample_key, label_key):
self.classifier.train(False)
eval_heads = list(label_map.keys())
y_pred = {h: [] for h in eval_heads}
y_test = {h: [] for h in eval_heads}
for sample in tqdm(dataloader, total=len(dataloader), desc='Evaluating model...'):
labels = {name: sample[label_key] for name in eval_heads}
embeddings = BatchEncoding({k: torch.stack(v, dim=1).to(self.device) for k, v in sample.items() if k not in [label_key, sample_key]})
output = self.classifier(embeddings.to('cuda'), head_names=eval_heads)
for head in eval_heads:
y_pred[head].extend(output[head].argmax(dim=1).cpu())
y_test[head].extend(labels[head])
torch.cuda.empty_cache()
accuracies = {h: accuracy_score(y_test[h], y_pred[h]) for h in eval_heads}
f1_scores = {h: f1_score(y_test[h], y_pred[h], average="macro") for h in eval_heads}
recalls = {h: recall_score(y_test[h], y_pred[h], average='macro') for h in eval_heads}
report = {}
for head in eval_heads:
cm = confusion_matrix(y_test[head], y_pred[head], labels=list(label_map[head].keys()))
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=list(label_map[head].values()))
clf_report = classification_report(
y_test[head], y_pred[head], output_dict=True, target_names=list(label_map[head].values())
)
del clf_report["accuracy"]
clf_report = pd.DataFrame(clf_report).T.reset_index()
report[head] = dict(
clf_report=clf_report, confusion_matrix=disp, metrics={'accuracy': accuracies[head], 'f1': f1_scores[head], 'recall': recalls[head]}
)
return report
```
### Metrics
For evaluation, we used the following metrics: accuracy, precision, recall, f1-score. You can find a detailed classification report here:
**GGU:**
| | index | precision | recall | f1-score | support |
|---:|:-------------|------------:|---------:|-----------:|----------:|
| 0 | Greeting | 0.904762 | 0.974359 | 0.938272 | 39 |
| 1 | Gratitude | 0.958333 | 0.851852 | 0.901961 | 27 |
| 2 | Other | 1 | 1 | 1 | 39 |
| 3 | macro avg | 0.954365 | 0.94207 | 0.946744 | 105 |
| 4 | weighted avg | 0.953912 | 0.952381 | 0.951862 | 105 |
**sentiment:**
| | index | precision | recall | f1-score | support |
|---:|:-------------|------------:|---------:|-----------:|----------:|
| 0 | Positive | 0.783088 | 0.861878 | 0.820596 | 12851 |
| 1 | Negative | 0.802105 | 0.819524 | 0.810721 | 14229 |
| 2 | Neutral | 0.7874 | 0.6913 | 0.736227 | 13126 |
| 3 | macro avg | 0.790864 | 0.790901 | 0.789181 | 40206 |
| 4 | weighted avg | 0.791226 | 0.7912 | 0.789557 | 40206 |
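The macro and weighted averages in these reports follow the standard definitions: the macro average is the unweighted mean over classes, while the weighted average weights each class by its support. A quick check against the sentiment precision column:

```python
# Per-class precision and support from the sentiment report above
precisions = [0.783088, 0.802105, 0.7874]   # Positive, Negative, Neutral
supports = [12851, 14229, 13126]

macro_p = sum(precisions) / len(precisions)
weighted_p = sum(p * s for p, s in zip(precisions, supports)) / sum(supports)
print(round(macro_p, 6), round(weighted_p, 6))  # ≈ 0.790864 0.791226, matching the table
```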
|
josec101/biblico | josec101 | "2024-06-23T20:22:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:22:00Z" | Entry not found |
tinyrolls/vilt_finetuned_200 | tinyrolls | "2024-06-23T20:22:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:22:11Z" | Entry not found |
DavidSilvaUB/sd-naruto-model | DavidSilvaUB | "2024-06-23T20:22:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:22:31Z" | Entry not found |
Augusto777/vit-base-patch16-224-ve-U13b-80R | Augusto777 | "2024-06-23T20:35:28Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T20:23:15Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U13b-80R
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8913043478260869
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13b-80R
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4109
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
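The schedule above (linear decay with a 5% warmup ratio) can be sketched as follows. This mirrors what `transformers`' `get_linear_schedule_with_warmup` computes; the total step count of 2040 is taken from the results table below.

```python
def linear_lr(step, total_steps, base_lr=5.5e-5, warmup_ratio=0.05):
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))


total = 2040  # optimizer steps over 40 epochs
print(linear_lr(0, total), linear_lr(102, total), linear_lr(total, total))
```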
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3158 | 0.99 | 51 | 1.2967 | 0.3478 |
| 1.0472 | 2.0 | 103 | 0.9694 | 0.5 |
| 0.6641 | 2.99 | 154 | 0.7911 | 0.7391 |
| 0.5615 | 4.0 | 206 | 0.6850 | 0.7391 |
| 0.3458 | 4.99 | 257 | 0.4109 | 0.8913 |
| 0.3156 | 6.0 | 309 | 0.5213 | 0.8043 |
| 0.141 | 6.99 | 360 | 0.4793 | 0.8478 |
| 0.2016 | 8.0 | 412 | 0.6031 | 0.7826 |
| 0.2444 | 8.99 | 463 | 0.7324 | 0.8043 |
| 0.1501 | 10.0 | 515 | 0.6392 | 0.8043 |
| 0.1256 | 10.99 | 566 | 0.9706 | 0.7826 |
| 0.2421 | 12.0 | 618 | 0.8059 | 0.7826 |
| 0.103 | 12.99 | 669 | 0.7601 | 0.8478 |
| 0.1353 | 14.0 | 721 | 1.1986 | 0.7391 |
| 0.1095 | 14.99 | 772 | 1.0279 | 0.7609 |
| 0.065 | 16.0 | 824 | 1.2043 | 0.6957 |
| 0.1777 | 16.99 | 875 | 0.9779 | 0.8043 |
| 0.0813 | 18.0 | 927 | 1.3356 | 0.7391 |
| 0.2552 | 18.99 | 978 | 0.8483 | 0.8261 |
| 0.0941 | 20.0 | 1030 | 0.7106 | 0.8696 |
| 0.0486 | 20.99 | 1081 | 0.8359 | 0.8261 |
| 0.0361 | 22.0 | 1133 | 0.8710 | 0.8261 |
| 0.0361 | 22.99 | 1184 | 1.0301 | 0.8043 |
| 0.0136 | 24.0 | 1236 | 0.9015 | 0.8261 |
| 0.1441 | 24.99 | 1287 | 0.9958 | 0.8043 |
| 0.0181 | 26.0 | 1339 | 1.0793 | 0.7826 |
| 0.0612 | 26.99 | 1390 | 0.9678 | 0.8043 |
| 0.0814 | 28.0 | 1442 | 1.0320 | 0.7826 |
| 0.0479 | 28.99 | 1493 | 1.1845 | 0.7826 |
| 0.06 | 30.0 | 1545 | 1.2026 | 0.7826 |
| 0.0777 | 30.99 | 1596 | 1.1574 | 0.7826 |
| 0.0747 | 32.0 | 1648 | 1.3104 | 0.7609 |
| 0.0181 | 32.99 | 1699 | 1.1145 | 0.8043 |
| 0.0652 | 34.0 | 1751 | 1.1691 | 0.8043 |
| 0.0242 | 34.99 | 1802 | 1.2415 | 0.8043 |
| 0.0043 | 36.0 | 1854 | 1.1841 | 0.7826 |
| 0.0318 | 36.99 | 1905 | 1.2475 | 0.8043 |
| 0.0092 | 38.0 | 1957 | 1.2452 | 0.8043 |
| 0.0194 | 38.99 | 2008 | 1.2395 | 0.8043 |
| 0.0376 | 39.61 | 2040 | 1.2345 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
namrahrehman/dinov2-base-finetuned-lora-rank8 | namrahrehman | "2024-06-23T23:03:30Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-23T20:29:50Z" | Entry not found |
katenkoy/glanacles_v1 | katenkoy | "2024-06-23T20:33:35Z" | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | "2024-06-23T20:30:56Z" | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- semantic-segmentation
- pytorch
- segmentation-models-pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.FPN.from_pretrained("katenkoy/glanacles_v1")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet34",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.6357604265213013,
"test_dataset_iou": 0.6359941959381104
}
]
```
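Both reported metrics are intersection-over-union scores: per-image IoU averages the score over images, while dataset IoU aggregates the confusion-matrix statistics first. A minimal sketch of IoU for a single pair of binary masks (the smp metrics module computes this from true/false positive counts):

```python
def binary_iou(pred, target, eps=1e-7):
    """IoU between two flat binary masks (iterables of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(max(p, t) for p, t in zip(pred, target))
    return (inter + eps) / (union + eps)


print(round(binary_iou([1, 1, 0, 0], [1, 0, 1, 0]), 3))  # 1 intersection / 3 union ≈ 0.333
```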
## Dataset
Dataset name: CALFIN
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
tanisee/first_tester | tanisee | "2024-06-23T20:36:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:36:31Z" | Entry not found |
LarryAIDraw/Mina_Ashiro_v1_0 | LarryAIDraw | "2024-06-23T20:46:09Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T20:37:25Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/526875/mina-ashiro-kaiju-no-8-8 |
LarryAIDraw/Kikoru_Shinomiya_v1_0 | LarryAIDraw | "2024-06-23T20:46:18Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T20:37:51Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/526520/kikoru-shinomiya-kaiju-no-8-8 |
gustavogoularta/mdcelebt | gustavogoularta | "2024-06-23T20:39:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:38:42Z" | Entry not found |
LarryAIDraw/firefly_v1_fred_Z | LarryAIDraw | "2024-06-23T20:46:27Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T20:39:05Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/533575/honkai-star-rail-firefly |
Augusto777/vit-base-patch16-224-ve-U13b-80RX | Augusto777 | "2024-06-23T21:09:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T20:39:14Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U13b-80RX
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8478260869565217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13b-80RX
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6099
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3857 | 0.99 | 17 | 1.3703 | 0.5652 |
| 1.3134 | 1.98 | 34 | 1.2235 | 0.4565 |
| 1.0384 | 2.97 | 51 | 1.0173 | 0.5435 |
| 0.908 | 3.96 | 68 | 0.8346 | 0.7826 |
| 0.75 | 4.95 | 85 | 0.7343 | 0.7826 |
| 0.5131 | 6.0 | 103 | 0.6099 | 0.8478 |
| 0.395 | 6.99 | 120 | 0.5932 | 0.7826 |
| 0.355 | 7.98 | 137 | 0.7209 | 0.7391 |
| 0.2658 | 8.97 | 154 | 0.5652 | 0.8043 |
| 0.248 | 9.96 | 171 | 0.7103 | 0.7826 |
| 0.2086 | 10.95 | 188 | 0.6788 | 0.7609 |
| 0.1532 | 12.0 | 206 | 0.5725 | 0.7826 |
| 0.147 | 12.99 | 223 | 0.6130 | 0.8043 |
| 0.1145 | 13.98 | 240 | 0.6563 | 0.8043 |
| 0.1053 | 14.97 | 257 | 0.5993 | 0.8043 |
| 0.0971 | 15.96 | 274 | 0.8840 | 0.7391 |
| 0.0947 | 16.95 | 291 | 0.6256 | 0.8043 |
| 0.1055 | 18.0 | 309 | 0.8406 | 0.7609 |
| 0.0974 | 18.99 | 326 | 0.6355 | 0.8478 |
| 0.1215 | 19.98 | 343 | 0.6651 | 0.8043 |
| 0.108 | 20.97 | 360 | 0.8301 | 0.7826 |
| 0.0784 | 21.96 | 377 | 0.8837 | 0.7609 |
| 0.0919 | 22.95 | 394 | 0.6985 | 0.8043 |
| 0.064 | 24.0 | 412 | 0.6426 | 0.8043 |
| 0.0669 | 24.99 | 429 | 0.8102 | 0.7826 |
| 0.0878 | 25.98 | 446 | 0.7863 | 0.7391 |
| 0.0875 | 26.97 | 463 | 0.8777 | 0.7609 |
| 0.0441 | 27.96 | 480 | 0.7324 | 0.8043 |
| 0.088 | 28.95 | 497 | 0.8099 | 0.7826 |
| 0.0739 | 30.0 | 515 | 0.7776 | 0.8043 |
| 0.0598 | 30.99 | 532 | 0.8188 | 0.7826 |
| 0.0443 | 31.98 | 549 | 0.8549 | 0.8043 |
| 0.0376 | 32.97 | 566 | 0.8049 | 0.7826 |
| 0.0375 | 33.96 | 583 | 0.8037 | 0.8043 |
| 0.0346 | 34.95 | 600 | 0.8255 | 0.8261 |
| 0.0471 | 36.0 | 618 | 0.8239 | 0.8043 |
| 0.0669 | 36.99 | 635 | 0.8188 | 0.8043 |
| 0.0438 | 37.98 | 652 | 0.8443 | 0.8043 |
| 0.0549 | 38.97 | 669 | 0.8551 | 0.8043 |
| 0.0622 | 39.61 | 680 | 0.8551 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LarryAIDraw/date_yuzuru | LarryAIDraw | "2024-06-23T20:46:40Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T20:39:28Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/533606/yuzuru-yamai-date-a-live-sdxl-lora-pony |
PulsarAI/Einstein-v6.1-developed-by-Weyaxi-Llama3-8B-adapter | PulsarAI | "2024-06-23T20:39:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"region:us"
] | null | "2024-06-23T20:39:42Z" | ---
base_model: Weyaxi/Einstein-v6.1-Llama3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
manbeast3b/KinoInferTry14 | manbeast3b | "2024-06-23T20:40:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:40:20Z" | Entry not found |
LarryAIDraw/kurumi-dal-01 | LarryAIDraw | "2024-06-23T20:47:00Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T20:40:23Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/195976/kurumi-tokisaki-date-a-live |
1231czx/2b_1_nll_dpo_iter1_from_sft_epoch3_250step | 1231czx | "2024-06-23T20:48:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T20:40:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dandandooo/user-sim__gemma-2b-it-bnb-4bit__0_no_move | Dandandooo | "2024-06-23T20:41:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T20:41:36Z" | Entry not found |
glp500/Archivaris_Instruct_8b_2 | glp500 | "2024-06-23T20:45:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T20:45:09Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** glp500
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Melady/TEMPO | Melady | "2024-07-01T04:47:28Z" | 0 | 1 | transformers | [
"transformers",
"Time-series",
"foundation-model",
"forecasting",
"time-series-forecasting",
"en",
"dataset:ETDataset/ett",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | time-series-forecasting | "2024-06-23T20:51:47Z" | ---
license: apache-2.0
datasets:
- ETDataset/ett
language:
- en
metrics:
- mse
- mae
library_name: transformers
pipeline_tag: time-series-forecasting
tags:
- Time-series
- foundation-model
- forecasting
---
# TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting
The official code for the ICLR 2024 paper "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting".
TEMPO (v1.0) is one of the first open-source **time series foundation models** for forecasting.
![TEMPO-architecture](pics/TEMPO.png)
Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).
![TEMPO-demo](pics/TEMPO_demo.jpg)
# Build the environment
```
conda create -n tempo python=3.8
conda activate tempo
pip install -r requirements.txt
```
# Get Data
Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy), and place it in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing) and place them in the folder `./stl`.
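The scripts expect the data under `./dataset` and the STL results under `./stl`; a minimal sketch of preparing that layout up front (extracting the downloaded archives into these folders is left to you, since the archive names are not given here):

```python
# Create the folder layout the TEMPO training/test scripts expect.
import os

for folder in ("dataset", "stl"):
    os.makedirs(folder, exist_ok=True)
```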
# Run TEMPO
## Training Stage
```
bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
```
## Test
After training, we can test the TEMPO model in the zero-shot setting:
```
bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
```
![TEMPO-results](pics/results.jpg)
# Pre-trained Models
You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.
# Multi-modality dataset: TETS dataset
Here are the prompts used to generate the corresponding textual information for the time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):
![TEMPO-prompt](pics/TETS_prompt.png)
The time series data come from the [[S&P 500]](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview). Here is the EBITDA case for one company from the dataset:
![Company1_ebitda_summary](pics/Company1_ebitda_summary.png)
An example of the generated contextual information for the company marked above:
![Company1_ebitda_summary_words.jpg](pics/Company1_ebitda_summary_words.jpg)
You can download the processed data with GPT-2 text embeddings here: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
## Cite
```
@inproceedings{
cao2024tempo,
title={{TEMPO}: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting},
author={Defu Cao and Furong Jia and Sercan O Arik and Tomas Pfister and Yixiang Zheng and Wen Ye and Yan Liu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=YH5w12OUuU}
}
``` |
Svngoku/Qwen2-7b-history | Svngoku | "2024-06-23T20:52:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/qwen2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T20:51:48Z" | ---
base_model: unsloth/qwen2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** Svngoku
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
1231czx/2b_1_nll_dpo_iter2_from_sft_epoch3_100step | 1231czx | "2024-06-23T21:02:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T20:54:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gmmarc/distilhubert-finetuned-gtzan | gmmarc | "2024-06-24T17:23:21Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-23T20:55:35Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6247
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
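As a sanity check on these settings, the linear warmup can be derived from the results table: with 113 optimizer steps per epoch over 10 epochs, a warmup ratio of 0.1 implies 113 warmup steps. A small illustrative sketch (not part of the training code):

```python
# Reproduce the warmup-step count implied by the hyperparameters above.
steps_per_epoch = 113          # from the training-results table
num_epochs = 10
warmup_ratio = 0.1             # lr_scheduler_warmup_ratio

total_steps = steps_per_epoch * num_epochs
warmup_steps = int(warmup_ratio * total_steps)

print(total_steps, warmup_steps)   # 1130 113
```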
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9843 | 1.0 | 113 | 1.8454 | 0.54 |
| 1.1501 | 2.0 | 226 | 1.2743 | 0.64 |
| 1.0625 | 3.0 | 339 | 1.0524 | 0.71 |
| 0.6394 | 4.0 | 452 | 0.8757 | 0.74 |
| 0.5293 | 5.0 | 565 | 0.7279 | 0.8 |
| 0.3523 | 6.0 | 678 | 0.6185 | 0.84 |
| 0.339 | 7.0 | 791 | 0.6317 | 0.85 |
| 0.154 | 8.0 | 904 | 0.6069 | 0.82 |
| 0.1612 | 9.0 | 1017 | 0.6387 | 0.83 |
| 0.1158 | 10.0 | 1130 | 0.6247 | 0.83 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
root-goksenin/job_labeller | root-goksenin | "2024-06-23T21:04:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:04:34Z" | Entry not found |
gas1963/sdxl_sree1 | gas1963 | "2024-06-23T21:11:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:05:00Z" | Entry not found |
Humedo/asd | Humedo | "2024-06-23T21:07:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:07:32Z" | Entry not found |
shivakethan2004/fine_tuned_model | shivakethan2004 | "2024-06-23T21:16:47Z" | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | "2024-06-23T21:15:40Z" | ---
license: llama2
---
|
Augusto777/vit-base-patch16-224-ve-U13b-80RX1 | Augusto777 | "2024-06-23T22:22:38Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T21:16:25Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U13b-80RX1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8478260869565217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13b-80RX1
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7770
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
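The listed total train batch size follows from gradient accumulation: each optimizer step accumulates 2 micro-batches of 8 samples. A small illustrative sketch (not part of the training code):

```python
# Effective (total) train batch size = per-device batch size x accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16
```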
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3157 | 0.99 | 51 | 1.2968 | 0.3478 |
| 1.0334 | 2.0 | 103 | 1.0060 | 0.5217 |
| 0.691 | 2.99 | 154 | 0.7506 | 0.7609 |
| 0.5005 | 4.0 | 206 | 0.6433 | 0.7826 |
| 0.3478 | 4.99 | 257 | 0.5674 | 0.7609 |
| 0.3339 | 6.0 | 309 | 0.6623 | 0.7609 |
| 0.2533 | 6.99 | 360 | 0.6905 | 0.7391 |
| 0.138 | 8.0 | 412 | 0.7251 | 0.7826 |
| 0.1289 | 8.99 | 463 | 0.7467 | 0.7391 |
| 0.152 | 10.0 | 515 | 0.9011 | 0.7174 |
| 0.2609 | 10.99 | 566 | 1.0150 | 0.7174 |
| 0.2202 | 12.0 | 618 | 0.9713 | 0.7826 |
| 0.1083 | 12.99 | 669 | 1.1106 | 0.6739 |
| 0.07 | 14.0 | 721 | 1.1211 | 0.7174 |
| 0.0791 | 14.99 | 772 | 1.1830 | 0.7609 |
| 0.0427 | 16.0 | 824 | 0.7770 | 0.8478 |
| 0.1219 | 16.99 | 875 | 1.0962 | 0.7391 |
| 0.0739 | 18.0 | 927 | 0.9447 | 0.7609 |
| 0.1989 | 18.99 | 978 | 1.1543 | 0.7391 |
| 0.1097 | 20.0 | 1030 | 1.1795 | 0.7609 |
| 0.1204 | 20.99 | 1081 | 1.2679 | 0.6739 |
| 0.0514 | 22.0 | 1133 | 1.0646 | 0.7174 |
| 0.0612 | 22.99 | 1184 | 1.1413 | 0.6957 |
| 0.0207 | 24.0 | 1236 | 0.8928 | 0.7826 |
| 0.1063 | 24.99 | 1287 | 1.1186 | 0.7609 |
| 0.1076 | 26.0 | 1339 | 1.1741 | 0.7609 |
| 0.0714 | 26.99 | 1390 | 1.0977 | 0.8043 |
| 0.062 | 28.0 | 1442 | 1.3965 | 0.7174 |
| 0.0617 | 28.99 | 1493 | 1.1849 | 0.7609 |
| 0.0536 | 30.0 | 1545 | 1.0865 | 0.7826 |
| 0.0707 | 30.99 | 1596 | 1.2081 | 0.7609 |
| 0.0967 | 32.0 | 1648 | 1.3300 | 0.7391 |
| 0.0564 | 32.99 | 1699 | 1.2240 | 0.7826 |
| 0.0435 | 34.0 | 1751 | 1.2391 | 0.7609 |
| 0.043 | 34.99 | 1802 | 1.1813 | 0.7609 |
| 0.0218 | 36.0 | 1854 | 1.2496 | 0.7826 |
| 0.0043 | 36.99 | 1905 | 1.2797 | 0.7174 |
| 0.0051 | 38.0 | 1957 | 1.2493 | 0.7391 |
| 0.0123 | 38.99 | 2008 | 1.2538 | 0.7391 |
| 0.0546 | 39.61 | 2040 | 1.2530 | 0.7609 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gas1963/sdxl_sree2 | gas1963 | "2024-06-23T21:22:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:16:27Z" | Entry not found |
CuscuzBlack99/A | CuscuzBlack99 | "2024-06-23T21:18:22Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T21:18:22Z" | ---
license: apache-2.0
---
|
nataliedx/ModePonyXL | nataliedx | "2024-06-23T23:04:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:19:23Z" | Entry not found |
jlousada315/distilhubert-finetuned-gtzan | jlousada315 | "2024-06-25T02:36:27Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-23T21:19:37Z" | Entry not found |
pathlighter/mistral_sharegpt_echo_batch_2-ep-3 | pathlighter | "2024-06-23T21:23:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T21:20:54Z" | Entry not found |
jenniecoveria/NAYEON_ALL_SOLOS | jenniecoveria | "2024-06-23T21:24:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T21:22:32Z" | Entry not found |