| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
nolanaatama/thwkndrvc1000pchclbbdsm | nolanaatama | "2023-05-21T23:10:28Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-21T23:08:46Z" | ---
license: creativeml-openrail-m
---
|
ongknsro/ACARISBERT-BERT | ongknsro | "2023-05-21T23:14:20Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-21T23:12:31Z" | ---
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
library_name: transformers
pipeline_tag: text-classification
--- |
yaqliu/results | yaqliu | "2023-05-21T23:15:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-21T23:15:04Z" | Entry not found |
Jikiwi/jokowi-voice-ai | Jikiwi | "2023-05-22T02:26:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-21T23:20:52Z" | Entry not found |
amd1729/solar-mask2-swin-small-ade-200-deepsolar-2023052103 | amd1729 | "2023-05-22T00:32:33Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mask2former",
"endpoints_compatible",
"region:us"
] | null | "2023-05-21T23:21:52Z" | Entry not found |
alvinxd/Tokyo | alvinxd | "2023-05-21T23:36:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-21T23:36:02Z" | Entry not found |
sitownle/4bLLM2 | sitownle | "2023-05-21T23:47:52Z" | 0 | 0 | null | [
"en",
"dataset:FourthBrainGenAI/MarketMail-AI-Dataset",
"region:us"
] | null | "2023-05-21T23:41:06Z" | ---
datasets:
- FourthBrainGenAI/MarketMail-AI-Dataset
language:
- en
---
LLM Assignment 2 - Fine Tuning BLOOMZ with PyTorch, peft, and transformers from HuggingFace |
Ahmadswaid/california_housing | Ahmadswaid | "2023-05-21T23:52:12Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-regression",
"region:us"
] | tabular-regression | "2023-05-21T23:46:15Z" | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
widget:
structuredData:
AveBedrms:
- 0.9806451612903225
- 1.0379746835443038
- 0.9601449275362319
AveOccup:
- 2.587096774193548
- 2.8658227848101268
- 2.6449275362318843
AveRooms:
- 7.275268817204301
- 5.39493670886076
- 6.536231884057971
HouseAge:
- 38.0
- 25.0
- 39.0
Latitude:
- 37.44
- 37.31
- 34.16
Longitude:
- -122.19
- -122.03
- -118.07
MedInc:
- 9.3198
- 5.3508
- 6.4761
Population:
- 1203.0
- 1132.0
- 730.0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with the following hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------------|
| bootstrap | True |
| ccp_alpha | 0.0 |
| criterion | squared_error |
| max_depth | |
| max_features | 1.0 |
| max_leaf_nodes | |
| max_samples | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| n_estimators | 100 |
| n_jobs | |
| oob_score | False |
| random_state | |
| verbose | 0 |
| warm_start | False |
</details>
### Model Plot
The model plot is below.
`RandomForestRegressor()` (interactive scikit-learn HTML representation omitted)
## Evaluation Results
Details about the evaluation process and the evaluation results can be found below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
[More Information Needed]
```
</details>
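A minimal loading sketch (not part of the original card): it assumes the estimator was persisted with `joblib` and that `config.json` carries an example input under `sklearn.example_input`, as in the author's other skops-generated cards; the file name below is hypothetical, so check the repository files for the actual one.
```python
import json

import joblib
import pandas as pd

# Hypothetical file name -- replace it with the actual artifact in the repository.
model = joblib.load("model.pkl")

# skops-generated repositories usually ship a config.json with an example input.
with open("config.json") as f:
    config = json.load(f)

print(model.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"])))
```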
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
``` |
Firestqr/Guitar | Firestqr | "2023-05-21T23:59:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-21T23:59:11Z" | Entry not found |
fa5ih/f1 | fa5ih | "2023-05-22T00:06:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:06:35Z" | Entry not found |
Ahmadswaid/sklearn-mpg | Ahmadswaid | "2023-05-22T00:06:44Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-regression",
"region:us"
] | tabular-regression | "2023-05-22T00:06:40Z" | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
model_file: linreg.pkl
widget:
structuredData:
x0:
- -0.3839236795902252
- -0.9788183569908142
- 1.0937178134918213
x1:
- -0.5319488644599915
- -1.108436107635498
- 0.9354732036590576
x2:
- -0.38279563188552856
- -1.3128694295883179
- 1.4773520231246948
x3:
- 0.2815782427787781
- -0.11783809214830399
- -0.9529813528060913
x4:
- 1.0
- 1.0
- 0.0
x5:
- 0.0
- 0.0
- 0.0
x6:
- 0.0
- 0.0
- 0.0
x7:
- 0.0
- 0.0
- 1.0
x8:
- 0.0
- 1.0
- 0.0
x9:
- 0.0
- 0.0
- 0.0
---
# Model description
This is a linear regression model trained on the MPG dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the following hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|------------------|------------|
| copy_X | True |
| fit_intercept | True |
| n_jobs | |
| normalize | deprecated |
| positive | False |
</details>
### Model Plot
The model plot is below.
`LinearRegression()` (interactive scikit-learn HTML representation omitted)
## Evaluation Results
Details about the evaluation process and the evaluation results can be found below.
| Metric | Value |
|--------------------|----------|
| Mean Squared Error | 5.01069 |
| R-Squared | 0.883503 |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import joblib
import json
import pandas as pd
clf = joblib.load("linreg.pkl")
with open("config.json") as f:
    config = json.load(f)
clf.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
``` |
latimar/airoboros-13b-ggml | latimar | "2023-05-22T11:10:04Z" | 0 | 8 | null | [
"region:us"
] | null | "2023-05-22T00:08:25Z" | ggml conversion of the https://huggingface.co/jondurbin/airoboros-13b |
ibrahimbush45/Idk | ibrahimbush45 | "2023-05-22T00:12:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:12:29Z" | Entry not found |
polypo/testing | polypo | "2023-05-22T00:31:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:15:42Z" | Entry not found |
ZyXin/rl_course_vizdoom_health_gathering_supreme | ZyXin | "2023-05-22T00:16:20Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T00:16:04Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.93 +/- 3.96
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ZyXin/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
tchebonenko/Ass1c-BLOOMZ | tchebonenko | "2023-05-22T00:32:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:16:37Z" | # Fine-tune a BLOOMZ-based ad generation model using peft, transformers and bitsandbytes
## Dataset:
[MarketMail-AI dataset](https://huggingface.co/datasets/FourthBrainGenAI/MarketMail-AI) |
alvinxd/Initokyo | alvinxd | "2023-05-23T00:01:52Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T00:16:38Z" | ---
license: creativeml-openrail-m
---
|
DigitaalisetM/LoveLiveHPDY | DigitaalisetM | "2023-05-22T01:01:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:17:19Z" | Entry not found |
Ahmadswaid/example-california-housing | Ahmadswaid | "2023-05-22T00:18:08Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-regression",
"region:us"
] | tabular-regression | "2023-05-22T00:18:03Z" | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
model_format: skops
model_file: model.skops
widget:
structuredData:
AveBedrms:
- 0.9290780141843972
- 0.9458483754512635
- 1.087360594795539
AveOccup:
- 3.1134751773049647
- 3.0613718411552346
- 3.2657992565055762
AveRooms:
- 6.304964539007092
- 6.945848375451264
- 3.8884758364312266
HouseAge:
- 17.0
- 15.0
- 24.0
Latitude:
- 34.23
- 36.84
- 34.04
Longitude:
- -117.41
- -119.77
- -118.3
MedInc:
- 6.1426
- 5.3886
- 1.7109
Population:
- 439.0
- 848.0
- 1757.0
---
# Model description
Gradient boosting regressor trained on California Housing dataset
The model is a gradient boosting regressor from sklearn. On top of the standard
features, it uses predictions from a KNN model. These predictions are calculated
out of fold and then added on top of the existing features. These extra features are
very helpful for decision-tree-based models, since those cannot easily learn from
geospatial data.
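A minimal sketch of how such a stacked model can be assembled, reconstructed from the hyperparameter table below rather than taken from the original training script:
```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline

# KNN that only sees the geographic coordinates.
knn = Pipeline(steps=[
    ("select_cols", ColumnTransformer(
        transformers=[("long_and_lat", "passthrough", ["Longitude", "Latitude"])])),
    ("knn", KNeighborsRegressor()),
])

# StackingRegressor computes the KNN predictions out of fold and, because
# passthrough=True, appends them to the original features before fitting
# the gradient boosting final estimator.
model = StackingRegressor(
    estimators=[("knn@5", knn)],
    final_estimator=GradientBoostingRegressor(n_estimators=500, random_state=0),
    passthrough=True,
)
```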
## Intended uses & limitations
This model is meant for demonstration purposes
## Training Procedure
### Hyperparameters
The model is trained with the following hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------|--------------------------------------------------------------|
| cv | |
| estimators | [('knn@5', Pipeline(steps=[('select_cols',<br /> ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])),<br /> ('knn', KNeighborsRegressor())]))] |
| final_estimator__alpha | 0.9 |
| final_estimator__ccp_alpha | 0.0 |
| final_estimator__criterion | friedman_mse |
| final_estimator__init | |
| final_estimator__learning_rate | 0.1 |
| final_estimator__loss | squared_error |
| final_estimator__max_depth | 3 |
| final_estimator__max_features | |
| final_estimator__max_leaf_nodes | |
| final_estimator__min_impurity_decrease | 0.0 |
| final_estimator__min_samples_leaf | 1 |
| final_estimator__min_samples_split | 2 |
| final_estimator__min_weight_fraction_leaf | 0.0 |
| final_estimator__n_estimators | 500 |
| final_estimator__n_iter_no_change | |
| final_estimator__random_state | 0 |
| final_estimator__subsample | 1.0 |
| final_estimator__tol | 0.0001 |
| final_estimator__validation_fraction | 0.1 |
| final_estimator__verbose | 0 |
| final_estimator__warm_start | False |
| final_estimator | GradientBoostingRegressor(n_estimators=500, random_state=0) |
| n_jobs | |
| passthrough | True |
| verbose | 0 |
| knn@5 | Pipeline(steps=[('select_cols',<br /> ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])),<br /> ('knn', KNeighborsRegressor())]) |
| knn@5__memory | |
| knn@5__steps | [('select_cols', ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])])), ('knn', KNeighborsRegressor())] |
| knn@5__verbose | False |
| knn@5__select_cols | ColumnTransformer(transformers=[('long_and_lat', 'passthrough',<br /> ['Longitude', 'Latitude'])]) |
| knn@5__knn | KNeighborsRegressor() |
| knn@5__select_cols__n_jobs | |
| knn@5__select_cols__remainder | drop |
| knn@5__select_cols__sparse_threshold | 0.3 |
| knn@5__select_cols__transformer_weights | |
| knn@5__select_cols__transformers | [('long_and_lat', 'passthrough', ['Longitude', 'Latitude'])] |
| knn@5__select_cols__verbose | False |
| knn@5__select_cols__verbose_feature_names_out | True |
| knn@5__select_cols__long_and_lat | passthrough |
| knn@5__knn__algorithm | auto |
| knn@5__knn__leaf_size | 30 |
| knn@5__knn__metric | minkowski |
| knn@5__knn__metric_params | |
| knn@5__knn__n_jobs | |
| knn@5__knn__n_neighbors | 5 |
| knn@5__knn__p | 2 |
| knn@5__knn__weights | uniform |
</details>
### Model Plot
The model plot is below.
`StackingRegressor(estimators=[('knn@5', Pipeline(steps=[('select_cols', ColumnTransformer(transformers=[('long_and_lat', 'passthrough', ['Longitude', 'Latitude'])])), ('knn', KNeighborsRegressor())]))], final_estimator=GradientBoostingRegressor(n_estimators=500, random_state=0), passthrough=True)` (interactive scikit-learn HTML representation omitted)
## Evaluation Results
Metrics are calculated on the test set
| Metric | Value |
|-------------------------|--------------|
| Root mean squared error | 44273.5 |
| Mean absolute error | 30079.9 |
| R² | 0.805954 |
## Dataset description
California Housing dataset
--------------------------
**Data Set Characteristics:**
:Number of Instances: 20640
:Number of Attributes: 8 numeric, predictive attributes and the target
:Attribute Information:
- MedInc median income in block group
- HouseAge median house age in block group
- AveRooms average number of rooms per household
- AveBedrms average number of bedrooms per household
- Population block group population
- AveOccup average number of household members
- Latitude block group latitude
- Longitude block group longitude
:Missing Attribute Values: None
This dataset was obtained from the StatLib repository.
https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html
The target variable is the median house value for California districts,
expressed in hundreds of thousands of dollars ($100,000).
This dataset was derived from the 1990 U.S. census, using one row per census
block group. A block group is the smallest geographical unit for which the U.S.
Census Bureau publishes sample data (a block group typically has a population
of 600 to 3,000 people).
A household is a group of people residing within a home. Since the average
number of rooms and bedrooms in this dataset are provided per household, these
columns may take surprisingly large values for block groups with few households
and many empty houses, such as vacation resorts.
It can be downloaded/loaded using the
:func:`sklearn.datasets.fetch_california_housing` function.
.. topic:: References
- Pace, R. Kelley and Ronald Barry, Sparse Spatial Autoregressions,
Statistics and Probability Letters, 33 (1997) 291-297
### Data distribution
<details>
<summary> Click to expand </summary>
![Data distribution](geographic.png)
</details>
# How to Get Started with the Model
Run the code below to load the model
```python
import json
import pandas as pd
import skops.io as sio
model = sio.load("model.skops")
with open("config.json") as f:
config = json.load(f)
model.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Model Card Authors
Benjamin Bossan
# Model Card Contact
benjamin@huggingface.co
# Permutation Importances
![Permutation Importances](permutation-importances.png)
|
sevk/kkbot | sevk | "2023-05-22T00:18:46Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-05-22T00:18:46Z" | ---
license: mit
---
|
erickyue/moon | erickyue | "2024-02-10T16:42:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:26:42Z" | Entry not found |
jarl0415/jarl | jarl0415 | "2023-05-22T00:45:31Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-22T00:45:31Z" | ---
license: openrail
---
|
public-data/DeepDanbooru | public-data | "2022-01-23T22:31:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:53:07Z" | # DeepDanbooru
- https://github.com/KichangKim/DeepDanbooru
- https://github.com/KichangKim/DeepDanbooru/releases/tag/v3-20200915-sgd-e30
- https://github.com/KichangKim/DeepDanbooru/releases/download/v3-20200915-sgd-e30/deepdanbooru-v3-20200915-sgd-e30.zip
|
jie001/find | jie001 | "2023-05-22T00:54:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-22T00:54:35Z" | ---
license: openrail
---
|
Elonvrc/Yuzl | Elonvrc | "2023-05-22T00:55:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:55:24Z" | hu |
perfectino/naput | perfectino | "2023-05-23T17:25:20Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T00:56:06Z" | ---
license: creativeml-openrail-m
---
|
EDWINHM/egla | EDWINHM | "2023-05-22T00:56:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T00:56:18Z" | Entry not found |
ssxjz/singer | ssxjz | "2023-05-22T01:01:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:01:14Z" | Entry not found |
dongyoungkim/whisper-large-korean-A | dongyoungkim | "2023-05-22T01:03:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:03:23Z" | Entry not found |
max2lax/Newdataset | max2lax | "2023-05-22T01:13:35Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-05-22T01:13:35Z" | ---
license: apache-2.0
---
|
Laurie/lora-instruct-chat-50k-cn-en | Laurie | "2023-05-24T08:07:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:13:35Z" | # 经测试,此版本的效果较好😀
I use the 50k [Chinese data](https://huggingface.co/datasets/Chinese-Vicuna/instruct_chat_50k.jsonl), which is a combination of the alpaca_chinese_instruction_dataset and the Chinese conversation data from the sharegpt-90k dataset. I fine-tuned the model for 3 epochs on a single 4090 GPU with cutoff_len=1024.
**Use in Python**:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel
import torch

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    "Laurie/lora-instruct-chat-50k-cn-en",
    torch_dtype=torch.float16,
    device_map={'': 0}
)

device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = tokenizer("什么是自然语言处理?", return_tensors="pt")  # "What is natural language processing?"
model.to(device)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=129)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
```
|
raghudinesh/ddp_final | raghudinesh | "2023-05-22T01:15:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:15:05Z" | Entry not found |
dnsn/t5-large_PREFIX_TUNING_SEQ2SEQ | dnsn | "2023-05-22T01:21:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:21:34Z" | Entry not found |
duanhong/learning | duanhong | "2023-05-22T01:39:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:24:51Z" | Entry not found |
ntedeschi/reconcile_the_irreconcilable | ntedeschi | "2023-05-22T03:41:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:25:57Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Reconcile the Irreconcilable
<!-- Provide a quick summary of what the model is/does. -->
This model tries to reconcile the views of Hegel and Ayn Rand on a given philosophical topic.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The purpose of the model is to give the views of Hegel and Ayn Rand on a
given philosophical topic. Then it is supposed to write a paragraph reconciling
their (most likely contrary) views.
The model was multitask fine-tuned from [bloomz-3b](https://huggingface.co/bigscience/bloomz-3b).
Data for the fine-tuning was generated using ChatGPT (GPT-4).
- **Developed by:** [Neil Tedeschi](https://huggingface.co/ntedeschi)
- **Training dataset:** [reconcile_the_irreconcilable](https://huggingface.co/datasets/ntedeschi/reconcile_the_irreconcilable)
- **Language(s) (NLP):** [bloomz-3b](https://huggingface.co/bigscience/bloomz-3b)
## Direct Use
I am not sure how to get the model to work on this site. So, annoyingly, you
need to cut and paste the following code into a notebook.
1. Download the adapter and connect with base model.
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "ntedeschi/reconcile_the_irreconcilable"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=False, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
```
2. Set up prompt and query function:
```
from IPython.display import display, Markdown
def make_inference(topic):
    batch = tokenizer(
        f"### INSTRUCTION\nBelow is a philosophy topic. Please write Hegel's view on the topic, Ayn Rand's view \
on the topic and a reconciliation of their views. \
\n\n### Topic:\n{topic}\n \
\n\n### Hegel:\n \
\n\n### Ayn Rand:\n \
\n\n### Reconciliation:\n",
        return_tensors='pt'
    )
    with torch.cuda.amp.autocast():
        output_tokens = model.generate(**batch, max_new_tokens=512)
    display(Markdown(tokenizer.decode(output_tokens[0], skip_special_tokens=True)))
```
3. Make the inference by giving a philosophy topic. For example,
```
philosophy_topic = "Mind body dualism"
make_inference(philosophy_topic)
```
|
Gille/GilleMix_v2_tests | Gille | "2023-06-03T22:09:10Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T01:31:26Z" | ---
license: creativeml-openrail-m
---
|
pennlio/test | pennlio | "2023-05-22T01:33:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:33:35Z" | Entry not found |
TKNM/nva-ze_sayaka04 | TKNM | "2023-05-22T01:39:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:38:46Z" | Entry not found |
outterseayi/childmask | outterseayi | "2023-05-22T01:38:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:38:51Z" | Entry not found |
collabrl/SpaceInvadersNoFrameskip-v4 | collabrl | "2023-05-22T01:40:23Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T01:39:43Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 808.00 +/- 284.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga collabrl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga collabrl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga collabrl
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
denaneek/fine-tuning_BLOOMZ_using_LoRA | denaneek | "2023-05-22T02:35:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:42:27Z" | Entry not found |
douglch/q-FrozenLake-v1-4x4-noSlippery | douglch | "2023-05-22T02:02:09Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T01:43:30Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="douglch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
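For a complete run, a hedged sketch (not from the original card): it assumes the pickle is the dictionary produced by the Deep RL Course notebook, with the Q-table stored under a `qtable` key, and that `gymnasium` is installed.
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the dictionary pushed by the course notebook.
path = hf_hub_download(repo_id="douglch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])  # assumed key name

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```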
|
betterme/models | betterme | "2023-05-22T02:05:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:57:03Z" | Entry not found |
StupidGame/kyawawa_loha | StupidGame | "2023-05-22T01:59:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T01:59:07Z" | Entry not found |
ongknsro/ACARISBERT-RoBERTa | ongknsro | "2023-05-22T02:01:22Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-22T02:00:37Z" | Entry not found |
moghis/LunarLander-v2-scratch | moghis | "2023-05-22T02:09:43Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T02:04:27Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.28 +/- 93.95
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 80000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'moghis/LunarLander-v2-scratch'
'batch_size': 512
'minibatch_size': 128}
```
|
douglch/q-learning_taxi_v3 | douglch | "2023-05-22T02:07:06Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T02:04:40Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning_taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.24 +/- 2.60
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="douglch/q-learning_taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ApolloFilippou/a2c-PandaReachDense-v2 | ApolloFilippou | "2023-05-22T02:09:07Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T02:06:27Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.44 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
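A possible starting point (not the author's code): the file name assumes the usual `<algo>-<env>.zip` naming used by `huggingface_sb3.package_to_hub`, so verify it against the repository files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed file name -- check the repository files for the actual checkpoint.
checkpoint = load_from_hub(
    repo_id="ApolloFilippou/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Quick sanity check without instantiating the PandaReachDense environment.
obs = model.observation_space.sample()
action, _ = model.predict(obs, deterministic=True)
print(action)
```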
|
addiekline/genaidemo | addiekline | "2023-05-22T02:12:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T02:12:23Z" | Entry not found |
dddx/mehmehmeh | dddx | "2023-05-22T02:29:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T02:29:59Z" | Entry not found |
AI2CC/luzhiy | AI2CC | "2023-05-22T02:50:15Z" | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2023-05-22T02:42:30Z" | Entry not found |
Pegasus88/liuling1 | Pegasus88 | "2023-05-22T02:45:13Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T02:43:52Z" | ---
license: creativeml-openrail-m
---
|
Chen311/angie | Chen311 | "2023-05-22T02:51:07Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T02:46:50Z" | ---
license: creativeml-openrail-m
---
|
Zx1140199595/ControlNet | Zx1140199595 | "2023-05-27T12:52:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T02:54:00Z" | Entry not found |
DisasterArtist/DIO | DisasterArtist | "2023-05-23T22:49:16Z" | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2023-05-22T02:54:56Z" | Entry not found |
albertStarCloud/mit-b0-scene-parse-150-lora | albertStarCloud | "2023-05-22T03:23:17Z" | 0 | 0 | null | [
"pytorch",
"tensorboard",
"region:us"
] | null | "2023-05-22T02:56:28Z" | Entry not found |
takesomerisks/loraBnbOpt13bOasst1Local | takesomerisks | "2023-05-22T13:17:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T02:58:03Z" | Entry not found |
warren2023/xlm-roberta-base-finetuned-panx-de-fr | warren2023 | "2023-05-22T03:10:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:10:54Z" | Entry not found |
Faiza3/martin_valen_dataset_faf | Faiza3 | "2023-05-22T03:11:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:11:40Z" | Entry not found |
chenbowen-184/GenerAd | chenbowen-184 | "2023-05-22T03:13:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:13:18Z" | Entry not found |
Pachosoad/Sad | Pachosoad | "2023-05-22T03:13:58Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-05-22T03:13:58Z" | ---
license: bigscience-openrail-m
---
|
ApolloFilippou/poca-SoccerTwos | ApolloFilippou | "2023-05-22T03:18:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:18:11Z" | Entry not found |
joshxin/test-model | joshxin | "2023-05-22T03:24:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:24:18Z" | Entry not found |
xixixixihu/output | xixixixihu | "2023-05-22T03:26:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:26:06Z" | Entry not found |
ttogun/fourthbrain_wk1_finetuned_bloomz_ad_gen | ttogun | "2023-05-22T03:26:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:26:26Z" | Entry not found |
antruong/speecht5_tts_pierre | antruong | "2023-05-22T03:34:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:34:20Z" | Entry not found |
rxl7906/super-cool-model | rxl7906 | "2023-05-22T03:36:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:36:21Z" | Entry not found |
Skim7603/imneko | Skim7603 | "2023-05-22T03:40:59Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T03:37:22Z" | ---
license: creativeml-openrail-m
---
|
humin1102/vicuna-13b-all-v1.1 | humin1102 | "2023-05-22T06:25:03Z" | 0 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-22T03:45:18Z" | Entry not found |
marasama/nva-king_george_v | marasama | "2023-05-22T03:49:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:47:46Z" | Entry not found |
kebab111/SevensMix | kebab111 | "2023-05-22T03:52:41Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-22T03:51:32Z" | Entry not found |
nikitagricanuk/vicuna-13b | nikitagricanuk | "2023-05-22T03:53:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:53:34Z" | Entry not found |
iamanavk/qm_sum_t5-base_attempt_2 | iamanavk | "2023-05-22T03:58:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T03:58:08Z" | Entry not found |
kebab111/AresMix | kebab111 | "2023-05-22T03:59:24Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-22T03:58:20Z" | Entry not found |
moghis/rl_course_vizdoom_health_gathering_supreme | moghis | "2023-05-22T03:59:27Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T03:59:12Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.25 +/- 2.88
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r moghis/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
kirp/psy-llama-extend-delta | kirp | "2023-05-27T05:02:31Z" | 0 | 1 | null | [
"dataset:siyangliu/PsyQA",
"license:apache-2.0",
"region:us"
] | null | "2023-05-22T03:59:48Z" | ---
license: apache-2.0
datasets:
- siyangliu/PsyQA
---
Extends the LLaMA vocabulary to 52,992 tokens; the new token embeddings are randomly initialized. |
leonhe/ppo-LunarLander-v2 | leonhe | "2023-05-22T04:03:30Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T04:03:05Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 225.17 +/- 41.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
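A possible starting point (not the author's code): the file name assumes the usual `ppo-LunarLander-v2.zip` naming, and the snippet assumes a `gymnasium` release that still registers `LunarLander-v2` (it also needs the Box2D extra, `pip install gymnasium[box2d]`).
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed file name -- check the repository files for the actual checkpoint.
checkpoint = load_from_hub(repo_id="leonhe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```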
|
yusufagung29/whisper_1e-4_clean_legion_fleurs | yusufagung29 | "2023-05-22T18:38:57Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-05-22T04:15:36Z" | Entry not found |
ShawnGGG/update | ShawnGGG | "2023-05-22T04:16:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T04:16:08Z" | Entry not found |
moin1234/MHAI | moin1234 | "2023-05-22T04:26:30Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2023-05-22T04:22:08Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BIDJOE/Safety-Protocol-YOLOv5-Vanilla | BIDJOE | "2023-05-22T04:29:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T04:28:11Z" | Entry not found |
fcuadra/FineTuningBLOOMZ | fcuadra | "2023-05-22T04:45:21Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-generation",
"dataset:fcuadra/MarketMail_AI",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | text-generation | "2023-05-22T04:32:01Z" | ---
license: bigscience-bloom-rail-1.0
datasets:
- fcuadra/MarketMail_AI
library_name: adapter-transformers
pipeline_tag: text-generation
--- |
prismaticholdings/prismaticholdings | prismaticholdings | "2023-05-22T04:35:44Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-22T04:35:44Z" | ---
license: openrail
---
|
lip421/wutoududou4 | lip421 | "2023-05-22T04:36:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T04:36:17Z" | Entry not found |
rewoo/planner_7B | rewoo | "2023-05-28T23:03:35Z" | 0 | 17 | null | [
"license:mit",
"region:us"
] | null | "2023-05-22T04:48:40Z" | ---
license: mit
---
Alpaca-LoRA adapter weights fine-tuned on the following instruction dataset:
https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md
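For inference, a minimal loading sketch with `peft` (this assumes the adapter weights in this repo follow the standard PEFT/Alpaca-LoRA layout and use the base model from the training command below):
```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "decapoda-research/llama-7b-hf"  # base model taken from the fine-tuning command below
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base)
# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(model, "rewoo/planner_7B")
```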
The training script is borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation.
We used the following parameters:
```
python finetune.py \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path 'rewoo/planner_instruction_tuning_2k' \
--output_dir './lora-alpaca-planner' \
--batch_size 128 \
--micro_batch_size 8 \
--num_epochs 10 \
--learning_rate 1e-4 \
--cutoff_len 1024 \
--val_set_size 200 \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_target_modules '[q_proj,v_proj]' \
--train_on_inputs \
--group_by_length \
--resume_from_checkpoint 'tloen/alpaca-lora-7b'
``` |
TKNM/nva-fh_nie2_02r | TKNM | "2023-05-22T04:57:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T04:57:22Z" | Entry not found |
natanasha/posing | natanasha | "2023-05-22T05:00:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T04:58:51Z" | Entry not found |
averageandyyy/brainheck_imda_wav2vec_hf | averageandyyy | "2023-05-22T05:02:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:02:36Z" | Entry not found |
Tony1810/Draft | Tony1810 | "2023-05-22T05:12:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:06:07Z" | Entry not found |
Trinity123/Testn1 | Trinity123 | "2023-05-22T05:09:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:09:16Z" | Entry not found |
GeneZC/bert-base-cola | GeneZC | "2023-05-22T08:34:09Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-05-22T05:20:29Z" | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `CoLA`.
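A minimal usage sketch (this assumes the checkpoint contains a standard sequence-classification head for the two CoLA acceptability labels):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-base-cola")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-base-cola")

inputs = tokenizer("The book was written by John.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # 1 is conventionally "acceptable" in CoLA; check the label mapping
```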
## Parameter settings
Batch size is 32 and the learning rate is 2e-5.
## Metrics
matthews_corr: 0.6295 |
KyungChang/TestModel | KyungChang | "2023-05-22T05:25:22Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-22T05:25:22Z" | ---
license: openrail
---
|
PragyanPrusty/wav2vec2-large-xlsr-53-odia-colab | PragyanPrusty | "2023-05-22T11:50:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:25:27Z" | Entry not found |
RanaReebaal/clip-embeddings | RanaReebaal | "2023-05-22T05:33:43Z" | 0 | 0 | null | [
"endpoints_compatible",
"region:us"
] | null | "2023-05-22T05:28:27Z" | Entry not found |
dustinator/BusinessGuy-GPT | dustinator | "2023-05-22T05:30:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:30:47Z" | Entry not found |
Odlanier968/Gggg | Odlanier968 | "2023-05-22T05:31:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:31:37Z" | Entry not found |
Valentine89/TwitterSentimentAnalysis | Valentine89 | "2023-05-22T05:32:12Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-22T05:32:12Z" | ---
license: openrail
---
|
oguzhanascr/oascier | oguzhanascr | "2023-05-22T05:32:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:32:34Z" | Entry not found |
warren2023/xlm-roberta-base-finetuned-panx-fr | warren2023 | "2023-05-22T05:35:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:35:43Z" | Entry not found |
sj1/textual_inversion_cat | sj1 | "2023-05-22T05:38:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-22T05:38:00Z" | Entry not found |