modelId (stringlengths 5-122) | author (stringlengths 2-42) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (stringclasses, 245 values) | tags (sequencelengths 1-4.05k) | pipeline_tag (stringclasses, 48 values) | createdAt (unknown) | card (stringlengths 1-901k)
---|---|---|---|---|---|---|---|---|---|
tharun01/Llama3-16bit-All-3epoch | tharun01 | "2024-06-22T13:44:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T13:32:05Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
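Since none of the fields above are filled in, the sketch below only illustrates the kind of calculation the ML Impact calculator performs: energy from hardware power and runtime, scaled by datacenter overhead (PUE) and grid carbon intensity. All numbers here are illustrative assumptions, not measurements for this model.

```python
# Back-of-envelope CO2eq estimate in the style of the ML Impact calculator.
# power_watts, pue, and grid_kg_per_kwh below are illustrative assumptions.

def co2_kg(hours: float, power_watts: float, pue: float = 1.1,
           grid_kg_per_kwh: float = 0.4) -> float:
    """Estimated emissions in kg CO2eq: energy (kWh) x datacenter overhead x grid intensity."""
    energy_kwh = power_watts / 1000 * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Example: 24 hours on one 300 W accelerator
print(round(co2_kg(24, 300), 2))  # ~3.17 kg CO2eq
```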
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Chidananda123/Health_Care_Assitent | Chidananda123 | "2024-06-22T13:34:23Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T13:34:22Z" | ---
license: apache-2.0
---
|
welsachy/mental-bert-base-uncased-finetuned-depression | welsachy | "2024-06-22T13:36:15Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:mental/mental-bert-base-uncased",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-22T13:35:49Z" | ---
license: cc-by-nc-4.0
base_model: mental/mental-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mental-bert-base-uncased-finetuned-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental-bert-base-uncased-finetuned-depression
This model is a fine-tuned version of [mental/mental-bert-base-uncased](https://huggingface.co/mental/mental-bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5358
- Precision: 0.8986
- Recall: 0.8885
- F1: 0.8933
- Accuracy: 0.9158
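As a quick sanity check on the numbers above, F1 is the harmonic mean of precision and recall; the small gap between this identity and the reported 0.8933 is consistent with rounding or per-class (macro) averaging:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported evaluation metrics for this checkpoint
precision, recall = 0.8986, 0.8885
print(round(f1_score(precision, recall), 4))  # close to the reported F1 of 0.8933
```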
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
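Here `lr_scheduler_type: linear` means the learning rate decays linearly from 2e-05 to zero over training. A minimal sketch of that schedule follows; the step counts are taken from the results table below, and the Trainer's exact implementation may differ slightly:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05,
              warmup_steps: int = 0) -> float:
    """Linear warmup (none configured here) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total = 9380  # 20 epochs x 469 steps per epoch, per the results table
print(linear_lr(0, total))      # base_lr at the start
print(linear_lr(4690, total))   # half of base_lr halfway through
print(linear_lr(total, total))  # 0.0 at the end
```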
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 469 | 0.3929 | 0.8744 | 0.8346 | 0.8516 | 0.8849 |
| 0.4726 | 2.0 | 938 | 0.4405 | 0.9052 | 0.8359 | 0.8660 | 0.8955 |
| 0.2165 | 3.0 | 1407 | 0.4594 | 0.8627 | 0.8435 | 0.8515 | 0.8891 |
| 0.1263 | 4.0 | 1876 | 0.5213 | 0.9012 | 0.8781 | 0.8886 | 0.9094 |
| 0.0719 | 5.0 | 2345 | 0.4879 | 0.9036 | 0.8694 | 0.8851 | 0.9083 |
| 0.0471 | 6.0 | 2814 | 0.5628 | 0.9185 | 0.8639 | 0.8880 | 0.9104 |
| 0.0431 | 7.0 | 3283 | 0.5592 | 0.8980 | 0.8731 | 0.8846 | 0.9104 |
| 0.0402 | 8.0 | 3752 | 0.5948 | 0.9166 | 0.8591 | 0.8848 | 0.9094 |
| 0.0348 | 9.0 | 4221 | 0.5358 | 0.8986 | 0.8885 | 0.8933 | 0.9158 |
| 0.0276 | 10.0 | 4690 | 0.6361 | 0.9116 | 0.8619 | 0.8843 | 0.9094 |
| 0.0281 | 11.0 | 5159 | 0.6535 | 0.9095 | 0.8726 | 0.8897 | 0.9147 |
| 0.029 | 12.0 | 5628 | 0.6776 | 0.9098 | 0.8673 | 0.8868 | 0.9136 |
| 0.0188 | 13.0 | 6097 | 0.6940 | 0.9072 | 0.8629 | 0.8829 | 0.9072 |
| 0.0215 | 14.0 | 6566 | 0.7022 | 0.9168 | 0.8606 | 0.8856 | 0.9115 |
| 0.0184 | 15.0 | 7035 | 0.6996 | 0.9027 | 0.8687 | 0.8846 | 0.9126 |
| 0.0204 | 16.0 | 7504 | 0.6990 | 0.9063 | 0.8687 | 0.8861 | 0.9126 |
| 0.0204 | 17.0 | 7973 | 0.7268 | 0.9103 | 0.8677 | 0.8871 | 0.9115 |
| 0.0185 | 18.0 | 8442 | 0.7210 | 0.9066 | 0.8766 | 0.8907 | 0.9147 |
| 0.0181 | 19.0 | 8911 | 0.7346 | 0.9096 | 0.8732 | 0.8902 | 0.9147 |
| 0.0151 | 20.0 | 9380 | 0.7363 | 0.9090 | 0.8720 | 0.8892 | 0.9136 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
cheng-hust/rt-cope | cheng-hust | "2024-06-22T13:53:43Z" | 0 | 0 | null | [
"license:unlicense",
"region:us"
] | null | "2024-06-22T13:36:15Z" | ---
license: unlicense
---
This model combines RT-DETR with CoPE (without mask, and with npos_max=12). |
totoro2511/ppo-Huggy | totoro2511 | "2024-06-22T13:38:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T13:38:39Z" | Entry not found |
LarryAIDraw/kafka_xl_v1 | LarryAIDraw | "2024-06-22T13:49:22Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-22T13:38:46Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/51708?modelVersionId=586520 |
LarryAIDraw/firefly_honkai_star_rail_v1_pdxl_goofy | LarryAIDraw | "2024-06-22T13:49:32Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-22T13:39:08Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/529710/firefly-honkai-star-rail-or-goofy-ai |
AdamYijing/AdamYijing | AdamYijing | "2024-06-22T13:40:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T13:40:05Z" | <div><div><strong>Trusted places to buy premium eyewear in Vietnam</strong><br></div><div>Eyewear is not just a fashion accessory; it also protects your eyes from sunlight, dust, and other environmental factors. As the fashion industry has grown, premium eyewear has become increasingly popular for its high-end materials, refined designs, and modern technology. In this article, we introduce reputable places where you can buy premium eyewear that suits your style and needs.<br></div><div><strong>Online shopping channels: convenient, with diverse options</strong><br></div><div>SEE MORE: <strong><a href="https://buildeey.com/profile/cua-hang-mat-kinh-hang-hieu-kinh-hai-trieu-noi-mua-kinh-hieu-dep-cao-cap">eyewear</a></strong><br></div><div><img src="https://cdn.chiaki.vn/unsafe/0x480/left/top/smart/filters:quality(75)/https://chiaki.vn/upload/product/2020/10/5f913b11e15d2-22102020145601.jpg"><br></div><div>Online shopping has become common and convenient, offering many choices. Below are some reputable websites and apps that sell premium eyewear:<br></div><div><strong>Lazada</strong><br></div><div>Lazada is a leading Vietnamese e-commerce site with a wide range of products, including premium eyewear from world-famous brands such as Ray-Ban, Gucci, Dior, and Chanel. Lazada offers attractive promotions and fast delivery.<br></div><div><strong>Shopee</strong><br></div><div>Shopee is another popular e-commerce platform in Vietnam, with a friendly interface, diverse payment options, and good customer service. Shopee hosts many stalls selling genuine premium eyewear at competitive prices.<br></div><div><strong>Tiki</strong><br></div><div>Tiki is a reputable e-commerce site with a nationwide warehouse network that ensures fast delivery. Tiki offers many premium eyewear options from well-known brands, along with convenient warranty and return services.<br></div><div><strong>Sendo</strong><br></div><div>Sendo, one of Vietnam's first e-commerce platforms, offers quality products from trusted retailers, attractive promotions, and fast delivery.<br></div><div><strong>Online eyewear shops</strong><br></div><div>Beyond the e-commerce sites, many dedicated online eyewear shops have earned customers' trust. You can search Google or social networks for reputable shops selling premium eyewear.<br></div><div>Advantages of buying premium eyewear online:<br></div><div>Convenience: it saves time and effort; you can choose products and pay online without visiting a store.<br></div><div>Easy comparison: you can easily compare prices, styles, materials, and brands across suppliers.<br></div><div>Diverse selection: online shopping offers more choices than traditional shopping.<br></div><div>Lower cost: online prices are often lower than in physical stores.<br></div><div>Disadvantages of buying premium eyewear online:<br></div><div>No in-person fitting: you cannot try the glasses on before buying.<br></div><div>Risk of unmet expectations: the product may not match its description or may arrive defective.<br></div><div>Harder to resolve defects: if the product is faulty, you must contact the shop for an exchange, which can take time and effort.<br></div><div><strong>Specialist premium eyewear stores: reputable and professional</strong><br></div><div>SEE ALSO: <strong><a href="https://www.behance.net/kinhhaitrieu">stylish glasses</a></strong><br></div><div><img src="https://cdn.tgdd.vn/hoi-dap/1192160/huong-dan-cach-lau-mat-kinh-dung-cach-sach-va-don-3-800x500.jpg"><br></div><div>Besides shopping online, you can buy premium eyewear at reputable specialist stores. Below are some widely trusted addresses:<br></div><div><strong>Lens &amp; Frame</strong><br></div><div>Lens &amp; Frame is a leading premium eyewear chain in Vietnam carrying famous brands such as Ray-Ban, Cartier, Dior, and Tom Ford, with professional, knowledgeable staff who advise customers attentively.<br></div><div><strong>House of Eye</strong><br></div><div>House of Eye is a premium eyewear chain with many exclusive brands, offering fashion eyewear; glasses for nearsightedness, farsightedness, and astigmatism; and sports eyewear. It uses modern equipment to measure vision and fit glasses to each customer.<br></div><div><strong>Mắt kính Lê Thị</strong><br></div><div>Mắt kính Lê Thị is a long-established, reputable eyewear chain specializing in premium products from famous brands, with ophthalmologists and professional staff who provide eye exams and fitting advice.<br></div><div><strong>Optical Boutique</strong><br></div><div>Optical Boutique is a chain specializing in premium eyewear with diverse styles and brands, and professional staff well versed in fashion trends who help customers choose suitable products.<br></div><div>Advantages of buying premium eyewear at a specialist store:<br></div><div>In-person fitting: you can try glasses on to find what suits your face and taste.<br></div><div>Professional advice: staff can recommend glasses that fit your needs and style.<br></div><div>Warranty and repair: stores usually provide warranty and repair services for products bought there.<br></div><div>Disadvantages of buying premium eyewear at a specialist store:<br></div><div>Higher prices: in-store prices are usually higher than online.<br></div><div>Limited selection: stores usually stock fewer products than online channels.<br></div><div><strong>Top 5 most popular premium eyewear brands</strong><br></div><div>When it comes to premium eyewear, these world-famous brands stand out:<br></div><div><strong>Ray-Ban</strong><br></div><div>Ray-Ban is famous for classic, refined designs and top quality, beloved for models such as the Aviator, Wayfarer, and Clubmaster.<br></div><div><strong>Gucci</strong><br></div><div>Gucci, the Italian luxury fashion house, is also known for trendy, luxurious, bold eyewear.<br></div><div><strong>Dior</strong><br></div><div>Dior, the French fashion house, offers elegant, luxurious, stylish eyewear.<br></div><div><strong>Chanel</strong><br></div><div>Chanel, the French luxury fashion house, is famous for fashionable, refined, distinctive eyewear.<br></div><div>LEARN MORE: <strong><a href="https://cademy.co.uk/matkinhcaocapchinhhang">https://cademy.co.uk/matkinhcaocapchinhhang</a></strong><br></div><div><strong>Tom Ford</strong><br></div><div>Tom Ford, the American fashion brand, offers bold, strong, stylish eyewear.<br></div><div><strong>How to choose premium eyewear for your face shape</strong><br></div><div>To choose premium eyewear that suits your face, keep the following in mind:<br></div><div><strong>Identify your face shape</strong><br></div><div>First, determine whether your face is oval, square, round, triangular, or heart-shaped. This helps you choose a suitable frame style.<br></div><div><strong>Choose the right frame size</strong><br></div><div>Frame size matters and should match your face: frames that are too large make your face look smaller, while frames that are too small make it look larger.<br></div><div><strong>Consider color and material</strong><br></div><div>Color and material also affect your look. Choose colors that match your skin tone, hair color, and style. Premium materials such as acetate, titanium, or metal make glasses look more elegant and last longer.<br></div><div><strong>Try before you buy</strong><br></div><div>If buying in store, try the glasses on before deciding. This helps you choose a pair that suits your face and taste.<br></div><div><strong>Ask an expert</strong><br></div><div>If unsure, ask a professional in-store consultant; they can help you choose the best fit.<br></div><div><strong>Tips for finding premium eyewear at a good price</strong><br></div><div>To own premium eyewear at a reasonable price, try these tips:<br></div><div><strong>Watch for promotions</strong><br></div><div>Follow e-commerce sites and premium eyewear stores for news of promotions and discounts, a good chance to get quality products at attractive prices.<br></div><div><strong>Buy at year's end</strong><br></div><div>In the last months of the year, eyewear brands often launch attractive promotions to boost sales, making this an ideal time to buy premium eyewear at a discount.<br></div><div><strong>Register as a member</strong><br></div><div>Sign up as a member of premium eyewear websites or stores to receive news of promotions and member-only offers, which can save you money.<br></div><div><strong>Use discount codes</strong><br></div><div>If you have a discount code, use it to reduce your total. Codes are usually offered during promotions or through the store's partners.<br></div><div><strong>How to buy premium eyewear online safely</strong><br></div><div>As the internet has grown, online shopping has become increasingly common. To buy premium eyewear online safely, keep these points in mind:<br></div><div><strong>Choose a trustworthy seller</strong><br></div><div>Buy from reputable websites with positive reviews from previous customers. Avoid sites of unclear origin, so you can be sure of product quality and after-sales service.<br></div><div><strong>Read product information carefully</strong><br></div><div>Before deciding to buy, read the product details carefully, including photos, description, material, size, and price, to avoid counterfeit or imitation goods.<br></div><div><strong>Check return and warranty policies</strong><br></div><div>Before buying, check the store's return and warranty policies. Make sure you can return the product if it has quality problems or does not suit your needs.<br></div><div><strong>Pay safely and securely</strong><br></div><div>Always choose safe, secure payment methods when shopping online. Avoid paying by credit card on insecure sites to prevent personal data theft.<br></div><div><strong>Storing and cleaning premium eyewear properly</strong><br></div><div>To keep premium eyewear beautiful and durable, store and clean it properly. Here are some tips to keep your glasses in top condition:<br></div><div><strong>Use a soft cloth</strong><br></div><div>When cleaning your glasses, use a soft, clean cloth to avoid scratching or damaging the lens coating.<br></div><div><strong>Avoid contact with chemicals</strong><br></div><div>Chemicals such as dish soap, perfume, and cleaning solutions can damage glasses; keep your glasses away from them.<br></div><div><strong>Store glasses properly</strong><br></div><div>When not in use, keep your glasses in a case or a dedicated fabric pouch to avoid scratches and damage.<br></div><div><strong>Clean regularly</strong><br></div><div>Clean your glasses daily with warm water and mild soap to remove dust and grease.<br></div><div><strong>Adjust and maintain</strong><br></div><div>Bring your glasses to the store periodically for adjustment and maintenance so they stay in the best working condition.<br></div><div><strong>Conclusion</strong><br></div><div>Above is some guidance on buying premium eyewear: choosing glasses that suit your face, finding good prices, buying online safely, and caring for your glasses properly. We hope it helps you make a smart, satisfying choice when buying the premium eyewear you love.<br></div><div><strong>VTHE20240621<br></strong>#mắt kính, <br></div><div>#kính, <br></div><div>#kính mắt, <br></div><div>#cửa hàng mắt kính, <br></div><div>#mắt kính đẹp, <br></div><div>#kính hải triều, <br></div><div>#kính hiệu</div></div> |
ShiftAddLLM/Llama-3-70b-wbits2-acc | ShiftAddLLM | "2024-06-22T14:01:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T13:40:14Z" | Entry not found |
JoPmt/MistrLlama-3-instruct-v0.2-slerp | JoPmt | "2024-06-25T21:06:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T13:40:42Z" | ---
tags:
- merge
- mergekit
- lazymergekit
---
# MistrLlama-3-instruct-v0.2-slerp
MistrLlama-3-instruct-v0.2-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
- model: NousResearch/Meta-Llama-3-8B-Instruct
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.5, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
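The `slerp` merge method interpolates between the two models' weight tensors along the arc between them rather than along a straight line, which better preserves tensor norms. A toy sketch on plain vectors follows; the real merge operates tensor-by-tensor with the per-layer `t` schedules above:

```python
import math

def slerp(t: float, v0: list, v1: list) -> list:
    """Spherical linear interpolation between two vectors."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if theta < 1e-8:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0.5 between orthogonal unit vectors lands on the arc midpoint
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # [0.707..., 0.707...]
```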
## 💻 Usage
```python
# Install dependencies (notebook-style magic; in a shell, run `pip install -U transformers accelerate`)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "JoPmt/MistrLlama-3-instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Create a text-generation pipeline and sample a response
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
myrulezzzz/mistral_custom4bit | myrulezzzz | "2024-06-22T13:43:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-22T13:41:50Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** myrulezzzz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fruk19/E_ASR_MID | fruk19 | "2024-06-22T21:36:14Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"th",
"dataset:fruk19/E_SMALL",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T13:48:57Z" | ---
language:
- th
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- fruk19/E_SMALL
metrics:
- wer
model-index:
- name: South_asri
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: aicookcook
type: fruk19/E_SMALL
config: default
split: None
args: 'config: th'
metrics:
- name: Wer
type: wer
value: 6.109316028130006
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# South_asri
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aicookcook dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0666
- Wer: 6.1093
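For reference, WER (reported above as a percentage) is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sit"))  # one substitution out of three words
```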
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0464 | 2.0 | 6000 | 0.0702 | 9.2237 |
| 0.0095 | 4.0 | 12000 | 0.0648 | 6.6171 |
| 0.0007 | 6.0 | 18000 | 0.0666 | 6.1093 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand50-aligned_unaugmentation | hchcsuim | "2024-06-22T14:49:17Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-22T13:49:24Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand50-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9923240090149807
- name: Precision
type: precision
value: 0.9939409866701707
- name: Recall
type: recall
value: 0.9975967879018786
- name: F1
type: f1
value: 0.9957655318682123
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand50-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0227
- Accuracy: 0.9923
- Precision: 0.9939
- Recall: 0.9976
- F1: 0.9958
- Roc Auc: 0.9987
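All of the reported metrics (except ROC AUC, which needs scores rather than hard predictions) follow from binary confusion-matrix counts. The sketch below shows those relationships with illustrative counts, not the actual test-set numbers:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative counts only
m = classification_metrics(tp=950, fp=10, fn=5, tn=35)
print({k: round(v, 4) for k, v in m.items()})
```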
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
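The `gradient_accumulation_steps: 4` setting is why the effective batch size is 64 (16 × 4): averaging the gradients of four micro-batches of 16 reproduces the gradient of one batch of 64 (batch-statistics layers aside). A small numerical sketch on a toy one-parameter model:

```python
# Gradient accumulation sketch: the mean of per-micro-batch gradients of a mean
# loss equals the gradient over the full batch, so 4 micro-batches of 16 behave
# like one batch of 64 for the optimizer step.

def grad_mse(w: float, xs: list, ys: list) -> float:
    """d/dw of mean((w*x - y)^2) over a batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [float(i) for i in range(64)]
ys = [2.0 * x for x in xs]
w = 0.5

full = grad_mse(w, xs, ys)
micro = [grad_mse(w, xs[i:i + 16], ys[i:i + 16]) for i in range(0, 64, 16)]
accumulated = sum(micro) / len(micro)  # average the 4 micro-batch gradients
print(abs(full - accumulated) < 1e-6)  # the two gradients agree
```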
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1033 | 0.9994 | 1178 | 0.0227 | 0.9923 | 0.9939 | 0.9976 | 0.9958 | 0.9987 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
slelab/AES6 | slelab | "2024-06-22T14:35:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T13:53:27Z" | Entry not found |
GuyYariv/vLMIG | GuyYariv | "2024-06-22T13:55:16Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-22T13:53:44Z" | ---
license: mit
---
|
wonkitty/sullin | wonkitty | "2024-06-22T13:54:38Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T13:54:17Z" | ---
license: openrail
---
|
AriaRahmati1/222ghesmat4part2 | AriaRahmati1 | "2024-06-22T15:15:32Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T14:00:36Z" | ---
license: openrail
---
|
Hemantrao/config-0 | Hemantrao | "2024-06-23T11:32:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T14:01:58Z" | ---
license: apache-2.0
base_model: DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: config-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# config-0
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8339
- Wer: 0.9455
- Cer: 0.5454
- Mer: 0.9402
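WER, CER, and MER are all edit-distance-based error rates. A minimal word-error-rate sketch in pure Python (a hypothetical illustration, not the evaluation code used for this model; CER is the same computation over characters instead of words):

```python
# Minimal WER sketch: word-level Levenshtein distance divided by the
# number of reference words.
def edit_distance(ref, hyp):
    # Classic one-row dynamic-programming Levenshtein distance.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

print(wer("the cat sat", "the cat sat"))   # 0.0
print(wer("the cat sat", "the cats sat"))  # one substitution -> ~0.33
```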
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Mer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 7.7805 | 1.0050 | 100 | 3.5031 | 1.0 | 1.0 | 1.0 |
| 3.4127 | 2.0101 | 200 | 3.4317 | 1.0 | 0.9381 | 1.0 |
| 3.3348 | 3.0151 | 300 | 3.2768 | 0.9999 | 0.9359 | 0.9999 |
| 2.763 | 4.0201 | 400 | 1.8339 | 0.9455 | 0.5454 | 0.9402 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
senhorsapo/momoga | senhorsapo | "2024-06-22T14:05:06Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T14:02:55Z" | ---
license: openrail
---
|
tsavage68/Summary_L3_150steps_1e8rate_01beta_CSFTDPO | tsavage68 | "2024-06-22T14:07:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T14:03:27Z" | ---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_150steps_1e8rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_150steps_1e8rate_01beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Rewards/chosen: 0.0011
- Rewards/rejected: -0.0016
- Rewards/accuracies: 0.0800
- Rewards/margins: 0.0027
- Logps/rejected: -15.2799
- Logps/chosen: -9.3721
- Logits/rejected: -1.0959
- Logits/chosen: -1.0973
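The reward columns above are internally consistent: `Rewards/margins` is `Rewards/chosen` minus `Rewards/rejected`, and the reported loss matches the standard sigmoid DPO loss on that margin. A sketch, under the assumption (as in TRL) that the reported rewards already include the beta scaling:

```python
import math

# Sanity check of the DPO numbers above (assuming the TRL convention
# that reported rewards are already scaled by beta):
# margin = chosen - rejected, and loss = -log(sigmoid(margin)).
chosen, rejected = 0.0011, -0.0016
margin = chosen - rejected
print(round(margin, 4))  # 0.0027, matching Rewards/margins

loss = -math.log(1 / (1 + math.exp(-margin)))
print(round(loss, 4))  # ~0.6918, close to the reported 0.6919
```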
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6914 | 0.2004 | 50 | 0.6919 | 0.0004 | -0.0022 | 0.0900 | 0.0026 | -15.2856 | -9.3787 | -1.0954 | -1.0968 |
| 0.6938 | 0.4008 | 100 | 0.6918 | 0.0000 | -0.0027 | 0.1050 | 0.0027 | -15.2908 | -9.3826 | -1.0961 | -1.0975 |
| 0.6936 | 0.6012 | 150 | 0.6919 | 0.0011 | -0.0016 | 0.0800 | 0.0027 | -15.2799 | -9.3721 | -1.0959 | -1.0973 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ThomasAngelo/yes | ThomasAngelo | "2024-06-22T14:05:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:05:05Z" | Entry not found |
hchcsuim/batch-size16_Celeb-DF_opencv-1FPS_faces-expand10-aligned_unaugmentation | hchcsuim | "2024-06-22T14:17:44Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-22T14:07:12Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF_opencv-1FPS_faces-expand10-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9529193123741676
- name: Precision
type: precision
value: 0.9541120912412011
- name: Recall
type: recall
value: 0.9913896861401722
- name: F1
type: f1
value: 0.9723937522702506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF_opencv-1FPS_faces-expand10-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1239
- Accuracy: 0.9529
- Precision: 0.9541
- Recall: 0.9914
- F1: 0.9724
- Roc Auc: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1313 | 1.0 | 202 | 0.1239 | 0.9529 | 0.9541 | 0.9914 | 0.9724 | 0.9834 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
filosofomaster/master | filosofomaster | "2024-06-22T14:07:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:07:46Z" | Entry not found |
ugurcelebi/DevOpsGPT-1.1 | ugurcelebi | "2024-06-22T14:11:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T14:09:34Z" | ---
base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** ugurcelebi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MayurPai/entity-extraction-Llama-2-7b-chat-hf | MayurPai | "2024-06-22T14:17:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T14:10:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ruihanglai/llava | ruihanglai | "2024-06-22T14:18:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:13:51Z" | Entry not found |
its1nonly/model_save | its1nonly | "2024-06-22T14:14:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:14:03Z" | Entry not found |
Hemantrao/config-1 | Hemantrao | "2024-06-23T13:13:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T14:18:40Z" | ---
license: apache-2.0
base_model: DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: config-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# config-1
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3415
- Wer: 0.9997
- Cer: 0.9398
- Mer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Mer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 9.6228 | 1.0050 | 100 | 3.5667 | 1.0 | 1.0 | 1.0 |
| 3.4908 | 2.0101 | 200 | 3.4348 | 0.9999 | 0.9755 | 0.9999 |
| 3.3938 | 3.0151 | 300 | 3.3869 | 1.0 | 0.9351 | 1.0 |
| 3.3501 | 4.0201 | 400 | 3.3415 | 0.9997 | 0.9398 | 0.9997 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
DavLam/Testing | DavLam | "2024-06-22T14:19:01Z" | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | "2024-06-22T14:19:01Z" | ---
license: llama3
---
|
ruihanglai/whisper | ruihanglai | "2024-06-22T14:24:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:23:06Z" | Entry not found |
hchcsuim/batch-size16_Celeb-DF_opencv-1FPS_faces-expand20-aligned_unaugmentation | hchcsuim | "2024-06-22T14:34:30Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-22T14:24:35Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF_opencv-1FPS_faces-expand20-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9614966620090049
- name: Precision
type: precision
value: 0.9650122050447518
- name: Recall
type: recall
value: 0.9898914958731336
- name: F1
type: f1
value: 0.9772935359824207
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF_opencv-1FPS_faces-expand20-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1108
- Accuracy: 0.9615
- Precision: 0.9650
- Recall: 0.9899
- F1: 0.9773
- Roc Auc: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1781 | 0.9975 | 201 | 0.1108 | 0.9615 | 0.9650 | 0.9899 | 0.9773 | 0.9856 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
manbeast3b/KinoInferTry3 | manbeast3b | "2024-06-22T14:25:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:25:27Z" | Entry not found |
Amadeus99/anomaly-detection-flow | Amadeus99 | "2024-06-22T15:12:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:facebook/timesformer-base-finetuned-k400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-22T14:26:22Z" | ---
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
model-index:
- name: anomaly-detection-flow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anomaly-detection-flow
This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6828
- Auc: 0.6963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Auc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0734 | 1.0 | 805 | 0.8667 | 0.5397 |
| 0.7801 | 2.0 | 1610 | 0.6828 | 0.6963 |
| 0.7527 | 3.0 | 2415 | 1.0352 | 0.7117 |
| 0.6998 | 4.0 | 3220 | 1.0508 | 0.7091 |
| 0.5515 | 5.0 | 4025 | 1.6155 | 0.6930 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
thoth-a1/audr3y_asset2 | thoth-a1 | "2024-06-22T14:35:23Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-06-22T14:33:37Z" | Entry not found |
Kokokojima/kane_new_type | Kokokojima | "2024-06-22T14:37:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:33:55Z" | Entry not found |
QuantiPhy/Salesforce_codegen2-7B_P | QuantiPhy | "2024-06-22T14:38:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-22T14:34:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hemantrao/config-2 | Hemantrao | "2024-06-23T15:05:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T14:34:53Z" | ---
license: apache-2.0
base_model: DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: config-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# config-2
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0612
- Wer: 0.9701
- Cer: 0.6014
- Mer: 0.9671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Mer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 7.1407 | 1.0050 | 100 | 3.4310 | 0.9999 | 0.9815 | 0.9999 |
| 3.3804 | 2.0101 | 200 | 3.3707 | 1.0 | 0.9582 | 1.0 |
| 3.3051 | 3.0151 | 300 | 3.2022 | 1.0 | 0.9539 | 1.0 |
| 2.6907 | 4.0201 | 400 | 2.0612 | 0.9701 | 0.6014 | 0.9671 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Hamed7immortal/test2 | Hamed7immortal | "2024-06-22T14:36:38Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T14:36:38Z" | ---
license: openrail
---
|
alper54541/lanadalrey | alper54541 | "2024-06-22T14:37:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:37:11Z" | Entry not found |
cxfajar197/distilbert-base-uncased-finetuned-imdb-accelerate | cxfajar197 | "2024-06-22T14:38:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:38:06Z" | Entry not found |
sccengizlrn/donut-sciencedirect-header-parser-raw-3-epoch | sccengizlrn | "2024-06-22T14:40:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:40:02Z" | Entry not found |
its1nonly/food_classifier | its1nonly | "2024-06-23T14:16:10Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-22T14:42:23Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: its1nonly/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# its1nonly/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8165
- Validation Loss: 1.6500
- Train Accuracy: 0.84
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
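The serialized optimizer config above uses a Keras `PolynomialDecay` schedule; with `power=1.0` this is simply a linear decay from 3e-05 to 0 over 4000 steps. A pure-Python sketch mirroring the Keras formula (an illustration, not the library code itself):

```python
# Pure-Python sketch of the PolynomialDecay schedule from the optimizer
# config above (power=1.0 reduces it to a plain linear decay).
def polynomial_decay(step, initial_lr=3e-05, decay_steps=4000,
                     end_lr=0.0, power=1.0):
    step = min(step, decay_steps)
    frac = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr

print(polynomial_decay(0))     # 3e-05 at the start
print(polynomial_decay(2000))  # halfway: 1.5e-05
print(polynomial_decay(4000))  # 0.0 at the end
```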
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8165 | 1.6500 | 0.84 | 0 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.16.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
cxfajar197/bert-multilingual-finetuned-iqbal-accelerate | cxfajar197 | "2024-06-22T14:44:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:44:35Z" | Entry not found |
maliijaz/Qwen2_new | maliijaz | "2024-06-22T14:46:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/qwen2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T14:46:16Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
base_model: unsloth/qwen2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** maliijaz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jddllwqa/Qwen-Qwen1.5-0.5B-1719067592 | jddllwqa | "2024-06-22T14:46:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-06-22T14:46:32Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
hchcsuim/batch-size16_Celeb-DF_opencv-1FPS_faces-expand30-aligned_unaugmentation | hchcsuim | "2024-06-22T14:58:03Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-22T14:48:00Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF_opencv-1FPS_faces-expand30-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9559498793680442
- name: Precision
type: precision
value: 0.955511881365017
- name: Recall
type: recall
value: 0.9936826458565589
- name: F1
type: f1
value: 0.974223517624556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF_opencv-1FPS_faces-expand30-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1173
- Accuracy: 0.9559
- Precision: 0.9555
- Recall: 0.9937
- F1: 0.9742
- Roc Auc: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
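With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps from 0 to the peak (5e-05) over the first 10% of optimizer steps and then decays linearly back to 0. A minimal sketch of that schedule (the function name and the 201-step total, taken from the training-results table below, are illustrative):

```python
def linear_schedule_with_warmup(step, total_steps, peak_lr=5e-05, warmup_ratio=0.1):
    """LR for linear warmup followed by linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp up linearly from 0 to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Decay linearly from peak_lr down to 0 over the remaining steps.
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 201  # one epoch at effective batch size 16 * 4 = 64
print(linear_schedule_with_warmup(0, total))      # 0.0 before warmup
print(linear_schedule_with_warmup(total, total))  # 0.0 after full decay
```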
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1447 | 1.0 | 201 | 0.1173 | 0.9559 | 0.9555 | 0.9937 | 0.9742 | 0.9848 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
B20274/Llama-2-7b-peft-1 | B20274 | "2024-06-22T14:50:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T14:48:49Z" | First version of Llama-2 ("NousResearch/Llama-2-7b-hf") fine-tuned for the LMSYS Kaggle competition.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=CFG.epochs,
    per_device_train_batch_size=CFG.batch_size,
    per_device_eval_batch_size=CFG.batch_size,
    gradient_accumulation_steps=1,
    eval_strategy="steps",
    eval_steps=1000,
    logging_steps=1,
    optim="paged_adamw_8bit",
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=10,
    # fp16=True,
    gradient_checkpointing=True,
    report_to="wandb",
)
```
|
kmafutah/DayTrader | kmafutah | "2024-06-22T14:51:31Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-22T14:51:31Z" | ---
license: unknown
---
|
wenzhy7/ext-llama2 | wenzhy7 | "2024-06-22T15:08:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:wenzhy7/llama2_sft_ext",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-22T14:56:32Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-2-7b-chat-hf
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- wenzhy7/llama2_sft_ext
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Slabera/Envoxi | Slabera | "2024-06-22T14:56:39Z" | 0 | 0 | null | [
"license:c-uda",
"region:us"
] | null | "2024-06-22T14:56:39Z" | ---
license: c-uda
---
|
minsi2004/super_junior | minsi2004 | "2024-06-30T05:17:08Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:00:07Z" | ---
license: openrail
---
|
rain2017/zephyr-7b-dpo-qlora | rain2017 | "2024-06-22T15:02:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:02:51Z" | Entry not found |
slimepointe/LSMDYSLC | slimepointe | "2024-06-27T19:49:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:04:40Z" | Entry not found |
DysfunctionalHuman/bert-token-allfeat | DysfunctionalHuman | "2024-06-22T15:05:12Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:05:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Johmmyyyy/llama-medical | Johmmyyyy | "2024-06-24T08:00:16Z" | 0 | 0 | null | [
"medical",
"zh",
"dataset:Johmmyyyy/icd9cm3",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T15:07:40Z" | ---
license: apache-2.0
language:
- zh
tags:
- medical
datasets:
- Johmmyyyy/icd9cm3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Flamenco43/nan_lora_model | Flamenco43 | "2024-06-22T15:08:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:08:07Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** Flamenco43
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
valerielucro/mistral_gsm8k_dpo_cot_r64_epoch2 | valerielucro | "2024-06-22T15:14:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:13:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChengYi1/internvl15_20b_drivellm | ChengYi1 | "2024-06-22T15:28:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:14:18Z" | Entry not found |
MinhhMinhh/THQvoice | MinhhMinhh | "2024-06-23T09:00:51Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:16:24Z" | ---
license: openrail
---
|
AriaRahmati1/222ghesmat6part1 | AriaRahmati1 | "2024-06-22T15:24:01Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:17:26Z" | ---
license: openrail
---
|
Leosagi89/Sagi | Leosagi89 | "2024-06-22T15:17:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:17:50Z" | Entry not found |
Huy227/llama3-instruct | Huy227 | "2024-06-22T15:18:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:18:14Z" | Entry not found |
igofishing/gogogo | igofishing | "2024-06-22T15:18:19Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-22T15:18:19Z" | ---
license: mit
---
|
SimoLM/lora_model | SimoLM | "2024-06-22T15:19:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:18:29Z" | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** tferdi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mesafint/Mesafinthealthconsultplc | Mesafint | "2024-06-22T15:20:32Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T15:20:32Z" | ---
license: apache-2.0
---
|
valerielucro/mistral_gsm8k_dpo_cot_epoch3 | valerielucro | "2024-06-22T15:20:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:20:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wenzhy7/int-llama2 | wenzhy7 | "2024-06-22T15:32:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:wenzhy7/llama2_sft_int",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-22T15:21:00Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-2-7b-chat-hf
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- wenzhy7/llama2_sft_int
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
SimoLM/model | SimoLM | "2024-06-22T15:24:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:23:23Z" | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** tferdi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
McaTech/qot | McaTech | "2024-06-28T18:19:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:26:01Z" | Entry not found |
AriaRahmati1/222ghesmat6part2 | AriaRahmati1 | "2024-06-22T15:36:56Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:29:28Z" | ---
license: openrail
---
|
miniVan/textSummV1 | miniVan | "2024-06-22T15:32:10Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-22T15:32:10Z" | ---
license: mit
---
|
ChengYi1/internvlTest | ChengYi1 | "2024-06-24T02:04:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:34:20Z" | Entry not found |
HAMZABZ/mistral_fine_tuned | HAMZABZ | "2024-06-22T15:35:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:35:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
honghanchen/mylunar | honghanchen | "2024-06-22T15:37:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:37:05Z" | Entry not found |
itisarainyday/llemma-2-7b-ft-test-v8 | itisarainyday | "2024-06-22T19:58:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T15:40:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZidanAf/Zidan_model_output_v3 | ZidanAf | "2024-06-22T17:04:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-22T15:40:56Z" | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Zidan_model_output_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zidan_model_output_v3
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Accuracy: 0.6364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
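The warmup and decay above can be sketched as a plain function (a sketch assuming the standard `transformers` linear-with-warmup schedule; the total of 1930 optimizer steps comes from the training log — 193 steps per epoch for 10 epochs):

```python
def linear_schedule_with_warmup(step, base_lr=1e-06, warmup_steps=100, total_steps=1930):
    """Learning rate at a given optimizer step, mirroring
    transformers.get_linear_schedule_with_warmup."""
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the first 100 steps
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Peak learning rate is reached exactly at the end of warmup
print(linear_schedule_with_warmup(100))  # 1e-06
```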
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 193 | 1.0399 | 0.5114 |
| No log | 2.0 | 386 | 0.9771 | 0.5758 |
| 1.0347 | 3.0 | 579 | 0.9387 | 0.5947 |
| 1.0347 | 4.0 | 772 | 0.8982 | 0.6136 |
| 1.0347 | 5.0 | 965 | 0.8807 | 0.625 |
| 0.848 | 6.0 | 1158 | 0.8706 | 0.6288 |
| 0.848 | 7.0 | 1351 | 0.8641 | 0.6515 |
| 0.7481 | 8.0 | 1544 | 0.8558 | 0.6364 |
| 0.7481 | 9.0 | 1737 | 0.8595 | 0.6326 |
| 0.7481 | 10.0 | 1930 | 0.8559 | 0.6439 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
JaronTHU/Video-CCAM-4B | JaronTHU | "2024-06-28T16:07:52Z" | 0 | 1 | null | [
"safetensors",
"license:mit",
"region:us"
] | null | "2024-06-22T15:47:52Z" | ---
license: mit
---
## Model Summary
Video-CCAM-4B is a lightweight Video-MLLM built on [Phi-3-Mini-4K-Instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [SigLIP SO400M](https://huggingface.co/google/siglip-so400m-patch14-384).
## Usage
Inference uses Hugging Face Transformers on NVIDIA GPUs. The following requirements were tested on Python 3.10:
```
torch==2.1.0
torchvision==0.16.0
transformers==4.40.2
peft==0.10.0
```
## Inference & Evaluation
Please refer to [Video-CCAM](https://github.com/QQ-MM/Video-CCAM) for inference and evaluation.
### Video-MME
|# Frames|32|96|
|:-:|:-:|:-:|
|w/o subs|48.2|49.6|
|w subs|51.7|53.0|
### MVBench: 57.78 (32 frames)
## Acknowledgement
* [xtuner](https://github.com/InternLM/xtuner): Video-CCAM-4B is trained with the xtuner framework. Thanks for their excellent work!
* [Phi-3-Mini-4K-Instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct): A powerful language model developed by Microsoft.
* [SigLIP SO400M](https://huggingface.co/google/siglip-so400m-patch14-384): An outstanding vision encoder developed by Google.
## License
The model is licensed under the MIT license.
|
rvclone/ddpm-butterflies-128 | rvclone | "2024-06-22T15:48:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:48:26Z" | Entry not found |
saicharan8/telugu_bert_1 | saicharan8 | "2024-06-22T15:51:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:50:59Z" | Entry not found |
manbeast3b/KinoInferTry4 | manbeast3b | "2024-06-22T15:53:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:53:06Z" | Entry not found |
AriaRahmati1/222ghesmat6part3 | AriaRahmati1 | "2024-06-22T16:04:31Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:54:39Z" | ---
license: openrail
---
|
HinaBl/Elizabeth-Rose-Bloodflame | HinaBl | "2024-06-22T15:56:43Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:55:55Z" | ---
license: openrail
---
|
manbeast3b/KinoInferTry5 | manbeast3b | "2024-06-22T16:39:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T15:56:26Z" | Entry not found |
HinaBl/Gigi-Murin | HinaBl | "2024-06-22T15:57:19Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T15:56:53Z" | ---
license: openrail
---
|
hari02/llava_final_adapters_v2 | hari02 | "2024-06-22T16:01:49Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T15:58:28Z" | ---
license: apache-2.0
---
|
hdotta/camilo | hdotta | "2024-06-22T16:00:44Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T16:00:44Z" | ---
license: openrail
---
|
dksfudrbs/dkdsid | dksfudrbs | "2024-06-22T16:02:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T16:02:39Z" | Entry not found |
tsavage68/Summary_L3_100steps_1e8rate_05beta_CSFTDPO | tsavage68 | "2024-06-22T16:14:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T16:05:22Z" | ---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_100steps_1e8rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_100steps_1e8rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6879
- Rewards/chosen: -0.0012
- Rewards/rejected: -0.0138
- Rewards/accuracies: 0.1000
- Rewards/margins: 0.0126
- Logps/rejected: -15.2914
- Logps/chosen: -9.3853
- Logits/rejected: -1.0958
- Logits/chosen: -1.0972
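As a sanity check on these numbers: in DPO, the reported rewards are the β-scaled log-probability ratios between the policy and the frozen reference model (here β = 0.5, per the model name). A toy recomputation — the policy log-probs are the Logps reported above, while the reference log-probs are hypothetical values backed out to reproduce the table, not the actual ones:

```python
import math

beta = 0.5  # the "05beta" in the model name

# Policy log-probs are the eval Logps reported above; the reference
# log-probs are hypothetical, chosen only to reproduce the table.
logp_policy_chosen, logp_ref_chosen = -9.3853, -9.3829
logp_policy_rejected, logp_ref_rejected = -15.2914, -15.2638

# DPO rewards: beta-scaled log-ratio of policy vs. reference
reward_chosen = beta * (logp_policy_chosen - logp_ref_chosen)      # ~ -0.0012
reward_rejected = beta * (logp_policy_rejected - logp_ref_rejected)  # ~ -0.0138
margin = reward_chosen - reward_rejected                             # ~  0.0126

# Per-pair DPO loss: -log(sigmoid(margin)); close to log(2) when the margin is tiny
loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
```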
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6824 | 0.2004 | 50 | 0.6901 | 0.0066 | -0.0020 | 0.0850 | 0.0086 | -15.2678 | -9.3695 | -1.0960 | -1.0974 |
| 0.6926 | 0.4008 | 100 | 0.6879 | -0.0012 | -0.0138 | 0.1000 | 0.0126 | -15.2914 | -9.3853 | -1.0958 | -1.0972 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
namrahrehman/dinov2-base-finetuned-lora-EyePacs-dinov2-base-rank8 | namrahrehman | "2024-06-22T19:47:03Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-22T16:07:44Z" | Entry not found |
2shan/distilbert-base-multilingual-cased | 2shan | "2024-06-22T16:13:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T16:09:27Z" | Entry not found |
CCB/abstracts_to_tweet_model | CCB | "2024-06-22T16:12:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"datadreamer",
"datadreamer-0.28.0",
"synthetic",
"gpt-4",
"base_model:google/t5-v1_1-base",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-22T16:10:12Z" |
---
base_model: google/t5-v1_1-base
tags:
- datadreamer
- datadreamer-0.28.0
- synthetic
- gpt-4
- gpt-4
- text2text-generation
widget:
- text: "In this paper, we present a novel method for Natural Language Processing (NLP) based on the introduction of deep learning techniques adapted to linguistics. We demonstrate that by integrating syntactic and semantic analysis in pre-processing stages, superior text understanding can be facilitated. Initial processes involve tokenization, POS-tagging, syntactic-semantic hinging for all corpus. To further the learning precision, we introduce a framework powered by a hybrid of Transformer and Recurrent Neural Networks architectures that manifest in increased efficiency both theoretically and empirically. This paper shares exhaustive results, detailing improvements in feature engineering, promising a reduction in human-size semantic labor. We additionally propose that integrating deep learning methods with traditional linguistics dramatically improves contextual understanding and performance on tasks such as language translation, sentiment analysis, and automated thesaurus generation. The innovations reported here make significant strides towards realizing viable, sophisticated machine-level NLP systems. Additionally, the research represents groundwork for further exploration and development promising higher degrees of culture-language contextuality and robustness integral in future NLP applications."
example_title: "Example 1"
- text: "This paper proposes a novel approach to improve performance in Natural Language Processing (NLP) tasks by harnessing the potential of deep learning algorithms using multilingual transformer models. Our work investigates the challenging problem of understanding and manipulating sentences to Toucan dialogue language in context-dependent situations. We present a comprehensive analysis of established NLP approaches, novel methods based on transformer models, and thorough experiments that demonstrate substantial advancements over the state-of-art performances. Our primary contribution lies in the intelligent integration of thematic role labeling with multilingual models to improve the comprehension of sentence structure; for instance, recognizing grammatical relations irrespective of a word\u2019s syntactic position or morphological form. In addition, our method progresses automatic predicate argument structure analysis, giving significance and having potential applications in tasks such as information extraction, summarization, and machine translation. We provide task-specific models that reveal the comparative strength of our architecture set over a cross-lingual task. Systematic evaluations conducted on several linguistic databases have demonstrated robust effectiveness in extracting and reconstructing meaningful entities from unstructured language data. The empirical results show notable enhancements in NLP task competence and thus stimulate further research avenues for substantial developments in multimodal natural language understanding and endow opportunities for practical applications."
example_title: "Example 2"
- text: "In recent years, natural language processing (NLP) has seen impressive advancements because of the advent of deep learning technologies, like transformer-based models such as BERT., However, there remain significant challenges in obtaining human-level understanding, notably concerning effectively extracting semantics from context, deeper discourse analysis and anticipatory prediction during discourse development. In this research paper, we propose a novel integrative NLP model named Contextualized Anticipatory Semantic Humor Analysis (CASHA), which creates a sophisticated blend of semantic context understanding, discourse reference instantiation, and humorous setting anticipation. Inspired by human cognitive processing, CASHA layers a sentence-level semantic extractor and a transformer-based discourse modelling layer harboring informal semantics to understand intricate discourse embeddings accurately. It subsequently employs an adaptive humor anticipation layer based logically on previous discourse understanding. For rigorous model evaluation, we performed several experiments across diverse data sets encompassing assorted types of humor. Results demonstrate significantly improved performance in both humor detection and humor semantics understanding. They prompt profound thinking about NLP applications regarding human-level understanding of semantics from context. This work represents a potentially influential step in advancing the transformative urban initiatives prioritized by smart cities-examples abound about interfaces for ordinary citizens to interact more creatively with city experiences and for cities authorities to react empathetically to citizen-specific humor, metaphors, and cultural dialects."
example_title: "Example 3"
pipeline_tag: text2text-generation
---
# Model Card
[Add more information here](https://huggingface.co/templates/model-card-example)
## Example Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained('CCB/abstracts_to_tweet_model', revision=None) # Load tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('CCB/abstracts_to_tweet_model', revision=None) # Load model
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id)
inputs = ['In this paper, we present a novel method for Natural Language Processing (NLP) based on the introduction of deep learning techniques adapted to linguistics. We demonstrate that by integrating syntactic and semantic analysis in pre-processing stages, superior text understanding can be facilitated. Initial processes involve tokenization, POS-tagging, syntactic-semantic hinging for all corpus. To further the learning precision, we introduce a framework powered by a hybrid of Transformer and Recurrent Neural Networks architectures that manifest in increased efficiency both theoretically and empirically. This paper shares exhaustive results, detailing improvements in feature engineering, promising a reduction in human-size semantic labor. We additionally propose that integrating deep learning methods with traditional linguistics dramatically improves contextual understanding and performance on tasks such as language translation, sentiment analysis, and automated thesaurus generation. The innovations reported here make significant strides towards realizing viable, sophisticated machine-level NLP systems. Additionally, the research represents groundwork for further exploration and development promising higher degrees of culture-language contextuality and robustness integral in future NLP applications.']
print(pipe(inputs, max_length=512, do_sample=False))
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json). |
tqfang229/deberta-v3-large-com2-car | tqfang229 | "2024-06-22T16:25:37Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-22T16:10:50Z" | ---
license: mit
---
|
AriaRahmati1/222ghesmat7part1 | AriaRahmati1 | "2024-06-22T16:20:20Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T16:11:30Z" | ---
license: openrail
---
|
Irathernotsay/qwen2-1.5b-medical_qa-GGUF | Irathernotsay | "2024-06-22T16:11:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T16:11:58Z" | Entry not found |
inflaton/mistral-7b-instruct-v0.3-MAC-merged_4bit_forced | inflaton | "2024-06-22T16:15:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-22T16:12:24Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ZackSock/whisper-tiny-zh | ZackSock | "2024-06-22T16:17:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T16:17:49Z" | Entry not found |
TommyBushetta/Ring | TommyBushetta | "2024-06-22T16:29:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T16:18:08Z" | the fellowship of the ring |
Fischerboot/Llama3-8B-Sophie-BROKEN-DONT-USE | Fischerboot | "2024-06-22T16:29:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge",
"base_model:Fischerboot/llama3-sophie-adapter-model",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T16:20:03Z" | ---
base_model:
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
- Fischerboot/llama3-sophie-adapter-model
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) + [Fischerboot/llama3-sophie-adapter-model](https://huggingface.co/Fischerboot/llama3-sophie-adapter-model)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge+Fischerboot/llama3-sophie-adapter-model
merge_method: passthrough
dtype: bfloat16
```
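The `base+adapter` notation above means the LoRA adapter is applied to the base model before the passthrough merge. As a toy sketch (not mergekit's actual implementation; shapes and values here are made up for illustration), folding a LoRA adapter into a base weight matrix amounts to `W' = W + (alpha / r) * (B @ A)`:

```python
# Toy illustration of merging a LoRA adapter into base weights.
# This is NOT mergekit's code; it only shows the underlying arithmetic.

def matmul(B, A):
    """Plain-Python matrix product of B (m x r) and A (r x n)."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def merge_lora(W, A, B, r, alpha):
    """Return W + (alpha / r) * (B @ A), the merged weight matrix."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # base weight (2x2), toy values
A = [[0.5, 0.5]]              # low-rank factor A (r=1, out dim 2)
B = [[1.0], [1.0]]            # low-rank factor B (in dim 2, r=1)
print(merge_lora(W, A, B, r=1, alpha=2))  # → [[2.0, 1.0], [1.0, 2.0]]
```

After this fold-in, the passthrough merge simply copies the resulting tensors through unchanged.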
|
Fischerboot/L3-Sophie-8b-test-v2 | Fischerboot | "2024-06-22T19:08:15Z" | 0 | 0 | peft | [
"peft",
"llama",
"generated_from_trainer",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-22T16:21:37Z" | ---
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
library_name: peft
tags:
- generated_from_trainer
model-index:
- name: outputs/qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: Fischerboot/dataset
type: sharegpt
conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 128
sample_packing: false
pad_to_sequence_len: true
lora_r: 1024
lora_alpha: 512
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 12
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
eval_sample_packing: false
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|begin_of_text|>"
eos_token: "<|end_of_text|>"
pad_token: "<|end_of_text|>"
```
</details><br>
# outputs/qlora-out
This model is a fine-tuned version of [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) on the Fischerboot/dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 12
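The cosine scheduler with warmup listed above can be sketched as follows. This is a minimal approximation for illustration (linear warmup, then cosine decay to zero), not the exact implementation used by the trainer:

```python
import math

def lr_at_step(step, total_steps, warmup_steps=10, base_lr=2e-4):
    """Approximate LR schedule: linear warmup for `warmup_steps`,
    then cosine decay from `base_lr` down to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR ramps up during warmup, peaks at base_lr, and decays to ~0.
print(lr_at_step(5, 1000))     # mid-warmup, half of base_lr
print(lr_at_step(10, 1000))    # peak, equals base_lr
print(lr_at_step(1000, 1000))  # end of training, ~0
```

The validation-loss oscillation in the table below is consistent with the relatively high peak learning rate and large LoRA rank in this run.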
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.9056 | 0.0030 | 1 | 5.7722 |
| 0.0005 | 0.25 | 82 | 0.0921 |
| 0.0 | 0.5 | 164 | 0.0000 |
| 0.3708 | 0.75 | 246 | 0.0499 |
| 0.0006 | 1.0 | 328 | 0.0038 |
| 0.0 | 1.25 | 410 | 0.0000 |
| 0.3136 | 1.5 | 492 | 0.4388 |
| 0.0034 | 1.75 | 574 | 0.0247 |
| 0.0116 | 2.0 | 656 | 0.0023 |
| 0.001 | 2.25 | 738 | 0.0064 |
| 0.003 | 2.5 | 820 | 0.0092 |
| 0.0234 | 2.75 | 902 | 0.0134 |
| 0.0001 | 3.0 | 984 | 0.0001 |
| 1.0367 | 3.25 | 1066 | 0.6438 |
| 0.1164 | 3.5 | 1148 | 0.1633 |
| 0.3021 | 3.75 | 1230 | 0.1719 |
| 0.9067 | 4.0 | 1312 | 0.8432 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
woweenie/v71-ds21-curated2-3e5cos-cd0.02-embeddingperturb0-3k-half | woweenie | "2024-06-22T16:26:26Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-22T16:23:34Z" | Entry not found |