Instructions for using GuysTrans/bart-base-re-attention with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GuysTrans/bart-base-re-attention with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("GuysTrans/bart-base-re-attention")
model = AutoModelForSeq2SeqLM.from_pretrained("GuysTrans/bart-base-re-attention")
```
- Notebooks
- Google Colab
- Kaggle
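Once loaded, the checkpoint can be used for sequence-to-sequence generation via the standard Transformers `generate` API. A minimal sketch — the input text and generation parameters below are illustrative examples, not taken from the model card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the checkpoint (downloads the weights on first use)
tokenizer = AutoTokenizer.from_pretrained("GuysTrans/bart-base-re-attention")
model = AutoModelForSeq2SeqLM.from_pretrained("GuysTrans/bart-base-re-attention")

# Illustrative input; replace with text matching the model's training domain
text = "The patient reports a persistent headache and mild fever."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Beam-search generation; max_new_tokens and num_beams are example values
output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern works with `pipeline("text2text-generation", model="GuysTrans/bart-base-re-attention")` if you prefer the higher-level API.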
Training in progress, step 20500
pytorch_model.bin
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:f4948dcb8bd3d8a566e2530e6c5bbe759238f7d7b76156c940e43db8e10ab97a
 size 558025109
```
runs/Oct09_12-54-54_d431b10e7393/events.out.tfevents.1696871313.d431b10e7393.28.1
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:aea564cd30d32fd2985dfab19b847ffb4a138d6f2dfeeb12b6a699299ec6a678
+size 11752
```