Instructions for using Fanqi-Lin-IR/my_trained_fast_tokenizer with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Fanqi-Lin-IR/my_trained_fast_tokenizer with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Fanqi-Lin-IR/my_trained_fast_tokenizer", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
```json
{
  "action_dim": 20,
  "min_token": -60,
  "processor_class": "UniversalActionProcessor",
  "scale": 10,
  "time_horizon": 16,
  "vocab_size": 1024
}
```
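The configuration above is plain JSON and can be inspected with the standard `json` module. The sketch below just parses it and reads the fields; the interpretation of `time_horizon` × `action_dim` as the number of raw action values per chunk is an assumption, not something the repository documents:

```python
import json

# Processor configuration as published in the repository.
raw = """
{
  "action_dim": 20,
  "min_token": -60,
  "processor_class": "UniversalActionProcessor",
  "scale": 10,
  "time_horizon": 16,
  "vocab_size": 1024
}
"""

cfg = json.loads(raw)
print(cfg["processor_class"])  # → UniversalActionProcessor

# Assumption: each sample covers time_horizon steps of action_dim values,
# i.e. 16 * 20 = 320 raw action values per chunk before tokenization.
chunk_values = cfg["time_horizon"] * cfg["action_dim"]
print(chunk_values)  # → 320
```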