Update Curator links
README.md CHANGED
@@ -44,9 +44,9 @@ This model is released under the [NVIDIA Open Model License Agreement](https://d
The model architecture uses a DeBERTa backbone and incorporates multiple classification heads, each dedicated to a task categorization or complexity dimension. This approach enables training a single unified network whose heads all produce their predictions simultaneously during inference. DeBERTa-v3-base can theoretically handle up to 12k tokens, but the default context length is set to 512 tokens.

# How to Use in NVIDIA NeMo Curator

-The inference code for this model is available through the NeMo Curator GitHub repository. Check out this [example notebook](https://github.com/NVIDIA/
+NeMo Curator improves generative AI model accuracy by processing text, image, and video data at scale for training and customization. It also provides pre-built pipelines for generating synthetic data to customize and evaluate generative AI systems.
+
+The inference code for this model is available through the NeMo Curator GitHub repository. Check out this [example notebook](https://github.com/NVIDIA-NeMo/Curator/blob/main/tutorials/text/distributed-data-classification/prompt-task-complexity-classification.ipynb) to get started.

# Input & Output
## Input
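
For readers who want a concrete picture of the architecture paragraph above, here is a minimal sketch of a DeBERTa backbone with several classification heads that all predict in one forward pass. The head names and label counts are illustrative assumptions, not the released checkpoint's actual configuration; the model defines its own heads and loading code.

```python
# Minimal sketch of a multi-head classifier on a DeBERTa-v3 backbone.
# Head names and label counts are illustrative assumptions only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MultiHeadDebertaClassifier(nn.Module):
    def __init__(self, backbone_name="microsoft/deberta-v3-base", heads=None):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        hidden = self.backbone.config.hidden_size
        # One linear head per task-categorization or complexity dimension.
        heads = heads or {"task_type": 11, "creativity_scope": 10, "reasoning": 10}
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n_labels) for name, n_labels in heads.items()}
        )

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        # Every head predicts from the same backbone pass.
        return {name: head(cls) for name, head in self.heads.items()}


tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = MultiHeadDebertaClassifier()
batch = tokenizer(
    ["Write a haiku about GPUs."],
    truncation=True,
    max_length=512,  # default 512-token context
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
```

Because all heads share a single backbone forward pass, adding another dimension costs only one extra linear layer at inference time.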
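
As a complement to the linked notebook, the snippet below sketches the distributed-classification pattern NeMo Curator uses for its prebuilt classifiers. The import paths and class name (`nemo_curator.classifiers.PromptTaskComplexityClassifier`, `DocumentDataset`) follow older Curator releases and are assumptions here; the example notebook above is the authoritative reference for the current repository layout.

```python
# Hedged sketch of running this classifier with NeMo Curator's distributed
# classification utilities. Module and class names are assumptions and may
# differ between Curator releases; follow the linked notebook for the
# up-to-date API.
from nemo_curator.classifiers import PromptTaskComplexityClassifier
from nemo_curator.datasets import DocumentDataset

# Read a JSONL dataset of prompts into a GPU-backed Dask DataFrame.
dataset = DocumentDataset.read_json("prompts/", backend="cudf")

# The classifier tokenizes the text column and adds one output column per
# classification head (task type plus the complexity dimensions).
classifier = PromptTaskComplexityClassifier(batch_size=256)
result = classifier(dataset=dataset)

# Persist the annotated dataset back to disk.
result.to_json("prompts_annotated/")
```

Run this way, each prompt is annotated with one column per head, so downstream filtering or bucketing can operate directly on the resulting DataFrame.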