docs (string, 4 classes) | category (string, 3-31 chars) | thread (string, 7-255 chars) | href (string, 42-278 chars) | question (string, 0-30.3k chars) | context (string, 0-24.9k chars) | marked (int64, 0-1)
---|---|---|---|---|---|---|
huggingface
|
Amazon SageMaker
|
Different Summary Outputs Locally vs API for the Same Text
|
https://discuss.huggingface.co/t/different-summary-outputs-locally-vs-api-for-the-same-text/12454
|
Hi Team,
Whilst using the Inference API to produce summaries of calls with a private model, I sometimes get different outputs from when I load the model and tokeniser locally, even though I'm using the exact same parameters.
To generate the summary locally I run:
device = 'cuda' if cuda.is_available() else 'cpu'
inputs = tokenizer(txt, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'].to(device), no_repeat_ngram_size = 2, max_length=75, top_k = 50, top_p=0.95, early_stopping = True)
summary_3 = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]
to generate the summary using pipeline / inference api I run:
output = query({
"inputs": txt,
"parameters" : {"max_length": 75,
"no_repeat_ngram_size": 2,
"early_stopping": True,
"top_k": 50,
"top_p": 0.95},
})
output[0]['summary_text']
or
summarizer = pipeline("summarization", model="kaizan/production-bart-large-cnn-samsum", use_auth_token=TOKEN)
output = summarizer(txt, max_length=75, no_repeat_ngram_size = 2, top_k = 50, top_p = 0.95, early_stopping = True)
output[0]['summary_text']
In the pipeline / Inference API case I get exactly the same output, but when I run it manually I get a different output, which makes me think there is a variable / seed value set somewhere in the pipeline that I'm not handling when running this setup manually. Does anyone know what variable could be causing this difference?
Thanks,
Karim
|
Hey Karim,
thanks for opening the Thread. When you said “Inference API” are you talking about the hosted inference API (Overview — Api inference documentation) or did you deploy a Model to Amazon SageMaker?
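In case the difference comes from sampling randomness rather than from the deployment itself, here is a minimal sketch for checking that locally (it reuses model, tokenizer, inputs and device from the snippet above and assumes transformers' set_seed helper; this is not a confirmed cause):
from transformers import set_seed
set_seed(42)  # fixes the Python, NumPy and torch RNGs before generation
summary_ids = model.generate(
    inputs['input_ids'].to(device),
    do_sample=True,           # top_k / top_p only take effect when sampling is enabled
    no_repeat_ngram_size=2,
    max_length=75,
    top_k=50,
    top_p=0.95,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))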
| 0 |
huggingface
|
Amazon SageMaker
|
Push_to_hub() from HuggingFaceEstimator in AWS Sagemaker
|
https://discuss.huggingface.co/t/push-to-hub-from-huggingfaceestimator-in-aws-sagemaker/12512
|
Hi,
I guess there is an easy answer but I did not find it.
Use case: I did train a model in a HF Training DLC in AWS SageMaker through a HuggingFaceEstimator.fit(). Now, I would like to upload this trained model to the model hub of Hugging Face.
What is the equivalent code to the model.push_to_hub() for a HuggingFaceEstimator in AWS Sagemaker?
|
Hello,
If you want to push your model to the Hugging Face Hub you can do this directly inside the train.py as shown here: notebooks/sagemaker/14_train_and_push_to_hub at master · huggingface/notebooks · GitHub
or you need to download the model.tar.gz and push it to the Hub afterwards (Documentation). There is currently no method inside the sagemaker-sdk to push models to the Hub after training.
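As a rough sketch of the second option (the S3 URI, repository name, Auto class and token below are placeholders, not values from this thread):
import tarfile
from sagemaker.s3 import S3Downloader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# huggingface_estimator.model_data holds the S3 URI of the trained model.tar.gz
S3Downloader.download("s3://my-bucket/my-training-job/output/model.tar.gz", "model")  # placeholder URI
with tarfile.open("model/model.tar.gz") as tar:
    tar.extractall("model")

model = AutoModelForSequenceClassification.from_pretrained("model")  # pick the Auto class matching your task
tokenizer = AutoTokenizer.from_pretrained("model")
model.push_to_hub("my-username/my-finetuned-model", use_auth_token="hf_xxx")      # placeholder repo and token
tokenizer.push_to_hub("my-username/my-finetuned-model", use_auth_token="hf_xxx")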
| 1 |
huggingface
|
Amazon SageMaker
|
NER on SageMaker Run run_ner.py
|
https://discuss.huggingface.co/t/ner-on-sagemaker-run-run-ner-py/10112
|
Hello @philschmid I hope you are doing well. Question for you: do you have an example of the expected format of the data in order to be able to use this script (run_ner.py) in SageMaker training?
Thanks,
Jorge
|
Hey,
you can always find the expected data format of the examples/ scripts inside the script itself. For run_ner.py the relevant part is here: transformers/run_ner.py at b518aaf193938247f698a7c4522afe42b025225a · huggingface/transformers · GitHub
if data_args.text_column_name is not None:
text_column_name = data_args.text_column_name
elif "tokens" in column_names:
text_column_name = "tokens"
else:
text_column_name = column_names[0]
if data_args.label_column_name is not None:
label_column_name = data_args.label_column_name
elif f"{data_args.task_name}_tags" in column_names:
label_column_name = f"{data_args.task_name}_tags"
else:
label_column_name = column_names[1]
In detail, you can either define text_column_name & label_column_name as hyperparameters to specify which column/key holds your text/tokens and which holds the labels. If you don't define anything, it will pick index 0 for the text/tokens and index 1 for the labels.
You can provide your dataset in any data file format compatible with the datasets library, e.g. csv or json; more on this here: Loading a Dataset — datasets 1.11.0 documentation
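For illustration, a minimal sketch of what a compatible JSON-lines record and the matching hyperparameters could look like (column names, paths and model below are assumptions, not taken from this thread):
# each line of train.json could look like (using the default "tokens" / "ner_tags" columns):
#   {"tokens": ["My", "name", "is", "Sarah"], "ner_tags": [0, 0, 0, 1]}
hyperparameters = {
    "model_name_or_path": "bert-base-cased",
    "train_file": "/opt/ml/input/data/train/train.json",        # path of the SageMaker "train" channel
    "validation_file": "/opt/ml/input/data/test/test.json",     # path of the "test" channel
    "output_dir": "/opt/ml/model",
    "do_train": True,
    "do_eval": True,
}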
| 0 |
huggingface
|
Amazon SageMaker
|
SageMaker doesn’t support argparse actions
|
https://discuss.huggingface.co/t/sagemaker-doesn-t-support-argparse-actions/12467
|
Hi,
I saw that there is a difference about how to get arguments between script in
Prepare a Transformers fine-tuning script 1 that uses argparse.ArgumentParser() and parser.add_argument()
and the ones in transformers/examples/pytorch/ that uses HfArgumentParser() and parser.parse_args_into_dataclasses()
but I need some explanation.
Script in “Prepare a Transformers fine-tuning script”
SageMaker doesn’t support argparse actions: what does it mean?
The `hyperparameters` defined in the [Hugging Face Estimator](https://huggingface.co/docs/sagemaker/train#create-an-huggingface-estimator)
are passed as named arguments and processed by `ArgumentParser()` .
import transformers
import datasets
import argparse
import os
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# hyperparameters sent by the client are passed as command-line arguments to the script
parser.add_argument("--epochs", type=int, default=3)
parser.add_argument("--per_device_train_batch_size", type=int, default=32)
parser.add_argument("--model_name_or_path", type=str)
Note that SageMaker doesn’t support argparse actions.
For example, if you want to use a boolean hyperparameter,
specify type as bool in your script and provide an explicit True or False value.
Script in transformers/examples/pytorch
For example, in the script run_ner.py, the formulation is different.
from transformers import (
(...),
HfArgumentParser,
(...)
)
(...)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
(...)
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
Could someone explain the differences, and whether in SageMaker we must rephrase the arguments section of the scripts in transformers/examples/pytorch/ as formulated in Prepare a Transformers fine-tuning script, or not? Thanks.
|
pierreguillou:
SageMaker doesn’t support argparse actions: what does it mean?
This means you cannot use parser.add_argument("--args", action="store_true")
The HfArgumentParser is a custom implementation on top of argparse that makes it easy to create Python scripts for Transformers. You can use the HfArgumentParser if you want and feel confident; for example, with the HfArgumentParser you don't need to define the TrainingArguments via parser.add_argument, since they are added behind the scenes.
For the SageMaker examples we went with the default argparse since it is easier and faster to get started for non-Transformers experts, and it might have been difficult to understand that you don't need to define per_device_train_batch_size in the train.py but can use it as a hyperparameter in the notebook.
Could someone explain the differences, and whether in SageMaker we must rephrase the arguments section of the scripts in transformers/examples/pytorch/ as formulated in Prepare a Transformers fine-tuning script, or not?
No, you don't need to rephrase them: since the HfArgumentParser creates the add_argument calls behind the scenes, it works with SageMaker. So you can decide how you would like to structure your script.
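To make the difference concrete, a small sketch (the --fp16 flag here is only an illustrative example, not something from this thread):
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # works on SageMaker: the hyperparameter arrives as "--fp16 True" on the command line
    parser.add_argument("--fp16", type=bool, default=False)
    # does NOT work on SageMaker: store_true expects a bare "--fp16" flag with no value
    # parser.add_argument("--fp16", action="store_true")
    args, _ = parser.parse_known_args()
    # note: argparse's bool() treats any non-empty string as True, so some scripts parse the value manually
In the estimator you would then pass hyperparameters={"fp16": True}, which SageMaker turns into --fp16 True on the command line.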
| 1 |
huggingface
|
Amazon SageMaker
|
Transformers 4.6.0 on SageMaker?
|
https://discuss.huggingface.co/t/transformers-4-6-0-on-sagemaker/6217
|
Hi all,
Is there a timeline for when Transformers 4.6.0 will be supported in the HuggingFace SDK on SageMaker?
I’ve recently been having issues with CUDA running out of memory while training a distilBert model:
RuntimeError: CUDA out of memory. Tried to allocate 6.87 GiB (GPU 0; 15.78 GiB total capacity; 7.35 GiB already allocated; 2.79 GiB free; 11.78 GiB reserved in total by PyTorch)
It seems like this has been acknowledged and fixed in a recent commit, and is also described here. It looks like the fix has been added to Transformers 4.6.0, and I can confirm that using this latest version (without the HuggingFace SDK) fixes the OOM issues for me.
When configuring the HuggingFace estimator, it seems like the latest supported version of Transformers is version 4.5.0
ValueError: Unsupported huggingface version: 4.6.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.4.2, 4.5.0, 4.4, 4.5.
Does anyone have an idea of when we can expect version 4.6.0 to be supported?
Thanks!
|
Hey @nreamaroon,
We already opened a PR for a DLC with transformers 4.6.0. I hope we can get it merged as soon as possible.
github.com/aws/deep-learning-containers: [huggingface_tensorflow, huggingface_pytorch] update for Transformers to 4.6.0 (aws:master ← philschmid:patch-1, opened May 13, 2021 by philschmid)
*Description:*
This PR updates the `transformers` version inside the `huggingface` PyTorch and TensorFlow DLCs.
*Tests run:*
https://github.com/huggingface/transformers/tree/master/tests/sagemaker
| ID | Description | Platform | #GPUS | Collected & evaluated metrics |
|---|---|---|---|---|
| pytorch-transformers-test-single | test bert finetuning using BERT from transformer lib + PT | SageMaker createTrainingJob | 1 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transformers-test-2-ddp | test bert finetuning using BERT from transformer lib + PT DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transformers-test-2-smd | test bert finetuning using BERT from transformer lib + PT SM DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transformers-test-1-smp | test roberta finetuning using BERT from transformer lib + PT SM MP | SageMaker createTrainingJob | 8 | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transformers-test-single | test bert finetuning using BERT from transformer lib + TF | SageMaker createTrainingJob | 1 | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transformers-test-2-smd | test bert finetuning using BERT from transformer lib + TF SM DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
*DLC image/dockerfile:*
The Hugging Face DLCs for PyTorch and TensorFlow
| 0 |
huggingface
|
Amazon SageMaker
|
Cuda memory error on unchanged workshop 1 notebooks
|
https://discuss.huggingface.co/t/cuda-memory-error-on-unchanged-workshop-1-notebooks/12329
|
I am running notebooks 1 and 3 unchanged from huggingface-sagemaker-workshop-series/workshop_1_getting_started_with_amazon_sagemaker at main · philschmid/huggingface-sagemaker-workshop-series · GitHub 2
And I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 15.78 GiB total capacity; 14.80 GiB already allocated; 44.75 MiB free; 14.83 GiB reserved in total by PyTorch)
I am trying with different batch sizes and learning rates, but can someone help me understand why not everyone got the same error if we’re all using the same AWS resources?
|
Hello @kjackson,
I ran the notebook twice now, as-is on “main”, and never got any errors.
2021-12-01 08:46:23 Uploading - Uploading generated training model
2021-12-01 08:48:23 Completed - Training job completed
ProfilerReport-1638347816: NoIssuesFound
Training seconds: 490
Billable seconds: 490
| 0 |
huggingface
|
Amazon SageMaker
|
ClientError: Artifact upload failed:Error 5
|
https://discuss.huggingface.co/t/clienterror-artifact-upload-failed-error-5/12073
|
after 0.07 epochs the training job stops and gives the following error :-
[INFO|trainer.py:1885] 2021-11-19 18:43:48,502 >> Saving model checkpoint to /opt/ml/model/checkpoint-3500
[INFO|configuration_utils.py:351] 2021-11-19 18:43:48,503 >> Configuration saved in /opt/ml/model/checkpoint-3500/config.json
Downloading: 28.8kB [00:00, 16.0MB/s]
Downloading: 28.7kB [00:00, 17.6MB/s]
2021-11-19 18:44:10 Uploading - Uploading generated training model
2021-11-19 18:44:10 Failed - Training job failed
ProfilerReport-1637341687: Stopping
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
<ipython-input-13-6a9d8eb3a402> in <module>
33
34 # starting the train job
---> 35 huggingface_estimator.fit()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
690 self.jobs.append(self.latest_training_job)
691 if wait:
--> 692 self.latest_training_job.wait(logs=logs)
693
694 def _compilation_job_name(self):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs)
1650 # If logs are requested, call logs_for_jobs.
1651 if logs != "None":
-> 1652 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
1653 else:
1654 self.sagemaker_session.wait_for_job(self.job_name)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type)
3776
3777 if wait:
-> 3778 self._check_job_status(job_name, description, "TrainingJobStatus")
3779 if dot:
3780 print()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name)
3333 ),
3334 allowed_statuses=["Completed", "Stopped"],
-> 3335 actual_status=status,
3336 )
3337
UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2021-11-19-17-08-07-355: Failed. Reason: ClientError: Artifact upload failed:Error 5: Received a failed archive status.
Thank you for your help.
|
You could use checkpointing with SageMaker, which automatically syncs the checkpoints to S3 when they are created, and only save the model without checkpoints at the end to /opt/ml/model.
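A minimal sketch of what that could look like (bucket, entry point and framework versions below are placeholders):
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    checkpoint_s3_uri="s3://my-bucket/checkpoints",          # checkpoints are synced here while training runs
    checkpoint_local_path="/opt/ml/checkpoints",
    hyperparameters={"output_dir": "/opt/ml/checkpoints"},   # keep heavy checkpoints out of /opt/ml/model
)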
| 1 |
huggingface
|
Amazon SageMaker
|
Some issues when training model on Sagemaker
|
https://discuss.huggingface.co/t/some-issues-when-training-model-on-sagemaker/12213
|
Hello world,
I'm getting two issues when fine-tuning my model using this SageMaker notebook.
No GUI login prompt shows up when running notebook_login(); instead I'm getting this:
(error screenshot omitted)
As a workaround, I’m using hardcoded token
Hit a ResourceLimitExceeded error when running huggingface_estimator.fit(…):
(error screenshot omitted)
For item 2, I have opened an issue with AWS support to request a limit increase, but I expect a slow reply from them. Is there any other way to get around this while still getting a GPU boost from SageMaker?
FYI, I’m using my own AWS account (Free Tier account but having some credits).
Thanks.
|
Hello @ivanlau,
thanks for opening the thread.
To 1.: where are you running the SageMaker notebook?
To 2.: I think you can go with ml.g4dn.xlarge; it also has 1 GPU and shouldn't need a limit increase for that.
| 0 |
huggingface
|
Amazon SageMaker
|
Error deploying BERT on SageMaker
|
https://discuss.huggingface.co/t/error-deploying-bert-on-sagemaker/8401
|
I fine-tuned BERT for text-classification on a custom dataset using HuggingFace and Tensorflow, and now I’m trying to deploy the model for inference through SageMaker. I followed this HuggingFace tutorial 5 but I get the following error. I spent a while looking through the SageMaker HuggingFace documentation 3 to no avail. The error says that model_uri is set to None, but model_uri is not a parameter that I can pass, and I just want it to pull my model from the HuggingFace Hub.
I also tried downloading the model from the Hub, zipping it, uploading it to S3, and passing model_data=“model.tar.gz”, but that didn’t work either.
(screenshot of the error omitted)
Any help would be greatly appreciated!
|
Resolved: I just needed to add an image_uri!
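For reference, a sketch of what adding an explicit image_uri could look like (the S3 path is a placeholder, and the actual ECR URI depends on your region and framework versions):
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",   # placeholder path to the trained model archive
    role=role,
    image_uri="<ECR URI of the Hugging Face inference DLC for your region and framework versions>",
)
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")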
| 0 |
huggingface
|
Amazon SageMaker
|
Returning Multiple Answers for a QA Model on SageMaker
|
https://discuss.huggingface.co/t/returning-multiple-answers-for-a-qa-model-on-sagemaker/12016
|
Hi,
I’ve currently fine-tuned this 1 Question-Answering model to fit a specific business use case we have (identifying the name of a company from a piece of text). When it comes to inference, I’ve found as @sgugger has very clearly explained in this notebook 1 that sometimes the best answer isn’t the one with the best start and end logits as sometimes the highest scoring combination can produce an answer that is too long or too short (just one character).
As such when I was predicting using this model locally I created a return_best_combinaton function that finds the most practical answer using the list of logit scores.
When I used this model via the SageMaker API, I realised it just returns one single answer with a score assigned to it. I wanted to check how this answer is produced (happy to just be directed to the source code if it's available) and whether it's possible to return n likely answers instead of just 1.
Thanks ever so much,
Karim
|
The Inference Toolkit uses the transformers pipelines under the hood. So if you are deploying a model for Question-Answering it would use pipeline("question-answering"). You can find the code for this here: transformers/question_answering.py at master · huggingface/transformers · GitHub
But if you want to use your own function return_best_combinaton you could create a custom inference.py with your own "prediction" step.
huggingface.co: Deploy models to Amazon SageMaker
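A sketch of what such a custom inference.py could look like (the topk parameter name matches older transformers releases; in later releases it is top_k, and the default of 5 below is arbitrary):
from transformers import pipeline

def model_fn(model_dir):
    # model_dir contains the unpacked model.tar.gz
    return pipeline("question-answering", model=model_dir, tokenizer=model_dir)

def predict_fn(data, qa_pipeline):
    inputs = data.pop("inputs", data)
    n_best = data.get("parameters", {}).get("topk", 5)
    # return the n best spans instead of only the single highest-scoring one
    return qa_pipeline(question=inputs["question"], context=inputs["context"], topk=n_best)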
| 0 |
huggingface
|
Amazon SageMaker
|
Error while finding module specification for ‘run_glue.py’
|
https://discuss.huggingface.co/t/error-while-finding-module-specification-for-run-glue-py/11991
|
Code written:-
!pip install git+https://github.com/huggingface/transformers
!python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"
!git clone https://github.com/huggingface/transformers.git
%cd "transformers/examples/pytorch/text-classification/"
!python3 -m run_glue.py --model_name_or_path "microsoft/deberta-v3-large" --task_name "mnli" --do_train --do_eval --evaluation_strategy steps --max_seq_length 256 --warmup_steps 50 --per_device_train_batch_size 8 --learning_rate 6e-6 --num_train_epochs 2 --output_dir "ds_results" --overwrite_output_dir --logging_steps 1000 --logging_dir "ds_results"
These commands were primarily taken from (and slightly modified) - microsoft/deberta-v3-large · Hugging Face
Error:-
/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python: Error while finding module specification for 'run_glue.py' (AttributeError: module 'run_glue' has no attribute '__path__')
I have tried this on my own system and it didn't give this error. Am I missing something? @lewtun
|
I think the problem is that you’re using the -m flag which tries to import run_glue.py as a module and then run it. Does it work if you remove the -m flag?
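I.e., something along these lines (same arguments as above, just invoking the file directly):
!python3 run_glue.py --model_name_or_path "microsoft/deberta-v3-large" --task_name "mnli" --do_train --do_eval --evaluation_strategy steps --max_seq_length 256 --warmup_steps 50 --per_device_train_batch_size 8 --learning_rate 6e-6 --num_train_epochs 2 --output_dir "ds_results" --overwrite_output_dir --logging_steps 1000 --logging_dir "ds_results"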
| 1 |
huggingface
|
Amazon SageMaker
|
Failed. Reason: Please make sure all images included in the model for the production variant AllTraffic exist, and that the execution role used to create the model has permissions to access them
|
https://discuss.huggingface.co/t/failed-reason-please-make-sure-all-images-included-in-the-model-for-the-production-variant-alltraffic-exist-and-that-the-execution-role-used-to-create-the-model-has-permissions-to-access-them/9731
|
Hey!
Been experiencing this error and have tried diagnosing but not sure where the problem might be.
UnexpectedStatusException: Error hosting endpoint summarization-endpoint: Failed. Reason: Please make sure all images included in the model for the production variant AllTraffic exist, and that the execution role used to create the model has permissions to access them..
I initially thought it had to do with IAM Permissions but I am no longer sure that is the case since I added all permissions that might be relevant and I don’t think it’s an issue of the resource not being assigned to the right policy. The model is being created but the endpoint is not being processed correctly. I also considered whether my model.tar.gz was corrupted but even when I tried uploading a model directly from the Hugging Face Hub, I am met with this error message. For some reason as well, no CloudWatch logs are being saved despite the CloudWatch Log Group being created for /aws/sagemaker/Endpoints/summarization-endpoint and having all relevant permissions.
The script is below:
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import BytesDeserializer
import sagemaker
model_name = 'summarization-model'
endpoint_name = 'summarization-endpoint'
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
# hub = {
# 'HF_MODEL_ID':'google/pegasus-large',
# 'HF_TASK':'summarization'
# }
# # create Hugging Face Model Class
# huggingface_model = HuggingFaceModel(
# transformers_version='4.6.1',
# pytorch_version='1.7.1',
# py_version='py36',
# env=hub,
# role=role,
# )
# # create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data="s3://qfn-transcription/ujjawal_files/model.tar.gz", # path to your trained sagemaker model
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6.1", # transformers version used
pytorch_version="1.7.1", # pytorch version used
py_version='py36',
name=model_name
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge',#'ml.m5.xlarge',ml.inf1.xlarge
endpoint_name=endpoint_name,
)
predictor.predict({
'inputs': "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
})
Thanks!
IAM Permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"iam:GetRole",
"iam:PassRole",
"sagemaker:GetRecord"
],
"Resource": [
"arn:aws:sagemaker:*:216283767174:feature-group/*",
"arn:aws:s3:::qfn-transcription/*",
"arn:aws:iam::216283767174:role/callTranscriptionsRole"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"sagemaker:CreateModel",
"logs:GetLogRecord",
"logs:DescribeSubscriptionFilters",
"logs:StartQuery",
"logs:DescribeMetricFilters",
"ecr:BatchDeleteImage",
"logs:ListLogDeliveries",
"ecr:DeleteRepository",
"logs:CreateLogStream",
"logs:TagLogGroup",
"logs:CancelExportTask",
"logs:GetLogEvents",
"logs:FilterLogEvents",
"logs:DescribeDestinations",
"sagemaker:CreateEndpoint",
"logs:StopQuery",
"cloudwatch:GetMetricStatistics",
"logs:CreateLogGroup",
"ecr:PutImage",
"logs:PutMetricFilter",
"logs:CreateLogDelivery",
"servicecatalog:ListAcceptedPortfolioShares",
"sagemaker:CreateEndpointConfig",
"logs:PutResourcePolicy",
"logs:DescribeExportTasks",
"sagemaker:ListActions",
"logs:GetQueryResults",
"sagemaker:DescribeEndpointConfig",
"logs:UpdateLogDelivery",
"ecr:BatchGetImage",
"logs:PutSubscriptionFilter",
"ecr:InitiateLayerUpload",
"logs:ListTagsLogGroup",
"sagemaker:EnableSagemakerServicecatalogPortfolio",
"logs:DescribeLogStreams",
"ecr:UploadLayerPart",
"logs:GetLogDelivery",
"cloudwatch:ListMetrics",
"servicecatalog:AcceptPortfolioShare",
"logs:CreateExportTask",
"ecr:CompleteLayerUpload",
"logs:AssociateKmsKey",
"sagemaker:DescribeEndpoint",
"logs:DescribeQueryDefinitions",
"logs:PutDestination",
"logs:DescribeResourcePolicies",
"ecr:DeleteRepositoryPolicy",
"logs:DescribeQueries",
"logs:DisassociateKmsKey",
"sagemaker:DeleteApp",
"logs:UntagLogGroup",
"logs:DescribeLogGroups",
"logs:PutDestinationPolicy",
"logs:TestMetricFilter",
"logs:PutQueryDefinition",
"logs:DeleteDestination",
"logs:PutLogEvents",
"s3:ListAllMyBuckets",
"ecr:SetRepositoryPolicy",
"logs:PutRetentionPolicy",
"logs:GetLogGroupFields"
],
"Resource": "*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"sagemaker:CreateApp"
],
"Resource": [
"arn:aws:sagemaker:*:216283767174:app/*/*/*/*",
"arn:aws:s3:::qfn-transcription/*"
]
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "sagemaker:DescribeApp",
"Resource": "arn:aws:sagemaker:*:216283767174:app/*/*/*/*"
},
{
"Sid": "VisualEditor4",
"Effect": "Allow",
"Action": [
"sagemaker:DescribeTrainingJob",
"sagemaker:CreateMonitoringSchedule",
"sagemaker:PutRecord",
"sagemaker:CreateTrainingJob",
"sagemaker:CreateProcessingJob"
],
"Resource": [
"arn:aws:sagemaker:*:216283767174:feature-group/*",
"arn:aws:sagemaker:*:216283767174:monitoring-schedule/*",
"arn:aws:sagemaker:*:216283767174:processing-job/*",
"arn:aws:sagemaker:*:216283767174:training-job/*"
]
},
{
"Sid": "VisualEditor5",
"Effect": "Allow",
"Action": [
"sagemaker:DescribeNotebookInstanceLifecycleConfig",
"sagemaker:StopNotebookInstance",
"sagemaker:DescribeNotebookInstance"
],
"Resource": [
"arn:aws:sagemaker:*:216283767174:feature-group/*",
"arn:aws:sagemaker:*:216283767174:notebook-instance-lifecycle-config/*",
"arn:aws:sagemaker:*:216283767174:notebook-instance/*"
]
},
{
"Sid": "VisualEditor6",
"Effect": "Allow",
"Action": [
"ecr:SetRepositoryPolicy",
"ecr:CompleteLayerUpload",
"ecr:BatchGetImage",
"ecr:BatchDeleteImage",
"ecr:UploadLayerPart",
"ecr:DeleteRepositoryPolicy",
"ecr:InitiateLayerUpload",
"ecr:DeleteRepository",
"ecr:PutImage"
],
"Resource": "arn:aws:ecr:*:*:repository/*"
}
]
}
|
Hey @ujjirox,
The error is 100% related to your IAM permissions. See:
amazon web services - 'all images for the production variant AllTraffic exist, the execution role used to create the model has permissions to access them' - Stack Overflow
https://github.com/aws/sagemaker-python-sdk/issues/1835
https://github.com/aws/sagemaker-python-sdk/discussions/2365
I saw that you edited your IAM permissions manually (VisualEditorX); e.g., ecr:GetDownloadUrlForLayer is missing, which is needed to pull the ECR image correctly.
Can you test the deployment with the AmazonSageMakerFullAccess policy? See: SageMaker Roles - Amazon SageMaker
The IAM managed policy AmazonSageMakerFullAccess, used in the following procedure, only grants the execution role permission to perform certain Amazon S3 actions on buckets or objects with SageMaker, Sagemaker, sagemaker, or aws-glue in the name. To learn how to add an additional policy to an execution role to grant it access to other Amazon S3 buckets and objects, see Add Additional Amazon S3 Permissions to a SageMaker Execution Role.
If this works you can take a look at more detailed permissions here: SageMaker Roles - Amazon SageMaker
And then create a new clean role.
| 0 |
huggingface
|
Amazon SageMaker
|
Truncation of input data for Summarization pipeline
|
https://discuss.huggingface.co/t/truncation-of-input-data-for-summarization-pipeline/11825
|
I'm using bart-large-cnn for a summarization task. I have been truncating the input text in order to avoid exceeding the maximum sequence length:
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained(
"facebook/bart-large-cnn"
)
inputs = tokenizer(
input_text, return_tensors="pt", max_length=1024, truncation=True
)
outputs = model.generate(
inputs["input_ids"],
max_length=300,
min_length=100,
length_penalty=2.0,
num_beams=4,
early_stopping=True,
)
summary = " ".join(
tokenizer.decode(
g, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
for g in outputs
)
How can I replicate this in SageMaker? I don't see a way to pass configuration values to the tokenizer when calling predict on an instance of HuggingFaceModel. Here is my code so far:
env = {
"HF_MODEL_ID": "facebook/bart-large-cnn",
"HF_TASK": "summarization",
}
huggingface_model = HuggingFaceModel(
env=env,
role=role,
transformers_version="4.6",
pytorch_version="1.7",
py_version="py36",
)
data = {
"inputs": input_text,
"parameters": {
"max_length": 300,
"min_length": 100,
"length_penalty": 2.0,
"num_beams": 4,
}
}
result = predictor.predict(data)
Thank you!
|
I could reproduce the issue and also found the root cause of it. The issue is that:
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
And there is currently no way to pass in the max_length to the Inference Toolkit.
There are now 2 options to solve this: you could either fork the model into your own repository and add a tokenizer_config.json similar to this one: tokenizer_config.json · distilbert-base-uncased-finetuned-sst-2-english at main,
or you could provide a custom inference.py as entry_point when creating the HuggingFaceModel.
e.g.
huggingface_model = HuggingFaceModel(
env=env,
role=role,
transformers_version="4.6",
pytorch_version="1.7",
py_version="py36",
entry_point="inference.py",
)
The inference.py then needs to contain a predict_fn and a model_fn. Pseudo-code below.
from transformers import BartTokenizer, BartForConditionalGeneration

def model_fn(model_dir):
    """model_dir is the location where the model is stored"""
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
    return model, tokenizer

def predict_fn(data, model):
    """model is the return value of model_fn; data is the request JSON as a python dict"""
    model, tokenizer = model
    # tokenize with truncation to the model's maximum input length
    inputs = tokenizer(data["inputs"], return_tensors="pt", max_length=1024, truncation=True)
    outputs = model.generate(
        inputs["input_ids"],
        max_length=300,
        min_length=100,
        length_penalty=2.0,
        num_beams=4,
        early_stopping=True,
    )
    summary = " ".join(
        tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        for g in outputs
    )
    return {"summary": summary}
You can find more documentation here: Deploy models to Amazon SageMaker
| 1 |
huggingface
|
Amazon SageMaker
|
How to deploy a huggingface model from S3 outside a Jupyter Notebook
|
https://discuss.huggingface.co/t/how-to-deploy-a-huggingface-model-from-s3-outside-a-jupyter-notebook/11391
|
I’ve successfully deployed my model from S3 in a jupyter notebook. Unfortunately my organization requires that all production AWS apps need to be 100% terraform. The guide here says “you can also instantiate Hugging Face endpoints with lower-level SDK such as boto3 and AWS CLI , Terraform and with CloudFormation templates.”
So I’m fairly sure that it’s possible to do this, but I can’t find any documentation anywhere regarding terraform deployment.
|
Hey @sdegrace,
We have an example for AWS CDK already: cdk-samples/sagemaker-endpoint-huggingface at master · philschmid/cdk-samples · GitHub 4, CDK is similar to Terraform.
But if your company wants to use terraform you can easily create it.
For a successful deployment of a SageMaker endpoint you need:
a SageMaker model: Terraform documentation 2
a Endpoint Configuration: Terraform documentation
a SageMaker Endpoint: Terraform documentation
Below you can find "pseudo" code of how this is going to look:
resource "aws_sagemaker_model" "huggingface" {
name = "bert"
execution_role_arn = "arn:aws:iam::111111111111:role/service-role/AmazonSageMaker-ExecutionRole-20200101T000001"
primary_container {
# CPU Image
image="763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference:1.9.1-transformers4.12.3-cpu-py38-ubuntu20.04"
# GPU Image image = "763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference:1.9.1-transformers4.12.3-gpu-py38-cu111-ubuntu20.04"
model_data_url="s3://your-model"
}
}
resource "aws_sagemaker_endpoint_configuration" "huggingface" {
name = "bert"
production_variants {
variant_name = "variant-1"
model_name = aws_sagemaker_model.huggingface.name
initial_instance_count = 1
instance_type = "ml.t2.medium"
}
}
resource "aws_sagemaker_endpoint" "huggingface" {
name = "bert"
endpoint_config_name = aws_sagemaker_endpoint_configuration.huggingface.name
}
| 0 |
huggingface
|
Amazon SageMaker
|
Lambda and Batch Transform
|
https://discuss.huggingface.co/t/lambda-and-batch-transform/11538
|
Hi!
I am trying to set up a Lambda and a batch transform. These are the steps that were executed in Lambda:
HuggingFaceModel()
HuggingFaceModel.transformer()
HuggingFaceModel.transformer.transform()
The issue is that step #3 waits for the SageMaker batch transform job to return a response. This takes several minutes, which is billed Lambda time. For other batch transform jobs using boto3.sagemaker.create_transform_job, the status is returned immediately after a successful send. Does anyone have any experience with batch transform on Lambda and having it return immediately without waiting for a response? Thanks!
|
Hello @vng510,
you can still use the sagemaker classes in your lambda functions.
vng510:
HuggingFaceModel()
HuggingFaceModel.transformer()
HuggingFaceModel.transformer.transform()
HuggingFaceModel.transformer.transform() accepts a parameter called wait, which defines whether the call should wait until the job completes.
So when you call HuggingFaceModel.transformer.transform(wait=False) it will create the batch job but won't wait for anything.
https://sagemaker.readthedocs.io/en/stable/api/inference/transformer.html#sagemaker.transformer.Transformer.transform 1
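A rough sketch of how that could look inside a Lambda handler (model data, role ARN, versions and S3 paths below are placeholders):
from sagemaker.huggingface import HuggingFaceModel

def handler(event, context):
    role = "arn:aws:iam::111111111111:role/sagemaker-execution-role"   # placeholder role ARN
    huggingface_model = HuggingFaceModel(
        model_data="s3://my-bucket/model.tar.gz",                      # placeholder model artifact
        role=role,
        transformers_version="4.6.1",
        pytorch_version="1.7.1",
        py_version="py36",
    )
    batch_job = huggingface_model.transformer(
        instance_count=1,
        instance_type="ml.g4dn.xlarge",
    )
    batch_job.transform(
        data="s3://my-bucket/batch-input/",    # placeholder input prefix
        content_type="application/json",
        split_type="Line",
        wait=False,                            # return immediately; don't block (and bill) the Lambda
    )
    return {"statusCode": 200, "body": "transform job started"}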
| 1 |
huggingface
|
Amazon SageMaker
|
Training on Sagemaker with Trainer() Instance
|
https://discuss.huggingface.co/t/training-on-sagemaker-with-trainer-instance/11326
|
Hello everyone,
I wanted to train an NLP classifier on our server but it takes around 9 hours per training run. So I wanted to switch the training process to SageMaker. When I just copy my code with the Trainer() instance (trainer.train()) I get the following error:
ImportError: torch>=1.5.0 is required for a normal functioning of this module, but found torch==1.4.0.
And it looks like I cannot update my torch on Sagemaker to 1.5.0.
In my research I found out that many trainings are done with the HuggingFace estimator. Do I always have to use this and change my “local server code”?
|
How did you start your training? Which service did you use?
SageMaker has different options: there are notebook instances, which are just hosted Jupyter services, and then there is also the training platform, which uses the HuggingFace estimator as shown in all the examples.
| 0 |
huggingface
|
Amazon SageMaker
|
How to fix “ValueError: Need either a GLUE task or a training/validation file.”
|
https://discuss.huggingface.co/t/how-to-fix-valueerror-need-either-a-glue-task-or-a-training-validation-file/11236
|
I’m not sure what is causing this error during training. I’m using the GLUE task, and I think I have the correct transformers version, but I keep getting this error when I try to train the model: “ValueError: Need either a GLUE task or a training/validation file.”
Here’s the notebook: aws-sagemaker-deploy-sentiment-analysis/huggingface_pl_sentiment_analysis.ipynb at main · JayThibs/aws-sagemaker-deploy-sentiment-analysis · GitHub 1
|
You are not passing your datasets to your training job when starting it using the sagemaker SDK.
Currently, you have
huggingface_estimator.fit() # put this inside fit if using train.py: {'train': training_input_path, 'test': test_input_path}
=> This tells SageMaker that you are not passing any data from S3.
You need to change this to
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})
| 0 |
huggingface
|
Amazon SageMaker
|
Multi Instance Training Error
|
https://discuss.huggingface.co/t/multi-instance-training-error/11069
|
I've been working closely with AWS to solve this issue. They told me to post here. I've been trying to get multi-instance training working with AWS SageMaker x Hugging Face estimators. My code works okay for single-instance non-distributed training and single-instance distributed training. It does not work for multi-instance distributed training. I am using the huggingface-pytorch-training:1.7-transformers4.6-gpu-py36-cu110-ubuntu18.04 image. The image is in our internal ECR because we run in a VPC.
Here is the code I am using. It's calling the same train.py from this repo (SageMaker-HuggingFace-Workshop/train.py at main · C24IO/SageMaker-HuggingFace-Workshop · GitHub). I get a FileNotFoundError after training when the script is trying to load the model. I must be forgetting to set the correct path somewhere.
import sagemaker
import time
from sagemaker.huggingface import HuggingFace
import logging
import os
from sagemaker.s3 import S3Uploader
role = 'ROLE'
default_bucket = 'BUCKET_NAME'
sess = sagemaker.Session()  # session object later passed to the estimator as sagemaker_session
local_train_dataset = "amazon_us_reviews_apparel_v1_00_train.json"
local_test_dataset = "amazon_us_reviews_apparel_v1_00_test.json"
# s3 uris for datasets
remote_train_dataset = f"s3://{default_bucket}/"
remote_test_dataset = f"s3://{default_bucket}/"
# upload datasets
S3Uploader.upload(local_train_dataset,remote_train_dataset)
S3Uploader.upload(local_test_dataset,remote_test_dataset)
print(f"train dataset uploaded to: {remote_train_dataset}/{local_train_dataset}")
print(f"test dataset uploaded to: {remote_test_dataset}/{local_test_dataset}")
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 1, # number of training epochs
'train_batch_size': 32, # batch size for training
'eval_batch_size': 64, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':'distilbert-base-uncased', # pre-trained model
'fp16': True, # Whether to use 16-bit (mixed) precision training
'train_file': local_train_dataset, # training dataset
'test_file': local_test_dataset, # test dataset
}
metric_definitions=[
{'Name': 'eval_loss', 'Regex': "'eval_loss': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_accuracy', 'Regex': "'eval_accuracy': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_f1', 'Regex': "'eval_f1': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_precision', 'Regex': "'eval_precision': ([0-9]+(.|e\-)[0-9]+),?"}]
# define Training Job Name
job_name = f'huggingface-workshop-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}'
discovery_bucket_kms = 'KMS'
subnets = ['subnet-xxx']
security_group_ids = ['sg-xxx','sg-xxz','sg-xxy','sg-xx10']
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
logging.debug('Creating the Estimator')
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py',
source_dir = 'scripts',
instance_type = 'ml.p3.16xlarge',
instance_count = 2,
base_job_name = job_name,
role = role,
transformers_version = '4.6',
pytorch_version = '1.7',
py_version = 'py36',
hyperparameters = hyperparameters,
metric_definitions = metric_definitions,
sagemaker_session=sess,
distribution = distribution,
# SECURITY CONFIGS
output_kms_key = discovery_bucket_kms,
subnets = subnets,
security_group_ids = security_group_ids,
enable_network_isolation = True,
encrypt_inter_container_traffic = True,
image_uri = 'INTERNAL_ECR_URI'
)
# define a data input dictonary with our uploaded s3 uris
training_data = {
'train': remote_train_dataset,
'test': remote_test_dataset
}
logging.debug('Running Fit')
huggingface_estimator.fit(training_data)
|
I figured out my issue. The error I was getting came from loading the best model after training using the flag load_best_model_at_end=True. The model is not saved on every node when doing multi-instance training, so the FileNotFoundError occurs. This flag will cause issues in the Trainer if you're using transformers <= 4.6.
This issue is solved by using a version of the AWS Deep Learning Containers with a Hugging Face version greater than 4.6, such as 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.9.0-transformers4.11.0-gpu-py38-cu111-ubuntu20.04, or by setting load_best_model_at_end=False.
Thanks for the help!
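For completeness, a sketch of the second workaround; whether the train.py in that workshop repo forwards this key to TrainingArguments is an assumption here:
hyperparameters = {
    'epochs': 1,
    'train_batch_size': 32,
    'eval_batch_size': 64,
    'learning_rate': 3e-5,
    'model_id': 'distilbert-base-uncased',
    'fp16': True,
    'train_file': local_train_dataset,
    'test_file': local_test_dataset,
    'load_best_model_at_end': False,  # assumed to be forwarded to TrainingArguments; avoids the FileNotFoundError on worker nodes
}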
| 1 |
huggingface
|
Amazon SageMaker
|
Hitting Deployed Endpoint *Outside* of Notebook
|
https://discuss.huggingface.co/t/hitting-deployed-endpoint-outside-of-notebook/11006
|
All the tutorials tend to end at:
predictor.predict({"input": "YOUR_TEXT_GOES_HERE"})
It’s great that the notebooks deliver you to inference, but I have no idea how to hit this endpoint outside of the context of a Jupyter Notebook. I basically have Amazon AWS Java sdk code that does this:
AmazonSageMakerRuntime runtime = AmazonSageMakerRuntimeClientBuilder.defaultClient();
String body = "{\"instances\": [{\"data\": { \"input\": \"Hello World\"}}]}";
ByteBuffer bodyBuffer = ByteBuffer.wrap(body.getBytes());
InvokeEndpointRequest request = new InvokeEndpointRequest()
.withEndpointName("huggingface-pytorch-training-....")
.withBody(bodyBuffer);
InvokeEndpointResult invokeEndpointResult = runtime.invokeEndpoint(request);
Unfortunately, I get an error:
{
"code": 400,
"type": "InternalServerException",
"message": "Content type is not supported by this framework.\n\n Please implement input_fn to to deserialize the request data or an output_fn to\n serialize the response. For more information, see the SageMaker Python SDK README."
}
Am I missing something?
|
Hey @rosenjcb,
Thank you for opening this thread. Yes, you can use the endpoint with the AWS SDK; for this you can use the InvokeEndpoint method (Java doc).
It looks like you are already doing this and there are only a few missing parts, I guess.
The endpoint expects JSON as the HTTP body and, as the error says, you are missing the Content-Type: application/json header for that.
I have to say I have no Java experience at all, but I found this on StackOverflow:
InvokeEndpointRequest invokeEndpointRequest = new InvokeEndpointRequest();
invokeEndpointRequest.setContentType("application/x-image");
ByteBuffer buf = ByteBuffer.wrap(image);
invokeEndpointRequest.setBody(buf);
invokeEndpointRequest.setEndpointName(endpointName);
invokeEndpointRequest.setAccept("application/json");
AmazonSageMakerRuntime amazonSageMaker = AmazonSageMakerRuntimeClientBuilder.defaultClient();
InvokeEndpointResult invokeEndpointResult = amazonSageMaker.invokeEndpoint(invokeEndpointRequest);
maybe this helps you crafting your request.
You can also find an example of using the AWS SDK for Python (boto3) below:
import boto3

client = boto3.client("sagemaker-runtime")
response = client.invoke_endpoint(
EndpointName=ENDPOINT_NAME,
ContentType="application/json",
Accept="application/json",
Body=JSON_STRING,
)
| 0 |
huggingface
|
Amazon SageMaker
|
Sagemaker downloads huggingface model image every time on running fit
|
https://discuss.huggingface.co/t/sagemaker-downloads-huggingface-model-image-every-time-on-running-fit/10976
|
Sagemaker downloads huggingface model image every time on running huggingface_estimator.fit()
|
In a local notebook, you could always save the model locally and refer to it using a file path (e.g. /Users/sanjanag/models/bert-base). But since this is a job running on ECS, I don’t think you get that choice. It runs from scratch every time.
| 0 |
huggingface
|
Amazon SageMaker
|
Sagemaker gpt-j train file error
|
https://discuss.huggingface.co/t/sagemaker-gpt-j-train-file-error/9619
|
import sagemaker
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'epochs': 1,
'train_batch_size': 128,
'model_name_or_path':'EleutherAI/gpt-j-6B',
'output_dir':'/opt/ml/model'
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/language-modeling
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_clm.py',
source_dir='./examples/pytorch/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit({'training': 's3://domain-gen-data/domain-gen-training.jsonl'})
above is the code and below is the error
2021-08-31 07:31:41 Starting - Starting the training job...
2021-08-31 07:32:07 Starting - Launching requested ML instancesProfilerReport-1630395096: InProgress
......
2021-08-31 07:33:08 Starting - Preparing the instances for training......
2021-08-31 07:34:08 Downloading - Downloading input data...
2021-08-31 07:34:28 Training - Downloading the training image..................
2021-08-31 07:37:33 Training - Training image download completed. Training in progress.bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2021-08-31 07:37:34,584 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
2021-08-31 07:37:34,615 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
2021-08-31 07:37:36,036 sagemaker_pytorch_container.training INFO Invoking user training script.
2021-08-31 07:37:36,481 sagemaker-training-toolkit INFO Installing dependencies from requirements.txt:
/opt/conda/bin/python3.6 -m pip install -r requirements.txt
Requirement already satisfied: datasets>=1.1.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (1.6.2)
Requirement already satisfied: sentencepiece!=0.1.92 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (0.1.91)
Requirement already satisfied: protobuf in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (3.17.1)
Requirement already satisfied: multiprocess in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.70.11.1)
Requirement already satisfied: pyarrow>=1.0.0<4.0.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.0.0)
Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.19.1)
Requirement already satisfied: xxhash in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.2)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.49.0)
Requirement already satisfied: huggingface-hub<0.1.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.0.8)
Requirement already satisfied: packaging in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (20.9)
Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.0.1)
Requirement already satisfied: fsspec in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2021.5.0)
Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.25.1)
Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.8)
Requirement already satisfied: pandas in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.1.5)
Requirement already satisfied: dill in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.3.3)
Requirement already satisfied: filelock in /opt/conda/lib/python3.6/site-packages (from huggingface-hub<0.1.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.12)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.12.5)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (1.25.11)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2.10)
Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.6/site-packages (from protobuf->-r requirements.txt (line 3)) (1.16.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.6/site-packages (from importlib-metadata->datasets>=1.1.3->-r requirements.txt (line 1)) (3.10.0.0)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.6/site-packages (from importlib-metadata->datasets>=1.1.3->-r requirements.txt (line 1)) (3.4.1)
Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.6/site-packages (from packaging->datasets>=1.1.3->-r requirements.txt (line 1)) (2.4.7)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2021.1)
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
2021-08-31 07:37:39,041 sagemaker-training-toolkit INFO Invoking user script
Training Env:
{
"additional_framework_parameters": {},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"train_batch_size": 128,
"output_dir": "/opt/ml/model",
"epochs": 1,
"model_name_or_path": "EleutherAI/gpt-j-6B"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "huggingface-pytorch-training-2021-08-31-07-31-36-059",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-765248384165/huggingface-pytorch-training-2021-08-31-07-31-36-059/source/sourcedir.tar.gz",
"module_name": "run_clm",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "run_clm.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"epochs":1,"model_name_or_path":"EleutherAI/gpt-j-6B","output_dir":"/opt/ml/model","train_batch_size":128}
SM_USER_ENTRY_POINT=run_clm.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=["training"]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=run_clm
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-east-1-765248384165/huggingface-pytorch-training-2021-08-31-07-31-36-059/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"epochs":1,"model_name_or_path":"EleutherAI/gpt-j-6B","output_dir":"/opt/ml/model","train_batch_size":128},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-pytorch-training-2021-08-31-07-31-36-059","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-765248384165/huggingface-pytorch-training-2021-08-31-07-31-36-059/source/sourcedir.tar.gz","module_name":"run_clm","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_clm.py"}
SM_USER_ARGS=["--epochs","1","--model_name_or_path","EleutherAI/gpt-j-6B","--output_dir","/opt/ml/model","--train_batch_size","128"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_CHANNEL_TRAINING=/opt/ml/input/data/training
SM_HP_TRAIN_BATCH_SIZE=128
SM_HP_OUTPUT_DIR=/opt/ml/model
SM_HP_EPOCHS=1
SM_HP_MODEL_NAME_OR_PATH=EleutherAI/gpt-j-6B
PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
Invoking script with the following command:
/opt/conda/bin/python3.6 run_clm.py --epochs 1 --model_name_or_path EleutherAI/gpt-j-6B --output_dir /opt/ml/model --train_batch_size 128
Traceback (most recent call last):
File "run_clm.py", line 468, in <module>
main()
File "run_clm.py", line 182, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/lib/python3.6/site-packages/transformers/hf_argparser.py", line 187, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 12, in __init__
File "run_clm.py", line 161, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.
2021-08-31 07:37:44,472 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 run_clm.py --epochs 1 --model_name_or_path EleutherAI/gpt-j-6B --output_dir /opt/ml/model --train_batch_size 128"
Traceback (most recent call last):
File "run_clm.py", line 468, in <module>
main()
File "run_clm.py", line 182, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/lib/python3.6/site-packages/transformers/hf_argparser.py", line 187, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 12, in __init__
File "run_clm.py", line 161, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.
2021-08-31 07:37:49 Uploading - Uploading generated training model
2021-08-31 07:37:49 Failed - Training job failed
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
<ipython-input-5-33c7f1decb60> in <module>
31
32 # starting the train job
---> 33 huggingface_estimator.fit({'training': 's3://domain-gen-data/domain-gen-training.jsonl'})
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
680 self.jobs.append(self.latest_training_job)
681 if wait:
--> 682 self.latest_training_job.wait(logs=logs)
683
684 def _compilation_job_name(self):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs)
1625 # If logs are requested, call logs_for_jobs.
1626 if logs != "None":
-> 1627 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
1628 else:
1629 self.sagemaker_session.wait_for_job(self.job_name)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type)
3731
3732 if wait:
-> 3733 self._check_job_status(job_name, description, "TrainingJobStatus")
3734 if dot:
3735 print()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name)
3291 ),
3292 allowed_statuses=["Completed", "Stopped"],
-> 3293 actual_status=status,
3294 )
3295
UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2021-08-31-07-31-36-059: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 run_clm.py --epochs 1 --model_name_or_path EleutherAI/gpt-j-6B --output_dir /opt/ml/model --train_batch_size 128"
Traceback (most recent call last):
File "run_clm.py", line 468, in <module>
main()
File "run_clm.py", line 182, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/lib/python3.6/site-packages/transformers/hf_argparser.py", line 187, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 12, in __init__
File "run_clm.py", line 161, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.
|
Hello @danurahul,
Thanks for opening the thread. EleutherAI/gpt-j-6B is not yet trainable with Amazon SageMaker, since the GPT-J PR is not yet merged into transformers. Once it is merged, we either need to update the DLC or you have to include the new version of transformers in a requirements.txt.
In addition, GPT-J-6B is 22GB and won’t fit on a single ml.p3.2xlarge instance; once merged, it would be possible to train it with distributed training, see Run training on Amazon SageMaker 9
Also, even after adjusting the two points above, your script would still not work since you are missing a few hyperparameters. The most crucial one is train_file, which should have your input file as value; in your case it would be:
/opt/ml/input/data/training/domain-gen-training.jsonl
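For reference, a hyperparameter dictionary along these lines should satisfy the argument parser of run_clm.py (a sketch under the assumptions above; note that run_clm.py expects the standard TrainingArguments names such as num_train_epochs and per_device_train_batch_size rather than epochs and train_batch_size):
hyperparameters = {
    'model_name_or_path': 'EleutherAI/gpt-j-6B',
    'train_file': '/opt/ml/input/data/training/domain-gen-training.jsonl',  # file inside the "training" channel
    'do_train': True,
    'num_train_epochs': 1,               # instead of "epochs"
    'per_device_train_batch_size': 4,    # instead of "train_batch_size"
    'output_dir': '/opt/ml/model',
}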
| 0 |
huggingface
|
Amazon SageMaker
|
Batch transform inference job - downloading model from the Hugging Face Hub on start up
|
https://discuss.huggingface.co/t/batch-transform-inference-job-downloading-model-from-the-hugging-face-hub-on-start-up/10632
|
I try to run
github.com/huggingface/notebooks/blob/master/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb
However I get this error:
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: [https://huggingface.co/api/models/cardiffnlp/twitter-roberta-base-sentiment]
It looks like the instance created by AWS SageMaker (client) is not allowed to download the model. Is there something specific I need to add to the role I use with AWS SageMaker?
or there is an issue with a token in
model_info = _api.model_info(repo_id=model_id, revision=revision, token=use_auth_token)
Thanks,
Kate
|
I re-ran the notebook: notebooks/sagemaker/12_batch_transform_inference at master · huggingface/notebooks · GitHub 16
and for me it worked. Can you please test again?
| 0 |
huggingface
|
Amazon SageMaker
|
Infer with SageMaker for a Private Model
|
https://discuss.huggingface.co/t/infer-with-sagemaker-for-a-private-model/10677
|
Hello,
I have a private model I wish to use for inference using Amazon SageMaker. I’ve found the documentation and code snippets are great for public models but for private models I’m just provided with the same code snippet to use which doesn’t have any reference to my authentication tokens and so when I try running it I get a 404 error on the generated instance in SageMaker. Does anyone know how to add to this code to make it work for private models?
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'private_model',
'HF_TASK':'text2text-generation'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The answer to the universe is"
})
Thanks so much,
Karim
|
Hey @kmfoda,
If you want to use private models you also need to define HF_API_TOKEN in the hub dictionary. More here GitHub - aws/sagemaker-huggingface-inference-toolkit 2.
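For example, a sketch of the adjusted hub configuration (the token value is a placeholder; it needs read access to the private repository, and the rest of the deployment code stays unchanged):
hub = {
    'HF_MODEL_ID': 'private_model',
    'HF_TASK': 'text2text-generation',
    'HF_API_TOKEN': 'hf_xxx'   # placeholder: your Hugging Face access token
}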
| 1 |
huggingface
|
Amazon SageMaker
|
Training model file too large and fail to deploy
|
https://discuss.huggingface.co/t/training-model-file-too-large-and-fail-to-deploy/10676
|
hi, community!
I have trained a Bert classification model with below config on sagemaker:
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 5,
'train_batch_size': 4,
'model_name':'bert-base-uncased',
'num_labels':31,
}
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='./scripts',
instance_type='ml.p3.16xlarge',
instance_count=1,
role=role,
# add volume size set up
volume_size=200,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
hyperparameters = hyperparameters)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})
This works perfectly. However, I have two problems:
the generated model file is huge: 64GB. Why is that?
I can’t deploy an endpoint successfully; I am guessing the model size is too big. Any idea how to solve this?
|
Hey @jackieliu930,
Yes, I guess your artifact is so big because all saved checkpoints during training are included. You can either change your checkpoint saving strategy in your train.py or the location where the checkpoints are saved.
Or you could download your model.tar.gz, remove all checkpoints from it and then upload it to s3 again. Documentation here: Deploy models to Amazon SageMaker 2
Another solution would be to upload your model to Models - Hugging Face and then deploy using HF_MODEL_ID and HF_TASK.
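A sketch of the first option, assuming your train.py is built around the Trainer API: point intermediate checkpoints at scratch space (or disable them) so that only the final model ends up in /opt/ml/model:
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/tmp/checkpoints",  # intermediate checkpoints no longer land in /opt/ml/model
    save_strategy="no",             # or keep "epoch"/"steps" together with save_total_limit=1
    num_train_epochs=5,
    per_device_train_batch_size=4,
)
# after training, save only the final model into the SageMaker model directory:
# trainer.save_model("/opt/ml/model")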
| 0 |
huggingface
|
Amazon SageMaker
|
Train end-to-end text classification on sagemaker
|
https://discuss.huggingface.co/t/train-end-to-end-text-classication-on-sagemaker/10610
|
hi,
I’m following the guidance on training text-classification using my own dataset,
refer to notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub 2
I have two questions:
should the label column of the dataset only contain int values? In other words, do I need to preprocess my data and convert categories to 1, 2, 3, …?
do I need to specify the number of classes? If so, where?
thanks!
jackie
|
Hey @jackieliu930,
Yes, the labels need to be int values
Yes, you need to modify the .from_pretrained call here: notebooks/train.py at 3fdb8bd61ed2f2b499dcd55034b1ee58be5cfabb · huggingface/notebooks · GitHub 3
You could also use run_glue.py from the examples 2 with git_config; then you don’t need to provide your own training script. An illustrative sketch of the .from_pretrained change is shown below.
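For the second point, the change in train.py is roughly the following (a sketch; args.num_labels is an assumption, e.g. exposed as a num_labels hyperparameter of the estimator):
from transformers import AutoModelForSequenceClassification

# pass the number of classes explicitly when loading the pretrained model
model = AutoModelForSequenceClassification.from_pretrained(
    args.model_name,
    num_labels=args.num_labels,  # assumed hyperparameter with your class count
)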
| 0 |
huggingface
|
Amazon SageMaker
|
Inference Hyperparameters
|
https://discuss.huggingface.co/t/inference-hyperparameters/9539
|
Hi,
I am interested in deploying a HuggingFace model on AWS SageMaker. Let’s say for example I deploy “google/pegasus-large” on AWS. You have very generously given the code to deploy this shown below. I was wondering whether, as part of the predict function, we can pass additional arguments. I would like to incorporate a custom length penalty as well as a repetition penalty. Would you be able to share where in this code these arguments would be inserted?
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'google/pegasus-large',
'HF_TASK':'summarization'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
})
This is what the inference looks like right now in my notebook:
batch = tokenizer([source], truncation=True, padding='longest', return_tensors="pt").to('cuda')
predicted = tokenizer.batch_decode(model.generate(**batch, length_penalty = 1.5, repetition_penalty=4.), skip_special_tokens=True, )[0]
Thanks!
|
Hey @ujjirox,
Nice to hear that you want to use Amazon SageMaker for deploying your Summarization model, and yes it is possible.
The Inference Toolkit supports the same functionality as the transformers pipelines. You can provide all of your additional prediction parameters in the parameters attribute. I attached an example below of what the request body looks like.
{
"inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team.",
"parameters": {
"repetition_penalty": 4.0,
"length_penalty": 1.5
}
}
You can find more information/parameter at Pipelines — transformers 4.10.1 documentation 11 or at our sagemaker documentation Deploy models to Amazon SageMaker 6
Here is an end-to-end example for the google/pegasus-large how to deploy and use your parameters.
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'google/pegasus-large',
'HF_TASK':'summarization'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
long_text= """
Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021,
is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016,
was recognized for its work in democratizing NLP, the global market value for which is expected to
hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team.
"""
parameters = {'repetition_penalty':4.,'length_penalty': 1.5}
predictor.predict({"inputs":long_text,"parameters":parameters})
| 0 |
huggingface
|
Amazon SageMaker
|
Finetune bart for text summary has nan loss
|
https://discuss.huggingface.co/t/finetune-bart-for-text-summary-has-nan-loss/10040
|
hi,
I’m using SageMaker to fine-tune on amazon_reviews_multi (Chinese) for text summarization, using mT5 as the backbone, but the loss is nan and I’m wondering why.
training code:
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
#iam_client = boto3.client('iam')
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
print(f"IAM role arn used for running training: {role}")
print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")
hyperparameters = {
'model_name_or_path':'google/mt5-small',
'output_dir':'/opt/ml/model',
'dataset_name': 'amazon_reviews_multi',
'dataset_config_name': 'zh',
'output_dir': '/opt/ml/model',
'do_train': True,
'do_eval': True,
'do_predict': True,
'predict_with_generate': True,
'num_train_epochs': 5,
'learning_rate': 5e-5,
'seed': 7,
'fp16': True,
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/seq2seq
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_summarization.py',
source_dir='/home/ec2-user/SageMaker/transformers/examples/seq2seq',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit()
log:
IAM role arn used for running training: arn:aws:iam::847380964353:role/spot-bot-SpotSageMakerExecutionRole-917OYJPI7O18
S3 bucket used for storing artifacts: sagemaker-us-west-2-847380964353
2021-09-16 05:19:41 Starting - Starting the training job...
2021-09-16 05:20:05 Starting - Launching requested ML instancesProfilerReport-1631769575: InProgress
...
2021-09-16 05:20:39 Starting - Preparing the instances for training............
2021-09-16 05:22:29 Downloading - Downloading input data
2021-09-16 05:22:29 Training - Downloading the training image.................bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2021-09-16 05:25:28,430 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
2021-09-16 05:25:28,454 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
2021-09-16 05:25:31,492 sagemaker_pytorch_container.training INFO Invoking user training script.
2021-09-16 05:25:31,980 sagemaker-training-toolkit INFO Installing dependencies from requirements.txt:
/opt/conda/bin/python3.6 -m pip install -r requirements.txt
2021-09-16 05:25:27 Training - Training image download completed. Training in progress.Requirement already satisfied: datasets>=1.1.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (1.6.2)
Requirement already satisfied: sentencepiece!=0.1.92 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (0.1.91)
Requirement already satisfied: protobuf in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (3.17.1)
Collecting sacrebleu>=1.4.12
Downloading sacrebleu-2.0.0-py3-none-any.whl (90 kB)
Collecting rouge-score
Downloading rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)
Collecting nltk
Downloading nltk-3.6.2-py3-none-any.whl (1.5 MB)
Requirement already satisfied: pandas in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.1.5)
Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.25.1)
Requirement already satisfied: fsspec in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2021.5.0)
Requirement already satisfied: huggingface-hub<0.1.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.0.8)
Requirement already satisfied: xxhash in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.2)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.49.0)
Requirement already satisfied: dill in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.3.3)
Requirement already satisfied: multiprocess in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.70.11.1)
Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.8)
Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.0.1)
Requirement already satisfied: packaging in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (20.9)
Requirement already satisfied: pyarrow>=1.0.0<4.0.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.0.0)
Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.19.1)
Requirement already satisfied: regex in /opt/conda/lib/python3.6/site-packages (from sacrebleu>=1.4.12->-r requirements.txt (line 4)) (2021.4.4)
Collecting portalocker
Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: colorama in /opt/conda/lib/python3.6/site-packages (from sacrebleu>=1.4.12->-r requirements.txt (line 4)) (0.4.3)
Collecting tabulate>=0.8.9
Downloading tabulate-0.8.9-py3-none-any.whl (25 kB)
Requirement already satisfied: filelock in /opt/conda/lib/python3.6/site-packages (from huggingface-hub<0.1.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.12)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.12.5)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (1.25.11)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.4)
Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.6/site-packages (from protobuf->-r requirements.txt (line 3)) (1.16.0)
Collecting absl-py
Downloading absl_py-0.13.0-py3-none-any.whl (132 kB)
Requirement already satisfied: joblib in /opt/conda/lib/python3.6/site-packages (from nltk->-r requirements.txt (line 6)) (1.0.1)
Requirement already satisfied: click in /opt/conda/lib/python3.6/site-packages (from nltk->-r requirements.txt (line 6)) (7.1.2)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.6/site-packages (from importlib-metadata->datasets>=1.1.3->-r requirements.txt (line 1)) (3.10.0.0)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.6/site-packages (from importlib-metadata->datasets>=1.1.3->-r requirements.txt (line 1)) (3.4.1)
Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.6/site-packages (from packaging->datasets>=1.1.3->-r requirements.txt (line 1)) (2.4.7)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2021.1)
Installing collected packages: tabulate, portalocker, nltk, absl-py, sacrebleu, rouge-score
Attempting uninstall: tabulate
Found existing installation: tabulate 0.8.7
Uninstalling tabulate-0.8.7:
Successfully uninstalled tabulate-0.8.7
Successfully installed absl-py-0.13.0 nltk-3.6.2 portalocker-2.3.2 rouge-score-0.0.4 sacrebleu-2.0.0 tabulate-0.8.9
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
aws-parallelcluster 2.10.4 requires tabulate<=0.8.7,>=0.8.2, but you have tabulate 0.8.9 which is incompatible.
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
2021-09-16 05:25:39,039 sagemaker-training-toolkit INFO Invoking user script
Training Env:
{
"additional_framework_parameters": {},
"channel_input_dirs": {},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"predict_with_generate": true,
"seed": 7,
"do_predict": true,
"do_train": true,
"dataset_name": "amazon_reviews_multi",
"num_train_epochs": 5,
"do_eval": true,
"dataset_config_name": "zh",
"output_dir": "/opt/ml/model",
"learning_rate": 5e-05,
"model_name_or_path": "google/mt5-small",
"fp16": true
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "huggingface-pytorch-training-2021-09-16-05-19-35-810",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-west-2-847380964353/huggingface-pytorch-training-2021-09-16-05-19-35-810/source/sourcedir.tar.gz",
"module_name": "run_summarization",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "run_summarization.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"dataset_config_name":"zh","dataset_name":"amazon_reviews_multi","do_eval":true,"do_predict":true,"do_train":true,"fp16":true,"learning_rate":5e-05,"model_name_or_path":"google/mt5-small","num_train_epochs":5,"output_dir":"/opt/ml/model","predict_with_generate":true,"seed":7}
SM_USER_ENTRY_POINT=run_summarization.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=run_summarization
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-west-2-847380964353/huggingface-pytorch-training-2021-09-16-05-19-35-810/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"dataset_config_name":"zh","dataset_name":"amazon_reviews_multi","do_eval":true,"do_predict":true,"do_train":true,"fp16":true,"learning_rate":5e-05,"model_name_or_path":"google/mt5-small","num_train_epochs":5,"output_dir":"/opt/ml/model","predict_with_generate":true,"seed":7},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-pytorch-training-2021-09-16-05-19-35-810","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-847380964353/huggingface-pytorch-training-2021-09-16-05-19-35-810/source/sourcedir.tar.gz","module_name":"run_summarization","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_summarization.py"}
SM_USER_ARGS=["--dataset_config_name","zh","--dataset_name","amazon_reviews_multi","--do_eval","True","--do_predict","True","--do_train","True","--fp16","True","--learning_rate","5e-05","--model_name_or_path","google/mt5-small","--num_train_epochs","5","--output_dir","/opt/ml/model","--predict_with_generate","True","--seed","7"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_PREDICT_WITH_GENERATE=true
SM_HP_SEED=7
SM_HP_DO_PREDICT=true
SM_HP_DO_TRAIN=true
SM_HP_DATASET_NAME=amazon_reviews_multi
SM_HP_NUM_TRAIN_EPOCHS=5
SM_HP_DO_EVAL=true
SM_HP_DATASET_CONFIG_NAME=zh
SM_HP_OUTPUT_DIR=/opt/ml/model
SM_HP_LEARNING_RATE=5e-05
SM_HP_MODEL_NAME_OR_PATH=google/mt5-small
SM_HP_FP16=true
PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
Invoking script with the following command:
/opt/conda/bin/python3.6 run_summarization.py --dataset_config_name zh --dataset_name amazon_reviews_multi --do_eval True --do_predict True --do_train True --fp16 True --learning_rate 5e-05 --model_name_or_path google/mt5-small --num_train_epochs 5 --output_dir /opt/ml/model --predict_with_generate True --seed 7
09/16/2021 05:25:45 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: True
09/16/2021 05:25:45 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/opt/ml/model', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=5.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs/Sep16_05-25-45_algo-1', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=None, no_cuda=False, seed=7, fp16=True, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/opt/ml/model', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=[], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, mp_parameters='', sortish_sampler=False, predict_with_generate=True)
Downloading and preparing dataset amazon_reviews_multi/zh (download: 109.09 MiB, generated: 52.01 MiB, post-processed: Unknown size, total: 161.10 MiB) to /root/.cache/huggingface/datasets/amazon_reviews_multi/zh/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609...
Dataset amazon_reviews_multi downloaded and prepared to /root/.cache/huggingface/datasets/amazon_reviews_multi/zh/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609. Subsequent calls will reuse this data.
https://huggingface.co/google/mt5-small/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpsal7nwx8
storing https://huggingface.co/google/mt5-small/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/97693496c1a0cae463bd18428187f9e9924d2dfbadaa46e4d468634a0fc95a41.dadce13f8f85f4825168354a04675d4b177749f8f11b167e87676777695d4fe4
creating metadata file for /root/.cache/huggingface/transformers/97693496c1a0cae463bd18428187f9e9924d2dfbadaa46e4d468634a0fc95a41.dadce13f8f85f4825168354a04675d4b177749f8f11b167e87676777695d4fe4
loading configuration file https://huggingface.co/google/mt5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/97693496c1a0cae463bd18428187f9e9924d2dfbadaa46e4d468634a0fc95a41.dadce13f8f85f4825168354a04675d4b177749f8f11b167e87676777695d4fe4
Model config MT5Config {
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 1024,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 8,
"num_heads": 6,
"num_layers": 8,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.6.1",
"use_cache": true,
"vocab_size": 250112
}
loading configuration file https://huggingface.co/google/mt5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/97693496c1a0cae463bd18428187f9e9924d2dfbadaa46e4d468634a0fc95a41.dadce13f8f85f4825168354a04675d4b177749f8f11b167e87676777695d4fe4
Model config MT5Config {
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 1024,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 8,
"num_heads": 6,
"num_layers": 8,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.6.1",
"use_cache": true,
"vocab_size": 250112
}
https://huggingface.co/google/mt5-small/resolve/main/spiece.model not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp1dhqa6qf
storing https://huggingface.co/google/mt5-small/resolve/main/spiece.model in cache at /root/.cache/huggingface/transformers/37d0f67f084f8c5fc5589e0bba5ff3c6307af833bb0b7f4eb33fbfd8d4038a9d.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
creating metadata file for /root/.cache/huggingface/transformers/37d0f67f084f8c5fc5589e0bba5ff3c6307af833bb0b7f4eb33fbfd8d4038a9d.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
https://huggingface.co/google/mt5-small/resolve/main/special_tokens_map.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpqgr1lhnz
storing https://huggingface.co/google/mt5-small/resolve/main/special_tokens_map.json in cache at /root/.cache/huggingface/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276
creating metadata file for /root/.cache/huggingface/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276
https://huggingface.co/google/mt5-small/resolve/main/tokenizer_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpvibtracc
storing https://huggingface.co/google/mt5-small/resolve/main/tokenizer_config.json in cache at /root/.cache/huggingface/transformers/6a9e52d6dd21568e37b65fc180ada927968e8f7124f0acd6efcaf90cd2e0f4bb.4b81e5d952ad810ca1de2b3e362b9a26a5cc77b4b75daf20caf69fb838751c32
creating metadata file for /root/.cache/huggingface/transformers/6a9e52d6dd21568e37b65fc180ada927968e8f7124f0acd6efcaf90cd2e0f4bb.4b81e5d952ad810ca1de2b3e362b9a26a5cc77b4b75daf20caf69fb838751c32
loading file https://huggingface.co/google/mt5-small/resolve/main/spiece.model from cache at /root/.cache/huggingface/transformers/37d0f67f084f8c5fc5589e0bba5ff3c6307af833bb0b7f4eb33fbfd8d4038a9d.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
loading file https://huggingface.co/google/mt5-small/resolve/main/tokenizer.json from cache at None
loading file https://huggingface.co/google/mt5-small/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/google/mt5-small/resolve/main/special_tokens_map.json from cache at /root/.cache/huggingface/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276
loading file https://huggingface.co/google/mt5-small/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/6a9e52d6dd21568e37b65fc180ada927968e8f7124f0acd6efcaf90cd2e0f4bb.4b81e5d952ad810ca1de2b3e362b9a26a5cc77b4b75daf20caf69fb838751c32
https://huggingface.co/google/mt5-small/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmplpu5vm63
storing https://huggingface.co/google/mt5-small/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70
creating metadata file for /root/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70
loading weights file https://huggingface.co/google/mt5-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70
All model checkpoint weights were used when initializing MT5ForConditionalGeneration.
All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at google/mt5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training.
Using amp fp16 backend
***** Running training *****
Num examples = 200000
Num Epochs = 5
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 125000
[2021-09-16 05:26:59.501 algo-1:31 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[2021-09-16 05:26:59.621 algo-1:31 INFO profiler_config_parser.py:102] User has disabled profiler.
[2021-09-16 05:26:59.621 algo-1:31 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.
[2021-09-16 05:26:59.622 algo-1:31 INFO hook.py:201] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.
[2021-09-16 05:26:59.624 algo-1:31 INFO hook.py:255] Saving to /opt/ml/output/tensors
[2021-09-16 05:26:59.624 algo-1:31 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.
[2021-09-16 05:26:59.881 algo-1:31 INFO hook.py:591] name:shared.weight count_params:128057344
[2021-09-16 05:26:59.881 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.SelfAttention.q.weight count_params:196608
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.SelfAttention.k.weight count_params:196608
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.SelfAttention.v.weight count_params:196608
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.SelfAttention.o.weight count_params:196608
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight count_params:192
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.0.layer_norm.weight count_params:512
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.1.DenseReluDense.wi_0.weight count_params:524288
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.1.DenseReluDense.wi_1.weight count_params:524288
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.1.DenseReluDense.wo.weight count_params:524288
[2021-09-16 05:26:59.882 algo-1:31 INFO hook.py:591] name:encoder.block.0.layer.1.layer_norm.weight count_params:512
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.0.SelfAttention.q.weight count_params:196608
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.0.SelfAttention.k.weight count_params:196608
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.0.SelfAttention.v.weight count_params:196608
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.0.SelfAttention.o.weight count_params:196608
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.0.layer_norm.weight count_params:512
[2021-09-16 05:26:59.883 algo-1:31 INFO hook.py:591] name:encoder.block.1.layer.1.DenseReluDense.wi_0.weight count_params:524288
name:encoder.block.5.layer.0.SelfAttention.q.weight count_params:196608
[2021-09-16 05:26:59.887 algo-1:31 INFO hook.py:591] name:encoder.block.5.layer.0.SelfAttention.k.weight count_params:196608
[2021-09-16 05:26:59.887 algo-1:31 INFO hook.py:591] name:encoder.block.5.layer.0.SelfAttention.v.weight count_params:196608
[2021-09-16 05:26:59.887 algo-1:31 INFO hook.py:591] name:encoder.block.5.layer.0.SelfAttention.o.weight count_params:196608
[2021-09-16 05:26:59.887 algo-1:31 INFO hook.py:591] ....arams:512
[2021-09-16 05:26:59.902 algo-1:31 INFO hook.py:591] name:decoder.block.7.layer.2.DenseReluDense.wi_0.weight count_params:524288
[2021-09-16 05:26:59.902 algo-1:31 INFO hook.py:591] name:decoder.block.7.layer.2.DenseReluDense.wi_1.weight count_params:524288
[2021-09-16 05:26:59.902 algo-1:31 INFO hook.py:591] name:decoder.block.7.layer.2.DenseReluDense.wo.weight count_params:524288
[2021-09-16 05:26:59.903 algo-1:31 INFO hook.py:591] name:decoder.block.7.layer.2.layer_norm.weight count_params:512
[2021-09-16 05:26:59.903 algo-1:31 INFO hook.py:591] name:decoder.final_layer_norm.weight count_params:512
[2021-09-16 05:26:59.903 algo-1:31 INFO hook.py:591] name:lm_head.weight count_params:128057344
[2021-09-16 05:26:59.903 algo-1:31 INFO hook.py:593] Total Trainable Params: 300176768
[2021-09-16 05:26:59.903 algo-1:31 INFO hook.py:425] Monitoring the collections: losses
[2021-09-16 05:26:59.906 algo-1:31 INFO hook.py:488] Hook is writing from the hook with pid: 31
{'loss': nan, 'learning_rate': 4.98664e-05, 'epoch': 0.02}
Saving model checkpoint to /opt/ml/model/checkpoint-500
Configuration saved in /opt/ml/model/checkpoint-500/config.json
Model weights saved in /opt/ml/model/checkpoint-500/pytorch_model.bin
tokenizer config file saved in /opt/ml/model/checkpoint-500/tokenizer_config.json
Special tokens file saved in /opt/ml/model/checkpoint-500/special_tokens_map.json
Copy vocab file to /opt/ml/model/checkpoint-500/spiece.model
{'loss': nan, 'learning_rate': 4.96664e-05, 'epoch': 0.04}
Saving model checkpoint to /opt/ml/model/checkpoint-1000
Configuration saved in /opt/ml/model/checkpoint-1000/config.json
Model weights saved in /opt/ml/model/checkpoint-1000/pytorch_model.bin
tokenizer config file saved in /opt/ml/model/checkpoint-1000/tokenizer_config.json
Special tokens file saved in /opt/ml/model/checkpoint-1000/special_tokens_map.json
Copy vocab file to /opt/ml/model/checkpoint-1000/spiece.model
{'loss': nan, 'learning_rate': 4.9466400000000005e-05, 'epoch': 0.06}
Saving model checkpoint to /opt/ml/model/checkpoint-1500
Configuration saved in /opt/ml/model/checkpoint-1500/config.json
Model weights saved in /opt/ml/model/checkpoint-1500/pytorch_model.bin
tokenizer config file saved in /opt/ml/model/checkpoint-1500/tokenizer_config.json
Special tokens file saved in /opt/ml/model/checkpoint-1500/special_tokens_map.json
Copy vocab file to /opt/ml/model/checkpoint-1500/spiece.model
{'loss': nan, 'learning_rate': 4.92664e-05, 'epoch': 0.08}
Saving model checkpoint to /opt/ml/model/checkpoint-2000
Configuration saved in /opt/ml/model/checkpoint-2000/config.json
Model weights saved in /opt/ml/model/checkpoint-2000/pytorch_model.bin
tokenizer config file saved in /opt/ml/model/checkpoint-2000/tokenizer_config.json
Special tokens file saved in /opt/ml/model/checkpoint-2000/special_tokens_map.json
Copy vocab file to /opt/ml/model/checkpoint-2000/spiece.model
|
Hey @jackieliu930,
When using run_summarization.py with a T5-like model you need to add an additional hyperparameter source_prefix: "summarize: ".
Only T5 models (t5-small, t5-base, t5-large, t5-3b and t5-11b) must use an additional argument: --source_prefix "summarize: ".
You can find more information about the run_summarization.py here:transformers/examples/pytorch/summarization at master · huggingface/transformers · GitHub
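Concretely, only one entry needs to be added to the hyperparameters from your estimator (a sketch, everything else unchanged):
hyperparameters = {
    'model_name_or_path': 'google/mt5-small',
    'dataset_name': 'amazon_reviews_multi',
    'dataset_config_name': 'zh',
    'source_prefix': 'summarize: ',   # required for T5/mT5-style checkpoints
    'do_train': True,
    'do_eval': True,
    'do_predict': True,
    'predict_with_generate': True,
    'num_train_epochs': 5,
    'learning_rate': 5e-5,
    'seed': 7,
    'fp16': True,
    'output_dir': '/opt/ml/model',
}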
| 0 |
huggingface
|
Amazon SageMaker
|
Inference Toolkit - Init and default template for custom inference
|
https://discuss.huggingface.co/t/inference-toolkit-init-and-default-template-for-custom-inference/10469
|
Hey,
Had some quick questions regarding the Inference Toolkit. Is there a way to add an init function in the custom inference.py script? I was thinking I could just add what I needed in the model_fn function, but when I tried running just the basics, I got the error attached below. This leads into the second question.
Do you have a default template for the custom inference.py script? I saw that you had some documentation on GitHub - aws/sagemaker-huggingface-inference-toolkit 3 but I was wondering if you might have an actual script we could modify to our liking.
Thanks!
# This is the script that will be used in the inference container
import os
import json
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
def model_fn(model_dir):
"""
Load the model and tokenizer for inference
"""
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device)
model_dict = {'model':model, 'tokenizer':tokenizer}
return model_dict
def predict_fn(input_data, model):
"""
Make a prediction with the model
"""
text = input_data.pop('inputs')
parameters = input_data.pop('parameters', None)
tokenizer = model['tokenizer']
model = model['model']
# Parameters may or may not be passed
input_ids = tokenizer(text, truncation=True, padding='longest', return_tensors="pt").input_ids
output = model.generate(input_ids, **parameters) if parameters is not None else model.generate(input_ids)
return tokenizer.batch_decode(output, skip_special_tokens=True)[0]
def input_fn(request_body, request_content_type):
"""
Transform the input request to a dictionary
"""
request = json.loads(request_body)
return request
def output_fn(prediction, response_content_type):
"""
Return model's prediction
"""
return {'generated_text':prediction}
[error screenshot attached in the original post]
|
You can add a requirements.txt into the code/ directory of the archive, upload it to s3, and provide it as model_data. This should work. You can use my example to test it.
Additionally, the FrameworkModel class has the attribute dependencies, but it looks more complex as a way to add your dependencies.
dependencies (list[str]) – A list of paths to directories (absolute or relative) with any additional libraries that will be exported to the container (default: []). The library folders will be copied to SageMaker in the same folder where the entrypoint is copied. If ‘git_config’ is provided, ‘dependencies’ should be a list of relative locations to directories with any additional libraries needed in the Git repo. If the source_dir points to S3, code will be uploaded and the S3 location will be used instead.
Example
The following call
Model(entry_point='inference.py', ..., dependencies=['my/libs/common', 'virtual-env'])
results in the following inside the container:
$ ls
opt/ml/code
|------ inference.py
|------ common
|------ virtual-env
This is not supported with “local code” in Local Mode.
If you want to go with source_dir and entry_point, I would suggest building a helper function, e.g. install_dependencies, that is executed before all imports. A sketch of the simpler requirements.txt route follows below.
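A sketch of the requirements.txt route (the bucket path and archive layout are illustrative; the layout follows the Inference Toolkit convention of a code/ folder next to the model files):
# expected model.tar.gz layout (illustrative):
#   config.json, pytorch_model.bin, tokenizer files, ...
#   code/
#     inference.py
#     requirements.txt   <- extra pip dependencies, one per line
from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    model_data="s3://<your-bucket>/path/to/model.tar.gz",  # archive repacked with the code/ folder
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)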
| 1 |
huggingface
|
Amazon SageMaker
|
TensorFlow Getting Started Demo model outputs the same label for all examples
|
https://discuss.huggingface.co/t/tensorflow-getting-started-demo-model-outputs-the-same-label-for-all-examples/10354
|
Hi,
I’ve been following the SageMaker demo available here 1 for training a binary sentiment classifier on the IMDB dataset.
I ran the notebook as provided, to get a feel for how to use Hugging Face with SageMaker, and the notebook successfully runs and the model trains. However, upon testing the resulting model I found that the output was the same, regardless of the input sentence. For example:
>>>sentiment_input= {"inputs":"This is the best movie I have ever watched. It is amazing!"}
>>>print(predictor.predict(sentiment_input))
[{'label': 'LABEL_0', 'score': 0.9999932050704956}]
>>>sentiment_input= {"inputs":"This is the worst movie I have ever watched. It is terrible!"}
>>>print(predictor.predict(sentiment_input))
[{'label': 'LABEL_0', 'score': 0.9999932050704956}]
I’ve looked through the code, but I can’t seem to find what might cause this behaviour. Unless the dataset isn’t being loaded properly (eg. if the classifier is only training with one label), or something is going wrong with the tokenization. However, I haven’t made any changes to the notebook and training script provided, so this seems unlikely.
Any help would be much appreciated! Thanks in advance.
|
I believe I’ve found the problem. The dataset is never shuffled in the train.py script provided with the demo. As a result the model was learning to assign LABEL_0 to any input. After adding a shuffle to the test set (not strictly necessary) and train set here (after line 45):
# Load dataset
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])
train_dataset = train_dataset.shuffle()
test_dataset = test_dataset.shuffle()
# Preprocess train dataset
the model trains successfully and returns the following labels as expected:
>>>sentiment_input= {"inputs":"This is the best movie I have ever watched. It is amazing!"}
>>>print(predictor.predict(sentiment_input))
[{'label': 'LABEL_1', 'score': 0.995592474937439}]
>>>sentiment_input= {"inputs":"This is the worst movie I have ever watched. It is terrible!"}
>>>print(predictor.predict(sentiment_input))
[{'label': 'LABEL_0', 'score': 0.9919235110282898}]
| 1 |
huggingface
|
Amazon SageMaker
|
Batch_transform Pipeline?
|
https://discuss.huggingface.co/t/batch-transform-pipeline/10275
|
Hello,
I fine-tuned 2 BERT models which (1.) classify different customer reviews and tag them with different labels and (2.) detect the sentiment in each text. I process these texts in a weekly batch on AWS SageMaker. Right now I am writing two different batch transform jobs which (1.) predict the class & (2.) predict the sentiment. My question now is whether it is possible to integrate both models into one batch transform job. My fine-tuned models are in my S3 bucket in tar.gz format and my code currently looks like this:
# package the inference scrip and pre-trained classifier model into .tar.gz format
!cd model_token && tar zcvf model.tar.gz *
!mv model_token/model.tar.gz ./model.tar.gz
# upload pre-trained classifier model to s3 bucket
model_url = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/model")
print(f"Uploading Model to {model_url}")
model_uri = S3Uploader.upload('model.tar.gz',model_url)
print(f"Uploaded model to {model_uri}")
#same procedure for sentiment model
!cd sentiment_token && tar zcvf sentiment.tar.gz *
!mv sentiment_token/sentiment.tar.gz ./sentiment.tar.gz
model_url = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/model")
print(f"Uploading Model to {model_url}")
sentiment_uri = S3Uploader.upload('sentiment.tar.gz',model_url)
print(f"Uploaded model to {sentiment_uri}")
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class for classifier
huggingface_model = HuggingFaceModel(
model_data=model_uri, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version='py36', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.g4dn.xlarge',
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri,
content_type='application/json',
split_type='Line')
#same for sentiment
huggingface_model = HuggingFaceModel(
model_data=sentiment_uri, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version='py36', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.g4dn.xlarge',
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri,
content_type='application/json',
split_type='Line')
Thanks in advance!
|
As the error says, you need to adjust AssembleWith to be the same.
=>
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.g4dn.xlarge',
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
accept="application/json",
assemble_with="Line",
strategy='SingleRecord')
You can find the documentation here Transformer — sagemaker 2.59.4 documentation
assemble_with (str) – How the output is assembled (default: None). Valid values: ‘Line’ or ‘None’.
| 1 |
huggingface
|
Amazon SageMaker
|
Rouge eval metrics in summarization not computing
|
https://discuss.huggingface.co/t/rouge-eval-metrics-in-summarization-not-computing/10191
|
Hello @philschmid, I was trying to fine-tune a summarization model with a very small data set (100 samples) and none of the ROUGE eval metrics are being computed in the training job. Is there a size limit in terms of samples that could be causing this?
Thank you.
|
Hello @Jorgeutd,
Are you using the examples/ script, and are you passing the hyperparameter 'predict_with_generate': True?
predict_with_generate (bool, optional, defaults to False):
Whether to use generate to calculate generative metrics (ROUGE, BLEU).
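In the estimator this is just one extra hyperparameter (a sketch, assuming you are running one of the seq2seq example scripts):
hyperparameters = {
    # ... your existing arguments ...
    'do_eval': True,
    'predict_with_generate': True,  # enables ROUGE/BLEU computation during evaluation
}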
| 0 |
huggingface
|
Amazon SageMaker
|
Endpoint Deployment
|
https://discuss.huggingface.co/t/endpoint-deployment/10077
|
Hello everyone,
I deployed my BERT classification model for batch jobs on Sagemaker with
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=model_uri, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version='py36', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.g4dn.xlarge',
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri,
content_type='application/json',
split_type='Line')
output_file = f"{dataset_jsonl_file}.out"
output_path = s3_path_join(output_s3_path,output_file)
# download file
S3Downloader.download(output_path,'.')
batch_transform_result = []
with open(output_file) as f:
for line in f:
# converts jsonline array to normal array
line = "[" + line.replace("[","").replace("]",",") + "]"
batch_transform_result = literal_eval(line)
Anyway, when I want to predict a new batch it feels like I always have to start my notebook in SageMaker Studio, etc. Is there a way to create an API which I can feed with my data to get predictions from the outside? Is there any cool tutorial or anything? Thanks in advance
|
Hey @marlon89,
Sadly, I don't have a tutorial or a sample for it yet. I am looking into creating something like that in the next few weeks, with CDK support.
I created an architecture diagram of what this API could look like.
[architecture diagram: SageMaker-batch-cycle.drawio]
You can basically leverage AWS Lambda with an S3 trigger to create your batch transform jobs after a file is uploaded to a prefix in an S3 bucket.
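A minimal sketch of such a Lambda handler (assumptions: a SageMaker Model already exists for your model.tar.gz, and names like MODEL_NAME and the output bucket are placeholders you would replace):
import time
import boto3

sm = boto3.client("sagemaker")

def lambda_handler(event, context):
    # S3 put event -> read bucket/key of the uploaded file
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    sm.create_transform_job(
        TransformJobName=f"batch-{int(time.time())}",
        ModelName="MODEL_NAME",  # placeholder: name of an existing SageMaker Model
        TransformInput={
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": f"s3://{bucket}/{key}"}},
            "ContentType": "application/json",
            "SplitType": "Line",
        },
        TransformOutput={
            "S3OutputPath": "s3://YOUR_BUCKET/batch_transform/output",  # placeholder
            "Accept": "application/json",
            "AssembleWith": "Line",
        },
        TransformResources={"InstanceType": "ml.g4dn.xlarge", "InstanceCount": 1},
    )
The S3 trigger itself is configured on the bucket/prefix where your input files are uploaded.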
| 0 |
huggingface
|
Amazon SageMaker
|
Sentence similarity models on Sagemaker
|
https://discuss.huggingface.co/t/sentence-similarity-models-on-sagemaker/10045
|
Hello,
I am wondering if the "sentence-similarity" pipelines can be used on SageMaker from the Hub with the same ease as popular pipelines like "question-answering"?
I was trying to start an endpoint with sentence-similarity, but it gave me this error:
2021-09-16 13:13:49,252 [INFO ] W-sentence-transformers__pa-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: "Unknown task sentence-similarity, available tasks are ['feature-extraction', 'text-classification', 'token-classification', 'question-answering', 'table-question-answering', 'fill-mask', 'summarization', 'translation', 'text2text-generation', 'text-generation', 'zero-shot-classification', 'conversational', 'image-classification', 'translation_XX_to_YY']" : 400
This is how it was deployed inside Sagemaker Studio:
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
'HF_TASK':'sentence-similarity' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version="py36", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.t2.medium"
)
|
Hello @pavel-nesterov,
Currently, the SageMaker Inference Toolkit only supports the NLP-based pipelines from the transformers library for zero-code deployments.
But you can easily create your own inference.py which contains the code you would need for that.
Here is Documentation on that: Deploy models to Amazon SageMaker 14
Here the code for the sentence-simliarity pipeline: huggingface_hub/sentence_similarity.py at main · huggingface/huggingface_hub · GitHub 20
Thank you for the request, I will add it as a potential feature to the roadmap.
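A rough sketch of what such an inference.py could look like (this is an illustration, not an official handler: it assumes the model.tar.gz contains a sentence-transformers model, that a requirements.txt in the code/ folder installs sentence-transformers, and that the input follows the sentence-similarity widget format with source_sentence and sentences):
# code/inference.py
from sentence_transformers import SentenceTransformer, util

def model_fn(model_dir):
    # load the sentence-transformers model from the unpacked model.tar.gz
    return SentenceTransformer(model_dir)

def predict_fn(data, model):
    inputs = data.pop("inputs", data)
    source = inputs["source_sentence"]
    candidates = inputs["sentences"]
    source_emb = model.encode(source, convert_to_tensor=True)
    candidate_emb = model.encode(candidates, convert_to_tensor=True)
    scores = util.pytorch_cos_sim(source_emb, candidate_emb)[0]
    return {"similarities": scores.tolist()}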
| 0 |
huggingface
|
Amazon SageMaker
|
[SOLVED] Serverless+Sagemaker+Lambda
|
https://discuss.huggingface.co/t/solved-serverless-sagemaker-lambda/10043
|
Hi everyone (especially those who use a combination of HuggingFace + Sagemaker + ServerlessFramework + Lambda).
Since yesterday I have tried 20+ options to properly import the sagemaker module to use the HuggingFacePredictor within the Lambda function, but with no success. This is why I am asking for help, because I couldn't find the correct way to make it work (even on the serverless.com forums).
Here is what I am trying to do within Lambda:
try:
print('attempting to unzip_requirements...')
import unzip_requirements
print('succesfully imported unzip_requirements to prepare the zipped requirements')
except ImportError:
print('failed to import unzip_requirements - if you are running locally this is not a problem')
pass
...
import sagemaker
from sagemaker.huggingface import HuggingFacePredictor
predictor = HuggingFacePredictor(ENDPOINT_NAME)
Here is the log:
attempting to unzip_requirements...
succesfully imported unzip_requirements to prepare the zipped requirements
...
[ERROR] PackageNotFoundError: No package metadata was found for sagemaker
Traceback (most recent call last):
File "/var/task/serverless_sdk/__init__.py", line 144, in wrapped_handler
return user_handler(event, context)
File "/var/task/handler.py", line 184, in ProductMapper
import sagemaker
File "/tmp/sls-py-req/sagemaker/__init__.py", line 18, in <module>
from sagemaker import estimator, parameter, tuner # noqa: F401
File "/tmp/sls-py-req/sagemaker/estimator.py", line 28, in <module>
from sagemaker import git_utils, image_uris
File "/tmp/sls-py-req/sagemaker/image_uris.py", line 22, in <module>
from sagemaker.spark import defaults
File "/tmp/sls-py-req/sagemaker/spark/__init__.py", line 16, in <module>
from sagemaker.spark.processing import PySparkProcessor, SparkJarProcessor # noqa: F401
File "/tmp/sls-py-req/sagemaker/spark/processing.py", line 35, in <module>
from sagemaker.local.image import _ecr_login_if_needed, _pull_image
File "/tmp/sls-py-req/sagemaker/local/__init__.py", line 16, in <module>
from .local_session import ( # noqa: F401
File "/tmp/sls-py-req/sagemaker/local/local_session.py", line 23, in <module>
from sagemaker.local.image import _SageMakerContainer
File "/tmp/sls-py-req/sagemaker/local/image.py", line 39, in <module>
import sagemaker.local.data
File "/tmp/sls-py-req/sagemaker/local/data.py", line 27, in <module>
import sagemaker.local.utils
File "/tmp/sls-py-req/sagemaker/local/utils.py", line 22, in <module>
from sagemaker import s3
File "/tmp/sls-py-req/sagemaker/s3.py", line 20, in <module>
from sagemaker.session import Session
File "/tmp/sls-py-req/sagemaker/session.py", line 37, in <module>
from sagemaker.user_agent import prepend_user_agent
File "/tmp/sls-py-req/sagemaker/user_agent.py", line 21, in <module>
SDK_VERSION = importlib_metadata.version("sagemaker")
File "/tmp/sls-py-req/importlib_metadata/__init__.py", line 994, in version
return distribution(distribution_name).version
File "/tmp/sls-py-req/importlib_metadata/__init__.py", line 967, in distribution
return Distribution.from_name(distribution_name)
File "/tmp/sls-py-req/importlib_metadata/__init__.py", line 561, in from_name
raise PackageNotFoundError(name)
Here is serverless.yml:
org: **********
app: rec*************-app
service: re********service
frameworkVersion: '2'
custom:
bucket: re********bucket
pythonRequirements:
dockerizePip: false
invalidateCaches: true
zip: true
slim: true
strip: false
noDeploy:
- docutils
- jmespath
- pip
- python-dateutil
- setuptools
- six
- tensorboard
useStaticCache: false
useDownloadCache: false
package:
include:
- requirements.txt
exclude:
- aws-deployment*
- .dockerignore
- Dockerfile
- README.md
- .gitignore
- venv/**
- test/**
provider:
name: aws
runtime: python3.8
lambdaHashingVersion: 20201221
stage: develop-lenovo
region: eu-central-1
memorySize: 256
logRetentionInDays: 30
iam:
role:
statements:
- Effect: Allow
Action: s3:*
Resource: "*"
- Effect: Allow
Action: sagemaker:InvokeEndpoint
Resource: "*"
s3:
telegramPuctures:
name: ***************
functions:
PictureSaver:
handler: handler.PictureSaver
layers:
arn:aws:lambda:eu-central-1:770693421928:layer:Klayers-python38-requests:20
TableExtractorWithVeryfi:
handler: handler.TableExtractor
timeout: 20 # optional, in seconds, default is 6
layers:
arn:aws:lambda:eu-central-1:770693421928:layer:Klayers-python38-requests:20
ProductMapper:
handler: handler.ProductMapper
timeout: 100 # optional, in seconds, default is 6
layers:
arn:aws:lambda:eu-central-1:770693421928:layer:Klayers-python38-requests:20
stepFunctions:
stateMachines:
re*********step-function-IaC:
events:
- http:
path: /
method: POST
definition:
StartAt: PictureSaverStep
States:
PictureSaverStep:
Type: Task
Resource:
Fn::GetAtt: [PictureSaver, Arn]
Next: TableExtractorStep
TableExtractorStep:
Type: Task
Resource:
Fn::GetAtt: [TableExtractorWithVeryfi, Arn]
Next: ProductMapperStep
ProductMapperStep:
Type: Task
Resource:
Fn::GetAtt: [ProductMapper, Arn]
End: true
tracingConfig:
enabled: true
plugins:
- serverless-step-functions
- serverless-python-requirements
And this is what is inside requirements.txt
requests == 2.22.0
sagemaker == 2.59.1.post0
Can someone maybe give me a hint what could help? It looks like I'm failing to import the requirements completely…
|
The issue was in this line in serverless.yml:
slim: true
Changing it to “false” solves the issue, but slightly increases the package size (20% in my case).
slim: false
| 0 |
huggingface
|
Amazon SageMaker
|
Finetuning text summary model support change pretrained model?
|
https://discuss.huggingface.co/t/finetuning-text-summary-model-support-change-pretrained-model/9756
|
hi,
I am following tutorial 08 on distributed training, and I am dealing with text summarization in a different language (here, Chinese). I am wondering: is there any guidance on how to change the pretrained model for multi-language use?
best,
jackie
|
You can adjust the model, which should be used for training in the hyperparameters. Replace it with the model you want to use.
# hyperparameters, which are passed into the training job
hyperparameters={'per_device_train_batch_size': 4,
'per_device_eval_batch_size': 4,
                'model_name_or_path': 'facebook/bart-large-cnn', # model used for training
}
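For a Chinese/multilingual setup you could, for example, swap in a multilingual seq2seq checkpoint (google/mt5-small is shown here purely as an illustration, pick whichever checkpoint fits your data and language):
hyperparameters = {
    'per_device_train_batch_size': 4,
    'per_device_eval_batch_size': 4,
    'model_name_or_path': 'google/mt5-small',  # example multilingual checkpoint
}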
| 0 |
huggingface
|
Amazon SageMaker
|
Deployed HF model from the hub and got an error: ‘numpy.ndarray’ object has no attribute ‘pop’
|
https://discuss.huggingface.co/t/deployed-hf-model-from-the-hub-and-got-an-error-numpy-ndarray-object-has-no-attribute-pop/10007
|
Hi, dear community. I am a new member here and I got stuck with inference for an HF model.
Here is what I’m trying to do:
there is a pre-trained HF model deployed as Sagemaker endpoint (code below, #1)
I am trying to access this endpoint from outside Sagemaker - first from Lambda, then from Colab
both cases gave me the same error:
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - AttributeError: 'numpy.ndarray' object has no attribute 'pop'
Here is the code #1 of endpoint deployment (I deploy it from SageMaker studio):
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
#'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models
'HF_MODEL_ID':'distilbert-base-uncased', # model_id from hf.co/models
'HF_TASK':'question-answering' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version="py36", # python version of the DLC
)
Code#2 for inference from Colab
!pip install boto3
import os
import io
import boto3
from botocore.config import Config
import json
import csv
# grab environment variables
my_config = Config(
region_name='eu-central-1'
)
ENDPOINT_NAME = 'huggingface-pytorch-inference-2021-09-15-06-35-36-297'
runtime= boto3.client('runtime.sagemaker',
region_name='eu-central-1',
aws_access_key_id = 'AK************************RW',
aws_secret_access_key= 'dZ*************************************o9I')
payload = {
"inputs": {
"question": "What is used for inference?",
"context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
}
}
print(payload)
response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='text/csv',
Body=json.dumps(payload))
print(response)
result = json.loads(response['Body'].read().decode())
print(result)
Here is a full log of the invocation from Colab.
This is an experimental beta features, which allows downloading model from the Hugging Face Hub on start up. It loads the model defined in the env var `HF_MODEL_ID`
(model and tokenizer download progress lines omitted)
WARNING - Overwriting /.sagemaker/mms/models/distilbert-base-uncased ...
2021-09-15 06:40:02,907 [INFO ] main com.amazonaws.ml.mms.ModelServer -
MMS Home: /opt/conda/lib/python3.6/site-packages
Current directory: /
Temp directory: /home/model-server/tmp
Number of GPUs: 0
Number of CPUs: 1
Max heap size: 3201 M
Python executable: /opt/conda/bin/python3.6
Config file: /etc/sagemaker-mms.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8080
Model Store: /.sagemaker/mms/models
Initial Models: ALL
Log dir: /logs
Metrics dir: /logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Preload model: false
Prefer direct buffer: false
2021-09-15 06:40:03,058 [WARN ] W-9000-distilbert-base-uncased com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-9000-distilbert-base-uncased
2021-09-15 06:40:03,161 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - model_service_worker started with args: --sock-type unix --sock-name /home/model-server/tmp/.mms.sock.9000 --handler sagemaker_huggingface_inference_toolkit.handler_service --model-path /.sagemaker/mms/models/distilbert-base-uncased --model-name distilbert-base-uncased --preload-model false --tmp-dir /home/model-server/tmp
2021-09-15 06:40:03,162 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.mms.sock.9000
2021-09-15 06:40:03,162 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - [PID] 38
2021-09-15 06:40:03,162 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - MMS worker started.
2021-09-15 06:40:03,162 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Python runtime: 3.6.13
2021-09-15 06:40:03,163 [INFO ] main com.amazonaws.ml.mms.wlm.ModelManager - Model distilbert-base-uncased loaded.
2021-09-15 06:40:03,171 [INFO ] main com.amazonaws.ml.mms.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2021-09-15 06:40:03,188 [INFO ] W-9000-distilbert-base-uncased com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2021-09-15 06:40:03,278 [INFO ] main com.amazonaws.ml.mms.ModelServer - Inference API bind to: http://0.0.0.0:8080
Model server started.
2021-09-15 06:40:03,281 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2021-09-15 06:40:03,283 [WARN ] pool-2-thread-1 com.amazonaws.ml.mms.metrics.MetricCollector - worker pid is not available yet.
2021-09-15 06:40:04,972 [WARN ] W-9000-distilbert-base-uncased-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Some weights of the model checkpoint at /.sagemaker/mms/models/distilbert-base-uncased were not used when initializing DistilBertForQuestionAnswering: ['vocab_projector.bias', 'vocab_layer_norm.weight', 'vocab_transform.weight', 'vocab_transform.bias', 'vocab_projector.weight', 'vocab_layer_norm.bias']
2021-09-15 06:40:04,972 [WARN ] W-9000-distilbert-base-uncased-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - This IS expected if you are initializing DistilBertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
2021-09-15 06:40:04,972 [WARN ] W-9000-distilbert-base-uncased-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - This IS NOT expected if you are initializing DistilBertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
2021-09-15 06:40:04,973 [WARN ] W-9000-distilbert-base-uncased-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Some weights of DistilBertForQuestionAnswering were not initialized from the model checkpoint at /.sagemaker/mms/models/distilbert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']
2021-09-15 06:40:04,973 [WARN ] W-9000-distilbert-base-uncased-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2021-09-15 06:40:05,199 [INFO ] W-9000-distilbert-base-uncased-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model distilbert-base-uncased loaded io_fd=da001afffe0afdd4-0000001a-00000000-e2ff1d120c2f0e8e-0bb0052c
2021-09-15 06:40:05,206 [INFO ] W-9000-distilbert-base-uncased com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1871
2021-09-15 06:40:05,209 [WARN ] W-9000-distilbert-base-uncased com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-distilbert-base-uncased-1
2021-09-15 06:40:11,097 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 41
2021-09-15 06:40:15,795 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:20,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:25,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:30,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:35,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:40,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:40:45,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 1
2021-09-15 06:40:50,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 1
2021-09-15 06:40:55,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 1
2021-09-15 06:41:00,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:41:05,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 0
2021-09-15 06:41:10,794 [INFO ] pool-1-thread-3 ACCESS_LOG - /10.32.0.2:37696 "GET /ping HTTP/1.1" 200 1
2021-09-15 06:41:14,478 [WARN ] W-distilbert-base-uncased-1-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - /opt/conda/lib/python3.6/site-packages/sagemaker_inference/decoder.py:58: VisibleDeprecationWarning: Reading unicode strings without specifying the encoding argument is deprecated. Set the encoding, use None for the system default.
2021-09-15 06:41:14,478 [WARN ] W-distilbert-base-uncased-1-stderr com.amazonaws.ml.mms.wlm.WorkerLifeCycle - return np.genfromtxt(stream, dtype=dtype, delimiter=",")
2021-09-15 06:41:14,479 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Prediction error
2021-09-15 06:41:14,480 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-09-15 06:41:14,480 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 222, in handle
2021-09-15 06:41:14,480 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - response = self.transform_fn(self.model, input_data, content_type, accept)
2021-09-15 06:41:14,480 [INFO ] W-9000-distilbert-base-uncased com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 4
2021-09-15 06:41:14,480 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 181, in transform_fn
2021-09-15 06:41:14,480 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - predictions = self.predict(processed_data, model)
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 142, in predict
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - inputs = data.pop("inputs", data)
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - AttributeError: 'numpy.ndarray' object has no attribute 'pop'
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - During handling of the above exception, another exception occurred:
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.6/site-packages/mms/service.py", line 108, in predict
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context)
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 231, in handle
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise PredictionException(str(e), 400)
2021-09-15 06:41:14,482 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: 'numpy.ndarray' object has no attribute 'pop' : 400
|
Hello @pavel-nesterov,
Welcome to the Community .
First of all, I saw in Code#1 that you deployed 'distilbert-base-uncased' with question-answering, which is definitely not recommended since it is not fine-tuned for question answering.
The Error:
2021-09-15 06:41:14,481 [INFO ] W-distilbert-base-uncased-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - AttributeError: 'numpy.ndarray' object has no attribute 'pop'
raises because you are forcing ContentType='text/csv' for a JSON input, that’s not going to work.
Change ContentType in code#2 to application/json and it should work. Additionally, instead of using boto3 with runtime.sagemaker, you could use the SageMaker SDK, which provides a HuggingFacePredictor to invoke your endpoints: Hugging Face — sagemaker 2.59.1.post0 documentation 1
from sagemaker.huggingface import HuggingFacePredictor
predictor = HuggingFacePredictor(ENDPOINT_NAME)
response = predictor.predict(payload)
print(response)
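If you prefer to keep boto3, a minimal corrected invocation would look roughly like this (same client and payload as in your code#2, just with the JSON content type):
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read().decode())
print(result)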
| 0 |
huggingface
|
Amazon SageMaker
|
Using sagemaker in jupyter notebook
|
https://discuss.huggingface.co/t/using-sagemaker-in-jupyter-notebook/9970
|
Hi everyone,
I am trying to train a huggingface model in a Jupyter notebook on my local machine.
Can I get some advice on how to pass my IAM role / access keys?
I have installed sagemaker in my Jupyter notebook; however, how do I connect to my AWS account? I found these lines of code:
# gets role for executing training job
iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
hyperparameters = {
    'model_name_or_path': 'finiteautomata/beto-sentiment-analysis',
    'output_dir': '/opt/ml/model'
    # add your remaining hyperparameters
    # more info here transformers/examples/pytorch/text-classification at v4.6.1 · huggingface/transformers · GitHub 3
}
I have set up a new user in my AWS account and have my access keys. I just don't understand how to pass them on to my local machine.
Every time I only get errors like:
ValueError: Must setup local AWS configuration with a region supported by SageMaker.
NoCredentialsError: Unable to locate credentials
please advise!
thank you in advance.
meng
|
Hey @tumon,
You need to create an IAM Role for sagemaker with permission to access all required resources. You can find instructions on how to do this here: SageMaker Roles - Amazon SageMaker 10
To create a new role
Open the IAM console at https://console.aws.amazon.com/iam/ 14.
Select Roles and then select Create role .
Select SageMaker .
Select Next: Permissions .
The IAM managed policy, AmazonSageMakerFullAccess is automatically attached to this role. To see the permissions included in this policy, select the sideways arrow next to the policy name. Select Next: Tags .
(Optional) Add tags and select Next: Review .
Give the role a name in the text field under Role name and select Create role .
On the Roles section of the IAM console, select the role you just created. If needed, use the text box to search for the role using the role name you entered in step 7.
On the role summary page, make note of the ARN.
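Once the role exists, a rough sketch of how to use it from a local notebook (the region and role ARN are placeholders; outside of SageMaker you cannot call get_execution_role(), so you pass the ARN directly and make sure your access keys and region are configured locally, e.g. via aws configure or environment variables):
import boto3
import sagemaker
from sagemaker.huggingface import HuggingFace

boto3.setup_default_session(region_name="eu-central-1")  # pick a SageMaker-supported region
sess = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/MySageMakerRole"  # placeholder: ARN of the role created above

huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={"epochs": 1},  # your own hyperparameters
)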
| 0 |
huggingface
|
Amazon SageMaker
|
InternalServer Exception when deploying fine tuned model on Sagemaker
|
https://discuss.huggingface.co/t/internalserver-exception-when-deploying-fine-tuned-model-on-sagemaker/9978
|
Hello everyone,
I have a fine-tuned German classification model which I want to deploy on SageMaker. To test my model I want to predict a couple of texts in JSON format. Here is my code:
import csv
import json
import sagemaker
from sagemaker.s3 import S3Uploader,s3_path_join
# get the s3 bucket
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
sagemaker_session_bucket = sess.default_bucket()
# uploads a given file to S3.
input_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/input")
output_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/output")
#s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path)
# datset files
dataset_csv_file="test"
dataset_jsonl_file="test.jsonl"
with open(dataset_csv_file, "r+") as infile, open(dataset_jsonl_file, "w+") as outfile:
reader = csv.DictReader(infile)
for row in reader:
# remove @
json.dump(row, outfile)
outfile.write('\n')
s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path)
#package my fine tuned model
!cd model_token && tar zcvf model.tar.gz *
!mv model_token/model.tar.gz ./model.tar.gz
model_url = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/model")
print(f"Uploading Model to {model_url}")
model_uri = S3Uploader.upload('model.tar.gz',model_url)
print(f"Uploaded model to {model_uri}")
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=model_uri, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version='py36', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.g4dn.xlarge',
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri,
content_type='application/json',
split_type='Line')
When I execute that batch tranform job I get the following error:
“code”: 400,
2021-09-14T12:01:09.338:[sagemaker logs]: sagemaker-eu-central-1-422630546972/batch_transform/input/test.jsonl: “type”: “InternalServerException”,
2021-09-14T12:01:09.338:[sagemaker logs]: sagemaker-eu-central-1-422630546972/batch_transform/input/test.jsonl: “message”: “text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).”
What am I missing?
|
Hey @marlon89,
could you share the CloudWatch logs? They should provide more information as to why the batch transform job is failing.
Additionally, can you share an example of your test.jsonl?
| 0 |
huggingface
|
Amazon SageMaker
|
Transformer Version train vs. Sagemaker
|
https://discuss.huggingface.co/t/transformer-version-train-vs-sagemaker/9979
|
Hello everyone,
I fine-tuned a pretrained BERT model locally using Transformers version 4.9.1. However, SageMaker only allows versions up to 4.6. Can this cause problems when I want to deploy my model on SageMaker?
|
Hello @marlon89,
No, your transformers 4.9.1 trained BERT should be compatible with SageMaker and 4.6.1.
| 0 |
huggingface
|
Amazon SageMaker
|
Directly load models from a remote storage like S3
|
https://discuss.huggingface.co/t/directly-load-models-from-a-remote-storage-like-s3/9862
|
Hi,
Instead of downloading the transformers model to a local file, could we directly read and write models from S3?
I have tested that we can read csv and txt files directly from S3, but not models. Is there any solution?
|
Hey @leifan,
Is your question related to inference or training?
If it is related to training, you could download a model from S3 when starting the training job, the same way as you would do with data.
huggingface_estimator.fit({
'train': 's3://<my-bucket>/<prefix>/train', # containing train files
'test': 's3://<my-bucket>/<prefix>/test', # containing test files
'model': 's3://<another-bucket>/<prefix>/model', # containing model files (config.json, pytroch_model.bin, etc.)
})
SageMaker will then download when starting the training job all of these files into your container.
The path of the files can either be accessed from the env var SM_CHANNEL_XXXX, e.g. SM_CHANNEL_TRAIN, SM_CHANNEL_MODEL, or directly from the path, e.g. /opt/ml/input/data/train
And then you can load your model in your training script with
AutoModelForXXX.from_pretrained(os.environ.get('SM_CHANNEL_MODEL',None))
| 0 |
huggingface
|
Amazon SageMaker
|
Batching in SageMaker Inference Toolkit
|
https://discuss.huggingface.co/t/batching-in-sagemaker-inference-toolkit/9709
|
Thanks for putting together this great toolkit. I had a question of how inference batching is handled. I noticed that the examples here 1 all appear to have a single input request. Once deployed, if multiple requests are made to the endpoint of a deployed model at once or in quick succession, are they automatically batched under the hood, or is there something you need to do before hitting the endpoint to feed in a batch of inputs manually?
|
pinging @philschmid and @jeffboudier in case they hadn’t seen this!
| 0 |
huggingface
|
Amazon SageMaker
|
How are the inputs tokenized when model deployment?
|
https://discuss.huggingface.co/t/how-are-the-inputs-tokenized-when-model-deployment/9692
|
Hi.
I’m working through the series of sagemaker-huggingface notebooks and it is not clear to me how the prediction data is preprocessed before calling the model.
The notebook 01_getting_started_pytorch.ipynb shows these 3 steps:
preprocess datasets
save datasets on S3
train the model using sagemaker Huggingface API
once the model is trained, deploy the model and make predictions from input data in a dictionary format like: {"inputs": "blablabla"}
My question is: how is this input data tokenized before it goes into the model?
|
Hey @Oigres,
Which tokenization step do you mean for training or inference?
For training, the tokenization is done in the preprocessing in the notebook.
For inference, the tokenization is done in the sagemaker-huggingface-inference-toolkit 7 and the toolkit leverages the transformers pipeline.
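Conceptually, what the inference toolkit does for a text-classification model is roughly equivalent to this (a simplified sketch, not the actual toolkit code):
from transformers import pipeline

model_dir = "/opt/ml/model"  # where SageMaker unpacks the model.tar.gz
classifier = pipeline("text-classification", model=model_dir, tokenizer=model_dir)
prediction = classifier("blablabla")  # tokenization happens inside the pipeline
So the {"inputs": "blablabla"} payload is unpacked and handed to the pipeline, which tokenizes it with the tokenizer saved alongside your model.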
| 0 |
huggingface
|
Amazon SageMaker
|
SageMaker Inference for Model Tuned Elsewhere
|
https://discuss.huggingface.co/t/sagemaker-inference-for-model-tuned-elsewhere/9606
|
Reading through the documentation for HuggingFace & SageMaker as we are evaluating it and found the following:
Q: Which models can I deploy for Inference?
A: You can deploy
any Transformers model trained in Amazon SageMaker, or other compatible platforms and that can accomodate the SageMaker Hosting design
any of the 10 000+ publicly available Transformer models from the Hugging Face Model Hub, or
your private models hosted in your Hugging Face premium account!
Is it possible to fine-tune a model elsewhere, outside of SageMaker Training, (for instance, just through a regular PyTorch training loop on a pretrained transformers model), and then deploy it for Inference without hosting it on an account?
Would appreciate any pointers y’all can give on this.
|
Hello @charlesatftl,
Yes, you can fine-tune transformers anywhere you want and use it in SageMaker for Inference.
There are currently two options to use your model then:
Push your model to Models - Hugging Face 2 and deploy it directly from there, see: Deploy models to Amazon SageMaker 1
Create a model.tar.gz upload it to s3 and deploy it from there, see: Deploy models to Amazon SageMaker 5
In Addition to 2. here is the documentation on how to create a model.tar.gz Deploy models to Amazon SageMaker 16
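For option 2, a condensed sketch (the bucket path is a placeholder and the versions are just the currently supported DLC versions; see the linked docs for the exact model.tar.gz layout):
from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    model_data="s3://YOUR_BUCKET/path/model.tar.gz",  # your externally fine-tuned model
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")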
| 0 |
huggingface
|
Amazon SageMaker
|
Problems in deployment when I configure my own labels
|
https://discuss.huggingface.co/t/problems-in-deployment-when-i-configure-my-own-labels/9529
|
Hi.
I am training a binary classification model based on this model checkpoint: dccuchile/bert-base-spanish-wwm-cased
At deployment, it uses these default labels: LABEL_0, LABEL_1.
My goal is to deploy it with my own labels. Browsing the forum I came across this thread, and it seems to resolve a similar problem.
Change label names on inference API Beginners
Hi there,
I recently uploaded my first model to the model hub and I’m wondering how I can change the label names that are returned by the inference API. Right now, the API returns “LABEL_0”, “LABEL_1”, etc. with the predictions and I would like it to be something like “Economy”, “Welfare”, etc.
I looked at the files of other hosted models and I saw that others changed the id2label and label2id in the config.json file, so I also did that here, but the inference API still returns “LABEL_0”. Do I…
After read this thread, my code looks like:
label2id = {
"0": "goodUser",
"1": "badUserFraud"
}
id2label = {
"goodUser": 0,
"badUserFraud": 1
}
# download model from model hub
config = AutoConfig.from_pretrained(args.model_name, label2id=label2id, id2label=id2label)
model = AutoModelForSequenceClassification.from_pretrained(args.model_name, config=config)
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
The training step goes well, but at the deployment step I get this error:
[screenshot of the deployment error]
Thanks in advance
|
I would like to know this as well
| 0 |
huggingface
|
Amazon SageMaker
|
Highest Transformer vers. that works with Sagemaker?
|
https://discuss.huggingface.co/t/highest-transformer-vers-that-works-with-sagemaker/9669
|
Hi,
My research team and I have been wanting to deploy a transformer model on AWS.
Just wanted to know: what is the highest version of the transformers API that can run with the estimator and the AWS tuner?
Last time I checked, transformers version 3.5.1 works, but can I use 4.10?
Have someone tried it?
Any input is much appreciated.
|
Hey @moma1820,
The current latest version of transformers for SageMaker is 4.6.1.
You can find a list of the currently supported versions in our documentation here: Hugging Face on Amazon SageMaker 1
We are keeping this up to date.
Speaking of transformers 4.10.0 we are in the middle of creating a new DLC and support for the Estimator. You can find the PR here [huggingface_tensorflow, huggingface_pytorch] update for Transformers to 4.10 by philschmid · Pull Request #1286 3
But if you want to use a specific transformers version for inference or training, you can always create an additional requirements.txt including your preferred transformers version and provide it for either training or inference.
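For example, a requirements.txt placed next to your training script (in the source_dir) could simply pin the version, shown here with 4.10.0 as an illustration:
transformers==4.10.0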
| 0 |
huggingface
|
Amazon SageMaker
|
InternalServerError after model training finishes, but fails to upload?
|
https://discuss.huggingface.co/t/internalservererror-after-model-training-finishes-but-fails-to-upload/9535
|
I’m using HuggingFace/SageMaker to fine-tune a distilBert model. Lately, I’ve been running into an issue where model training/evaluation finishes 100% without any errors, gets stuck for a few hours during the model uploading process, and then the job fails with the following error message:
[screenshots of the error message]
I didn’t see any indication of why the job would fail in the logs I have access to (e.g. training/eval fully finishes, no issues with CUDA/memory, oddities in the data, etc) and AWS support doesn’t seem to have a clue either.
[screenshots of the training job logs]
This issue doesn’t happen when the model trains on a subset of the available training data (e.g. using 30-50% of the available training data to train) and only seems to occur when training with all the available training data - same model, same config, same instances, etc. So at first, I thought it had to do with S3 checkpoints and distributed training since this only happens when training on our larger dataset.
I’m using 2x of the ml.p4d.24xlarge instances with distributed training for this job. I did see that AWS had a document on model parallel troubleshooting 1 and have tried their suggestions of removing the debug hook config + disabling checkpointing but no luck either.
Here’s my estimator config, just in case:
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='./scripts',
instance_type=instance_type,
instance_count=instance_count,
base_job_name='test-run-no-debug-no-checkpoints',
# checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',
# use_spot_instances=True,
# max_wait=(2*24*60*60),
max_run=(2*24*60*60),
volume_size=volume_size,
role=role,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters,
distribution = distribution,
debugger_hook_config=False)
I’m not sure what’s causing this issue and was wondering if anyone has any insight about this?
|
Hello @nreamaroon,
thank you for opening the thread! That is indeed strange.
Did you use your own train.py or an already existing example? If you wrote your own, can you share your training script?
Especially which saving strategy you use. And could you also share the size of the dataset, the model you use, and the hyperparameters?
The only plausible reason I can see currently is that during training on the large dataset SageMaker creates so many checkpoints that it somehow fails to upload them at the end. But this wouldn't make any sense at all.
An easy way to test this would be to only save the last model in /opt/ml/model.
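A sketch of what that could look like in your training script (adjust to your own setup; the exact argument names depend on your transformers version):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/opt/ml/model",
    save_total_limit=1,  # keep at most one intermediate checkpoint
    # on newer versions you can also set save_strategy="no"
)
# ... train, then explicitly save only the final model:
# trainer.save_model("/opt/ml/model")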
In Addition to this, I have shared your issue with the AWS Team directly!
| 0 |
huggingface
|
Amazon SageMaker
|
How to improve tqdm log information when training?
|
https://discuss.huggingface.co/t/how-to-improve-tqdm-log-information-when-training/9496
|
Hi.
The logs of training executions look very bad. Does someone know how to fix them and get a normal, pretty print?
[screenshot of the garbled training logs]
|
Hey @Oigres,
The log looks this bad because the training job is piping stdout to SageMaker/CloudWatch, and every new "tick" of the download progress is recorded as a separate log entry.
The logs should look better if you open them in CloudWatch in the AWS Management Console.
Currently there is no way to deactivate the progress bars when using transformers, but we are on it.
| 0 |
huggingface
|
Amazon SageMaker
|
HuggingFace with Sagemaker tutorial doesn’t work
|
https://discuss.huggingface.co/t/huggingface-with-sagemaker-tutorial-doesnt-work/9468
|
Hi.
I am going through the HuggingFace - SageMaker tutorial that you provide on GitHub. I am working with notebook 1: 01_getting_started_pytorch. I am using SageMaker Studio with the image: Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)
When I try to download the imdb dataset I have this error:
[screenshot of the dataset download error]
Do someone know how to fix this issue?
Thanks in advance.
Sergio
|
Hey @Oigres,
which datasets version have you installed?
| 0 |
huggingface
|
Amazon SageMaker
|
Monitoring Metric “Transform Fn”
|
https://discuss.huggingface.co/t/monitoring-metric-transform-fn/9361
|
Hi- It’s me again.
I want to get a better sense of the system latency for the speech model I have deployed.
AWS provides Invocation Endpoint Metrics 1 like ModelLatency, but that is more end-to-end. I am particularly interested in how much time is spent on the forward pass. I think I have two options:
1- The HuggingFace model logs preprocess, predict, and postprocess times here 2, but there is a bug (shown below) where the predict time is not captured correctly.
[screenshot of the predict-time logging bug]
2- Here 1, the metric Transform Fn is added to the context using the API described here 2.
My question is: where does the Transform Fn metric go? I have a hard time finding it in CloudWatch. I will also submit a PR to fix the predict-time logging.
Thanks so much!
Deniz
|
Hey @dzorlu,
thank you for finding the bug! Please let me know if you do not manage to open the PR, then I will take care of it.
Maybe @dan21c can tell us more about the transform_fn metric and where it is stored?
context.metrics.add_time("Transform Fn", round((predict_end - predict_start) * 1000, 2))
| 0 |
huggingface
|
Amazon SageMaker
|
KeyError: ‘_data’ when training on AWS
|
https://discuss.huggingface.co/t/keyerror-data-when-training-on-aws/5711
|
Hi all,
I’ve been working through adapting the getting started notebook 1 to my particular use case. I wrote out my data to s3, and kicked off .fit(), but am getting this error block:
2021-04-23 04:58:40,552 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32"
Traceback (most recent call last):
File "train.py", line 42, in <module>
train_dataset = load_from_disk(args.training_dir)
File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 781, in load_from_disk
return Dataset.load_from_disk(dataset_path, fs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 684, in load_from_disk
state = {k: state[k] for k in dataset.__dict__.keys()} # in case we add new fields
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 684, in <dictcomp>
state = {k: state[k] for k in dataset.__dict__.keys()} # in case we add new fields
KeyError: '_data'
What leaves me scratching my head is that when I reference the arrow_dataset.py 1 file, I can't find lines of this kind, making me think there's some discrepancy between whatever is in AWS's container and that file.
Regardless, does anyone have any advice/intuition on what may be going on here? I don’t know what the ‘_data’ key would refer to in this case, and am looking for help. Thanks!
|
Hey @cccx3,
Thank you for creating this topic! There was an error in the 01_getting_started_pytorch notebook, which installed datasets 1.6.0 (which has some changes), while the DLC currently has 1.5.0 installed.
This was already fixed in a PR 11 yesterday.
To solve this you need to install datasets==1.5.0 in your notebook and pre-process the data again.
!pip install "sagemaker>=2.31.0" "transformers==4.4.2" "datasets[s3]==1.5.0" --upgrade
| 0 |
huggingface
|
Amazon SageMaker
|
Is there a way I can use Transformers 3.1.0 in SageMaker?
|
https://discuss.huggingface.co/t/is-there-a-way-i-can-use-transformers-3-1-0-in-sagemaker/9336
|
I’m trying to use the following package in SageMaker: GitHub - georgian-io/Multimodal-Toolkit: Multimodal model for text and tabular data with HuggingFace t 2
I’m actually not sure if it will be possible unless I make a PR to make it work with Transformers 4.6.1 or I build my own multimodal network.
They used Transformers 3.1.0 and haven’t updated it since. Is there a way around this so that I can use multimodal transformers in SageMaker?
Thanks!
|
Hey @JacquesThibs,
Sadly there is no DLC for version 3.1.0 the first DLC contains transformers 4.4.2.
Could you elaborate a bit more on what you want to do? Maybe we can find a workaround.
Additionally, it is always possible to provide a custom requirements.txt where you could downgrade transformers to 3.1.0.
Do you want to use this for training or inference?
| 0 |
huggingface
|
Amazon SageMaker
|
Model works but MultiDataModel doesn’t
|
https://discuss.huggingface.co/t/model-works-but-multidatamodel-doesnt/8530
|
This is a continuation of my post here 5. I’m trying to deploy BERT for text classification with tensorflow. When I use the model.deploy() method, I can successfully get inferences from BERT. Here’s my problem: I have four different models for classification and I want to run them on the same instance, not multiple instances, to save on cost. So I tried using the MultiDataModel class, but I keep getting the following error:
[screenshot of the MultiDataModel deployment error]
The CloudWatch logs don’t add any additional information, unfortunately. Here’s the structure of counterargument.tar.gz in the s3 bucket, which I cloned from my HuggingFace account and zipped.
counterargument.tar.gz
config.json
special_tokens_map.json
tf_model.h5
tokenizer.json
tokenizer_config.json
vocab.txt
The most puzzling thing about this error is that model.deploy() worked fine, but multi_model.deploy() doesn’t! Thanks in advance.
|
@wsunadawong when using Multi-Model-Endpoints SageMaker stores the models differently. That’s why model.deploy() seems to work and MME does not. We (Amazon & HF) are looking into it. Hopefully, we can come back with a fix as soon as possible!
| 0 |
huggingface
|
Amazon SageMaker
|
Transformers 4.9.0 on SageMaker
|
https://discuss.huggingface.co/t/transformers-4-9-0-on-sagemaker/9145
|
Hello-
I am working on deploying a speech-recognition app using HuggingFace following the instructions here 1. My understanding is that the inference toolkit uses pipelines, but the speech-recognition pipeline is only introduced with the 4.9.0+ releases, whereas the current AWS images are pointing to 4.6.x.
Is there any way around this? What do you suggest I do to make the deployment work? My hunch is that I need to supply a new image_uri.
Thank you!
Deniz
|
Hello @dzorlu,
Great to hear that you are working on a speech task!! Yes, the inference toolkit uses the pipelines from transformers. The code is open source if you want to take a deeper look: GitHub - aws/sagemaker-huggingface-inference-toolkit 2.
I am happy to share that we are working on new releases for the DLC, which include 4.9 and higher. Sadly, I think it will take around 2 more weeks until they are available.
In the meantime, you could use the official DLC and provide as model_data a model.tar.gz which contains a custom module, documented here: Deploy models to Amazon SageMaker 1
With a custom module, you can provide a requirements.txt to upgrade the dependencies and then provide a inference.py with a custom model_fn to load the asr pipeline.
| 0 |
huggingface
|
Amazon SageMaker
|
InternalServerException when running a model loaded on S3
|
https://discuss.huggingface.co/t/internalserverexception-when-running-a-model-loaded-on-s3/9003
|
Hi there,
I am trying to deploy a model loaded on S3, following the steps found mainly on this video: [Deploy a Hugging Face Transformers Model from S3 to Amazon SageMaker](https://www.youtube.com/watch?v=pfBGgSGnYLs).
For that I have downloaded a model into an S3 bucket and use this image URI for the DLC: image_uri = "763104351884.dkr.ecr.eu-west-1.amazonaws.com/huggingface-pytorch-inference:1.7.1-transformers4.6.1-cpu-py36-ubuntu18.04"
When I run the predictor.predict(data) command, I get this error:
[screenshot of the InternalServerException]
The model I use for these tests is this one: dccuchile/bert-base-spanish-wwm-uncased, and I could not find a way to let the model know which action it should perform.
I am pretty new to HuggingFace technology, and I am probably missing the point for fixing that.
Please, could you let me know what I should do to tell the model what to do?
Thank you!
|
Hey @vicente-m47 ,
Could you please provide the whole code you executed?
If you want to deploy a model from Hugging Face – The AI community building the future. 1 you can use the “deploy” button on each of the model pages.
[screenshot: deploy button on the model page]
This will generate a code snippet for you
[screenshot: generated deployment code snippet]
From reading the code you attached, you are trying to send an input for question-answering to a model (dccuchile/bert-base-spanish-wwm-uncased) which is not fine-tuned for question-answering. Also, you are sending an English input to a Spanish model.
A good starting point for new Hugging Facer is our course at Transformer models - Hugging Face Course.
You can find more information about deploying to sagemaker in the documentation here Deploy models to Amazon SageMaker 1
| 0 |
huggingface
|
Amazon SageMaker
|
CreateTrainingJob ValidationException
|
https://discuss.huggingface.co/t/createtrainingjob-validationexception/8963
|
I am running a Hugging Face training job (which works locally) in a container running in EC2. When I call fit on the estimator I get the error:
An error occurred (ValidationException) when calling the CreateTrainingJob operation: TrainingImageConfig with TrainingRepositoryAccessMode set to VPC must be provided when using a training image from a private Docker registry. Please provideTrainingImageConfig and TrainingRepositoryAccessMode set to VPC when using a training image from a private Docker registry.
The variables mentioned do not exist in the documentation and I can't find them in the source code: TrainingImageConfig, TrainingRepositoryAccessMode.
There are some mentions for vpc_config and vpcConfig but there doesn’t seem to be a way to pass these things through to SM from HF.
My code is basically this:
hyperparameters = {'epochs': 1,
'per_device_train_batch_size': 32,
'model_name_or_path': model_name
}
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
instance_type=instance_type,
instance_count=1,
role=role,
image_uri='docker.artifactory.xxxx.com/yyyy/mlaeep/mlaeep.0.0.1-dev',
transformers_version='4.4',
# pytorch_version='1.6',
py_version='py36',
hyperparameters=hyperparameters
)
huggingface_estimator.fit(
{'train': training_input_path,
'test': test_input_path
},
job_name='MlAeepTrainer'
)
|
@philschmid or @OlivierCR may be able to help with this.
| 0 |
huggingface
|
Amazon SageMaker
|
Distributed Training on Sagemaker
|
https://discuss.huggingface.co/t/distributed-training-on-sagemaker/7074
|
Hey! Sorry to post again. This error is really over my head. The script works when run on just one GPU. However, after adding the argument for distributed training, it results in some really unusual errors. I essentially just added the distribution argument and changed the instance type and number of instances. Am I missing something here? This is being run with run_summarization.py. I also deleted some parts of the log that weren't relevant because of the space limitation on the number of characters in a post. Do I potentially have to use the wrapper script you have in the git examples folder that is meant to be used for distributed training? Thanks a lot!
from sagemaker.huggingface import HuggingFace
hyperparameters={
'model_name_or_path': 'google/pegasus-large',
'train_file': "/opt/ml/input/data/train/final_aws_deepgram_train.csv",
'test_file': "/opt/ml/input/data/test/final_aws_deepgram_test.csv",
'validation_file': "/opt/ml/input/data/validation/final_aws_deepgram_validation.csv",
'text_column': 'document',
'summary_column': 'summary',
'do_train': True,
'do_eval': True,
'fp16': True,
'per_device_train_batch_size': 2,
'per_device_eval_batch_size': 2,
'evaluation_strategy': "steps",
'eval_steps': 1000,
'weight_decay': 0.01,
'learning_rate': 2e-5,
'max_grad_norm': 1,
'max_steps': 2000,
'max_source_length': 500,
'max_target_length': 100,
'load_best_model_at_end': True,
'output_dir': '/opt/ml/model'
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'} #'branch': 'v4.6.1'
# instance configurations
instance_type='ml.p3.16xlarge'
instance_count=2
volume_size=200
# estimator
huggingface_estimator = HuggingFace(entry_point='run_summarization_original.py',
source_dir='transformers/examples/pytorch/summarization',
git_config=git_config,
instance_type=instance_type,
instance_count=instance_count,
volume_size=volume_size,
role=role,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
distribution= distribution,
hyperparameters = hyperparameters)
2021-06-22 21:50:12,132 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
Command "mpirun --host algo-1:8,algo-2:8 -np 16 --allow-run-as-root --tag-output --oversubscribe -mca btl_tcp_if_include eth0 -mca oob_tcp_if_include eth0 -mca plm_rsh_no_tree_spawn 1 -mca pml ob1 -mca btl ^openib -mca orte_abort_on_non_zero_status 1 -mca btl_vader_single_copy_mechanism none -mca plm_rsh_num_concurrent 2 -x NCCL_SOCKET_IFNAME=eth0 -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x SMDATAPARALLEL_USE_HOMOGENEOUS=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1 -x LD_PRELOAD=/opt/conda/lib/python3.6/site-packages/gethostname.cpython-36m-x86_64-linux-gnu.so -x SMDATAPARALLEL_SERVER_ADDR=algo-1 -x SMDATAPARALLEL_SERVER_PORT=7592 -x SAGEMAKER_INSTANCE_TYPE=ml.p3.16xlarge smddprun /opt/conda/bin/python3.6 -m mpi4py run_summarization_original.py --do_eval True --do_train True --eval_steps 1000 --evaluation_strategy steps --fp16 True --learning_rate 2e-05 --load_best_model_at_end True --max_grad_norm 1 --max_source_length 500 --max_steps 2000 --max_target_length 100 --model_name_or_path google/pegasus-large --output_dir /opt/ml/model --per_device_eval_batch_size 2 --per_device_train_batch_size 2 --summary_column summary --test_file /opt/ml/input/data/test/final_aws_deepgram_test.csv --text_column document --train_file /opt/ml/input/data/train/final_aws_deepgram_train.csv --validation_file /opt/ml/input/data/validation/final_aws_deepgram_validation.csv --weight_decay 0.01"
Warning: Permanently added 'algo-2,10.2.251.196' (ECDSA) to the list of known hosts.#015
[1,13]<stderr>:#0150 tables [00:00, ? tables/s][1,0]<stderr>:#0150 tables [00:00, ? tables/s][1,13]<stderr>:#0151 tables [00:00, 7.31 tables/s][1,13]<stderr>:#015 #015[1,13]<stderr>:#0150 tables [00:00, ? tables/s][1,13]<stderr>:#015 #015[1,13]<stderr>:#0150 tables [00:00, ? tables/s][1,13]<stderr>:#015 #015[1,0]<stderr>:#0151 tables [00:00, 7.17 tables/s][1,0]<stderr>:#015 #015[1,0]<stderr>:#0150 tables [00:00, ? tables/s][1,0]<stderr>:#015 #015[1,0]<stderr>:#0150 tables [00:00, ? tables/s][1,13]<stderr>:#015Downloading: 0%| | 0.00/3.09k [00:00<?, ?B/s][1,13][00:00<00:00, 3.60MB/s]
[1,0]<stderr>:#015 #015[1,0]<stderr>:https://huggingface.co/google/pegasus-large/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp2ojn0fqy
[1,8]<stderr>:loading configuration file https://huggingface.co/google/pegasus-large/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3fa0446657dd3714a950ba400a3fa72686d0f815da436514e4823a973ef23e20.7a0cb161a6d34c3881891b70d4fa06557175ac7b704a19bf0100fb9c21af9286
[1,0]<stderr>:#015Downloading: 0%| | 0.00/3.09k [00:00<?, ?B/s][1,0]<stderr>:#015Downloading: 100%|ââââââââââ| 3.09k/3.09k [00:00<00:00, 2.57MB/s]
[1,0]<stderr>:storing https://huggingface.co/google/pegasus-large/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/3fa0446657dd3714a950ba400a3fa72686d0f815da436514e4823a973ef23e20.7a0cb161a6d34c3881891b70d4fa06557175ac7b704a19bf0100fb9c21af9286
[1,0]<stderr>:creating metadata file for /root/.cache/huggingface/transformers/3fa0446657dd3714a950ba400a3fa72686d0f815da436514e4823a973ef23e20.7a0cb161a6d34c3881891b70d4fa06557175ac7b704a19bf0100fb9c21af9286
[1,0]<stderr>:loading configuration file https://huggingface.co/google/pegasus-large/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3fa0446657dd3714a950ba400a3fa72686d0f815da436514e4823a973ef23e20.7a0cb161a6d34c3881891b70d4fa06557175ac7b704a19bf0100fb9c21af9286
[1,0]<stderr>:Model config PegasusConfig {
[1,0]<stderr>: "_name_or_path": "google/pegasus-large",
[1,0]<stderr>: "activation_dropout": 0.1,
[1,0]<stderr>: "activation_function": "relu",
[1,0]<stderr>: "add_bias_logits": false,
[1,0]<stderr>: "add_final_layer_norm": true,
[1,0]<stderr>: "architectures": [
[1,0]<stderr>: "PegasusForConditionalGeneration"
[1,0]<stderr>: ],
[1,8]<stderr>:loading weights file https://huggingface.co/google/pegasus-large/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/ef3a8274e003ba4d3ae63f2728378e73affec0029e797c0bbb80be8856130c4f.a99cb24bd92c7087e95d96a1c3eb660b51e498705f8bd068a58c69c20616f514
[1,0]<stderr>:loading weights file https://huggingface.co/google/pegasus-large/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/ef3a8274e003ba4d3ae63f2728378e73affec0029e797c0bbb80be8856130c4f.a99cb24bd92c7087e95d96a1c3eb660b51e498705f8bd068a58c69c20616f514
[1,12]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,15]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,9]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,8]<stderr>:All model checkpoint weights were used when initializing PegasusForConditionalGeneration.
[1,8]<stderr>: "max_position_embeddings": 1024
[1,8]<stderr>: },
[1,8]<stderr>: "summarization_reddit_tifu": {
[1,8]<stderr>: "length_penalty": 0.6,
[1,8]<stderr>: "max_length": 128,
[1,8]<stderr>: "max_position_embeddings": 512
[1,8]<stderr>: },
[1,8]<stderr>: "summarization_wikihow": {
[1,8]<stderr>: "length_penalty": 0.6,
[1,8]<stderr>: "max_length": 256,
[1,8]<stderr>: "max_position_embeddings": 512
[1,8]<stderr>: },
[1,8]<stderr>: "summarization_xsum": {
[1,8]<stderr>: "length_penalty": 0.8,
[1,8]<stderr>: "max_length": 64,
[1,8]<stderr>: "max_position_embeddings": 512
[1,8]<stderr>: }
[1,8]<stderr>: },
[1,8]<stderr>: "transformers_version": "4.6.1",
[1,8]<stderr>: "use_cache": true,
[1,8]<stderr>: "vocab_size": 96103
[1,8]<stderr>:
[1,8]<stderr>:All the weights of PegasusForConditionalGeneration were initialized from the model checkpoint at google/pegasus-large.
[1,8]<stderr>:If your task is similar to the task the model of the checkpoint was trained on, you can already use PegasusForConditionalGeneration for predictions without further training.
[1,10]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,8]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,11]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,14]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,13]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,12]<stderr>:#015 50%|█████ | 1/2 [00:01<00:01, 1.94s/ba][1,15]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.35s/ba][1,12]<stderr>:#015100%|██████████| 2/2 [00:02<00:00, 1.56s/ba][1,12]<stderr>:#015100%|██████████| 2/2 [00:02<00:00, 1.30s/ba][1,12]<stderr>:
[1,12]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,12]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.28ba/s][1,12]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.27ba/s]
[1,10]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.75s/ba][1,8]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.83s/ba][1,15]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.84s/ba][1,15]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.51s/ba][1,15]<stderr>:
[1,5]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,6]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,15]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,9]<stderr>:#015 50%|█████ | 1/2 [00:03<00:03, 3.09s/ba][1,4]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,3]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,2]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,1]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,7]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,12]<stderr>:#015Downloading: 0%| | 0.00/2.17k [00:00<?, ?B/s][1,12]<stderr>:#015Downloading: 5.61kB [00:00, 2.18MB/s] [1,12]<stderr>:
[1,11]<stderr>:#015 50%|█████ | 1/2 [00:03<00:03, 3.04s/ba][1,0]<stderr>:All model checkpoint weights were used when initializing PegasusForConditionalGeneration.
[1,0]<stderr>:
[1,0]<stderr>:All the weights of PegasusForConditionalGeneration were initialized from the model checkpoint at google/pegasus-large.
[1,0]<stderr>:If your task is similar to the task the model of the checkpoint was trained on, you can already use PegasusForConditionalGeneration for predictions without further training.
[1,0]<stderr>:#015 0%| | 0/2 [00:00<?, ?ba/s][1,14]<stderr>:#015 50%|█████ | 1/2 [00:03<00:03, 3.27s/ba][1,15]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 1.96ba/s][1,15]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 1.95ba/s][1,15]<stderr>:
[1,13]<stderr>:#015 50%|█████ | 1/2 [00:03<00:03, 3.48s/ba][1,8]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.24s/ba][1,8]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.85s/ba][1,8]<stderr>:
[1,10]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.22s/ba][1,10]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.87s/ba][1,10]<stderr>:
[1,10]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,8]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,9]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.43s/ba][1,9]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.01s/ba][1,9]<stderr>:
[1,11]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.40s/ba][1,11]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.97s/ba][1,11]<stderr>:
[1,9]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,11]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,10]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.23ba/s][1,10]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.21ba/s][1,10]<stderr>:
[1,8]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.18ba/s][1,8]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.18ba/s]
[1,14]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.55s/ba][1,14]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.07s/ba]
[1,9]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.84ba/s][1,9]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.83ba/s]
[1,14]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,13]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.62s/ba][1,13]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.05s/ba][1,13]<stderr>:
[1,11]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.78ba/s][1,11]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.75ba/s][1,11]<stderr>:
[1,13]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,14]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.90ba/s][1,14]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.89ba/s]
[1,13]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 5.18ba/s][1,13]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 5.17ba/s]
[1,5]<stderr>:#015 50%|█████ | 1/2 [00:01<00:01, 1.97s/ba][1,8]<stderr>:max_steps is given, it will override any value given in num_train_epochs
[1,8]<stderr>:Using amp fp16 backend
[1,5]<stderr>:#015100%|██████████| 2/2 [00:02<00:00, 1.56s/ba][1,5]<stderr>:#015100%|██████████| 2/2 [00:02<00:00, 1.28s/ba]
[1,6]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.55s/ba][1,5]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,2]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.70s/ba][1,1]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.79s/ba][1,3]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.81s/ba][1,5]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.81ba/s][1,5]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.81ba/s][1,5]<stderr>:
[1,4]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.87s/ba][1,7]<stderr>:#015 50%|█████ | 1/2 [00:02<00:02, 2.85s/ba][1,5]<stderr>:#015Downloading: 0%| | 0.00/2.17k [00:00<?, ?B/s][1,5]<stderr>:#015Downloading: 5.61kB [00:00, 1.62MB/s]
[1,6]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.06s/ba][1,6]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.74s/ba]
[1,2]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.11s/ba][1,2]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.72s/ba]
[1,6]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,2]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,4]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.23s/ba][1,4]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.80s/ba][1,4]<stderr>:
[1,1]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.19s/ba][1,1]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.79s/ba][1,1]<stderr>:
[1,7]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.22s/ba][1,7]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.79s/ba][1,7]<stderr>:
[1,4]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,3]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 2.21s/ba][1,3]<stderr>:#015100%|██████████| 2/2 [00:03<00:00, 1.82s/ba][1,1]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,3]<stderr>:
[1,6]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.08ba/s][1,6]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 4.07ba/s][1,6]<stderr>:
[1,3]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,7]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,0]<stderr>:#015 50%|█████ | 1/2 [00:03<00:03, 3.68s/ba][1,2]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.76ba/s][1,2]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.75ba/s][1,2]<stderr>:
[1,4]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.01ba/s][1,4]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.88ba/s][1,4]<stderr>:
[1,7]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.09ba/s][1,7]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 3.08ba/s]
[1,1]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.31ba/s][1,1]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.27ba/s]
[1,3]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.84ba/s][1,3]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 2.84ba/s]
[1,0]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.73s/ba][1,0]<stderr>:#015100%|██████████| 2/2 [00:04<00:00, 2.09s/ba]
[1,8]<stderr>:}
[1,8]<stderr>:
[1,0]<stderr>:loading configuration file https://huggingface.co/google/pegasus-large/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3fa0446657dd3714a950ba400a3fa72686d0f815da436514e4823a973ef23e20.7a0cb161a6d34c3881891b70d4fa06557175ac7b704a19bf0100fb9c21af9286
[1,0]<stderr>:Model config PegasusConfig {
[1,0]<stderr>: "_name_or_path": "google/pegasus-large",
[1,0]<stderr>: "activation_dropout": 0.1,
[1,0]<stderr>: "activation_function": "relu",
[1,0]<stderr>: "add_bias_logits": false,
[1,0]<stderr>: "add_final_layer_norm": true,
[1,0]<stderr>: "architectures": [
[1,0]<stderr>: "PegasusForConditionalGeneration"
[1,0]<stderr>: ],
[1,0]<stderr>: "attention_dropout": 0.1,
[1,0]<stderr>: "bos_token_id": 0,
[1,0]<stderr>: "classif_dropout": 0.0,
[1,0]<stderr>: "classifier_dropout": 0.0,
[1,0]<stderr>: "d_model": 1024,
[1,0]<stderr>: "decoder_attention_heads": 16,
[1,0]<stderr>: "decoder_ffn_dim": 4096,
[1,0]<stderr>: "decoder_layerdrop": 0.0,
[1,0]<stderr>: "decoder_layers": 16,
[1,0]<stderr>: "decoder_start_token_id": 0,
[1,0]<stderr>: "dropout": 0.1,
[1,0]<stderr>: "encoder_attention_heads": 16,
[1,0]<stderr>: "encoder_ffn_dim": 4096,
[1,0]<stderr>: "encoder_layerdrop": 0.0,
[1,0]<stderr>: "encoder_layers": 16,
[1,0]<stderr>: "eos_token_id": 1,
[1,0]<stderr>: "extra_pos_embeddings": 1,
[1,0]<stderr>: "force_bos_token_to_be_generated": false,
[1,0]<stderr>: "forced_eos_token_id": 1,
[1,0]<stderr>: "gradient_checkpointing": false,
[1,0]<stderr>: "id2label": {
[1,0]<stderr>:#015 0%| | 0/1 [00:00<?, ?ba/s][1,0]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 5.17ba/s][1,0]<stderr>:#015100%|██████████| 1/1 [00:00<00:00, 5.16ba/s]
[1,0]<stderr>:max_steps is given, it will override any value given in num_train_epochs
[1,0]<stderr>:Using amp fp16 backend
[1,0]<stderr>:***** Running training *****
[1,0]<stderr>: Num examples = 1558
[1,0]<stderr>: Num Epochs = 41
[1,0]<stderr>: Instantaneous batch size per device = 2
[1,0]<stderr>: Total train batch size (w. parallel, distributed & accumulation) = 32
[1,0]<stderr>: Gradient Accumulation steps = 1
[1,0]<stderr>: Total optimization steps = 2000
[1,0]<stderr>:#015 0%| | 0/2000 [00:00<?, ?it/s][1,8]<stderr>:***** Running training *****
[1,8]<stderr>: Num examples = 1558
[1,8]<stderr>: Num Epochs = 41
[1,8]<stderr>: Instantaneous batch size per device = 2
[1,8]<stderr>: Total train batch size (w. parallel, distributed & accumulation) = 32
[1,8]<stderr>: Gradient Accumulation steps = 1
[1,0]<stderr>: "0": "LABEL_0",
[1,0]<stderr>: "1": "LABEL_1",
[1,0]<stderr>: "2": "LABEL_2"
[1,0]<stderr>: },
[1,0]<stderr>: "init_std": 0.02,
[1,0]<stderr>: "is_encoder_decoder": true,
[1,0]<stderr>: "label2id": {
[1,0]<stderr>: "LABEL_0": 0,
[1,0]<stderr>: "LABEL_1": 1,
[1,0]<stderr>: "LABEL_2": 2
[1,0]<stderr>: },
[1,0]<stderr>: "length_penalty": 0.8,
[1,0]<stderr>: "max_length": 256,
[1,0]<stderr>: "max_position_embeddings": 1024,
[1,0]<stderr>: "model_type": "pegasus",
[1,0]<stderr>: "normalize_before": true,
[1,0]<stderr>: "normalize_embedding": false,
[1,0]<stderr>: "num_beams": 8,
[1,0]<stderr>: "num_hidden_layers": 16,
[1,0]<stderr>: "pad_token_id": 0,
[1,0]<stderr>: "scale_embedding": true,
[1,0]<stderr>: "static_position_embeddings": true,
[1,0]<stderr>: "task_specific_params": {
[1,0]<stderr>: "summarization_aeslc": {
[1,0]<stderr>: "length_penalty": 0.6,
[1,0]<stderr>: "max_length": 32,
[1,0]<stderr>: "max_position_embeddings": 512
[1,0]<stderr>: },
[1,0]<stderr>: "summarization_arxiv": {
[1,0]<stderr>: "length_penalty": 0.8,
[1,0]<stderr>: "max_length": 256,
[1,8]<stderr>: Total optimization steps = 2000
[1,8]<stderr>:#015 0%| | 0/2000 [00:00<?, ?it/s][1,8]<stderr>:#015 0%| | 1/2000 [00:07<4:03:57, 7.32s/it][1,0]<stderr>:#015 0%| | 1/2000 [00:07<4:07:45, 7.44s/it][1,8]<stderr>:#015 0%| | 2/2000 [00:09<3:17:17, 5.92s/it][1,0]<stderr>:#015 0%| | 2/2000 [00:10<3:19:39, 6.00s/it][1,0]<stderr>:#015 0%| | 3/2000 [00:11<2:33:21, 4.61s/it][1,8]<stderr>:#015 0%| | 3/2000 [00:11<2:33:05, 4.60s/it][1,0]<stderr>:#015 0%| | 4/2000 [00:12<2:01:18, 3.65s/it][1,8]<stderr>:#015 0%| | 4/2000 [00:12<2:01:26, 3.65s/it][1,0]<stderr>:#015 0%| | 5/2000 [00:15<1:47:36, 3.24s/it][1,8]<stderr>:#015 0%| | 5/2000 [00:15<1:47:43, 3.24s/it]--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 6 in communicator MPI COMMUNICATOR 5 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[algo-1:00046] 13 more processes have sent help message help-mpi-api.txt / mpi-abort
[algo-1:00046] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
MPI_ABORT was invoked on rank 6 in communicator MPI COMMUNICATOR 5 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[algo-1:00046] 13 more processes have sent help message help-mpi-api.txt / mpi-abort
[algo-1:00046] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
2021-06-22 21:50:32 Failed - Training job failed
ProfilerReport-1624398094: Stopping
2021-06-22 21:50:42,166 sagemaker-training-toolkit INFO MPI process finished.
2021-06-22 21:50:42,166 sagemaker-training-toolkit INFO Reporting training SUCCESS
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
<ipython-input-10-7e1bcc378f37> in <module>
3 {'train': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_train.csv',
4 'test': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_test.csv',
----> 5 'validation': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_validation.csv'}
6 )
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
680 self.jobs.append(self.latest_training_job)
681 if wait:
--> 682 self.latest_training_job.wait(logs=logs)
683
684 def _compilation_job_name(self):
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs)
1623 # If logs are requested, call logs_for_jobs.
1624 if logs != "None":
-> 1625 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
1626 else:
1627 self.sagemaker_session.wait_for_job(self.job_name)
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type)
3694
3695 if wait:
-> 3696 self._check_job_status(job_name, description, "TrainingJobStatus")
3697 if dot:
3698 print()
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name)
3254 ),
3255 allowed_statuses=["Completed", "Stopped"],
-> 3256 actual_status=status,
3257 )
3258
UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2021-06-22-21-41-34-638: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "mpirun --host algo-1:8,algo-2:8 -np 16 --allow-run-as-root --tag-output --oversubscribe -mca btl_tcp_if_include eth0 -mca oob_tcp_if_include eth0 -mca plm_rsh_no_tree_spawn 1 -mca pml ob1 -mca btl ^openib -mca orte_abort_on_non_zero_status 1 -mca btl_vader_single_copy_mechanism none -mca plm_rsh_num_concurrent 2 -x NCCL_SOCKET_IFNAME=eth0 -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x SMDATAPARALLEL_USE_HOMOGENEOUS=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1 -x LD_PRELOAD=/opt/conda/lib/python3.6/site-packages/gethostname.cpython-36m-x86_64-linux-gnu.so -x SMDATAPARALLEL_SERVER_ADDR=algo-1 -x SMDATAPARALLEL_SERVER_PORT=7592 -x SAGEMAKER_INSTANCE_TYPE=ml.p3.16xlarge smddprun /opt/conda/bin/python3.6 -m mpi4py run_summarization_original.py --do_eval True --do_train True --eval_steps 1000 --evaluation_strategy steps --fp16 True --learning_rate 2e-05 --load_best_model_at_end True --max_grad_norm 1 --max_source_length 500 --max_steps 2000 --max_target_length 100 --m
|
Hey @ujjirox,
Could you upload the logs as files maybe? When running distributed training it sometimes happens that the real error appears well above the final exit message.
Without seeing the full error, it might be that your batch_size is too big. When scaling up from p3.2xlarge to p3.16xlarge (same GPUs), SageMaker might use more of the GPU memory for the distribution.
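If GPU memory turns out to be the problem, a minimal adjustment of the hyperparameters above could look like the following. This is only a sketch (the values are a starting point, not a confirmed fix): it lowers the per-device batch size and compensates with gradient accumulation so the effective batch size across the 16 GPUs stays at 32.
# Sketch only: halve the per-device batch size and add gradient accumulation
# so the effective global batch size (16 GPUs x 1 x 2 = 32) stays the same.
hyperparameters = {
    'model_name_or_path': 'google/pegasus-large',
    'per_device_train_batch_size': 1,        # was 2
    'per_device_eval_batch_size': 1,
    'gradient_accumulation_steps': 2,         # compensates for the smaller per-device batch
    'fp16': True,
    'output_dir': '/opt/ml/model',
    # ... keep the remaining hyperparameters from the snippet above
}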
| 0 |
huggingface
|
Amazon SageMaker
|
Distributed Training run_summarization.py
|
https://discuss.huggingface.co/t/distributed-training-run-summarization-py/8789
|
Hi,
I cannot for the life of me figure out what is going wrong. I am following the tutorial 08 distributed training and when sending the job to AWS the error continually shows up. I am running it on a local jupyter notebook with these setups:
(screenshot of the notebook setup)
…
[1,0]:storing https://huggingface.co/facebook/bart-large-cnn/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/4ccdf4cdc01b790f9f9c636c7695b5d443180e8dbd0cbe49e07aa918dda1cef0.fa29468c10a34ef7f6cfceba3b174d3ccc95f8d755c3ca1b829aff41cc92a300
[1,0]:creating metadata file for /root/.cache/huggingface/transformers/4ccdf4cdc01b790f9f9c636c7695b5d443180e8dbd0cbe49e07aa918dda1cef0.fa29468c10a34ef7f6cfceba3b174d3ccc95f8d755c3ca1b829aff41cc92a300
[1,5]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,7]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,3]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,0]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,1]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,4]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
[1,6]:Environment variable SAGEMAKER_INSTANCE_TYPE is not set
…
[1,6]:#015100%|ââââââââââ| 1/1 [00:00<00:00, 8.65ba/s][1,6]:#015100%|ââââââââââ| 1/1 [00:00<00:00, 8.62ba/s]
[1,7]:#015Downloading: 0%| | 0.00/2.17k [00:00<?, ?B/s][1,7]:#015Downloading: 5.61kB [00:00, 5.02MB/s]
[1,0]:#015Downloading: 0%| | 0.00/2.17k [00:00<?, ?B/s][1,0]:#015Downloading: 5.61kB [00:00, 4.54MB/s]
[1,0]:Using amp fp16 backend
[1,0]:***** Running training *****
[1,0]: Num examples = 14732
[1,0]: Num Epochs = 3
[1,0]: Instantaneous batch size per device = 4
[1,0]: Total train batch size (w. parallel, distributed & accumulation) = 32
[1,0]: Gradient Accumulation steps = 1
[1,0]: Total optimization steps = 1383
[1,0]:#015 0%| | 0/1383 [00:00<?, ?it/s][1,0]:#015 0%| | 1/1383 [00:02<59:19, 2.58s/it][1,0]:#015 0%| | 2/1383 [00:04<51:42, 2.25s/it][1,0]:#015 0%| | 3/1383 [00:05<45:46, 1.99s/it][1,0]:#015 0%| | 4/1383 [00:07<45:13, 1.97s/it][1,0]:#015 0%| | 5/1383 [00:09<43:12, 1.88s/it][1,0]:#015 0%| | 6/1383 [00:10<41:53, 1.83s/it][1,0]:#015 1%| | 7/1383 [00:12<41:48, 1.82s/it][1,0]:#015 1%| | 8/1383 [00:14<43:16, 1.89s/it][1,0]:#015 1%| | 9/1383 [00:16<44:01, 1.92s/it][1,0]:#015 1%| | 10/1383 [00:18<44:03, 1.93s/it][1,0]:#015 1%| | 11/1383 [00:20<44:13, 1.93s/it][1,0]:#015 1%| | 12/1383 [00:22<44:31, 1.95s/it][1,0]:#015 1%| | 13/1383 [00:24<44:06, 1.93s/it][1,0]:#015 1%| | 14/1383 [00:26<43:58, 1.93s/it][1,0]:#015 1%| | 15/1383 [00:28<43:42, 1.92s/it][1,0]:#015 1%| | 16/1383 [00:30<43:46, 1.92s/it][1,0]:#015 1%| | 17/1383 [00:32<43:47, 1.92s/it]--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI COMMUNICATOR 5 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
2021-07-29 04:41:50 Uploading - Uploading generated training model
2021-07-29 04:41:50 Failed - Training job failed
…
UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2021-07-29-04-30-08-439: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command “mpirun --host algo-1 -np 8 --allow-run-as-root --tag-output --oversubscribe -mca btl_tcp_if_include eth0 -mca oob_tcp_if_include eth0 -mca plm_rsh_no_tree_spawn 1 -mca pml ob1 -mca btl ^openib -mca orte_abort_on_non_zero_status 1 -mca btl_vader_single_copy_mechanism none -mca plm_rsh_num_concurrent 1 -x NCCL_SOCKET_IFNAME=eth0 -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x SMDATAPARALLEL_USE_SINGLENODE=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1 -x LD_PRELOAD=/opt/conda/lib/python3.6/site-packages/gethostname.cpython-36m-x86_64-linux-gnu.so smddprun /opt/conda/bin/python3.6 -m mpi4py run_summarization.py --dataset_name samsum --do_eval True --do_predict True --do_train True --fp16 True --learning_rate 5e-05 --model_name_or_path facebook/bart-large-cnn --num_train_epochs 3 --output_dir /opt/ml/model --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate True --seed 7”
[1,2]:Environment variable SAGEMAKER_INSTANCE_TYPE is n
Any help would be greatly appreciated. The CloudWatch logs don’t really have anything else to say. Definitely seems like a problem with setting the environment variable SAGEMAKER_INSTANCE_TYPE but I thought I already set it by specifying instance_type when initializing huggingface_estimator??
Thanks
|
Hey @cdwyer1bod,
Thanks for opening the thread. Happy to help you.
Could you still share the full CloudWatch logs? Sometimes the errors are a bit hidden.
I saw you changed the instance from ml.p3dn.24xlarge to ml.p3.16xlarge and kept the batch_size; this could be the issue. Could you reduce the batch_size to 2 or change the instance type?
| 0 |
huggingface
|
Amazon SageMaker
|
FP16 doesn’t reduce Trainer Training time
|
https://discuss.huggingface.co/t/fp16-doesnt-reduce-trainer-training-time/8517
|
Hi,
I'm using this SageMaker HF sample notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub adjusted with train_batch_size = 128, tested on both 1 p3.16xlarge and 1 p4d.24xlarge. For each instance I'm doing a job with fp16=True and a job without the flag. GPU usage is erratic (a sawtooth oscillating between 50% and 750%).
The impact of fp16=True is only a 1% training time reduction, on each instance. Is it because:
Not specifying fp16 in the trainer already uses fp16? (seems to be false by default though)
There is a lot of CPU work & I/O in that demo that will not leverage float16?
Transformers don’t benefit from fp16 training?
|
Hey @OlivierCR,
Can you please share all of the hyperparameters you used? train_batch_size = 128 seems pretty high to me to work at all.
Are you using distributed training?
The training time reduction should be way higher, more around 20-40%.
When using the example, have you adjusted train.py to accept fp16 as a hyperparameter, or have you defined it directly in the script?
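For reference, one common way to wire fp16 through as a hyperparameter when you bring your own train.py looks like the sketch below. The argument handling is an assumption about how your script is structured, not the exact example code; note that SageMaker passes boolean hyperparameters to the script as the strings "True"/"False".
# In the notebook: fp16 travels to the training container as a CLI argument.
hyperparameters = {'epochs': 1, 'train_batch_size': 32, 'fp16': True}

# In train.py (sketch): read it back and hand it to TrainingArguments.
import argparse
from transformers import TrainingArguments

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=3)
parser.add_argument('--train_batch_size', type=int, default=32)
parser.add_argument('--fp16', type=str, default='False')   # arrives as the string "True"/"False"
args, _ = parser.parse_known_args()

training_args = TrainingArguments(
    output_dir='/opt/ml/model',
    num_train_epochs=args.epochs,
    per_device_train_batch_size=args.train_batch_size,
    fp16=args.fp16.lower() == 'true',   # this is what actually enables mixed precision in the Trainer
)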
| 0 |
huggingface
|
Amazon SageMaker
|
ValueError: Source directory does not exist in the repo. Training causal lm in sagemaker
|
https://discuss.huggingface.co/t/valueerror-source-directory-does-not-exist-in-the-repo-training-causal-lm-in-sagemaker/8566
|
ValueError: Source directory does not exist in the repo. (Looks like the link is broken).
I am getting the error above when I am trying to train a text generation model in SageMaker.
Please see the script and configuration that I am using:
import sagemaker
from sagemaker.huggingface import HuggingFace

git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'} # v4.6.1 is referring to the transformers_version you use in the estimator.

# gets role for executing training job
role = sagemaker.get_execution_role()

hyperparameters = {
    'model_name_or_path': 'ktangri/gpt-neo-demo',
    'output_dir': '/opt/ml/model',
    'fp16': True,
    'train_file': '/opt/ml/input/data/train/train.csv',
    'validation_file': '/opt/ml/input/data/validation/validation.csv'
    # add your remaining hyperparameters
    # more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/language-modeling
}

# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed': {'dataparallel': {'enabled': True}}}

# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'}

# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
    entry_point='run_clm.py',
    source_dir='/examples/language-modeling',
    instance_type='ml.p3.16xlarge',
    instance_count=2,
    role=role,
    git_config=git_config,
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    hyperparameters=hyperparameters
)

huggingface_estimator.fit(
    {'train': 's3://ch-questions-dataset-east1/train/train.csv',
     'validation': 's3://ch-questions-dataset-east1/validation/validation.csv'}
)
Thank you.
|
Thanks for pointing this out: with transformers 4.6.1 the examples/ structure changed.
For source_dir it is now:
source_dir='examples/pytorch/language-modeling',
We’ll fix the code snippet on the hub.
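For completeness, the corrected estimator call from the snippet above would then look roughly like this (everything except source_dir unchanged):
huggingface_estimator = HuggingFace(
    entry_point='run_clm.py',
    source_dir='examples/pytorch/language-modeling',  # new layout in transformers >= 4.6, no leading slash
    git_config={'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'},
    instance_type='ml.p3.16xlarge',
    instance_count=2,
    role=role,
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    hyperparameters=hyperparameters
)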
| 0 |
huggingface
|
Amazon SageMaker
|
Running custom data files on run_summarization.py
|
https://discuss.huggingface.co/t/running-custom-data-files-on-run-summarization-py/7070
|
Hi there,
I have been running a script to train a pretrained transformer on a summarization task. I am using custom data which I have put into my S3 bucket which is also the default bucket for this job.
I have been getting this error in return, and have not been able to figure out what the solution is. I have run the exact same script on the xsum dataset just to see if it’s the custom dataset that is the issue, and I can indeed confirm that the job works when using the xsum dataset.
from sagemaker.huggingface import HuggingFace
hyperparameters={
'model_name_or_path': 'google/pegasus-large',
'train_file': "/opt/ml/input/data/train/final_aws_deepgram_train.csv",
'test_file': "/opt/ml/input/data/test/final_aws_deepgram_test.csv",
'validation_file': "/opt/ml/input/data/validation/final_aws_deepgram_validation.csv",
'text_column': 'document',
'summary_column': 'summary',
'do_train': True,
'do_eval': True,
'fp16': True,
'per_device_train_batch_size': 2,
'per_device_eval_batch_size': 2,
'evaluation_strategy': "steps",
'eval_steps': 200,
'weight_decay': 0.01,
'learning_rate': 2e-5,
'max_grad_norm': 1,
'max_steps': 200,
'max_source_length': 500,
'max_target_length': 100,
'load_best_model_at_end': True,
'output_dir': '/opt/ml/model'
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'} #'branch': 'v4.6.1'
# instance configurations
instance_type='ml.p3.2xlarge'
instance_count=1
volume_size=200
# metric definition to extract the results
metric_definitions=[
{"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"},
{'Name': 'train_samples_per_second', 'Regex': "train_samples_per_second.*=\D*(.*?)$"}
]
huggingface_estimator = HuggingFace(entry_point='run_summarization_original.py',
source_dir='transformers/examples/pytorch/summarization',
git_config=git_config,
metric_definitions=metric_definitions,
instance_type=instance_type,
instance_count=instance_count,
volume_size=volume_size,
role=role,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters)
# starting the train job
huggingface_estimator.fit(
{'train': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_train.csv',
'test': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_test.csv',
'validation': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_validate.csv'}
)
2021-06-22 15:34:39 Starting - Starting the training job...
2021-06-22 15:35:03 Starting - Launching requested ML instancesProfilerReport-1624376073: InProgress
.........
2021-06-22 15:36:33 Starting - Preparing the instances for training.........
2021-06-22 15:38:06 Downloading - Downloading input data
2021-06-22 15:38:06 Training - Downloading the training image.....................
2021-06-22 15:41:34 Uploading - Uploading generated training model
2021-06-22 15:41:34 Failed - Training job failed
..
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
<ipython-input-7-ca8819244de5> in <module>
3 {'train': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_train.csv',
4 'test': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_test.csv',
----> 5 'validation': 's3://qfn-transcription/ujjawal_files/final_aws_deepgram_validate.csv'}
6 )
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
680 self.jobs.append(self.latest_training_job)
681 if wait:
--> 682 self.latest_training_job.wait(logs=logs)
683
684 def _compilation_job_name(self):
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs)
1623 # If logs are requested, call logs_for_jobs.
1624 if logs != "None":
-> 1625 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
1626 else:
1627 self.sagemaker_session.wait_for_job(self.job_name)
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type)
3683
3684 if wait:
-> 3685 self._check_job_status(job_name, description, "TrainingJobStatus")
3686 if dot:
3687 print()
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name)
3243 ),
3244 allowed_statuses=["Completed", "Stopped"],
-> 3245 actual_status=status,
3246 )
3247
UnexpectedStatusException: Error for Training job huggingface-pytorch-training-2021-06-22-15-34-33-634: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 run_summarization_original.py --do_eval True --do_train True --eval_steps 200 --evaluation_strategy steps --fp16 True --learning_rate 2e-05 --load_best_model_at_end True --max_grad_norm 1 --max_source_length 500 --max_steps 200 --max_target_length 100 --model_name_or_path google/pegasus-large --output_dir /opt/ml/model --per_device_eval_batch_size 2 --per_device_train_batch_size 2 --summary_column summary --test_file /opt/ml/input/data/test/final_aws_deepgram_test.csv --text_column document --train_file /opt/ml/input/data/train/final_aws_deepgram_train.csv --validation_file /opt/ml/input/data/validation/final_aws_deepgram_validation.csv --weight_decay 0.01"
Traceback (most recent call last):
File "run_summarization_original.py", line 606, in <module>
main()
File "run_summarization_original.py", line 325, in main
datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
File "/opt/conda/lib/p
Thanks!
|
Hey @ujjirox,
Is the error log you attached the whole error log? There should be more, maybe in CloudWatch.
Additionally
2021-06-22 15:34:39 Starting - Starting the training job...
2021-06-22 15:35:03 Starting - Launching requested ML instancesProfilerReport-1624376073: InProgress
.........
2021-06-22 15:36:33 Starting - Preparing the instances for training.........
2021-06-22 15:38:06 Downloading - Downloading input data
2021-06-22 15:38:06 Training - Downloading the training image.....................
2021-06-22 15:41:34 Uploading - Uploading generated training model
2021-06-22 15:41:34 Failed - Training job failed
→ this shows that training ran for about 4 minutes and that SageMaker also tried to upload your model. Maybe there was an issue with uploading the model.
| 0 |
huggingface
|
Amazon SageMaker
|
Using custom csv data with run_summarization.py in sagemaker
|
https://discuss.huggingface.co/t/using-custom-csv-data-with-run-summarization-py-in-sagemaker/6890
|
Hi -
I am trying to use the SageMaker Hugging Face estimator to fine-tune a model for summarization using the run_summarization.py entry_point. I have created a SageMaker Studio notebook based on the code from the summarization example notebook provided in the SageMaker examples.
I would like to use my own data to train the model and so have added the following code to make train and validation datasets I have uploaded to s3 storage available to the estimator:
Define s3 locations:
training_input_path = "s3://sagemaker-eu-central-1-88888888888/train_20210607.csv"
test_input_path = "s3://sagemaker-eu-central-1-88888888888/val_20210607.csv"
Define file locations in hyperparameters:
hyperparameters={
...,
'train_file': '/opt/ml/input/data/train_20210607.csv',
'validation_file': '/opt/ml/input/data/val_20210607.csv',
'text_column': 'document',
'summary_column': 'summary',
...
}
Ensure data is loaded when starting training job:
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})
It looks from the log that the data has been loaded into the expected directory:
SM_HP_VALIDATION_FILE=/opt/ml/input/data/val_20210607.csv
SM_HP_TRAIN_FILE=/opt/ml/input/data/train_20210607.csv
However, after the run_summarization.py script is run, I get the following error:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/input/data/train_20210607.csv'
Apologies if I am missing something obvious, but it would be great if anyone could let me know how I should be referencing my data so that it can be used by the run_summarization.py script.
Thank you!
Ben
|
Hey @benG,
Do you have the full error log, i.e. where exactly the error is thrown in run_summarization.py?
And which transformers_version are you using, and what does your git_config look like?
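For reference, a typical git_config / estimator combination for this kind of job looks like the sketch below; it is only an illustration of what is being asked for (the branch should match the transformers_version you pass to the estimator, and the instance settings here are placeholders).
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.6.1'}

huggingface_estimator = HuggingFace(
    entry_point='run_summarization.py',
    source_dir='examples/pytorch/summarization',   # path of the script inside the cloned repo
    git_config=git_config,
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    role=role,
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    hyperparameters=hyperparameters
)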
| 0 |
huggingface
|
Amazon SageMaker
|
‘DistributedDataParallel’ object has no attribute ‘no_sync’
|
https://discuss.huggingface.co/t/distributeddataparallel-object-has-no-attribute-no-sync/5469
|
Hi,
I am trying to fine-tune layoutLM using with the following:
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
estimator = HuggingFace(
entry_point = 'train.py',
py_version = 'py36',
transformers_version='4.4.2',
pytorch_version='1.6.0',
role = role,
instance_type='ml.p3.16xlarge',
instance_count=1,
checkpoint_s3_uri=checkpoint_dir,
checkpoint_local_path='/opt/ml/checkpoints',
hyperparameters = {'epochs': 3,
'batch-size': 16,
'learning-rate': 5e-5,
'use-cuda': True,
'model-name':'microsoft/layoutlm-base-uncased'
},
debugger_hook_config=False,
volume_size = 40,
distribution = distribution,
source_dir = source_dir)
estimator.fit({'input_data_dir': data_uri}, wait = True)
Relevant code in train.py file:
model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased',num_labels = len(labels))
training_args = TrainingArguments(
output_dir='./results',
num_train_epochs=4,
per_device_train_batch_size=16,
per_device_eval_batch_size=32,
warmup_ratio=0.1,
weight_decay=0.01,
report_to='wandb',
run_name = 'test_run',
logging_steps = 500,
fp16 = True,
load_best_model_at_end = True,
evaluation_strategy = 'steps',
gradient_accumulation_steps = 1,
save_steps = 500,
save_total_limit = 5,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator = data_collator,
compute_metrics=compute_metrics,
callbacks = [EarlyStoppingCallback]
)
trainer.train()
Unfortunately I keep getting the following error. Tried tracking down the problem but cant seem to figure it out.
[1,7]<stdout>:Traceback (most recent call last):
[1,7]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
[1,7]<stdout>: "__main__", mod_spec)
[1,7]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,7]<stdout>: exec(code, run_globals)
[1,7]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,7]<stdout>: main()
[1,7]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,7]<stdout>: run_command_line(args)
[1,7]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,7]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,7]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,7]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,7]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 96, in _run_module_code
[1,7]<stdout>: mod_name, mod_spec, pkg_name, script_name)
[1,7]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,7]<stdout>: exec(code, run_globals)
[1,7]<stdout>: File "train.py", line 619, in <module>
[1,7]<stdout>: trainer.train()
[1,7]<stdout>: File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1050, in train
[1,7]<stdout>: with model.no_sync():
[1,7]<stdout>: File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 800, in __getattr__
[1,7]<stdout>: type(self).__name__, name))
[1,7]<stdout>:torch.nn.modules.module.ModuleAttributeError: 'DistributedDataParallel' object has no attribute 'no_sync'
Any help would be appreciated!
|
Oh, and running the same code without DDP on a single-GPU instance works just fine, but obviously takes much longer to complete.
| 0 |
huggingface
|
Amazon SageMaker
|
Where is SageMaker Distributed configured in HF Trainer?
|
https://discuss.huggingface.co/t/where-is-sagemaker-distributed-configured-in-hf-trainer/6026
|
I see that the HF Trainer run_qa script is compatible with SageMaker Distributed Data Parallel, but I don't see where it is configured.
In particular, I can see in the training_args that smdist gets imported and configured, but where is the model wrapped with smdist DDP?
According to the smdist doc, the snippet below is a required step; I'd like to understand where it's done with the HF Trainer:
from smdistributed.dataparallel.torch.parallel.distributed import DistributedDataParallel as DDP
model = DDP(Net().to(device))
|
Hey @OlivierCR,
both the SageMaker Distributed Data-Parallel and the Model-Parallel library are directly integrated into the Trainer API, which uses and initializes both libraries automatically.
For SMD:
The library is first imported with an alias for the default PyTorch DDP library here
and then wraps the model here.
P.S. The _wrap_model() function also handles SMP.
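As a rough paraphrase (not the exact transformers source) of what those two linked spots do when SageMaker Data Parallel is enabled, the flow looks like this:
# Paraphrased sketch of the Trainer internals with smdistributed data parallel enabled.
# The import aliases smdistributed's DDP class to the name normally used for PyTorch DDP.
import smdistributed.dataparallel.torch.distributed as sm_dist
from smdistributed.dataparallel.torch.parallel.distributed import DistributedDataParallel as DDP

sm_dist.init_process_group()            # done for you when the TrainingArguments set up devices
local_rank = sm_dist.get_local_rank()

def wrap_model(model):
    # roughly what Trainer._wrap_model() does in the data-parallel case
    return DDP(model.to(local_rank))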
| 0 |
huggingface
|
Amazon SageMaker
|
“No space left on device” when using HuggingFace + SageMaker
|
https://discuss.huggingface.co/t/no-space-left-on-device-when-using-huggingface-sagemaker/5406
|
Hi, I’m trying to train a model using a HuggingFace estimator in SageMaker but I keep getting this error after a few minutes:
[1,15]: File “pyarrow/ipc.pxi”, line 365, in pyarrow.lib._CRecordBatchWriter.write_batch
[1,15]: File “pyarrow/error.pxi”, line 97, in pyarrow.lib.check_status
[1,15]:OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
[1,15]:
I’m not sure what is triggering this problem because the volume size is high (volume_size=1024)
My hyperparameters are:
{'per_device_train_batch_size': 4,
 'per_device_eval_batch_size': 4,
 'model_name_or_path': 'google/mt5-small',
 'dataset_name': 'mlsum',
 'dataset_config': 'es',
 'text_column': 'text',
 'summary_column': 'summary',
 'max_target_length': 64,
 'do_train': True,
 'do_eval': True,
 'do_predict': True,
 'predict_with_generate': True,
 'output_dir': '/opt/ml/model',
 'num_train_epochs': 3,
 'seed': 7,
 'fp16': True,
 'save_strategy': 'no'}
And my estimator is:
# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='run_summarization.py',  # script
    source_dir='./examples/seq2seq',     # relative path to example
    git_config=git_config,
    instance_type='ml.p3.16xlarge',
    instance_count=2,
    volume_size=1024,
    transformers_version='4.4.2',
    pytorch_version='1.6.0',
    py_version='py36',
    role=role,
    hyperparameters=hyperparameters,
    distribution=distribution
)
Any help would be very much appreciated!
Some more details:
I’m calling fit without extra params, just like this:
huggingface_estimator.fit()
The entry point is this public script:
transformers/run_summarization.py at master · huggingface/transformers · GitHub 11
From traceback I see that the error is happening on line 433:
load_from_cache_file=not data_args.overwrite_cache,
(I guess something is happening here but not totally sure what)
At the moment I’m not saving checkpoints (to prevent that causing the error), using the param ‘save_strategy’: ‘no’
The dataset isn’t that big, 1.7 GB.
The model is quite big, but less than 3 GB
My volume is 1024 GB
|
Could you also include your .fit() call so that the example can be reproduced? And a link to the run_summarization.py if public? Do you have a sense of what could take up a lot of storage? Do you checkpoint a large model very frequently, or do you read a large dataset?
| 0 |
huggingface
|
Amazon SageMaker
|
NER on SageMaker Ground Truth annotations
|
https://discuss.huggingface.co/t/ner-on-sagemaker-ground-truth-annotations/5437
|
Anybody has a public sample showing how to run NER on annotations coming from SageMaker Ground Truth NER 9?
|
Hey @OlivierCR,
Sorry, I don’t have an example using NER annotations coming from SageMaker Ground Truth NER.
Using the example you have sent me
HF Datasets (conll2003)
{'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],
 'id': '0',
 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0],
 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],
 'tokens': ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']}
SM Ground Truth NER (doc)
{"crowd-entity-annotation": {"entities": [
  {"endOffset": 26, "label": "software", "startOffset": 0},
  {"endOffset": 38, "label": "version", "startOffset": 35},
  {"endOffset": 88, "label": "software", "startOffset": 84},
  {"endOffset": 90, "label": "version", "startOffset": 89},
  {"endOffset": 93, "label": "version", "startOffset": 92},
  {"endOffset": 100, "label": "version", "startOffset": 98}
]}}
You could use load_dataset to load the JSON files coming from SM Ground Truth and then use dataset.map() to iterate through them and adjust them to the datasets format.
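To make that concrete, here is a minimal sketch of that approach. The manifest file name, the text column name ("source") and the label list are assumptions you would adapt to your own Ground Truth job; the entity fields follow the output shown above.
from datasets import load_dataset

# each line of the Ground Truth output manifest is one JSON object containing the
# source text plus the "crowd-entity-annotation" entities (character offsets + label)
ds = load_dataset("json", data_files="output.manifest", split="train")

label_list = ["O", "B-software", "I-software", "B-version", "I-version"]   # example labels
label2id = {l: i for i, l in enumerate(label_list)}

def to_token_tags(example):
    text = example["source"]                          # raw text column (name may differ)
    entities = example["crowd-entity-annotation"]["entities"]
    tokens, tags = [], []
    offset = 0
    for token in text.split():
        start = text.index(token, offset)             # character offset of this token
        end = start + len(token)
        offset = end
        tag = "O"
        for ent in entities:
            if start >= ent["startOffset"] and end <= ent["endOffset"]:
                prefix = "B" if start == ent["startOffset"] else "I"
                tag = f"{prefix}-{ent['label']}"
                break
        tokens.append(token)
        tags.append(label2id.get(tag, 0))             # fall back to "O" for unknown labels
    return {"tokens": tokens, "ner_tags": tags}

ds = ds.map(to_token_tags)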
| 0 |
huggingface
|
Spaces
|
Latest Streamlit Version
|
https://discuss.huggingface.co/t/latest-streamlit-version/13888
|
Hey Everyone!
Thanks for the great application!
I was wondering, are there plans to upgrade the streamlit library to a more recent version, e.g. 1.4?
Many thanks
|
Yes, we’ll be doing this and it’s in the roadmap.
cc @cbensimon
| 0 |
huggingface
|
Spaces
|
Space stopped working due to uvicorn reload issue
|
https://discuss.huggingface.co/t/space-stopped-working-due-to-uvicorn-reload-issue/14131
|
Hello I have a space that is running TTS demo using gradio.
Spaces Link 1
I didn’t do anything to the space but on restarting it is giving me the below error:
(screenshot of the error)
|
The space is up now. We got some issues yesterday but restarted all broken spaces.
| 0 |
huggingface
|
Spaces
|
Build error: failed to create endpoint sleepy_shirley on network bridge
|
https://discuss.huggingface.co/t/build-error-failed-to-create-endpoint-sleepy-shirley-on-network-bridge/13988
|
I’m struggling to understand this error, any assistance?
Build error
failed to create endpoint sleepy_shirley on network bridge: adding interface vethb79c97b to bridge docker0 failed: exchange full
Here’s my Spaces:
Recommendations_Metacritic_Scores - a Hugging Face Space by seyia92coding
|
The space seems to be working now after being restarted
| 0 |
huggingface
|
Spaces
|
Record audio from browser in streamlit
|
https://discuss.huggingface.co/t/record-audio-from-browser-in-streamlit/13837
|
Hey guys,
How does one record audio from the browser in streamlit? Nothing fancy, just audio capture with a push of a button similar to how every speech-to-text model on HF Hub has it, for example, facebook/wav2vec2-base-960h · Hugging Face 1
TIA,
Vladimir
|
Hey there! This question might be a better fit in the Streamlit forums or Discord. I found this discussion from some time ago which I hope helps!
Streamlit forum – 15 Sep 20: Record sound from the user's microphone with streamlit
"Hello. If audio input is possible with stremlit, the scalability of machine learning project will be much wider. I would like to know if this feature is currently in progress as a development project inside streamlit to make it possible for..."
| 0 |
huggingface
|
Spaces
|
Build error when accessing space app
|
https://discuss.huggingface.co/t/build-error-when-accessing-space-app/14057
|
Hi,
There's a build error when I access my space app. It seems the error is caused by the streamlit installation. It was working fine before, but today this error popped up.
Language Detection Xlm Roberta Base - a Hugging Face Space by ivanlau
error log:
(screenshot of the error log)
Any thoughts on fixing this?
Thanks
|
I restarted it in “Settings” and it’s working now
| 0 |
huggingface
|
Spaces
|
On which OS are the Spaces running?
|
https://discuss.huggingface.co/t/on-which-os-are-the-spaces-running/13338
|
Hello
I am having some trouble installing dependencies: the Space can't resolve the correct versions from the requirements.txt, and I think this is due to missing wheels for certain OSes.
It’d be great to know on which OS the apps are run.
Thanks and kind regards!
|
Hi @edichief! @cbensimon might be able to help you a bit here, but let me provide some tips. You can click on "build error" to see all the logs, although in this case it might not be super useful. I'm not sure which exact OS is being used, but it's Linux-based.
On my local Ubuntu computer, doing pip install -r requirements.txt in a new environment with your requirements gives me exactly the same error.
Looking in indexes: https://pypi.org/simple, https://packagecloud.io/github/git-lfs/pypi/simple
Collecting spacy==3.1.4
Downloading spacy-3.1.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.1 MB)
|████████████████████████████████| 6.1 MB 4.4 MB/s
ERROR: Could not find a version that satisfies the requirement holmes-extractor==3.0.0 (from -r requirements.txt (line 2)) (from versions: 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.1.0, 2.2.0, 2.2.1)
ERROR: No matching distribution found for holmes-extractor==3.0.0 (from -r requirements.txt (line 2))
| 0 |
huggingface
|
Spaces
|
Build error : failed to create endpoint hungry_leavitt on network bridge: adding interface veth1291603 to bridge docker0 failed: exchange full
|
https://discuss.huggingface.co/t/build-error-failed-to-create-endpoint-hungry-leavitt-on-network-bridge-adding-interface-veth1291603-to-bridge-docker0-failed-exchange-full/13752
|
ohayo_face2 - a Hugging Face Space by Reeve
This is my space.
A build error occurred. But I don’t know how to solve this error.
ohayo_face - a Hugging Face Space by Reeve
And this is my other space,
There was no error in this space, but the error occurred as soon as I changed the readme.
commit/3f1b52788af3f3d06698c453f4567a96b955f733
I created a test space, wrote a very simple code, and built it.
Test - a Hugging Face Space by Reeve
However, the test space did not work either.
I don’t think this is my problem.
|
It may be a server-wide problem.
The same happens to mine, radiobee aligner - a Hugging Face Space by mikeee, which ran OK previously.
Logs say Successfully built 989912a9c4b6 Successfully tagged space-cpu-mikeee/radiobee-aligner-8dd688d44271d792c7b84f717e8e285aae89f07c:latest
Yet the app does not run – a Build error shows up.
| 0 |
huggingface
|
Spaces
|
Connection error when building
|
https://discuss.huggingface.co/t/connection-error-when-building/13779
|
Hello!
I tried updating our gradio implementation in our HF Space, to better handle the increased traffic we have had the last few days.
Traceback (most recent call last):
File "app.py", line 19, in <module>
tokenizer = AutoTokenizer.from_pretrained('gpt2')
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 464, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 330, in get_tokenizer_config
resolved_config_file = cached_path(
File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1491, in cached_path
output_path = get_from_cache(
File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1715, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Any ideas what we can do to fix this?
Thanks in advance,
Theodore.
|
Hi Theodore!
This is the same issue as the one discussed in Build error : failed to create endpoint hungry_leavitt on network bridge: adding interface veth1291603 to bridge docker0 failed: exchange full - #2 by mikeee. Looking into it!
| 0 |
huggingface
|
Spaces
|
Space running issue
|
https://discuss.huggingface.co/t/space-running-issue/13326
|
I tried many times and checked everything, but I am still getting this error. Please can someone help me fix it?
(screenshot of the error)
|
These errors are for GPU-related things. Your space is running under CPU, so that would explain the errors
| 0 |
huggingface
|
Spaces
|
Plot Graphs on Gradio Partially Cut off on X- Axis
|
https://discuss.huggingface.co/t/plot-graphs-on-gradio-partially-cut-off-on-x-axis/13503
|
Hi,
The boxplot on the output of my gradio interface cuts off just below the x-axis so you can’t see the titles.
(screenshot of the truncated boxplot)
See the code here - app.py · seyia92coding/Popular_Spotify_Albums at main 2
Has anyone had much luck with seaborn plot outputs with Gradio?
Thanks
|
Can you try plt.tight_layout() and let me know if it fixes things?
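In case it helps, a minimal sketch of where that call would go (the dataframe and column names here are made up, not taken from the actual app):
import matplotlib.pyplot as plt
import seaborn as sns

def plot_albums(df):
    fig, ax = plt.subplots(figsize=(8, 4))
    sns.boxplot(data=df, x="album", y="popularity", ax=ax)   # placeholder columns
    ax.tick_params(axis="x", rotation=45)
    plt.tight_layout()   # reserves room for the x-axis tick labels so they are not clipped
    return fig           # hand the figure back to the Gradio "plot" output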
| 0 |
huggingface
|
Spaces
|
How to install a specific version of gradio in Spaces?
|
https://discuss.huggingface.co/t/how-to-install-a-specific-version-of-gradio-in-spaces/13552
|
Hi,
The last version of gradio (2.7.0) comes with a display bug (launch this notebook, diff_texts.ipynb from gradio, at HighlightedText):
(screenshot of the display bug)
With version 2.6.4 of gradio, there was no display bug:
(screenshot without the bug)
However, when you create an App with gradio, Hugging Face Spaces says that the gradio package comes pre-installed at version 2.2.6. I think this text needs an update. (cc @sgugger)
(screenshot of the Spaces note about the pre-installed gradio version)
Furthermore, I did try to create a new Spaces App with gradio. In the requirements.txt file, I added gradio==2.6.4 but it did not work.
(screenshot of the resulting build error)
How can I install the gradio version of my choice for a Spaces App? Thank you.
Note: see this issue on the gradio GitHub about version 2.7.0 and HighlightedText.
|
Sorry about that! We’re fixing the bug but for now you can install a specific version of Gradio by adding lines like this at the top of your app.py file:
import os
os.system("pip uninstall -y gradio")
os.system("pip install gradio==2.6.4")
| 1 |
huggingface
|
Spaces
|
Numpy version mismatch in spaces
|
https://discuss.huggingface.co/t/numpy-version-mismatch-in-spaces/13375
|
Hi,
I am getting very weird behaviour in spaces.
The same code is working in colab but not in spaces due to numpy version mismatch.
Error:
(screenshot of the error)
Colab Link where code is working: Google Colab
Spaces Link
|
import os

os.system('pip uninstall -y numpy')  # force a clean numpy reinstall before gradio is imported
os.system('pip install numpy')
import gradio as grd
Using the above code in the exact order solved my problem.
| 1 |
huggingface
|
Spaces
|
Can I create a confidential app using Spaces?
|
https://discuss.huggingface.co/t/can-i-create-a-confidential-app-using-spaces/13258
|
Hello there!
I am intrigued by spaces but I am wondering if I can create an app that is only accessible to me (and not to Huggingface’s staff for instance) and to others with a password. Is this level of privacy possible?
Thanks!
|
You can create a private space. Private spaces are visible to only you (personal space) or members of your organization (organization space) as per HF.
Maybe they are visible to HF staff too, I am not sure about it.
If you don't want to create a space on HF, you can do it on gradio.app or streamlit.io
| 0 |
huggingface
|
Spaces
|
How to debug Spaces on hf.co
|
https://discuss.huggingface.co/t/how-to-debug-spaces-on-hf-co/13191
|
I have a model/app deployment that works perfectly on my local machine and errors out after 1-2 seconds upon pressing “submit” on the hosted Gradio app. Where can I check a log of what is going on? I have no clue… and the documentation for spaces seems incomplete, at least from what I found.
|
Hello @pszemraj
Can you send me the link to your Space?
If you’re using it with gradio try setting debug = True when launching (one thing I can think of).
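For reference, a minimal sketch of what that looks like (the interface itself is just a placeholder):
import gradio as gr

def greet(name):
    return f"Hello {name}!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch(debug=True)   # prints errors and stack traces to the Space logs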
| 0 |
huggingface
|
Spaces
|
Gradio fn function errors right after 60 seconds
|
https://discuss.huggingface.co/t/gradio-fn-function-errors-right-after-60-seconds/13048
|
Exactly 60 seconds after execution starts, the function I pass to gradio.Interface errors out, and in the web console I get a JSON parsing error. I set enable_queue to True but that didn't seem to change anything.
Here’s how I launch:
iface = gr.Interface(f, [
"text",
temperature,
top_p,
gr.inputs.Slider(
minimum=20, maximum=512, default=30, label="max length"),
gr.inputs.Dropdown(["GPT-J-6B", "GPT-2"], type="index", label="model"),
gr.inputs.Textbox(lines=1, placeholder="xxxxxxxx", label="space verification key")
], outputs="text", title=title, examples=examples)
iface.launch(enable_queue=True)
How can I prevent it from timing out after 60 seconds whenever the function takes longer?
(As soon as the timer reaches 60, it errors.)
|
You can see Is there a timeout (max runtime) for spaces? - #2 by Epoching or the Gradio docs. You need to set enable_queue to True for longer inference
enable_queue (bool) - if True, inference requests will be served through a queue instead of with parallel threads. Required for longer inference times (> 1min) to prevent timeout.
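As a sketch of where the flag can go (note the solution in the linked thread: on some Gradio versions it must be passed to Interface() rather than launch(); the generate function here is just a placeholder):
import gradio as gr

def generate(prompt):
    return prompt  # placeholder for the long-running inference

iface = gr.Interface(fn=generate, inputs="text", outputs="text", enable_queue=True)
iface.launch()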
| 1 |
huggingface
|
Spaces
|
Gradio iframe embedding
|
https://discuss.huggingface.co/t/gradio-iframe-embedding/13021
|
hi all,
hf spaces gradio iframe embedding appears useful, seems to work fine but does not seem to be documented well.
are there plans to expand the gradio documentation for hf spaces?
if yes, is there a place to make suggestions?
in good spirits,
luis
|
Hi @luisoala, one of the developers of Gradio here. Thanks for your question - the Gradio website has some documentation on hosting on Spaces (Gradio 1), but we can definitely improve it.
What improvements would you like to see?
| 0 |
huggingface
|
Spaces
|
Show python output?
|
https://discuss.huggingface.co/t/show-python-output/11545
|
Hey All, I wonder if there is any way to show python output
(i.e. logs or print('Using device:', device))
in Spaces.
Thanks in advance
|
Hi,
You can check the output by clicking on the “building” or “running” button next to the name of the Space.
| 0 |
huggingface
|
Spaces
|
Access a folder in my spaces thru app.py
|
https://discuss.huggingface.co/t/access-a-folder-in-my-spaces-thru-app-py/13051
|
hi … I'm not sure if this is considered a beginner's question or not, but
I'm trying to load a model into my app.py; the model folder exists in the Files section of my Space.
The folder containing the model is named "spacy.aravec.model". I tried loading it using:
nlp = spacy.load("spacy.aravec.model")
but got an error :
ValueError: Expected object or value
[screenshot of the full traceback]
|
Hi @Ralfouzan! It seems like you uploaded meta.json with Git LFS, so the call to read_json is likely reading the LFS pointer file instead of the actual JSON, which would cause this error.
Would you be able to delete that file, and re-upload it without using git lfs to see if that alleviates the issue?
| 0 |
huggingface
|
Spaces
|
How would I hide my files?
|
https://discuss.huggingface.co/t/how-would-i-hide-my-files/13035
|
I need to include my API key to use the inference API, but how do I do that without making my key publicly visible in my space’s files?
I don’t want to make it private.
|
You can add secrets to have access to variables such as API keys in your Space
If your app requires secret keys or tokens, don’t hard-code them inside your app! Instead, go to the Settings page of your Space repository and enter your secrets there. The secrets will be exposed to your app with Streamlit Secrets Management 1 if you use Streamlit, and as environment variables in other cases.
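For a non-Streamlit app, a minimal sketch of reading such a secret (the variable name HF_API_TOKEN is just an example; it has to match whatever name you entered in the Space's Settings):
import os
import requests

API_TOKEN = os.environ["HF_API_TOKEN"]  # exposed by Spaces as an environment variable
API_URL = "https://api-inference.huggingface.co/models/gpt2"  # example model endpoint

def query(text):
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    return response.json()

print(query("Hello"))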
| 1 |
huggingface
|
Spaces
|
Gradio save_state problem and introducing a basic chatbot
|
https://discuss.huggingface.co/t/gradio-save-state-problem-and-introducing-a-basic-chatbot/12197
|
Hello all,
I have uploaded a more basic version of my chatbot to https://huggingface.co/spaces/gorkemgoknar/moviechatbot 2 .
If you check out the app.py code, you can see how to use transformers model types that are not included in the standard interface (do not forget requirements.txt to import modules).
I also shared my model and usage information at https://huggingface.co/gorkemgoknar/gpt2chatbotenglish
A more feature-rich version (including movie and character selection) is at metayazar.com/chatbot, with a Turkish version at https://www.metayazar.com/chatbot_tr
While building the gradio app I noticed that save_state(history) / get_state() does not work in an embedded application. When the gradio app is full screen it works, but in HF Spaces the information is lost (likely because the Flask session is not stateful inside the embedded frame).
To overcome this I use a global history, but this likely means the model's history is shared between different users (not a problem for a demo, and it may even be fun). Still, for the record, there should be a way to save state if HF Spaces is to become a separate product.
|
Thanks for the detailed feedback @gorkemgoknar! I'm one of the Gradio developers and I'll share this feedback with my team so that we can fix it. (We are working on a better Chatbot output component altogether!)
| 0 |
huggingface
|
Spaces
|
Is there a timeout (max runtime) for spaces?
|
https://discuss.huggingface.co/t/is-there-a-timeout-max-runtime-for-spaces/12979
|
I've been trying to make a 3d photo inpainting project work on huggingface spaces. After finally getting PyQt5 working with a headless display (lots of fun debugging via subprocess calls from Python in app.py, since we don't have access to the shell 0.0), it turns out Spaces automatically times out at around 60 seconds?
The documentation said to use enable_queue=True in the iface.launch() call for gradio, i.e. iface.launch(enable_queue=True), but this doesn't seem to help. Does HF Spaces automatically drop requests that take longer than 60 seconds? (The gradio timer on the Spaces page counts up to 60, then prints an error afterwards.)
Checking the logs does show that the inference is able to finish running, but won’t be able to return anything back to the front-end via gradio since the connection gets cut.
Spaces currently set to private, but if any HF devs want to take a look & have access to private instances, here’s the link 3D_Photo_Inpainting - a Hugging Face Space by Classified
Error on request:
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/werkzeug/serving.py", line 319, in run_wsgi
execute(self.server.app)
File "/home/user/.local/lib/python3.8/site-packages/werkzeug/serving.py", line 311, in execute
write(data)
File "/home/user/.local/lib/python3.8/site-packages/werkzeug/serving.py", line 290, in write
self.wfile.write(data)
File "/usr/local/lib/python3.8/socketserver.py", line 826, in write
self._sock.sendall(b)
TimeoutError: [Errno 110] Connection timed out
|
Update 2 / Solution:
So it turns out using enable_queue=True in launch() doesn't work, e.g.:
iface = gr.Interface(fn=..., inputs=..., outputs=...)
iface.launch(enable_queue=True)
After looking at the source code, I saw that enable_queue is a deprecated argument in Inference()
Doing the following fixed the 60 second timeout for me:
iface = gr.Interface(fn=..., inputs=..., outputs=..., enable_queue=True)
iface.launch()
| 1 |
huggingface
|
Spaces
|
Github Actions Integration
|
https://discuss.huggingface.co/t/github-actions-integration/12625
|
I was wondering if there are any additional resources aside from the hf docs here for setting up an automatic pipeline between github and spaces? I keep running into a bug and I am not sure how to resolve it so I was hoping someone might be able to point me in the right direction.
I have been trying to sync my github repo and my hf space using github actions but unfortunately keep getting the following error:
Run git push *** huggingface.co/spaces/pleonova/multi-label-long-text main
remote: Invalid username or password.
fatal: Authentication failed for 'https://huggingface.co/spaces/pleonova/multi-label-long-text/'
Error: Process completed with exit code 128.
I added the action to my workflow. I have also tried removing pleonova from the URL, but that does not work either (same error as above). I added the HF token to my GitHub repo environment and enabled all actions in the Actions permissions.
Any advice would be greatly appreciated!
|
This was solved in (Spaces) Unable to sync with github actions · Issue #534 · huggingface/huggingface_hub · GitHub 8
| 0 |
huggingface
|
Spaces
|
Hosting a HF Space for Ultra-Large Language Models
|
https://discuss.huggingface.co/t/hosting-a-hf-space-for-ultra-large-language-models/11661
|
Hi all!
My collaborators and I would like to host a research project demo on Spaces. The challenge is that we operate on ultra-large language models such as GPT-2 XL (requires ~6GB VRAM) and GPT-J-6B (~24GB VRAM). Our code itself does not use much VRAM outside of loading the model (it basically makes some user-specified changes to the model and lets users generate text using the new model).
It seems like we can fit GPT-2 XL into our 16 GB allowance for T4s, but what about GPT-J-6B? How is this even hosted at EleutherAI/gpt-j-6B · Hugging Face 3?
Thanks for your insight
|
Discussed over email but let’s also try to paste relevant discussion items here in the future so that interesting discussions can be publicly available
| 0 |
huggingface
|
Spaces
|
Import error for cv2
|
https://discuss.huggingface.co/t/import-error-for-cv2/12404
|
I tried using import cv2 and various combinations of opencv and cv2 entries in requirements.txt as well as in app.py.
I get this ImportError:
File "/home/user/.local/lib/python3.8/site-packages/cv2/__init__.py", line 8, in <module>
from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
I've seen a couple of Spaces that have the same error, and have not been able to find a Space that successfully deploys a program with cv2. I would appreciate leads on how to address this ImportError, or some samples of working code.
Thank you!
Maria
|
Hey there!
You can create a packages.txt file and use python3-opencv there. I think this worked for me last time I checked. Let me know if that works please
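In other words, a packages.txt file at the root of the Space repo containing just the line below (Debian packages listed there are installed with apt at build time):
python3-opencv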
Thanks,
Omar
| 1 |
huggingface
|
Spaces
|
Widespread server errors on various Spaces? Site-wide?
|
https://discuss.huggingface.co/t/widespread-server-errors-on-various-spaces-site-wide/12580
|
I tried all the Spaces of the Week; they all give some form of error. So far today I've seen mostly 500s, but also a few others, including 504.
I’m guessing this is known already, but I haven’t seen an announcement here or on Twitter. Where would I go to check for the latest?
|
resolved, apparently.
| 0 |
huggingface
|
Spaces
|
Advanced search of Spaces
|
https://discuss.huggingface.co/t/advanced-search-of-spaces/12419
|
Hey,
I have no technical knowledge about all this stuff, just a “consumer”
So when I go to the Spaces tab (Spaces - Hugging Face 1) it seems like the list only goes back to Nov. 25. Is that the whole list of Spaces or does it cut off there? Sadly there seems to be no ability to search via tags (like video/photo/audio/etc). How can I discover more cool Spaces that were created before Nov. 25?
thanks in advance!
|
Agree. I hope there will be a search or filter widget above the Spaces list (the circled region in the screenshot below) so that we can easily search for what we want (such as whether a demo is built with streamlit or gradio, tags, etc.).
[annotated screenshot of the Spaces page]
Thanks
| 0 |
huggingface
|
Spaces
|
Uploading large files (>5GB) to HF Spaces
|
https://discuss.huggingface.co/t/uploading-large-files-5gb-to-hf-spaces/12001
|
Hi everyone,
I need to use an 8.5 GB faiss index for my HF Spaces app. Although I’ve successfully uploaded large files to datasets and model hub before, I am now getting an error when pushing to HF Spaces repo. I read all the issues and troubleshooting others had with git lfs, but this one I really couldn’t solve after trying for some time.
Is this the recommended approach for large faiss files (pushing with git lfs)? I am using a cloud server so upload speed shouldn’t be an issue. It seems like the first 5 GB are transferred ok, and then towards the end of the transfer, I notice “400 Bad request” error logs. Here are my full git logs.
TIA, Vladimir
|
I don't remember: did we implement lfs-largefiles for Spaces or not? @cbensimon @pierric
| 0 |
huggingface
|
Spaces
|
Copy to Clipboard doesn’t work in Spaces
|
https://discuss.huggingface.co/t/copy-to-clipboard-doesnt-work-in-spaces/12123
|
We are trying to use copy to clipboard functionality in HF Spaces, following is the code:
import streamlit as st
from bokeh.models.widgets import Button
from bokeh.models import CustomJS
from streamlit_bokeh_events import streamlit_bokeh_events
from datetime import datetime
current_time = datetime.now().strftime("%H:%M:%S")
text = f"## Current time: {current_time}"
st.write(text)
copy_button = Button(label="Copy Text")
copy_button.js_on_event("button_click", CustomJS(args=dict(text=text), code="""
navigator.clipboard.writeText(text);
"""))
no_event = streamlit_bokeh_events(
copy_button,
events="GET_TEXT",
key="get_text",
refresh_on_update=True,
override_height=75,
debounce_time=0)
It works well locally, but it doesn't work on Spaces. It seems copy to clipboard is blocked in an iframe (error message in the browser console: "Uncaught (in promise) DOMException: The Clipboard API has been blocked because of a permissions policy applied to the current document. See Deprecating Permissions in Cross-Origin Iframes - The Chromium Projects for more details."). Fixing it requires adding a specific allowance on the iframe, which I don't know how to do. Maybe someone has a solution?
|
maybe this discussion is relevant Feature-Policy: clipboard-read and clipboard-write · Issue #322 · w3c/webappsec-permissions-policy · GitHub 1
| 0 |
huggingface
|
Spaces
|
Can one allow Spaces-hosted apps to take up the entire screen?
|
https://discuss.huggingface.co/t/can-one-allow-spaces-hosted-apps-to-take-up-the-entire-screen/12121
|
This is a bit of a cheeky feature request maybe…
Is it possible/planned to allow streamlit apps hosted on Spaces to take up the entirety of the screen, i.e. removing the components at the top of the page (see what I show below)?
[screenshot of the Spaces page header]
Ideally, I would like a URL that allows accessing the app without the elements above. It'd be great to reduce "clutter" when demoing things (maybe at the price of paying for a premium plan, or of having a small HF logo à la "Made with Streamlit" / "Hosted on Hugging Face Spaces" at the bottom of the page).
Thanks for the great work with Spaces!
|
@cbensimon @victor for info
| 0 |
huggingface
|
Spaces
|
Any way to install transformers in a PR branch?
|
https://discuss.huggingface.co/t/any-way-to-install-transformers-in-a-pr-branch/10999
|
Currently, my PR on FlaxVisionEncoderDecoderModel is not merged to transformers master yet. But I can’t wait to update my poor-french-image-captioning Spaces demo app.
So I need to
git clone https://github.com/ydshieh/transformers.git
cd transformers
git checkout flax_vision_encoder_decoder
pip install --upgrade .[dev]
Is this doable in Spaces currently? I can probably upload a copy of transformers to avoid git part, but still need pip install.
Otherwise, I think I can still figure out a hard way to make the demo work.
|
I saw that we can specify the branch/fork in the requirements.txt, I will give it a try.
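For reference, such a requirements.txt line would look something like this (using the fork and branch from the post above):
git+https://github.com/ydshieh/transformers.git@flax_vision_encoder_decoder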
| 0 |
huggingface
|
Model cards
|
What stack is used to create chat boxes in conversational model cards?
|
https://discuss.huggingface.co/t/what-stack-is-used-to-create-chat-boxes-in-conversational-model-cards/12401
|
Hello!
I was wondering if the frontend components used in conversational model cards on the HF website are open-sourced or not.
Take, e.g., the microsoft/DialoGPT-small · Hugging Face page (shown below): it looks great! It'd be nice to reuse it to show chatbot prototypes, as it doesn't include distracting elements (e.g. the name of the sender, timestamps, sent/delivered/read marks, etc.).
[screenshot of the conversational widget on the model page]
Streamlit doesn't have a dialog component AFAIK, and I was wondering if the underlying code could be adapted to create custom components à la Create a Component - Streamlit Docs, e.g. to power chatbots on HF Spaces. I have seen bot implementations in Streamlit, but the ones I saw are way less polished than this.
Not being a frontend developer myself, I’d have a hard time creating a chat box component from scratch, so it’d be cool to reuse yours somehow!
Thanks!
|
Hi,
Yes the code for the widgets is open-source (and part of the huggingface_hub library), you can view the code of the conversational widget here: huggingface_hub/ConversationalWidget.svelte at main · huggingface/huggingface_hub · GitHub 6
| 1 |
huggingface
|
Model cards
|
Missing dataset card for id_personachat
|
https://discuss.huggingface.co/t/missing-dataset-card-for-id-personachat/11495
|
Hi @cahya, can you please add a dataset card with an explanation for id_personachat?
|
Ok, I'll have a look at it. Are you working on a chatbot project, btw?
| 0 |
huggingface
|
Model cards
|
[Announcement] Model cards metadata automatic cleaning on the hub
|
https://discuss.huggingface.co/t/announcement-model-cards-metadata-automatic-cleaning-on-the-hub/10267
|
Hi everyone,
The Hugging Face team has started an automatic model metadata correction project!
This comes just after deploying validation of model card metadata changes on the git server side.
We are starting with licenses and invalid "null" values, and the licenses sidebar is much better organized now: Models - Hugging Face
The model repositories concerned show a commit authored by me ("elishowk") with the message "Automatic correction".
Now every license on the hub fits into the documented list.
Coming soon: automatic correction for languages and model-index.
Feel free to ask any questions.
Regards
|
That’s awesome!
| 0 |
huggingface
|
Model cards
|
Wrong paper link on model card
|
https://discuss.huggingface.co/t/wrong-paper-link-on-model-card/9953
|
Not sure where to report this…
For this model: google/byt5-large · Hugging Face 1
The paper link is not correct. Instead it should be this: [2105.13626] ByT5: Towards a token-free future with pre-trained byte-to-byte models
|
mrdrozdov:
google/byt5-large · Hugging Face
Thanks, I’ve fixed the paper link for all byT5 models on the hub.
| 0 |
huggingface
|
Model cards
|
Model card for microsoft/xprophetnet-large-wiki100-cased-xglue-qg is incorrect
|
https://discuss.huggingface.co/t/model-card-for-microsoft-xprophetnet-large-wiki100-cased-xglue-qg-is-inccorect/9230
|
Hi,
the model card for microsoft/xprophetnet-large-wiki100-cased-xglue-qg seems to be incorrect (I think it’s the one from the xglue-ntg model).
Just try out the example – it generates 'PAD' tokens, and that's it. It's also unclear how to use it for question generation.
I'm not sure who the creators are, so I can't tag them here.
Best,
Niklas
|
Maybe @patrickvonplaten could help here since he reviewed the original PR (I don’t think Weizhen - the creator - is in the forum).
| 0 |