docs | category | thread | href | question | context | marked
---|---|---|---|---|---|---
huggingface
|
🤗Tokenizers
|
Issue with tokenizer.tokenize
|
https://discuss.huggingface.co/t/issue-with-tokenizer-tokenize/1891
|
When I test tokenizer.tokenize('how do you do') (RobertaTokenizer in pytorch_transformers.tokenization_roberta.py), it returns ['how', 'Ġ', 'do', 'Ġ', 'you', 'Ġ', 'do']. I want to know where it goes wrong.
|
There is some discussion in this thread and this one. Perhaps it helps?
| 0 |
huggingface
|
🤗Tokenizers
|
Where to find the “wiki-big.train.raw” data as mentioned in the snippet for tokenizers 0.9?
|
https://discuss.huggingface.co/t/where-to-find-the-wiki-big-train-raw-data-as-mentioned-in-the-snippet-for-tokenizers-0-9/1796
|
I came across this short snippet of code on LinkedIn by HuggingFace, introducing tokenizers 0.9.
LinkedIn URL: snippet for tokenizers 0.9
How do I get the following dataset to run the code snippet? Is it available on huggingface.datasets?
files = ["../../data/wiki-big.train.raw"]
|
This dataset can probably get you started. This gist by @thomwolf may also prove useful.
| 0 |
huggingface
|
🤗Tokenizers
|
Loading pretrained SentencePiece tokenizer from Fairseq
|
https://discuss.huggingface.co/t/loading-pretrained-sentencepiece-tokenizer-from-fairseq/1326
|
Hello. I have a pretrained RoBERTa model on fairseq, which contains dict.txt, model.pt, sentencepiece.bpe.model.
I have found a way to convert a fairseq checkpoint to huggingface format in https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py 36
However, I couldn't find a similar method to convert the fairseq tokenizer in sentencepiece.bpe.model to Hugging Face's format.
Is there any existing solution? Or do I have to convert it by myself?
Thanks.
|
@proxyht, were you able to convert the SentencePiece model to a Hugging Face tokenizer? I am facing similar issues as well.
| 0 |
huggingface
|
🤗Tokenizers
|
Speed up Longformer Tokenizer
|
https://discuss.huggingface.co/t/speed-up-longformer-tokenizer/1598
|
Hello,
My current Longformer model takes 2.5 hours to classify 80k documents. The majority of time is consumed by the tokenizer. Is there any way to speed up the tokenizer?
|
Which environment are you using, GPU or CPU?
I tried transformers too; maybe you need some data structure to compress the input data. I am looking for such a data structure (e.g., a dictionary) myself, so any update from you would be appreciated.
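A common first step (not mentioned explicitly in this thread, so treat it as a hedged suggestion) is to use the Rust-backed fast tokenizer and encode the documents in batches rather than one by one; a minimal sketch, assuming the allenai/longformer-base-4096 checkpoint:
from transformers import LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
documents = ["first document ...", "second document ..."]  # placeholder texts
# Passing a list lets the Rust backend tokenize the whole batch in parallel
encodings = tokenizer(documents, truncation=True, padding=True, max_length=4096, return_tensors="pt")
print(encodings["input_ids"].shape)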
| 0 |
huggingface
|
🤗Tokenizers
|
What does `tokenizers.normalizer.normalize` do?
|
https://discuss.huggingface.co/t/what-does-tokenizers-normalizer-normalize-do/1463
|
Hey all! Loving the updated tokenizer docs and playing around with normalizers at the moment. I’d like to update my article here 4 about text preprocessing and using Datasets but I had a quick question:
I know .normalize_str works like so:
normalizer.normalize_str("Héllò hôw are ü?")
# "Hello how are u?"
But normalizer.normalize doesn’t seem to be documented? Is this something that maybe I should be using, or is more for internal use?
Just wondering if normalizer.normalize_str is the most efficient way to use a normalizer with datasets.map or if normalizer.normalize can do some magic? Is there a way to use batched=True within datasets.map to make things even faster?
Or if I added normalizer to a pretrained tokenizer and then call the tokenizer with datasets, will that also carry out the normalization before doing the tokenization?
|
The Normalizer.normalize function is for our internal use in the pipeline: it doesn't take a plain string but a string together with its offsets, as it adds some functionality to keep track of the original offsets with respect to the original text and works in place (so you can combine several normalizers easily).
I don’t think normalize_str is less efficient. Also don’t think you can make things faster with batched=True here as it will just iterate on the elements of a batch.
Pinging @anthony and @Narsil that may have more insight
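For reference, a minimal sketch of the normalize_str + datasets.map approach discussed above (the dataset and column name are made up for illustration):
from datasets import Dataset
from tokenizers.normalizers import NFD, StripAccents, Sequence

normalizer = Sequence([NFD(), StripAccents()])
ds = Dataset.from_dict({"text": ["Héllò hôw are ü?"]})

def normalize_batch(batch):
    batch["text"] = [normalizer.normalize_str(t) for t in batch["text"]]
    return batch

# batched=True just hands normalize_batch a list of strings at a time
ds = ds.map(normalize_batch, batched=True)
print(ds[0]["text"])  # "Hello how are u?"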
| 0 |
huggingface
|
🤗Tokenizers
|
How to truncate from the head in AutoTokenizer?
|
https://discuss.huggingface.co/t/how-to-truncate-from-the-head-in-autotokenizer/676
|
When we tokenize the input like this, if the number of text tokens exceeds the set max_length, the tokenizer will truncate from the tail end to limit the number of tokens to max_length.
tokenizer = AutoTokenizer.from_pretrained('MODEL_PATH')
inputs = tokenizer(text, max_length=max_length, truncation=True,
padding=True, return_tensors='pt')
Is there a way to change the behavior and truncate from the head end?
For example if text = ['cat', 'dog', 'human'] and max_length=2, currently the last word 'human' will be dropped, is it possible to add a truncation_end parameter to AutoTokenizer, and when truncation_end='head' drops 'cat' from the head of the sentence?
|
I'm not sure if there is a built-in way to do this, but are you able to reverse the list before calling the tokenizer? I.e., change text to ['human', 'dog', 'cat'], then call the tokenizer so 'cat' is dropped from the tail.
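As a side note (not part of the original reply, and assuming a recent transformers release), the tokenizer also exposes a truncation_side attribute that truncates from the head directly:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.truncation_side = "left"  # drop tokens from the start instead of the end
inputs = tokenizer("cat dog human", max_length=4, truncation=True)
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"]))
# e.g. ['[CLS]', 'dog', 'human', '[SEP]']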
| 0 |
huggingface
|
🤗Tokenizers
|
How much memory is needed for training ByteLevelBPETokenizer?
|
https://discuss.huggingface.co/t/how-much-memory-is-needed-for-training-bytelevelbpetokenizer/1165
|
Hi, I’m trying to train LM for Japanese from scratch.
To be honest, I copied almost everything from https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb 12.
I just changed datasets from Esperanto datasets to Japanese wiki datasets.
But when I tried to train the tokenizer, my notebook crashed and restarted, probably because it ran out of memory. My dataset is the entire Wikipedia text, which is 5.1 GB, but my server has 64 GB of memory.
How much memory do I need to train a tokenizer from scratch?
Or can I prevent the out-of-memory error with some options?
Thanks in advance.
|
Pinging @anthony
| 0 |
huggingface
|
🤗Tokenizers
|
How to make tokenizer convert subword token to an independent token?
|
https://discuss.huggingface.co/t/how-to-make-tokenizer-convert-subword-token-to-an-independent-token/1015
|
Recently, I have been using bert-base-multilingual-cased for my work in bengali. When I feed in a sentence like “আজকে হবে না” to BertTokenizer, I get the following output.
Sentence: আজকে হবে না
Tokens: ['আ', '##জ', '##কে', 'হবে', 'না']
To Int: [938, 24383, 18243, 53761, 26109]
But when I feed in a sentence like "আজকে হবেনা", with "না" not preceded by a space, I see the token "##না", with the corresponding index also changed.
Sentence: আজকে হবেনা
Tokens: ['আ', '##জ', '##কে', 'হবে', '##না']
To Int: [938, 24383, 18243, 53761, 20979]
Now, I was hoping there is a way to let the tokenizer know that whenever it finds something like '##না', it should convert it to 'না' in all such cases.
|
It might be easier to replace the না in the sentence with " না" (a space followed by না) before you tokenize.
Is it just the ##না that is a problem, or do you want to get rid of all the ## continuation tokens?
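A hedged sketch of the two options mentioned above (pre-inserting a space before tokenizing, or stripping the continuation prefix afterwards):
sentence = "আজকে হবেনা"
sentence = sentence.replace("না", " না")  # make না a standalone word before tokenizing

tokens = ['আ', '##জ', '##কে', 'হবে', '##না']
tokens = [t[2:] if t.startswith("##") else t for t in tokens]  # strip all ## prefixes
# ['আ', 'জ', 'কে', 'হবে', 'না']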
| 0 |
huggingface
|
🤗Tokenizers
|
Using a pretrained tokenizer vs training a one from scratch
|
https://discuss.huggingface.co/t/using-a-pretrained-tokenizer-vs-training-a-one-from-scratch/783
|
Hi
For domain-specific data, let's say medical drug data with complicated chemical compound names, would it be beneficial to train a tokenizer on the text if the size were nearly 18 M entries? In the BioBERT paper, they used the pre-trained BERT WordPiece vocabulary for the following reasons:
compatibility of BioBERT with BERT, which allows BERT pre-trained on general domain corpora to be re-used, and makes it easier to interchangeably use existing models based on BERT and BioBERT
any new words may still be represented and fine-tuned for the biomedical domain using the original WordPiece vocabulary of BERT.
|
How many different chemical compound names are there in the 18 M entries?
Having lots of data is good, but I don’t think training a tokenizer would help you unless the words you are interested in are frequent enough to be selected for the tokenizer’s vocabulary. I’m not sure how the tokenizer chooses its vocabulary, but word-frequency must be important. I’m guessing that “medical drug data” would still include lots of normal-English words, many of which would be more frequent than the chemicals.
[I am not an expert].
| 0 |
huggingface
|
🤗Tokenizers
|
Masking Probability
|
https://discuss.huggingface.co/t/masking-probability/746
|
Hi
I am wondering whether the masking of tokens [MASK] is done by applying the masking probability to a given sequence or the whole batch altogether.
|
In DataCollatorForLanguageModeling the masking is done on the batch tensor directly.
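A minimal sketch of that collator (model name and texts are placeholders): each non-special token in the padded batch tensor is selected for masking independently with probability mlm_probability.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

features = [tokenizer("hello world"), tokenizer("masking is applied per token")]
batch = collator(features)  # pads, then masks ~15% of token positions in the tensor
print(batch["input_ids"].shape, batch["labels"].shape)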
| 0 |
huggingface
|
🤗Tokenizers
|
Add new tokens for subwords
|
https://discuss.huggingface.co/t/add-new-tokens-for-subwords/489
|
Hey, I am trying to add subword tokens to bert base uncased as follows:
num = tokenizer_bert.add_tokens('##committed')
tokenizer_bert.tokenize('##committed')
['##committed']
tokenizer_bert.tokenize('hellocommitted')
['hello', '##com', '##mit', '##ted']
It seems like the tokenizer is literally adding the hashtags, when I would want to create a new subword called ##committed. I am doing this to deal with hashtags and am thinking of initializing those new subwords to their original words.
Any solutions / better ways to deal with hashtags, would be really appreciated!
Thanks –
|
I'm not sure if adding subwords directly is possible; you can try to add them as special tokens instead, so a single id will be created for them instead of splitting. Pinging @anthony for more details.
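A hedged sketch of the special-token workaround (note the token will not behave like a real '##' continuation piece, and the model's embeddings must be resized):
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

tokenizer.add_special_tokens({"additional_special_tokens": ["##committed"]})
model.resize_token_embeddings(len(tokenizer))  # make room for the new id

print(tokenizer.tokenize("this is ##committed"))  # e.g. ['this', 'is', '##committed']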
| 0 |
huggingface
|
🤗Tokenizers
|
Token alignment for word-level tasks
|
https://discuss.huggingface.co/t/token-alignment-for-word-level-tasks/577
|
Hey guys,
I want to make a POS tagger on top of BERT. The dataset is in conll-u format [1], input sentences are already tokenized; input word tokens are mapped to labels (POS etc ). Therefore, I have to take special care of input/output alignment as BERT will add additional tokens during tokenization, similar to what is described in the original BERT repo [2].
So I took the alignment algorithm outlined in [2]: I tokenized each input word and mapped/expanded the labels for each BERT-tokenized word. I then needed to add padding to each sentence, attach the attention_mask, convert to tensors, etc. Looking at the Tokenizer API, I didn't see how to do this easily if I already have sentences and labels converted to token ids in the first step.
This whole exercise left me wondering if there is a simpler and less verbose approach to these word-level tasks using Tokenizer API?
Cheers,
Vladimir
[1] https://universaldependencies.org/format#conll-u-format 4
[2] https://github.com/google-research/bert#tokenization 13
|
I found an answer in utils_ner.py. See the function convert_examples_to_features; it does what I was doing, except in a more general, model-agnostic way.
Cheers,
Vladimir
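For later readers, a different (and with fast tokenizers, simpler) approach is to align labels via word_ids(); a hedged sketch with made-up words and labels:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
words = ["HuggingFace", "is", "in", "NYC"]
labels = ["B-ORG", "O", "O", "B-LOC"]

encoding = tokenizer(words, is_split_into_words=True, truncation=True)
# word_ids() maps every subword (and None for special tokens) back to its source word
aligned = ["IGN" if i is None else labels[i] for i in encoding.word_ids()]
print(list(zip(encoding.tokens(), aligned)))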
| 0 |
huggingface
|
🤗Tokenizers
|
Use a pretrained ByteLevelBPETokenizer on text
|
https://discuss.huggingface.co/t/use-a-pretrained-bytelevelbpetokenizer-on-text/348
|
Hi
I am asking whether there's a simple way to tokenize a piece of text like "I will go to the bedroom" into subwords such as "I will go to the bed ##room" without training a tokenizer from scratch.
|
Hi @abdallah197!
There are a bunch of pre-trained tokenizers in the huggingface/transformers library that you can use directly, without having to train anything. You won't have any control over how the tokens are split though, as this is based on what the tokenizer learned during training and the size of its vocabulary. bedroom isn't really a rare word, so it will often have its own token in the vocabulary.
Your example looks like WordPiece (instead of BPE), given the ## in ##room, which is very specific to this kind of tokenizer. You can try the BERT tokenizers in the library to see if anything fits your needs, for example:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.tokenize("I will go to the bedroom")
# ['I', 'will', 'go', 'to', 'the', 'bedroom']
tokenizer.tokenize("I will go to the Bedroom")
# ['I', 'will', 'go', 'to', 'the', 'Bed', '##room']
| 0 |
huggingface
|
🤗Accelerate
|
Multiple GPUs do not speed up the training
|
https://discuss.huggingface.co/t/multiple-gpus-do-not-speed-up-the-training/11165
|
I am trying to train the bert-base-uncased model on an Nvidia 3080. However, the strange thing is that the time spent on one step grows linearly with the number of GPUs. For example, when I keep the same batch size, if one step needs 2 s/iter on a single GPU, two GPUs need around 4 s/iter. Although I know some time may be spent on synchronization, I don't think it accounts for that much. As a result, the total time using multiple GPUs is similar to a single GPU, which looks like the GPUs run one by one. I directly ran the sample code provided in this link and the problem still occurs. BTW, I have run transformers.Trainer using multiple GPUs on this machine, and the distributed training works.
The CUDA version shown by nvidia-smi is 11.4 and the environment is:
transformers version: 4.11.3
Platform: Linux-5.11.0-38-generic-x86_64-with-debian-bullseye-sid
Python version: 3.7.6
PyTorch version (GPU?): 1.9.0+cu111 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
The accelerate config is:
In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2
How many different machines will you use (use more than 1 for multi-node training)? [1]: 1
Do you want to use DeepSpeed? [yes/NO]: no
How many processes in total will you use? [1]: 4
Do you wish to use FP16 (mixed precision)? [yes/NO]: no
The relevant outputs are:
Note that --use_env is set by default in torchrun.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
10/28/2021 16:10:50 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 4
Process index: 0
Local process index: 0
Device: cuda:0
Use FP16 precision: False
10/28/2021 16:10:50 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 4
Process index: 3
Local process index: 3
Device: cuda:3
Use FP16 precision: False
10/28/2021 16:10:50 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 4
Process index: 2
Local process index: 2
Device: cuda:2
Use FP16 precision: False
10/28/2021 16:10:50 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 4
Process index: 1
Local process index: 1
Device: cuda:1
Use FP16 precision: False
.........
# and in the training loop, these four lines occur:
10/28/2021 16:11:45 - INFO - root - Reducer buckets have been rebuilt in this iteration.
10/28/2021 16:11:45 - INFO - root - Reducer buckets have been rebuilt in this iteration.
10/28/2021 16:11:45 - INFO - root - Reducer buckets have been rebuilt in this iteration.
10/28/2021 16:11:45 - INFO - root - Reducer buckets have been rebuilt in this iteration.
|
@ezio98 did you ever figure this out?
| 0 |
huggingface
|
🤗Accelerate
|
TypeError: Can’t apply _send_to_device on object of type <class ‘str’>, only of nested list/tuple/dicts of objects that satisfy _has_to_method
|
https://discuss.huggingface.co/t/typeerror-cant-apply-send-to-device-on-object-of-type-class-str-only-of-nested-list-tuple-dicts-of-objects-that-satisfy-has-to-method/13491
|
I am trying to do multi-task learning with transformers with my own custom training loop
and accelerate.
I have prepared my model, data, and optimizer like this, but when I try to loop over my batches I get this error. I have also checked that the batch's type is still dict.
Is there any way to solve this?
accelerator = Accelerator()
device = accelerator.device
model_1,model_2, optimizer_1, optimizer_2, train_dl, dev_dl = accelerator.prepare(model_1,model_2, optimizer_1, optimizer_2, train_dl, dev_dl)
--> 1 for batch in train_dl:
2 print(batch)
7 frames
/usr/local/lib/python3.7/dist-packages/accelerate/data_loader.py in __iter__(self)
304 if state.distributed_type == DistributedType.TPU:
305 xm.mark_step()
--> 306 yield batch if self.device is None else send_to_device(batch, self.device)
307
308
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in send_to_device(tensor, device)
202 return hasattr(t, "to")
203
--> 204 return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method, error_on_other_type=True)
205
206
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)
169 func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
170 )
--> 171 for k, v in data.items()
172 }
173 )
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in <dictcomp>(.0)
169 func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
170 )
--> 171 for k, v in data.items()
172 }
173 )
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)
160 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
161 )
--> 162 for o in data
163 ),
164 )
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in honor_type(obj, generator)
119 # Can instantiate a namedtuple from a generator directly, contrary to a tuple/list.
120 return type(obj)(*list(generator))
--> 121 return type(obj)(generator)
122
123
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in <genexpr>(.0)
160 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
161 )
--> 162 for o in data
163 ),
164 )
/usr/local/lib/python3.7/dist-packages/accelerate/utils.py in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)
176 elif error_on_other_type:
177 raise TypeError(
--> 178 f"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects "
179 f"that satisfy {test_type.__name__}."
180 )
TypeError: Can't apply _send_to_device on object of type <class 'str'>, only of nested list/tuple/dicts of objects that satisfy _has_to_method.
|
This has been fixed on master, so you should just do a source install
| 1 |
huggingface
|
🤗Accelerate
|
Can accelerator handle the distributed sampler?
|
https://discuss.huggingface.co/t/can-accelerator-handle-the-distributed-sampler/12943
|
As far as I know, for PyTorch, RandomSampler cannot be directly used in distributed data parallel training since DistributedSampler is required (this link discusses the problem). I am wondering whether accelerator.prepare(dataloader) handles the data split across multiple GPUs if I use the RandomSampler, so that the sub-datasets on each device are mutually exclusive.
|
You don’t have to worry about using a distributed sampler with Accelerate. Whatever your sampler is, Accelerate will automatically shard it for all processes.
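A minimal sketch of that point (with placeholder data): keep a plain RandomSampler and let prepare() handle the sharding.
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.randn(128, 8))  # placeholder data
dataloader = DataLoader(dataset, sampler=RandomSampler(dataset), batch_size=16)

dataloader = accelerator.prepare(dataloader)  # shards batches across processes
for (batch,) in dataloader:
    pass  # each process sees its own exclusive subset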
| 0 |
huggingface
|
🤗Accelerate
|
How to generate output from a model trained with accelerate?
|
https://discuss.huggingface.co/t/how-to-generate-output-from-a-model-trained-with-accelerate/12738
|
I've trained an mT5 model with Accelerate, but it only saved the model config file and the pytorch model file.
How can I generate output from my model without the tokenizer files?
When I try to load the model it gives me the following error.
OSError: Can't load tokenizer for 'saved'. Make sure that:
'saved' is a correct model identifier listed on 'Models - Hugging Face'
(make sure 'saved' is not a path to a local directory with something else, in that case)
or 'saved' is the correct path to a directory containing relevant tokenizer files
|
It looks like you are trying to load the tokenizer for that model, so you need to save it in the same directory (hard to be sure since you are not sharing your code!)
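A hedged sketch of the fix (model name and output path are placeholders): save the tokenizer next to the unwrapped model so both can be loaded from the same directory.
from accelerate import Accelerator
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

accelerator = Accelerator()
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = accelerator.prepare(AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small"))

# ... training loop ...

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained("saved", save_function=accelerator.save)
if accelerator.is_main_process:
    tokenizer.save_pretrained("saved")  # without this, loading the tokenizer from "saved" fails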
| 1 |
huggingface
|
🤗Accelerate
|
Simple NLP Example not working
|
https://discuss.huggingface.co/t/simple-nlp-example-not-working/9913
|
Hi,
I’m looking for an example to learn how to use TPUs on Colab running PyTorch.
I was glad to find the Simple NLP Example, which unfortunately is not working.
Running it without modifications leads to the following error message in the last cell:
from accelerate import notebook_launcher
notebook_launcher(training_function)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-50-a91f3c0bb4fd> in <module>()
1 from accelerate import notebook_launcher
2
----> 3 notebook_launcher(training_function)
1 frames
/usr/local/lib/python3.7/dist-packages/torch_xla/__init__.py in <module>()
99 from ._patched_functions import _apply_patches
100 from .version import __version__
--> 101 import _XLAC
102
103
ImportError: /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl20is_contiguous_customENS_12MemoryFormatE
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
I found a workaround description here 1 which says:
... downgrading PyTorch to torch-1.8.2+cpu,
but that leads to another error message
ProcessExitedException: process 0 terminated with signal SIGSEGV
What is necessary to run that example?
Do you know any other example that meets my requirements (Colab, TPUs, PyTorch) and runs?
Thanks for any comment
|
I'm guessing this is due to a version mismatch between PyTorch XLA and PyTorch (PyTorch XLA is installed with a version for PyTorch 1.9 and Colab now uses PyTorch 1.10). I've asked for an updated link to install the proper version of PyTorch XLA, but in the meantime, you can solve the issue by downgrading PyTorch to 1.9.1 in the Colab you are running.
| 0 |
huggingface
|
🤗Accelerate
|
Problems with hanging process at the end when using dataloaders on each process
|
https://discuss.huggingface.co/t/problems-with-hanging-process-at-the-end-when-using-dataloaders-on-each-process/11955
|
I am trying to get accelerate working on a video task and I am running into problems with processes getting stuck.
Here’s a brief summary of my problem: I have multiple directories containing multiple (up to a thousand) image frames. Because loading all images for a batch of videos at once is not possible due to memory constraints, I am trying to iteratively encode a batch of videos using a resnet and feed the cached embeddings to a sequence model. I want to fine-tune the encoder as well and for that reason precomputing the embeddings is not possible.
My thinking goes like this:
Get a list of paths to all video directories.
Distribute subsets of the paths evenly among all available GPUs.
Within each GPU we then sequentially loop over the subset of paths and:
3.1 For each path to a video directory, create a dataset and a dataloader
3.2 and iteratively encode batches of this loader with a partially frozen resnet and store results in a cache
3.3 Finally, we aggregate the caches for a given batch, pad all image sequences to same length and feed the resulting batch to a sequence model.
Here’s the code that I use:
import torch
from torchvision.models import resnet18
from accelerate import Accelerator
from torch.utils.data import Dataset, DataLoader
from pathlib import Path
from torchvision import transforms
from tqdm import tqdm
from PIL import Image
def chunker(seq, size):
"""chunk given sequence into batches of given size"""
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
class ImageDataset(Dataset):
def __init__(self, img_dir):
super().__init__()
self.img_paths = list(img_dir.rglob('*.jpg'))
self.transform = transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(
(0.485, 0.456, 0.406),
(0.229, 0.224, 0.225)),
])
def __getitem__(self, idx):
path = self.img_paths[idx]
img = self.transform(Image.open(path))
return img
def __len__(self):
return len(self.img_paths)
def main():
torch.multiprocessing.set_sharing_strategy('file_system') # resolves too many files open error
accelerator = Accelerator(fp16=True, device_placement=False)
# partially freeze network
model = resnet18(pretrained=False)
for param in model.parameters():
param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 256)
model.to(accelerator.device)
model = accelerator.prepare_model(model)
# 1. Get a list of paths to all image directories.
data_path = Path('/path/to/data_root/')
img_dirs = list(data_path.glob('*'))
# 2. distribute subsets of the paths evenly among all available GPUs
n_dirs = len(img_dirs)
split_size = n_dirs // accelerator.num_processes
img_dirs = img_dirs[accelerator.process_index*split_size : (accelerator.process_index+1)*split_size]
# just use single image bag for testing, in practise we would loop over these items
img_dir_batch = list(chunker(img_dirs, 1))[0]
states = [] # container to collect outputs
for img_dir in img_dir_batch:
# 3.1 create dataset and loader for current video
ds = ImageDataset(img_dir)
dl = DataLoader(ds, batch_size=16, num_workers=1)
# 3.2 iteratively encode and cache frames
outs = []
progress = tqdm(dl, disable=not accelerator.is_local_main_process)
for img in progress:
torch.cuda.empty_cache() # free memory
out = model(img.to(accelerator.device))
outs.append(out)
outs = torch.cat(outs,dim=0)
states.append(outs)
# 3.3 aggregate batch containing multiple videos
# here we would then zero pad `states` into a single batch such that sequences have same length
# next we feed the batch into another model
if __name__=="__main__":
main()
print('done.')
This seems to work and the first (out of two) processes finishes fine (i.e. reaching the print statement at the end). The second process however completes the encoding stage and then just hangs indefinitely at the end.
I am using a single machine with 2 GPUs.
Thank you all, any help is appreciated
|
It seems to work only when I wrap the forward call in with model.no_sync() and set the number of workers to 0. This makes the loading very slow however. And when I try to use a config with Deepspeed it throws an error that the model object has no attribute no_sync.
| 0 |
huggingface
|
🤗Accelerate
|
Main code executed twice per process. Normal behaviour?
|
https://discuss.huggingface.co/t/main-code-executed-twice-per-process-normal-behaviour/11863
|
Hello everyone. I am just getting started with accelerate and distributed training in general. To test the number of GPUs used I created a simple script containing a simple main function:
def main():
deepspeed_plugin = DeepSpeedPlugin(zero_stage=3, gradient_accumulation_steps=1)
accelerator = Accelerator(fp16=True, deepspeed_plugin=deepspeed_plugin)
print(f'Num Processes: {accelerator.num_processes}; Device: {accelerator.device}; Process Index: {accelerator.process_index}')
I launch this with CUDA_VISIBLE_DEVICES=0,1 accelerate launch --config_file accelerate_config.yaml accelerate_test.py. I get the following output:
Num Processes: 2; Device: cuda:0; Process Index: 0
Num Processes: 2; Device: cuda:0; Process Index: 0
Num Processes: 2; Device: cuda:1; Process Index: 1
Num Processes: 2; Device: cuda:1; Process Index: 1
It seems like main is executed twice for each process. My question is whether this is expected behaviour?
accelerate_config.yaml contains:
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
offload_optimizer_device: cpu
zero_stage: 3
distributed_type: DEEPSPEED
fp16: false
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
num_machines: 1
num_processes: 2
Thank you.
|
Ah I rechecked and figured it out. I made a stupid formatting error and main was accidentally called twice in my script. Everything works as expected. Sorry for the inconveniences.
| 1 |
huggingface
|
🤗Accelerate
|
Accelerate Multi-GPU on several Nodes How to
|
https://discuss.huggingface.co/t/accelerate-multi-gpu-on-several-nodes-how-to/10736
|
Hi,
I wonder how to setup Accelerate or possibly train a model if I have 2 physical machines sitting in the same network. Each machine has 4 GPUs.
Can I use Accelerate + DeepSpeed to train a model with this configuration ?
I can't seem to find any writeups or examples of how to perform the accelerate config step.
Thanks.
|
I’m not sure what documentation you need, just type accelerate config in the terminal of both machines and follow the prompts.
| 0 |
huggingface
|
🤗Accelerate
|
How to enable BF16 on tpus?
|
https://discuss.huggingface.co/t/how-to-enable-bf16-on-tpus/10542
|
With notebook_launcher(main, use_fp16=True), my data is still fp32.
|
Yes, that would be expected; this does not control bfloat16. Support for bfloat16 on TPUs is not in Accelerate yet.
| 0 |
huggingface
|
🤗Accelerate
|
With accelerate and colab tpu all devices always xla:0 and none of them is_main_process
|
https://discuss.huggingface.co/t/with-accelerate-and-colab-tpu-all-devices-always-xla-0-and-none-of-them-is-main-process/10324
|
Here is code
colab.research.google.com
Google Colaboratory 3
So, seems like a bug, but I’m not sure
|
I’m not sure where in the notebook you see that. Could you post a minimal reproducer? Printing the accelerator.device in the training function shows 8 different devices.
| 0 |
huggingface
|
🤗Accelerate
|
Accelerate / TPU with bigger models: process 0 terminated with signal SIGKILL
|
https://discuss.huggingface.co/t/accelerate-tpu-with-bigger-models-process-0-terminated-with-signal-sigkill/10317
|
Hello all,
I’ve written a chatbot that works fine in a Trainer / PyTorch based environment mode on one GPU and with different models.
I tested with distilbert-base-uncased, bert-large-uncased, roberta-base, roberta-large, microsoft/deberta-large.
After making necessary modifications to run the program with Accelerator on 8 TPU it works fine for distilbert-base-uncased. Using roberta-base model the program runs in slooow motion and for all other (bigger?) models the program terminates with the following error message:
Launching a training on 8 TPU cores.
loading configuration file https://huggingface.co/bert-large-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/1cf090f220f9674b67b3434decfe4d40a6532d7849653eac435ff94d31a4904c.1d03e5e4fa2db2532c517b2cd98290d8444b237619bd3d2039850a6d5e86473d
Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
...
"LABEL_99": 99
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.10.3",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading weights file https://huggingface.co/bert-large-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/1d959166dd7e047e57ea1b2d9b7b9669938a7e90c5e37a03961ad9f15eaea17f.fea64cd906e3766b04c92397f9ad3ff45271749cbe49829a079dd84e34c1697d
---------------------------------------------------------------------------
ProcessExitedException Traceback (most recent call last)
<ipython-input-54-a91f3c0bb4fd> in <module>()
1 from accelerate import notebook_launcher
2
----> 3 notebook_launcher(training_function)
3 frames
/usr/local/lib/python3.7/dist-packages/accelerate/notebook_launcher.py in notebook_launcher(function, args, num_processes, use_fp16, use_port)
67 launcher = PrepareForLaunch(function, distributed_type="TPU")
68 print(f"Launching a training on {num_processes} TPU cores.")
---> 69 xmp.spawn(launcher, args=args, nprocs=num_processes, start_method="fork")
70 else:
71 # No need for a distributed launch otherwise as it's either CPU or one GPU.
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
392 join=join,
393 daemon=daemon,
--> 394 start_method=start_method)
395
396
/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
186
187 # Loop on join until it returns True or raises an exception.
--> 188 while not context.join():
189 pass
190
/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in join(self, timeout)
134 error_pid=failed_process.pid,
135 exit_code=exitcode,
--> 136 signal_name=name
137 )
138 else:
ProcessExitedException: process 0 terminated with signal SIGKILL
I tested with different batch_sizes down to 1 and reduced max_length down to 32. No effect.
This case seems to be similar to TPU memory issues.
Do I have a possibility to make some necessary modifications / settings or is the Accelerator / TPU currently not compatible with bigger models?
|
You won’t be able to use large models on Colab as they don’t give enough RAM on those instances to properly load the model, you will need to go through a GCP instance for that.
| 0 |
huggingface
|
🤗Accelerate
|
Get the loss from all TPU cores
|
https://discuss.huggingface.co/t/get-the-loss-from-all-tpu-cores/7652
|
In the distributed evaluation section 2 of the docs, it is said that one should use accelerator.gather to collect the data from the various devices.
My question: do you also need to use accelerator.gather when collecting the losses from the various cores? I defined the loss calculation as follows:
# Evaluate at the end of the epoch (distributed evaluation as we have 8 TPU cores)
model.eval()
validation_losses = []
for batch in val_dataloader:
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
# We gather the loss from the 8 TPU cores to have them all.
validation_losses.append(accelerator.gather(loss))
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", sum(validation_losses) / len(validation_losses))
However, this fails:
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "/usr/local/lib/python3.7/dist-packages/accelerate/accelerator.py", line 290, in gather
return gather(tensor)
File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 144, in _tpu_gather
return xm.mesh_reduce(name, tensor, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 171, in gather
return _tpu_gather(tensor, name="accelerate.utils.gather")
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 144, in _tpu_gather
return xm.mesh_reduce(name, tensor, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
Because the loss is a zero-dimensional tensor instead of 2D.
|
Also note that Accelerate now automatically adds that dimension when gathering 0d tensors, so you can just do:
validation_losses.append(accelerator.gather(loss))
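Reusing the names from the snippet in the question, a hedged sketch of both variants (the unsqueeze is only needed on older Accelerate versions):
# older Accelerate: expand the 0-d loss to 1-d before gathering
validation_losses.append(accelerator.gather(loss.unsqueeze(0)))
# recent Accelerate: gather() adds the dimension for 0-d tensors itself
# validation_losses.append(accelerator.gather(loss))
epoch_loss = torch.cat(validation_losses).mean()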
| 1 |
huggingface
|
🤗Accelerate
|
Using Accelerate on an HPC (Slurm)
|
https://discuss.huggingface.co/t/using-accelerate-on-an-hpc-slurm/6286
|
Hi,
I am performing some tests with Accelerate on an HPC (where Slurm is usually how we distribute computation). It works on one node with multiple GPUs, but now I want to try a multi-node setup.
I will use your launcher accelerate launch --config_file <config-file> <my script> but then I need to be able to update a couple of the fields from the json file in my script (so during the creation of the Accelerator ?) :
main_process_ip
machine_rank
How can I do that ? Will it be working ?
Am I right to think that if my setup is two nodes, each one with 4 GPUs, the (range of) value(s) should be:
for "num_process": 8 (the total number of GPUs)
for “num_machine”: 2
for “machine rank”: [0,1]
for “distributed_type” : “MULTI_GPU”
Thanks
|
How do you usually distribute in multi-node with slurm?
In PyTorch distributed, the main_process_ip is the IP address of the machine of rank 0, so it should work if you enter that.
| 0 |
huggingface
|
🤗Accelerate
|
Use CUDA_VISIBLE_DEVICES with accelarator
|
https://discuss.huggingface.co/t/use-cuda-visible-devices-with-accelarator/9326
|
I try to set CUDA_VISIBLE_DEVICES on the command line. But when accelerator set the device, it chose the wrong one. What should I do?
Thanks
|
How do you know it chose the wrong one? What command did you type exactly?
| 0 |
huggingface
|
🤗Accelerate
|
Saving optimizer
|
https://discuss.huggingface.co/t/saving-optimizer/8884
|
From documentation prepare() is used to send model, optimizer, data loaders to each TPU core etc…
If I want to save the model I will unwrap the model first by doing unwrap_model().
Is there a function to unwrap the optimizer? I assume unwrap_model() is just a function to detach from all cores? So I can use it also like unwrap_model(optimizer)
Thanks
|
The wrapper of the optimizer in the Accelerate library does not add anything to its state dictionary, so you can just call optimizer.state_dict and save it.
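A minimal sketch of that advice (the model, optimizer, and file name are placeholders):
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

accelerator.save(optimizer.state_dict(), "optimizer.pt")  # the wrapper adds nothing to the state dict

# later, to resume:
optimizer.load_state_dict(torch.load("optimizer.pt"))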
| 0 |
huggingface
|
🤗Accelerate
|
Check how many TPU cores are using
|
https://discuss.huggingface.co/t/check-how-many-tpu-cores-are-using/8821
|
I followed the instructions in notebooks/simple_nlp_example.ipynb at master · huggingface/notebooks · GitHub
to set up the environment.
Is there a function to check how many TPU cores the model is using? Like XLA’s xm.xrt_world_size()?
Thanks
|
If you follow the notebook, you will see the launcher says: “Launching training on 8 TPUs”. You can print accelerator.state in your training_function if you want to be sure of that.
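A minimal sketch of that check, inside the training function (hedged, since the exact attributes may vary by version):
from accelerate import Accelerator

def training_function():
    accelerator = Accelerator()
    accelerator.print(accelerator.state)  # full distributed setup, printed once
    accelerator.print("num processes:", accelerator.num_processes)  # e.g. 8 on an 8-core TPU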
| 0 |
huggingface
|
🤗Accelerate
|
Loading custom class model instance saved using accelerate library fails
|
https://discuss.huggingface.co/t/loading-custom-class-model-instance-saved-using-accelerate-library-fails/8795
|
Hi,
I trained a model defined using custom class on 8-GPU setup using Accelerate library. The model trains on multi-GPU setup and is saved successfully.
Saving model:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.eval()
accelerator.save(unwrapped_model.state_dict(), save_model)
Now I want to fine-tune previously saved model on different dataset and this is how I load the model:
model.load_state_dict(torch.load(model_path_level_1))
And then pass it on to accelerate.prepare:
accelerator = Accelerator(kwargs_handlers=[DistributedDataParallelKwargs(find_unused_parameters=True)])
model, optimizer, train_dataloader, validation_dataloader = accelerator.prepare(model, optimizer, train_dataloader, validation_dataloader)
This script launches 8 processes on GPU:0 and the code fails with out-of-memory issues after some time. After launching the 8 processes, one process each is launched on the other 7 GPUs before the script crashes.
Is this a problem with loading custom class models?
Thank you.
|
Is this all in the same script or in separate scripts? You may have the old model taking up space in memory in the first case.
| 0 |
huggingface
|
🤗Accelerate
|
Error when instantiating dataloaders outside training_function
|
https://discuss.huggingface.co/t/error-when-instantiating-dataloaders-outside-training-function/7649
|
Currently testing out HuggingFace Accelerate on TPUs on Colab based on the example notebook 1.
My question: Do you have to instantiate the dataloaders inside the training function? Because I didn’t, and then I get this error:
UnboundLocalError: local variable 'train_dataloader' referenced before assignment
Exception in device=TPU:5: local variable 'train_dataloader' referenced before assignment
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 274, in __call__
self.launcher(*args)
File "<ipython-input-20-001611a4ea01>", line 33, in training_function
model, optimizer, train_dataloader, val_dataloader, test_dataloader
UnboundLocalError: local variable 'train_dataloader' referenced before assignment
Exception in device=TPU:4: local variable 'train_dataloader' referenced before assignment
It’s weird, because accelerator.prepare is defined within the training_function. The train_dataloader is a global variable, is that the problem?
cc @sgugger
|
Update: apparently you have to, because it only works when instantiating the dataloaders inside the training_function.
| 0 |
huggingface
|
🤗Accelerate
|
Is, or will be, GPU accelerating supported on Mac device?
|
https://discuss.huggingface.co/t/is-or-will-be-gpu-accelerating-supported-on-mac-device/7554
|
Dear all Developers,
Apple has just announced the TensorFlow-Metal package for GPU/NPU acceleration on Mac devices. Therefore, I am wondering whether it is feasible to solve NLP tasks with Hugging Face transformers through TensorFlow-macOS and TensorFlow-Metal.
To figure it out, I installed TensorFlow-macOS, TensorFlow-Metal, and Hugging Face on my local device. Then I ran the test code to check that everything installed correctly, and here is what I got.
(screenshot of the test output omitted)
It seems everything works fine. But I get the following error when I attempt to fine-tune a BERT model.
InvalidArgumentError: Cannot assign a device for operation tf_bert_for_sequence_classification/bert/embeddings/Gather: Could not satisfy explicit device specification '' because the node {{colocation_node tf_bert_for_sequence_classification/bert/embeddings/Gather}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
RealDiv: GPU CPU
Sqrt: GPU CPU
UnsortedSegmentSum: CPU
AssignVariableOp: GPU CPU
AssignSubVariableOp: GPU CPU
ReadVariableOp: GPU CPU
StridedSlice: GPU CPU
NoOp: GPU CPU
Mul: GPU CPU
Shape: GPU CPU
_Arg: GPU CPU
ResourceScatterAdd: GPU CPU
Unique: CPU
AddV2: GPU CPU
ResourceGather: GPU CPU
Const: GPU CPU
So I checked whether TensorFlow detected the GPU correctly, and here is what I got.
tf.test.is_gpu_available()
WARNING:tensorflow:From <ipython-input-2-17bb7203622b>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
WARNING:tensorflow:From <ipython-input-2-17bb7203622b>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2021-06-29 01:56:25.862829: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-06-29 01:56:25.862893: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Out[2]: True
It looks like Hugging Face is unable to detect the proper device. Is there any way to solve this issue, or will it be solved in the near future?
I appreciate and look forward to your kind assistance.
Sincerely,
hawkiyc
|
hawkiyc:
(/device:GPU:0 with 0 MB memory)
It doesn't look like whatever device TF recognizes is actually usable - perhaps that is the reason why HF can't leverage it? If you can't allocate tensors to the GPU, then there is little scope for executing ops there.
| 0 |
huggingface
|
🤗Accelerate
|
Accelerator object
|
https://discuss.huggingface.co/t/accelerator-object/6349
|
Hi! I have a newbie question regarding the Accelerator object. I have a code that consists of several training modules, and I’m wondering if it is okay to instantiate an Accelerator object in each of the modules? Or would it be better to create something like a config file to share a global Accelerator object across all modules?
|
It’s fine to have several accelerator objects, they will all share the same state (which contains your distributed config).
| 0 |
huggingface
|
🤗Accelerate
|
Missing positional arguments when try to use multiple GPUs with accelerator
|
https://discuss.huggingface.co/t/missing-positional-arguments-when-try-to-use-multiple-gpus-with-accelerator/6112
|
First time using accelerator and notebook_launcher.
I have 2 GPUs in the desktop.
When I do this:
notebook_launcher(training_process, (train_loader, val_loader), num_processes=1) → working great on my CPU.
Then I try this:
notebook_launcher(training_process, (train_loader, val_loader), num_processes=2)
Got error message: TypeError: training_process() missing 2 required positional arguments: 'train_loader' and 'val_loader'
Did I miss something? Plz point me a direction to resolve this… Thanks.
|
That's because those args are not passed in this case.
Will send a PR today to fix this!
| 0 |
huggingface
|
🤗AutoNLP
|
Helsinki-NLP/opus-mt-en-es page says it’s AutoNLP compatible, but is it?
|
https://discuss.huggingface.co/t/helsinki-nlp-opus-mt-en-es-page-says-its-autonlp-compatible-but-is-it/13665
|
MT doesn’t appear to be in the list of supported tasks. How do I use AutoNLP for this model?
usage: autonlp <command> [<args>] create_project [-h] --name NAME --task TASK [--language LANGUAGE] --max_models MAX_MODELS
[--hub_model HUB_MODEL]
autonlp <command> [<args>] create_project: error: argument --task: invalid choice: 'translation' (choose from 'binary_classification', 'multi_class_classification', 'entity_extraction', 'extractive_question_answering', 'summarization', 'single_column_regression', 'speech_recognition')
|
We haven't enabled AutoNLP for translation tasks. I'll take a look and see how easy it is.
| 0 |
huggingface
|
🤗AutoNLP
|
Login autonlp behind a proxy server
|
https://discuss.huggingface.co/t/login-autonlp-behind-a-proxy-server/5776
|
Hello,
My account is enabled for AutoNLP.
I’m following the page 7 to install autonlp on a Windows 10, Python version 3.8.5.
I tried the autonlp login via a terminal.
Command :
autonlp login --api-key MY_HUGGING_FACE_API_TOKEN
Results:
Traceback (most recent call last):
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\autonlp\utils.py”, line 41, in http_get
response = requests.get(
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\requests\api.py”, line 76, in get
return request(‘get’, url, params=params, **kwargs)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\requests\api.py”, line 61, in request
return session.request(method=method, url=url, **kwargs)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\requests\sessions.py”, line 542, in request
resp = self.send(prep, **send_kwargs)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\requests\sessions.py”, line 655, in send
r = adapter.send(request, **kwargs)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\requests\adapters.py”, line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host=‘huggingface.co’, port=443): Max retries exceeded with url: /api/whoami-v2 (Caused by SSLError(SSLCertVerificationError(1, ‘SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed: self signed certificate in certificate chain (_ssl.c:1123)’)))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “C:\Users<my-username>\AppData\Local\Programs\Python\Python38\lib\runpy.py”, line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File “C:\Users<my-username>\AppData\Local\Programs\Python\Python38\lib\runpy.py”, line 87, in run_code
exec(code, run_globals)
File "C:\Users<my-username>\Dev\huggingface\venv\Scripts\autonlp.exe_main.py", line 7, in
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\autonlp\cli\autonlp.py”, line 52, in main
command.run()
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\autonlp\cli\login.py”, line 31, in run
client.login(token=self._api_key)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\autonlp\autonlp.py”, line 41, in login
auth_resp = http_get(path="/whoami-v2", domain=config.HF_API, token=token, token_prefix=“Bearer”)
File “c:\users<my-username>\dev\huggingface\venv\lib\site-packages\autonlp\utils.py”, line 45, in http_get
raise UnreachableAPIError(“ Failed to reach AutoNLP API, check your internet connection”)
autonlp.utils.UnreachableAPIError: Failed to reach AutoNLP API, check your internet connection
Could you please tell me how to specify a proxy in the login command ?
Many thanks for your help on this,
++
|
We will look into this issue. Since it's a feature request, would you mind creating an issue in the GitHub repo: GitHub - huggingface/autonlp
| 0 |
huggingface
|
🤗AutoNLP
|
How to read the loss value
|
https://discuss.huggingface.co/t/how-to-read-the-loss-value/12457
|
Hi everyone,
I just used AutoNLP for the first time to fine-tune a model on a question-answering task. The training worked without friction, but I am looking for more information on how to read the loss results I get for the 15 models trained (see picture below). The lower the loss the better, but is 2.6 a good value? Can anyone provide me a link to more detailed documentation?
Thanks!
Chris
(screenshot of the trained models and their loss values omitted)
|
Dear autoNLP-team, any link to a documentation of the loss function used / explanation on how to read the values would be hugely appreciated. Thanks!
| 0 |
huggingface
|
🤗AutoNLP
|
Is AutoNLP HIPAA complient?
|
https://discuss.huggingface.co/t/is-autonlp-hipaa-complient/12510
|
We are interested in trying AutoNLP with patient medical information. Is this use case supported on HuggingFace platform and AutoNLP?
|
Hello,
our services are not HIPAA compliant. However, we are currently working on a SOC2 audit, which we expect to complete in the first half of 2022.
If that’s an option for you, we have private deployment options. Feel free to contact me at julsimon@huggingface.com if you’d like to learn more.
Best regards,
–
Julien Simon
Chief Evangelist, Hugging Face
| 0 |
huggingface
|
🤗AutoNLP
|
How to replicate the best model from AutoNLP?
|
https://discuss.huggingface.co/t/how-to-replicate-the-best-model-from-autonlp/11726
|
Hi,
First of all, a big thanks to the Hugging face team for bringing AutoNLP to us. A really amazing tool. I had the chance to try it out today and it looks very promising and interesting. I was trying to find a best model for a binary text classification task and the config of the best model is shown below:
{
"_name_or_path": "AutoNLP",
"_num_labels": 2,
"architectures": [
"BertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "negative",
"1": "non-negative"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"negative": 0,
"non-negative": 1
},
"layer_norm_eps": 1e-12,
"max_length": 96,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"padding": "max_length",
"position_embedding_type": "absolute",
"problem_type": "single_label_classification",
"transformers_version": "4.8.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
If I want to start from the scratch and replicate this best model, will I get the same results? From the config file, it seems the best model uses a Bert-large model, drop-out of 0.1, gelu activation. Where can I find the optimizer that was used/ learning rate/batch size and other things to replicate the code?
Say, for example, I am using the code from this Colab notebook: how can I replicate the best model from scratch?
Thanks again!!
|
Hey! Not at all!!! I was just explaining
| 1 |
huggingface
|
🤗AutoNLP
|
Export AutoNLP models to custom S3
|
https://discuss.huggingface.co/t/export-autonlp-models-to-custom-s3/10890
|
Hi,
is it possible to export AutoNLP models to a custom S3 bucket? what API can help do that?
|
Hi,
since all the trained models are stored in your private repo on huggingface hub, you are free to do whatever you want with them. Just clone the repo and upload to s3!
| 0 |
huggingface
|
🤗AutoNLP
|
How to configure # models to use for training
|
https://discuss.huggingface.co/t/how-to-configure-models-to-use-for-training/10530
|
Hi! is there a way to reduce the proposed budget? Thank you!
(screenshot of the proposed training budget omitted)
|
Hey Abe! This is the expected price which I think is super affordable to train 15 models here. If you train less models (5 instead of 15), it will be cheaper!
| 0 |
huggingface
|
🤗AutoNLP
|
Model Billing Seems Awkward
|
https://discuss.huggingface.co/t/model-billing-seems-awkward/10281
|
Hello, I received a bill from team@huggingface.co through bnc3.mailjet.com with the link for payment for two different models that I have trained. Is the link trustworthy?
It just seems odd to me.
For instance, why would AutoNLP trust me and require me to pay only after a model is trained and available for use? (Usually, on other platforms, you would be able to train only after providing payment methods.)
Why isn't the billing on my huggingface/autonlp portal, and only in my email as I commented earlier?
Maybe this is because of the beta phase, but it seems sketchy, or just a security fault if HF trusts users that much.
I am 100% willing to pay for the training, I just need to clarify the billing legitimacy first, please don't get me wrong.
Thanks
|
hi @brandaobrandisborges !
The links are completely trustworthy.
We know that Hugging Face users are very loyal and in turn, we also trust our users a lot! That’s why we send an invoice only after the models are trained (one invoice per project)
P.S.: If you have a pro plan, your CC will be charged automatically
| 0 |
huggingface
|
🤗AutoNLP
|
Does AutoNLP support multi label classification?
|
https://discuss.huggingface.co/t/does-autonlp-support-multi-label-classification/10253
|
Does AutoNLP support multi label classification? If not, are there plans in the future to do so?
|
Hi and thanks for the question!
Multi-class classification is supported today, see Multi Class Classification — AutoNLP documentation 14
Multi-label classification is in the works, we’ll update with an ETA when we have it.
| 1 |
huggingface
|
🤗AutoNLP
|
Could not load model with any of the following classes
|
https://discuss.huggingface.co/t/could-not-load-model-with-any-of-the-following-classes/9479
|
Hi,
I’m unable to run the model I trained with AutoNLP. I did everything through the UI, but when I make a request to the inference API, I get this error:
Could not load model [model id here] with any of the following classes: (<class 'transformers.models.bert.modeling_bert.BertForSequenceClassification'>, <class 'transformers.models.bert.modeling_tf_bert.TFBertForSequenceClassification'>).
I get the same error when I try the inference through the UI on the model page. Any ideas?
|
Hello @arshsingh !
Thanks for reporting this issue. This should be solved now, can you confirm that it works for you ?
Have a good one,
Simon
| 0 |
huggingface
|
🤗AutoNLP
|
What kind of models is AutoNLP using?
|
https://discuss.huggingface.co/t/what-kind-of-models-is-autonlp-using/6805
|
Hi!
I’m curious about AutoNLP and have a question regarding what kind of models AutoNLP uses. E.g. I have a NER task, and from what I got from the AutoNLP documentation, it will search for the best one. But let’s say I have a domain-specific corpus with a specific set of labels, how will it work in this case and what kind of model will be used? The usual BERT model in the Hugging Face NER pipeline? Is there a way to customize the model and settings, e.g. the vocabulary?
|
Hi with the new hub model training feature, you can do this now. Read more about it here: Training A Model From Hugging Face Hub — AutoNLP documentation 24
If you have any questions, please let me know.
| 0 |
huggingface
|
🤗AutoNLP
|
Access denied error
|
https://discuss.huggingface.co/t/access-denied-error/8538
|
Installed AutoNLP on Windows OS, installed Git lfs, etc. Signed up for AutoNLP, upgraded to paid account, have access key in account.
Running AutoML CLI command:
$ autonlp login -api-key MY-KEY
Access denied
Used python to login and was successful:
from autonlp import AutoNLP
client = AutoNLP()
client.login(token="MY-KEY")
|
Hi @jhakim!
Do you mind opening a GitHub issue in AutoNLP’s repository, and post the full stack trace there? Sign in to GitHub · GitHub 1
Thanks
| 0 |
huggingface
|
🤗AutoNLP
|
Model training got status “failed”. Why?
|
https://discuss.huggingface.co/t/model-training-got-status-failed-why/6653
|
Hi! I’ve submitted job to train a model with the task “Entity extraction”. But my model got status “Failed” without any details. So I don’t understand the cause of the failure.
The only thing where I could make a mistake is my dataset fatvvs/autonlp-data-entity_model_conll2003 · Datasets at Hugging Face 2 . But the dataset has been loaded successfully without any error.
How can I find the cause of the failure?
|
Are there any experts who can answer this question? We are also facing the same kind of issue; it’s been 24 days and no expert has answered it.
| 0 |
huggingface
|
🤗AutoNLP
|
Invoice AutoNLP
|
https://discuss.huggingface.co/t/invoice-autonlp/8069
|
Hi everyone,
who can I contact to change the invoice specifications?
Thanks a lot
|
please mail to autonlp [at] huggingface.co
| 0 |
huggingface
|
🤗AutoNLP
|
Where can I give my feedbacks?
|
https://discuss.huggingface.co/t/where-can-i-give-my-feedbacks/4349
|
Hello Everyone,
I am lucky that my invite was accepted and I got to try out AutoNLP hands-on, and I personally believe that AutoNLP is AWESOME. I am wondering if there is a form where I can give suggestions or feedback based on my experience of using AutoNLP. I am yet to try out everything, but I would like to share my thoughts as I do so.
Thank you for your time. Kudos to the team for launching this amazing product. AutoNLP is going to be the next big thing.
|
Hi @prateekagrawal - thanks we would love to hear your feedback! Please email autonlp at huggingface dot co to reach the whole team. Looking forward to reading you!
| 0 |
huggingface
|
🤗Hub
|
Custom forward function
|
https://discuss.huggingface.co/t/custom-forward-function/13996
|
Hi, is it possible to upload a model to the hub with a custom forward function? It seems like you can’t, but I just wanted to check.
|
Hello! You might want to check this PR 1. It guides through the process to achieve this.
| 0 |
huggingface
|
🤗Hub
|
List model names filtered by pipeline tag
|
https://discuss.huggingface.co/t/list-model-names-filtered-by-pipeline-tag/12973
|
Hi,
Is there a way to get all the model names for a particular tag programmatically instead of visiting the page (Models - Hugging Face)? Thanks.
|
For those who stumble into this post, here’s the answer.
You can list all the models and their details by using the list_models method from huggingface_hub.
Have fun exploring,
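A short sketch of that call (exact argument and attribute names can vary a bit across huggingface_hub versions):
from huggingface_hub import list_models

# Filter models by a pipeline tag, e.g. text-classification
models = list(list_models(filter="text-classification"))
model_names = [m.modelId for m in models]
print(len(model_names), model_names[:5])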
| 1 |
huggingface
|
🤗Hub
|
Pinning models doesn’t seem to work
|
https://discuss.huggingface.co/t/pinning-models-doesnt-seem-to-work/13675
|
We’ve pinned models (both using the API call and using the dashboard at Dashboard - Hosted API - HuggingFace), but still get “currently loading” errors when we try to make inference API calls.
One example: model https://huggingface.co/redwoodresearch/redwood_deberta-v3-sift_82b19d290a74410caa804fa47e94a80b 1 (private but we can make it public if that would help). It’s currently (supposedly) pinned but still requires a minute of warmup after a period of inactivity.
Let me know if there’s anything we should do differently!
|
@Narsil Seems like you’ve worked on pinned models before - any chance you could take a look?
| 0 |
huggingface
|
🤗Hub
|
ArkeynGan does not work
|
https://discuss.huggingface.co/t/arkeyngan-does-not-work/13799
|
Good afternoon. Please tell me why the ArkeynGan does not work?!
|
Hello,
The team is currently looking into it. Thanks a lot for your patience.
| 0 |
huggingface
|
🤗Hub
|
Account deletion
|
https://discuss.huggingface.co/t/account-deletion/13592
|
hi, can you delete my account please
sorry, I don’t have time so I’m typing lazy
|
Hi @Bence, I deleted your account
| 0 |
huggingface
|
🤗Hub
|
Model hub: Can’t load tokenizer using from_pretrained
|
https://discuss.huggingface.co/t/model-hub-cant-load-tokenizer-using-from-pretrained/13359
|
Hi,
Until today (I just tested) my 2 NER models (BERT base and large) have been working well on the HF model hub:
NER with BERT base: pierreguillou/ner-bert-base-cased-pt-lenerbr · Hugging Face 1
NER with BERT large: pierreguillou/ner-bert-large-cased-pt-lenerbr · Hugging Face 1
Now, when I start a compute in the widget, I get this error:
Can't load tokenizer using from_pretrained, please update its configuration: missing field direction at line 1 column 85
See as well this screenshot:
[screenshot of the widget error]
Coming from the new version of transformers?
Another point: I use these 2 models in a Spaces App and there’s no problem:
Ner Bert Pt Lenerbr - a Hugging Face Space by pierreguillou (huggingface.co)
Strange, no?
cc @lysandre
|
Hey @pierreguillou, if I’m not mistaken these were temporary errors as I can click on “compute” without any issues.
Does it work for you too? Really cool models, by the way!
| 1 |
huggingface
|
🤗Hub
|
Two texts inputs for Text Classification in Inference API?
|
https://discuss.huggingface.co/t/two-texts-inputs-for-text-classification-in-inference-api/12805
|
Two texts inputs for Text Classification in Inference API?
Hello!,
I’ve trained a model to classify if two texts have a relation between them. Therefore I need to introduce two independent strings as input.
As I’m using the tokenizer I can introduce two strings as input for the model using code, however I cannot reproduce that functionality in “Hosted Inference API”.
tokenizer = AutoTokenizer.from_pretrained("my-model")
model = AutoModelForSequenceClassification.from_pretrained("my-model")
input = tokenizer('I love you','I like you',return_tensors='pt')
model(**input)
When calling tokenizer with two separated strings the token type id distinguishes between text 1 and 2:
{'input_ids': tensor([[ 101, 1045, 2293, 2017, 102, 1045, 2066, 2017, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]])}
However using just one string and [SEP] this is not the case, and the behaviour is the same as in “Inference API”
{'input_ids': tensor([[ 101, 1045, 2293, 2017, 101, 1045, 2066, 2017, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]])}
Would it be possible to add a second input text to “Hosted Inference API” in order to reproduce the same behaviour that code allows?
Thanks!
|
Hello,
You’re right that the text classification hosted widget only takes one text, and there are text classification models taking multiple text inputs. I opened a feature request to have a more flexible behavior. In the meantime, if you want this for demonstration purposes, you can create a Space based on your model.
| 0 |
huggingface
|
🤗Hub
|
Possible to upgrade GPU pinned instance with more memory?
|
https://discuss.huggingface.co/t/possible-to-upgrade-gpu-pinned-instance-with-more-memory/12911
|
I have a large language model that I’m using for text-generation, and I have it deployed right now, with GPU pinning enabled.
I’m extremely happy with the results so far.
But I frequently get CUDA out-of-memory errors if I supply too many tokens in my prompt, or if I request too many tokens in the completion. I don’t have exact numbers quite yet, but things seem to fail when the total token count (prompt + completion) is greater than roughly 500.
The model is based on GPT-J, which can theoretically handle 2048 context tokens, given sufficient memory, and I’d like to run some tests, with the model somewhat closer to the limits of its capabilities.
So I’d like to ask if it’s possible to upgrade my account, in order to get a larger allotment of GPU memory?
|
@Narsil
Any upgrade options available? Please, take my money!
| 0 |
huggingface
|
🤗Hub
|
Streaming partial results from hosted text-generation APIs?
|
https://discuss.huggingface.co/t/streaming-partial-results-from-hosted-text-generation-apis/12507
|
Is it possible to call the hosted text-generation APIs in such a way as to get low-latency partial streaming results, without having to wait for the full completion to be returned as JSON?
OpenAI has a stream parameter, documented here:
OpenAI API (beta.openai.com): An API for accessing new AI models developed by OpenAI
And InferKit has a streamResponse parameter, documented here:
https://inferkit.com/docs/api/generation
But I can’t find anything similar in the Huggingface API docs:
Detailed parameters (api-inference.huggingface.co)
|
Based on the lack of response, I’m assuming this isn’t currently possible with the hosted Huggingface APIs. Is this the kind of thing that might easily be implemented, if I file a feature-request ticket on the github project?
| 0 |
huggingface
|
🤗Hub
|
Error executing pinned inference model
|
https://discuss.huggingface.co/t/error-executing-pinned-inference-model/12116
|
Hello @julien-c!
Last week, I uploaded a private Text Generation model to my Huggingface account…
https://huggingface.co/shaxpir/prosecraft_linear_43195/ 1
And then I enabled pinning on that model in our account here:
https://api-inference.huggingface.co/dashboard/pinned_models 1
But when I try to execute an API call on this model, I always get an error message.
The API call looks like this…
curl -X POST https://api-inference.huggingface.co/models/shaxpir/prosecraft_linear_43195 \
-H "Authorization: Bearer <<REDACTED>>" \
-H "Content-Type: application/json" \
-d \
'{
"inputs":"Once upon a time, there was a grumpy old toad who",
"options":{"wait_for_model":true},
"parameters": {"max_length": 500}
}'
And the error is:
{"error":"We waited for too long for model shaxpir/prosecraft_linear_43195 to load. Please retry later or contact us. For very large models, this might be expected."}
I’ve been trying repeatedly, and waiting long intervals, but I still get this error every time.
It is quite a large model, but there are other larger models on public model cards that don’t seem to suffer from this problem. And I don’t see any documentation about model-size limitations for pinned private models (on CPU or GPU). Is there any guidance on that topic? Or is there anything that the support team can do to help me get un-stuck?
(Also, the “Pricing” page says that paid “Lab” plans come with email support, but the email address doesn’t seem to be published anywhere… I tried emailing api-enterprise@huggingface.co but got no response for 9 days. And the obvious support@huggingface.co bounced back to me… Can you let me know where to send support emails?)
Thank you so much!!
|
@Narsil for guidance
| 0 |
huggingface
|
🤗Hub
|
Backend for the hub models executed by widgets
|
https://discuss.huggingface.co/t/backend-for-the-hub-models-executed-by-widgets/12631
|
Hi, what is the backend used when I run my model through the widget on the model-card page? Is it a GPU or a CPU?
|
This uses the Inference API and is CPU by default. Subscribers can get GPUs and pin the models so they load very quickly, but you can read more about it at Inference API - Hugging Face and Overview — Api inference documentation
| 0 |
huggingface
|
🤗Hub
|
How to get Accelerated Inference API for T5 models?
|
https://discuss.huggingface.co/t/how-to-get-accelerated-inference-api-for-t5-models/11972
|
Hi,
I just copy/paste the following codes in a Google Colab notebook with my TOKEN_API in order to check the inference time with the t5-base from the HF model hub.
Note: code inspiration from
Translation task
[Startup Plan] Don’t manage to get CPU optimized inference API 2.
import json
import requests
API_TOKEN = 'xxxxxxx' # my HF API token
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/t5-base"
def query(payload):
data = json.dumps(payload)
response = requests.request("POST", API_URL, headers=headers, data=data)
return json.loads(response.content.decode("utf-8")), response.headers.get('x-compute-type')
And then, I run the following code in another cell of the same notebook:
%%time
data, x_compute_type = query(
{
"inputs": "Translate English to German: My name is Wolfgang.",
}
)
print('data: ',data)
print('x_compute_type: ',x_compute_type)
I got the following output:
data: [{'translation_text': 'Übersetzen Sie meinen Namen Wolfgang.'}]
x_compute_type: cpu
CPU times: user 17.3 ms, sys: 871 µs, total: 18.1 ms
Wall time: 668 ms
When I launch a second time this cell, I got the following output that comes from the cache:
data: [{'translation_text': 'Übersetzen Sie meinen Namen Wolfgang.'}]
x_compute_type: cache
CPU times: user 16.7 ms, sys: 0 ns, total: 16.7 ms
Wall time: 180 ms
2 remarks:
the x_compute_type is cpu, not cpu+optimized (see doc “Using CPU-Accelerated Inference (~10x speedup) 2”)
this is confirmed by the inference time of about 700ms (inference time I get when I run model.generate() for T5 in a Google Colab notebook without the use of the inference API) that should be 70ms with Accelerated Inference API, no?
even the cache inference time (nearly 200ms) is not really low even if it is almost 4 times less than the initial one.
How can I get Accelerated Inference API for a T5 model? Thanks.
cc @jeffboudier
|
Just to confirm what I wrote in the first post of this thread, I did the same tests with InferenceApi from huggingface_hub.inference_api.
Indeed, the huggingface_hub library has a client wrapper to access the Inference API programmatically (doc: “How to programmatically access the Inference API 1”).
Therefore, I did run the following code in a Google Colab notebook:
!pip install huggingface_hub
from huggingface_hub.inference_api import InferenceApi
API_TOKEN = 'xxxxxxx' # my HF API token
model_name = "t5-base"
inference = InferenceApi(repo_id=model_name, token=API_TOKEN)
print(inference)
I got as output:
InferenceApi(options='{'wait_for_model': True, 'use_gpu': False}', headers='{'Authorization': 'xxxxxx'}', task='translation', api_url='https://api-inference.huggingface.co/pipeline/translation/t5-base')
Then, I ran the following code:
%%time
inputs = "Translate English to German: My name is Claude."
output = inference(inputs=inputs)
print(output)
And I got as output:
[{'translation_text': 'Mein Name ist Claude.'}]
CPU times: user 14 ms, sys: 1.05 ms, total: 15.1 ms
Wall time: 651 ms
When I ran a second time the same code, I got the cache output:
[{'translation_text': 'Mein Name ist Claude.'}]
CPU times: user 14.3 ms, sys: 581 µs, total: 14.9 ms
Wall time: 133 ms
We can observe that the inference times (initial and cache) correspond to those published in my first post (I guess this is normal because the code behind is the same). However, we end up with the same question: how can I get Accelerated Inference API for a T5 model?
| 0 |
huggingface
|
🤗Hub
|
Pipeline cannot infer suitable model classes
|
https://discuss.huggingface.co/t/pipeline-cannot-infer-suitable-model-classes/11339
|
When I try to use the inference pipeline widget for my private project, I got this error.
[screenshot of the error]
But the prediction is working when I do locally (gives back proper labels)
|
Pinging @Narsil
| 0 |
huggingface
|
🤗Hub
|
Inference API returns Unkown Error
|
https://discuss.huggingface.co/t/inference-api-returns-unkown-error/11502
|
I cannot query the API usage for my Inference API as it returns an Unknown Error. When I signed up for the paid plan it said I would have email support access, but there’s no way to find the email address for contacting support.
|
I think we discussed this over email so this is now solved. Let us know if not the case, thanks!
| 1 |
huggingface
|
🤗Hub
|
Next sentence prediction with google/mobilebert-uncased producing massive, near-identical logits > 10^8 for its documentation example (and >2k others tried)
|
https://discuss.huggingface.co/t/next-sentence-prediction-with-google-mobilebert-uncased-producing-massive-near-identical-logits-10-8-for-its-documentation-example-and-2k-others-tried/10750
|
With a fresh install of transformers and pytorch, I ran the lines of example code from MobileBERT — transformers 4.11.3 documentation
>>> from transformers import MobileBertTokenizer, MobileBertForNextSentencePrediction
>>> import torch
>>> tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased')
>>> model = MobileBertForNextSentencePrediction.from_pretrained('google/mobilebert-uncased')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors='pt')
>>> outputs = model(**encoding, labels=torch.LongTensor([1]))
>>> loss = outputs.loss
>>> logits = outputs.logits
Printing the logits, we get tensor([[2.7888e+08, 2.7884e+08]], grad_fn=<AddmmBackward>)
For comparison, the logits produced on the same example using BertForNextSentencePrediction with bert-base-uncased instead are tensor([[-3.0729, 5.9056]], grad_fn=<AddmmBackward>).
I tried lots of different examples, and got the same strange behavior: logits of about 2e+08 for both classes, and higher for the first class in the 3rd or 4th significant figure. Given the sizes, it leads to a softmax score of 1 “is the next sentence” (the first class) and 0 for the other no matter what the first and second sentence is, no matter how unrelated the second sentence is.
Is there something not in the example code from the documentation that needs to be done in order to get non-degenerate outputs for the Next Sentence Prediction task it was pretrained on?
cc @vshampor
|
Linked issue: Logit explosion in MobileBertForNextSentencePrediction example from documentation (and all others tried) · Issue #13990 · huggingface/transformers · GitHub 2
| 0 |
huggingface
|
🤗Hub
|
Disable Hosted inference API
|
https://discuss.huggingface.co/t/disable-hosted-inference-api/10379
|
Hi,
How can I disable the hosted inference API for my model in HF?
Thanks
|
I don’t know if you can. Out of curiosity, what is your use case that makes you not want the widget? If you want private models you’ll need to get a paid subscription AFAIK.
| 0 |
huggingface
|
🤗Hub
|
Availability of models pushed to Hub
|
https://discuss.huggingface.co/t/availability-of-models-pushed-to-hub/10200
|
I pushed my first model to the hub using push_to_hub function around an hour ago. I can see it up on the Hub website at https://huggingface.co/zyl1024/bert-base-cased-finetuned-qqp 1. However, when I try to use it in transformer, by following the instruction on that page,
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("zyl1024/bert-base-cased-finetuned-qqp")
model = AutoModelForSequenceClassification.from_pretrained("zyl1024/bert-base-cased-finetuned-qqp")
I get an error saying
OSError: Can't load tokenizer for 'zyl1024/bert-base-cased-finetuned-qqp'. Make sure that:
- 'zyl1024/bert-base-cased-finetuned-qqp' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'zyl1024/bert-base-cased-finetuned-qqp' is the correct path to a directory containing relevant tokenizer files
Do I need do something else or just wait for a while (if so, how long?) for it to become available for download?
|
Do I need do something else or just wait for a while (if so, how long?) for it to become available for download?
No, normally it should be directly accessible.
However, I see why it doesn’t work: your model repository only contains modeling files (config.json and pytorch_model.bin), but no tokenizer files (such as vocab.txt). Hence, only loading the model will work. You can easily save the files of a tokenizer as follows:
tokenizer.save_pretrained("path_to_directory")
You can then upload those files to your repo on the hub.
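Alternatively, a minimal sketch of pushing the tokenizer files straight to the existing repo (assuming you are logged in and the original tokenizer was bert-base-cased):
from transformers import AutoTokenizer

# Load the tokenizer that was used during fine-tuning and push its files to the repo
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.push_to_hub("zyl1024/bert-base-cased-finetuned-qqp")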
| 1 |
huggingface
|
🤗Hub
|
How to fork (in the git sense) a model repository?
|
https://discuss.huggingface.co/t/how-to-fork-in-the-git-sense-a-model-repository/9663
|
Hi,
For production, we need to use models from the hub, from which we control the updates.
Ideally, we need a way to fork the model repo in our own (public) organization, so that we control the updates ourselves. We cannot incur the risk that someone would delete a repo and changing it without us knowing.
We could create a “copy” of the model manually and re-commit it to our organization, but this is not ideal, as we would lose the ability to track and merge future updates of the original repo.
Is there any way to achieve a fork in the current state of the huggingface hub?
Thanks in advance,
Alex Combessie
|
You can add the original repository as “upstream” repository in order to track and merge future updates, like so:
git remote add upstream <URL of model>.git
You can then sync again by doing:
git fetch upstream
git rebase upstream/master
| 0 |
huggingface
|
🤗Hub
|
Spikes on downloads of my model on the huggingface hub
|
https://discuss.huggingface.co/t/spikes-on-downloads-of-my-model-on-the-huggingface-hub/8860
|
I uploaded my model ‘MathBERT’ onto the Hugging Face hub and for the past 2 months before 7/28/2021, it stayed around 100+ downloads for the last 30 days. Starting from Jul 29, 2021, I noticed the number got a big spike, such as 8,237 downloads for the ‘last 30 days’ stats, and when I checked again today, it is now 31,370 downloads. As much as I want my model to be a popular model, I suspect there is a systematic error in calculating the model downloads for my model. For anybody to troubleshoot, my model link is here: tbs17/MathBERT · Hugging Face
Let me know if anybody discovers the reason:-)
Thank you!
|
cc @pierric
| 0 |
huggingface
|
🤗Hub
|
Error while using Accelerated API {‘error’: “[Errno 2] No such file or directory: ‘/data/sbert.net_models
|
https://discuss.huggingface.co/t/error-while-using-accelerated-api-error-errno-2-no-such-file-or-directory-data-sbert-net-models/8871
|
I have pinned the model to my organisation and calling it using the correct token too.
'{"pinned_models":[{"model_id":"sentence-transformers/stsb-xlm-r-multilingual","compute_type":"cpu"}],"allowed_pinned_models":1}'
But still get this error: {‘error’: “[Errno 2] No such file or directory: ‘/data/sbert.net_models_sentence-transformers_stsb-xlm-r-multilingual/modules.json’”}
|
cc @Narsil
| 0 |
huggingface
|
🤗Hub
|
License of flair/ner-english-ontonotes-fast
|
https://discuss.huggingface.co/t/license-of-flair-ner-english-ontonotes-fast/8767
|
HI,
What is the license for using “flair/ner-english-ontonotes-fast” model?
While “flair” itself is licensed as MIT, if we use this model, will it also be covered by the MIT license?
|
Please don’t duplicate your posts, you can edit them to change the category if need be.
| 0 |
huggingface
|
Amazon SageMaker
|
About the Amazon SageMaker category
|
https://discuss.huggingface.co/t/about-the-amazon-sagemaker-category/4603
|
This category is for any questions related to using Hugging Face Transformers with Amazon SageMaker. Don’t forget to check the announcement blogpost 14 for more resources.
|
Thanks for this amazing project. HuggingFace and SageMaker are both leaders in their particular domains, and integrating them will definitely enhance their effectiveness.
Is it currently possible to deploy real-time endpoints with Sagemaker, using Huggingface?
Thanks…
| 0 |
huggingface
|
Amazon SageMaker
|
Predict function ignore parameters
|
https://discuss.huggingface.co/t/predict-function-ignore-parameters/14192
|
Hey,
I’m trying to deploy a Hugging Face model (GPT-Neo) to a SageMaker endpoint. I followed the official example and this forum, but it seems that the generate function is totally ignoring my parameters (it generates just one word despite setting the min length to 10000!). Any idea what is wrong?
My code:
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'EleutherAI/gpt-neo-1.3B',
'HF_TASK':'text-generation'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge' # ec2 instance type
)
prompt = "Some prompt"
gen_tex = predictor.predict({
"inputs": prompt,
"parameters" : {"min_length":10,}
})
print(gen_tex[0]['generated_text'])
|
Could you please update to the latest version, which would be transformers_version="4.12" and pytorch_version="1.9", and test it again?
What is the output of gen_tex[0]['generated_text']? Is the trailing , in the "parameters" dict on purpose?
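If the goal is longer completions, a sketch like this might help (parameter names follow the transformers generate API; min_length alone is capped by the default max_length, which is why the output can stay very short):
gen_tex = predictor.predict({
    "inputs": prompt,
    "parameters": {"min_length": 50, "max_length": 150, "do_sample": True},
})
print(gen_tex[0]["generated_text"])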
| 0 |
huggingface
|
Amazon SageMaker
|
Create a batch transform job with custom trained biobert model
|
https://discuss.huggingface.co/t/create-a-batch-transform-job-with-custom-trained-biobert-model/14107
|
Hi Team,
We have trained a BioBERT model on custom data using the PyTorch framework outside of SageMaker. We want to bring this model to SageMaker to run a batch transform job on it.
Is there a particular way we should try, or any suggestions you have?
Thanks,
Akash
|
Hello @akash97715,
can the model be loaded with .from_pretrained? If so I see no problem for batch transform. It then just depends if your model is stored on S3 or Models - Hugging Face.
You can check-out this example: notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub 1
| 0 |
huggingface
|
Amazon SageMaker
|
Sagemaker Serverless Inference for LayoutLMv2 model
|
https://discuss.huggingface.co/t/sagemaker-serverless-inference-for-layoutlmv2-model/14186
|
Hi everyone,
I am experimenting with the recently released SageMaker Serverless Inference, thanks to Julien Simon’s tutorial.
Following it, I managed to train a custom DistilBERT model locally, upload it to S3 and create a serverless endpoint that works.
Right now I am pushing it further by trying it with a LayoutLMv2 model.
However, it is not clear to me how to pass inputs to it. For example, with DistilBERT I just create input like test_data_16 = {'inputs': 'Amazing!'} and pass it as JSON to the invoke_endpoint function.
In LayoutLMv2 input consists of three parts: image, text and bounding boxes. What keys do I use to pass them ? Here is the link to the call of the processor 1
Second question is: It is not clear to me how to make modifications to the default settings of processor when creating the endpoint. For example, I would like to set the flag only_label_first_subword True by default in the processor. How to do that?
Thanks!
|
Hi Elman, thanks for opening this thread, this is a super interesting topic
No matter what model you deploy to a SageMaker (SM) endpoint, the input always requires preprocessing before it can be passed to the model. The reason you can just pass some text in the case of DistilBERT, without having to do the processing yourself, is that the SageMaker Hugging Face Inference Toolkit does all the work for you. This toolkit builds on top of the Pipeline API, which is what makes it so easy to call.
What does that mean for you when you want to use a LayoutLMV2 model? I see two possibilities:
The Pipeline API offers a class for Object Detection: Pipelines. I’m not familiar with it but I would imagine that it is quite straightforward to use. Again, because the Inference Toolkit is based on Pipelines, once you figure out how to use the Pipeline API for Object Detection you can use the same call for the SM Endpoint
The Inference Toolkit also allows you to provide your own preprocessing script, see more details here: Deploy models to Amazon SageMaker. That means you can process the inputs yourself before passing it to the model. What I would do (because I’m lazy) is to just look at an already existing demo to see how the preprocessing for a LayoutLMV2 model works. For example this one: app.py · nielsr/LayoutLMv2-FUNSD at main 1, and use that.
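For the second option, here is a rough skeleton of what such an inference.py could look like (not tested end-to-end; it assumes the request body carries a base64-encoded page image and that detectron2/pytesseract are available in the container):
import base64
import io

import torch
from PIL import Image
from transformers import LayoutLMv2ForTokenClassification, LayoutLMv2Processor


def model_fn(model_dir):
    # Load processor and model from the unpacked model.tar.gz
    processor = LayoutLMv2Processor.from_pretrained(model_dir)
    model = LayoutLMv2ForTokenClassification.from_pretrained(model_dir)
    return model, processor


def predict_fn(data, model_and_processor):
    model, processor = model_and_processor
    image = Image.open(io.BytesIO(base64.b64decode(data["image"]))).convert("RGB")
    # The processor runs OCR itself unless words and boxes are supplied explicitly
    encoding = processor(image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**encoding)
    predictions = outputs.logits.argmax(-1).squeeze().tolist()
    return {"predictions": predictions}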
Hope this helps, please let me know how it goes and/or reach out if any questions.
Cheers
Heiko
| 0 |
huggingface
|
Amazon SageMaker
|
Deploying Huggingface Sagemaker Models with Elastic Inference
|
https://discuss.huggingface.co/t/deploying-huggingface-sagemaker-models-with-elastic-inference/8736
|
When I try to deploy a HuggingFace Sagemaker model with elastic inference (denoted by the accelerator_type parameter) I get an error.
Deploy Snippet:
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.t2.medium",
accelerator_type='ml.eia2.medium'
)
Error Msg:
~/miniconda3/envs/ner/lib/python3.8/site-packages/sagemaker/image_uris.py in _validate_arg(arg, available_options, arg_name)
305 """Checks if the arg is in the available options, and raises a ``ValueError`` if not."""
306 if arg not in available_options:
--> 307 raise ValueError(
308 "Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version "
309 "(pip install -U sagemaker) for newer {arg_name}s. Supported {arg_name}(s): "
ValueError: Unsupported image scope: eia. You may need to upgrade your SDK version (pip install -U sagemaker) for newer image scopes. Supported image scope(s): training, inference.
The model deploys successfully if I do not provide an accelerator (i.e., no Elastic Inference).
Do the HuggingFace Sagemaker models support EI? If yes, how might I deploy the model successfully with EI? And if not, is EI support on the roadmap?
Much thanks in advance!
|
Hey @schopra,
Sadly, we don’t have EI DLCs yet. We are working on it and it is on the roadmap with one of the highest priorities.
I would update this thread here when I got any news.
| 0 |
huggingface
|
Amazon SageMaker
|
How to deploy a T5 model to AWS SageMaker for fast inference?
|
https://discuss.huggingface.co/t/how-to-deploy-a-t5-model-to-aws-sagemaker-for-fast-inference/11992
|
Hi,
I just watched the video of the Workshop: Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models 2 (11/02/2021) from Hugging Face.
With the information about how to deploy (timeline start: 28:14), I created a notebook instance (type: ml.m5.xlarge) on AWS SageMaker where I uploaded the notebook lab3_autoscaling.ipynb from huggingface-sagemaker-workshop-series >> workshop_2_going_production on GitHub.
I ran it and got a inference time of about 70ms for the QA model (distilbert-base-uncased-distilled-squad). Great!
Then, I changed the model to be loaded from the HF model hub to t5-base with the following code:
hub = {
'HF_MODEL_ID':'t5-base', # model_id from hf.co/models
'HF_TASK':'translation' # NLP task you want to use for predictions
}
I did make the deploy through the following code:
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge"
)
And then, I launched an inference… but the inference time goes up to more than 700ms!
Since in the video (timeline start: 57:05) @philschmid said that there are still models that can not be deployed this way, I would like to check whether T5 models (up to ByT5) are optimized for inference in AWS SageMaker (through ONNX quantization, for example) or not.
If they are not yet optimized (as it looks like), when will they be?
[screenshot: QA vs T5 inference times]
Note: I noticed the same problem about T5 inference through the Inference API (see this thread: How to get Accelerated Inference API for T5 models? ).
|
For large DL models such as transformers, inference on CPU is slower than on GPU, and T5 is much bigger than the DistilBERT used in the demo. 700ms is actually not that bad for a transformer on CPU; try replacing ml.m5.xlarge with ml.g4dn.xlarge to reduce latency.
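For reference, a sketch of the same deployment on a GPU instance:
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",  # GPU instance instead of ml.m5.xlarge
)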
| 0 |
huggingface
|
Amazon SageMaker
|
Sagemaker Serverless Inference
|
https://discuss.huggingface.co/t/sagemaker-serverless-inference/13246
|
Hi there,
I have been trying to use the new serverless feature from Sagemaker Inference, following the different steps very well explained by @juliensimon in his video (using same Image for the container and same ServerlessConfig) to use an HuggingFace model (not fine-tuned on my side). However after having successfully created/deployed all resources (Model, EndpointConfig, Endpoint) and trying to invokeEndpoint, I encountered this error :
'Message': 'An exception occurred from internal dependency. Please contact customer support regarding request ...'
And when looking on cloudwatch, I also get this message :
python: can't open file "/usr/local/bin/deep_learning_container.py": [Errno 13] Permission denied
Error that I don’t get when using invokeEndpoint for a non-serverless Inference Endpoint.
Did someone already encounter this error ?
Thanks in advance !!
|
Hey @YannAgora,
Thanks for opening the thread. We encountered this error as well. See blog post 13
found limitation when testing: Currently, Transformer models > 512MB create errors
We already reported this error to the SageMaker team.
Which model are you trying to use? with which memory configuration?
| 0 |
huggingface
|
Amazon SageMaker
|
Return all class labels from SageMaker invoke_endpoint
|
https://discuss.huggingface.co/t/return-all-class-labels-from-sagemaker-invoke-endpoint/13669
|
I’ve deployed the Hate-speech-CNERG/bert-base-uncased-hatexplain multi-class text classification model (along with others) to a SageMaker endpoint. Using the TextClassificationPipeline you’re able to pass in the return_all_scores=True parameter to see all class labels for the input.
However when using the SageMakerRuntime.invoke_endpoint function, I can only get one class per input. Anyone have any thoughts on an equivalent to the return_all_scores parameter?
|
Hi Will, when deploying a model to a Sagemaker endpoint you can provide a custom inference script in which you can control the behaviour of the model for inference requests. You can find the documentation on how to do this here: Deploy models to Amazon SageMaker
In your particular scenario I would think hat you can override the predict_fn and/or the output_fn methods to get to the desired result. You can find an example for an inference.py file here 2 (note that this particular file is for text summarization, so it won’t apply for your use case).
To see what code you need to get all scores returned you can refer back to the code from the TextClassification Pipeline, that you have already mentioned: transformers/text_classification.py at master · huggingface/transformers · GitHub 2. If you implement this code into your inference script you should be all set.
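A rough sketch of such an inference.py (an assumption on my side, not an official solution; return_all_scores=True makes the pipeline emit every label with its score):
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)


def model_fn(model_dir):
    # Build a pipeline that returns scores for all labels
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    return TextClassificationPipeline(
        model=model, tokenizer=tokenizer, return_all_scores=True
    )


def predict_fn(data, pipeline):
    return pipeline(data["inputs"])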
Hope that helps, let me know if any questions.
Cheers
Heiko
| 0 |
huggingface
|
Amazon SageMaker
|
How can I create lambda function code after deploying bart transformer for summarization in sagemaker?
|
https://discuss.huggingface.co/t/how-can-i-create-lambda-function-code-after-deploying-bart-transformer-for-summarization-in-sagemaker/10295
|
I deployed bart-large-cnn model in aws sagemaker for summarization task by following the code.
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'facebook/bart-large-cnn', # model_id from https://huggingface.co/models
    'HF_TASK':'summarization' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,
    role=role, # iam role with permissions to create an Endpoint
    transformers_version="4.6", # transformers version used
    pytorch_version="1.7", # pytorch version used
    py_version="py36", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)
Then I tested it with text to extract summaries and it performed well.
What code do I need to add in the Lambda function to create a REST API that calls the summarization model endpoint using AWS Lambda and API Gateway?
|
Hello @vihaary,
Great to hear it worked as expected. Invoking your endpoint from an AWS Lambda function is pretty easy: you can either use the boto3 sagemaker-runtime client or install the sagemaker SDK into your Lambda function.
boto3 snippet (the endpoint name and payload below are placeholders):
import boto3
import json

ENDPOINT_NAME = "huggingface-pytorch-inference-xxxx"  # placeholder: name of your deployed SageMaker endpoint
payload = {"inputs": "The text you want to summarize ..."}  # same payload format as predictor.predict

client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(
EndpointName=ENDPOINT_NAME,
ContentType="application/json",
Accept="application/json",
Body=json.dumps(payload),
)
print(response['Body'].read().decode())
sagemaker snippet
from sagemaker.huggingface import HuggingFacePredictor
predictor = HuggingFacePredictor(ENDPOINT_NAME)
response = predictor.predict(payload)
print(response)
| 0 |
huggingface
|
Amazon SageMaker
|
Endpoint reuse & serverless endpoints
|
https://discuss.huggingface.co/t/endpoint-reuse-serverless-endpoints/13828
|
Hi all. I’ve been using the HF SDK for SM, and it’s working well. I have two questions:
It looks like the SDK always deploys Endpoint resources in “real-time” mode. Is support for the new “serverless” mode forthcoming? Could help small devs like me save a lot of money.
There doesn’t seem to be a way to connect HF to an existing Endpoint resource. When my python app restarts, a new Endpoint resource must be created. Are there any plans to allow the SDK to gain control of an existing Endpoint? That would be great because it takes a lot of time to redeploy new Endpoint resources. (And the old ones have to be cleaned up manually too.)
Thank you!
|
Hi! regarding endpoint re-use: is your goal to connect to an endpoint that is already live?
You can use the Predictor class for this:
from sagemaker.huggingface.model import HuggingFacePredictor
predictor = HuggingFacePredictor(endpoint_name="<my existing endpoints>")
| 1 |
huggingface
|
Amazon SageMaker
|
Deploying HG Pipelines on AWS Sagemaker
|
https://discuss.huggingface.co/t/deploying-hg-pipelines-on-aws-sagemaker/13735
|
I need some help with deploying HG Pipelines on AWS Sagemaker
This is how I deploy a pre-trained model in a pipeline in my local environment and get a prediction.
local_model = BertForSequenceClassification.from_pretrained("local_model_path")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", padding=True, truncation=True)
pipe = pipeline("text-classification", model=local_model, tokenizer=tokenizer)
output = pipe(input)
I could not find a way to deploy this HG Pipeline on Sagemaker. I followed the following documentation Deploy models to Amazon SageMaker and tried to use HuggingFaceModel. However, I am not sure how I can provide the other pipeline parameters.
huggingface_model = HuggingFaceModel(
model_data="s3://model.tar.gz",
role=role,
transformers_version="4.12",
pytorch_version="1.9",
py_version='py38'
)
Where and how can I provide the task (text-classification) and tokenizer information? If this is not possible, should I first tokenize my data and give that as an input to predictor? predictor.predict(data_tokenized)?
Any other suggestions? Thanks so much!!!
|
Hi Tom, I think you are on the right path. Provided your model is already on S3 and packaged as tar.gz file you can just use the deploy() method to deploy the model. To get predictions you can use predictor.predict, as you have already pointed out.
This notebook by @philschmid should be useful: notebooks/deploy_transformer_model_from_s3.ipynb at master · huggingface/notebooks · GitHub 1
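To answer the task/tokenizer question concretely, a sketch (paths are placeholders): the task goes in via the HF_TASK environment variable, and the tokenizer is read from the files packaged inside model.tar.gz, so make sure the tokenizer.save_pretrained output is included in the archive:
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # placeholder path
    role=role,
    transformers_version="4.12",
    pytorch_version="1.9",
    py_version="py38",
    env={"HF_TASK": "text-classification"},    # tells the toolkit which pipeline to build
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
print(predictor.predict({"inputs": "I love this!"}))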
Hope that helps, let me know if there are any questions.
Cheers
Heiko
| 0 |
huggingface
|
Amazon SageMaker
|
Access tokenizer from within predict_fn
|
https://discuss.huggingface.co/t/access-tokenizer-from-within-predict-fn/13697
|
I’m trying to overwrite predict_fn for a named-entity recognition task. Mostly because I provide very long sequences. In this case, I need to use the tokenizer to break the long sequence down, and then merge the predictions of all the sub-sequences.
I call the tokenizer as:
sentences = tokenizer(sentence, max_length=max_length, stride=stride, truncation=True,
return_overflowing_tokens=True)
Since I have a stride, I need to properly care for overlapping tokens and their predictions.
I can access the model, as it is received as a parameter of the predict_fn, but how do I access the tokenizer?
Thanks!
|
Hi Rogerio
model_fn actually takes the model directory as a parameter, not the model itself (see here: Deploy models to Amazon SageMaker). This means that you can load the tokenizer within model_fn like so:
tokenizer = AutoTokenizer.from_pretrained(model_dir)
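A minimal sketch of that pattern, returning both objects from model_fn and unpacking them in predict_fn (stride and max_length values are illustrative):
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer


def model_fn(model_dir):
    # Load model and tokenizer from the model artifact directory
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForTokenClassification.from_pretrained(model_dir)
    return model, tokenizer


def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer
    # Split the long sequence into overlapping windows
    encoded = tokenizer(
        data["inputs"],
        max_length=512,
        stride=128,
        truncation=True,
        return_overflowing_tokens=True,
        padding=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(
            input_ids=encoded["input_ids"],
            attention_mask=encoded["attention_mask"],
        ).logits
    # Predictions per window still need to be merged back, accounting for the overlap
    return {"predictions": logits.argmax(-1).tolist()}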
Cheers
Heiko
| 0 |
huggingface
|
Amazon SageMaker
|
Can’t deploy conversational HF model on AWS - Logs say model-path not a valid directory
|
https://discuss.huggingface.co/t/cant-deploy-conversational-hf-model-on-aws-logs-say-model-path-not-a-valid-directory/13613
|
I am using the given template code to make a SageMaker endpoint from my conversational HF model:
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'rachelcorey/DialoGPT-medium-kramer',
'HF_TASK':'conversational'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.12',
pytorch_version='1.9',
py_version='py38',
env={ 'HF_TASK':'conversational' },
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.t2.medium' # ec2 instance type
)
The code has been revised based on the answers from this thread: Deploying a conversational pipeline on AWS - #3 by philschmid which is doing exactly what I want to do. I’ve installed the proper versions of pytorch and transformers.
When I run the code, I get this error:
UnexpectedStatusException: Error hosting endpoint huggingface-pytorch-inference-2022-01-12-15-00-34-671: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..
When I go into the logs, I see this message about 100000 times:
ERROR - Given model-path /opt/ml/model is not a valid directory. Point to a valid model-path directory.
And also this message a couple times, but I assume it’s related to the previous error:
subprocess.CalledProcessError: Command '['model-archiver', '--model-name', 'model', '--handler', 'sagemaker_huggingface_inference_toolkit.handler_service', '--model-path', '/opt/ml/model', '--export-path', '/.sagemaker/mms/models', '--archive-format', 'no-archive', '--f']' returned non-zero exit status 1.
I tried to create this directory in opt/ml/ before running the code but it has no effect on the issue it seems.
According to some research, this is the directory that SageMaker puts trained models… However, my model is on HF, not trained by SageMaker… should I put the model files in that directory? Not sure what to do here. Do I ask AWS support for help with this instead of posting here? Any help is much appreciated, thank you in advance!
|
Hey @rachelcorey,
You are not passing your hub configuration (containing your model_id) into the HuggingFaceModel, meaning you are trying to create an endpoint without a model at all.
you just need to change
env={ 'HF_TASK':'conversational' } => env=hub
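So the corrected model definition would look roughly like this (sketch):
huggingface_model = HuggingFaceModel(
    transformers_version='4.12',
    pytorch_version='1.9',
    py_version='py38',
    env=hub,  # hub already carries HF_MODEL_ID and HF_TASK
    role=role,
)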
| 1 |
huggingface
|
Amazon SageMaker
|
Error 403 when downloading model for Sagemaker batch inference
|
https://discuss.huggingface.co/t/error-403-when-downloading-model-for-sagemaker-batch-inference/12571
|
I am creating a batch job with the code below. However it fails immediately with 403 forbidden client error. My cloudwatch has the following output (full traceback below)
This is an experimental beta features, which allows downloading model from the
Hugging Face Hub on start up. It loads the model defined in the env var `HF_MODEL_ID'
.
immediately followed by:
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/sentence-transformers/all-mpnet-base-v2
after which the batch job fails. Deploying to an endpoint is working fine.
The full code for batch job:
from sagemaker.huggingface import HuggingFaceModel
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'sentence-transformers/all-mpnet-base-v2',
'HF_TASK':'feature-extraction'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
env=hub,
role=role,
)
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.p3.2xlarge',
output_path='s3://kj-temp/hf/out', # we are using the same s3 path to save the output with the input
strategy='SingleRecord'
)
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=test_input,
content_type='application/json',
split_type='Line',
wait = False)
and the full traceback:
Traceback (most recent call last):
File "/usr/local/bin/dockerd-entrypoint.py", line 23, in <module>
serving.main()
File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/serving.py", line 34, in main
_start_mms()
File "/opt/conda/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/opt/conda/lib/python3.6/site-packages/retrying.py", line 206, in call
return attempt.get(self._wrap_exception)
File "/opt/conda/lib/python3.6/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/opt/conda/lib/python3.6/site-packages/six.py", line 719, in reraise
raise value
File "/opt/conda/lib/python3.6/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/serving.py", line 30, in _start_mms
mms_model_server.start_model_server(handler_service=HANDLER_SERVICE)
File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/mms_model_server.py", line 75, in start_model_server
use_auth_token=HF_API_TOKEN,
File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/transformers_utils.py", line 154, in _load_model_from_hub
model_info = _api.model_info(repo_id=model_id, revision=revision, token=use_auth_token)
File "/opt/conda/lib/python3.6/site-packages/huggingface_hub/hf_api.py", line 155, in model_info
r.raise_for_status()
File "/opt/conda/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
|
Can you please retry, there was an issue with loading Sentence Transformers from the Hub.
| 0 |
huggingface
|
Amazon SageMaker
|
ClientErro:400 when using batch transformer for inference
|
https://discuss.huggingface.co/t/clienterro-400-when-using-batch-transformer-for-inference/13476
|
Hi everyone,
I try to do sentiment analysis on a bunch of data and follow the example notebook notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub and below is my code:
from sagemaker.huggingface.model import HuggingFaceModel
hub = {
'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role,
transformers_version='4.6',
pytorch_version="1.7",
py_version='py36',
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.p3.2xlarge',
output_path=output_s3_path,
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=input_s3_path,
content_type="application/json",
split_type="Line")
My input jsonl file is formatted as
[screenshot of the jsonl input format]
The model successfully infers when I feed it the first 10 rows (in total the data is about 6k rows),
but it throws errors when I expand to 100 rows.
I’m super new to SageMaker and Hugging Face. Can anyone tell me what I’m missing? Thank you
|
Sorry, it’s still me; just attaching more pics so the context might be better understood.
I heard BERT has a limitation on text length, so I truncated each line to 460 words.
[screenshot]
| 0 |
huggingface
|
Amazon SageMaker
|
Deploying a conversational pipeline on AWS
|
https://discuss.huggingface.co/t/deploying-a-conversational-pipeline-on-aws/13280
|
I am following the instructions for deploying a custom Microsoft/DialoGPT-medium on AWS Sagemaker. As a first step, I am using the instructions included on the model hub, but they do not work and appear to be out of date.
The code that follows leads to this error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "ConversationalPipeline expects a Conversation or list of Conversations as an input"
}
Code from the documentation:
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'microsoft/DialoGPT-medium',
'HF_TASK':'conversational'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': {
"past_user_inputs": ["Which movie is the best ?"],
"generated_responses": ["It's Die Hard for sure."],
"text": "Can you explain why ?",
}
})
I can make those lists into Conversation objects, but even still I get an error. I am not even sure they will be json serializable anymore.
The larger issue is that I want to use a finetuned Microsoft/DialoGPT-medium that I trained on my local machine. I am following a Hugging Face youtube tutorial for that one, but once again the code presented in the video does not work out of the box.
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data="s3://xxxxxxxxxxxxxxx/model.tar.gz", # path to your trained SageMaker model
role=role, # IAM role with permissions to create an endpoint
transformers_version="4.6", # Transformers version used
pytorch_version="1.7", # PyTorch version used
py_version='py36', # Python version used
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge"
)
This fails with the following error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "(\"You need to define one of the following [\u0027feature-extraction\u0027, \u0027text-classification\u0027, \u0027token-classification\u0027, \u0027question-answering\u0027, \u0027table-question-answering\u0027, \u0027fill-mask\u0027, \u0027summarization\u0027, \u0027translation\u0027, \u0027text2text-generation\u0027, \u0027text-generation\u0027, \u0027zero-shot-classification\u0027, \u0027conversational\u0027, \u0027image-classification\u0027] as env \u0027TASK\u0027.\", 403)"
This is my fourth time deploying a Hugging Face model to AWS Sagemaker, and the process is so incredibly complex and non-intuitive. I feel like the entire process is held up by toothpicks. Thanks for any light you can shed.
|
I think you forgot to add the env task when creating your HuggingFaceModel:
huggingface_model = HuggingFaceModel(
model_data="s3://xxxxxxxxxxxxxxx/model.tar.gz", # path to your trained SageMaker model
role=role, # IAM role with permissions to create an endpoint
transformers_version="4.6", # Transformers version used
pytorch_version="1.7", # PyTorch version used
py_version='py36', # Python version used
env={ 'HF_TASK':'conversational' },
)
| 0 |
huggingface
|
Amazon SageMaker
|
ClientError:400 when using batch transformer on sagemaker for inference
|
https://discuss.huggingface.co/t/clienterror-400-when-using-batch-transformer-on-sagemaker-for-inference/13481
|
Hello everyone,
I’m new to hugging face and try to do sentiment analysis on bunch of texts with batch transformer job , here is my code and I follow through example notebook : notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub 2
Here I rewrite the input json file as {"inputs": "xxxxxxxxxx"} and truncate long texts to 460 words, since the BERT model has a limitation on text length.
[screenshot of the input format]
following is my code :
hub = {
'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role,
transformers_version='4.6',
pytorch_version="1.7",
py_version='py36',
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1,
instance_type='ml.p3.2xlarge',
output_path=output_s3_path,
strategy='SingleRecord')
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=input_s3_path,
content_type="application/json",
split_type="Line")
I tested it with the first 10 rows (the data is 6k rows in total). It did successfully output results; however, when I expand the input to 100 rows, it gives a client error like this:
No older events at this moment. Retry
2022-01-07T13:33:06.824-08:00 2022-01-07T21:33:03.067:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=SINGLE_RECORD
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.019:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: ClientError: 400
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.019:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl:
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.019:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: Message:
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.020:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: {
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.020:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: "code": 400,
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.020:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: "type": "InternalServerException",
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.020:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: "message": "CUDA error: device-side assert triggered"
2022-01-07T13:33:10.825-08:00 2022-01-07T21:33:10.021:[sagemaker logs]: soa-pax-processed/sentiment_hf/month_test.jsonl: }
Can anyone help me on what I’m mssing? Any help is highly appreciated!
|
Hey @miOmiO,
Happy to help you here! To narrow down your issue, I think the first step would be to check whether the dataset created as in the sample (notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub 1) works or whether it also errors out.
Additionally, could you bump the versions of the HuggingFaceModel to the latest ones? For transformers_version that’s 4.12.3 and for pytorch_version it’s 1.9.1; maybe this already solves your issue. You can find the list of available containers here: Reference 3
Also worth testing is to replace your model with a different model, e.g. distilbert-base-uncased-finetuned-sst-2-english
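For reference, a minimal sketch of the model definition with those newer container versions, everything else kept as in your code (role and the rest of the batch job setup are assumed unchanged):
from sagemaker.huggingface import HuggingFaceModel

hub = {
    'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'text-classification'
}
# same model, but with the latest supported container versions
huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version='4.12.3',
    pytorch_version='1.9.1',
    py_version='py38',
)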
| 0 |
huggingface
|
Amazon SageMaker
|
Training Spot Instance: meaning of parameters max_run and max_wait
|
https://discuss.huggingface.co/t/training-spot-instance-meaning-of-parameters-max-run-and-max-wait/13401
|
Hi,
in the notebook sagemaker >> 05_spot_instances >> sagemaker-notebook.ipynb, we need to set up the parameters max_run and max_wait as mentioned in the Hugging Face doc Spot instances.
However, there is no explanation of their meaning.
I searched for more information in the AWS SageMaker docs and found the parameters MaxRuntimeInSeconds and MaxWaitTimeInSeconds:
MaxRuntimeInSeconds: The maximum length of time, in seconds, that a training or compilation job can run.
MaxWaitTimeInSeconds: The maximum length of time, in seconds, that a managed Spot training job has to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the job can run. It must be equal to or greater than MaxRuntimeInSeconds . If the job does not complete during this time, Amazon SageMaker ends the job.
I guess that:
max_run = MaxRuntimeInSeconds
max_wait = MaxWaitTimeInSeconds
max_run
My question about max_run: in the case of training a Hugging Face model with Trainer(), what is a “training or compilation job”?
Is it the whole training job from the first step to the final one (ie, all the epochs)?
Is it the training job by checkpoint (by epoch or by the number of defined steps) and separately, the evaluation job at the end of each checkpoint?
max_wait
My question about max_wait: in the case of training a Hugging Face model with Trainer(), what is a “managed Spot training job”?
(same question 1 as for max_run) whole training job?
(same question 2 as for max_run) training job by checkpoint?
cc @philschmid
|
@pierreguillou To rephrase what @OlivierCR said:
max_run : answer 1: the whole training job
max_wait: none of your answers. This is the max time to get a spot instance allocated, plus the max_run
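To make this concrete, a minimal sketch of how the two parameters are passed to the estimator (script names and values are illustrative only):
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point='train.py',          # your training script
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.12.3',
    pytorch_version='1.9.1',
    py_version='py38',
    use_spot_instances=True,
    max_run=36000,     # the whole training job may run for at most 10 hours
    max_wait=72000,    # time spent waiting for Spot capacity + max_run; must be >= max_run
)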
| 1 |
huggingface
|
Amazon SageMaker
|
Slow inference using most recent docker image
|
https://discuss.huggingface.co/t/slow-inference-using-most-recent-docker-image/12423
|
We have been deploying a BERT model to a SageMaker endpoint using the g4dn.xlarge instance. The deploy is managed with terraform and we use the following image:
{account}.dkr.ecr.{region}.amazonaws.com/huggingface-pytorch-inference:1.7-transformers4.6-gpu-py36-cu110-ubuntu18.04
We’re really happy with the model latency, which is about 0.05 seconds for the typical request.
Now we were experimenting with using the more recent image:
{account}.dkr.ecr.{region}.amazonaws.com/huggingface-pytorch-inference:1.9.1-transformers4.12.3-gpu-py38-cu111-ubuntu20.04
and found that model latency got worse by a factor of about 10, everything else being equal. I noticed that the SageMaker SDK docs say the supported versions of PyTorch, Transformers and Python are 1.7.1, 4.6.1 and 3.6 respectively.
So my question is more of a curiosity - why does the model latency degrade so much on this more recent image? Is there some issue using the GPU here?
Thanks,
Owen
|
Hey @ojturner,
Thank you for opening the Thread. Performance drop should not appear! Can you share which model (at least the architecture and task) you use and which instance type?
| 0 |
huggingface
|
Amazon SageMaker
|
Training compiler for all NLP models?
|
https://discuss.huggingface.co/t/training-compiler-for-all-nlp-models/13178
|
Hi @philschmid,
I very much liked your post “Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler 5” (and its promises of saving time and money when training NLP models) and I tried it with a T5 model.
However, I got the following error: The training task fails due to a missing XLA configuration
I found an article from AWS on “Training Job Fails Due to Missing XLA Configuration 4” but it did not fix the problem.
Do you have any suggestion? Thank you.
|
Hey @pierreguillou,
could you share how you created your Training Job (Python estimator + hyperparameters) and the training script you use?
In addition to this, I am not sure if T5 is well supported yet. As mentioned in my blog:
The Amazon Training Compiler works best with Encoder Type models, like BERT , RoBERTa , ALBERT , DistilBERT .
The Training Compiler currently works best with encoder-type models. In the test I ran a couple of weeks back T5 performed worse than the default training.
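For comparison, here is a minimal sketch of enabling the compiler for an encoder model; the versions, script name and hyperparameters are assumptions based on the blog post, not your setup:
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.11.0',            # versions supported by the Training Compiler DLC at the time
    pytorch_version='1.9.0',
    py_version='py38',
    compiler_config=TrainingCompilerConfig(),  # enables SageMaker Training Compiler
    hyperparameters={'model_name_or_path': 'bert-base-uncased', 'epochs': 3},
)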
| 1 |
huggingface
|
Amazon SageMaker
|
Sagemaker DLC and Log4j
|
https://discuss.huggingface.co/t/sagemaker-dlc-and-log4j/12897
|
Hi all,
I asked a question in the Discord a few days ago and was told to move it here instead.
In the wake of the Log4j vulnerability, should we expect any updated versions of the HuggingFace Deep Learning Container images used for the SageMaker integration 1, or have y’all confirmed that none of your versions are vulnerable in this way?
All the best,
Charles
|
Hey @charlesatftl,
I was in contact with AWS and the SageMaker Team and got the following response from Yogesh Sharma Engineering Manager for the DLCs 2
Hello,
We ran a canary scan to exercise caution and find out if HF DLCs are impacted by the log4j vulnerability. I can confirm that both Hugging Face TensorFlow and Hugging Face PyTorch DLCs are not impacted by the log4j issue.
Some of DLC’s upstream libraries use log4j v1.2 which is old but is not impacted by this CVE (it impacts v2.x). Teams at AWS have decided that we will upgrade the log4j version early January to v2.16 just to be on the latest safe version.
Let us know if you have further questions, thanks.
| 0 |
huggingface
|
Amazon SageMaker
|
Sagemaker not being able to download EleutherAI/gpt-j-6B model from the Hugging Face Hub on start up
|
https://discuss.huggingface.co/t/sagemaker-not-being-able-to-download-eleutherai-gpt-j-6b-model-from-the-hugging-face-hub-on-start-up/12717
|
Trying to create a basic inference endpoint on SageMaker, but I can see that the model downloads 30-40% and then the download restarts, and this loop keeps happening. After 20-30 minutes it just fails.
Tried with different instances as well, but still the same issue persists.
Any help would be really appreciated.
Here is the exact code:
hub = {
    'HF_MODEL_ID': 'EleutherAI/gpt-j-6B',
    'HF_TASK': 'text-generation'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    env=hub,
    role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,       # number of instances
    instance_type='ml.m5.4xlarge'   # ec2 instance type
)
predictor.predict({
    'inputs': "Can you please let us know more details about your "
})
|
Pinging @philschmid here
| 0 |
huggingface
|
Amazon SageMaker
|
Batch Tansform and accents in the json file
|
https://discuss.huggingface.co/t/batch-tansform-and-accents-in-the-json-file/12773
|
Hi.
I used the notebook lab2_batch_transform.ipynb of @philschmid to launch a batch job for inference. My dataset is composed of texts in Portuguese (i.e., with accents).
When I download the generated JSON file and open it with Sublime Text or Notepad, I see that all accented letters were converted as follows:
(...)
{"inputs": "forma\u00e7\u00e3o ...}
(...)
In this example, forma\u00e7\u00e3o is formação.
What do you think? Can I use my JSON file as is, or do I need to solve this problem in order to get (real) accented letters in my JSON file? Thanks.
|
Thanks @philschmid. I tested your code with encoding='utf-8' but it did not change the content of my json file with strange letters instead of letters with accents.
However, I just found the complementary code in this post 1 that solves my problem: ensure_ascii=False as an argument of json.dump().
Here the code I use now (taken from notebook lab2_batch_transform.ipynb and modified with the 2 cited arguments):
with open(dataset_csv_file, "r+") as infile, open(dataset_jsonl_file, "w+", encoding='utf-8') as outfile:
reader = csv.DictReader(infile)
for row in reader:
json.dump(row, outfile, ensure_ascii=False)
outfile.write('\n')
| 1 |
huggingface
|
Amazon SageMaker
|
Batch Tansform and API token for private model
|
https://discuss.huggingface.co/t/batch-tansform-and-api-token-for-private-model/12774
|
Hi.
I used the notebook lab2_batch_transform.ipynb 2 of @philschmid to launch a batch job for inference.
The model I used is in private mode on the HF model hub. I do not see which argument to use in the following code from the notebook in order to pass the API_TOKEN:
from sagemaker.huggingface.model import HuggingFaceModel
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':model_checkpoint,
'HF_TASK':'text2text-generation'
}
Who knows the answer? Thanks.
|
I just found the answer in github at Environment variables: HF_API_TOKEN
from sagemaker.huggingface.model import HuggingFaceModel
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':model_checkpoint,
'HF_TASK':'text2text-generation',
'HF_API_TOKEN':API_TOKEN
}
| 1 |
huggingface
|
Amazon SageMaker
|
How to export training logs from CloudWatch
|
https://discuss.huggingface.co/t/how-to-export-training-logs-from-cloudwatch/12698
|
Hi,
When I launch a model training in AWS SageMaker & HF Training DLC, there is a training job created in AWS SageMaker that has a link to CloudWatch > Logs > Logs groups > /aws/sagemaker/TrainingJobs > job name of my training > Log events where I can see the logs of my training.
Great.
Now, I would like to export all these logs to a txt file. Strange but I do not see a button for that.
Who knows how to do it? Thanks.
|
Hi! you can either export logs to S3 via the log export feature:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html 1
You can also query the logs via SDK, for example CloudWatchLogs — Boto3 Docs 1.20.23 documentation 2
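If you just want a local txt file, a minimal boto3 sketch is below; the job name is a placeholder and the log group is the standard SageMaker one you mentioned:
import boto3

logs = boto3.client("logs")
log_group = "/aws/sagemaker/TrainingJobs"
job_name = "my-training-job-name"   # placeholder

lines, kwargs = [], {}
while True:
    # fetch all log events whose stream name starts with the training job name
    resp = logs.filter_log_events(
        logGroupName=log_group,
        logStreamNamePrefix=job_name,
        **kwargs,
    )
    lines += [event["message"] for event in resp["events"]]
    if "nextToken" not in resp:
        break
    kwargs["nextToken"] = resp["nextToken"]

# write everything to a local txt file
with open(f"{job_name}.txt", "w") as f:
    f.write("\n".join(lines))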
| 0 |
huggingface
|
Amazon SageMaker
|
Spot instances with Sagemaker batch transform?
|
https://discuss.huggingface.co/t/spot-instances-with-sagemaker-batch-transform/12681
|
Is it possible to use spot instances for batch transform? I don’t see it in either of these places:
https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html
https://sagemaker.readthedocs.io/en/stable/api/inference/transformer.html
Trying to pass “use_spot_instances=True” to either the HuggingFaceModel, HuggingFaceModel.transformer or HuggingFaceModel.transformer.transform() gives an error. I am using this notebook as an example.
github.com
huggingface/notebooks/blob/master/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb
Thanks
|
Not possible today. If you want to use Spot for inference, you can write your inference code as a custom Python script and run it via the Training API / HuggingFace Estimator.
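A rough sketch of that workaround, where the scoring script name, channel name and values are hypothetical:
from sagemaker.huggingface import HuggingFace

# run a custom batch-scoring script as a (Spot) training job
scoring_job = HuggingFace(
    entry_point='batch_score.py',   # hypothetical script that loads the model and writes predictions to /opt/ml/model
    source_dir='./scripts',
    instance_type='ml.g4dn.xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.12.3',
    pytorch_version='1.9.1',
    py_version='py38',
    use_spot_instances=True,
    max_run=7200,
    max_wait=10800,
)
# the input data channel is mounted under /opt/ml/input/data/<channel> inside the job
scoring_job.fit({'inference': input_s3_path})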
| 0 |
huggingface
|
Amazon SageMaker
|
How to access to /opt/ml/model before the end of the model training?
|
https://discuss.huggingface.co/t/how-to-access-to-opt-ml-model-before-the-end-of-the-model-training/12669
|
Hi,
In hyperparameters of my notebook in AWS SageMaker & HF Training DLC, I defined:
'output_dir': '/opt/ml/model'
In the CloudWatch logs, I can see for example:
Model weights saved in /opt/ml/model/checkpoint-6000/pytorch_model.bin
I would like to test this checkpoint-6000 in another notebook without waiting for the end of the model training.
How can I access the content of /opt/ml/model/checkpoint-6000/ from the AWS console or inside another notebook? (I do not want to use the AWS CLI in a terminal.)
Thanks.
|
Hi @pierreguillou; the content of /opt/ml/model is accessible only at the end of training (and it will be compressed into a model.tar.gz, which will be enormous if you save dozens/hundreds of transformer checkpoints there). If you want to export models to S3 during training, without interruption, in the same file format as saved locally, you need to save them in /opt/ml/checkpoints and specify the S3 sync location in your SDK call with the parameter checkpoint_s3_uri. Then use the AWS CLI or boto3 download_file to bring them from S3 back to your notebook.
See here the doc of the checkpoint feature, which is in my opinion one of the best features of SageMaker
Use Checkpoints in Amazon SageMaker - Amazon SageMaker 2
notebooks/sagemaker-notebook.ipynb at master · huggingface/notebooks · GitHub
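A minimal sketch of that setup (bucket, prefixes and hyperparameters are placeholders):
from sagemaker.huggingface import HuggingFace

checkpoint_s3_uri = "s3://my-bucket/checkpoints/my-job"   # placeholder

huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.12.3',
    pytorch_version='1.9.1',
    py_version='py38',
    checkpoint_s3_uri=checkpoint_s3_uri,        # synced continuously from the local checkpoint path
    checkpoint_local_path='/opt/ml/checkpoints',
    hyperparameters={'output_dir': '/opt/ml/checkpoints'},  # make the Trainer save checkpoints there
)

# later, from another notebook, pull a file back with boto3 (no CLI needed):
import boto3
boto3.client("s3").download_file(
    "my-bucket",
    "checkpoints/my-job/checkpoint-6000/pytorch_model.bin",
    "pytorch_model.bin",
)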
| 0 |
huggingface
|
Amazon SageMaker
|
Training Metrics in AWS SageMaker
|
https://discuss.huggingface.co/t/training-metrics-in-aws-sagemaker/12513
|
Hi,
in the notebook 06_sagemaker_metrics 1 / sagemaker-notebook.ipynb, there is the code to get training and eval metrics at the end of the training from the HuggingFaceEstimator.
How can we get them DURING the training?
Great, but I don’t understand how we can get them DURING the training to check how good (or not) the training is (for example, to detect overfitting and then stop training before the last epoch).
My idea was to create a duplicate notebook (without running fit() in this duplicated one) for that purpose. The following text in the notebook seems to say that it is possible, but how can we do that by specifying the exact training job name in the TrainingJobAnalytics API call? Thanks.
Note that you can also copy this code and run it from a different place (as long as connected to the cloud and authorized to use the API), by specifying the exact training job name in the TrainingJobAnalytics API call.
Problem: “Warning: No metrics called eval_loss found”
I have a second question.
I used the metrics code (copy/paste) from 06_sagemaker_metrics 1 / sagemaker-notebook.ipynb in a NER finetuning notebook on AWS SageMaker.
The code of my NER notebook uses directly the script run_ner.py from github (through the argument git_config in my Hugging Face Estimator).
metric_definitions=[
{'Name': 'loss', 'Regex': "'loss': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'learning_rate', 'Regex': "'learning_rate': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_loss', 'Regex': "'eval_loss': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_accuracy', 'Regex': "'eval_accuracy': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_f1', 'Regex': "'eval_f1': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_precision', 'Regex': "'eval_precision': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_recall', 'Regex': "'eval_recall': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_runtime', 'Regex': "'eval_runtime': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'eval_samples_per_second', 'Regex': "'eval_samples_per_second': ([0-9]+(.|e\-)[0-9]+),?"},
{'Name': 'epoch', 'Regex': "'epoch': ([0-9]+(.|e\-)[0-9]+),?"}]
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.12.3'}
huggingface_estimator = HuggingFace(
entry_point = 'run_ner.py',
source_dir = './examples/pytorch/token-classification',
git_config = git_config,
(...),
metric_definitions = metric_definitions,
)
I have no problem with training, but when I want to display the metrics, most of them are not found (see the following screenshot):
(screenshot: the “Warning: No metrics called eval_loss found” messages for most of the defined metrics)
I compared the logging-related code in the 2 scripts and they are different.
In the train.py:
# Set up logging
logger = logging.getLogger(__name__)
logging.basicConfig(
level=logging.getLevelName("INFO"),
handlers=[logging.StreamHandler(sys.stdout)],
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
In the run_ner.py:
logger = logging.getLogger(__name__)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
Is this the reason for the problem?
|
@pierreguillou - how to find the metrics: in addition to dashboarding them in CloudWatch via the path provided by Philipp (job detail page in the console, then the “algorithm metrics” link at the bottom), you can also pull them in real time with the SDK:
from sagemaker.analytics import TrainingJobAnalytics
df = TrainingJobAnalytics(training_job_name="jobname").dataframe()
(screenshot: the resulting metrics dataframe)
| 1 |
huggingface
|
Amazon SageMaker
|
Finetuning sentence embedding model with SageMaker - how to compute loss?
|
https://discuss.huggingface.co/t/finetuning-sentence-embedding-model-with-sagemaker-how-to-compute-loss/12568
|
I’m looking for a model that will return an embedding vector that can be used in downstream classification tasks. I have been able to deploy the pretrained model sentence-transformers/all-mpnet-base-v2 · Hugging Face 1
to an endpoint and get embeddings from it. However, when I try to fine-tune the model with huggingface_estimator, I get an error because the model does not appear to return a loss, since it has no labels (the data I am supplying is simply more domain-specific text examples). How can I pass a loss when fine-tuning a sentence transformer?
So first I used the HuggingFaceModel from the sagemaker toolkit:
from sagemaker.huggingface import HuggingFaceModel
hub = {
'HF_MODEL_ID':'sentence-transformers/all-mpnet-base-v2',
'HF_TASK':'feature-extraction'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
env=hub,
role=role,
)
In this example from Amazon, Fine-tune and host Hugging Face BERT models on Amazon SageMaker | AWS Machine Learning Blog 2, they first fine-tune the model using the huggingface_estimator, and then they create a new HuggingFaceModel object and pass the model_data from the huggingface_estimator fine-tuning job. Like this:
from sagemaker.huggingface.model import HuggingFaceModel
huggingface_model = sagemaker.huggingface.HuggingFaceModel(
env={ 'HF_TASK':'sentiment-analysis' },
model_data=huggingface_estimator.model_data,
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6.1", # transformers version used
pytorch_version="1.7.1", # pytorch version used
py_version='py36', # python version
)
My problem is that when I try to fine-tune on my data, since it is not a classifier, the model output does not return a loss and I get a key error from compute_loss() in transformers/trainer.py (transformers/trainer.py at 3977b58437b8ce1ea1da6e31747d888efec2419b · huggingface/transformers · GitHub):
KeyError: 'loss'
Does it make sense to fine-tune an embedding model? Is there a way to pass a loss function and have it included in the model output?
I am just passing it additional unlabelled data. How would one do this in SageMaker, given that it is a feature-extraction model and not a classification/prediction model?
Thanks
|
Hello @kjackson,
How did you try to fine-tune it? The code you shared only shows how to deploy it.
We have a detailed “Getting Started” example with video support to run your first training on Amazon SageMaker: Get started 4. This might help you get started.
Does it make sense to finetune an embedding model? Is there a way to pass a loss function and have it included in the model output.
Yes, it makes sense to further fine-tune a language model to let it “learn” a new context/domain.
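If the goal is domain adaptation on unlabeled text, one common route is masked-language-modeling fine-tuning. A minimal sketch using the stock run_mlm.py example script follows; the script arguments, paths and versions are assumptions, not a verified recipe for this exact model:
from sagemaker.huggingface import HuggingFace

git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.12.3'}

huggingface_estimator = HuggingFace(
    entry_point='run_mlm.py',
    source_dir='./examples/pytorch/language-modeling',
    git_config=git_config,
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.12.3',
    pytorch_version='1.9.1',
    py_version='py38',
    hyperparameters={
        'model_name_or_path': 'sentence-transformers/all-mpnet-base-v2',
        'train_file': '/opt/ml/input/data/train/domain_text.txt',   # plain-text file with your domain sentences
        'do_train': True,
        'output_dir': '/opt/ml/model',
    },
)

huggingface_estimator.fit({'train': 's3://my-bucket/domain-text/'})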
| 0 |
huggingface
|
Amazon SageMaker
|
Huggingface_hub integration: ModuleNotFoundError: No module named ‘huggingface_hub’
|
https://discuss.huggingface.co/t/huggingface-hub-integration-modulenotfounderror-no-module-named-huggingface-hub/12511
|
Hi Philipp,
I have been trying to use the new push-to-hub functionality in my script and I could not even get past the installation. I ran the !pip install "sagemaker==2.69.0" "transformers==4.12.3" --upgrade command and for some reason sagemaker is not getting updated.
I am using a notebook instance.
Thanks,
Jorge
|
Hey @Jorgeutd,
The SageMaker notebooks currently contain an old version of boto3 that is incompatible with the latest datasets and transformers.
That’s how I solved it:
!pip install "sagemaker>=2.69.0" "transformers==4.12.3" --upgrade
# using an older datasets version due to the incompatibility of the SageMaker notebook & aws-cli with s3fs and fsspec >= 2021.10
!pip install "datasets==1.13" --upgrade
BTW, we also have an example notebook on how to push models to the Hub during SageMaker training.
github.com
huggingface/notebooks/blob/master/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb 1
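The key idea in that notebook is that the Trainer's push-to-hub arguments are passed as SageMaker hyperparameters; a rough sketch (model, repo name and token variable are placeholders):
hyperparameters = {
    'model_name_or_path': 'distilbert-base-uncased',
    'output_dir': '/opt/ml/model',
    'push_to_hub': True,                            # the Trainer pushes checkpoints to the Hub
    'hub_model_id': 'my-user/my-finetuned-model',   # placeholder repo name
    'hub_strategy': 'every_save',
    'hub_token': HUGGINGFACE_TOKEN,                 # assumes you defined this variable earlier
}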
| 0 |