Error: Could not locate file 'codegen-350M-mono/resolve/main/onnx/encoder_model_quantized.onnx' with transformers.js

#1
by riosje - opened

Hi guys, I'm trying to get started with this model using the transformers.js package, but I get this error:

Error: Could not locate file: "https://huggingface.co/Xenova/codegen-350M-mono/resolve/main/onnx/encoder_model_quantized.onnx".

but when I use the model from the examples, it works properly:
Xenova/distilbert-base-uncased-finetuned-sst-2-english

This is my sample code:

import { pipeline } from '@xenova/transformers';

let code = await pipeline('text2text-generation', 'Xenova/codegen-350M-mono');
let result = await code('Write bash script to say hello world');
console.log(result);

Hi there. So, codegen is a text-generation model, not a text2text-generation model. In other words, it is a decoder-only language model (and not an encoder-decoder model).

So, the correct usage is:

import { pipeline } from '@xenova/transformers';

let code = await pipeline('text-generation', 'Xenova/codegen-350M-mono');
let result = await code('Write bash script to say hello world');
console.log(result);

However, making this mistake is completely understandable, since we currently do not show the task on the model's page. You should see that update very soon.

Hi guys, thank you so much for taking the time to answer.
I'm still learning about all this; I've learned that the text2text-generation models are the T5, T0, and BART models.
How can I identify the text-generation models uploaded under Xenova/*?

https://huggingface.co/tasks/text-generation#text-to-text-generation-models

I got this error while running that sample code, but I guess that error is because I'm using it in the wrong way, so...

Error: Non-zero status code returned while running If node. Name:'optimum::if' Status Message: Non-zero status code returned while running Gather node. Name:'/transformer/h.0/attn/Gather_14' Status Message: indices element out of data bounds, idx=2048 must be within the inclusive range [-2048,2047]
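That "idx=2048 must be within the inclusive range [-2048,2047]" error usually means generation ran past the model's 2048-token context window (prompt tokens plus generated tokens). A minimal sketch of budgeting new tokens against that limit (the helper function and the token counts are illustrative, not part of the transformers.js API; in practice you would get real token counts from the pipeline's tokenizer):

```javascript
// Illustrative helper: cap the number of new tokens so that
// promptTokens + newTokens never exceeds the context window.
function clampNewTokens(promptTokens, requestedNewTokens, contextWindow = 2048) {
  const available = contextWindow - promptTokens;
  if (available <= 0) {
    throw new Error(
      `Prompt (${promptTokens} tokens) already fills the ${contextWindow}-token context window`
    );
  }
  return Math.min(requestedNewTokens, available);
}

// Example: a 2000-token prompt leaves room for at most 48 new tokens.
console.log(clampNewTokens(2000, 500)); // 48
```

The resulting value can then be passed as the `max_new_tokens` generation option so the model never indexes past position 2047.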

Thank you very much, guys.

How can I identify the text-generation models uploaded under Xenova/*?

We'll soon add filtering so you can choose based on task.

I got this error while running that sample code, but I guess that error is because I'm using it in the wrong way, so...

Can you provide the code you used?

Just an update on this: You can now filter transformers.js models on the Hub! For example, https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending will show all text-generation models that are compatible with Transformers.js
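The same filter can also be queried programmatically via the Hub's public models API, which accepts the same `pipeline_tag` and `library` parameters as the web URL above. A small sketch (assumes network access and Node 18+ for the built-in `fetch`):

```javascript
// Query the Hugging Face Hub API for Transformers.js-compatible
// text-generation models, mirroring the filtered Hub URL.
const url = 'https://huggingface.co/api/models'
  + '?pipeline_tag=text-generation&library=transformers.js&limit=5';

fetch(url)
  .then((res) => res.json())
  .then((models) => {
    // Each entry includes the repo id, e.g. "Xenova/codegen-350M-mono".
    for (const m of models) console.log(m.id);
  });
```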

@Xenova - this error appears with other models when using lots of context, i.e. the same error @riosje mentioned!
