Thomas Tong PRO
gtvracer

AI & ML interests
None yet

Organizations
Santa Cruz AI Community

gtvracer's activity

posted an update about 22 hours ago
Is HuggingFace having issues with meta-llama/Llama-3.2-3B-Instruct? InferenceClient isn't returning any results.
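For reference, this is roughly the kind of call that returns nothing (a minimal sketch, not my exact code; the prompt is a placeholder and a valid HF token is assumed in the environment):

from huggingface_hub import InferenceClient

# Sketch of the failing call; assumes HF_TOKEN is set in the environment.
client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")
out = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=32,
)
print(out.choices[0].message.content)  # no result comes back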
  • 3 replies
replied to their post 26 days ago
replied to their post 27 days ago

ok, it came back on... HF! What happened?!

replied to their post 27 days ago
posted an update 27 days ago
posted an update 6 months ago
Model is always disabled?
#script...
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "distilbert/distilgpt2",
    token="xxxxxx",
)

That loads the model fine. But if the model is passed to the index returned from VectorStoreIndex for Qdrant like this:

#script...
query_engine = index_from_nodes.as_query_engine(llm=model, streaming=True)

response = query_engine.query("What is formula 1?")

response.print_response_stream()

it errors out with an assertion saying the LLM is disabled:
AssertionError                            Traceback (most recent call last)
Cell In[34], line 1
----> 1 query_engine = index_from_nodes.as_query_engine(llm=model, streaming=True)
      3 response = query_engine.query(
      4     "What is formula 1?"
      5 )
      7 response.print_response_stream()

File ~/miniconda/lib/python3.9/site-packages/llama_index/core/indices/base.py:376, in BaseIndex.as_query_engine(self, llm, **kwargs)
    370 from llama_index.core.query_engine.retriever_query_engine import (
    371     RetrieverQueryEngine,
    372 )
    374 retriever = self.as_retriever(**kwargs)
    375 llm = (
--> 376     resolve_llm(llm, callback_manager=self._callback_manager)
    377     if llm
    378     else Settings.llm
    379 )
    381 return RetrieverQueryEngine.from_args(
    382     retriever,
    383     llm=llm,
    384     **kwargs,
    385 )

File ~/miniconda/lib/python3.9/site-packages/llama_index/core/llms/utils.py:102, in resolve_llm(llm, callback_manager)
     99     print("LLM is explicitly disabled. Using MockLLM.")
    100     llm = MockLLM()
--> 102 assert isinstance(llm, LLM)
    104 llm.callback_manager = callback_manager or Settings.callback_manager
    106 return llm

AssertionError:

So why is the LLM disabled?
Thanks!
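In case it helps anyone answer: from the traceback, resolve_llm asserts isinstance(llm, LLM), so a raw transformers model apparently doesn't count as a llama_index LLM. A sketch of the wrapping I'd expect to be needed (assuming the llama-index-llms-huggingface package is installed; names mirror the snippet above):

from llama_index.llms.huggingface import HuggingFaceLLM

# Sketch: wrap the checkpoint in a LlamaIndex LLM so the
# isinstance(llm, LLM) check in resolve_llm passes.
llm = HuggingFaceLLM(
    model_name="distilbert/distilgpt2",
    tokenizer_name="distilbert/distilgpt2",
)

query_engine = index_from_nodes.as_query_engine(llm=llm, streaming=True)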
  • 1 reply
replied to their post 6 months ago
replied to their post 6 months ago

They were all default files loaded when the space was created. So the template they're using isn't synced, or is mislabeled, for a Gradio/chatbot project.

You'd expect them to compile and run out of the box, not to have to adjust them to the environment you chose (ZeroGPU).

replied to their post 6 months ago

Thanks John6666! Adding the import and the spaces.GPU decorator fixed the issue, and the base ZeroGPU Gradio app runs!

Hope HuggingFace fixes their template so others don't face this as their first experience with the platform, because it really doesn't look good if the basic project supplied by HF won't run.
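For anyone searching later, the change was along these lines (a sketch; the function name generate is illustrative, not the template's exact code):

import spaces  # required import on ZeroGPU Spaces

@spaces.GPU  # requests a GPU slice for the duration of the call
def generate(prompt):
    # ... run model inference here ...
    return prompt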

reacted to their post with 👀 6 months ago
posted an update 6 months ago
Hello Everyone,
I signed up as Pro and started a ZeroGPU space with the default Gradio chatbot project. When building the space, it won't even start the sample Gradio app. Pretty disappointing when it fails right out of the box...

Has anyone encountered this yet?
Thanks...

This is the output. It's odd, since it seems to be just a warning, so why wouldn't it start?

/usr/local/lib/python3.10/site-packages/gradio/components/chatbot.py:228: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.
warnings.warn(
* Running on local URL: http://0.0.0.0:7860, with SSR ⚡

To create a public link, set share=True in launch().

Stopping Node.js server...
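(Side note: the warning itself points at the deprecated tuples chat format; per the message, the fix would be opting into the messages format when the component is built. A sketch, assuming the template uses a plain gr.Chatbot:)

import gradio as gr

# Sketch: use the openai-style messages format the warning recommends.
chatbot = gr.Chatbot(type="messages")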
  • 6 replies