Apply for community grant: Academic project (gpu and storage)

#2
by abinayam - opened
AI Tamil Nadu org

AI Tamil Nadu (https://aitamilnadu.org/) is a non-profit hyperlocal community that aims to democratize AI, especially in Tamil Nadu, India. We are a group of volunteers who spend our free time running technical events and sharing AI knowledge with the general public.

We're collaborating with CIP (https://cip.org/) to build a constitution around a Tamil LLM by involving people from the AITN community (15-20 people to start with). We want to convene a small Tamil-speaking group to run a Pol.is, generate principles for a collective constitution, prompt a default Tamil model (HF Space: https://huggingface.co/spaces/aitamilnadu/LLaMa_Advisor_Tamil) based on those principles, and assess how well it behaves. The bot is designed to be a lifestyle/relationship advisor for the Tamil community, where people can prompt the model and get advice.

The objective is to first run an MVP to gather the Tamil community's opinions on the model's actual behavior versus its expected behavior, build a constitution around that feedback, and use it to build better models for Tamil. This activity is run as an open-source initiative. It would be really great if we could get GPU and persistent storage support for this activity from Hugging Face for the duration of the planned project (approximately a month or two for the MVP, extending to a longer timeframe when we open it to the general Tamil public). Awaiting a positive response.

Regards,
Abinaya Mahendiran
On behalf of AI Tamil Nadu

Hi @abinayam , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.

As this Space seems to be a modified version of the llama 2 Space, I thought it should be able to run on Zero as is, but I'm seeing this error:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 115, in <module>
    gr.Examples(examples=examples, inputs=[msg], label="எடுத்துக்காட்டுகள் / Examples")
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 56, in create_examples
    examples_obj = Examples(
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 154, in __init__
    raise ValueError("If caching examples, `fn` and `outputs` must be provided")
ValueError: If caching examples, `fn` and `outputs` must be provided
IMPORTANT: You are using gradio version 4.27.0, however version 4.29.0 is available, please upgrade.

I think you can fix this by simply upgrading gradio to 4.29.0. You can change the version in the README.md: https://huggingface.co/spaces/aitamilnadu/tamil-ai-assistant/blob/45634489e48314354f4feb9fef9506265073f885/README.md?code=true#L7
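For reference, a Gradio Space pins its SDK version in the YAML front matter at the top of README.md, so the upgrade is a one-line change there. The `title` and `app_file` values below are illustrative placeholders; keep your Space's existing values:

```
---
title: LLaMa Advisor Tamil   # illustrative placeholder
sdk: gradio
sdk_version: 4.29.0          # bumped from 4.27.0
app_file: app.py             # illustrative placeholder
---
```

The Space rebuilds automatically after the README is committed.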

BTW, on Spaces you can use 50GB of ephemeral storage (https://huggingface.co/docs/hub/en/spaces-storage#persistent-storage-specs), and most Spaces don't need persistent storage, but let us know if you actually need it.

AI Tamil Nadu org

Hi @hysts: Thank you :) Let me fix those errors, and I'll also check whether the 50GB ephemeral storage is enough (it should be fine) and let you know if we need persistent storage.
