m-ric posted an update (Jul 25)
Agentic Data analyst: drop your data file, let the LLM do the analysis 📊⚙️

Need to do some quick exploratory data analysis? ➡️ Get help from an agent.

I was impressed by Llama-3.1's capacity to derive insights from data. Given a CSV file, it makes quick work of exploratory data analysis and surfaces interesting insights.

On the data from the Kaggle Titanic challenge, which records which passengers survived the sinking of the Titanic, it was able to derive interesting trends on its own, like "passengers who paid higher fares were more likely to survive" or "the survival rate was much higher for women than for men".
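Under the hood, the setup is roughly the following (a minimal sketch against the transformers agents API; class names like HfApiEngine vary a bit between library versions, and 'titanic/train.csv' is a placeholder path):

```python
# Minimal sketch: a ReAct code agent doing EDA on a CSV file via the
# serverless Inference API. Class names may differ across transformers versions.
from transformers.agents import HfApiEngine, ReactCodeAgent

llm_engine = HfApiEngine(model="meta-llama/Meta-Llama-3.1-70B-Instruct")

agent = ReactCodeAgent(
    tools=[],
    llm_engine=llm_engine,
    # Let the agent's generated code import the usual data-analysis stack.
    additional_authorized_imports=["pandas", "numpy", "matplotlib.pyplot", "seaborn"],
)

# 'titanic/train.csv' is a placeholder path for the Kaggle Titanic data.
analysis = agent.run(
    "Load the data in 'titanic/train.csv' with pandas, run an exploratory "
    "data analysis, and report the most interesting trends you find."
)
print(analysis)
```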

The cookbook even lets the agent build its own submission to the challenge, and it ranks in the top 3,000 out of 17,000 submissions: 👍 not bad at all!

Try it for yourself in this Space demo 👉 m-ric/agent-data-analyst

Thanks for providing this very nice example.

I have tried to reproduce the example both in Colab and locally, without success: I get errors when calling the APIs. I have an HF token and have also received permission to use the Meta-Llama-3.1 model on HF.

Part of the error output on Colab:
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)

Rate limit reached. Please log in or use a HF access token

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 711, in direct_run
step_logs = self.step()
File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 883, in step
raise AgentGenerationError(f"Error in generating llm output: {e}.")
transformers.agents.agents.AgentGenerationError: Error in generating llm output: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)

Rate limit reached. Please log in or use a HF access token.
Reached max iterations.
NoneType: None
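Looking at the trace, it seems my requests are going out unauthenticated. One thing I may try, as a sketch (assuming the token simply isn't being picked up from the environment):

```python
# Sketch of one possible fix: authenticate explicitly so calls to the
# serverless Inference API are not rate-limited as anonymous requests.
import os
from huggingface_hub import login

# HF_TOKEN is assumed to hold a valid Hugging Face access token.
login(token=os.environ["HF_TOKEN"])
```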

"Wow!!" is all I have to say. The ReAct framework is simply a game changer.

Thanks so much for this demo. I've made my own implementation of it here. Please check it out and let me know what you think.
https://huggingface.co/spaces/dkondic/data-analyst

I've made the base prompt a bit more general, so the user can add their own specifics or just ask a simple question.
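Concretely, the generalization is just prompt composition, roughly like this (a sketch; the names are illustrative, not the exact code from my Space):

```python
# Sketch of the generalized prompt: a fixed base prompt plus whatever
# specifics the user types in. Names here are illustrative placeholders.
BASE_PROMPT = (
    "You are acting as a data analyst. Load the file '{file_path}' with pandas "
    "and answer the user's request, showing the code you run.\n\nUser request: "
)

def build_prompt(file_path: str, user_request: str) -> str:
    # The user's free-form question (or detailed instructions) is appended
    # to the fixed base prompt.
    return BASE_PROMPT.format(file_path=file_path) + user_request
```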

I did notice the max token count is 8.2k, even with Llama 3.1. Is that by design, or a bug somewhere?

I'm also having a hard time understanding the difference between code agents and regular agents. Could you provide some insight on those?
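For what it's worth, my current understanding is roughly the sketch below (assuming transformers.agents around v4.43): a ReactCodeAgent writes each action as a Python snippet that gets executed, while a ReactJsonAgent emits JSON tool calls instead. I'd love confirmation.

```python
# Sketch of the two ReAct agent flavors in transformers.agents. Both iterate
# think -> act -> observe, but express their actions differently.
from transformers.agents import ReactCodeAgent, ReactJsonAgent

# ReactCodeAgent: the LLM writes a Python snippet at each step, which is executed.
code_agent = ReactCodeAgent(tools=[])

# ReactJsonAgent: the LLM emits a JSON-formatted tool call at each step.
json_agent = ReactJsonAgent(tools=[])
```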

Thanks again!