How is the Agent supposed to work?

#2
by Maverick17 - opened

In your prompt you're writing:

{"type": "text", "text": "Task instruction: to allow the user to enter their first name\nHistory: null" },

Would you please clarify what "History" is supposed to contain? Should I append every executed instruction to History, separated by commas (task1, task2, ..., taskn)? Or is it something like appending the assistant-role output to the overall messages list?
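To make the two options concrete, this is what I mean (the comma-separated format and the field layout below are just my guesses, not something from the model card):

```python
# Option 1 (my guess): "History" is a flat, comma-separated string of the
# previously executed steps, placed in the same text field as the instruction.
executed_steps = []  # e.g. ["task1", "task2"] after a couple of turns
history = ", ".join(executed_steps) if executed_steps else "null"

user_content = [
    {"type": "image"},  # the current screenshot
    {
        "type": "text",
        "text": "Task instruction: to allow the user to enter their first name\n"
                f"History: {history}",
    },
]

# Option 2 (my guess): keep "History: null" and instead append every assistant
# turn to the running messages list, chat-style.
messages = [{"role": "user", "content": user_content}]
messages.append({"role": "assistant", "content": "<model output for step 1>"})
```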

But then, there is no "planner", so how does the agent decide which alternative trajectories to consider? Additionally, I have observed that the "thought" the agent produces does not reflect a good reasoning process. Of course, it's just a 7B model, so I wasn't expecting strong agent behaviour. Nice try anyway :)

Hello,

Just wondering how you got the model to work. It seems the starter transformers code lists invalid models and processors, e.g. /nas/shared/NLP_A100/wuzhenyu/ckpt/20240928_finetune_qwen_7b_3m_imgsiz_1024_bs_1024_lr_1e-7_wd_1e-3_mixture. I put OS-Atlas-Pro-7B in its place but am getting some issues. Could you please share your model configuration?
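Concretely, this is roughly what I'm trying in place of the internal path; the repo id and the Qwen2-VL classes are just my guesses based on the checkpoint name, so please correct me if your configuration looks different:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# Guessed repo id for the published checkpoint; the starter code's /nas/... path
# is an internal one. Double-check this against the model card.
model_id = "OS-Copilot/OS-Atlas-Pro-7B"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",   # bf16/fp16 on GPU if available
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```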

Hello @aswad546 ,

I just implemented a kind of ReAct loop, but as I've written, this model simply can't work very well as an agent because reasoning was largely missing from its training.
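Roughly, the loop looked like this (a simplified sketch, not my exact code; the model call, screenshot grabber, and executor are placeholder callables):

```python
def react_loop(task, model_step, execute_action, capture_screenshot, max_steps=10):
    """A simplified ReAct-style loop: observe -> think -> act, appending each
    executed action to the History string passed into the next model call.

    model_step, execute_action and capture_screenshot are placeholders for the
    VLM call, the device controller and the screenshot grabber respectively."""
    executed = []
    for _ in range(max_steps):
        history = ", ".join(executed) if executed else "null"
        screenshot = capture_screenshot()
        thought, action = model_step(task, history, screenshot)
        if action.strip().upper().startswith("FINISH"):
            break
        execute_action(action)
        executed.append(action)
    return executed
```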

Maverick17 changed discussion status to closed

Thanks @Maverick17 for your response. Do you know of any good open-source agents that do not use paid models like GPT under the hood?

@aswad546 It depends on how difficult the task for the agent is. Currently the only way to go is to build on top of GPT-4o plus a small language-action model.
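Very roughly, the split I have in mind looks like this (all names below are hypothetical placeholders, not a real library):

```python
def run_step(task, history, screenshot, call_planner, call_grounder):
    """Hypothetical planner/grounder split: a strong general model (e.g. GPT-4o)
    decides what to do next in natural language, and a small language-action
    model grounds that step into a concrete UI action (coordinates, keystrokes).

    call_planner and call_grounder are placeholder callables."""
    plan = call_planner(
        f"Task: {task}\nHistory: {history}\nDecide the single next step.",
        screenshot,
    )
    action = call_grounder(plan, screenshot)
    return plan, action
```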

Yes, I've noticed people heavily favour GPT for this use case. I was hoping to see some sort of open-source implementation, but I guess models like Llama 3.2 and Qwen VL (even for vision understanding) just aren't there yet. I am curious to see how o1 does on these tasks, since as per my research the accuracy for task completion is still quite low. I think it will become better with time, and maybe open-source models can compete as well.

@aswad546 True, I've noticed the same thing. There's no way a 7B model, even one specialized (fine-tuned) for mobile task execution, could outperform the "big players" like GPT-4o or Anthropic's models paired with a small language-action model. This is because specialized models often lack strong reasoning abilities, while top-tier models from families like Qwen2-VL or InternVL2 excel at understanding both images and tasks more broadly.

I think we’re at least a year away from achieving 90% accuracy in these scenarios.

By the way, you mentioned the o1 model. Does it even support vision, or not?

Oops, my bad, it seems the o1 model does indeed not support vision-based tasks yet. I think a year is still optimistic, since the tools available currently aren't reliable enough. But I am very interested to see where these automated agents go; they could have a lot of applications.
