
DriVLMe: Enhancing LLM-based Autonomous Driving Agents with Embodied and Social Experiences

Project Page | Paper | Video | Code

Yidong Huang, Jacob Sansom, Ziqiao Ma, Felix Gervits, Joyce Chai
University of Michigan, ARL
IROS 2024

You can also download the pretrained checkpoints from this link.

To run the open-loop evaluation, use

```shell
python drivlme/single_video_inference_SDN.py --model-name /nfs/turbo/coe-chaijy-unreplicated/pre-trained-weights/LLaVA/LLaVA-7B-Lightening-v1-1/ --projection_path ./DriVLMe_model_weights/bddx_pretrain_ckpt/mm_projector.bin --lora_path ./DriVLMe_model_weights/DriVLMe/ --json_path datasets/SDN_test_actions.json --video_root videos/SDN_test_videos/ --out_path SDN_test_actions.json

python evaluation/physical_action_acc.py
```

for the NfD task, and

```shell
python drivlme/single_video_inference_SDN.py --model-name /nfs/turbo/coe-chaijy-unreplicated/pre-trained-weights/LLaVA/LLaVA-7B-Lightening-v1-1/ --projection_path ./DriVLMe_model_weights/bddx_pretrain_ckpt/mm_projector.bin --lora_path ./DriVLMe_model_weights/DriVLMe/ --json_path datasets/SDN_test_conversations.json --video_root videos/SDN_test_videos/ --out_path SDN_test_conversations.json

python evaluation/diag_action_acc.py
```

for the RfN task.
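Both evaluation scripts score the model's predicted actions against ground-truth annotations. As a minimal sketch of what such an accuracy computation looks like, assuming a hypothetical JSON record format with `pred` and `gt` fields (not necessarily the repository's actual schema):

```python
import json

def action_accuracy(records):
    """Return the fraction of records whose predicted action
    exactly matches the ground-truth action."""
    if not records:
        return 0.0
    correct = sum(1 for r in records if r["pred"] == r["gt"])
    return correct / len(records)

# Toy example using the hypothetical schema.
sample = [
    {"pred": "turn left", "gt": "turn left"},
    {"pred": "stop", "gt": "go straight"},
]
print(action_accuracy(sample))  # 0.5
```

The actual scripts may apply task-specific matching (e.g. parsing actions out of generated dialogue for RfN), so treat this only as an illustration of the metric.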
