liltom-eth committed on
Commit
47ee9f7
1 Parent(s): fbd0dea

Upload README.md with huggingface_hub

Files changed (1): README.md +3 -4
README.md CHANGED
@@ -8,14 +8,14 @@ inference: false
 # LLaVA Model Card
 
 ## Model details
-This is a fork from the original [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b). This repo adds `code/inference.py` and `code/requirements.txt` to provide a customized inference script and environment for SageMaker deployment.
+This is a fork from the original [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b). This repo adds `code/inference.py` and `code/requirements.txt` to provide a customized inference script and environment for SageMaker deployment.
 
 **Model type:**
 LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
 It is an auto-regressive language model, based on the transformer architecture.
 
 **Model date:**
-LLaVA-v1.5-13B was trained in September 2023.
+LLaVA-v1.5-7B was trained in September 2023.
 
 **Paper or resources for more information:**
 https://llava-vl.github.io/
@@ -28,7 +28,7 @@ Following `deploy_llava.ipynb` (full tutorial [here](https://medium.com/@liltom.
 from sagemaker.s3 import S3Uploader
 
 # upload model.tar.gz to s3
-s3_model_uri = S3Uploader.upload(local_path="./model.tar.gz", desired_s3_uri=f"s3://{sess.default_bucket()}/llava-v1.5-13b")
+s3_model_uri = S3Uploader.upload(local_path="./model.tar.gz", desired_s3_uri=f"s3://{sess.default_bucket()}/llava-v1.5-7b")
 
 print(f"model uploaded to: {s3_model_uri}")
 ```
@@ -53,7 +53,6 @@ predictor = huggingface_model.deploy(
 instance_type="ml.g5.xlarge",
 )
 ```
-
 ## Inference on SageMaker
 Default `conv_mode` for llava-1.5 is set up as `llava_v1` to process `raw_prompt` into a meaningful `prompt`. You can also set `conv_mode` to `raw` to use `raw_prompt` directly.
 ```python
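
The last hunk cuts off at the start of the inference example, but the surrounding text describes a handler that takes a `raw_prompt` and a `conv_mode` of either `llava_v1` or `raw`. A minimal sketch of what a request payload for the deployed endpoint might look like — `raw_prompt` and `conv_mode` come from the README text, while the `image` key (an image URL) and the helper name are assumptions about the custom `code/inference.py` handler:

```python
def build_llava_payload(raw_prompt, image_url, conv_mode="llava_v1"):
    """Assemble a JSON-serializable request body for the LLaVA endpoint.

    `raw_prompt`/`conv_mode` follow the README; the `image` key is an
    assumption about the custom inference.py handler, not a documented API.
    """
    if conv_mode not in ("llava_v1", "raw"):
        raise ValueError(f"unsupported conv_mode: {conv_mode}")
    return {
        "raw_prompt": raw_prompt,
        "image": image_url,
        "conv_mode": conv_mode,
    }

payload = build_llava_payload("What is shown in this image?",
                              "https://example.com/cat.jpg")
# Against a live endpoint this would be sent with:
#   output = predictor.predict(payload)
```

With `conv_mode="llava_v1"` the handler would wrap `raw_prompt` in the llava_v1 conversation template before generation; with `conv_mode="raw"` the prompt is passed through unchanged.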