Muiru committed
Commit 08a0f53 · 1 parent: c7921a2

docs: update README to use relative paths and clarify sync script


- Replace absolute Windows path with relative path for local directory references.
- Clarify that the sync script requires HF_TOKEN and HF_REPO_ID environment variables.
- Minor formatting fix: remove UTF-8 BOM from the first line.

Files changed (1): README.md (+6 −5)
README.md CHANGED

@@ -1,4 +1,4 @@
- ### Cogni-OpenModel:
+ ### Cogni-OpenModel:
 
  - Safety‑aware, non‑clinical conversational AI for supportive mental health and wellbeing use‑cases, built on Meta Llama 3.1 8B and fine‑tuned with LoRA. This repository contains the model configuration, tokenizer, generation defaults, and adapter metadata for efficient deployment.
 
@@ -60,7 +60,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 
  # Base-only inference (loads Unsloth 4-bit backbone)
  MODEL_ID = "unsloth/meta-llama-3.1-8b-bnb-4bit"
- LOCAL_DIR = "c:/Users/Public/Cogni-OpenModel"
+ LOCAL_DIR = "./"
 
  tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR)
  model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
@@ -90,7 +90,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel
 
  BASE_ID = "unsloth/meta-llama-3.1-8b-bnb-4bit"
- LOCAL_DIR = "c:/Users/Public/Cogni-OpenModel" # contains adapter_config.json
+ LOCAL_DIR = "./" # contains adapter_config.json
  ADAPTER_DIR = LOCAL_DIR # place adapter weights here (adapter_model.bin)
 
  tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR)
@@ -105,7 +105,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
  ### Restoring Adapter Weights:
 
- - Place `adapter_config.json` and either `adapter_model.safetensors` or `adapter_model.bin` in the project root (`c:/Users/Public/Cogni-OpenModel`).
+ - Place `adapter_config.json` and either `adapter_model.safetensors` or `adapter_model.bin` in the project root.
  - The Streamlit demo auto‑attaches the adapter only when both config and weights are present; otherwise it runs base‑only and shows a warning.
 
  ### Chat Prompting:
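For the chat prompting section, a sketch of a single‑turn prompt in the Llama 3.1 chat layout. The special-token literals below follow Meta's published Llama 3 template and are an assumption here; in practice `tokenizer.apply_chat_template` is the safer path, since it reads the template shipped with the tokenizer:

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 chat layout,
    ending with an open assistant header so generation continues there."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Example use: `build_llama31_prompt("You are a supportive, non-clinical wellbeing assistant.", "I have been feeling stressed lately.")` yields a string ready to tokenize and pass to `model.generate`.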
@@ -186,7 +186,8 @@ Read `CONTRIBUTING.md` to get started. Open issues for discussion and submit foc
  - Set environment variables: `HF_TOKEN=<your_token>` and `HF_REPO_ID=<org/model>`
  - Install: `pip install -r requirements.txt`
  - Sync model card: `python tools/sync_hf_readme.py` or `python tools/sync_hf_readme.py <org/model>`
- - This replaces the HF repo `README.md` with `README.hf.md` so metadata renders correctly.
+ - This replaces the HF repo `README.md` with `README.hf.md` so metadata (license, tags, etc.) renders correctly.
+ - Ensure you have `HF_TOKEN` and `HF_REPO_ID` set in your environment.
 
  ### Acknowledgements:
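The sync steps above imply a small resolution rule: `HF_TOKEN` must always be set, while the repo id can come either from a CLI argument or from `HF_REPO_ID`. A sketch of that rule as a helper (hypothetical; not the actual internals of `tools/sync_hf_readme.py`):

```python
import os


def resolve_sync_config(argv: list[str]) -> tuple[str, str]:
    """Resolve (token, repo_id) for the sync script: HF_TOKEN is required,
    and a CLI argument (org/model) takes precedence over HF_REPO_ID."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise SystemExit("HF_TOKEN is not set")
    repo_id = argv[1] if len(argv) > 1 else os.environ.get("HF_REPO_ID")
    if not repo_id:
        raise SystemExit("Pass <org/model> or set HF_REPO_ID")
    return token, repo_id
```

Failing fast with a clear message here keeps the script from pushing to the wrong repo or dying mid-upload on an authentication error.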
 
 