Chat2Find committed
Commit a1c16d8 · verified · 1 parent: a345f95

Cleaned up model description (removed Lankabizz and hardware specs)

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
```diff
@@ -31,7 +31,6 @@ Chat2Find-CPT is a specialized version of the Qwen 3.5 4B model, enhanced via **
 ## Technical Specifications
 
 ### Training Hardware
-- **GPU:** NVIDIA GeForce RTX 3060 (12GB VRAM)
 - **Frameworks:** Unsloth, Hugging Face Transformers, PEFT
 
 ### Training Hyperparameters
@@ -45,7 +44,6 @@ Chat2Find-CPT is a specialized version of the Qwen 3.5 4B model, enhanced via **
 ### Dataset
 The model was trained on a curated corpus of ~270,000 sequences focusing on:
 - **Sri Lankan News & Media:** Current events and reporting styles.
-- **Business & Logistics:** Domain-specific data from Lankabizz and local commerce.
 - **Cultural Context:** General web-scraped data reflecting local nuances.
 
 ## Capabilities
@@ -64,7 +62,7 @@ from unsloth import FastLanguageModel
 import torch
 
 model, tokenizer = FastLanguageModel.from_pretrained(
-    model_name = "SENTIENT_ID/Chat2Find-CPT",
+    model_name = "Chat2Find/Chat2Find-CPT",
     max_seq_length = 2048,
     load_in_4bit = True,
 )
@@ -84,7 +82,7 @@ print(tokenizer.batch_decode(outputs))
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "SENTIENT_ID/Chat2Find-CPT"
+model_name = "Chat2Find/Chat2Find-CPT"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```