nickmalhotra and zicsx committed
Commit 4c6a206
1 Parent(s): b2e94cf

Update README.md (#4)


- Update README.md (0ebbf982375b197c97aa70c403d9c75f5027beef)


Co-authored-by: Satish Kumar Mishra <zicsx@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +20 -2
README.md CHANGED
@@ -421,9 +421,9 @@ To begin using Project Indus LLM for your projects, follow these steps to set up
 # Load model directly
 
 ```python
-from transformers import AutoModel, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModel.from_pretrained("nickmalhotra/ProjectIndus")
+model = AutoModelForCausalLM.from_pretrained("nickmalhotra/ProjectIndus")
 tokenizer = AutoTokenizer.from_pretrained("nickmalhotra/ProjectIndus")
 
 # Example inference
@@ -454,3 +454,21 @@ output = model.generate(input_ids,
 num_return_sequences=1,
 )
 print(tokenizer.decode(output[0], skip_special_tokens=False))
+
+## Disclaimer
+
+#### Model Limitations
+
+Project Indus LLM is trained with single instruction tuning, which may result in hallucinations, instances where the model generates plausible but inaccurate information. Users should exercise caution, especially in scenarios requiring high factual accuracy.
+
+#### Adaptation for Specific Use Cases
+
+Project Indus LLM is designed as a foundational model suitable for further development and fine-tuning. Users are encouraged to adapt and refine the model to meet specific requirements of their applications.
+
+#### Recommendations for Fine-Tuning
+
+- **Identify Specific Needs**: Clearly define the requirements of your use case to guide the fine-tuning process.
+- **Curate Targeted Data**: Ensure the training data is relevant and of high quality to improve model performance.
+- **Continuous Evaluation**: Regularly assess the model's performance during and after fine-tuning to maintain accuracy and reduce biases.
+
+This disclaimer aims to provide users with a clear understanding of the model's capabilities and limitations, facilitating its effective application and development.
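For context on what the changed snippet does end to end, here is a minimal inference sketch built around the updated classes. The prompt string and every generation parameter except `num_return_sequences` are assumptions, since the hunks above only show fragments of the README's example.

```python
# Minimal sketch of the updated README usage (prompt and generation settings are assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("nickmalhotra/ProjectIndus")
tokenizer = AutoTokenizer.from_pretrained("nickmalhotra/ProjectIndus")

# Example inference with a hypothetical Hindi prompt (not the README's original prompt).
prompt = "भारत के प्रथम प्रधानमंत्री कौन थे?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output = model.generate(
    input_ids,
    max_new_tokens=100,      # assumed value
    do_sample=True,          # assumed value
    num_return_sequences=1,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```

The switch from `AutoModel` to `AutoModelForCausalLM` is what makes this work: `AutoModel` loads the base transformer without its language-modeling head, so it cannot produce next-token logits for `generate()`, while `AutoModelForCausalLM` loads the full causal language model.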