Pradeep T
Pradeep1995
AI & ML interests
None yet
Organizations
None yet
Pradeep1995's activity
How to use this model on SageMaker endpoints (2 comments)
#1 opened 4 months ago by LorenzoCevolaniAXA
What is the actual context size of the google/gemma-7b model? (1 comment)
#81 opened 3 months ago by Pradeep1995
What is the actual context size of the mistralai/Mixtral-8x7B-Instruct-v0.1 model? (3 comments)
#186 opened 3 months ago by Pradeep1995
PEFT-based fine-tuned model hallucinates values from the fine-tuning training data during inference (7 comments)
#111 opened 6 months ago by Pradeep1995
Special token (</s>) not generated by the model.generate() method (7 comments)
#47 opened 5 months ago by Pradeep1995
Can we save the fine-tuned Mistral model by exporting it to TorchScript? (1 comment)
#46 opened 5 months ago by Pradeep1995
What is the best way to run inference with LoRA in the PEFT approach? (8 comments)
#70 opened 6 months ago by Pradeep1995
What is the best way to run inference with LoRA in the PEFT approach?
#3 opened 6 months ago by Pradeep1995
What is the best way to run inference with LoRA in the PEFT approach?
#53 opened 6 months ago by Pradeep1995
What is the best way to run inference with LoRA in the PEFT approach?
#43 opened 6 months ago by Pradeep1995
What is the best way to run inference with LoRA in the PEFT approach?
#96 opened 6 months ago by Pradeep1995
What is the correct way to store the adapters after PEFT fine-tuning? (4 comments)
#67 opened 6 months ago by Pradeep1995
What is the correct way to store the adapter after PEFT fine-tuning?
#42 opened 6 months ago by Pradeep1995
Should we follow the same openchat prompt structure at fine-tuning time? (3 comments)
#38 opened 7 months ago by Pradeep1995
Incomplete Output even with max_new_tokens (12 comments)
#107 opened 7 months ago by Pradeep1995
Should we follow the same mistral prompt structure at fine-tuning time?
#110 opened 7 months ago by Pradeep1995
Incomplete Output even with max_new_tokens (1 comment)
#37 opened 7 months ago by Pradeep1995
Incomplete Output even with max_new_tokens (6 comments)
#27 opened 10 months ago by vermanic
Prompt format of the mistralai/Mixtral-8x7B-v0.1 model (1 comment)
#22 opened 7 months ago by Pradeep1995