
The model's responses are not related to COBOL and mainframe topics

#2
by Zelknight463 - opened


Are you sure that you have loaded the correct model and that your model is not cached? I tried, and the answers are always related to Mainframe topics.
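One quick way to rule out a stale download is to list what actually sits in the local hub cache before loading. A minimal sketch, assuming the standard `models--<org>--<name>` cache layout used by the Hugging Face hub (the helper name and default path are illustrative, not part of any official API):

```python
from pathlib import Path

def cached_repos(cache_dir: str = "~/.cache/huggingface/hub") -> list[str]:
    """List repo ids present in a local Hugging Face hub cache directory."""
    root = Path(cache_dir).expanduser()
    # Hub cache folders are named "models--<org>--<name>";
    # map each folder name back to its "org/name" repo id.
    return sorted(
        entry.name.removeprefix("models--").replace("--", "/")
        for entry in root.glob("models--*")
        if entry.is_dir()
    )
```

If the listing shows an unexpected checkpoint, deleting its cache folder (or passing `force_download=True` to `from_pretrained`) forces a fresh download.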

I loaded the model and ran it correctly. You can check with my notebook; I ran it on Kaggle with 2x T4 GPUs: https://www.kaggle.com/code/onvnphong/xmainframe-7b

LLM runtimes generally offer limited support for older GPU architectures like Volta and Turing. To ensure compatibility, we recommend using a newer GPU.
If needed, you can DM me, and I can set up a chat for you to try on our server.
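The architecture cutoff can be checked programmatically. A minimal sketch, assuming the relevant constraint is native bfloat16 support, which NVIDIA introduced with compute capability 8.0 (Ampere); Volta is 7.0 and Turing is 7.5, so Kaggle's T4 falls below the line (the function name is illustrative):

```python
# Sketch: decide whether a CUDA compute capability has native bfloat16.
# Ampere (8.0) introduced it; Volta (7.0) and Turing (7.5, e.g. the
# Kaggle T4) lack it, which is one common reason LLM runtimes fall back
# or misbehave on those GPUs.
def supports_native_bf16(major: int, minor: int) -> bool:
    return (major, minor) >= (8, 0)

# On a live machine you could feed in torch.cuda.get_device_capability().
print(supports_native_bf16(7, 5))  # T4 (Turing)   -> False
print(supports_native_bf16(8, 0))  # A100 (Ampere) -> True
```

If the check fails, loading the model in float16 or float32 instead of bfloat16 is the usual workaround on Turing-class hardware.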

Yeah, how can I contact you?

Can I get ur email?

Zelknight463 changed discussion status to closed

@hieutrungdao please let us know which GPU to use to replicate your results. I am getting the same results on Kaggle's allotted GPUs as @Zelknight463. I am planning to use XMaiNframe to summarize COBOL code.

@Ravijp I recommend using GPUs with the Ampere architecture or newer, such as NVIDIA A10, A30, A100, or H100.
