---
license: mit
datasets:
- siacus/cap_pe_verified
base_model:
- meta-llama/Llama-2-7b-chat-hf
new_version: siacus/llama-2-7b-cap_verified
---
The data used to train the model are on Hugging Face under [siacus/cap_pe_verified](https://huggingface.co/datasets/siacus/cap_pe_verified).
The F16 version was created from the merged weights with [llama.cpp](https://github.com/ggerganov/llama.cpp) on a CUDA GPU, and the 4-bit quantized version was created on a Mac M2 Ultra (Metal architecture). If you want to use the 4-bit quantized version on CUDA, please quantize it directly from the F16 version.
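As a sketch, re-quantizing from the F16 GGUF on a CUDA machine with llama.cpp's `llama-quantize` tool might look like the following. The GGUF filenames are hypothetical placeholders, not the actual files in this repository.

```shell
# Build llama.cpp with CUDA support (assumes CMake and the CUDA toolkit are installed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Quantize the F16 GGUF to 4-bit (Q4_K_M); filenames here are placeholders
./build/bin/llama-quantize ../model-f16.gguf ../model-q4_k_m.gguf Q4_K_M
```

Quantizing on the target architecture avoids any backend-specific differences introduced by a file quantized on Metal.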
For more information about this model, refer to the [main repository](https://github.com/siacus/rethinking-scale) for the supplementary material of the manuscript [Rethinking Scale: The Efficacy of Fine-Tuned Open-Source LLMs in Large-Scale Reproducible Social Science Research](https://arxiv.org/abs/2411.00890).