How to best prompt this model
#13
by YukariYY - opened
Has anyone found a way to get optimal performance from this model - does it work better with requests or with text completion?
I have had it generate working code both ways, but it also falls into repeating sequences with both, and the generated code is not always good quality or uses deprecated methods. Any way around this?
Can you say which programming language you're producing code in?
Producing code in Python; I loaded the model in 8-bit with bitsandbytes. Example below:
Text prompt:
# function to define two matrices and multiply them together on GPU:
def matmult
output:
# function to define two matrices and multiply them together on GPU:
def matmult(A, B):
    C = gpuarray.empty((A.shape[0], B.shape[1]), dtype=np.float32)
    C.fill(0)
    kernel = ElementwiseKernel(
        "float *A, float *B, float *C",
        "C[i] = A[i] * B[i]",
        "matmult")
    kernel(A, B, C)
    return C

# function to define two matrices and multiply them together on CPU:
def matmult_cpu(A, B):
    C = np.dot(A, B)
    return C

# function to define two matrices and multiply them together on GPU:
def matmult_gpu(A, B):
    C = gpuarray.empty((A.shape[0], B.shape[1]), dtype=np.float32)
    C.fill(0)
    kernel = ElementwiseKernel(
        "float *A, float *B, float *C",
        "C[i] = A[i] * B[i]",
        "matmult
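(Note that the generated kernel is also wrong: `C[i] = A[i] * B[i]` is an elementwise product, not a matrix multiply. For comparison, here is a minimal correct sketch of a GPU matrix multiply using PyTorch rather than the PyCUDA-style code the model produced; this is an illustration, not the model's output.)

```python
import torch

def matmult(A, B):
    # pick the GPU if one is available, otherwise fall back to CPU
    device = "cuda" if torch.cuda.is_available() else "cpu"
    A_t = torch.as_tensor(A, dtype=torch.float32, device=device)
    B_t = torch.as_tensor(B, dtype=torch.float32, device=device)
    # true matrix product (row-by-column), not an elementwise product
    return (A_t @ B_t).cpu().numpy()
```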
From what I observe, it's a text-completion model, not an instruction-tuned model.
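Concretely, that means a code prefix to continue tends to work better than a natural-language request. A small sketch of the difference (the function name and docstring here are illustrative, not from this thread):

```python
# With a completion model, prompt with a code prefix it can continue:
completion_prompt = (
    "import numpy as np\n"
    "\n"
    "def matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:\n"
    '    """Multiply two matrices and return the result."""\n'
)

# A base (non-instruction-tuned) model tends to handle a plain
# request like this less reliably:
instruction_prompt = "Write a Python function that multiplies two matrices."
```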
You can also try StarCoderBase or changing the generation parameters (temperature and sampling strategy).
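For the repetition issue specifically, a sketch of sampling settings one might pass to `model.generate()` in transformers (the values are illustrative starting points, not tuned recommendations):

```python
# Hypothetical generation settings; the keys follow the Hugging Face
# transformers `generate()` API.
gen_kwargs = {
    "do_sample": True,          # sample instead of greedy decoding
    "temperature": 0.2,         # low temperature keeps code focused
    "top_p": 0.95,              # nucleus sampling cutoff
    "repetition_penalty": 1.2,  # discourages repeated sequences
    "max_new_tokens": 256,      # bound the completion length
}
# usage sketch: outputs = model.generate(**inputs, **gen_kwargs)
```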
loubnabnl changed discussion status to closed