
WestLake-7B-v2-laser-truthy-dpo-GGUF


iMatrix

iMatrix (importance matrix) quantizations are now available thanks to user Ji-Ha.
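
If you want to pull one of the quantized files programmatically, here is a minimal sketch using huggingface_hub (the filename below is a placeholder, not an exact file name from this repo; check the repo's file listing for the quant you want):

```python
# Minimal sketch: downloading one of the GGUF quants from the Hub.
# The filename is a placeholder; pick the actual quant (e.g. an iMatrix IQ
# variant) from the repo's file listing.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF",
    filename="westlake-7b-v2-laser-truthy-dpo.Q4_K_M.gguf",  # placeholder
)
print(local_path)  # local path to the downloaded .gguf file
```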

Chat Template

*I am using ooba (text-generation-webui) for inference.*

The GGUF version defaults to the Alpaca template:

```
11:40:53-940260 INFO LOADER: llama.cpp
11:40:53-940970 INFO TRUNCATION LENGTH: 32768
11:40:53-941299 INFO INSTRUCTION TEMPLATE: Alpaca
11:40:53-941580 INFO Loaded the model in 4.55 seconds.
```

```
{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- if not ns.found -%}
    {{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {{- '' + message['content'] + '\n\n' -}}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{-'### Instruction:\n' + message['content'] + '\n\n'-}}
        {%- else -%}
            {{-'### Response:\n' + message['content'] + '\n\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{-'### Response:\n'-}}
{%- endif -%}
```
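
To sanity-check what that default actually produces, here is a small sketch (not from the original card) that renders the Alpaca template above with jinja2 for a single user turn:

```python
# Sketch: rendering the Alpaca template above with jinja2 to see the exact
# prompt string the model receives. ALPACA_TEMPLATE is the template shown
# above, copied verbatim (raw string so Jinja handles the \n escapes).
from jinja2 import Template

ALPACA_TEMPLATE = r"""{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- if not ns.found -%}
    {{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {{- '' + message['content'] + '\n\n' -}}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{-'### Instruction:\n' + message['content'] + '\n\n'-}}
        {%- else -%}
            {{-'### Response:\n' + message['content'] + '\n\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{-'### Response:\n'-}}
{%- endif -%}"""

messages = [{"role": "user", "content": "write me a quicksort algorithm in python"}]
prompt = Template(ALPACA_TEMPLATE).render(messages=messages, add_generation_prompt=True)
print(prompt)
# Below is an instruction that describes a task. Write a response that appropriately completes the request.
#
# ### Instruction:
# write me a quicksort algorithm in python
#
# ### Response:
```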

It can be manually overridden to ChatML successfully:

```
{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- if not ns.found -%}
    {{- '<|im_start|>system\n' + '' + '<|im_end|>\n' -}}
{%- endif %}
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {{- '<|im_start|>system\n' + message['content'] + '<|im_end|>\n' -}}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{-'<|im_start|>user\n' + message['content'] + '<|im_end|>\n'-}}
        {%- else -%}
            {{-'<|im_start|>assistant\n' + message['content'] + '<|im_end|>\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{-'<|im_start|>assistant\n'-}}
{%- endif -%}
```
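
If you run the GGUF through llama-cpp-python instead of ooba, you can force ChatML in a similar way. A rough sketch under that assumption (the model path is a placeholder, not a file name from this repo):

```python
# Sketch: forcing ChatML formatting in llama-cpp-python rather than relying on
# whatever chat template the GGUF metadata advertises. model_path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./westlake-7b-v2-laser-truthy-dpo.Q4_K_M.gguf",  # placeholder
    n_ctx=32768,            # same context length ooba reports above
    chat_format="chatml",   # force ChatML instead of the default Alpaca template
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": ""},
        {"role": "user", "content": "write me a quicksort algorithm in python"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```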

Here is the output of the model when I override the chat template to ChatML:

```
<|im_start|>system
<|im_end|>
<|im_start|>user
write me a quicksort algorithm in python<|im_end|>
<|im_start|>assistant
Sure, here's a basic implementation of Quicksort algorithm in Python:

def partition(array, low, high):
    """Sorts elements around pivot"""
    i = (low - 1)   # Index of smaller element
    pivot = array[high]   # Select rightmost element as pivot

    for j in range(low, high):
        if array[j] <= pivot:
            i += 1
            array[i], array[j] = array[j], array[i]

    array[i + 1], array[high] = array[high], array[i + 1]   # Put the pivot where it belongs
    return (i + 1)                              # Return index after pivot point

def quicksort(array, low, high):
    """Implementing Quicksort"""
    if len(array) == 1 or low >= high:        # Base conditions
        return array
    elif low < high:   # Recursive step
        pi = partition(array, low, high)      # Separate smaller and greater elements around pivot point
        quicksort(array, low, pi - 1)         # Sort first half
        quicksort(array, pi + 1, high)        # Sort second half
    return array                             # After recursion ends we have a sorted list

To use it:

arr = [45, 23, 76, 89, 32, 65]   # Your unsorted array
print("Original Array:", arr)    # Print original array
quicksort(arr, 0, len(arr)-1)    # Call Quicksort function with given range for sorting the entire array.
print("Sorted Array:", arr)      # Print sorted array

<|im_end|>
```
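
(For reference, the generated snippet runs as-is: it sorts the list in place and the final print shows `Sorted Array: [23, 32, 45, 65, 76, 89]`.)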
