---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- python
- java
- cpp
- sql
- function calling
- unit tests
- causalLM
- codeLLAMA modified archi
- document
- code
- code2doc
- instruction_tuned
- basemodel
- pytorch
- docstring
- documentation
- text-generation-inference
- plan
- planner
- llama-cpp
- gguf-my-repo
metrics:
- accuracy
pipeline_tag: text-generation
widget:
- text: '--code:def function_divide2(x): return x / 2--question:Document the code--doc:Description:This function takes a number and divides it by 2.Parameters:- x (numeric): The input value to be divided by 2.Returns:- float: The result of x divided by 2.Example:To call the function, use the following code:function_divide2(1.0)def _plot_bounding_polygon(polygons_coordinates, output_html_path="bounding_polygon_map.html"):map_center = [sum([coord[0]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),sum([coord[1]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),]my_map = folium.Map(location=map_center, zoom_start=12)for polygon_coords in polygons_coordinates:folium.Polygon(locations=polygon_coords,color="blue",fill=True,fill_color="blue",fill_opacity=0.2,).add_to(my_map)marker_cluster = MarkerCluster().add_to(my_map)for polygon_coords in polygons_coordinates:for coord in polygon_coords:folium.Marker(location=[coord[0], coord[1]], popup=f"Coordinates: {coord}").add_to(marker_cluster)draw = Draw(export=True)draw.add_to(my_map)my_map.save(output_html_path)return output_html_pathDocument the python code above giving function description ,parameters and return type and example how to call the function'
  example_title: example
---

# NikolayKozloff/pip-code-bandit-Q8_0-GGUF

This model was converted to GGUF format from [`PipableAI/pip-code-bandit`](https://huggingface.co/PipableAI/pip-code-bandit) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PipableAI/pip-code-bandit) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/pip-code-bandit-Q8_0-GGUF --model pip-code-bandit.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo NikolayKozloff/pip-code-bandit-Q8_0-GGUF --model pip-code-bandit.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pip-code-bandit.Q8_0.gguf -n 128
```
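
The widget example in the metadata above shows the prompt format the base model was tuned on for code documentation: a `--code:` block, a `--question:` instruction, and a trailing `--doc:` marker where generation begins. Below is a minimal sketch of invoking that format through `llama-cli`, reusing the flags from the CLI command above; the `-n 256` token budget is an assumption, not part of the original card.

```bash
# Sketch: ask the model to document a small function using the --code/--question/--doc format.
# The prompt is taken from the widget example; -n 256 caps generation at 256 tokens (assumed value).
llama-cli --hf-repo NikolayKozloff/pip-code-bandit-Q8_0-GGUF --model pip-code-bandit.Q8_0.gguf \
  -n 256 \
  -p "--code:def function_divide2(x): return x / 2--question:Document the code--doc:"
```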
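
Once `llama-server` is running, you can query it over HTTP. A minimal sketch against the server's `/completion` endpoint, assuming the default host and port (`localhost:8080`); the prompt and `n_predict` value are placeholders you would replace with your own.

```bash
# Sketch: POST a completion request to a locally running llama-server (default port 8080 assumed).
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "--code:def function_divide2(x): return x / 2--question:Document the code--doc:",
    "n_predict": 256
  }'
```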