This app can explain llama.cpp to me very well, but most others can't!

#3 opened by inthf

So is this due to fresh model data or to the internet capability?

Did you use the internet feature?

No. Could it be that others turned it on, so that data is now shared with me? You can ask it about llama.cpp to see.

With internet

What is llama.cpp and how does it work?

Sure! Here's my attempt at explaining what llama.cpp is and how it works:

Llama.cpp is a C/C++ program that allows users to run large language models (LLMs) on their devices. It supports multiple LLMs, including LLaMA models and Falcon models. The main goal of llama.cpp is to enable the use of 4-bit integer quantization on a MacBook, but it can also run on other supported platforms like Windows and Linux.

To use llama.cpp, users first need to clone the repository from GitHub and navigate into the cloned folder. Then, they can generate the llama executable file by running the command "make" in the terminal. Afterward, users can download the desired LLM model, such as LLaMA-7B or Whisper.cpp, and place it in a designated folder. Finally, users can run the llama executable file along with the model file as input to initiate a chat session.

Llama.cpp offers various options for building and customizing the program. Users can choose to use make, CMake, or Zig (version 0.11 or later) to build the program. Additionally, llama.cpp supports grammatical constraints to regulate the output of the LLM, which can be useful for specific application requirements.

One exciting feature of llama.cpp is its ability to facilitate resumable chat sessions. With the right configuration, users can store the initial chat prompt and session directory to continue the session next time. Furthermore, llama.cpp allows users to specify their preferred hardware architecture and optimization level during compilation, letting them tailor performance to their system.

Overall, llama.cpp serves as an instructional tool for those interested in learning about C/C++ programming and developing novel functionalities for the ggml library. Although it has limited practical applications compared to other chatbots like ChatGPT, it remains
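For anyone who wants to reproduce those steps, here is a minimal sketch of the clone/build/run workflow the answer describes, driven from Python. The repository URL and the classic main binary flags match upstream llama.cpp, but the model filename is a placeholder for whatever model file you download separately.

```python
# Minimal sketch of the workflow described above: clone, build with make,
# then run the built binary against a locally downloaded model.
# The model path below is a placeholder, not a file this script provides.
import subprocess
from pathlib import Path

REPO = "https://github.com/ggerganov/llama.cpp"
workdir = Path("llama.cpp")

# 1. Clone the repository (skipped if it is already checked out).
if not workdir.exists():
    subprocess.run(["git", "clone", REPO], check=True)

# 2. Build the executable with make (CMake and Zig are the alternatives
#    mentioned in the answer).
subprocess.run(["make"], cwd=workdir, check=True)

# 3. Run the model with a short prompt (-n limits generated tokens).
subprocess.run(
    ["./main", "-m", "models/llama-7b.Q4_0.gguf", "-p", "Hello", "-n", "64"],
    cwd=workdir,
    check=True,
)
```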

I noticed that the option turns off automatically after one answer, so that response came from internet mode. Can we use that mode in the background when the normal mode returns no result?

For that, I think we need to do a sort of prompt engineering; it totally depends on how the search sentence is organized.

For now, the LLaMA2-70B model generates a search sentence based on the user input (also taking the chat history into account).
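As a hypothetical illustration of that step (the prompt wording and function below are my assumptions, not the app's actual code), generating the search sentence might look something like this:

```python
# Hypothetical sketch: turn the latest user input plus recent chat history
# into one instruction asking the model for a search sentence.
def build_search_prompt(history: list[tuple[str, str]], user_input: str) -> str:
    # Keep only the last few turns so the prompt stays short.
    recent = "\n".join(f"User: {u}\nBot: {b}" for u, b in history[-3:])
    return (
        "Conversation so far:\n"
        f"{recent}\n"
        f"User: {user_input}\n"
        "Write one short web search query that would help answer the user."
    )

print(build_search_prompt([("hi", "hello!")], "explain llama.cpp to me"))
```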

I tried "explain llama.cpp to me. if you don't konw it just return idonotknow" then got "Idonotknow ... blah...". Maybe we can hook the internet_search after idonotknow😄
