Cannot create tokenizer

#2
by jobenb - opened

Hi mate, I am getting the following error trying to create a tokenizer using AutoTokenizer:

OSError: Can't load tokenizer for 'TheBloke/LLaMa-7B-GGML'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/LLaMa-7B-GGML' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
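A minimal call that reproduces this (assuming standard AutoTokenizer usage, nothing special):

from transformers import AutoTokenizer

# Fails: the GGML repo contains no tokenizer files for Transformers to load
tokenizer = AutoTokenizer.from_pretrained("TheBloke/LLaMa-7B-GGML")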

Thanks for all these models you upload; this is awesome.

Fetching this model from the Hub models API with config, I can only see this:

{
  "_id": "6464cfb34855e06b950548ca",
  "id": "TheBloke/LLaMa-13B-GGML",
  "likes": 14,
  "private": false,
  "config": {
    "model_type": "llama"
  },
  "downloads": 79,
  "tags": [
    "llama",
    "transformers",
    "license:other",
    "text-generation-inference"
  ],
  "modelId": "TheBloke/LLaMa-13B-GGML"
}
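The same metadata can be pulled from Python with huggingface_hub (a minimal sketch; the repo name is taken from the response above):

from huggingface_hub import HfApi

# Fetch the model's metadata from the Hub (same data as the models API above)
info = HfApi().model_info("TheBloke/LLaMa-13B-GGML")
print(info.config)                            # {'model_type': 'llama'}
print([s.rfilename for s in info.siblings])   # only GGML .bin files, no tokenizer files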

This is a GGML model, so it can't be loaded directly with Hugging Face Transformers. Check out ctransformers, a library that can load GGML models from Python code: https://github.com/marella/ctransformers
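For example, something like this should work (a minimal sketch; the model_file name is an assumption, pick one of the quantized .bin files listed in the repo):

from ctransformers import AutoModelForCausalLM

# Load a GGML model straight from the Hub repo; model_type selects the
# architecture, model_file picks one of the quantized .bin files.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/LLaMa-7B-GGML",
    model_type="llama",
    model_file="llama-7b.ggmlv3.q4_0.bin",  # assumed file name, check the repo
)
print(llm("AI is going to"))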

Thank you :)

# Bring in deps

import streamlit as st
from langchain.llms import LlamaCpp
from langchain.embeddings import LlamaCppEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Customize the layout

st.set_page_config(page_title="DOCAI", page_icon="🤖", layout="wide")
st.markdown(f"""

""", unsafe_allow_html=True)

# Function for writing the uploaded file to temp

def write_text_file(content, file_path):
    try:
        with open(file_path, 'w') as file:
            file.write(content)
        return True
    except Exception as e:
        print(f"Error occurred while writing the file: {e}")
        return False

# Set prompt template

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer:"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

# Initialize the LLM & Embeddings

llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")
embeddings = LlamaCppEmbeddings(model_path="models/llama-7b.ggmlv3.q4_0.bin")
llm_chain = LLMChain(llm=llm, prompt=prompt)

st.title("📄 Document Conversation 🤖")
uploaded_file = st.file_uploader("Upload an article", type="txt")

if uploaded_file is not None:
    content = uploaded_file.read().decode('utf-8')
    # st.write(content)
    file_path = "temp/file.txt"
    write_text_file(content, file_path)

    loader = TextLoader(file_path)
    docs = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
    texts = text_splitter.split_documents(docs)
    db = Chroma.from_documents(texts, embeddings)
    st.success("File Loaded Successfully!!")

    # Query through LLM
    question = st.text_input("Ask something from the file", placeholder="Find something similar to: ....this.... in the text?", disabled=not uploaded_file,)
    if question:
        similar_doc = db.similarity_search(question, k=1)
        context = similar_doc[0].page_content
        query_llm = LLMChain(llm=llm, prompt=prompt)
        response = query_llm.run({"context": context, "question": question})
        st.write(response)

This is the code. How should I integrate GPU support into it?
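If llama-cpp-python was installed with GPU support (e.g. a cuBLAS build), you can offload layers by passing n_gpu_layers to LlamaCpp and LlamaCppEmbeddings. A minimal sketch, with the layer count an assumption to tune to your VRAM:

# GPU offload: move some or all transformer layers onto the GPU
llm = LlamaCpp(
    model_path="./models/llama-7b.ggmlv3.q4_0.bin",
    n_gpu_layers=32,  # assumed value -- tune to your VRAM
    n_batch=512,      # tokens processed per batch
)
embeddings = LlamaCppEmbeddings(
    model_path="models/llama-7b.ggmlv3.q4_0.bin",
    n_gpu_layers=32,  # assumed value
)

The rest of the script stays the same.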

I changed the version of the libraries and it worked.

Did loading the tokenizer for 'TheBloke/CodeLlama-7B-Instruct-GGUF' work directly, or did you use a library that loads GGML models?

Hi, I am also getting a similar error trying to create a tokenizer using AutoTokenizer, but for 'TheBloke/Llama-2-7b-Chat-GGUF'. I would appreciate the help. Thank you.
OSError: Can't load tokenizer for 'TheBloke/Llama-2-7b-Chat-GGUF'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/Llama-2-7b-Chat-GGUF' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
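As with the GGML case above, GGUF repos don't carry tokenizer files that AutoTokenizer can load; llama-cpp-python can load the GGUF file directly instead. A minimal sketch (the quantization file name is an assumption, check the repo's file list):

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized GGUF file from the repo, then load it with llama.cpp
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7b-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",  # assumed quant, check the repo files
)
llm = Llama(model_path=model_path)
out = llm("Q: What is GGUF? A:", max_tokens=64)
print(out["choices"][0]["text"])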

I'm having this issue with a lot of different models from Hugging Face, and it happens to me with text-generation-webui, langchain, and now also with axolotl. Something very weird...

I changed the version of the libraries and it worked.

Hi, how did you solve it? Thanks.