macOS
Supports CPU and MPS (Metal M1/M2).
Install
- Download and Install Miniconda for Python 3.10.
- Run Miniconda
- Set up the environment with Conda Rust:
  ```bash
  conda create -n h2ogpt python=3.10 rust
  conda activate h2ogpt
  ```
- Install dependencies:
  ```bash
  git clone https://github.com/h2oai/h2ogpt.git
  cd h2ogpt
  # fix any bad env
  pip uninstall -y pandoc pypandoc pypandoc-binary
  pip install --upgrade pip
  python -m pip install --upgrade setuptools
  # Install Torch:
  pip install -r requirements.txt --extra-index https://download.pytorch.org/whl/cpu
  ```
- Install document question-answer dependencies:
  ```bash
  # Required for Doc Q/A: LangChain:
  pip install -r reqs_optional/requirements_optional_langchain.txt
  # Required for CPU: LLaMa/GPT4All:
  pip uninstall -y llama-cpp-python llama-cpp-python-cuda
  pip install -r reqs_optional/requirements_optional_gpt4all.txt
  pip install librosa
  pip install llama-cpp-python
  # Optional: PyMuPDF/ArXiv:
  pip install -r reqs_optional/requirements_optional_langchain.gpllike.txt
  # Optional: Selenium/PlayWright:
  pip install -r reqs_optional/requirements_optional_langchain.urls.txt
  # Optional: DocTR OCR:
  pip install -r reqs_optional/requirements_optional_doctr.txt
  # Optional: for supporting unstructured package
  python -m nltk.downloader all
  ```
- For Word and Excel document support, download LibreOffice: https://www.libreoffice.org/download/download-libreoffice/ (a Homebrew-based alternative is sketched after this list).
- To support OCR, install Tesseract (see the Tesseract documentation) and related packages via Homebrew (a combined sanity check for these installs follows this list):
  ```bash
  brew install libmagic
  brew link libmagic
  brew install poppler
  brew install tesseract
  brew install tesseract-lang
  brew install rubberband
  brew install pygobject3 gtk4
  brew install libjpeg
  brew install libpng
  ```
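If you prefer to stay within Homebrew, LibreOffice is also available as a cask. This is a minimal sketch of an alternative to the manual download in the list above; the path shown for the soffice binary is the typical cask install location and may differ on your system:

```bash
# Alternative to the manual LibreOffice download (assumes Homebrew):
brew install --cask libreoffice
# The soffice binary typically lives inside the app bundle:
/Applications/LibreOffice.app/Contents/MacOS/soffice --version
```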
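After finishing the list above, a quick sanity check can confirm that the main pieces are importable and on `PATH`. This is a minimal sketch, assuming the `h2ogpt` conda environment is active and the Doc Q/A and OCR steps above were run:

```bash
# Toolchain from the conda environment:
python --version
rustc --version
# Torch (CPU/MPS build) imports:
python -c "import torch; print('torch', torch.__version__)"
# Key Doc Q/A dependencies import (assumes the optional steps above were run):
python -c "import langchain, llama_cpp; print('langchain + llama_cpp OK')"
# OCR / PDF tooling from Homebrew is on PATH:
tesseract --version
pdftotext -v
```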
See FAQ for how to run various models. See CPU and GPU for other general aspects of using h2oGPT on CPU or GPU, such as which models to try.
Issues
Metal M1/M2 only: to verify whether torch uses MPS, run the Python script below:
```python
import torch

if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    x = torch.ones(1, device=mps_device)
    print(x)
else:
    print("MPS device not found.")
```
Output:
```text
tensor([1.], device='mps:0')
```
If you see
```text
ld: library not found for -lSystem
```
then set the following and retry the `pip install` commands from scratch:
```bash
export LDFLAGS=-L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib
```
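As a concrete sketch of the retry, re-run the earlier install commands from this guide in the same shell where `LDFLAGS` is exported (shown here for the base requirements; repeat for any other `pip install` step that failed):

```bash
# Retry in a shell where LDFLAGS points at the macOS SDK libraries:
export LDFLAGS=-L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib
pip install -r requirements.txt --extra-index https://download.pytorch.org/whl/cpu
```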
If conda Rust has issues, you can download and install native Rust:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# enter new shell and test:
rustc --version
```
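Instead of opening a new shell, the current shell can pick up the freshly installed toolchain by sourcing the environment file rustup writes. This is a small sketch, assuming the default rustup install location under `$HOME/.cargo`:

```bash
# Make the rustup toolchain visible in the current shell
# (assumes the default install location under $HOME/.cargo):
source "$HOME/.cargo/env"
rustc --version
```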
When running a Mac with Intel hardware (not M1), you may run into
```text
clang: error: the clang compiler does not support '-march=native'
```
during `pip install`. If so, set your archflags during pip install, e.g.:
```bash
ARCHFLAGS="-arch x86_64" pip install -r requirements.txt
```
Metal M1/M2 Only
- By default, requirements_optional_gpt4all.txt should install the correct llama_cpp_python packages for GGUF. See https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases or https://github.com/abetlen/llama-cpp-python/releases for other releases if you encounter any issues.
- If you still have issues, compile with Metal enabled (a quick import check is sketched after this list):
  ```bash
  pip uninstall llama-cpp-python -y
  CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python==0.2.26 --no-cache-dir
  ```
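A minimal check that the rebuilt package imports and reports the pinned version; fully confirming Metal offload still requires loading a GGUF model with `n_gpu_layers` set and watching the startup logs:

```bash
# Confirm the rebuilt wheel imports and matches the pinned version:
python -c "import llama_cpp; print(llama_cpp.__version__)"
```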
If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer.
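On macOS, the usual way to get a C/C++ toolchain is Apple's Command Line Tools. This is a minimal sketch, assuming they are not already installed:

```bash
# Install Apple's Command Line Tools (provides clang/clang++ for building wheels):
xcode-select --install
```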