---
language:
- en
license: apache-2.0
library_name: llamafile
tags:
- mteb
- transformers.js
- transformers
- llamafile
pipeline_tag: feature-extraction
model_creator: mixedbread-ai
model_name: mxbai-embed-large-v1
base_model: mixedbread-ai/mxbai-embed-large-v1
---
# mxbai-embed-large-v1 - llamafile

This repository contains executable weights (which we call [llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.

- Model creator: [mixedbread-ai](https://www.mixedbread.ai/)
- Original model: [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)
- Built with [llamafile 0.8.4](https://github.com/Mozilla-Ocho/llamafile/releases/tag/0.8.4)
 
## Quickstart

Running the following commands on a desktop OS will launch a server on `http://localhost:8080` that accepts HTTP requests and returns embeddings:

```sh
chmod +x mxbai-embed-large-v1-f16.llamafile
./mxbai-embed-large-v1-f16.llamafile --server --nobrowser --embedding
```

Then, you can use your favorite HTTP client to call the server's `/embedding` endpoint:

```sh
curl \
-X POST \
-H "Content-Type: application/json" \
-d '{"content": "Hello, world!"}' \
http://localhost:8080/embedding
```
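Embeddings are typically used to measure semantic similarity between texts. The sketch below is a minimal Python client, assuming the server response is a JSON object with the vector under an `"embedding"` key (the `get_embedding` helper and that field name are assumptions based on the llama.cpp server API, not part of this repository):

```python
import json
import urllib.request


def get_embedding(text, url="http://localhost:8080/embedding"):
    """POST text to a running llamafile server started as shown above.

    Assumes the response JSON carries the vector under "embedding".
    """
    req = urllib.request.Request(
        url,
        data=json.dumps({"content": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors; 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)
```

With the server running, `cosine_similarity(get_embedding("cat"), get_embedding("kitten"))` should score higher than a comparison against an unrelated sentence.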

For further information, please see the [llamafile README](https://github.com/mozilla-ocho/llamafile/) and the [llamafile server docs](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md).

Having **trouble?** See the ["Gotchas" section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas) of the README or contact us on [Discord](https://discord.com/channels/1089876418936180786/1182689832057716778).

## About llamafile

llamafile is a format introduced by Mozilla Ocho on November 20th, 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on stock installs of six OSes for both ARM64 and
AMD64.

---

# Model Card

See [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)