Whimsical Waffle: The Curious Case of LLMs and Their Linguistic Shenanigans
yay
Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:
mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000
Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null). Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads only the gguf header from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.
(using fuse to mount a file via https is cheating)
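Something like this is what I have in mind - just a sketch, not the actual script: it fetches the file in chunks via HTTP range requests and decodes the header incrementally until all key/value pairs have been read. The URL, chunk size and helper names are made up; a real version would also have to walk the tensor info records after the metadata to know the full header size before it could write out a dummy gguf.

```python
# Minimal sketch, not the actual script: fetch a GGUF file's metadata from
# huggingface with HTTP range requests and decode the header incrementally.
# URL, chunk size and names below are illustrative assumptions.
import struct
import urllib.request

CHUNK = 1 << 20  # fetch 1 MiB per range request

class RangeReader:
    """Buffers the remote file and fetches further byte ranges only on demand."""
    def __init__(self, url):
        self.url, self.buf, self.pos = url, b"", 0

    def read(self, n):
        while self.pos + n > len(self.buf):
            want = max(CHUNK, n)
            req = urllib.request.Request(
                self.url,
                headers={"Range": f"bytes={len(self.buf)}-{len(self.buf) + want - 1}"})
            with urllib.request.urlopen(req) as resp:
                chunk = resp.read()
            if not chunk:
                raise EOFError("file ended before the header was complete")
            self.buf += chunk
        data = self.buf[self.pos:self.pos + n]
        self.pos += n
        return data

def read_str(r):
    (n,) = struct.unpack("<Q", r.read(8))      # 8-byte length prefix, then the bytes
    return r.read(n).decode("utf-8", "replace")

# the 13 GGUF value types: 11 scalars plus string (8) and array (9)
SCALARS = {0: "<B", 1: "<b", 2: "<H", 3: "<h", 4: "<I", 5: "<i",
           6: "<f", 7: "<?", 10: "<Q", 11: "<q", 12: "<d"}

def read_value(r, vtype):
    if vtype in SCALARS:
        fmt = SCALARS[vtype]
        return struct.unpack(fmt, r.read(struct.calcsize(fmt)))[0]
    if vtype == 8:                             # string
        return read_str(r)
    if vtype == 9:                             # array: element type, count, elements
        etype, count = struct.unpack("<IQ", r.read(12))
        return [read_value(r, etype) for _ in range(count)]
    raise ValueError(f"unknown GGUF value type {vtype}")

def read_metadata(url):
    r = RangeReader(url)
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", r.read(24))
    assert magic == b"GGUF", "not a gguf file"
    meta = {}
    for _ in range(n_kv):
        key = read_str(r)
        (vtype,) = struct.unpack("<I", r.read(4))
        meta[key] = read_value(r, vtype)
    return version, n_tensors, meta           # r.pos = header bytes consumed so far

if __name__ == "__main__":
    # hypothetical example URL; any gguf served with range support would do
    url = "https://huggingface.co/some-user/some-model-GGUF/resolve/main/model.Q4_K_M.gguf"
    version, n_tensors, meta = read_metadata(url)
    print(version, n_tensors, len(meta))
```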
btw., in the case of blacksheep, I take the lists of quants done from the "quantize" script and patch the job like this:
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S",
and for the jais models, for example, I removed the *0, *1 and IQ4_NL quants, essentially:
"squants": "x-f16 Q4_K_S Q2_K Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS",
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M IQ3_XS IQ3_S",
it's in theory possible to do this when adding the job (not via llmc, because reasons), but that requires us to predict with some accuracy that this will happen, so it is rarely useful.
Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:
mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000
I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as a warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model, so what would be the point of it?
Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null).
What do you mean by touching memory? No additional RAM or GPU memory should get allocated when loading a model. Obviously llama.cpp requires some memory to function, like any application, but that is so little it can be ignored.
Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads only the gguf header from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.
I don't think the header can be that big so you can likely just download enough for the full header to always be present.
btw., in the case of blacksheep, I take the lists of quants done from the "quantize" script and patch the job like this
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S"
I assume you are setting this inside llmjob edit.
Wouldn't the scripts synchronize when it is available again?
Altogether it's 3GB, not just scripts, but also, of course, llama.cpp. I added a hack so when removing the disable flag it will sync automatically, but I also update llama.cpp from home, and every node has a different combination of llama.cpp variants (probably the easiest way around is to change that).
But, yeah, that's not effectively automatable.
Yes, even for me it would now be inconvenient to switch, as I have memorized the path so well.
embrace the difference :)
Oh, let's hope for the best. No imatrix failure so far but a lot of imatrix tasks will only be started at 22:00 due to most of them currently being timeofday blocked.
I am pretty sure the dryrun test works - the only way it could fail is if it somehow succeeds despite the model being broken. Likely there are some tests in llama.cpp that are only done at inference time; the question is how many, and are they important :) We will find out.
Just so you know, DRYRUN is supposed to work with every llama.cpp executable that loads a model, so you are not limited to llama-cli.
To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.
Then just don't use llama-cli but any other one that doesn't do this.
Haha, "just". Love it :) Anyway, are there any? There is the server, but the server seems to do the same thing.
Nice. No idea why everyone keeps renaming their models, but us having a different name makes our models hard to find, so automated renames would be quite useful.
They rename it because they want to be able to erase it and create a different one without having to come up with a new final name, in case it sucks. Models are also regularly moved, and sometimes even apparently cloned, to other users.
It does make them harder to find, but at least I stopped using the search function by hf and started to use the quantisations link.
That would be amazing! There are quite a lot of factors that influence vram usage but maybe you can find a pattern by playing around with dryrun.
I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.
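Roughly along these lines - only a sketch: the fits() predicate would wrap one DRYRUN invocation with the chosen -ngl, tensor split and flash attention flags and compare the reported load+context memory (plus some headroom for inference-time buffers) against the specified VRAM. The per-layer cost numbers below are made up so the example runs on its own.

```python
# Hedged sketch of the planned search, not the real tool: binary search for the
# largest -ngl value whose estimated memory fits the user's VRAM budget.
# In practice fits() would invoke a DRYRUN load and parse its per-device
# report; here a toy linear estimate stands in so the sketch is self-contained.
from typing import Callable

def max_ngl(n_layers: int, fits: Callable[[int], bool]) -> int:
    """Largest ngl in [0, n_layers] for which fits(ngl) is True (fits assumed monotone)."""
    lo, hi, best = 0, n_layers, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):              # one DRYRUN invocation per probe
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

if __name__ == "__main__":
    # toy stand-in: pretend each offloaded layer costs 450 MiB plus 2 GiB of
    # fixed overhead, against a 24 GiB card with 10% headroom for inference
    vram_mib, headroom = 24 * 1024, 0.9
    fits = lambda ngl: 2048 + 450 * ngl <= headroom * vram_mib
    print("maximum -ngl:", max_ngl(80, fits))
```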
models always show the date when they were last updated
You'll have to check quant file dates anyway if you need some kind of date. And then, it's pretty useless.
I guess we can at least try to update them in chronological order, so the order stays the same. Or can we?!?
The updates would almost certainly go from newest to oldest, even (or rather, reverse order in how hf lists them for me), with some randomness.
GIT_COMMITTER_DATE and GIT_AUTHOR_DATE environment variables before committing using git
If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster. Besides, will the server-side git really just accept any client-side garbage date when pushed?
as this will hopefully be the last time we ever edit all of them.
The other a-ha moment I had last week was when I realised that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.
I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.
I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as a warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model, so what would be the point of it?
I was talking about an alternative way to achieve just the validity testing without changing llama.cpp. It's entirely hypothetical.
I don't think the header can be that big so you can likely just download enough for the full header to always be present.
The header is pretty massive - tiny if you look at the whole file, but many megabytes in size, enough to warrant an optimisation. My first computer had ~100 octets of usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.
Anyway, gguf is very wasteful, for example, every vocabulary entry is 8 bytes of string length + string. Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.
And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parser on the hf page exists, I am sure it doesn't matter :)
To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.
It should work for the majority of them. Almost all that load a model are using the same code to do so. I just tested llama-imatrix, llama-perplexity, llama-simple, llama-simple-chat and llama-run, all of which were fully compatible with DRYRUN despite me never testing them before. They don't just work, they also tell you how much memory would be required to load the model in a way that fulfills their purpose, as they essentially just load the model with the exact parameters they require.
Haha, "just". Love it :) Anyway, are there any?
No idea. Try the ones I mentioned above, and if they all do it then this is likely something in the model loading code, in which case I can take a look at the code and see if we can change this.
I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.
That would be so awesome. This is actually exactly what I'm currently using DRYRUN for myself.
Keep in mind that DRYRUN only tells you the memory required to load the model and allocate enough memory for its context. Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.
If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster.
Totally understandable.
will the server-side git really just accept any client-side garbage date when pushed?
All git servers seem to. Git servers kind of trust client-side garbage by design. I had to spoof dates/names/emails for author/committer so many times in the past and not once had a git server refuse the commit. The only thing I'm not sure about is whether HuggingFace uses the time in the git commit like GitHub/GitLab do or whether it uses the server time of the push. Now I'm a bit curious, so the next time I upload a model I might try it.
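For illustration, this is the mechanism I mean (a sketch; the timestamp and commit message are placeholders):

```python
# Sketch only: git takes the author and committer timestamps from these
# environment variables, and in my experience git servers accept them as pushed.
import os
import subprocess

fake_date = "2024-01-15 12:00:00 +0000"          # placeholder timestamp
env = dict(os.environ,
           GIT_AUTHOR_DATE=fake_date,
           GIT_COMMITTER_DATE=fake_date)

# commit with the spoofed dates; --allow-empty just makes the sketch standalone
subprocess.run(["git", "commit", "--allow-empty", "-m", "backdated model card update"],
               env=env, check=True)
```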
The other a-ha moment I had last week was when I realized that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.
I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.
Yes it should be fine unless we hit some kind of rate limit.
The header is pretty massive - tiny if you look at the whole file, but many megabytes in size to warrant an optimization. My first computer had ~100 octets usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.
My first "Gameboy" which in fact was a Voyage 200 calculator for school had 188 kB RAM and 2,7 MB ROM and it was enough to play all kind of games. I even had something like Maro Maker on there. I actually had that Voyage 200 calculator 5 years before I had my first mobile phone and used it from everything from reading, writing, programming and gaming.
In case you wonder, my first PC was a Windows 2000 machine with 13 GB of HDD storage and I think 128 MB of RAM. My first programming language was BlitzBasic, to write PC games, followed by Compact-C, which I used to program C-Control Pro microcontrollers that had 2 KB of usable RAM, 10 KB of usable flash storage, 1 KB EEPROM and a 14.7456 MHz CPU, so I know your feeling.
Anyway, gguf is very wasteful, for example, every vocabulary entry is 8 bytes of string length + string.
That is indeed terribly wasteful. 1 byte would have been enough.
Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.
Which should be fine, as llama.cpp was so nice to put stupid limits everywhere, so most edge cases likely already failed when we tried converting them into GGUF.
And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parser on the hf page exists, I am sure it doesn't matter :)
S3 only charges for the actually used bandwidth as far as I'm aware. So if you only download the first 10 MB, HuggingFace should only be charged for 10 MB. They do charge a very low amount per 10K API calls, but this doesn't matter at all as we only have around 500K quants. I'm mostly worried that HuggingFace might be using intelligent tiering, in which case us accessing all the quants might cause them to be copied into hot storage, which then would cost them the transfer fee plus 30 days of hot storage. But in any case, there is not much we can do about any of this unless we find a storage usage pattern and can, based on one quant, tell how much all the others require, which I think might be possible.
Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.
That's a bummer then... So how would you easily estimate it? And what do you mean by more being required to "load" a model - after loading, flash attention surely uses less memory.
Yes it should be fine unless we hit some kind of rate limit.
That doesn't worry me either - I envisaged some kind of bulk update because I thought versioning the readmes was a bad idea. But I changed my mind. If we hit a rate limit, it will take a few years to update old repos - so what.
Voyage 200 calculator for school
I got the first HP 48SX in Germany (or so I was actually told by HP). Sigh. HP calculators... were so nice...
Windows 2000
Wow. That is so long after I had switched to GNU/Linux. (I switched from DOS to Linux just before win 3 became ubiquitous (in 1994, with 1.0.2 or something - I was even late to the game, or so it felt))
That is indeed terribly wasteful. 1 byte would have been enough.
Yeah, or 4 octet (or even 8 octet) header length + json/msgpack/cbor/... and yes, one octet would be enough if you limit strings to 127 octets, but to be fair, that's a limit of the encoder, not a limit of the format.
I'd say whoever designed it (well, gerganov) was probably paranoid about running into arbitrary 4GB limits anywhere. Puzzlingly enough, though, the primitive type numbers (there are 13) are stored in 32 bit ints. And no, everything is just octet-aligned, so it's nothing to do with that.
In its defence, the gguf decoder I wrote in Perl is just 80 lines of code. So in that sense, it lends itself to a very simple implementation. But using an existing JSON decoder with that header would just be 3 lines or so...
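Just to illustrate, a hypothetical header layout along those lines would need nothing but a length read and an off-the-shelf JSON decoder:

```python
# Hypothetical alternative layout, not real gguf: a fixed 8-octet length prefix
# followed by a JSON blob makes the header size known up front and the parsing
# trivial with any existing JSON decoder.
import json
import struct

def read_json_header(f):
    (hdr_len,) = struct.unpack("<Q", f.read(8))   # 8-octet little-endian length
    return json.loads(f.read(hdr_len))
```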
I think ggerganov has a major fear of external dependencies - even more than me, and I thought I was a bit on the extreme side.
S3 only charges for the actually used bandwidth as far I'm aware.
I admit I am no expert, but it seems to be a well-known attack to request only part of a large file and get billed for much larger transfer costs, because aws does not bill octets downloaded but octets prepared for download, regardless of how much actually was used (or even requested). So yes, only actually used bandwidth, but it's their internal fantasy made-up bandwidth, not the external customer-measurable bandwidth. It is possible that it only affects some S3 storage products, but it's a concern. Well, it's not a concern, because huggingface does it themselves, and I am happy to cache things...
Updating...
Great! I just rebooted StormPeak into low ARC mode so we get an additional 24 GB of RAM.
... or not :)
Everything is ready!
Also, this reminds me that I will have to think about how to update llama on nico2 when it's down. Probably from nico1 when it enables it.
For now this is not needed. nico2 will stay turned off during RPC computation as CastlePeak is hosting the RPC server using a different LXC container, but yes, updating it on wake would make sense.
nico1 is currently idle and all remaining lownice tasks seem to not require any imatrix, so now seems like the perfect time to start RPC. Also, timing-wise, if we start now we should both be awake when it finishes, which is really nice.
Starting it now also has the advantage that I might still be awake in case we OOM while loading the model and could adjust RPC servers accordingly.
For now this is not needed.
It is needed, because when nico2 comes up, it should not run on outdated llama.cpp till I remember to update it maybe in a few weeks :)
Case in point, in my euphoria I started the imatrix job before the update was finished, because I did run the update earlier and forgot that it had failed. Probably would have worked, but would have been a mistake nevertheless.
Thanks a lot for starting the imatrix computation.
Case in point, in my euphoria I started the imatrix job before the update was finished, because I did run the update earlier and forgot that it had failed. Probably would have worked, but would have been a mistake nevertheless.
It probably would have worked, but nice that you caught it. Sorry that I just happened to reboot at the exact time you made the update. I only checked that everything on the status page was idle but forgot about llama.cpp updates. I should have rebooted way earlier when I set up the entire RPC setup, but forgot that changing the ZFS ARC cache size requires a reboot, as usually it never needs one, but if I want to make it quite low I have to put the value into modprobe, rebuild the initramfs and reboot, or it will be ignored.
It is needed, because when nico2 comes up, it should not run on outdated llama.cpp till I remember to update it maybe in a few weeks :)
No worries I will remind you if you forget.
kaos now has all the llama variants and should be able to update nico2 whenever it comes up again. in theory, of course.
Sorry that I just happened to reboot at the exact time you made the update.
You couldn't know, it's not a big deal. It did remind me to change things around, so whenever nodes become enabled, they will be auto-updated now. Other than the rpc link.
No worries I will remind you if you forget.
Right - and now that I thankfully have my own llama upstream maintainer, do you think you can add the current build number or git revision to ggufs in quantize? A simple string in mradermacher.llama_build or so would suffice. That doesn't tell us what version of convert*py did the thing, but we often wondered which version of llama.cpp did a certain quant, exactly, and that at least gives us the version at quantize time.
PS: I forgot if I asked already; if yes, and it was too annoying, just ignore me. This is not a repeat nudge :)