Stas Bekman

AI & ML interests

Toolmaker. Software creator, optimizer and harmonizer. Makes things work and fly at Contextual.AI. Training LLM/RAG/Generative AI/Machine Learning/Scalability

stas's activity

posted an update about 2 months ago
A combined effort from the IBM and PyTorch teams achieved incredible training performance with ZeRO/FSDP, on par with 3D parallelism, on H100s with just an 800Gbps inter-node connection.

This is because they achieved an almost full overlap between comms and compute, and introduced a novel selective activation recomputation method which recomputes only the activations that are large in memory but inexpensive to recompute.

Check out their post here: https://pytorch.org/blog/maximizing-training/
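For a rough feel of what selective activation recomputation looks like in plain PyTorch, here is a generic sketch (not IBM's implementation; the recompute policy below is made up for illustration):

import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)

class Model(nn.Module):
    def __init__(self, dim=1024, n_layers=8, recompute_every=2):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(n_layers))
        # made-up policy: recompute only every Nth block's activations
        self.recompute = {i for i in range(n_layers) if i % recompute_every == 0}

    def forward(self, x):
        for i, block in enumerate(self.blocks):
            if i in self.recompute:
                # do not store this block's activations; re-run its forward
                # during backward, trading a cheap recompute for memory
                x = checkpoint(block, x, use_reentrant=False)
            else:
                x = block(x)
        return x

The real method additionally overlaps that recomputation with communication, which a toy like this doesn't capture.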
replied to their post 3 months ago

I pinged Elio to see if he wants to join.

posted an update 3 months ago
Hear, hear, AMD MI300Xs have started to emerge much sooner than expected.

Here is a two-part benchmark report on BLOOM-176B inference using @MSFTDeepSpeed optimized for the AMD MI300X.

1. https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing
2. https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing-part-2

This was published in response to our BLOOM-176B super-fast inference blog post https://huggingface.co/blog/bloom-inference-pytorch-scripts
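For reference, this is roughly how such a DeepSpeed-Inference run is set up (a schematic sketch using the classic deepspeed.init_inference arguments, launched with the deepspeed launcher; it is not the exact script from the blog post):

import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom"  # the 176B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# shard the model across GPUs with tensor parallelism and inject fused inference kernels
model = deepspeed.init_inference(
    model,
    mp_size=int(os.getenv("WORLD_SIZE", "8")),
    dtype=torch.bfloat16,
    replace_with_kernel_inject=True,
).module

inputs = tokenizer("DeepSpeed on MI300X:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))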

Note that these MI300X GPUs have 192GB of HBM each!

The NVIDIA monopoly is strong, but it'll have to start sharing the pie and hopefully drive the costs down at least somewhat.

Thanks to https://www.linkedin.com/in/eliovp for sharing this writeup with me.

P.S. At the PyTorch conference in the fall, the AMD representative said we will see the MI300X available to us mortals in Q4-2024/Q1-2025.
replied to their post 3 months ago

Thank you for the kind words, Jeff!

We are still waiting for BLOOM v2.0 from HF!

posted an update 3 months ago
Do you have a hidden, massive storage leak because HF Hub model and dataset revisions keep adding up and never get deleted automatically?

Here is how to delete all old revisions, keeping only main, in a few quick steps and with no tedious manual editing.

In terminal A:
$ pip install -U "huggingface_hub[cli]"
$ huggingface-cli delete-cache --disable-tui
File to edit: /tmp/tmpundr7lky.txt
0 revisions selected counting for 0.0. Continue ? (y/N)

Do not answer the prompt yet; proceed with the instructions below.

(note your tmp file will have a different path, so adjust it below)

In terminal B:
$ cp /tmp/tmpundr7lky.txt cache.txt
$ perl -pi -e 's|^#(.*detached.*)|$1|' cache.txt
$ cat cache.txt >> /tmp/tmpundr7lky.txt

The perl one-liner uncommented all the lines containing (detached), i.e. the revisions that can safely be wiped out, and then we appended the result back into the tmp file that huggingface-cli expects to be edited.
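(If perl is not handy, a rough Python equivalent of that uncommenting step, operating on the same cache.txt, would be:)

import re

with open("cache.txt") as f:
    lines = f.readlines()

with open("cache.txt", "w") as f:
    for line in lines:
        # uncomment only the lines that refer to detached revisions
        if line.startswith("#") and "detached" in line:
            line = re.sub(r"^#", "", line, count=1)
        f.write(line)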

Now go back to terminal A and answer the prompts with n, y, y, so it looks like:

0 revisions selected counting for 0.0. Continue ? (y/N) n
89 revisions selected counting for 211.7G. Continue ? (y/N) y
89 revisions selected counting for 211.7G. Confirm deletion ? (Y/n) y

Done.

If you messed up answering the prompts, you still have the cache.txt file, which you can feed into the new tmp file that gets created when you run huggingface-cli delete-cache --disable-tui again.
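And if you'd rather skip the terminal juggling altogether, the same cleanup can be scripted with huggingface_hub's cache-scanning API (a sketch; check the scan_cache_dir docs for your installed version):

from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
# collect every cached revision that is not referenced by main
to_delete = [
    rev.commit_hash
    for repo in cache_info.repos
    for rev in repo.revisions
    if "main" not in rev.refs
]
strategy = cache_info.delete_revisions(*to_delete)
print(f"Will free {strategy.expected_freed_size_str}")
strategy.execute()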

For more details and additional techniques please see https://github.com/stas00/ml-engineering/tree/master/storage#huggingface-hub-caches