Sambit Kumar Barik (NaiveAttention)
1 follower · 24 following
Sam-364 · sambit-kumar-barik-1237ba204
AI & ML interests
LLM | VLM | Natural Language Processing
Recent Activity
reacted to ImranzamanML's post 13 days ago
Here is how we can calculate the size of any LLM model. Each parameter in an LLM is typically stored as a floating-point number, and the size of each parameter in bytes depends on the precision:

- 32-bit precision (FP32): each parameter takes 4 bytes.
- 16-bit precision (FP16): each parameter takes 2 bytes.

To calculate the total memory usage of the model:

Memory usage (in bytes) = number of parameters × size of each parameter

For example, for a model with 1 billion parameters:

- FP32: 1,000,000,000 × 4 bytes = 4,000,000,000 bytes ≈ 3.73 GB (dividing by 1024³)
- FP16: 1,000,000,000 × 2 bytes = 2,000,000,000 bytes ≈ 1.86 GB

So, depending on whether you use 32-bit or 16-bit precision, a model with 1 billion parameters would use approximately 3.73 GB or 1.86 GB of memory, respectively.
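A minimal Python sketch of the same arithmetic (the helper name model_memory_gb is illustrative, not from the post; it assumes the post's convention of 1 GB = 1024³ bytes):

```python
def model_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Raw parameter memory in gigabytes, using 1 GB = 1024**3 bytes as in the post."""
    return num_params * bytes_per_param / 1024**3

# 1-billion-parameter model at the two precisions from the post
print(f"FP32 (4 bytes/param): {model_memory_gb(1e9, 4):.2f} GB")  # -> 3.73 GB
print(f"FP16 (2 bytes/param): {model_memory_gb(1e9, 2):.2f} GB")  # -> 1.86 GB
```

Note this counts only the parameters themselves; actual serving or training memory is higher once activations, KV cache, and optimizer state are included.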
updated a model 27 days ago
NaiveAttention/LeVIT-364-Finetuned
Organizations

NaiveAttention's activity
liked a dataset 9 months ago
liuhaotian/LLaVA-Instruct-150K
Preview • Updated Jan 3 • 2.61k • 470
liked a dataset 10 months ago
glaiveai/glaive-function-calling-v2
Viewer • Updated Sep 27, 2023 • 113k • 609 • 399
liked a model 10 months ago
NaiveAttention/NexusRaven-V2-13B-awq
Text Generation • Updated Feb 12 • 18 • 3
liked a Space 11 months ago
GradTrainer
Sleeping • 1