---
language: en
tags:
- multimodal
- text
- image
license: apache-2.0
datasets:
- obelisc
---
# Model Card for m4-80b

Some cool model...
## Table of Contents
- Model Card for m4-80b
- Table of Contents
- Model Details
- Uses
- Bias, Risks, and Limitations
- Training Details
- Evaluation
- Model Examination
- Environmental Impact
- Technical Specifications [optional]
- Citation
- Glossary [optional]
- More Information [optional]
- Model Card Authors [optional]
- Model Card Contact
- How to Get Started with the Model
## Model Details

### Model Description

Some cool model...

- **Developed by:** HuggingFace
- **Model type:** Multi-modal model (text+image)
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Parent Models:** laion/CLIP-ViT-H-14-laion2B-s32B-b79K and huggingface/llama-65b
- **Resources for more information:** More information needed
  - GitHub Repo
  - Associated Paper: Flamingo: a Visual Language Model for Few-Shot Learning (an illustrative sketch of its gated cross-attention mechanism follows below)
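Per the associated Flamingo paper, models of this family condition a frozen language model on visual features through gated cross-attention layers. The sketch below illustrates that mechanism only, in plain PyTorch; it is not the actual m4-80b implementation, and all names and sizes are assumptions.

```python
# Illustrative sketch of Flamingo-style gated cross-attention -- NOT the
# actual m4-80b implementation; dimensions and names are assumptions.
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Text tokens attend to visual features; tanh gates initialized at zero
    let training start from the frozen language model's original behavior."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.attn_gate = nn.Parameter(torch.zeros(1))  # tanh(0) == 0 at init
        self.ff_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(text, visual, visual)  # queries are text tokens
        text = text + torch.tanh(self.attn_gate) * attn_out
        text = text + torch.tanh(self.ff_gate) * self.ff(text)
        return text

# Toy usage: 4 text tokens attend to 16 visual tokens, hidden size 512.
block = GatedCrossAttentionBlock(dim=512)
out = block(torch.randn(1, 4, 512), torch.randn(1, 16, 512))
```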
## Uses

### Direct Use

### Downstream Use [Optional]

### Out-of-Scope Use
## Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

### Recommendations
## Training Details

### Training Data

More information on training data needed

### Training Procedure

#### Preprocessing

More information needed

#### Speeds, Sizes, Times

More information needed
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

More information needed

#### Factors

More information needed

#### Metrics

More information needed

### Results

More information needed

## Model Examination

More information needed
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 64 nodes of 8x 80GB A100 GPUs, EFA network
- **Hours used:** ~672 node hours
- **Cloud Provider:** AWS SageMaker
- **Carbon Emitted:** Unknown
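As a rough illustration of how such an estimate can be derived from the figures above, the sketch below multiplies GPU-hours by an assumed per-GPU power draw and grid carbon intensity. Both constants are assumptions for illustration, not measurements from this run.

```python
# Back-of-envelope estimate in the spirit of Lacoste et al. (2019).
# GPU power draw and grid carbon intensity are ASSUMPTIONS for
# illustration, not measured values for this training run.
NODE_HOURS = 672        # reported above
GPUS_PER_NODE = 8       # 8x 80GB A100 per node
GPU_POWER_KW = 0.4      # ~400 W TDP per A100 (assumed upper bound)
KG_CO2_PER_KWH = 0.4    # assumed grid carbon intensity

gpu_hours = NODE_HOURS * GPUS_PER_NODE
energy_kwh = gpu_hours * GPU_POWER_KW
emissions_kg = energy_kwh * KG_CO2_PER_KWH
print(f"~{energy_kwh:,.0f} kWh, ~{emissions_kg:,.0f} kg CO2eq")
```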
## Technical Specifications [optional]

### Model Architecture and Objective

More information needed

### Compute Infrastructure

More information needed

#### Hardware

Training was performed on an AWS SageMaker cluster of 64 nodes, each with 8x 80GB A100 GPUs (512 GPUs total). The cluster uses the current EFA network, which provides about 340 GBps of throughput.

Since this network is quite slow for the needs of DeepSpeed ZeRO-3, we were only able to clock ~90 TFLOPS.
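For context, a DeepSpeed ZeRO-3 run is typically driven by a JSON config; the sketch below shows representative knobs, including the communication-overlap option that matters on a slower interconnect. All values are illustrative, not this run's actual settings.

```python
# Representative DeepSpeed ZeRO-3 configuration knobs (illustrative values,
# NOT the actual settings used for this run).
import json

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                # shard params, grads, and optimizer states
        "overlap_comm": True,      # overlap communication with compute to
                                   # mitigate a slow interconnect
        "contiguous_gradients": True,
    },
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```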
#### Software

The training software is built on top of HuggingFace Transformers + Accelerate and DeepSpeed ZeRO-3, with WebDataset for data loading (see the sketch below).
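A minimal sketch of a WebDataset-style streaming pipeline, assuming hypothetical shard paths and sample keys; the actual shards and keys used for training are not described here.

```python
# Minimal WebDataset streaming pipeline (the shard path pattern and sample
# keys are hypothetical, not those of the actual training data).
import webdataset as wds

dataset = (
    wds.WebDataset("shards/train-{000000..000999}.tar")  # hypothetical shards
    .decode("pil")               # decode images with PIL
    .to_tuple("jpg", "txt")      # yield (image, caption) pairs
)

for image, caption in dataset:
    break  # feed into the training loop
```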
## Citation

**BibTeX:**

More information needed

**APA:**

More information needed
## Glossary [optional]

More information needed

## More Information [optional]

More information needed

## Model Card Authors [optional]
Victor, Stas, XXX
## Model Card Contact

More information needed
## How to Get Started with the Model

Use the code below to get started with the model.
More information needed
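Until official usage code is published, the following minimal sketch assumes a standard `transformers` vision-to-text interface. The checkpoint id `HuggingFaceM4/m4-80b`, the auto-classes, and the image URL are placeholders and may not match the final release.

```python
# Minimal sketch, ASSUMING a standard transformers vision-to-text interface.
# The checkpoint id, auto-classes, and image URL are placeholders; the
# released API may differ.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

checkpoint = "HuggingFaceM4/m4-80b"  # hypothetical model id
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)
inputs = processor(images=image, text="A picture of", return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```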