---
license: creativeml-openrail-m
datasets:
- TempoFunk/tempofunk-sdance
language:
- en
pipeline_tag: text-to-video
---

# Model Card for TempoFunk

A community-produced Text-To-Video model using temporal attention

# Table of Contents

- [Model Card for TempoFunk](#model-card-for-tempofunk)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
  - [Model Description](#model-description)
- [Uses](#uses)
  - [Direct Use](#direct-use)
  - [Downstream Use [Optional]](#downstream-use-optional)
  - [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
  - [Recommendations](#recommendations)
- [Training Details](#training-details)
  - [Training Data](#training-data)
  - [Results](#results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
  - [Model Architecture and Objective](#model-architecture-and-objective)
  - [Compute Infrastructure](#compute-infrastructure)
    - [Hardware](#hardware)
    - [Software](#software)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

A community-produced Text-To-Video model using temporal attention

- **Developed by:** Lopho, Chavez, Davut Emre, Julian Herrera
- **Shared by [Optional]:** More information needed
- **Model type:** Text-To-Video
- **Language(s) (NLP):** en
- **License:** creativeml-openrail-m
- **Resources for more information:** [GitHub Repo](https://github.com/lopho/makeavid-sd-tpu)

# Uses

The TempoFunk model is intended to be used as a video production tool.

## Direct Use

- Generative video production

## Downstream Use [Optional]

- Meme production
- Visualization
- Personalized text-to-video

## Out-of-Scope Use

- Producing disinformation
- Producing gore

# Bias, Risks, and Limitations

TempoFunk may generate obscene or otherwise unpleasant-looking imagery. This stems from both the VAE and the small number of samples the model has seen during training. Video generated by TempoFunk may appear uncanny.

## Recommendations

Use super-resolution or other post-processing methods to clean up the visuals before publishing or otherwise using them.

# Training Details

## Training Data

TempoFunk was trained on movement data from dancing videos. These dancing videos were scraped and encoded into Stable Diffusion VAE latents. More information forthcoming.

## Results

Example results can be viewed in the [demo space](https://huggingface.co/spaces/TempoFunk/makeavid-sd-jax).

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** TPU v4
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

The temporal layers are a port of Make-A-Video PyTorch to FLAX. The convolution is pseudo-3D: it separately convolves across the spatial dimensions in 2D and over the temporal dimension in 1D. Temporal attention is purely self-attention and likewise attends only over time. Only the new temporal layers were fine-tuned, on a dataset of videos themed around dance.
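The following is a minimal FLAX sketch, not the TempoFunk implementation itself, of the two building blocks described above: a pseudo-3D convolution (2D spatial followed by 1D temporal) and a temporal self-attention layer. Module names, kernel sizes, and head counts are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class PseudoConv3D(nn.Module):
    """Pseudo-3D convolution: 2D over space, then 1D over time."""
    features: int

    @nn.compact
    def __call__(self, x):
        # x: (batch, frames, height, width, channels)
        b, t, h, w, c = x.shape
        # 2D spatial convolution, applied to each frame independently.
        x = x.reshape(b * t, h, w, c)
        x = nn.Conv(self.features, kernel_size=(3, 3), padding="SAME")(x)
        # 1D temporal convolution, applied at each spatial position independently.
        x = x.reshape(b, t, h, w, self.features)
        x = x.transpose(0, 2, 3, 1, 4).reshape(b * h * w, t, self.features)
        x = nn.Conv(self.features, kernel_size=(3,), padding="SAME")(x)
        return x.reshape(b, h, w, t, self.features).transpose(0, 3, 1, 2, 4)


class TemporalSelfAttention(nn.Module):
    """Self-attention over the time axis only, with a residual connection."""
    num_heads: int

    @nn.compact
    def __call__(self, x):
        # x: (batch, frames, height, width, channels)
        b, t, h, w, c = x.shape
        # Fold the spatial axes into the batch so attention tokens are frames.
        y = x.transpose(0, 2, 3, 1, 4).reshape(b * h * w, t, c)
        y = nn.SelfAttention(num_heads=self.num_heads)(y)
        y = y.reshape(b, h, w, t, c).transpose(0, 3, 1, 2, 4)
        return x + y


# Example: a batch of one 24-frame clip of 32x32 Stable Diffusion latents.
video = jnp.zeros((1, 24, 32, 32, 4))
conv = PseudoConv3D(features=4)
params = conv.init(jax.random.PRNGKey(0), video)
out = conv.apply(params, video)  # (1, 24, 32, 32, 4)
```

Factoring the 3D operations this way keeps the pretrained 2D spatial weights intact while only the new temporal components need training, which matches the fine-tuning strategy described above.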
The model was trained for 80 epochs on a dataset of 18,000 videos of 120 frames each, randomly selecting a 24-frame range from each sample.

## Compute Infrastructure

### Hardware

TPU v4

### Software

- Google JAX
- Google FLAX

# Model Card Authors [optional]

Lopho, Chavez, Davut Emre, Julian Herrera

# How to Get Started with the Model

Use the demo space to get started: [TempoFunk/makeavid-sd-jax](https://huggingface.co/spaces/TempoFunk/makeavid-sd-jax)
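To experiment locally, the checkpoint files can be fetched with `huggingface_hub`. This is a minimal sketch: it assumes the weights are published in a repository named `TempoFunk/makeavid-sd-jax`, mirroring the space name, which may differ from where the checkpoint is actually hosted.

```python
# Minimal sketch: download the checkpoint files for local experimentation.
# Assumption: the weights live in a repo named "TempoFunk/makeavid-sd-jax";
# adjust repo_id (and repo_type) to wherever the checkpoint is actually hosted.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TempoFunk/makeavid-sd-jax")
print(local_dir)  # local path containing the downloaded model files
```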