Full-text search
1,000+ results
AhmedEwis / Shagardi_chatbot
app.py
space
871 matches
mrm8488 / stablelm2-1.6b-ft-openhermes
README.md
model
11 matches
tags:
transformers, safetensors, stablelm_epoch, text-generation, custom_code, en, dataset:teknium/openhermes, license:wtfpl, autotrain_compatible, region:us
# StableLM 2 (1.6B) fine-tuned on OpenHermes
<div style="text-align:center;">
<img src="https://huggingface.co/mrm8488/stablelm2-1.6b-ft-openhermes/resolve/main/logo.png" alt="logo" style="width:150px; height:200px;">
</div>
rc9494 / SP500_Date_Offset
README.md
dataset
14 matches
tags:
language:en, license:cc-by-4.0, finance, economics, time series, region:us
# S&P 500 Date Offset
Financial markets and the economy go hand in hand: we expect positive growth during good economic times and corrections during recessions. That said, at any given time, the investor or trader cannot have perfect information about the current state of the economy. Since there is often a lag of several months between when economic conditions are experienced and when they are officially reported, a degree of speculation gets priced into the market. The accuracy of this speculation is checked when the official economic data is finally released, often resulting in price swings when expectations and reality are not in line.
This raises the important question of whether this over- or under-confidence in the market can be measured, but data-formatting issues stand in the way. It is standard practice to back-date economic data to the first day of the time period it describes, rather than the day of publication (e.g., CPI data for January is released in mid-February and GDP data for Q1 is released in late April, yet both are dated January 1st).
This creates a significant problem: it is not enough to pull together data from various APIs and merge along a date index. This project offsets the economic data to reflect its publication dates.
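The offset described above can be sketched in a few lines of pandas. The release lags below (~45 days for monthly CPI, ~115 days for quarterly GDP) are illustrative assumptions matching the mid-February and late-April examples, not the dataset's exact values:

```python
import pandas as pd

# Toy frame of back-dated economic series: each row is stamped with the
# first day of the period it describes, not its publication date.
econ = pd.DataFrame(
    {
        "series": ["CPI", "GDP"],
        "period_start": pd.to_datetime(["2024-01-01", "2024-01-01"]),
    }
)

# Illustrative release lags (assumed): CPI for a month is published
# roughly 45 days later; GDP for a quarter roughly 115 days later.
lags = {"CPI": pd.Timedelta(days=45), "GDP": pd.Timedelta(days=115)}

# Shift each observation to (approximately) the date it became public,
# so a merge against market prices no longer leaks future information.
econ["release_date"] = econ["period_start"] + econ["series"].map(lags)

print(econ)
```

Merging market data on `release_date` instead of `period_start` keeps the index aligned with what investors actually knew at the time.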
stabilityai / stable-audio-open-1.0
README.md
model
74 matches
shenzhi-wang / Llama3-8B-Chinese-Chat
README.md
model
452 matches
tags:
transformers, safetensors, llama, text-generation, llama-factory, orpo, conversational, en, zh, base_model:meta-llama/Meta-Llama-3-8B-Instruct, doi:10.57967/hf/2316, license:llama3, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
e.co/shenzhi-wang/Llama3-70B-Chinese-Chat)! Full-parameter fine-tuned on a mixed Chinese-English dataset of ~100K preference pairs, its Chinese performance **surpasses ChatGPT** and **matches GPT-4**, as shown by C-Eval and CMMLU results. [Llama3-**70B**-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat) is much more powerful than Llama3-8B-Chinese-Chat. If you love our Llama3-8B-Chinese-Chat, you should try our [Llama3-**70B**-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat)!
🌟 We have included full instructions on how to download, use, and reproduce our various models at [this GitHub repo](https://github.com/Shenzhi-Wang/Llama3-Chinese-Chat). If you like our models, we would greatly appreciate it if you could star our GitHub repository. Additionally, please click "like" on our Hugging Face repositories. Thank you!
Salesforce / xgen-mm-phi3-mini-instruct-r-v1
README.md
model
58 matches
tags:
transformers, safetensors, xgenmm, feature-extraction, image-text-to-text, custom_code, en, license:cc-by-nc-4.0, region:us
BLIP series** into **XGen-MM**, to be better aligned with Salesforce's unified XGen initiative for large foundation models! This rebranding marks a significant step in our ongoing development of cutting-edge multimodal technologies.
`XGen-MM` is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. The series builds upon the successful designs of the `BLIP` series, incorporating fundamental enhancements that ensure a more robust and superior foundation. \
These models have been trained at scale on high-quality image caption datasets and interleaved image-text data. XGen-MM highlights the following features:
runwayml / stable-diffusion-v1-5
README.md
model
103 matches
tags:
diffusers, safetensors, stable-diffusion, stable-diffusion-diffusers, text-to-image, arxiv:2207.12598, arxiv:2112.10752, arxiv:2103.00020, arxiv:2205.11487, arxiv:1910.09700, license:creativeml-openrail-m, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
Sao10K / L3-8B-Stheno-v3.1
README.md
model
28 matches
tags:
transformers, safetensors, llama, text-generation, conversational, en, license:cc-by-nc-4.0, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 80%; min-width: 400px; display: block; margin: auto;">
**Model: Llama-3-8B-Stheno-v3.1**
> **NEWER VERSION IS OUT** <br>
stabilityai / sdxl-turbo
README.md
model
72 matches
tags:
diffusers, onnx, safetensors, text-to-image, license:other, diffusers:StableDiffusionXLPipeline, region:us
# SDXL-Turbo Model Card
<!-- Provide a quick summary of what the model is/does. -->
![row01](output_tile.jpg)
SDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.
shenzhi-wang / Llama3-70B-Chinese-Chat
README.md
model
730 matches
tags:
transformers, safetensors, llama, text-generation, llama-factory, orpo, conversational, en, zh, base_model:meta-llama/Meta-Llama-3-70B-Instruct, doi:10.57967/hf/2315, license:llama3, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
odel's identity. Thus, inquiries such as "Who are you" or "Who developed you" may yield random responses that are not necessarily accurate.
# Updates:
- 🚀🚀🚀 [May 9, 2024] We're excited to introduce Llama3-70B-Chinese-Chat! Full-parameter fine-tuned on a mixed Chinese-English dataset of **~100K preference pairs**, its Chinese performance **surpasses ChatGPT and matches GPT-4**, as shown by C-Eval and CMMLU results.
stabilityai / stable-video-diffusion-img2vid-xt
README.md
model
91 matches
tags:
diffusers, safetensors, image-to-video, license:other, diffusers:StableVideoDiffusionPipeline, region:us
# Stable Video Diffusion Image-to-Video Model Card
<!-- Provide a quick summary of what the model is/does. -->
![row01](output_tile.gif)
Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
stabilityai / TripoSR
app.py
space
40 matches
Sao10K / L3-8B-Stheno-v3.2
README.md
model
29 matches
tags:
transformers, safetensors, llama, text-generation, conversational, en, dataset:Gryphe/Opus-WritingPrompts, dataset:Sao10K/Claude-3-Opus-Instruct-15K, dataset:Sao10K/Short-Storygen-v2, dataset:Sao10K/c2-Logs-Filtered, license:cc-by-nc-4.0, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
or a service or something. We can talk.*
*Training used 1x H100 SXM for a total of ~24 hours over multiple runs.*
Support me here if you're interested:
shenzhi-wang / Llama3-8B-Chinese-Chat-GGUF-8bit
README.md
model
450 matches
tags:
transformers, gguf, llama, text-generation, llama-factory, orpo, en, zh, base_model:meta-llama/Meta-Llama-3-8B-Instruct, license:llama3, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
.com/Shenzhi-Wang/Llama3-Chinese-Chat). If you like our models, we would greatly appreciate it if you could star our GitHub repository. Additionally, please click "like" on our Hugging Face repositories. Thank you!
❗️❗️❗️NOTICE: The main branch contains the **q8_0 GGUF files** for [Llama3-8B-Chinese-Chat-**v2.1**](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat). If you want to use our q8_0 GGUF files for Llama3-8B-Chinese-Chat-**v1**, please refer to [the `v1` branch](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit/tree/v1); if you want to use our q8_0 GGUF files for Llama3-8B-Chinese-Chat-**v2**, please refer to [the `v2` branch](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit/tree/v2).
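The branch layout in the notice above can be captured in a small helper. `gguf_revision` is a hypothetical name, and the mapping simply mirrors the notice (v2.1 lives on `main`; v1 and v2 on their own branches); the returned string can be passed as the `revision` argument of `hf_hub_download` from `huggingface_hub`:

```python
# Hypothetical helper mirroring the notice above: map a
# Llama3-8B-Chinese-Chat version to the GGUF repo branch that hosts it.
_BRANCH_FOR_VERSION = {
    "v1": "v1",      # q8_0 GGUF files for v1
    "v2": "v2",      # q8_0 GGUF files for v2
    "v2.1": "main",  # the main branch holds the v2.1 files
}

def gguf_revision(version: str) -> str:
    """Return the repo branch (hf_hub 'revision') for a model version."""
    try:
        return _BRANCH_FOR_VERSION[version]
    except KeyError:
        raise ValueError(f"unknown version: {version!r}") from None

# Usage with huggingface_hub (requires network; filename is illustrative):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit",
#     filename="...",  # pick a .gguf file listed in the repo
#     revision=gguf_revision("v1"),
# )
```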
stabilityai / sv3d
README.md
model
47 matches
tags:
image-to-video, dataset:allenai/objaverse, arxiv:2403.12008, license:other, region:us
# Stable Video 3D
![](sv3doutputs.gif)
**Stable Video 3D (SV3D)** is a generative model based on [Stable Video Diffusion](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) that takes in a still image of an object as a conditioning frame, and generates an orbital video of that object.
Please note: For commercial use, please refer to https://stability.ai/membership.
satellogic / EarthView
README.md
dataset
177 matches
suno / bark
README.md
model
64 matches
tags:
transformers, pytorch, bark, text-to-audio, audio, text-to-speech, en, de, es, fr, hi, it, ja, ko, pl, pt, ru, tr, zh, license:mit, endpoints_compatible, region:us
by [Suno](https://www.suno.ai).
Bark can generate highly realistic, multilingual speech as well as other audio - including music,
background noise and simple sound effects. The model can also produce nonverbal
communications like laughing, sighing and crying. To support the research community,
we are providing access to pretrained model checkpoints ready for inference.
CompVis / stable-diffusion-v1-4
README.md
model
120 matches
tags:
diffusers, safetensors, stable-diffusion, stable-diffusion-diffusers, text-to-image, arxiv:2207.12598, arxiv:2112.10752, arxiv:2103.00020, arxiv:2205.11487, arxiv:1910.09700, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us
# Stable Diffusion v1-4 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
stabilityai / stable-diffusion-2-1
README.md
model
111 matches
tags:
diffusers, safetensors, stable-diffusion, text-to-image, arxiv:2112.10752, arxiv:2202.00512, arxiv:1910.09700, license:openrail++, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us
# Stable Diffusion v2-1 Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`.