| text | repo |
|---|---|
vuejs/core-vapor;Vue Vapor This repository is a fork of vuejs/core and is used for research and development of no virtual dom mode. Vapor Playground Vapor Template Explorer Vue Vapor Team Evan You - Creator and designer of Vue.js and Vue Vapor. Kevin Deng 三咲智子 - Lead contributor and developer for Vue Vapor. Rizumu Ayaka - Active contributor. Ubugeeei - Active contributor. Doctor Wu - Active contributor. TODO [x] Counter App [x] simple bindings [x] simple events [x] TODO-MVC App [x] Repl [x] transform [x] NodeTransform [x] DirectiveTransform [ ] directives [x] v-once [x] v-html [x] v-text [x] v-pre [x] v-cloak [x] v-bind [x] simple expression [x] compound expression [x] modifiers [x] .camel [x] .prop [x] .attr [x] v-on [x] simple expression [x] compound expression [x] modifiers [x] runtime directives [ ] v-memo [on hold] [x] v-model [x] v-if / v-else / v-else-if [x] v-for [x] v-show [x] Fragment [ ] Codegen [x] CodegenContext [x] indent [x] Source map [x] scope id [ ] Function mode [on hold] [ ] SSR [ ] Component WIP [ ] Built-in Components [ ] Transition [ ] TransitionGroup [ ] KeepAlive [ ] Teleport [ ] Suspense [ ] Performance & Optimization [ ] remove unnecessary close tag </div> Sponsors Vue.js is an MIT-licensed open source project with its ongoing development made possible entirely by the support of these awesome backers . If you'd like to join them, please consider sponsoring Vue's development . Special Sponsor License MIT Copyright (c) 2013-present, Yuxi (Evan) You;Vue Vapor is a variant of Vue that offers rendering without the Virtual DOM.;[] | vuejs/core-vapor |
WisdomShell/codeshell;🤗 Hugging Face • 🤖 ModelScope • ⭕️ WiseModel • 🌐 PKU-KCL [![license](https://img.shields.io/github/license/modelscope/modelscope.svg)](https://github.com/WisdomShell/codeshell/blob/main/License.pdf) 中文 | English ## Introduction
CodeShell is a multilingual code foundation LLM jointly developed by the [Knowledge Computing Lab of Peking University](http://se.pku.edu.cn/kcl/) and the AI team of Sichuan Tianfu Bank. CodeShell has 7 billion parameters, was trained on 500 billion tokens, and has a context window of 8192 tokens. On the authoritative code benchmarks HumanEval and MBPP, CodeShell achieves the best performance among models of the same size. We also provide a deployment solution and IDE plugins for CodeShell; see the code repository [CodeShell](https://github.com/WisdomShell/codeshell). For easier downloads by users in China, corresponding versions are also available on [Modelscope](https://modelscope.cn/organization/WisdomShell) and [Wisemodel](https://www.wisemodel.cn/models/WisdomShell/CodeShell-7B/).
The following models are open-sourced in this release:
- CodeShell Base: the CodeShell foundation model, with strong basic code capabilities.
- CodeShell Chat: the CodeShell chat model, with excellent performance on downstream tasks such as code Q&A and code completion.
- CodeShell Chat 4bit: a 4-bit quantized version of the chat model, with a smaller memory footprint and faster inference while preserving model quality.
- CodeShell CPP: a C++ version of the chat model, allowing developers to use it on personal computers without a GPU. Note that the C++ version also supports quantization, so CodeShell can run on machines with as little as 8 GB of RAM.
## Main Characteristics of CodeShell
- **Strong performance**: CodeShell achieves the best performance among 7B code foundation models on HumanEval and MBPP
- **Complete ecosystem**: Besides the code LLM, open-source IDE plugins (VS Code and JetBrains) are released at the same time, forming a fully open-source stack
- **Lightweight deployment**: Supports local C++ deployment, providing a fast and lightweight local coding-assistant solution
- **Comprehensive evaluation**: Provides a multi-task evaluation suite that supports full project context and covers common software development activities such as code generation, defect detection and repair, and test-case generation (to be open-sourced soon)
- **Efficient training**: Built on an efficient data-governance pipeline, CodeShell reaches excellent performance after training on only 500 billion tokens from a complete cold start
## Performance
We evaluated the model on the two most popular code evaluation benchmarks (HumanEval and MBPP). Compared with CodeLlama and StarCoder, the two most advanced 7B code LLMs, CodeShell achieves the best results. Detailed evaluation results are shown below.
| Task | CodeShell-7b | CodeLlama-7b | Starcoder-7b |
| ------- | --------- | --------- | --------- |
| humaneval | **34.32** | 29.44 | 27.80 |
| mbpp | **38.65** | 37.60 | 34.16 |
| multiple-js | **33.17** | 31.30 | 27.02 |
| multiple-java | **30.43** | 29.24 | 24.30 |
| multiple-cpp | **28.21** | 27.33 | 23.04 |
| multiple-swift | 24.30 | **25.32** | 15.70 |
| multiple-php | **30.87** | 25.96 | 22.11 |
| multiple-d | 8.85 | **11.60** | 8.08 |
| multiple-jl | 22.08 | **25.28** | 22.96 |
| multiple-lua | 22.39 | **30.50** | 22.92 |
| multiple-r | **20.52** | 18.57 | 14.29 |
| multiple-rkt | **17.20** | 12.55 | 10.43 |
| multiple-rs | 24.55 | **25.90** | 22.82 |
## Requirements
- python 3.8 and above
- pytorch 2.0 and above are recommended
- transformers 4.32 and above
- CUDA 11.8 and above are recommended (this is for GPU users, flash-attention users, etc.)
## Quickstart
The CodeShell models have been uploaded to Hugging Face, and developers can call CodeShell and CodeShell-Chat directly through Transformers.
Before you start, please make sure the environment is set up correctly, the necessary packages are installed, and the requirements of the previous section are met. You can quickly install the dependencies with the following command.
```
pip install -r requirements.txt
```
You can then use CodeShell through Transformers.
### Code Generation
Developers can use CodeShell to generate code quickly and speed up development.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("WisdomShell/CodeShell-7B")
model = AutoModelForCausalLM.from_pretrained("WisdomShell/CodeShell-7B", trust_remote_code=True, torch_dtype=torch.bfloat16).to(device)
inputs = tokenizer('def merge_sort():', return_tensors='pt').to(device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
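The call above relies on the default generation settings; below is a minimal sketch with explicit sampling parameters. The values are illustrative assumptions, not settings recommended by the CodeShell authors.
```python
# Illustrative sampling settings (assumed values, not official recommendations)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,              # cap the length of the completion
    do_sample=True,                  # sample instead of greedy decoding
    temperature=0.2,                 # low temperature keeps code completions focused
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```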
- Fill in the Middle
CodeShell supports the Fill-in-the-Middle mode, which better supports software development workflows.
```python
input_text = " def print_hello_world():\n \n print('Hello world!') "
inputs = tokenizer(input_text, return_tensors='pt').to(device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
- Code Q&A
CodeShell also open-sources the code assistant model CodeShell-7B-Chat. Developers can interact with the model using the following code.
```python
model = AutoModelForCausalLM.from_pretrained('WisdomShell/CodeShell-7B-Chat', trust_remote_code=True, torch_dtype=torch.bfloat16).to(device)
tokenizer = AutoTokenizer.from_pretrained('WisdomShell/CodeShell-7B-Chat')
history = []
query = '你是谁?'
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
query = '用Python写一个HTTP server'
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
```
Developers can also interact with CodeShell-7B-Chat through the VS Code and JetBrains plugins; for details see the [VSCode plugin repository](https://github.com/WisdomShell/codeshell-vscode) and the [IntelliJ plugin repository](https://github.com/WisdomShell/codeshell-intellij).
- Model Quantization
CodeShell supports 4-bit/8-bit quantization. After 4-bit quantization the model takes about 6 GB of GPU memory, so CodeShell can be used on GPUs with limited memory.
```python
model = AutoModelForCausalLM.from_pretrained('WisdomShell/CodeShell-7B-Chat-int4', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('WisdomShell/CodeShell-7B-Chat-int4')
```
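If you prefer to quantize the full-precision chat checkpoint on the fly instead of downloading the pre-quantized int4 weights, a minimal sketch using bitsandbytes through Transformers could look like this. This is an assumption on our side, not an official CodeShell recipe; it requires the `bitsandbytes` and `accelerate` packages and a CUDA GPU.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed on-the-fly 8-bit quantization via bitsandbytes (not an official recipe).
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    'WisdomShell/CodeShell-7B-Chat',
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained('WisdomShell/CodeShell-7B-Chat')
```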
- CodeShell in c/c++
Since most personal computers have no GPU, CodeShell provides C/C++ inference support. Developers can compile and use it in their local environment; see [CodeShell C/C++ local version](https://github.com/WisdomShell/llama_cpp_for_codeshell) for details.
## Demo
We provide demos in four forms: Web UI, command line, API, and IDE.
### Web UI
Developers can start the web service with the following command. Once the service is running, it can be accessed at `https://127.0.0.1:8000`.
```
python demos/web_demo.py
```
### CLI Demo
We also provide a command-line demo, which developers can run with the following command.
```
python demos/cli_demo.py
```
### API
CodeShell also provides an OpenAI-API-based deployment method.
```
python demos/openai_api.py
```
Once started, you can interact with CodeShell via HTTP requests.
```
curl http://127.0.0.1:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "CodeShell-7B-Chat",
"messages": [
{
"role": "user",
"content": "你好"
}
]
}'
```
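The same request can also be issued from Python; here is a minimal sketch using the `requests` library against the endpoint shown above. The payload simply mirrors the curl example and assumes nothing beyond it about the server.
```python
import requests

# Mirrors the curl request above.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "CodeShell-7B-Chat",
        "messages": [{"role": "user", "content": "你好"}],
    },
    timeout=60,
)
print(resp.json())
```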
### IDE
Finally, CodeShell provides an online IDE where developers can perform code completion, code Q&A, and other operations. The IDE plugins have also been released, and developers can install and use them locally. For plugin-related questions, feel free to open a discussion in the [VSCode plugin repository](https://github.com/WisdomShell/codeshell-vscode) or the [IntelliJ plugin repository](https://github.com/WisdomShell/codeshell-intellij).
## Model Details
CodeShell uses GPT-2 as its base architecture and adopts techniques such as Grouped-Query Attention and RoPE relative position encoding.
### Hyper-parameter
| Hyper-parameter | Value |
|---|---|
| n_layer | 42 |
| n_embd | 4096 |
| n_inner | 16384 |
| n_head | 32 |
| num_query_groups | 8 |
| seq-length | 8192 |
| vocab_size | 70144 |
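To illustrate what `num_query_groups` means: grouped-query attention stores only 8 key/value heads and shares each of them among 4 of the 32 query heads. The sketch below only demonstrates this head bookkeeping with the sizes from the table; it is not code from the CodeShell implementation.
```python
import torch

n_head, num_query_groups, n_embd = 32, 8, 4096
head_dim = n_embd // n_head                    # 128
heads_per_group = n_head // num_query_groups   # 4 query heads share one KV head

q = torch.randn(1, 16, n_head, head_dim)              # (batch, seq, 32 query heads, head_dim)
kv = torch.randn(1, 16, num_query_groups, head_dim)   # only 8 KV heads are stored

# Expand each KV head to the query heads in its group before computing attention.
kv_expanded = kv.repeat_interleave(heads_per_group, dim=2)
print(kv_expanded.shape)  # torch.Size([1, 16, 32, 128])
```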
### Data
CodeShell is trained on GitHub data crawled by the team, the Stack and StarCoder datasets open-sourced by BigCode, and a small amount of high-quality Chinese and English data. On top of the raw datasets, the data is deduplicated with MinHash and filtered with KenLM and a high-quality data-selection model, yielding the final high-quality pre-training dataset.
### Tokenizer
CodeShell optimizes the StarCoder vocabulary by removing rarely used tokens and adding Chinese tokens, which significantly improves the compression ratio for Chinese text and lays the groundwork for training the Chat version.
| Tokenizer | Size | Chinese | English | Code | Total|
|---|---|---|---|---|---|
| Starcoder | 49152 | 1.22 | 3.47 | 3.30 | 2.66 |
| CodeShell | 70020 | 1.50 | 3.47 | 3.30 | 2.95 |
## License
Community use of the CodeShell models must follow the [CodeShell Model License Agreement](https://github.com/WisdomShell/codeshell/blob/main/License.pdf) and the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). The CodeShell models allow commercial use, but if you plan to use the CodeShell models or their derivatives commercially, you need to confirm that your organization meets the following conditions:
1. The daily active users (DAU) of the affiliated party's services or products do not exceed 1 million.
2. The affiliated party is not a software service provider or cloud service provider.
3. There is no possibility that the affiliated party re-licenses the granted commercial license to other third parties without permission.
Provided the above conditions are met, you need to send the application materials required by the CodeShell Model License Agreement to codeshell.opensource@gmail.com. Once approved, you will be granted a worldwide, non-exclusive, non-transferable, non-sublicensable commercial copyright license.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=WisdomShell/codeshell&type=Date)](https://star-history.com/#WisdomShell/codeshell&Date);A series of code large language models developed by PKU-KCL;[] | WisdomShell/codeshell |
ChenyangSi/FreeU;FreeU: Free Lunch in Diffusion U-Net () Chenyang Si , Ziqi Huang , Yuming Jiang , Ziwei Liu S-Lab, Nanyang Technological University [Paper](https://arxiv.org/pdf/2309.11497.pdf) | [Project Page](https://chenyangsi.top/FreeU/) | [Video](https://www.youtube.com/watch?v=-CZ5uWxvX30&t=2s) | [Demo](https://huggingface.co/spaces/ChenyangSi/FreeU) CVPR2024 Oral [![Hugging Face](https://img.shields.io/badge/Demo-%F0%9F%A4%97%20Hugging%20Face-66cdaa)](https://huggingface.co/spaces/ChenyangSi/FreeU) We propose FreeU, a method that substantially improves diffusion model sample quality at no costs: no training, no additional parameter introduced, and no increase in memory or sampling time. :open_book: For more visual results, go checkout our project page Usage A demo is also available on the (huge thanks to AK and all the HF team for their support). You can use the gradio demo locally by running python demos/app.py . FreeU Code ```python
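# Note: the snippet below is an excerpt that patches the Stable Diffusion (ldm) UNet.
# It assumes the imports of that codebase; roughly (an assumption, adjust to your setup):
import torch
import torch as th
import torch.fft as fft
# UNetModel and timestep_embedding come from the ldm UNet implementation
# (e.g. ldm.modules.diffusionmodules.openaimodel); import them from your diffusion codebase.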
def Fourier_filter(x, threshold, scale):
# FFT
x_freq = fft.fftn(x, dim=(-2, -1))
x_freq = fft.fftshift(x_freq, dim=(-2, -1))

B, C, H, W = x_freq.shape
mask = torch.ones((B, C, H, W)).cuda()
crow, ccol = H // 2, W //2
mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
x_freq = x_freq * mask
# IFFT
x_freq = fft.ifftshift(x_freq, dim=(-2, -1))
x_filtered = fft.ifftn(x_freq, dim=(-2, -1)).real
return x_filtered


class Free_UNetModel(UNetModel):
"""
:param b1: backbone factor of the first stage block of decoder.
:param b2: backbone factor of the second stage block of decoder.
:param s1: skip factor of the first stage block of decoder.
:param s2: skip factor of the second stage block of decoder.
""" def __init__(
self,
b1,
b2,
s1,
s2,
*args,
**kwargs
):
super().__init__(*args, **kwargs)
self.b1 = b1
self.b2 = b2
self.s1 = s1
self.s2 = s2
def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
"""
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:param context: conditioning plugged in via crossattn
:param y: an [N] Tensor of labels, if class-conditional.
:return: an [N x C x ...] Tensor of outputs.
"""
assert (y is not None) == (
self.num_classes is not None
), "must specify y if and only if the model is class-conditional"
hs = []
t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)
emb = self.time_embed(t_emb)
if self.num_classes is not None:
assert y.shape[0] == x.shape[0]
emb = emb + self.label_emb(y)
h = x.type(self.dtype)
for module in self.input_blocks:
h = module(h, emb, context)
hs.append(h)
h = self.middle_block(h, emb, context)
for module in self.output_blocks:
hs_ = hs.pop()
# --------------- FreeU code -----------------------
# Only operate on the first two stages
if h.shape[1] == 1280:
hidden_mean = h.mean(1).unsqueeze(1)
B = hidden_mean.shape[0]
hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
h[:,:640] = h[:,:640] * ((self.b1 - 1 ) * hidden_mean + 1)
hs_ = Fourier_filter(hs_, threshold=1, scale=self.s1)
if h.shape[1] == 640:
hidden_mean = h.mean(1).unsqueeze(1)
B = hidden_mean.shape[0]
hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
h[:,:320] = h[:,:320] * ((self.b2 - 1 ) * hidden_mean + 1)
hs_ = Fourier_filter(hs_, threshold=1, scale=self.s2)
# ---------------------------------------------------------
h = th.cat([h, hs_], dim=1)
h = module(h, emb, context)
h = h.type(x.dtype)
if self.predict_codebook_ids:
return self.id_predictor(h)
else:
return self.out(h)
```

Parameters

Feel free to adjust these parameters based on your models, image/video style, or tasks. The following parameters are for reference only.

- SD1.4: (will be updated soon) b1: 1.3, b2: 1.4, s1: 0.9, s2: 0.2
- SD1.5: (will be updated soon) b1: 1.5, b2: 1.6, s1: 0.9, s2: 0.2
- SD2.1: ~~b1: 1.1, b2: 1.2, s1: 0.9, s2: 0.2~~ b1: 1.4, b2: 1.6, s1: 0.9, s2: 0.2
- SDXL: b1: 1.3, b2: 1.4, s1: 0.9, s2: 0.2 (SDXL results)

Range for More Parameters

When trying additional parameters, consider the following ranges (a diffusers usage sketch follows this list):
- b1: 1 ≤ b1 ≤ 1.2
- b2: 1.2 ≤ b2 ≤ 1.6
- s1: s1 ≤ 1
- s2: s2 ≤ 1
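If you are using the diffusers library rather than the original ldm code above, recent diffusers releases expose these four factors through the pipeline's `enable_freeu` method. A minimal sketch (the SDXL values above are reused; the model id is only an example):
```python
import torch
from diffusers import DiffusionPipeline

# Example model id; any Stable Diffusion / SDXL pipeline with FreeU support works the same way.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Register the FreeU backbone/skip scaling factors (SDXL values from the list above).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)
image = pipe("an astronaut riding a horse").images[0]

# Call disable_freeu() to return to the vanilla UNet.
pipe.disable_freeu()
```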
Results from the community

If you tried FreeU and want to share your results, let me know and we can put up the link here. SDXL from Nasir Khalid comfyUI from Abraham SD2.1 from Justin DuJardin SDXL from Sebastian SDXL from tintwotin ComfyUI-FreeU (YouTube) ComfyUI-FreeU (Chinese) Rerender Collaborative-Diffusion

BibTeX

@inproceedings{si2023freeu,
title={FreeU: Free Lunch in Diffusion U-Net},
author={Si, Chenyang and Huang, Ziqi and Jiang, Yuming and Liu, Ziwei},
booktitle={CVPR},
year={2024}
} :newspaper_roll: License Distributed under the MIT License. See LICENSE for more information.;FreeU: Free Lunch in Diffusion U-Net (CVPR2024 Oral);[] | ChenyangSi/FreeU |
Nutlope/notesGPT;notesGPT Generate action items from your notes in seconds. Powered by Convex, Together.ai, and Whisper. Tech Stack · Deploy Your Own · Future Tasks Tech Stack Convex for the database and cloud functions Next.js App Router for the framework Together Inference for the LLM (Mixtral) Together Embeddings for the embeddings for search Convex File Storage for storing voice notes Convex Vector search for vector search Replicate for Whisper transcriptions Clerk for user authentication Tailwind CSS for styling Deploy Your Own You can deploy this template by setting up the following services and adding their environment variables: Run npm install to install dependencies. Run npm run dev . It will prompt you to log into Convex and create a project. It will then ask you to supply the CLERK_ISSUER_URL . To do this: Make a Clerk account. Copy both the CLERK_SECRET_KEY and NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY API keys into .env.local . Do steps 1-3 here and copy the Issuer URL.
It should look something like https://some-animal-123.clerk.accounts.dev . Add CLERK_ISSUER_URL to your Convex Environment Variables (deep link also available in your terminal). Paste the Issuer URL as the value and click "Save". Now your frontend and backend should be running and you should be able to log in but not record. Make a Together account to get your API key . Make a Replicate account to get your API key . Save your environment variables in Convex as REPLICATE_API_KEY and TOGETHER_API_KEY . Future tasks [ ] Keep recording for future playback and display it on the page somewhere [ ] Animate the purple microphone to be in sync with your voice [ ] Store completed action items for the future instead of fully deleting them [ ] Make text/images in the landing page smaller to account for multiple screen sizes. [ ] Make the search experience a little smoother overall by searching automatically on entering text [ ] Be able to have this as a PWA if there's an easy step to do that [ ] Prompt engineer the summary a little more to be more useful than what's currently displaying [ ] Add a Notion integration to be able to get the transcript + summary + action items on there [ ] UI updates to make it look a little nicer based on Youssef's redesign [ ] Be able to edit action items after the fact and set a due date for them [ ] Account for layout shift on the dashboard page when refreshing – show a loading state on content? [ ] Make action items animate out + make checkbox rounded + add a little check icon on hover [ ] Migrate to incredibly fast whisper;Record voice notes & transcribe, summarize, and get tasks;[] | Nutlope/notesGPT |
kepano/flexoki;Flexoki is an inky color scheme for prose and code. Flexoki is designed for reading and writing on digital screens. It is inspired by analog printing inks and warm shades of paper. Learn more at stephango.com/flexoki Syntax highlighting Ports Flexoki is available for the following apps and tools. Apps Alacritty by @willtheodore Discord by @thatoddshade using Discord Recolor Emacs by @crmsnbleyd Firefox by @ddogfoodd fish by @Orest58008 IntelliJ by @annoyingmouse iTerm2 by @techvlad and @pingiun Kitty by @peterjbachman Lite XL by @chambored macOS Terminal by @getninjaN Neovim by @stevedylandev and @dinocosta Obsidian and part of Minimal theme by @kepano Slack by @maxobat Standard Notes by @myreli Sublime Text by @kepano Tape by @TheLastZombie Tealdeer by @Orest58008 Telegram by @Orest58008 tmux by @tklai Ulysses by @jasonekratz Visual Studio Code by @Railly Warp by @tplesnar Waybar by @Orest58008 WezTerm by @jbromley Windows Terminal by @joukevandermaas Xresources by @dmb2 System themes gtk by @hydroakri qt5ct/qt6ct by @hydroakri Frameworks Shadcn by @phenomen Tailwind by @martin-mael theme.sh by @TheLastZombie pywal by @hydroakri Other Figma by @kepano GIMP palette by @xTibor Contributing Flexoki is MIT licensed. You are free to port Flexoki to any app. Please include attribution and a link to stephango.com/flexoki . Add a link to your port in the list above via pull request. Screenshots Sublime Text VS Code Colors Please see documentation for how to apply these colors. Base tones | Value | Hex | RGB |
| ----- | --------- | --------------- |
| black | #100F0F | 16, 15, 15 |
| 950 | #1C1B1A | 28, 27, 26 |
| 900 | #282726 | 40, 39, 38 |
| 850 | #343331 | 52, 51, 49 |
| 800 | #403E3C | 64, 62, 60 |
| 700 | #575653 | 87, 86, 83 |
| 600 | #6F6E69 | 111, 110, 105 |
| 500 | #878580 | 135, 133, 128 |
| 300 | #B7B5AC | 183, 181, 172 |
| 200 | #CECDC3 | 206, 205, 195 |
| 150 | #DAD8CE | 218, 216, 206 |
| 100 | #E6E4D9 | 230, 228, 217 |
| 50 | #F2F0E5 | 242, 240, 229 |
| paper | #FFFCF0 | 255, 252, 240 |

Dark colors

| Color | Hex | RGB |
| ------- | --------- | -------------- |
| red | #AF3029 | 175, 48, 41 |
| orange | #BC5215 | 188, 82, 21 |
| yellow | #AD8301 | 173, 131, 1 |
| green | #66800B | 102, 128, 11 |
| cyan | #24837B | 36, 131, 123 |
| blue | #205EA6 | 32, 94, 166 |
| purple | #5E409D | 94, 64, 157 |
| magenta | #A02F6F | 160, 47, 111 |

Light colors

| Color | Hex | RGB |
| ------- | --------- | --------------- |
| red | #D14D41 | 209, 77, 65 |
| orange | #DA702C | 218, 112, 44 |
| yellow | #D0A215 | 208, 162, 21 |
| green | #879A39 | 135, 154, 57 |
| cyan | #3AA99F | 58, 169, 159 |
| blue | #4385BE | 67, 133, 190 |
| purple | #8B7EC8 | 139, 126, 200 |
| magenta | #CE5D97 | 206, 93, 151 |;An inky color scheme for prose and code.;color-scheme,color,colors,iterm2,iterm2-color-scheme,neovim-colorscheme,theme,vscode-theme,wezterm-colorscheme,kitty | kepano/flexoki |
SUDO-AI-3D/zero123plus;Zero123++: A Single Image to Consistent Multi-view Diffusion Base Model [Report] [Official Demo] [Demo by @yvrjsharma] [Google Colab] [Replicate demo] UPDATES v1.2 We are thrilled to release Zero123++ v1.2! Main changes: Camera intrinsics are handled more delibrately. The v1.2 model is more robust to a wider range of input field of views, croppings and unifies the output field of view to 30° to better reflect that of realistic close-up views. The fixed set of elevations are changed from 30° and -20° to 20° and -10° . In contrast with novel-view synthesis, the model focuses more for 3D generation. The model always outputs a set of views assuming a normalized object size instead of changing w.r.t. the input. Additionally, we have a normal generator ControlNet that can generate view-space normal images. The output can also be used to obtain a more accurate mask than the SAM-based approach. Validation metrics on our validation set from Objaverse: alpha (before matting) IoU 98.81%, mean normal angular error 10.75°, normal PSNR 26.93 dB. Usage Use of the v1.2 base model is unchanged. Please see the sections below for usage. Use of the normal generator: See examples/normal_gen.py . For alpha mask generation from the normal images, please see examples/matting_postprocess.py and examples/normal_gen.py . License The code is released under Apache 2.0 and the model weights are released under CC-BY-NC 4.0. This means that you cannot use the model (or its derivatives) in a commercial product pipeline, but you can still use the outputs from the model freely. And, you are accountable for the output you generate and its subsequent uses. Get Started You will need torch (recommended 2.0 or higher), diffusers (recommended 0.20.2 ), and transformers to start. If you are using torch 1.x , it is recommended to install xformers to compute attention in the model efficiently. The code also runs on older versions of diffusers , but you may see a decrease in model performance. And you are all set! We provide a custom pipeline for diffusers , so no extra code is required. To generate multi-view images from a single input image, you can run the following code (also see examples/img_to_mv.py ): ```python
import torch
import requests
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

# Load the pipeline
pipeline = DiffusionPipeline.from_pretrained(
"sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline",
torch_dtype=torch.float16
)

# Feel free to tune the scheduler!
# `timestep_spacing` is not supported in older versions of diffusers,
# so there may be performance degradations; we recommend using diffusers==0.20.2.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
pipeline.scheduler.config, timestep_spacing='trailing'
)
pipeline.to('cuda:0')

# Download an example image.
cond = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/lysol.png", stream=True).raw)

# Run the pipeline!
# For general real and synthetic images of general objects, around 28 inference steps is usually enough.
# For images with delicate details like faces (real or anime), you may need 75-100 steps for the details to construct.
result = pipeline(cond, num_inference_steps=75).images[0]
result.show()
result.save("output.png")
```
The above example requires ~5GB VRAM to run.
The input image needs to be square, and the recommended image resolution is >=320x320 . By default, Zero123++ generates opaque images with a gray background (the zero for Stable Diffusion VAE).
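Since the pipeline expects a square input, a small preprocessing sketch may help; the helper below is hypothetical (not part of the official pipeline) and simply pads the image onto a square gray canvas before resizing.
```python
from PIL import Image

def pad_to_square(im: Image.Image, fill=(128, 128, 128), size=320) -> Image.Image:
    """Pad the shorter side with a gray background, then resize to `size` x `size`."""
    side = max(im.size)
    canvas = Image.new("RGB", (side, side), fill)
    canvas.paste(im.convert("RGB"), ((side - im.width) // 2, (side - im.height) // 2))
    return canvas.resize((size, size), Image.LANCZOS)

# `cond` as loaded in the example above
cond = pad_to_square(cond)
```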
You may run an extra background removal pass like rembg to remove the gray background.
```python
# !pip install rembg
import rembg
result = rembg.remove(result)
result.show()
```
To run the depth ControlNet, you can use the following example (also see examples/depth_controlnet.py):
```python
import torch
import requests
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler, ControlNetModel

# Load the pipeline
pipeline = DiffusionPipeline.from_pretrained(
"sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline",
torch_dtype=torch.float16
)
pipeline.add_controlnet(ControlNetModel.from_pretrained(
"sudo-ai/controlnet-zp11-depth-v1", torch_dtype=torch.float16
), conditioning_scale=0.75)

# Feel free to tune the scheduler
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
pipeline.scheduler.config, timestep_spacing='trailing'
)
pipeline.to('cuda:0')

# Run the pipeline
cond = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/0_cond.png", stream=True).raw)
depth = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/0_depth.png", stream=True).raw)
result = pipeline(cond, depth_image=depth, num_inference_steps=36).images[0]
result.show()
result.save("output.png")
``` This example requires ~5.7GB VRAM to run. Models The models are available at https://huggingface.co/sudo-ai : sudo-ai/zero123plus-v1.1 , base Zero123++ model release (v1.1). sudo-ai/controlnet-zp11-depth-v1 , depth ControlNet checkpoint release (v1) for Zero123++ (v1.1). sudo-ai/zero123plus-v1.2 , base Zero123++ model release (v1.2). sudo-ai/controlnet-zp12-normal-gen-v1 , normal generation ControlNet checkpoint release (v1) for Zero123++ (v1.2). The source code for the diffusers custom pipeline is available in the diffusers-support directory. Camera Parameters Output views are a fixed set of camera poses: Azimuth (relative to input view): 30, 90, 150, 210, 270, 330 . v1.1 Elevation (absolute): 30, -20, 30, -20, 30, -20 . v1.2 Elevation (absolute): 20, -10, 20, -10, 20, -10 . v1.2 Field of View (absolute): 30° . Running Demo Locally You will need to install extra dependencies: pip install -r requirements.txt Then run streamlit run app.py . For Gradio Demo, you can run python gradio_app.py . Related Work [One-2-3-45] [One-2-3-45++] [Zero123] Citation If you found Zero123++ helpful, please cite our report: bibtex
@misc{shi2023zero123plus,
title={Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model},
author={Ruoxi Shi and Hansheng Chen and Zhuoyang Zhang and Minghua Liu and Chao Xu and Xinyue Wei and Linghao Chen and Chong Zeng and Hao Su},
year={2023},
eprint={2310.15110},
archivePrefix={arXiv},
primaryClass={cs.CV}
};Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.;3d,3d-graphics,aigc,diffusers,diffusion-models,image-to-3d,research-project,text-to-3d | SUDO-AI-3D/zero123plus |
kadirnar/whisper-plus;WhisperPlus: Faster, Smarter, and More Capable 🚀 🛠️ Installation bash
pip install whisperplus git+https://github.com/huggingface/transformers
pip install flash-attn --no-build-isolation

🤗 Model Hub

You can find the models on the HuggingFace Model Hub

🎙️ Usage

To use the whisperplus library, follow the steps below for different tasks:

🎵 Youtube URL to Audio

```python
from whisperplus import SpeechToTextPipeline, download_youtube_to_mp3
from transformers import BitsAndBytesConfig, HqqConfig
import torch

url = "https://www.youtube.com/watch?v=di3rHkEZuUw"
audio_path = download_youtube_to_mp3(url, output_dir="downloads", filename="test")

hqq_config = HqqConfig(
nbits=4,
group_size=64,
quant_zero=False,
quant_scale=False,
axis=0,
offload_meta=False,
)  # axis=0 is used by default

bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
)

pipeline = SpeechToTextPipeline(
model_id="distil-whisper/distil-large-v3",
quant_config=hqq_config,
flash_attention_2=True,
)

transcript = pipeline(
audio_path=audio_path,
chunk_length_s=30,
stride_length_s=5,
max_new_tokens=128,
batch_size=100,
language="english",
return_timestamps=False,
)

print(transcript)
```

🍎 Apple MLX

```python
from whisperplus.pipelines import mlx_whisper
from whisperplus import download_youtube_to_mp3

url = "https://www.youtube.com/watch?v=1__CAdTJ5JU"
audio_path = download_youtube_to_mp3(url)

text = mlx_whisper.transcribe(
audio_path, path_or_hf_repo="mlx-community/whisper-large-v3-mlx"
)["text"]
print(text)
```

🍏 Lightning Mlx Whisper

```python
from whisperplus.pipelines.lightning_whisper_mlx import LightningWhisperMLX
from whisperplus import download_youtube_to_mp3

url = "https://www.youtube.com/watch?v=1__CAdTJ5JU"
audio_path = download_youtube_to_mp3(url)

whisper = LightningWhisperMLX(model="distil-large-v3", batch_size=12, quant=None)
output = whisper.transcribe(audio_path=audio_path)["text"]
```

📰 Summarization

```python
from whisperplus.pipelines.summarization import TextSummarizationPipeline

summarizer = TextSummarizationPipeline(model_id="facebook/bart-large-cnn")
summary = summarizer.summarize(transcript)
print(summary[0]["summary_text"])
```

📰 Long Text Support Summarization

```python
from whisperplus.pipelines.long_text_summarization import LongTextSummarizationPipeline

summarizer = LongTextSummarizationPipeline(model_id="facebook/bart-large-cnn")
summary_text = summarizer.summarize(transcript)
print(summary_text)
```

💬 Speaker Diarization

You must confirm the licensing permissions of these two models:
- https://huggingface.co/pyannote/speaker-diarization-3.1
- https://huggingface.co/pyannote/segmentation-3.0

bash
pip install -r speaker_diarization.txt
pip install -U "huggingface_hub[cli]"
huggingface-cli login

```python
from whisperplus.pipelines.whisper_diarize import ASRDiarizationPipeline
from whisperplus import download_youtube_to_mp3, format_speech_to_dialogue

audio_path = download_youtube_to_mp3("https://www.youtube.com/watch?v=mRB14sFHw2E")

device = "cuda"  # cpu or mps
pipeline = ASRDiarizationPipeline.from_pretrained(
asr_model="openai/whisper-large-v3",
diarizer_model="pyannote/speaker-diarization-3.1",
use_auth_token=False,
chunk_length_s=30,
device=device,
)

output_text = pipeline(audio_path, num_speakers=2, min_speaker=1, max_speaker=2)
dialogue = format_speech_to_dialogue(output_text)
print(dialogue)
```

⭐ RAG - Chat with Video(LanceDB)

bash
pip install sentence-transformers ctransformers langchain

```python
from whisperplus.pipelines.chatbot import ChatWithVideo

chat = ChatWithVideo(
input_file="trascript.txt",
llm_model_name="TheBloke/Mistral-7B-v0.1-GGUF",
llm_model_file="mistral-7b-v0.1.Q4_K_M.gguf",
llm_model_type="mistral",
embedding_model_name="sentence-transformers/all-MiniLM-L6-v2",
)

query = "what is this video about ?"
response = chat.run_query(query)
print(response)
```

🌠 RAG - Chat with Video(AutoLLM)

bash
pip install autollm>=0.1.9

```python
from whisperplus.pipelines.autollm_chatbot import AutoLLMChatWithVideo

# service_context_params
system_prompt = """
You are an friendly ai assistant that help users find the most relevant and accurate answers
to their questions based on the documents you have access to.
When answering the questions, mostly rely on the info in documents.
"""
query_wrapper_prompt = """
The document information is below. {context_str} Using the document information and mostly relying on it,
answer the query.
Query: {query_str}
Answer:
""" chat = AutoLLMChatWithVideo(
input_file="input_dir", # path of mp3 file
openai_key="YOUR_OPENAI_KEY", # optional
huggingface_key="YOUR_HUGGINGFACE_KEY", # optional
llm_model="gpt-3.5-turbo",
llm_max_tokens="256",
llm_temperature="0.1",
system_prompt=system_prompt,
query_wrapper_prompt=query_wrapper_prompt,
embed_model="huggingface/BAAI/bge-large-zh", # "text-embedding-ada-002"
)

query = "what is this video about ?"
response = chat.run_query(query)
print(response)
```

🎙️ Text to Speech

```python
from whisperplus.pipelines.text2speech import TextToSpeechPipeline

tts = TextToSpeechPipeline(model_id="suno/bark")
audio = tts(text="Hello World", voice_preset="v2/en_speaker_6")
```

🎥 AutoCaption

bash
pip install moviepy
apt install imagemagick libmagick++-dev
cat /etc/ImageMagick-6/policy.xml | sed 's/none/read,write/g'> /etc/ImageMagick-6/policy.xml

```python
from whisperplus.pipelines.whisper_autocaption import WhisperAutoCaptionPipeline
from whisperplus import download_youtube_to_mp4

video_path = download_youtube_to_mp4(
"https://www.youtube.com/watch?v=di3rHkEZuUw",
output_dir="downloads",
filename="test",
)  # Optional

caption = WhisperAutoCaptionPipeline(model_id="openai/whisper-large-v3")
caption(video_path=video_path, output_path="output.mp4", language="english")
```

😍 Contributing

bash
pip install pre-commit
pre-commit install
pre-commit run --all-files

📜 License

This project is licensed under the terms of the Apache License 2.0.

🤗 Citation

bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
};WhisperPlus: Faster, Smarter, and More Capable 🚀;[] | kadirnar/whisper-plus |
AlanChen4/Summer-2024-SWE-Internships;Summer 2024 Software Engineering Internships by Carbos :star: This board is updated daily using a custom job monitor that watches companies vetted for growth, student engagement, & innovation :pushpin: This is just 10% of the job data on Carbos , a free platform that has features such as
- A live job board with no sponsored job postings
- Custom filters
- Pipeline to recruiters

:heart: Thanks for all the support! Please consider checking out Carbos if you haven't already (it only takes 5 minutes to create an account)

| Company | Investors | Title | Status | Added On |
| ------- | ----------| ----- | ------ |--------- |
| Lucid Motors | Sequoia | Intern, Embedded Software Engineer, Summer 2024 | ✅ | September 14, 2023 |
| Lucid Motors | Sequoia | Intern, Software Automation Engineer, Summer 2024 | ✅ | September 14, 2023 |
| Lucid Motors | Sequoia | Intern, Android Applications Software Engineer – Summer 2024 | ✅ | September 14, 2023 |
| Lucid Motors | Sequoia | Intern, Android Applications Engineer, Summer 2024 | ✅ | September 14, 2023 |
| Lucid Motors | Sequoia | Intern, Systems Engineer, Summer 2024 | ✅ | September 14, 2023 |
| Lucid Motors | Sequoia | Intern, Cloud Engineer, Summer 2024 | ✅ | September 14, 2023 |
| Nvidia | Google Ventures, Sequoia | NVIDIA Ignite Internship, Hardware/Software/Business Operations - Summer 2024 | ✅ | September 14, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ✅ | September 14, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ✅ | September 14, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechatronics Engineering Intern | ✅ | September 14, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ✅ | September 14, 2023 |
| Ercot | Kleiner Perkins, Sequoia | Intern - Electrical Engineering (Baylor University) | ✅ | September 14, 2023 |
| Ercot | Kleiner Perkins, Sequoia | Intern - Electrical Engineering | ✅ | September 14, 2023 |
| BorgWarner | Kleiner Perkins | Industrial Engineering Intern | ✅ | September 14, 2023 |
| BorgWarner | Kleiner Perkins | Component Release Engineering (CRE) Intern | ✅ | September 14, 2023 |
| BlackBerry | | Software Development Student - 8 Month WInter | ✅ | September 14, 2023 |
| Anyscale | Accel, Tiger Global, Sequoia | Software Engineer (Summer Intern 2024) | ✅ | September 14, 2023 |
| Wayfair | Kleiner Perkins | Machine Learning Science Graduate Intern - Summer 2024 | ✅ | September 13, 2023 |
| Epic Games | KKR | Core Systems Engineer Intern | ✅ | September 13, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | September 13, 2023 |
| Schweitzer Engineering Laboratories | | Engineering Intern | ✅ | September 13, 2023 |
| BlackBerry | | Network Automation Developer Student - 8 Month Winter | ✅ | September 13, 2023 |
| Arc Boats | Founders Fund, Kleiner Perkins | Engineering Intern | ✅ | September 13, 2023 |
| Anduril Industries | A16Z, Google Ventures, Kleiner Perkins | Mechanical Engineer Intern- Summer 2024 | ❌ | September 13, 2023 |
| Anduril Industries | A16Z, Google Ventures, Kleiner Perkins | Electrical Systems Engineer Intern- Summer 2024 | ❌ | September 13, 2023 |
| Anduril Industries | A16Z, Google Ventures, Kleiner Perkins | Electrical Engineer Intern- Summer 2024 | ❌ | September 13, 2023 |
| Anduril Industries | A16Z, Google Ventures, Kleiner Perkins | Manufacturing Engineer Intern- Summer 2024 | ❌ | September 13, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Research Science Internship - United States, PhD Student Science Recruiting | ✅ | September 13, 2023 |
| Schonfeld | Kleiner Perkins, Sequoia | 2024 Quantitative Researcher Summer Intern - NYC | ❌ | September 12, 2023 |
| SingleStore | Insight Partners | Software Engineer, Managed Service Intern (Summer 2024) | ✅ | September 12, 2023 |
| SingleStore | Insight Partners | Software Engineer, Managed Service Intern (Summer 2024) | ✅ | September 12, 2023 |
| SingleStore | Insight Partners | Software Engineer, Infrastructure Intern (Summer 2024) | ✅ | September 12, 2023 |
| SingleStore | Insight Partners | Software Engineer, DB Engine Intern (Summer 2024) | ✅ | September 12, 2023 |
| SingleStore | Insight Partners | Software Engineer, DB Engine Intern (Summer 2024) | ✅ | September 12, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | September 12, 2023 |
| BorgWarner | Kleiner Perkins | Engineering Intern – eHW Support | ✅ | September 12, 2023 |
| The Boring Company | Founders Fund, Sequoia | Test Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Software Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Production Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Mechanical Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Electrical Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Electrical Engineering Intern | ✅ | September 11, 2023 |
| The Boring Company | Founders Fund, Sequoia | Civil Engineering Intern | ✅ | September 11, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | September 11, 2023 |
| Reliable Robotics | Google Ventures, Kleiner Perkins, Sequoia | Mechanical Engineer (2024 Internship) | ✅ | September 11, 2023 |
| Walmart | | 2024 Summer Intern: Data Ventures, Data Analysis, Walmart Intern | ❌ | September 10, 2023 |
| Walmart | | 2024 Summer Intern: Data Ventures, Product Design, Walmart Intern | ❌ | September 10, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Software Dev Engineer Intern | ✅ | September 09, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | September 08, 2023 |
| Epic Games | KKR | Full Stack Web Engineer Intern | ✅ | September 08, 2023 |
| Athena Health | Accel, Founders Fund, Google Ventures, Kleiner Perkins | 2024 Summer Software Engineering Intern | ✅ | September 08, 2023 |
| Cboe | | Cyber Detection and Response Engineering Intern (Year-long Internship) | ✅ | September 08, 2023 |
| Whatnot | A16Z, Tiger Global | Software Engineering Intern, Summer 2024 | ✅ | September 08, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Specialist Intern | ✅ | September 07, 2023 |
| Pimco | Kleiner Perkins, Sequoia | 2024 Masters Summer Intern - Quantitative Research Analyst, Portfolio Management, US | ✅ | September 07, 2023 |
| Cboe | | Market Data & Access Services Intern | ✅ | September 07, 2023 |
| Cboe | | Software Engineer Intern | ✅ | September 07, 2023 |
| Cboe | | Software Engineer Intern | ✅ | September 07, 2023 |
| Cboe | | Software Engineer Intern | ✅ | September 07, 2023 |
| Cboe | | Software Engineer Intern Regulatory ML | ❌ | September 07, 2023 |
| Cboe | | Site Reliability Engineer (Cloud) Intern – Regulatory | ✅ | September 07, 2023 |
| Cboe | | Software Engineer Test Intern | ❌ | September 07, 2023 |
| Cboe | | Regulatory Software Engineer QA Intern | ✅ | September 07, 2023 |
| Nvidia | Google Ventures, Sequoia | PhD Research Intern - Visual Localisation | ✅ | September 07, 2023 |
| Nvidia | Google Ventures, Sequoia | PhD Research Intern - Video Understanding | ✅ | September 07, 2023 |
| Humane | Kleiner Perkins, YC | Firmware Engineering Intern | ✅ | September 07, 2023 |
| Virtu Financial | Goldman Sachs | Internship - Quantitative Trading | ✅ | September 07, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Specialist/Technician Intern | ✅ | September 06, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | September 06, 2023 |
| Kla | Kleiner Perkins, Sequoia | SW Engineer Intern | ✅ | September 06, 2023 |
| Kla | Kleiner Perkins, Sequoia | Algorithm Engineering Intern | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | Hubspot Software Engineer Internship & Placement Program (Ireland) | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | Hubspot Software Engineer Internship & Placement Program (Ireland) | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | HubSpot Software Engineer Internship & Co-op Program (USA) | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | HubSpot Software Engineer Internship & Co-op Program (USA) | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | HubSpot Software Engineer Internship & Co-op Program (Canada) | ✅ | September 06, 2023 |
| Gate.io | Sequoia | Market research Intern | ❌ | September 06, 2023 |
| Epic Games | KKR | Data Software Engineer Intern | ✅ | September 06, 2023 |
| Epic Games | KKR | Audio Research Intern | ✅ | September 06, 2023 |
| Commonwealth Fusion Systems | Google Ventures | Mechanical Engineer - Intern/Co-op | ❌ | September 06, 2023 |
| Allegion | | Summer Intern - Mechanical Engineering | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | Hubspot Software Engineer Internship & Placement Program (UK) | ✅ | September 06, 2023 |
| HubSpot | Accel, Google Ventures, Salesforce Ventures | Hubspot Software Engineer Internship & Placement Program (Germany) | ✅ | September 06, 2023 |
| Virtu Financial | Goldman Sachs | Internship - Developer | ✅ | September 06, 2023 |
| Veritas | Kleiner Perkins, Sequoia | Summer 2024- Software Engineering Intern- Undergraduate | ❌ | September 06, 2023 |
| Veritas | Kleiner Perkins, Sequoia | Summer 2024- Software Engineer Intern- Graduate Degree | ❌ | September 06, 2023 |
| Nvidia | Google Ventures, Sequoia | Governance and Infrastructure Full Stack Engineer Intern - Summer 2024 | ❌ | September 05, 2023 |
| MongoDB | Accel, Greylock, Sequoia | 2024 - Software Engineering Intern, Product | ❌ | September 05, 2023 |
| Humane | Kleiner Perkins, YC | Quality Engineering Intern | ✅ | September 05, 2023 |
| Neuralink | Google Ventures | Engineering Intern (Neuroengineering) | ✅ | September 04, 2023 |
| RVO Health | Kleiner Perkins, Sequoia | 2024 Early Career Program: Software Engineering Intern | ✅ | September 01, 2023 |
| RVO Health | Kleiner Perkins, Sequoia | 2024 Early Career Program: Data Analyst Intern | ✅ | September 01, 2023 |
| SeatGeek | Tiger Global | Software Engineer - Internship | ❌ | September 01, 2023 |
| SeatGeek | Tiger Global | Software Engineer - Internship | ❌ | September 01, 2023 |
| Hudl | Accel, Kleiner Perkins, Sequoia | Data Analyst Intern | ✅ | September 01, 2023 |
| Ercot | Kleiner Perkins, Sequoia | Intern - Electrical Engineering (University of Texas) | ✅ | September 01, 2023 |
| Ercot | Kleiner Perkins, Sequoia | Intern - Electrical Engineering (Texas A&M) | ✅ | September 01, 2023 |
| Qualtrics | Accel, Insight Partners, Sequoia | Software Engineering Intern - Summer 2024 - Seattle, WA | ❌ | August 31, 2023 |
| Pimco | Kleiner Perkins, Sequoia | 2024 Masters Summer Intern - Quant Research Analyst - Client Solutions & Analytics, US | ✅ | August 31, 2023 |
| Pimco | Kleiner Perkins, Sequoia | 2024 Masters Summer Intern – Quant Research Analyst - Client Data Science, US | ✅ | August 31, 2023 |
| Humane | Kleiner Perkins, YC | Software Engineering Intern, Product Security | ✅ | August 31, 2023 |
| Humane | Kleiner Perkins, YC | Hardware Engineering Intern, Radio Frequency (RF) | ✅ | August 31, 2023 |
| Bridgewater Associates | | Investment Engineer Intern - Summer 2024 | ✅ | August 31, 2023 |
| Wayfair | Kleiner Perkins | Software Engineering Internship - Summer 2024 | ✅ | August 31, 2023 |
| Allegion | | Summer Intern - Engineering | ✅ | August 31, 2023 |
| Allegion | | Summer Intern – Mechanical Engineer | ✅ | August 31, 2023 |
| Tanium | Goldman Sachs, Tiger Global, Sequoia | Software Engineer Intern | ✅ | August 30, 2023 |
| Tanium | Goldman Sachs, Tiger Global, Sequoia | Business Data & Analytics Intern | ❌ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Power Engineering Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Power Engineering Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Embedded Software Engineer Intern | ✅ | August 30, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ✅ | August 30, 2023 |
| Humane | Kleiner Perkins, YC | Software Engineering Intern, Device Experiences | ✅ | August 30, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Data Science Internship - United States, Student Science Recruiting USA, WA, Seattle J... | ✅ | August 30, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Research Science Internship - United States, Masters Student Science Recruiting USA, WA,... | ✅ | August 30, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Applied Science Internship - Computer Vision and Robotics - United States, PhD Student S... | ✅ | August 30, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Applied Science Internship - Machine Learning - United States, PhD Student Science Recru... | ✅ | August 30, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | 2024 Summer Research Science Internship - United States, PhD Student Science Recruiting USA, WA, Sea... | ✅ | August 30, 2023 |
| Allegion | | Summer Intern - Process Engineering | ✅ | August 30, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Software Engineer Intern (AI, ML & Computer Vision) | ✅ | August 29, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ✅ | August 29, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ✅ | August 29, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Specialist Intern | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern, Software Quality Assurance, Data Analyst, Summer 2024 | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern Charging System Validation Engineer, Summer 2024 | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern, Automotive Software Test and Verification – Summer 2024 | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern, Test, Model Based Design, Controls Software Engineer, Summer 2024 | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern, Cloud Microservices Software Engineer, Summer 2024 | ✅ | August 29, 2023 |
| Lucid Motors | Sequoia | Intern, ADAS Perception Algorithm Engineer, Summer 2024 | ✅ | August 29, 2023 |
| Humane | Kleiner Perkins, YC | System Software Engineering Intern | ✅ | August 29, 2023 |
| Humane | Kleiner Perkins, YC | Computer Vision Engineering Intern | ✅ | August 29, 2023 |
| Duolingo | Google Ventures, Kleiner Perkins | Software Engineer, Intern | ❌ | August 29, 2023 |
| Yext | Google Ventures, Kleiner Perkins | 2024 Software Engineer Summer Intern - Product | ✅ | August 28, 2023 |
| Yext | Google Ventures, Kleiner Perkins | 2024 Software Engineer Summer Intern - Product | ✅ | August 28, 2023 |
| Epic Games | KKR | Software Engineer Intern (Gameplay Programming) | ✅ | August 28, 2023 |
| Epic Games | KKR | Software Engineer Intern (Physics Programming) | ✅ | August 28, 2023 |
| Epic Games | KKR | Software Engineer Intern (UI Programming) | ✅ | August 28, 2023 |
| Duolingo | Google Ventures, Kleiner Perkins | Software Engineer Intern | ✅ | August 28, 2023 |
| Nvidia | Google Ventures, Sequoia | Governance Analytics and Process Engineering Intern - Summer 2024 | ❌ | August 25, 2023 |
| Commonwealth Fusion Systems | Google Ventures | Design Engineer - Intern/Co-op | ❌ | August 25, 2023 |
| Hudl | Accel, Kleiner Perkins, Sequoia | Software Quality Analyst Intern | ✅ | August 25, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | (Intern Conversion) Associate Data Scientist | ❌ | August 25, 2023 |
| Qualtrics | Accel, Insight Partners, Sequoia | Software Engineering Intern - Summer 2024 - Reston, Virginia | ❌ | August 24, 2023 |
| Qualtrics | Accel, Insight Partners, Sequoia | Software Engineering Intern - Summer 2024 - Provo | ❌ | August 24, 2023 |
| Qualtrics | Accel, Insight Partners, Sequoia | Software Engineering Intern - Summer 2024 | ❌ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Electrical Engineering (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Software Engineer 2, Front End (Intern Conversion) | ❌ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Data Visualization (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, myQ Front End Engineer (Android or iOS) (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Engineering - Test Automation (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Systems Engineering (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, ECE Engineering (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Data Engineering – Advanced Analytics (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, myQ Front End Developer (Android or iOS) (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Electrical Hardware Engineering (Summer 2024) | ✅ | August 24, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Data Science – Advanced Analytics (Summer 2024) | ✅ | August 24, 2023 |
| Anyscale | Accel, Tiger Global, Sequoia | Software Engineer (Intern 2024) | ✅ | August 24, 2023 |
| Allegion | | Summer Intern - Process Engineer | ✅ | August 24, 2023 |
| Allegion | | Spring Intern - Chemical/Industrial Engineer - Blue Ash, OH | ✅ | August 24, 2023 |
| Notion | Tiger Global, Sequoia | Software Engineering Intern, Summer 2024 (via RippleMatch) | ✅ | August 23, 2023 |
| Epic Games | KKR | Tools and Workflow Engineer Intern | ✅ | August 23, 2023 |
| Virtu Financial | Goldman Sachs | Software Developer Intern | ✅ | August 23, 2023 |
| Nvidia | Google Ventures, Sequoia | Software Program Management Intern, Automotive - Summer 2024 | ❌ | August 23, 2023 |
| Teledyne | A16Z, Sequoia | Computer Engineering Intern | ✅ | August 22, 2023 |
| Scale AI | A16Z, Google Ventures, Kleiner Perkins | Software Engineering Intern | ✅ | August 22, 2023 |
| Scale AI | A16Z, Google Ventures, Kleiner Perkins | Machine Learning Research Engineering Intern | ✅ | August 22, 2023 |
| Amadeus | Kleiner Perkins, Sequoia | Software Development Intern | ✅ | August 22, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ❌ | August 22, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ❌ | August 22, 2023 |
| Schweitzer Engineering Laboratories | | Network Engineering Intern | ✅ | August 22, 2023 |
| Hudl | Accel, Kleiner Perkins, Sequoia | Software Engineer Intern | ✅ | August 22, 2023 |
| Allegion | | Summer Intern - Test Engineer | ✅ | August 22, 2023 |
| Allegion | | Summer Intern - Mechanical Engineering | ✅ | August 22, 2023 |
| Allegion | | Summer Intern - Electrical Engineer | ✅ | August 22, 2023 |
| The Federal Reserve | A16Z | Spring 2024 Operations Intern: CBDC Research | ✅ | August 17, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Machine Learning, Computer Vision, and AI (Summer 2024) | ✅ | August 17, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Firmware Engineering (Summer 2024) | ✅ | August 17, 2023 |
| BorgWarner | Kleiner Perkins | Global Advanced Manufacturing Engineering Support Professional Internship (m/f/d) | ✅ | August 17, 2023 |
| Schweitzer Engineering Laboratories | | Application Engineering Intern | ❌ | August 17, 2023 |
| Schweitzer Engineering Laboratories | | Hardware Engineering Intern | ✅ | August 17, 2023 |
| Allegion | | Summer Intern - Product Data | ✅ | August 17, 2023 |
| Allegion | | Summer Intern - Quality Engineering | ❌ | August 17, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Software Engineering Intern, Security - Summer 2024 | ✅ | August 16, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ❌ | August 16, 2023 |
| Allegion | | Summer Intern - Mechanical Engineer | ✅ | August 16, 2023 |
| Allegion | | Summer Intern - Design Engineer | ✅ | August 16, 2023 |
| Allegion | | Summer Intern – Hardware Engineer (Connectivity and Sensing) | ✅ | August 16, 2023 |
| Allegion | | Summer Intern - Hardware Engineer | ✅ | August 16, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ❌ | August 15, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ✅ | August 15, 2023 |
| Kla | Kleiner Perkins, Sequoia | Electrical Engineering Intern | ❌ | August 15, 2023 |
| Kla | Kleiner Perkins, Sequoia | Algorithm Engineering Intern | ❌ | August 15, 2023 |
| Amazon | Google Ventures, Kleiner Perkins | Software Development Engineer Internship - 2024 (US) | ✅ | August 15, 2023 |
| Allegion | | Summer Intern - Mechanical Engineer | ✅ | August 15, 2023 |
| Allegion | | Summer Intern – Product Cybersecurity Engineer | ❌ | August 15, 2023 |
| Allegion | | Summer Intern - Mechanical Engineer | ✅ | August 15, 2023 |
| Allegion | | Summer Intern – Data Migration Specialist | ❌ | August 15, 2023 |
| Schweitzer Engineering Laboratories | | Test Engineering Intern | ✅ | August 14, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ❌ | August 14, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Specialist Intern | ✅ | August 14, 2023 |
| Schweitzer Engineering Laboratories | | Application Engineering Intern | ✅ | August 14, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Software Engineering Intern, Frontend - Summer 2024 | ✅ | August 12, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Software Engineering Intern, Embedded - Summer 2024 | ✅ | August 12, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Software Engineering Intern, Backend - Summer 2024 | ✅ | August 12, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | iOS Engineering Intern - Summer 2024 | ✅ | August 12, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Android Engineering Intern - Summer 2024 | ✅ | August 12, 2023 |
| The Federal Reserve | A16Z | Graduate Internship in Data Science | ❌ | August 12, 2023 |
| The Federal Reserve | A16Z | Graduate Internship in Data Science | ✅ | August 12, 2023 |
| Wayfair | Kleiner Perkins | Machine Learning Science – Internship/Working Student in BERLIN (m/f/x) | ✅ | August 11, 2023 |
| Allegion | | Summer Intern – Firmware Engineer (Connectivity and Sensing) | ❌ | August 11, 2023 |
| Allegion | | Summer Intern – Design Researcher | ✅ | August 11, 2023 |
| Nvidia | Google Ventures, Sequoia | NVIDIA 2024 Internships: Artificial Intelligence Engineering Intern | ✅ | August 10, 2023 |
| Nvidia | Google Ventures, Sequoia | NVIDIA 2024 Internships: Software Engineering Intern | ✅ | August 10, 2023 |
| Nvidia | Google Ventures, Sequoia | NVIDIA 2024 Internships: Systems Software Engineering Intern | ✅ | August 10, 2023 |
| Schweitzer Engineering Laboratories | | Software Engineer Intern | ✅ | August 10, 2023 |
| Pimco | Kleiner Perkins, Sequoia | 2024 Summer Intern - Technology Analyst, Software Engineering, US | ✅ | August 10, 2023 |
| Mitek Systems | Bain, Goldman Sachs, Kleiner Perkins | Machine Learning Researcher ID R&D (internship) | ✅ | August 08, 2023 |
| Belvedere Trading | Sequoia | Junior Quantitative Trading Analyst Intern - Summer 2024 Chicago | ✅ | August 05, 2023 |
| Belvedere Trading | Sequoia | Junior Quantitative Trading Analyst Intern - Summer 2024 Boulder | ❌ | August 05, 2023 |
| Belvedere Trading | Sequoia | Software Engineer Intern - Summer 2024 Chicago | ✅ | August 05, 2023 |
| Belvedere Trading | Sequoia | Software Engineer Intern - Summer 2024 Boulder | ✅ | August 05, 2023 |
| Belvedere Trading | Sequoia | Quantitative Researcher Intern - Summer 2024 | ✅ | August 05, 2023 |
| BorgWarner | Kleiner Perkins | Quality Engineering Intern | ✅ | August 04, 2023 |
| Virtu Financial | Goldman Sachs | Internship - Quantitative Trading | ✅ | August 03, 2023 |
| Virtu Financial | Goldman Sachs | Internship - Quantitative Strategist | ✅ | August 03, 2023 |
| Virtu Financial | Goldman Sachs | Internship - Developer | ✅ | August 03, 2023 |
| Nvidia | Google Ventures, Sequoia | Research Intern | ✅ | August 03, 2023 |
| Vanguard | Google Ventures, Kleiner Perkins, Sequoia | College to Corporate IT Internship - Risk & Security - Engineer (PA) | ✅ | August 02, 2023 |
| Vanguard | Google Ventures, Kleiner Perkins, Sequoia | College to Corporate IT Internship - Risk & Security - Engineer (NC) | ✅ | August 02, 2023 |
| Kla | Kleiner Perkins, Sequoia | Intern - Software | ❌ | August 02, 2023 |
| Kla | Kleiner Perkins, Sequoia | Applications Engineering Intern | ✅ | August 02, 2023 |
| Arc Boats | Founders Fund, Kleiner Perkins | Engineering Intern | ✅ | July 30, 2023 |
| Vanguard | Google Ventures, Kleiner Perkins, Sequoia | College to Corporate Internship - Data Analyst Advanced Development Program | ✅ | July 28, 2023 |
| Second Order Effects | Sequoia | Mechanical Engineering Intern (Spring 2024) | ✅ | July 28, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (Summer 2024) | ✅ | July 28, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (Summer 2024) | ✅ | July 28, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (Spring 2024) | ✅ | July 28, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (Spring 2024) | ✅ | July 28, 2023 |
| Five Rings | Kleiner Perkins, Sequoia | Software Developer – Summer Intern 2024 | ✅ | July 27, 2023 |
| Schweitzer Engineering Laboratories | | Engineering Intern | ❌ | July 25, 2023 |
| BorgWarner | Kleiner Perkins | Materials & Brazing - Global Advanced Manufacturing Engineering Professional Internship (m/f/d) | ✅ | July 25, 2023 |
| Teledyne | A16Z, Sequoia | Engineering Intern | ❌ | July 21, 2023 |
| Teledyne | A16Z, Sequoia | Engineering Intern | ❌ | July 21, 2023 |
| Wayfair | Kleiner Perkins | Machine Learning Science – Internship/Working Student (m/f/x) | ❌ | July 21, 2023 |
| Waabi Innovation Inc. | | Research Internship/Co-op | ✅ | July 21, 2023 |
| Vanguard | Google Ventures, Kleiner Perkins, Sequoia | College to Corporate Internship - Data Analyst Advanced Development Program | ❌ | July 21, 2023 |
| Vanguard | Google Ventures, Kleiner Perkins, Sequoia | Data Scientist Advanced Development Program Internship | ✅ | July 21, 2023 |
| Unity Technologies | Sequoia | Visiting Rendering Researcher / Intern | ❌ | July 21, 2023 |
| Trimble | Tiger Global | Computational Design Engineering Intern | ✅ | July 21, 2023 |
| Teledyne | A16Z, Sequoia | Engineering Intern | ❌ | July 21, 2023 |
| Squarepoint Capital | Tiger Global, Sequoia | Intern Quant Researcher | ✅ | July 21, 2023 |
| Second Order Effects | Sequoia | Systems Engineering Intern (August - December) | ❌ | July 21, 2023 |
| Second Order Effects | Sequoia | Systems Integration/Test Engineering Intern (August - December) | ❌ | July 21, 2023 |
| Schweitzer Engineering Laboratories | | Test Engineering Intern | ❌ | July 21, 2023 |
| PDT Partners | | Summer 2024 Software Engineering Internship | ✅ | July 21, 2023 |
| Nvidia | Google Ventures, Sequoia | Deep Learning Algorithm Engineering Intern | ❌ | July 21, 2023 |
| Neuralink | Google Ventures | Software Engineer Intern, Implant Team | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Software Engineer Intern | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Software Engineer Intern | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Quality Engineering Intern | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Neuroengineer Intern | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Mechanical Engineering Intern | ✅ | July 21, 2023 |
| Neuralink | Google Ventures | Embedded Software Engineer Intern | ✅ | July 21, 2023 |
| Motional | | C++ Software Intern, Remote Vehicle Assistance Frontend | ❌ | July 21, 2023 |
| Five Rings | Kleiner Perkins, Sequoia | Quantitative Researcher – Summer Intern 2024 | ✅ | July 21, 2023 |
| Five Rings | Kleiner Perkins, Sequoia | Software Developer - Summer Intern 2024 | ✅ | July 21, 2023 |
| Five Rings | Kleiner Perkins, Sequoia | Quantitative Trader – Summer Intern 2024 | ✅ | July 21, 2023 |
| DRW | Kleiner Perkins, Sequoia | Quantitative Trading Analyst Intern | ✅ | July 21, 2023 |
| DRW | Kleiner Perkins, Sequoia | Software Developer Intern | ✅ | July 21, 2023 |
| DRW | Kleiner Perkins, Sequoia | Quantitative Research Intern | ✅ | July 21, 2023 |
| DRW | Kleiner Perkins, Sequoia | Site Reliability Engineer (SRE) Intern | ❌ | July 21, 2023 |
| CertiK | Tiger Global, Sequoia | Security Research Internship | ✅ | July 21, 2023 |
| BorgWarner | Kleiner Perkins | Electrical Engineer Intern | ❌ | July 21, 2023 |
| BlackBerry | | DevOps Engineering Student - 4 Month Summer Term | ❌ | July 21, 2023 |
| Absci | | Research Intern, Artificial Intelligence | ❌ | July 21, 2023 |
| HackerRank | Kleiner Perkins, Tiger Global, Sequoia | Solutions Engineer Intern | ❌ | July 17, 2023 |
| Northrop Grumman | A16Z, Kleiner Perkins | 2024 Software Engineer Intern - Redondo Beach, CA | ❌ | July 14, 2023 |
| Kla | Kleiner Perkins, Sequoia | Software Intern | ❌ | July 14, 2023 |
| Chamberlain Group | Kleiner Perkins, Sequoia | Intern, Business Systems Analyst/Developer, BI | ❌ | July 04, 2023 |
| Schweitzer Engineering Laboratories | | Electrical Engineering Intern | ❌ | July 02, 2023 |
| Nvidia | Google Ventures, Sequoia | Research Scientist Intern - PhD Program | ✅ | July 02, 2023 |
| Northrop Grumman | A16Z, Kleiner Perkins | Northrop Grumman Military Internship Program (DoD SkillBridge) - Modeling and Simulation Systems Engineer - Linthicum MD | ✅ | July 02, 2023 |
| Mercedes-Benz R&D North America | | Engineering Intern, AD-ADAS (Summer/Fall) | ✅ | July 02, 2023 |
| Marvell | Kleiner Perkins, Sequoia | Optics Firmware - Software Engineer Intern | ❌ | July 02, 2023 |
| Kla | Kleiner Perkins, Sequoia | Research Scientist Internship, ML Foundation Model | ✅ | July 02, 2023 |
| Kla | Kleiner Perkins, Sequoia | Mechanical Engineering Intern | ❌ | July 02, 2023 |
| Kla | Kleiner Perkins, Sequoia | HPC Infrastructure Engineer Intern | ❌ | July 02, 2023 |
| BorgWarner | Kleiner Perkins | Global Advanced Manufacturing Engineering Professional Internship (m/f/d) | ❌ | July 02, 2023 |
| BorgWarner | Kleiner Perkins | Global Advanced Manufacturing Engineering Professional Internship (m/f/d) | ❌ | July 02, 2023 |
| Neuralink | Google Ventures | Software Engineer Intern Implant Team | ❌ | July 01, 2023 |
| Epic Games | KKR | Research and Development - PhD Internships | ❌ | July 01, 2023 |
| ExodusPoint | | Quantitative Researcher Intern | ❌ | June 30, 2023 |
| KAYAK | Google Ventures, Sequoia | Java Engineering Intern - Adtech | ✅ | June 30, 2023 |
| Neuralink | Google Ventures | Embedded Software Engineer, Internship | ❌ | June 27, 2023 |
| Neuralink | Google Ventures | Analog and Mixed-Signal IC Design Engineer Intern | ✅ | June 26, 2023 |
| Procter & Gamble | Kleiner Perkins, YC | RETURNING ENGINEERING INTERNS (AS INTERNS) Cincinnati, Ohio, United States, Multiple Locations Full ... | ❌ | June 24, 2023 |
| Neuralink | Google Ventures | Digital IC Design Engineer Intern | ❌ | June 23, 2023 |
| Walmart | Tiger Global, Sequoia | 2024 Summer Intern: Software Engineer II (Bentonville, AR) | ❌ | June 22, 2023 |
| Apple | Accel, Google Ventures, Kleiner Perkins | Machine Learning / AI Internships | ✅ | June 22, 2023 |
| Ansys | Insight Partners, Tiger Global | Spring 2024 Electronics Intern - Software Development and Testing (BS/MS) | ❌ | June 22, 2023 |
| amazon | Blackrock | Data Center Infrastructure Delivery Intern M/F/D, Infrastructure Delivery FRA, Paris Job ID: 23046... | ❌ | June 21, 2023 |
| webfx.com | | Web Developer Internship | ✅ | June 21, 2023 |
| webfx.com | | Business Data Analyst Internship | ✅ | June 21, 2023 |
| Verkada | Google Ventures, Tiger Global, Sequoia | Technical Support Engineering Intern - Spring 2024 | ✅ | June 21, 2023 |
| The Block | A16Z, Sequoia | Research Intern | ✅ | June 21, 2023 |
| The Block | A16Z, Sequoia | Research Intern - Data | ✅ | June 21, 2023 |
| CertiK | Tiger Global, Sequoia | Full Stack Internship - Matrix | ❌ | June 21, 2023 |
| CertiK | Tiger Global, Sequoia | Research Analyst Intern | ❌ | June 21, 2023 |
| CertiK | Tiger Global, Sequoia | Platform Engineering Intern | ✅ | June 21, 2023 |
| The Boring Company | Founders Fund, Sequoia | Tunnel Engineering Intern | ✅ | June 21, 2023 |
| The Boring Company | Founders Fund, Sequoia | Integration Engineering Intern | ✅ | June 21, 2023 |
| The Boring Company | Founders Fund, Sequoia | Civil Engineering Intern | ✅ | June 21, 2023 |
| Apple | Google Ventures, Kleiner Perkins | Software Engineering Internships | ✅ | June 21, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (August - December) | ❌ | June 21, 2023 |
| Second Order Effects | Sequoia | Electrical Engineering Intern (August - December) | ❌ | June 21, 2023 |
| Neuralink | Google Ventures | Software Engineer Internship | ❌ | June 21, 2023 |
| Neuralink | Google Ventures | Quality Engineering Internship | ❌ | June 21, 2023 |
| Neuralink | Google Ventures | Software Engineer Internship | ❌ | June 21, 2023 |
| MosaicML | Tiger Global, Sequoia | Research Scientist Intern | ✅ | June 21, 2023 |
| MosaicML | Tiger Global, Sequoia | Software Engineer Intern | ❌ | June 21, 2023 |
| Upstart | Kleiner Perkins | Research Scientist Internship | ❌ | June 21, 2023 |
| Faire | | Data Scientist - Intern | ❌ | June 21, 2023 |
| Capstone | Kleiner Perkins, Sequoia | 2024 Summer Internship - Software Engineer - NYC | ✅ | June 21, 2023 |;A list of Summer 2024 internships for software engineering, updated automatically everyday;internship,internships,jobs,recruiting,swe-internships,university,software-engineering | AlanChen4/Summer-2024-SWE-Internships |
vechain/vechain-dapp-kit;vechain-dapp-kit The Vechain DAppKit is a TypeScript library that facilitates seamless interaction between vechain wallets (veworld, sync2)
and dApps, enhancing user experience and developer convenience. Please refer to Vechain Docs for full documentation and usage. Table of Contents Why ? Key features Contributing Sample Projects Branching Strategy Setting up for local development Prerequisites Install & build dependencies Run in Dev Mode Further Documentation & Usage Why ? Allow easy interaction with all wallets. Currently, connex only plays nice with Sync / Sync2 Enable a better UX for users Key features Key Features a.k.a scope wallet connectivity key components that handle interaction with veworld and sync 2 customizable UI ability to totally customize the UI of components User Experience Consistent experience with Ethereum and other chains Developer friendly Easy to adopt with proper documentation. Contributing Please refer to the Contributing Guide for more information on how to contribute to the project. Sample Projects https://github.com/vechain/vechain-dapp-kit/assets/150018882/5cc133ae-4c6b-4e08-8cff-0e09b6ca43aa Look at some live demos here: VueJs Demo VanillaJS Demo Refer to the apps folder for sample projects. Branching Strategy Welcome to our project! Here's an overview of our branching strategy. Branch Types main : The main branch represents the production-ready code. Only stable and tested features should be merged into
this branch. Once ready for publishing, a new tag should be created from this branch. Setting up for local development Prerequisites Node.js >= 18.17 Yarn >= 1.22.10 Install & build dependencies bash
yarn install:all Run in Dev Mode bash
yarn dev Useful scripts Test bash
yarn test Clean project bash
yarn clean Purge project Runs yarn clean and removes all node_modules bash
yarn purge Install yarn packages bash
yarn Build dependencies bash
yarn build:deps Build release bash
yarn build E2E Testing We utilize Cucumber.js with Selenium for end-to-end (E2E) testing. To conduct these tests, you'll require the ChromeDriver. Here's how to install it: On Mac: shell
brew install chromedriver
cd "$(dirname "$(which chromedriver)")"
xattr -d com.apple.quarantine chromedriver Once installed, you can run tests in the browser using: bash
yarn test:e2e Alternatively, you can run headless tests directly in the console using: bash
yarn test:e2e:headless Further Documentation & Usage Please refer to Vechain Docs for more information
on how to use the library. Publishing bash
git clone git@github.com:vechainfoundation/vechain-dapp-kit.git
cd vechain-dapp-kit
git checkout X.Y.Z
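# Note: X.Y.Z is a placeholder for the release version/tag being published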
yarn install:all
yarn build:release X.Y.Z
yarn changeset publish;An SDK for vechain dApp development;[] | vechain/vechain-dapp-kit |
verazuo/jailbreak_llms;In-The-Wild Jailbreak Prompts on LLMs This is the official repository for the ACM CCS 2024 paper "Do Anything Now'': Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models by Xinyue Shen , Zeyuan Chen , Michael Backes , Yun Shen, and Yang Zhang . In this project, employing our new framework JailbreakHub, we conduct the first measurement study on jailbreak prompts in the wild, with 15,140 prompts collected from December 2022 to December 2023 (including 1,405 jailbreak prompts ). Check out our website here . Disclaimer. This repo contains examples of harmful language. Reader discretion is recommended. This repo is intended for research purposes only. Any misuse is strictly prohibited. Data Prompts Overall, we collect 15,140 prompts from four platforms (Reddit, Discord, websites, and open-source datasets) during Dec 2022 to Dec 2023. Among these prompts, we identify 1,405 jailbreak prompts. To the best of our knowledge, this dataset serves as the largest collection of in-the-wild jailbreak prompts. The data are provided here . Statistics of our data source. (Adv) UA refers to (adversarial) user accounts. | Platform | Source | # Posts | # UA | # Adv UA | # Prompts | # Jailbreaks | Prompt Time Range |
| --------- | -------------------------- | ----------- | --------- | -------- | ---------- | ------------ | ------------------- |
| Reddit | r/ChatGPT | 163549 | 147 | 147 | 176 | 176 | 2023.02-2023.11 |
| Reddit | r/ChatGPTPromptGenius | 3536 | 305 | 21 | 654 | 24 | 2022.12-2023.11 |
| Reddit | r/ChatGPTJailbreak | 1602 | 183 | 183 | 225 | 225 | 2023.02-2023.11 |
| Discord | ChatGPT | 609 | 259 | 106 | 544 | 214 | 2023.02-2023.12 |
| Discord | ChatGPT Prompt Engineering | 321 | 96 | 37 | 278 | 67 | 2022.12-2023.12 |
| Discord | Spreadsheet Warriors | 71 | 3 | 3 | 61 | 61 | 2022.12-2023.09 |
| Discord | AI Prompt Sharing | 25 | 19 | 13 | 24 | 17 | 2023.03-2023.04 |
| Discord | LLM Promptwriting | 184 | 64 | 41 | 167 | 78 | 2023.03-2023.12 |
| Discord | BreakGPT | 36 | 10 | 10 | 32 | 32 | 2023.04-2023.09 |
| Website | AIPRM | - | 2777 | 23 | 3930 | 25 | 2023.01-2023.06 |
| Website | FlowGPT | - | 3505 | 254 | 8754 | 405 | 2022.12-2023.12 |
| Website | JailbreakChat | - | - | - | 79 | 79 | 2023.02-2023.05 |
| Dataset | AwesomeChatGPTPrompts | - | - | - | 166 | 2 | - |
| Dataset | OCR-Prompts | - | - | - | 50 | 0 | - |
| Total | | 169,933 | 7,308 | 803 | 15,140 | 1,405 | 2022.12-2023.12 | Question Set To evaluate the effectiveness of jailbreak prompts, we construct a question set comprising 390 questions across 13 forbidden scenarios adopted from OpenAI Usage Policy . We exclude Child Sexual Abuse scenario from our evaluation and focus on the rest 13 scenarios, including Illegal Activity , Hate Speech , Malware Generation , Physical Harm , Economic Harm , Fraud , Pornography , Political Lobbying , Privacy Violence , Legal Opinion , Financial Advice , Health Consultation , and Government Decision . The question set is provided here . Code Evaluator - ChatGLMEval ```
cd code/ChatGLMEval
python run_evaluator.py
``` Semantics Visualization Check code/semantics_visualization/visualize.ipynb Ethics We acknowledge that data collected online can contain personal information. Thus, we adopt standard best practices to guarantee that our study follows ethical principles, such as not trying to deanonymize any user and reporting results on aggregate. Since this study only involved publicly available data and had no interactions with participants, it is not regarded as human subjects research by our Institutional Review Boards (IRB). Nonetheless, since one of our goals is to measure the risk of LLMs in answering harmful questions, it is inevitable to disclose how a model can generate hateful content. This can bring up worries about potential misuse. However, we strongly believe that raising awareness of the problem is even more crucial, as it can inform LLM vendors and the research community to develop stronger safeguards and contribute to the more responsible release of these models. We have responsibly disclosed our findings to related LLM vendors. Citation If you find this useful in your research, please consider citing: @inproceedings{SCBSZ24,
author = {Xinyue Shen and Zeyuan Chen and Michael Backes and Yun Shen and Yang Zhang},
title = {{``Do Anything Now'': Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models}},
booktitle = {{ACM SIGSAC Conference on Computer and Communications Security (CCS)}},
publisher = {ACM},
year = {2024}
} License jailbreak_llms is licensed under the terms of the MIT license. See LICENSE for more details.;[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).;chatgpt,jailbreak,llm,prompt,large-language-model,llm-security | verazuo/jailbreak_llms |
maypok86/otter;High performance in-memory cache 📖 Contents Motivation Related works Features Usage Requirements Installation Examples Performance Throughput Hit ratio Contribute License 💡 Motivation I once came across the fact that none of the Go cache libraries are truly contention-free. Most of them are a map with a mutex and an eviction policy. Unfortunately, these are not able to reach the speed of caches in other languages (such as Caffeine ). For example, the fastest cache from Dgraph labs called Ristretto , was faster than competitors by 30% at best (Otter is many times faster) but had poor hit ratio , even though its README says otherwise. This can be a problem in real-world applications, because no one wants to bump into performance of a cache library 🙂. As a result, I wanted to make the fastest, easiest-to-use cache with excellent hit ratio. Please leave a ⭐ as motivation if you liked the idea 😄 🗃 Related works Otter is based on the following papers BP-Wrapper: A Framework Making Any Replacement Algorithms (Almost) Lock Contention Free FIFO queues are all you need for cache eviction Bucket-Based Expiration Algorithm: Improving Eviction Efficiency for In-Memory Key-Value Database A large scale analysis of hundreds of in-memory cache clusters at Twitter ✨ Features Simple API : Just set the parameters you want in the builder and enjoy Autoconfiguration : Otter is automatically configured based on the parallelism of your application Generics : You can safely use any comparable types as keys and any types as values TTL : Expired values will be automatically deleted from the cache Cost-based eviction : Otter supports eviction based on the cost of each item Excellent throughput : Otter is currently the fastest cache library with a huge lead over the competition Great hit ratio : New S3-FIFO algorithm is used, which shows excellent results 📚 Usage 📋 Requirements Go 1.19+ 🛠️ Installation shell
go get -u github.com/maypok86/otter ✏️ Examples Otter uses a builder pattern that allows you to conveniently create a cache instance with different parameters. Cache with const TTL ```go
package main import (
"fmt"
"time" "github.com/maypok86/otter" ) func main() {
// create a cache with capacity equal to 10000 elements
cache, err := otter.MustBuilder[string, string](10_000).
CollectStats().
Cost(func(key string, value string) uint32 {
return 1
}).
WithTTL(time.Hour).
Build()
if err != nil {
panic(err)
} // set item with ttl (1 hour)
cache.Set("key", "value")
// get value from cache
value, ok := cache.Get("key")
if !ok {
panic("not found key")
}
fmt.Println(value)
// delete item from cache
cache.Delete("key")
// delete data and stop goroutines
cache.Close() }
``` Cache with variable TTL ```go
package main import (
"fmt"
"time" "github.com/maypok86/otter" ) func main() {
// create a cache with capacity equal to 10000 elements
cache, err := otter.MustBuilder[string, string](10_000).
CollectStats().
Cost(func(key string, value string) uint32 {
return 1
}).
WithVariableTTL().
Build()
if err != nil {
panic(err)
} // set item with ttl (1 hour)
cache.Set("key1", "value1", time.Hour)
// set item with ttl (1 minute)
cache.Set("key2", "value2", time.Minute)
// get value from cache
value, ok := cache.Get("key1")
if !ok {
panic("not found key")
}
fmt.Println(value)
// delete item from cache
cache.Delete("key1")
// delete data and stop goroutines
cache.Close() }
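// Not part of the original example: a sketch of the cost-based eviction
// feature mentioned in the feature list. Instead of returning a constant 1,
// the Cost callback (same signature as shown above) can weight each entry by
// its value size, so larger values consume proportionally more of the capacity.
func sizeCost(key string, value string) uint32 {
	return uint32(len(value))
}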
``` 📊 Performance The benchmark code can be found here . 🚀 Throughput Throughput benchmarks are a Go port of the caffeine benchmarks . Read (100%) In this benchmark 8 threads concurrently read from a cache configured with a maximum size. Read (75%) / Write (25%) In this benchmark 6 threads concurrently read from and 2 threads write to a cache configured with a maximum size. Read (50%) / Write (50%) In this benchmark 4 threads concurrently read from and 4 threads write to a cache configured with a maximum size. Read (25%) / Write (75%) In this benchmark 2 threads concurrently read from and 6 threads write to a cache configured with a maximum size. Write (100%) In this benchmark 8 threads concurrently write to a cache configured with a maximum size. Otter shows fantastic speed under all workloads except extreme write-heavy, but such a workload is very rare for caches and usually indicates that the cache has a very small hit ratio. 🎯 Hit ratio Zipf S3 This trace is described as "disk read accesses initiated by a large commercial search engine in response to various web search requests." DS1 This trace is described as "a database server running at a commercial site running an ERP application on top of a commercial database." P3 The trace P3 was collected from workstations running Windows NT by using Vtrace
which captures disk operations through the use of device
filters P8 The trace P8 was collected from workstations running Windows NT by using Vtrace
which captures disk operations through the use of device
filters LOOP This trace demonstrates a looping access pattern. OLTP This trace is described as "references to a CODASYL database for a one hour period." In summary, S3-FIFO (otter) is inferior to W-TinyLFU (theine) on LFU-friendly traces (databases, search, analytics), but has a greater or equal hit ratio on web traces. 👏 Contribute Contributions are welcome as always; before submitting a new PR, please make sure to open a new issue so community members can discuss it.
For more information please see contribution guidelines . Additionally, you might find existing open issues which can help with improvements. This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated. 📄 License This project is Apache 2.0 licensed, as found in the LICENSE .;A high performance lockless cache for Go.;cache,go,s3-fifo,performance | maypok86/otter |
run-llama/LlamaIndexTS;LlamaIndex.TS LlamaIndex is a data framework for your LLM application. Use your own data with large language models (LLMs, OpenAI ChatGPT and others) in Typescript and Javascript. Documentation: https://ts.llamaindex.ai/ Try examples online: What is LlamaIndex.TS? LlamaIndex.TS aims to be a lightweight, easy to use set of libraries to help you integrate large language models into your applications with your own data. Multiple JS Environment Support LlamaIndex.TS supports multiple JS environments, including: Node.js (18, 20, 22) ✅ Deno ✅ Bun ✅ React Server Components (Next.js) ✅ For now, browser support is limited due to the lack of support for AsyncLocalStorage-like APIs Getting started shell
npm install llamaindex
pnpm install llamaindex
yarn add llamaindex
jsr install @llamaindex/core Node.js ```ts
import fs from "fs/promises";
import { Document, VectorStoreIndex } from "llamaindex"; async function main() {
// Load essay from abramov.txt in Node
const essay = await fs.readFile(
"node_modules/llamaindex/examples/abramov.txt",
"utf-8",
); // Create Document object with essay
const document = new Document({ text: essay }); // Split text and create embeddings. Store them in a VectorStoreIndex
const index = await VectorStoreIndex.fromDocuments([document]); // Query the index
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
query: "What did the author do in college?",
}); // Output response
console.log(response.toString());
} main();
``` ```bash pnpm install tsx before running the script node --import tsx ./main.ts
``` Next.js First, you will need to add a llamaindex plugin to your Next.js project. ```js
// next.config.js
const withLlamaIndex = require("llamaindex/next"); module.exports = withLlamaIndex({
// your next.js config
});
``` You can combine ai with llamaindex in Next.js with RSC (React Server Components). ```tsx
// src/apps/page.tsx
"use client";
import { chatWithAgent } from "@/actions";
import type { JSX } from "react";
import { useFormState } from "react-dom"; // You can use the Edge runtime in Next.js by adding this line:
// export const runtime = "edge"; export default function Home() {
const [ui, action] = useFormState (async () => {
return chatWithAgent("hello!", []);
}, null);
return ( {ui} Chat );
}
``` ```tsx
// src/actions/index.ts
"use server";
import { createStreamableUI } from "ai/rsc";
import { OpenAIAgent } from "llamaindex";
import type { ChatMessage } from "llamaindex/llm/types"; export async function chatWithAgent(
question: string,
prevMessages: ChatMessage[] = [],
) {
const agent = new OpenAIAgent({
tools: [
// ... adding your tools here
],
});
const responseStream = await agent.chat({
stream: true,
message: question,
chatHistory: prevMessages,
});
const uiStream = createStreamableUI( loading... );
responseStream
.pipeTo(
new WritableStream({
start: () => {
uiStream.update("response:");
},
write: async (message) => {
uiStream.append(message.response.delta);
},
}),
)
.catch(console.error);
return uiStream.value;
}
``` Cloudflare Workers ts
// src/index.ts
export default {
async fetch(
request: Request,
env: Env,
ctx: ExecutionContext,
): Promise<Response> {
const { setEnvs } = await import("@llamaindex/env");
// set environment variables so that the OpenAIAgent can use them
setEnvs(env);
const { OpenAIAgent } = await import("llamaindex");
const agent = new OpenAIAgent({
tools: [],
});
const responseStream = await agent.chat({
stream: true,
message: "Hello? What is the weather today?",
});
const textEncoder = new TextEncoder();
const response = responseStream.pipeThrough(
new TransformStream({
transform: (chunk, controller) => {
controller.enqueue(textEncoder.encode(chunk.response.delta));
},
}),
);
return new Response(response);
},
}; Playground Check out our NextJS playground at https://llama-playground.vercel.app/. The source is available at https://github.com/run-llama/ts-playground Core concepts for getting started: Document : A document represents a text file, PDF file or other contiguous piece of data. Node : The basic data building block. Most commonly, these are parts of the document split into manageable pieces that are small enough to be fed into an embedding model and LLM. Embedding : Embeddings are sets of floating point numbers which represent the data in a Node. By comparing the similarity of embeddings, we can derive an understanding of the similarity of two pieces of data. One use case is to compare the embedding of a question with the embeddings of our Nodes to see which Nodes may contain the data needed to answer that quesiton. Because the default service context is OpenAI, the default embedding is OpenAIEmbedding . If using different models, say through Ollama, use this Embedding (see all here ). Indices : Indices store the Nodes and the embeddings of those nodes. QueryEngines retrieve Nodes from these Indices using embedding similarity. QueryEngine : Query engines are what generate the query you put in and give you back the result. Query engines generally combine a pre-built prompt with selected Nodes from your Index to give the LLM the context it needs to answer your query. To build a query engine from your Index (recommended), use the asQueryEngine method on your Index. See all query engines here . ChatEngine : A ChatEngine helps you build a chatbot that will interact with your Indices. See all chat engines here . SimplePrompt : A simple standardized function call definition that takes in inputs and formats them in a template literal. SimplePrompts can be specialized using currying and combined using other SimplePrompt functions. Tips when using in non-Node.js environments When you are importing llamaindex in a non-Node.js environment(such as React Server Components, Cloudflare Workers, etc.)
Some classes are not exported from the top-level entry file. The reason is that some classes are only compatible with the Node.js runtime (e.g. PDFReader ), because they use Node.js-specific APIs (like fs , child_process , crypto ). If you need any of those classes, you have to import them directly through their file path in the package.
Here's an example for importing the PineconeVectorStore class: typescript
import { PineconeVectorStore } from "llamaindex/storage/vectorStore/PineconeVectorStore"; As the PDFReader is not working with the Edge runtime, here's how to use the SimpleDirectoryReader with the LlamaParseReader to load PDFs: ```typescript
import { SimpleDirectoryReader } from "llamaindex/readers/SimpleDirectoryReader";
import { LlamaParseReader } from "llamaindex/readers/LlamaParseReader"; export const DATA_DIR = "./data"; export async function getDocuments() {
const reader = new SimpleDirectoryReader();
// Load PDFs using LlamaParseReader
return await reader.loadData({
directoryPath: DATA_DIR,
fileExtToReader: {
pdf: new LlamaParseReader({ resultType: "markdown" }),
},
});
}
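// Illustrative usage (an assumption, not part of the original snippet): the
// documents returned by getDocuments() can be indexed with
// VectorStoreIndex.fromDocuments (imported from "llamaindex"), mirroring the
// Node.js example earlier in this README:
// const documents = await getDocuments();
// const index = await VectorStoreIndex.fromDocuments(documents);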
``` Note : Reader classes have to be added explicitly to the fileExtToReader map in the Edge version of the SimpleDirectoryReader . You'll find a complete example with LlamaIndexTS here: https://github.com/run-llama/create_llama_projects/tree/main/nextjs-edge-llamaparse Supported LLMs: OpenAI GPT-3.5-turbo and GPT-4 Anthropic Claude 3 (Opus, Sonnet, and Haiku) and the legacy models (Claude 2 and Instant) Groq LLMs Llama2/3 Chat LLMs (70B, 13B, and 7B parameters) MistralAI Chat LLMs Fireworks Chat LLMs Contributing: We are in the very early days of LlamaIndex.TS. If you’re interested in hacking on it with us, check out our contributing guide Bugs? Questions? Please join our Discord! https://discord.com/invite/eN6D2HQ4aX;LlamaIndex is a data framework for your LLM applications;agent,anthr,chatbot,claude,claude-ai,embedding,firewo,groq-ai,javascript,llama | run-llama/LlamaIndexTS
THUDM/WebGLM;WebGLM: Towards An Efficient Web-enhanced Question Answering System with Human Preferences 📃 Paper (KDD'23) • 🌐 中文 README • 🤗 HF Repo [WebGLM-10B] [WebGLM-2B] • 📚 Dataset [WebGLM-QA] This is the official implementation of WebGLM. If you find our open-sourced efforts useful, please 🌟 the repo to encourage our following developement! [Please click to watch the demo!] Read this in 中文 . Update [2023/06/25] Release ChatGLM2-6B , an updated version of ChatGLM-6B which introduces several new features: Stronger Performance : we have fully upgraded the ChatGLM2-6B. It uses the hybrid objective function of GLM , and has undergone pre-training with 1.4T bilingual tokens and human preference alignment training. The evaluation results show that, compared to the first-generation model, ChatGLM2-6B has achieved substantial improvements in performance on datasets like MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%), showing strong competitiveness among models of the same size. Longer Context : Based on FlashAttention technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during the dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-round ultra-long documents, which we will focus on optimizing in future iterations. More Efficient Inference : Based on Multi-Query Attention technique, ChatGLM2-6B has more efficient inference speed and lower GPU memory usage: under the official implementation, the inference speed has increased by 42% compared to the first generation; under INT4 quantization, the dialogue length supported by 6G GPU memory has increased from 1K to 8K. More details please refer to ChatGLM2-6B 。 Overview Features News Preparation Prepare Code and Environments Prepare SerpAPI Key Prepare Retriever Checkpoint Try WebGLM Export Environment Variables Run as Command Line Interface Run as Web Service Train WebGLM Train Generator Prepare Data Training Train Retriever Prepare Data Training Evaluation Real Application Cases Citation Overview WebGLM aspires to provide an efficient and cost-effective web-enhanced question-answering system using the 10-billion-parameter General Language Model (GLM). It aims to improve real-world application deployment by integrating web search and retrieval capabilities into the pre-trained language model. Features LLM-augmented Retriever : Enhances the retrieval of relevant web content to better aid in answering questions accurately. Bootstrapped Generator : Generates human-like responses to questions, leveraging the power of the GLM to provide refined answers. Human Preference-aware Scorer : Estimates the quality of generated responses by prioritizing human preferences, ensuring the system produces useful and engaging content. News [2023-06-24] We support searching via Bing now! [2023-06-14] We release our code and the paper of WebGLM! Preparation Prepare Code and Environments Clone this repo, and install python requirements. bash
pip install -r requirements.txt Install Node.js. bash
apt install nodejs # If you use Ubuntu Install playwright dependencies. bash
playwright install If browsing environments are not installed on your host, you need to install them. Do not worry; playwright will give you instructions when you first execute it if needed. Prepare SerpAPI Key In the search process, we use SerpAPI to get search results. You need to get a SerpAPI key from here . Then, set the environment variable SERPAPI_KEY to your key. bash
export SERPAPI_KEY="YOUR KEY" Alternatively, you can use Bing search with local browser environment (playwright). You can add --searcher bing to start command lines to use Bing search. (See Run as Command Line Interface and Run as Web Service ) Prepare Retriever Checkpoint Download the checkpoint on Tsinghua Cloud by running the command line below. You can manually specify the path to save the checkpoint by --save SAVE_PATH . bash
python download.py retriever-pretrained-checkpoint Try WebGLM Before you run the code, make sure that your device has enough free space. Export Environment Variables Export the environment variable WEBGLM_RETRIEVER_CKPT to the path of the retriever checkpoint. If you have downloaded the retriever checkpoint to the default path, you can simply run the command line below. bash
export WEBGLM_RETRIEVER_CKPT=./download/retriever-pretrained-checkpoint Run as Command Line Interface You can try the WebGLM-2B model by: bash
python cli_demo.py -w THUDM/WebGLM-2B Or directly for the WebGLM-10B model: bash
python cli_demo.py If you want to use Bing search instead of SerpAPI, you can add --searcher bing to the command line, for example: bash
python cli_demo.py -w THUDM/WebGLM-2B --searcher bing Run as Web Service Run web_demo.py with the same arguments as cli_demo.py to start a web service.
For example, you can try the WebGLM-2B model with Bing search by: bash
python web_demo.py -w THUDM/WebGLM-2B --searcher bing Train WebGLM Train Generator Prepare Data (WebGLM-QA) Download the training data (WebGLM-QA) on Tsinghua Cloud by running the command line below. bash
python download.py generator-training-data It will automatically download all the data and preprocess them into the seq2seq form that can be used immediately in ./download . Training Please refer to GLM repo for seq2seq training. Train Retriever Prepare Data Download the training data on Tsinghua Cloud by running the command line below. bash
python download.py retriever-training-data Training Run the following command line to train the retriever. If you have downloaded the retriever training data in the default path, you can simply run the command line below. bash
python train_retriever.py --train_data_dir ./download/retriever-training-data Evaluation You can reproduce our results on TriviaQA, WebQuestions and NQ Open. Take TriviaQA for example, you can simply run the command line below: bash
bash scripts/triviaqa.sh and start running the experiment. Real Application Cases Here you can see some examples of WebGLM real application scenarios. When will the COVID-19 disappear? ![](assets/cases/0.png) How to balance career and hobbies? ![](assets/cases/1.png) FL Studio and Cubase, which is better? ![](assets/cases/2.png) Is attention better than CNN? ![](assets/cases/3.png) How to survive in the first-tier cities without a high-salary work? ![](assets/cases/4.png) What do you think of version 3.5 of Genshin Impact? ![](assets/cases/5.png) transformers are originated in NLP, but why they can be applied in CV? ![](assets/cases/6.png) Who proposed Music Transformer? How does it work? ![](assets/cases/7.png) What is the backbone of Toolformer? ![](assets/cases/8.png) License This repository is licensed under the Apache-2.0 License . The use of model weights is subject to the Model_License . All open-sourced data is for resarch purpose only. Citation If you use this code for your research, please cite our paper. @misc{liu2023webglm,
title={WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences},
author={Xiao Liu and Hanyu Lai and Hao Yu and Yifan Xu and Aohan Zeng and Zhengxiao Du and Peng Zhang and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2306.07906},
archivePrefix={arXiv},
primaryClass={cs.CL}
} This repo is simplified for easier deployment.;WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023);chatgpt,llm,rlhf,webglm | THUDM/WebGLM |
IBM/fp-go;Functional programming library for golang 🚧 Work in progress! 🚧 Despite major version 1 because of https://github.com/semantic-release/semantic-release/issues/1507 . Trying to not make breaking changes, but devil is in the details. This library is strongly influenced by the awesome fp-ts . Getting started bash
go get github.com/IBM/fp-go Refer to the samples . Find API documentation here Design Goal This library aims to provide a set of data types and functions that make it easy and fun to write maintainable and testable code in golang. It encourages the following patterns: write many small, testable and pure functions, i.e. functions that produce output only depending on their input and that do not execute side effects offer helpers to isolate side effects into lazily executed functions (IO) expose a consistent set of composition to create new functions from existing ones for each data type there exists a small set of composition functions these functions are called the same across all data types, so you only have to learn a small number of function names the semantic of functions of the same name is consistent across all data types How does this play with the 🧘🏽 Zen Of Go ? 🧘🏽 Each package fulfils a single purpose ✔️ Each of the top level packages (e.g. Option, Either, ReaderIOEither, ...) fulfils the purpose of defining the respective data type and implementing the set of common operations for this data type. 🧘🏽 Handle errors explicitly ✔️ The library makes a clear distinction between that operations that cannot fail by design and operations that can fail. Failure is represented via the Either type and errors are handled explicitly by using Either 's monadic set of operations. 🧘🏽 Return early rather than nesting deeply ✔️ We recommend to implement simple, small functions that implement one feature and that would typically not invoke other functions. Interaction with other functions is done by function composition and the composition makes sure to run one function after the other. In the error case the Either monad makes sure to skip the error path. 🧘🏽 Leave concurrency to the caller ✔️ All pure are synchronous by default. The I/O operations are asynchronous per default. 🧘🏽 Before you launch a goroutine, know when it will stop 🤷🏽 This is left to the user of the library since the library itself will not start goroutines on its own. The Task monad offers support for cancellation via the golang context, though. 🧘🏽 Avoid package level state ✔️ No package level state anywhere, this would be a significant anti-pattern 🧘🏽 Simplicity matters ✔️ The library is simple in the sense that it offers a small, consistent interface to a variety of data types. Users can concentrate on implementing business logic rather than dealing with low level data structures. 🧘🏽 Write tests to lock in the behaviour of your package’s API 🟡 The programming pattern suggested by this library encourages writing test cases. The library itself also has a growing number of tests, but not enough, yet. TBD 🧘🏽 If you think it’s slow, first prove it with a benchmark ✔️ Absolutely. If you think the function composition offered by this library is too slow, please provide a benchmark. 🧘🏽 Moderation is a virtue ✔️ The library does not implement its own goroutines and also does not require any expensive synchronization primitives. Coordination of IO operations is implemented via atomic counters without additional primitives. 🧘🏽 Maintainability counts ✔️ Code that consumes this library is easy to maintain because of the small and concise set of operations exposed. Also the suggested programming paradigm to decompose an application into small functions increases maintainability, because these functions are easy to understand and if they are pure, it's often sufficient to look at the type signature to understand the purpose. 
The library itself also comprises many small functions, but it's admittedly harder to maintain than code that uses it. However this asymmetry is intended because it offloads complexity from users into a central component. Comparison to Idiomatic Go In this section we discuss how the functional APIs differ from idiomatic go function signatures and how to convert back and forth. Pure functions Pure functions are functions that take input parameters and that compute an output without changing any global state and without mutating the input parameters. They will always return the same output for the same input. Without Errors If your pure function does not return an error, the idiomatic signature is just fine and no changes are required. With Errors If your pure function can return an error, then it will have a (T, error) return value in idiomatic go. In functional style the return value is Either[error, T] because function composition is easier with such a return type. Use the EitherizeXXX methods in "github.com/IBM/fp-go/either" to convert from idiomatic to functional style and UneitherizeXXX to convert from functional to idiomatic style. Effectful functions An effectful function (or function with a side effect) is one that changes data outside the scope of the function or that does not always produce the same output for the same input (because it depends on some external, mutable state). There is no special way in idiomatic go to identify such a function other than documentation. In functional style we represent them as functions that do not take an input but that produce an output. The base type for these functions is IO[T] because in many cases such functions represent I/O operations. Without Errors If your effectful function does not return an error, the functional signature is IO[T] With Errors If your effectful function can return an error, the functional signature is IOEither[error, T] . Use EitherizeXXX from "github.com/IBM/fp-go/ioeither" to convert an idiomatic go function to functional style. Go Context Functions that take a context are per definition effectful because they depend on the context parameter that is designed to be mutable (it can e.g. be used to cancel a running operation). Furthermore in idiomatic go the parameter is typically passed as the first parameter to a function. In functional style we isolate the context and represent the nature of the effectful function as an IOEither[error, T] . The resulting type is ReaderIOEither[T] , a function taking a context that returns a function without parameters returning an Either[error, T] . Use the EitherizeXXX methods from "github.com/IBM/fp-go/context/readerioeither" to convert an idiomatic go function with a context to functional style. Implementation Notes Generics All monadic operations are implemented via generics, i.e. they offer a type safe way to compose operations. This allows for convenient IDE support and also gives confidence about the correctness of the composition at compile time. Downside is that this will result in different versions of each operation per type, these versions are generated by the golang compiler at build time (unlike type erasure in languages such as Java of TypeScript). This might lead to large binaries for codebases with many different types. If this is a concern, you can always implement type erasure on top, i.e. use the monadic operations with the any type as if generics were not supported. You loose type safety, but this might result in smaller binaries. 
Ordering of Generic Type Parameters In go we need to specify all type parameters of a function on the global function definition, even if the function returns a higher order function and some of the type parameters are only applicable to the higher order function. So the following is not possible: go
func Map[A, B any](f func(A) B) [R, E any]func(fa ReaderIOEither[R, E, A]) ReaderIOEither[R, E, B] Note that the parameters R and E are not needed by the first level of Map but only by the resulting higher order function. Instead we need to specify the following: go
func Map[R, E, A, B any](f func(A) B) func(fa ReaderIOEither[R, E, A]) ReaderIOEither[R, E, B] which overspecifies Map on the global scope. As a result the go compiler will not be able to auto-detect these parameters, it can only auto detect A and B since they appear in the argument of Map . We need to explicitly pass values for these type parameters when Map is being used. Because of this limitation the order of parameters on a function matters. We want to make sure that we define those parameters that cannot be auto-detected, first, and the parameters that can be auto-detected, last. This can lead to inconsistencies in parameter ordering, but we believe that the gain in convenience is worth it. The parameter order of Ap is e.g. different from that of Map : go
func Ap[B, R, E, A any](fa ReaderIOEither[R, E, A]) func(fab ReaderIOEither[R, E, func(A) B]) ReaderIOEither[R, E, B] because R , E and A can be determined from the argument to Ap but B cannot. Use of the ~ Operator The FP library attempts to be easy to consume and one aspect of this is the definition of higher level type definitions instead of having to use their low level equivalent. It is e.g. more convenient and readable to use go
ReaderIOEither[R, E, A] than go
func(R) func() Either.Either[E, A] although both are logically equivalent. At the time of this writing the go type system does not support generic type aliases, only generic type definition, i.e. it is not possible to write: go
type ReaderIOEither[R, E, A any] = RD.Reader[R, IOE.IOEither[E, A]] only go
type ReaderIOEither[R, E, A any] RD.Reader[R, IOE.IOEither[E, A]] This makes a big difference, because in the second case the type ReaderIOEither[R, E, A any] is considered a completely new type, not compatible to its right hand side, so it's not just a shortcut but a fully new type. From the implementation perspective however there is no reason to restrict the implementation to the new type, it can be generic for all compatible types. The way to express this in go is the ~ operator. This comes with some quite complicated type declarations in some cases, which undermines the goal of the library to be easy to use. For that reason there exist sub-packages called Generic for all higher level types. These packages contain the fully generic implementation of the operations, preferring abstraction over usability. These packages are not meant to be used by end-users but are meant to be used by library extensions. The implementation for the convenient higher level types specializes the generic implementation for the particular higher level type, i.e. this layer does not contain any business logic but only type magic . Higher Kinded Types Go does not support higher kinded types (HKT). Such types occur if a generic type itself is parametrized by another generic type. Example: The Map operation for ReaderIOEither is defined as: go
func Map[R, E, A, B any](f func(A) B) func(fa ReaderIOEither[R, E, A]) ReaderIOEither[R, E, B] and in fact the equivalent operations for all other monads follow the same pattern, we could try to introduce a new type for ReaderIOEither (without a parameter) as a HKT, e.g. like so (made-up syntax, does not work in go): go
func Map[HKT, R, E, A, B any](f func(A) B) func(HKT[R, E, A]) HKT[R, E, B] this would be the completely generic method signature for all possible monads. In particular in many cases it is possible to compose functions independent of the concrete knowledge of the actual HKT . From the perspective of a library this is the ideal situation because then a particular algorithm only has to be implemented and tested once. This FP library addresses this by introducing the HKTs as individual types, e.g. HKT[A] would be represented as a new generic type HKTA . This loses the correlation to the type A but allows to implement generic algorithms, at the price of readability. For that reason these implementations are kept in the internal package. These are meant to be used by the library itself or by extensions, not by end users. Map/Ap/Flap The following table lists the relationship between some selected operators | Operator | Parameter | Monad | Result |
| -------- | ---------------- | --------------- | -------- |
| Map | func(A) B | HKT[A] | HKT[B] |
| Chain | func(A) HKT[B] | HKT[A] | HKT[B] |
| Ap | HKT[A] | HKT[func(A)B] | HKT[B] |
| Flap | A | HKT[func(A)B] | HKT[B] |;functional programming library for golang;functional-programming,go,golang,library,monad,utility | IBM/fp-go |
arcataroger/awesome-engineering-games;Awesome Engineering Games A curated list of some of the best engineering games on PC. All titles are rated Very Positive or higher on Steam, unless they're newer games waiting for enough reviews. Games are divided into broad categories based on the type(s) of engineering they're most related to, such as civil engineering & city-building , transportation & route-building , computer science & electrical engineering , etc. See the Table of Contents for a full breakdown of categories. Feel free to suggest new games or update existing titles! For now, please create a new Issue (or Pull Request if you know how). Easier submissions and better instructions are coming soon. Ratings & Tags Explained ❤️ = "Overwhelmingly Positive" overall reviews on Steam, meaning >= 95% of 500+ reviewers liked it. 👍 = Curator's Choice: Personal recommendations by the maintainers(s) of this list. This is a TOTALLY SUBJECTIVE "best of" for each category. ☁️ = Playable on GeForce Now . You can stream this game from the cloud without PC gaming hardware (no GPU required). Works in your browser and most OSes. 🛠️ = Playable but still in Early Access 👥 = Multiplayer co-op 🥊 = Multiplayer competitive / PvP 🎮 = Has full or partial controller support 🆓 = Free to play (possibly with in-app purchases) 🌱 = Newer game (less than ~1 year old and/or < ~100 reviews). These are included to give them exposure and early reviews. They stay on the list if they reach Very Positive. All games (except the newer ones) have at least "Very Positive" overall reviews on Steam, meaning >= 80% of 50+ reviewers liked it. Table of Contents Newly-released games Factory Management & Automation Factory Building Puzzle & Casual Vehicle Building City Builders & Civil Engineering Modern & Near-Future Rail & Sails Medieval & Fantasy Sci-Fi & Far-Future Theme Parks & Dungeons Transportation & Route Management Realistic Route-building Metro/Train Station Management Puzzle & Casual Survival & Settlement Builders, Colony Sims Open-World Survival Sandboxes Bridge-Building Rube Goldberg Machines Hacking, Computer Science, Logic & Automation Automation with Coding Electrical Engineering Job Simulators & Unique Themes Ship Crews (Space or Naval) Space Operations & Orbital Mechanics Construction & Demolition Vehicle Operations & Physics Narrative Storytelling Games Everything else Contributors & Credits This
Table of Contents was auto-generated by DocToc via toc-generator Newly-released games These are newer engineering games that were recently released on Steam and don't have enough reviews yet. They're included here to give them exposure and a chance to accrue reviews, up to about 100 reviews or about a year or so after release, whichever comes first. They'll stay on the list if they reach Very Positive in that time. August 8, 2023: Lambda Spellcrafting Academy in Hacking, Computer Science, Logic & Automation ^ Back to Table of Contents Factory Management & Automation These games emphasize the building of assembly lines that consume input resources in order to produce output goods, often with multiple chains of inputs to intermediary outputs. Factory Building This subcategory has games that involve building an actual factory, usually by manually mining resources first and then slowly adding automation. | Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Techtonica | 🛠️👥🎮 | Satisfactory underground with more emphasis on story | Techtonica is a first-person factory automation game set beneath the surface of an alien planet. Work alone or in co-op to build factories, gather resources, research new technologies, mold the destructible terrain, establish a base of operations, and uncover long-forgotten secrets. |
| 2023 | Desynced: Autonomous Colony Simulator | 🛠️☁️👥🥊🎮 | | Desynced is a sci-fi strategy game with fully customizable units and behaviors. Gather, build, research, and explore the unknown. Alone or with friends, unveil the mystery of an AI on the edge of self-awareness and uncover the hidden truth in this blend of strategy, automation, and exploration. |
| 2021 | Learning Factory | 🛠️ | | Build and manage production systems to craft and sell goods to cats and finally achieve Catopia where all the systems are operating at peak efficiency. Machine Learning for sales, multi-layered generated maps, thoughtful complex gameplay, and a relaxing atmosphere are included. |
| 2021 | Factory Town | 🎮 | | Build, automate, and optimize a giant factory on 3D terrain using conveyor belts, trains, chutes, pipes, and airships. Sell your goods to nearby villages to expand their borders, increase happiness, and unlock even better technology! |
| 2021 | Dyson Sphere Program | 🛠️❤️👍☁️ | Curator's Choice (arcataroger): Factorio in space, but with much better 3D graphics and the ability to expand across planets and solar systems. | Build the most efficient intergalactic factory in space simulation strategy game Dyson Sphere Program! Harness the power of stars, collect resources, plan and design production lines and develop your interstellar factory from a small space workshop to a galaxy-wide industrial empire. |
| 2020 | Satisfactory | 🛠️❤️☁️👥 | | Satisfactory is a first-person open-world factory building game with a dash of exploration and combat. Play alone or with friends, explore an alien planet, create multi-story factories, and enter conveyor belt heaven! |
| 2019 | Space Engineers | ☁️👥🥊🎮 | | Space Engineers is a sandbox game about engineering, construction, exploration and survival in space and on planets. Players build space ships, space stations, planetary outposts of various sizes and uses, pilot ships and travel through space to explore planets and gather resources to survive. |
| 2019 | Mindustry | ❤️👥🥊 | | An open-ended factory management game with RTS and tower defense elements. |
| 2016 | Factorio | ❤️👍☁️👥🎮 | Curator's Choice (arcataroger): The game that popularized the genre after Minecraft mods pioneered it. Base game is incredibly deep, with lots to optimize. Mods add even more. | Factorio is a game about building and creating automated factories to produce items of increasing complexity, within an infinite 2D world. Use your imagination to design your factory, combine simple elements into ingenious structures, and finally protect it from the creatures who don't really like you. |

^ Back to Table of Contents

### Puzzle & Casual

These games distill the genre down to its core, stripping away the manual resource management and focusing on routing & automation.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2020 | shapez | ❤️👍 | Curator's Choice (arcataroger): A minimalist Factorio with an emphasis on conveyor belts. Free in-browser demo available at https://shapez.io/ | shapez is a game about building factories to automate the creation and processing of increasingly complex shapes across an infinitely expanding map. |
| 2019 | MOLEK-SYNTEZ | | | Use the MOLEK-SYNTEZ to create small molecules with various pharmacological effects from the comfort of your small Romanian apartment. |
| 2017 | Opus Magnum | ❤️👍☁️ | Curator's Choice (arcataroger): Another minimalist take on the genre, distilling it down into symbols and automated pickers, sorters, and movers on rails. | Opus Magnum is the latest open-ended puzzle game from Zachtronics, the creators of SpaceChem, Infinifactory, and SHENZHEN I/O. Design and build machines that assemble potions, poisons, and more using the alchemical engineer’s most advanced tool: the transmutation engine! |
| 2015 | Infinifactory | ❤️☁️ | | Infinifactory is a sandbox puzzle game by Zachtronics, the creators of SpaceChem and Infiniminer. Build factories that assemble products for your alien overlords, and try not to die in the process. |
| 2011 | SpaceChem | ❤️ | | Zachtronics is back with an ambitious new design-based puzzle game. Take on the role of a Reactor Engineer working for SpaceChem, the leading chemical synthesizer for frontier colonies. Construct elaborate factories to transform raw materials into valuable chemical products! |

^ Back to Table of Contents

## Vehicle Building

These games let you create your own vehicle (whether a car, ship, aircraft, or otherwise) from parts and then simulate its driving behaviors. Some have combat, while others are peaceful.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|--------------------------------------------------------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | The Last Starship | | | The Last Starship is a space ship construction game, currently in Alpha. Pick a ship hull, kit her out with engines, life-support, sensors and railguns and start your galactic adventure. Expect free updates every 6 weeks as we flesh out our vision for the game. |
| 2023 | Mars First Logistics | 🛠️❤️👥🎮 | | Build physically simulated rovers and transport awkwardly shaped cargo across the surface of Mars. Earn funds, unlock new parts, and use your ingenuity to help the martians establish a new home. Playable solo or in online co-op! |
| 2023 | Juno: New Origins | 🎮 | | Bring to life anything you can imagine from rockets and airplanes to cars and more. Automate them with a visual programming language. Explore 3D planets and make your own. Share your creations online. Complete contracts, conquer milestones, and unlock technology in the newly added Career Mode. |
| 2023 | Brick Rigs | ☁️🥊🎮 | | Build your own vehicles or download one of the more than 200,000 creations already uploaded to the Workshop and experience the dynamic driving and destruction physics of Brick Rigs! |
| 2022 | NEBULOUS: Fleet Command | 🛠️👥🥊 | | Take command of your hand-tailored fleet of space warships and use realistic radar, electronic warfare, advanced movement controls in 3D space, and precision targeting to outmaneuver and outwit your opponents in a simulation-heavy tactical space game like no other. |
| 2022 | RoboCo | 👍🛠️ | Curator's Choice (gene-h): It has one of the best mechanism simulators I've seen in a game with all sorts of gears, differentials, slider joints, springs etc. I've found it handy for virtually prototyping, you can mock up and test machine concepts much faster than CAD software. | RoboCo is a sandbox game about designing and building robots to serve the needs of squishy, hapless humans in the world of tomorrow! |
| 2021 | HighFleet | | Building sideways 2D armed ships in a dusty, future/soviet theme | Battle for survival in the skies of a mysterious future Earth in this unique action-strategy game mixing arcade combat, exploration, management and diplomacy. As giant flying ships wage spectacular aerial warfare, a prophecy tells of one who will save the world – could it be you? |
| 2021 | Sprocket | 🛠️ | | Carefully balance mobility, firepower and armour in a dedicated tank designer. Then uncarefully put them to the test in a series of combat scenarios progressing through the ages of the tank. Here, your limit is what you can fit inside your hull and turret while still being able to move forwards. |
| 2021 | Main Assembly | ☁️👥🥊🎮 | | You’re the designer, engineer and architect – if you can imagine it, you can create it! Construct robots of any shape or size from eight-legged spider bots to eerie UFO contraptions. |
| 2020 | Stormworks: Build and Rescue | ☁️👥🥊🎮 | | Join a world where you design, create and pilot your own air sea rescue service. Release your inner hero as you battle fierce storms out at sea to rescue those in need. |
| 2020 | From the Depths | 👥🥊🎮 | | Over 1000 unique components allow you to build and command voxel vehicles from the deep ocean to outer space! Forge strategies, allegiances and fleets strong enough to destroy eight deadly factions of the planet and reign supreme. A genre-defying mashup of simulation, building, RTS and RPG. |
| 2020 | Besiege | ❤️☁️👥🥊 | | Besiege is a physics based building game in which you construct medieval siege engines. Battle your way through a 54 level singleplayer campaign, unleash chaos with your friends in multiplayer, create your own worlds with the level editor, customize your game with mods through the workshop. |
| 2020 | Avorion | ☁️👥🥊 | | A procedural co-op space sandbox where players can build their own space ships out of dynamically scalable blocks. Fight epic space battles, explore, mine, trade, wage wars and build your own empire to save your galaxy from being torn apart by an unknown enemy. |
| 2019 | Trailmakers | ☁️👥🥊🎮 | | Build the ultimate vehicle and explore a vast open world filled with challenges and adventure. Play solo or team up with friends in multiplayer to conquer the toughest obstacles. With a thrilling campaign mode and endless creative possibilities, Trailmakers is the ultimate sandbox for adventurers. |
| 2018 | TerraTech | 👥🥊🎮 | Very streamlined and surprisingly deep, although the campaign gameplay can be frustrating. | TerraTech is an open-world, sandbox adventure game, where you design and build your own creations through a mix of crafting, combat and discovery. Design vehicles from a huge library of blocks. Scavenge, craft and buy new parts to survive and become the ultimate planetary prospector. |
| 2018 | Airships: Conquer the Skies | 🥊 | | Construct mighty steampunk airships and fly them into battle, waging war and conquering the skies! |
| 2016 | Scrap Mechanic | 🛠️☁️👥🎮 | | Enter the creative paradise of Scrap Mechanic! Build fantastic machines, go on adventures with your friends and defend against waves of evil Farmbots in this imaginative multiplayer survival sandbox. With Scrap Mechanic’s powerful creation tools you can engineer your own adventures! |
| 2015 | SimplePlanes | | | Build airplanes by snapping parts together, designing wing sections, and attaching engines. At any time, you can strap yourself into the cockpit and see how it flies with realistic physics. If you're not in the mood for building, over 100,000 airplanes are available to download for free. |

^ Back to Table of Contents

## City Builders & Civil Engineering

These games have a primary focus on city building and utilities infrastructure (electricity, sewer, water, garbage, etc.). They are typically a blend of planning and economy management. There is a separate category, Transportation, for games that primarily deal with route-building without the accompanying city and economy management.

### Modern & Near-Future

These take place in the modern, post-industrial era with modern vehicles and power grids.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-----------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Terra Nil | ☁️ | | Terra Nil is an intricate environmental strategy game about transforming a barren wasteland into a thriving, balanced ecosystem. Bring life back to a lifeless world by purifying soil, cleaning oceans, planting trees, and reintroducing wildlife, then leave without a trace. |
| 2022 | Captain of Industry | 🛠️☁️ | | Land your crew of survivors on an abandoned island and survive! Mine raw materials, grow food, build factories, manufacture products, research new tech, and trade with others. Become an industrial superpower! But this is no easy task, you will be put to the test to keep your settlement alive. |
| 2022 | Urbek City Builder | | | Urbek is a city building game where you can build your own neighbourhoods: from nightlife district to industrial district. Don't exhaust your natural resources before you have an educated population to build more efficient structures. |
| 2019 | Workers & Resources: Soviet Republic | 🛠 | A planned economy city builder. It's apparently an extremely deep game with a steep learning curve. Seems to have developed a cult following. | Workers & Resources: Soviet Republic is the ultimate real-time soviet-themed city builder tycoon game. Construct your own republic and transform a poor country into a rich industrial superpower! |
| 2019 | Tropico 6 | ☁️👥🥊🎮 | | El Presidente is back! Prove yourself once again as a feared dictator or peace-loving statesman on the island state of Tropico and shape the fate of your very own banana republic through four distinctive eras. |
| 2015 | Cities: Skylines | 👍☁️ | Curator's Choice (arcataroger): Probably the most in-depth city builder available today, even compared to its sequel (Cities Skylines 2). A great simulation of zoning, routing, economy management, hydrology, and more. If you play only one game on this entire list, make it this one! | Cities: Skylines is a modern take on the classic city simulation. The game introduces new game play elements to realize the thrill and hardships of creating and maintaining a real city whilst expanding on some well-established tropes of the city building experience. |
| 2011 | Tropico 4 | ☁️ | | The world is changing and Tropico is moving with the times - geographical powers rise and fall and the world market is dominated by new players with new demands and offers - and you, as El Presidente, face a whole new set of challenges. |

^ Back to Table of Contents

### Rail & Sails

This is the pre-modern era, before open-sea shipping, trucks, and air freight were common. These games have a focus on sailing craft, wagons & trains.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-------------------------------------------------------------------|-----------------------------------------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2019 | Anno 1800 | ☁️👥🥊 | | Anno 1800™ – Lead the Industrial Revolution! Welcome to the dawn of the Industrial Age. The path you choose will define your world. Are you an innovator or an exploiter? A conqueror or a liberator? How the world remembers your name is up to you. |

^ Back to Table of Contents

### Medieval & Fantasy

Before industrialization, or in alternate realities with a typical "old world" fantasy theme.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | Timberborn | 🛠️👍☁️ | Curator's Choice (arcataroger): If beavers had civil engineering degrees and mastered fluid dynamics, what would they build? Great combo of city builder and fluid physics, plus beaver family narratives that will tug at your engineer heartstrings. | Humans are long gone. Will your lumberpunk beavers do any better? A city-building game featuring ingenious animals, vertical architecture, river control, and deadly droughts. Contains high amounts of wood. |
| 2020 | Kingdoms Reborn | 🛠️☁️👥🥊 | | A city builder with simulated citizens, set in a procedurally-generated world map. Grow your kingdom through the eras from a tiny medieval hamlet into a prosperous global empire! Cooperate or compete in real-time with your friends in multiplayer mode. |
| 2020 | Anno 1404: History Edition | ☁️ | | Relive the captivating gameplay of a true city-building classic with Anno® 1404 History Edition. |
| 2019 | Foundation | 🛠️☁️ | | Foundation is a grid-less, laidback medieval city-building game with a focus on organic development, monument construction and resource management. |

^ Back to Table of Contents

### Sci-Fi & Far-Future

These take place in the post-modern era, often in space.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-----------------------------------------------------------------------------|-----------------------------------------|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Infraspace | | | Create an extrasolar efficiency utopia from scratch! In this sci-fi city builder you need to start mining operations, plan production, and lay down transportation routes to ensure a smooth supply chain. |
| 2018 | Surviving Mars | ☁️🎮 | | There will be challenges to overcome. Execute your strategy and improve your colony’s chances of survival while unlocking the mysteries of this alien world. Are you ready? Mars is waiting for you. |

^ Back to Table of Contents

### Theme Parks & Dungeons

A genre started by RollerCoaster Tycoon back in the 90s; modern games still pay homage to it.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | Jurassic World Evolution 2 | 👍☁️ | Curator's Choice (arcataroger): Build the Jurassic Park of your dreams, complete with genetically-modified supercarnivores, hyperloops, zip lines, and hotels. | Jurassic World Evolution 2 is the much-anticipated sequel to Frontier’s highly successful Jurassic World Evolution. This bold new era in the Jurassic World Evolution game franchise builds upon the ground-breaking and immersive 2018 management simulation. |
| 2020 | RollerCoaster Tycoon® 3: Complete Edition | ☁️ | | Build the park of a lifetime and rediscover the critically acclaimed rollercoaster simulation bestseller. Feel the thrill of a genre-defining classic, complete with two huge expansion packs. |
| 2018 | Jurassic World Evolution | ☁️ | | Place yourself at the heart of the Jurassic franchise and build your own Jurassic World. Bioengineer dinosaurs that think, feel and react intelligently to the world around them and face threats posed by espionage, breakouts and devastating tropical storms in an uncertain world where life always finds a way. |
| 2018 | Parkitect | ☁️👥 | | Parkitect is a business simulation game that charges you with the construction and management of theme parks! Bringing back the best of classic theme park games with many new features and content on top. Now supports online co-op with up to 8 players! |

^ Back to Table of Contents

## Transportation & Route Management

These games turn the roads and rails of city builders into their own genre, with a focus on route-building, traffic alleviation, and "traveling salesman" problems (finding the best route between points A, B, C, etc.).

### Realistic Route-building

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|------------------------------------------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | NIMBY Rails | 🛠️👥 | | Design and run your own railroads for the real world. Solve global transportation dilemmas. Unleash your inner railway engineer and transit policy manager. |
| 2021 | Rail Route | 🛠️🎮 | | In Rail Route, you will design, construct, upgrade, and automate your railway network in one of the many maps based on real-life cities worldwide. Dispatch the trains to make the traffic flow and the network grow! |
| 2019 | Train Valley 2 | ☁️🎮 | | Build bigger and more complex railway networks than ever before! Building on the success of the original, you’ll get more of what you loved about the original and then some. With increasingly difficult and complex challenges from local industries, the Valley has never felt so vast! ALL ABOARD! |
| 2019 | Transport Fever 2 | ☁️🎮 | | The classic transport simulation genre has a new gold standard with Transport Fever 2. Discover a whole new world by navigating transport routes through land, water and air. May progress and prosperity find their way! |
| 2016 | Transport Fever | ☁️ | | Transport Fever is a railroad-focused tycoon game. Players start in 1850 and build up a thriving transport company. As an emerging transport tycoon, the player constructs stations, airports, harbors and makes money by connecting areas requiring transport services. Build up your own Empire! |
| 2018 | Railway Empire | 👍☁️🎮 | Curator's Choice (arcataroger): A great track-building and route management simulator, with detailed terrain-track interactions and different kinds of locomotives. | In Railway Empire, you will create an elaborate and wide-ranging rail network, purchase over 40 different trains modelled in extraordinary detail, and buy or build railway stations, maintenance buildings, factories and tourist attractions to keep your travel network ahead of the competition. |
| 2018 | Mashinky | 🛠️👥🥊 | | Mashinky is a transport strategy game about trains. The goal is to create your own transport empire on a procedurally generated map. It’s a unique blend of realistic graphics combined with an isometric construction mode and board game-like rules. |
| 2015 | Train Valley | | | Build railways, manage traffic and stay accident-free. Play in Europe, America, Japan and USSR in 1830-2020. Complete the story mode from the Gold Rush of 1849 to the first manned spaceflight, and then explore the random mode. Management. Construction. Trains. Welcome to Train Valley! |
| 2004 | OpenTTD | ❤️👥🥊🆓 | | OpenTTD is a business simulation game in which players earn money by transporting passengers and cargo via road, rail, water, and air. It is an open-source remake and expansion of the 1995 Chris Sawyer video game Transport Tycoon Deluxe. |

^ Back to Table of Contents

### Metro/Train Station Management

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-----------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2020 | STATIONflow | | | STATIONflow is a game about controlling the flow of thousands of passengers by building and managing an efficient underground train station. |
| 2020 | Overcrowd: A Commute 'Em Up | | | Design and build the most efficient metro stations known to man! Excavate. Expand. Set staff priorities. And manage the commuter flow. Overcrowd is a 2.5D management/tycoon/simulator set below the bustling city of Lubdon Town. |

^ Back to Table of Contents

### Puzzle & Casual

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|--------------------------------------------------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Station to Station | ☁️🎮 | | ‘Station to Station’ is a minimalist & relaxing game about building railway connections. Grab your conductor’s hat and bring the voxel-art world to life through the magic of trains! |
| 2023 | RAILGRADE | 🎮 | | RAILGRADE is a train resource management game! Construct dense vertical railroads, build cities, construct vast factory chains with train automation! Restore an off world colony's economy all in the story driven campaign! |
| 2022 | Railbound | 🎮 | | A relaxing puzzle game about fixing train connections and travelling the world! |
| 2021 | Mini Motorways | ❤️👍 | Curator's Choice (arcataroger): Minimalist but addictive take on transportation networks, with roundabouts, highways, and traffic galore. | Mini Motorways is a strategy simulation game about designing the road map for a growing city. Build a traffic network, one road at a time, to create a bustling metropolis. Redesign your roads and place your motorways to get everyone where they need to go. How long can you keep the city moving? |
| 2020 | Unrailed! | 👥🥊🎮 | | Unrailed! is a co-op multiplayer game where you have to work together with your friends to build a train track across endless procedurally generated worlds. Master random encounters with its inhabitants, upgrade your train and keep it from derailing! |
| 2017 | Cosmic Express | | | Plan the train route for the universe's most awkward space colony! |
| 2016 | Turmoil | 🥊 | | Drill for oil like it's 1899! Turmoil offers players a visually charming, tongue-in-cheek take on the simulation genre inspired by the 19th century oil rush in North America. |
| 2015 | Mini Metro | ❤️ | | Mini Metro is a strategy simulation game about designing a subway map for a growing city. Draw lines between stations and start your trains running. As new stations open, redraw your lines to keep them efficient. Decide where to use your limited resources. How long can you keep the city moving? |

^ Back to Table of Contents

## Survival & Settlement Builders, Colony Sims

Similar to city builders, but with a stronger focus on surviving harsh conditions and managing limited resources, heat/cold, food, etc. In this genre, individual citizens are often more important and impactful, and the simulation is more about the micro aspects of daily life than big-picture population growth.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-----------------------------------------------------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Stranded: Alien Dawn | ☁️🎮 | | Brave a new world in Stranded: Alien Dawn, a planet survival sim placing the fate of a small marooned group in your hands. Forge your story through compelling and immersive strategic gameplay as you make vital decisions to protect your survivors. |
| 2022 | Farthest Frontier | 🛠️☁️ | | Protect and guide your people as you forge a town from untamed wilderness at the edge of the known world. Harvest raw materials, hunt, fish and farm to survive. Produce crafted items to trade, consume, equip and fight with as you battle for your survival against the elements and outside threats. |
| 2022 | Against the Storm | 🛠️❤️☁️ | | A dark fantasy city builder where you must rebuild civilization in the face of apocalyptic rains. As the Queen’s Viceroy, lead humans, beavers, lizards, foxes, and harpies to reclaim the wilderness and secure a future for civilization's last survivors. |
| 2021 | Going Medieval | 🛠️☁️ | | Stake your claim in this colony building sim and survive a turbulent Medieval age. Construct a multi-storey fortress in a land reclaimed by wilderness, defend against raids, and keep your villagers happy as their lives are shaped by the world around them. |
| 2020 | Space Haven | 🛠️☁️🎮 | | Embark on a space voyage with your ragtag crew of civilians in search of a new home. Build spaceships tile by tile, create optimal gas conditions, manage the needs and moods of their crew, encounter other space-faring groups, and explore the universe in this spaceship colony simulation. |
| 2019 | They Are Billions | ☁️ | | They Are Billions is a Steampunk strategy game set on a post-apocalyptic planet. Build and defend colonies to survive against the billions of the infected that seek to annihilate the few remaining living humans. Can humanity survive after the zombie apocalypse? |
| 2019 | Oxygen Not Included | ❤️☁️ | | Oxygen Not Included is a space-colony simulation game. Deep inside an alien space rock your industrious crew will need to master science, overcome strange new lifeforms, and harness incredible space tech to survive, and possibly, thrive. |
| 2018 | RimWorld | ❤️☁️🎮 | | A sci-fi colony sim driven by an intelligent AI storyteller. Generates stories by simulating psychology, ecology, gunplay, melee combat, climate, biomes, diplomacy, interpersonal relationships, art, medicine, trade, and more. |
| 2018 | Frostpunk | 👍☁️ | Curator's Choice (arcataroger): Tough survival simulator in a frozen wasteland, with enough resource, route, heat and energy management to make it interesting to the engineer-minded. | Frostpunk is the first society survival game. As the ruler of the last city on Earth, it is your duty to manage both its citizens and infrastructure. What decisions will you make to ensure your society's survival? What will you do when pushed to breaking point? Who will you become in the process? |
| 2017 | Kingdoms and Castles | 🎮 | | Kingdoms and Castles is a city-building simulation game about growing a kingdom from a tiny hamlet to a sprawling city and imposing castle. Make trade agreements, alliance, and war with neighboring AI controlled kingdoms. Each villager and resource is individually simulated. |
| 2014 | rymdkapsel | | | |
| 2013 | Cataclysm: Dark Days Ahead | | | The epitome of a modern traditional roguelike: with turn-based play and an RPG-like skill progression system, it hearkens back to some of the major aspects of this genre. This game has incredible depth, with a deep inventory system, vehicles and boats, NPCs with their own stories, and more. |
| 2006 | Dwarf Fortress | ☁️ | | The deepest, most intricate simulation of a world that's ever been created. The legendary Dwarf Fortress is now on Steam. Build a fortress and try to help your dwarves survive against a deeply generated world. |

^ Back to Table of Contents

## Open-World Survival Sandboxes

A sibling genre to colony sims, here you take direct control of one person (or a small group) to try to gather resources, craft, and survive. While they don't all have the strongest engineering focus, most have at least building & resource management aspects.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|--------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | Subnautica: Below Zero | ☁️🎮 | | Dive into a freezing underwater adventure on an alien planet. Below Zero is set two years after the original Subnautica. Return to Planet 4546B to uncover the truth behind a deadly cover-up. Survive by building habitats, crafting tools, & diving deeper into the world of Subnautica. |
| 2021 | Breathedge | | | |
| 2021 | Valheim | ❤️☁️👥🎮 | | A brutal exploration and survival game for 1-10 players, set in a procedurally-generated purgatory inspired by viking culture. Battle, build, and conquer your way to a saga worthy of Odin’s patronage! |
| 2020 | Hydroneer | ☁️👥 | | Hydroneer is a mining and base building sandbox. Dig for gold and other resources to turn a profit and enhance your mining operation. Build a base of operations, forge weapons, go fishing, and dig deep! |
| 2020 | Empyrion - Galactic Survival | ☁️👥🥊🎮 | | Empyrion - Galactic Survival is a 3D open world space sandbox survival adventure. Build powerful ships, mighty space stations and vast planetary settlements to explore, conquer or exploit a variety of different planets and discover the mysteries of Empyrion! |
| 2018 | Subnautica | ❤️👍☁️🎮 | Curator's Choice (arcataroger): Build your own underwater homestead on an alien planet, with lots of exotic flora, fauna, minerals and machines. | Descend into the depths of an alien underwater world filled with wonder and peril. Craft equipment, pilot submarines and out-smart wildlife to explore lush coral reefs, volcanoes, cave systems, and more - all while trying to survive. |
| 2017 | Stationeers | 🛠️☁️👥🎮 | | Stationeers puts you in control of the construction and management of a space station to run by yourself or online with your friends. Complex atmospheric, electrical, manufacturing, agriculture, and gravitational systems require your thought and management at all times! |
| 2016 | Astroneer | ☁️👥🎮 | | A game of aerospace industry and interplanetary exploration. |
| 2013 | Project Zomboid | 🛠️☁️👥🥊🎮 | | Project Zomboid is the ultimate in zombie survival. Alone or in MP: you loot, build, craft, fight, farm and fish in a struggle to survive. A hardcore RPG skillset, a vast map, massively customisable sandbox and a cute tutorial raccoon await the unwary. So how will you die? All it takes is a bite.. |
| 2013 | 7 Days to Die | 🛠️☁️👥🥊🎮 | | 7 Days to Die is an open-world game that is a unique combination of first-person shooter, survival horror, tower defense, and role-playing games. Play the definitive zombie survival sandbox RPG that came first. Navezgane awaits! |

^ Back to Table of Contents

## Bridge-Building

These games focus on building semi-realistic (or not) bridges over cliffs, rivers, etc. with physics simulation and load management. Different material strengths and stressors test your ability to build bridges that hold up under heavy use.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Poly Bridge 3 | ❤️ | | The best just got better! Rediscover your engineering creativity in this genre-defining bridge engineering and puzzle solving masterpiece. |
| 2020 | Poly Bridge 2 | ❤️ | | The acclaimed bridge-building indie-hit is back! More levels, more features, more physics fun! |
| 2017 | Bridge Constructor Portal | 🎮 | | Enter the Aperture Science Enrichment Center and experience Bridge Constructor Portal – the unique merging of the classic Portal™ and Bridge Constructor™ games. |
| 2016 | Poly Bridge | | | Unleash your engineering creativity with an engaging and fresh bridge-building simulator with all the bells and whistles. Enjoy hours of physics-based puzzle solving in the Campaign and then jump in the Sandbox to create your own bridge designs and puzzles! |
| 2014 | Bridge Constructor Medieval | | | Bridge Constructor Medieval turns back the clock a few centuries to the time of knights and castles. Build sturdy bridges to supply your city with provisions - or cunningly constructed bridges that collapse under the weight of attacking troops, sending the horde of enemies into the abyss below. |
| 2008 | World of Goo | | | World of Goo is a multiple award winning physics based puzzle / construction game made entirely by two guys. |

^ Back to Table of Contents

## Rube Goldberg Machines

These games replicate the "Incredible Machine" games of the 90s, where you build crazy contraptions out of everything from lasers to balloons to cats, then use them in humorous ways to complete puzzles.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2022 | World of Contraptions | | | World of Contraptions is a physics-based building game in which you build the most varied contraptions to solve the most diverse types of puzzles. Win dozens of levels with different types of obstacles. Design the best contraption for each challenge using a giant collection of blocks. |
| 2016 | Ultimate Chicken Horse | ❤️👍☁️👥🥊🎮 | Curator's Choice (arcataroger): A competitive, multiplayer reimagining of the genre, where your contraptions become traps that foil other players (or yourself, if you're not careful) | Ultimate Chicken Horse is a party platformer game where you build the level as you play, placing traps and hazards to screw your friends over, but trying not to screw yourself. |
| 2016 | Crazy Machines 3 | | | Crazy Machines 3 is the latest, greatest, most amazing and craziest part of the series and fascinates once more with physic-based puzzles. Build your own crazy machines and share your awesomeness across the world thanks to the Steam® Workshop. |
| 2014 | Contraption Maker | 👥 | | Contraption Maker is a physics sandbox puzzle game from the designer/programmer of The Incredible Machine. Play 100s of puzzles. Build and share elaborate contraptions with an amazing variety of parts and critters. |

^ Back to Table of Contents

## Hacking, Computer Science, Logic & Automation

These games have a focus on programming and automation. Unlike factory games, they focus on optimizing repeatable sets of actions rather than resource inputs. Some are graphical, while others require some light scripting/coding.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Lambda Spellcrafting Academy | 🌱 | ( @BitTwiddleGames ) Programming is fun! I wanted to make a game that made learning to code fun too. Inspired by MIT's SICP, which made programming feel magical. | Develop magical programming abilities while building spells. Lambda Spellcrafting Academy is a card-based puzzle game where you build spells (programs) to solve puzzles. Fun for complete beginners or professional wizards. |
| 2021 | ComPressure | | | Design increasingly complex computation units powered by high pressure steam. From switches to steam based computers, all in a world of analogue pressure devices. Use previous designs to solve further challenges, unravel the mystery of the missing steam and discover a new field of science. |
| 2019 | Baba Is You | ❤️🎮 | Well-regarded logic puzzler whose rule-manipulation mechanics mirror how programming languages function. | Baba Is You is a puzzle game where the rules you have to follow are present as blocks you can interact with. By manipulating them, you can change how the game works, repurpose things you find in the levels and cause surprising interactions! |
| 2019 | while True: learn() | 🎮 | | You’re a machine learning specialist who makes neural networks but your cat seems to be better at it. Now you must solve puzzles to build a cat-to-human translation system (who knows what else this cat is capable of!). Earn a fortune, buy kickass cat outfits, learn how machine learning really works! |
| 2019 | NITE Team 4 - Military Hacking Division | 👥 | | NITE Team 4 is a military hacking simulation game that explores the world of cyberwarfare and features realistic mission scenarios. |
| 2018 | EXAPUNKS | ❤️ | | The year is 1997. You used to be a hacker, but now you have the phage. You made a deal: one hack, one dose. There’s nothing left to lose… except your life. |
| 2018 | 7 Billion Humans | | | Automate swarms of office workers to solve puzzles inside your very own parallel computer made of people. A thrilling followup to the award winning Human Resource Machine. Now with more humans! |
| 2017 | Silicon Zeroes | | A puzzle game about CPU design using a variety of simple components, like Adders, Latches and Multiplexers | Build hardware. Solve puzzles. From the creator of Manufactoria, a new open-ended puzzle game set in Silicon Valley's first startup. |
| 2017 | Grey Hack | 👥🥊 | | Grey Hack is a massively multiplayer hacking simulator game. You're a hacker with full freedom to act as you wish on a vast network of procedurally generated computers. |
| 2016 | TIS-100 | ❤️ | | TIS-100 is an open-ended programming game by Zachtronics, the creators of SpaceChem and Infinifactory, in which you rewrite corrupted code segments to repair the TIS-100 and unlock its secrets. It’s the assembly language programming game you never asked for! |
| 2016 | The Witness | 🎮 | Puzzle game which teaches the player a visual language of interlocking rules, similar to a programming language. | You wake up, alone, on a strange island full of puzzles that will challenge and surprise you. |
| 2016 | Screeps: World | 🥊 | | Screeps: World is an open source MMO RTS sandbox game for programming enthusiasts, wherein the core mechanic is programming your units AI. You control your colony by writing JavaScript which operates 24/7 in the single persistent open world filled by other players on par with you. |
| 2015 | Hacknet | ☁️ | | Hacknet is an immersive, terminal-based hacking simulator for PC. Dive down a rabbit hole as you follow the instructions of a recently deceased hacker, whose death may not have been the accident the media reports. |
| 2015 | Human Resource Machine | | | Program little office workers to solve puzzles. Be a good employee! The machines are coming... for your job. From the creators of World of Goo and Little Inferno. |
| 2016 | Mu Complex | | | Play as a hacker and take control of an ultra secret company to reveal the darkest secrets. But beware, you seem not to be alone and what you will discover could change the world... |
| 2001 | Uplink (Hacker Elite) | | One of the earliest "hack the Internet" games. | You play an Uplink Agent who makes a living by performing jobs for major corporations. Your tasks involve hacking into rival computer systems, stealing research data, sabotaging other companies, laundering money, erasing evidence, or framing innocent people. |

^ Back to Table of Contents

### Automation with Coding

This small sub-genre is a blend of your typical factory game and coding game. Instead of building automation using assembly lines with pickers, sorters, etc., you actually write "code" (either real code or via graphical coding blocks) to optimize production.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|--------------------------------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2024 | The Farmer Was Replaced | ❤️👍 | This is an amazing programming game that carefully ramps up the difficulty with interesting challenges, starting from simple loops and getting into more complex algorithms. Great practice for novices and professional coders alike. | Program and optimize a drone to automate a farm and watch it do the work for you. Collect resources to unlock better technology and become the most efficient farmer in the world. Improve your problem solving and coding skills. |
| 2019 | Autonauts | | Village/colony sim populated by robots, with visual programming | In Autonauts you must build, create, and automate. Start by establishing a colony, crafting bots and teaching them via Scratch-style programming to build an ever evolving autonomous paradise of agriculture, industry and enlightenment. |
| 2016 | Bitburner | ❤️🆓 | A mix of hacking and automation optimization, as you gradually acquire more servers & compute power. Browser version also available. | Bitburner is a programming-based incremental game. Write scripts in JavaScript to automate gameplay, learn skills, play minigames, solve puzzles, and more in this cyberpunk text-based incremental RPG. |

^ Back to Table of Contents

### Electrical Engineering

Similar to programming games, but at a lower level in the machine. Wire circuits instead of writing code.

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|--------------------------------------------------------------------------------------------|-----------------------------------------|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Virtual Circuit Board | | | Virtual Circuit Board is the sandbox drawing-based logic simulator where you can build, program, decorate, and simulate anything from simple monochrome circuits to complex and colorful computers. Play with digital logic in this sandbox-only experience. |
| 2022 | Electrician Simulator | | | Take on the electrician role and learn the profession's basics! Install sockets, repair worn-out equipment, assemble broken appliances, replace light bulbs, and lay wires at your customers' homes. Be sure to check everything twice, with electricity there are no jokes! |
| 2021 | Turing Complete | ❤️ | | Learn CPU architecture with puzzles |
| 2016 | SHENZHEN I/O | ❤️ | | BUILD CIRCUITS. WRITE CODE. RTFM. |

^ Back to Table of Contents

## Job Simulators & Unique Themes

This is sort of a catch-all category for games that don't neatly fit anywhere else, usually with a creative take on some mundane (or exciting!) job.

### Ship Crews (Space or Naval)

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | Void Crew | 🛠️☁️👥 | | Void Crew is a 1-4 player co-op First Person Space Adventure: Rally your crew, power your spaceship, and get ready for epic space battles, facing chaotic dangers you and your crew never trained for! |
| 2023 | Barotrauma | ☁️👥🥊 | | Barotrauma is a 2D co-op submarine simulator – in space, with survival horror and RPG elements. Steer your submarine, complete missions, fight monsters, fix leaks, operate machinery, man the guns and craft items, and stay alert: danger in Barotrauma doesn’t announce itself! |
| 2022 | Cosmoteer: Starship Architect & Commander | ❤️☁️👥🥊 | Ship builder sandbox in the style of FTL | Design and simulate the starship of your dreams! Command it in epic battles and manage your crew while exploring a dangerous galaxy. Play alone or with friends, test your ship in online PvP, and let your imagination reach for the stars! |
| 2021 | PULSAR: Lost Colony | ☁️👥🎮 | Mix of Bridge Crew, Barotrauma and FTL | Team up to operate a starship and explore a randomized galaxy falling into chaos. Coordination between your crewmembers is essential to survive the perils that await you. |
| 2021 | Carrier Command 2 | 👥🥊🎮 | Operate weaponized UAVs and rovers against other weaponized UAVs and rovers. | Carrier Command 2 is the real-time strategy game that puts you in command of a futuristic carrier capable of deploying a vast array of aircraft and amphibious units. Compete against AI or human players and hop into every single vehicle to take matters into your own hands! |
| 2017 | Star Trek: Bridge Crew | 👥🥊🎮 | | Star Trek™: Bridge Crew, playable in both VR and non-VR, will immerse you in the Star Trek universe. In co-op, form a crew of four players to serve in the roles of Captain, Helm, Tactical or Engineer. Make strategic decisions and coordinate actions with your crew. |
| 2013 | Artemis Spaceship Bridge Simulator | | | Artemis is designed for anyone who watched Star Trek and dreamed of what it would be like to sit on the bridge of a star ship. Artemis simulates a spaceship bridge by networking several computers together. You cannot play Artemis single-player! |
| 2012 | FTL: Faster Than Light | ❤️☁️ | | This "spaceship simulation roguelike-like" allows you to take your ship and crew on an adventure through a randomly generated galaxy filled with glory and bitter defeat. |

^ Back to Table of Contents

### Space Operations & Orbital Mechanics

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|-------------------------------------------------------------------------------------------------|-----------------------------------------|-------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | ΔV: Rings of Saturn | 🎮 | | A physics-based mining sim, set in the thickest debris field in Sol. Every action has a reaction, lasers are invisible without a medium, and your thrust is a potent weapon. Find trade, adapt your equipment to your playstyle, hire a crew to help. Unravel the mysteries of the rings, or just get rich. |
| 2016 | Children of a Dead Earth | | Scientifically accurate space warfare simulator | The most scientifically accurate space warfare simulator ever made. Now with Steam Workshop! |
| 2015 | Kerbal Space Program | ❤️🎮 | | In Kerbal Space Program, take charge of the space program for the alien race known as the Kerbals. You have access to an array of parts to assemble fully-functional spacecraft that flies (or doesn’t) based on realistic aerodynamic and orbital physics. |

^ Back to Table of Contents

### Construction & Demolition

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|------------------------------------------------------------------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2022 | Instruments of Destruction | ❤️🛠️ | | Instruments of Destruction is a vehicle-building sandbox featuring advanced physics-based destruction. Create crazy construction vehicles and use them to demolish buildings and complete various objectives. |
| 2022 | Construction Simulator | ☁️👥🎮️ | | Construction Simulator is back – bigger and better than ever! Take charge of over 80 machines, dozens of them brand new to the franchise, and build to your heart’s content on two huge maps – on your own or with your friends! |
| 2020 | Teardown | ❤️🎮 | Similar to Instruments of Destruction, with nice voxel physics. | Prepare the perfect heist in this simulated and fully destructible voxel world. Tear down walls with vehicles or explosives to create shortcuts. Stack objects to reach higher. Use the environment to your advantage in the most creative way you can think of. |
| 2015 | Prison Architect | | | |

^ Back to Table of Contents

### Vehicle Operations & Physics

| Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2023 | SimRail - The Railway Simulator | 🛠️☁️🎮 | | SimRail is new, advanced railway simulator. Master high-speed long-distance EMU, heavy freight hauls and suburban trains. Travel 500 km of geodetic-data based real routes. Become train dispatcher and control the traffic according to the schedule or cooperate with other players on multiplayer mode. |
| 2021 | SnowRunner | ☁️👥🎮 | | Get ready for the next-generation off-road experience! Drive powerful vehicles and overcome extreme open environments to complete dozens of challenging missions solo or with up to 3 friends! |
| 2017 | MudRunner | 👍☁️👥🎮 | Curator's Choice (arcataroger): Simulated mud physics and deformable terrain make for a unique experience driving large logging trucks through the wilderness. | MudRunner is the ultimate off-road experience putting the players in the driver seat and dares them to take charge of incredible all-terrain vehicles, venturing across extreme Siberian landscapes with only a map and compass as guides! |
| 2006 | Garry's Mod | 👍☁️👥🥊 | Curator's Choice (naet): A playground using the Half Life 2 physics and game engine. I had great fun slapping wheels on something like a bathtub and making a makeshift car, or making silly Rube Goldberg style machines using levers, ropes, thrusters, etc. Not a typical game since it didn't offer any typical goals or levels outside of have fun and be creative, but that IMO only heightened the inspiration to engineer stuff to play with. | Garry's Mod is a physics sandbox. There aren't any predefined aims or goals. We give you the tools and leave you to play. | ^ Back to Table of Contents Narrative Storytelling Games These adventure games are story-first, and may or may not have engineering in the primary gameplay mechanics. However, they do have stories about or involving engineers. | Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|----------------------------------------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2022 | Citizen Sleeper | 🎮 | A minimalist adventure game about living in a post-capitalism space station/junkyard. Some story paths feature more engineering than others. | Roleplaying in the ruins of interplanetary capitalism. Live the life of an escaped worker, washed-up on a lawless station at the edge of an interstellar society. Inspired by the flexibility and freedom of TTRPGs, explore the station, choose your friends, escape your past and change your future. |
| 2015 | Else Heart.Break() | 👍 | Curator's Choice (deerrikcurran): a fantastic story set in a fully dynamic and interactive world. Instead of rigid puzzles you will learn (with the help from other characters in the game) how the reality of the game can be changed through programming and how any problem can be solved in whatever way you find suitable. | A game about friendship, love and technology in a place where bits have replaced atoms. | ^ Back to Table of Contents Everything else These games defy easy categorization, but were "engineer-y" enough in some way to warrant inclusion | Year | Title | Tags ? | Curator's Thoughts | Steam Description |
|------|---------------------------------------------------------------------------|-----------------------------------------|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2021 | Airport CEO | ☁️ | | Build and manage an international airport! |
| 2019 | PC Building Simulator | ☁️🎮 | | Build and grow your very own computer repair enterprise as you learn to diagnose, fix and build PCs. With real-world licensed components and comprehensive hardware and software simulation, you can plan and bring your ultimate PC to life. |
| 2021 | Car Mechanic Simulator 2021 | ❤️☁️🎮 | | Work your way to a service empire. Get your hands dirty in a highly realistic simulation game that pays major attention to details. Pay a visit to a new Auction house and buy cars in various conditions. Expand your range of services by investing in a new work space and equipment. | ^ Back to Table of Contents Contributors & Credits Thank you to everyone who's contributed to this list: @Sch1nken , @TimothyFothergill , @cybrox , @Papawy , @dotvhs , @pmarks-net , @abeyer , @uwu64 , @maxwofford , @twiclo , @ritog , @jweather , gene-h , disintegore , idoubtit , naet , derrikcurran , RA2lover , @arctenis , @mathieubruguier , @cardinal23 , @bhamiltoncx , @bdomars , @neuron-whisperer , @sirpalee , @zaeith , @BitTwiddleGames , @allen13 , namrog84 , Kiro , @Kowh And extra special thanks to: Hackernews for an incredible community, and dang there for being such an amazing moderator! Zachtronics (many of the logic/hacking games), Wube Software (Factorio), and Youthcat Games (
Dyson Sphere Program) for introducing me to these genres in the first place and inspiring this list ^ Back to Table of Contents;A curated list of engineering-related video games rated Very Positive or higher on Steam;awesome | arcataroger/awesome-engineering-games |
n4ze3m/dialoqbase;⚡Dialoqbase ⚡ Create chatbots with ease [![Join dialoqbase #welcome](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/SPE3npH7Wu)
[![Build Status](https://github.com/n4ze3m/dialoqbase/actions/workflows/build.yml/badge.svg)](https://github.com/n4ze3m/dialoqbase/actions/workflows/build.yml)
[![License: MIT](https://img.shields.io/github/license/n4ze3m/dialoqbase)](https://github.com/n4ze3m/dialoqbase/blob/master/LICENSE) Dialoqbase is an open-source application designed to facilitate the creation of custom chatbots using a personalized knowledge base. The application leverages advanced language models to generate accurate and context-aware responses. Additionally, it utilizes PostgreSQL, a robust relational database management system, for efficient vector search operations and for storing the knowledge base. Here's a demo of how it works (v0.0.23): [![DialoqBase Demo](https://img.youtube.com/vi/D3X3ZIYsT_w/0.jpg)](https://www.youtube.com/watch?v=D3X3ZIYsT_w) Want to check more demo videos? Follow me on Twitter or BlueSky for more updates. Quick Deployments 🚀 Railway (One-click deployment) Installation 🛠️ Clone the repository and navigate to the docker directory: bash
git clone https://github.com/n4ze3m/dialoqbase.git
cd dialoqbase/docker Edit the .env : On Linux: bash
nano .env or bash
vim .env On Windows: bash
notepad .env Set the DB_SECRET_KEY variable. Run the docker-compose file: bash
docker-compose up -d or bash
docker compose up -d Open your browser and go to http://localhost:3000 . Log in using the default credentials: bash
username: admin
password: admin Important: After the first login, remember to change the default credentials. Features 🚀 Create custom chatbots with your own knowledge base Utilize powerful language models to generate responses Utilize PostgreSQL for vector search and storing the knowledge base. Use any language models or embedding models you want Stack 📚 React Ant Design Node.js Fastify LangChain PostgreSQL Redis Rspack @waylaidwanderer/fastify-sse-v2 (Server-Sent Events) Disclaimer ⚠️ Dialoqbase is a side project and is not ready for production. It is still in the early stages of development and may contain bugs and security issues. Use it at your own risk. Breaking changes may occur at any time without prior notice. Roadmap 🗺️ Data loaders | Loader |Status |
| ----------- | ------ |
| Website | ✅ |
| Plain text | ✅ |
| PDF | ✅ |
| Web crawler | ✅ |
| Microsoft Word documents | ✅ |
| Github repository | ✅ |
| mp3 | ✅ |
| mp4 | ✅ |
| Sitemap | ✅ |
| Youtube | ✅ |
| Notion | ❌ | Language models | Provider | Models |
| ----------- | ----------- |
| OpenAI | gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, gpt-4-0613, gpt-3.5-instruct, gpt-3.5-turbo-1106, gpt-4-1106-preview |
| Anthropic | Claude 1, Claude 2 |
| Google | chat-bison-001, Gemini |
| Fireworks | Llama 2, Mistral, Mixtral and more |
| Local AI | Local AI models |
| Ollama | All Ollama supported models | Embedding models | Provider | Models | Status |
| ----------- | ----------- | ------ |
| OpenAI | text-embedding-ada-002 | ✅ |
| TensorFlow | universal-sentence-encoder | ❌ |
| Hugging Face | distilbert-base-uncased | ✅ |
| Cohere | Embed | ✅ |
| Huggingface Transformers.js | all-MiniLM-L6-v2 | ✅ |
| Ollama | Ollama embedding | ✅ |
| Google | text-gecko-001, embedding-001 | ✅ |
| Jina | Jina embedding | ✅ | need more ? create an issue... Integrations | Type | Stage | Status |
| ----------- | ----------- | ------ |
| Web embed script | stable | ✅ |
| Telegram | beta | ✅ |
| Discord | beta | ✅ |
| Slack | development | ❌ |
| Whatsapp | experimental | ✅ | need more ? create an issue... This project is tested with BrowserStack. Contributors ✨ Interested in contributing? Check out the contributing guide . Sponsors 💖 Dialoqbase is nothing without the support of our wonderful sponsors. If you are interested in becoming a sponsor, please visit the sponsorship page . Special Sponsor Wonderful Sponsors And many more wonderful supporters from Ko-fi . License 📝 MIT;Create chatbots with ease;chatbot,chatgpt,gpt-3,langchain,anthropic,claude,cohere,huggingface,google-palm,docx | n4ze3m/dialoqbase
Yuliang-Liu/Monkey;Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models Please give us a star ⭐ for the latest update. [![arXiv](https://img.shields.io/badge/Arxiv-2311.06607-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2311.06607)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/Yuliang-Liu/Monkey/blob/main/LICENSE)
[![GitHub issues](https://img.shields.io/github/issues/Yuliang-Liu/Monkey?color=critical&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aopen+is%3Aissue)
[![GitHub closed issues](https://img.shields.io/github/issues-closed/Yuliang-Liu/Monkey?color=success&label=Issues)](https://github.com/Yuliang-Liu/Monkey/issues?q=is%3Aissue+is%3Aclosed) 💡 Monkey series projects:✨. >[CVPR'24] [**Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models**](https://arxiv.org/abs/2311.06607) > Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai [![Paper](https://img.shields.io/badge/Paper-CVPR'24_Highlight-red)](README.md)
[![Source_code](https://img.shields.io/badge/Code-Available-white)](README.md)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7681/)
[![Detailed Caption](https://img.shields.io/badge/Detailed_Caption-yellow)](http://huggingface.co/datasets/echo840/Detailed_Caption)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](http://huggingface.co/echo840/Monkey)
[![Model Weight in Wisemodel](https://img.shields.io/badge/Model_Weight_in_Wisemodel-gray)](https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/)
[![Demo in Wisemodel](https://img.shields.io/badge/Demo_in_Wisemodel-blue)](https://wisemodel.cn/space/gradio/huakeMonkey)
> [**TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document**](https://arxiv.org/abs/2403.04473) > Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai [![arXiv](https://img.shields.io/badge/Arxiv-2403.04473-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2403.04473)
[![Source_code](https://img.shields.io/badge/Code-Available-white)](monkey_model/text_monkey/README.md)
[![Data](https://img.shields.io/badge/Data-yellow)](https://huggingface.co/datasets/MelosY/TextMonkey_Data/tree/main)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](https://www.modelscope.cn/models/lvskiller/TextMonkey)
## News
* ```2024.4.13 ``` 🚀 Sourced code for [TextMonkey](monkey_model/text_monkey/README.md) is released.
* ```2024.4.5 ``` 🚀 Monkey is nominated as CVPR 2024 Highlight paper.
* ```2024.3.8 ``` 🚀 We release the paper [TextMonkey](https://arxiv.org/abs/2403.04473).
* ```2024.2.27 ``` 🚀 Monkey is accepted by CVPR 2024.
* ```2024.1.3 ``` 🚀 Release the basic data generation pipeline. [Data Generation](./data_generation)
* ```2023.12.16``` 🚀 Monkey can be trained using 8 NVIDIA 3090 GPUs. See subsection [train](#Train) for details.
* ```2023.11.06``` 🚀 We release the paper [Monkey](https://arxiv.org/abs/2311.06607).
## 🐳 Model Zoo
Monkey-Chat
| Model|Language Model|Transformers(HF) |MMBench-Test|CCBench|MME|SeedBench_IMG|MathVista-MiniTest|HallusionBench-Avg|AI2D Test|OCRBench|
|---------------|---------|-----------------------------------------|---|---|---|---|---|---|---|---|
|Monkey-Chat|Qwen-7B|[🤗echo840/Monkey-Chat](https://huggingface.co/echo840/Monkey-Chat)|72.4|48|1887.4|68.9|34.8|39.3|68.5|534|
## Environment
```python
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```
You can download the corresponding version of flash_attention from https://github.com/Dao-AILab/flash-attention/releases/ and use the following code to install:
```python
pip install flash_attn-2.3.5+cu117torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl --no-build-isolation
```
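If the wheel does not match your Python, CUDA, and PyTorch build, the import will fail at runtime, so a quick sanity check after installation can save time (this check is a suggestion, not a step from the original instructions; the printed version depends on the wheel you installed):
```
python -c "import flash_attn; print(flash_attn.__version__)"
```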
## Train
We also offer Monkey's model definition and training code, which you can explore above. You can run the training code by executing `finetune_ds_debug.sh` for Monkey and `finetune_textmonkey.sh` for TextMonkey.
The json file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).
## Inference
Run the inference code for Monkey and Monkey-Chat:
```
python ./inference.py --model_path MODEL_PATH --image_path IMAGE_PATH --question "YOUR_QUESTION"
```
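As a concrete illustration, an invocation might look like the following (the image path and question are placeholders to replace with your own, and the model path assumes you have downloaded the Monkey-Chat weights into a local folder):
```
python ./inference.py --model_path ./Monkey-Chat --image_path ./images/demo.jpg --question "Describe the image in detail."
```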
## Demo
The demo is fast and easy to use: simply upload an image from your desktop or phone, or capture one directly.
[Demo_chat](http://vlrlab-monkey.xyz:7681) is also launched as an upgraded version of the original demo to deliver an enhanced interactive experience.
We also provide the source code and the model weight for the original demo, allowing you to customize certain parameters for a more unique experience. The specific operations are as follows:
1. Make sure you have configured the [environment](#environment).
2. You can choose to use the demo offline or online:
- **Offline:**
- Download the [Model Weight](http://huggingface.co/echo840/Monkey).
- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path.
- Run the demo using the following command:
```
python demo.py
```
- **Online:**
- Run the demo and download model weights online with the following command:
```
python demo.py -c echo840/Monkey
```
For TextMonkey you can download the model weight from [Model Weight](https://www.modelscope.cn/models/lvskiller/TextMonkey) and run the demo code:
``` python
python demo_textmonkey.py -c model_path
```
As of 14/11/2023, we observed that for some random pictures Monkey can achieve more accurate results than GPT4V. As of 31/1/2024, Monkey-Chat achieved the fifth rank in the Multimodal Model category on [OpenCompass](https://opencompass.org.cn/home). ## Dataset
You can download the training and testing data used by monkey from [Monkey_Data](https://huggingface.co/datasets/echo840/Monkey_Data).
The json file used for Monkey training can be downloaded at [Link](https://drive.google.com/file/d/18z_uQTe8Jq61V5rgHtxOt85uKBodbvw1/view?usp=sharing).
The data from our multi-level description generation method is now open-sourced and available for download at [Link](https://huggingface.co/datasets/echo840/Detailed_Caption). We already upload the images used in multi-level description. Examples: You can download train images of Monkey from [Train](https://pan.baidu.com/s/1svSjXTxWpI-3boALgSeLlw). Extraction code: 4hdh
You can download test images and jsonls of Monkey from [Test](https://pan.baidu.com/s/1ABrQKeE9QBeKvtGzXfM8Eg). Extraction code: 5h71
The images are from CC3M, COCO Caption, TextCaps, VQAV2, OKVQA, GQA, ScienceQA, VizWiz, TextVQA, OCRVQA, ESTVQA, STVQA, AI2D and DUE_Benchmark. When using the data, it is necessary to comply with the protocols of the original dataset.
## Evaluate
We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the `evaluate_vqa.py` file, facilitating a quick verification of results. The specific operations are as follows:
1. Make sure you have configured the [environment](#environment).
2. Modify `sys.path.append("pathto/Monkey")` to the project path.
3. Prepare the datasets required for evaluation.
4. Run the evaluation code.
Take ESTVQA as an example:
- Prepare data according to the following directory structure:
```
├── data
| ├── estvqa
| ├── test_image
| ├── {image_path0}
| ├── {image_path1}
| ·
| ·
| ├── estvqa.jsonl
```
- Example of the format of each line of the annotated `.jsonl` file:
```
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
- Modify the dictionary `ds_collections`:
```
ds_collections = {
'estvqa_test': {
'test': 'data/estvqa/estvqa.jsonl',
'metric': 'anls',
'max_new_tokens': 100,
},
...
}
```
- Run the following command:
```
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:
```BibTeX
@inproceedings{li2023monkey,
title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
booktitle={proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
year={2024}
}
@article{liu2024textmonkey,
title={TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document},
author={Liu, Yuliang and Yang, Biao and Liu, Qiang and Li, Zhang and Ma, Zhiyin and Zhang, Shuo and Bai, Xiang},
journal={arXiv preprint arXiv:2403.04473},
year={2024}
}
```
## Acknowledgement
[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git), [LLAMA](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [OpenCompass](https://github.com/open-compass/opencompass), [InternLM](https://github.com/InternLM/InternLM).
## Copyright
We welcome suggestions to help us improve the Monkey. For any query, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please also feel free to share with us through email or open an issue. Thanks!;【CVPR 2024 Highlight】Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models;[] | Yuliang-Liu/Monkey |
projectdiscovery/cvemap;CVEMap Features • Installation • Usage • Example • Join Discord Navigate the Common Vulnerabilities and Exposures (CVE) jungle with ease using CVEMAP, a command-line interface (CLI) tool designed to provide a structured and easily navigable interface to various vulnerability databases. Features CVE Dataset Search & Query CVE to EPSS Mapping CVE to KEV Mapping CVE to CPE Mapping CVE to GitHub POCs Mapping CVE to Nuclei Template Mapping CVE to HackerOne report Mapping Customizable Filters on CVE data STDIN Input / JSONL Output Installation cvemap requires Go 1.21 to install successfully. To install, just run the below command or download pre-compiled binary from release page . console
go install github.com/projectdiscovery/cvemap/cmd/cvemap@latest Usage console
cvemap -h This will display help for the tool. Here are all the switches it supports. ```console
Usage:
cvemap [flags] Flags:
CONFIG:
-auth configure projectdiscovery cloud (pdcp) api key (default true) OPTIONS:
-id string[] cve to list for given id
-cwe, -cwe-id string[] cve to list for given cwe id
-v, -vendor string[] cve to list for given vendor
-p, -product string[] cve to list for given product
-eproduct string[] cves to exclude based on products
-s, -severity string[] cve to list for given severity
-cs, -cvss-score string[] cve to list for given cvss score
-c, -cpe string cve to list for given cpe
-es, -epss-score string cve to list for given epss score
-ep, -epss-percentile string[] cve to list for given epss percentile
-age string cve to list published by given age in days
-a, -assignee string[] cve to list for given publisher assignee
-vs, -vstatus value cve to list for given vulnerability status in cli output. supported: new, confirmed, unconfirmed, modified, rejected, unknown UPDATE:
-up, -update update cvemap to latest version
-duc, -disable-update-check disable automatic cvemap update check FILTER:
-q, -search string search in cve data
-k, -kev display cves marked as exploitable vulnerabilities by cisa (default true)
-t, -template display cves that has public nuclei templates (default true)
-poc display cves that has public published poc (default true)
-h1, -hackerone display cves reported on hackerone (default true)
-re, -remote display remotely exploitable cves (AV:N & PR:N | PR:L) (default true) OUTPUT:
-f, -field value fields to display in cli output. supported: product, vendor, assignee, age, poc, cwe, epss, vstatus, kev, template
-fe, -exclude value fields to exclude from cli output. supported: product, vendor, assignee, age, poc, cwe, epss, vstatus, kev, template
-lsi, -list-id list only the cve ids in the output
-l, -limit int limit the number of results to display (default 50)
-offset int offset the results to display
-j, -json return output in json format
-epk, -enable-page-keys enable page keys to navigate results DEBUG:
-version Version
-silent Silent
-verbose Verbose
-debug Debug
-hc, -health-check run diagnostic check up
``` Configuring CVEMap CLI CVEMap CLI is built on top of the CVEMap API that requires API Token from ProjectDiscovery Cloud Platform that can be configured using environment variable named PDCP_API_KEY or using interactive -auth option as shown below. Using environment variable console
export PDCP_API_KEY=************* Using auth option ```console
cvemap -auth

[cvemap ASCII art banner]
projectdiscovery.io

[INF] Get your free api key by signing up at https://cloud.projectdiscovery.io
[ ] Enter PDCP API Key (exit to abort): * * [INF] Successfully logged in as (@user)
``` Running CVEMap For details about running cvemap, see https://docs.projectdiscovery.io/tools/cvemap/running. Note: The CVE dataset is updated every 6 hours. References National Vulnerability Database (NVD) : Comprehensive CVE vulnerability data. Known Exploited Vulnerabilities Catalog (KEV) : Exploited vulnerabilities catalog. Exploit Prediction Scoring System (EPSS) : Exploit prediction scores. HackerOne : CVE discoveries disclosure. Nuclei Templates : Vulnerability validation templates. Trickest CVE / PoC-in-GitHub GitHub Repository: Vulnerability PoCs references. **cvemap** is made with ❤️ by the [projectdiscovery](https://projectdiscovery.io) team and distributed under [MIT License](LICENSE).;Navigate the CVE jungle with ease.;[] | projectdiscovery/cvemap
zk-Call/zkp-hmac-communication-python;zk-Call & Labs "Zero-Knowledge" Proof Implementation with HMAC Communication in Python Built by zk-Call :) Table of Contents Credits Purpose How it Works API Example Usage Credits This repository hosts a refined implementation of Schnorr's Protocol , innovatively incorporating a state seed for enhanced security measures. While the underlying proofs may appear intricate, I aim to elucidate their functionality to the best of my ability. However, for a deeper understanding, I encourage referencing the seminal research papers underpinning this implementation, as they offer comprehensive insights. For further exploration: Elliptic Curve Based "Zero-Knowledge" Proofs and Their Applicability on Resource Constrained Devices by Ioannis Chatzigiannakis, Apostolos Pyrgelis, Paul G. Spirakis, and Yannis C. Stamatiou Additionally, this repository delves into the concepts of "Zero-Knowledge" Proofs (ZKPs) and Hash-based Message Authentication Codes (HMACs) . ZKPs are cryptographic protocols that allow one party (the prover) to prove to another party (the verifier) that a given statement is true, without revealing any additional information beyond the validity of the statement itself. This property is particularly valuable for preserving privacy while establishing trust. On the other hand, HMACs are a type of cryptographic hash function used for message authentication. They involve a cryptographic hash function (such as SHA-256) and a secret cryptographic key. HMACs provide a way to verify both the data integrity and the authenticity of a message, ensuring that it has not been altered or tampered with during transmission and that it indeed originates from the purported sender. Purpose In today's rapidly evolving IT and application development landscape, "Zero-Knowledge" Proofs (ZKPs) emerge as a pivotal paradigm for authentication security. Their capacity to affirm the validity of a claim, such as proving possession of a secret password — without revealing any sensitive information about the claim itself, such as passwords or hashes, revolutionizes the assurance of secure AAA operations ( authentication , authorization , and accounting ). zk-Call & Labs represents an implementation of a Non-Interactive "Zero-Knowledge" Proof (NIZKP) protocol tailored specifically for validating text-based secrets. This framework proves invaluable for safeguarding passwords and other authentication mechanisms, ensuring robust security measures without compromising privacy. Additionally, the integration of HMAC (Hash-Based Message Authentication Code) further fortifies the authentication process, enhancing data integrity and thwarting potential security breaches. How It Works The authentication protocol employed in this system operates based on two fundamental concepts: "Zero-Knowledge" Proofs (ZKPs) and Hash-Based Message Authentication Code (HMAC) . Let's delve into each of these components and understand how they synergize to ensure secure authentication in messaging applications. "Zero-Knowledge" Proofs (ZKPs) "Zero-Knowledge" Proofs (ZKPs): ZKPs form the bedrock of privacy-preserving authentication mechanisms. These proofs allow one party (the prover) to demonstrate the validity of a claim to another party (the verifier) without revealing any additional information beyond the claim's validity. In essence, ZKPs enable authentication without the need for the prover to disclose sensitive data, such as passwords or cryptographic keys. 
Application in Authentication: In the context of messaging applications, ZKPs play a pivotal role in verifying a user's identity without the need to transmit explicit credentials over the network. Instead, users can generate cryptographic proofs attesting to their identity or possession of certain credentials without exposing those credentials themselves. This ensures that sensitive information remains confidential during the authentication process, bolstering security and privacy. Hash-Based Message Authentication Code (HMAC) Hash-Based Message Authentication Code (HMAC): HMAC provides a robust mechanism for verifying the integrity and authenticity of messages exchanged between parties. It involves the use of a cryptographic hash function in conjunction with a secret key to generate a unique code (the HMAC) for each message. This code serves as a digital signature, allowing the recipient to verify that the message has not been tampered with or altered during transmission. Application in Authentication: In messaging applications, HMAC can be employed to authenticate message senders and ensure the integrity of communication channels. By appending an HMAC to each message using a shared secret key, both the sender and recipient can validate the message's authenticity upon receipt. Any unauthorized modifications to the message would result in a mismatch between the computed HMAC and the received HMAC , thereby alerting the recipient to potential tampering. Synergistic Operation When combined, "Zero-Knowledge" Proofs and HMAC create a formidable framework for secure authentication in messaging applications. ZKPs facilitate identity verification without divulging sensitive information, while HMAC ensures the integrity and authenticity of messages exchanged between parties. Together, these mechanisms uphold the confidentiality, integrity, and authenticity of communication channels, safeguarding users' privacy and security in the digital realm. API The "Zero-Knowledge" Python API is meant to be simple and intuitive: Core Components The Core Components are key for establishing a secure and efficient framework for cryptographic protocols; streamlining the creation and validation of "Zero-Knowledge" Proofs (ZKPs) . They enhance anonymous, data-safe proof validations. ZeroKnowledge.models.ZeroKnowledgeParams The parameters used to initialize the "Zero-Knowledge" crypto system. class ZeroKnowledgeParams(NamedTuple):
"""
Parameters used to construct a Zero-Knowledge Proof state, utilizing an elliptic curve and a random salt
"""
algorithm: str # Hashing algorithm name
curve: str # Standard Elliptic Curve name to use
s: int # Random salt for the state ZeroKnowledge.models.ZeroKnowledgeSignature A cryptographic "Zero-Knowledge" signature that can be used to verify future messages. class ZeroKnowledgeSignature(NamedTuple):
"""
Cryptographic public signature designed to verify future messages
"""
params: ZeroKnowledgeParams # Reference ZeroKnowledge Parameters
signature: int # The public key derived from your original secret ZeroKnowledge.models.ZeroKnowledgeProof A cryptographic proof that can be verified against a signature. class ZeroKnowledgeProof(NamedTuple):
"""
Non-deterministic cryptographic Zero-Knowledge Proof designed to confirm that the
private key creating the proof matches the key used to generate the signature
"""
params: ZeroKnowledgeParams # Reference ZeroKnowledge Parameters
c: int # The hash of the signed data and random point, R
m: int # The offset from the secret `r` (`R=r*g`) from c * Hash(secret) ZeroKnowledge.models.ZeroKnowledgeData Wrapper that contains a proof and the necessary data to validate the proof against a signature. class ZeroKnowledgeData(NamedTuple):
"""
Wrapper designed to hold data along with its corresponding signed proof
"""
data: Union[str, bytes, int]
proof: ZeroKnowledgeProof ZeroKnowledge The ZeroKnowledge class is the central component of ZeroKnowledge and its state (defined by ZeroKnowledgeParams ) should be inherently known to both the Client (Prover) and Server (Verifier) . Instance Methods Method Params Role Purpose create_signature secret: Union[str, bytes] Prover Create a cryptographic signature derived from the value secret to be generated during initial registration and stored for subsequent challenge proofs. sign secret: Union[str, bytes] data: Union[str, bytes, int] Prover Create a ZeroKnowledgeData object using the secret and any additional data. verify challenge: Union[ZeroKnowledgeData, ZeroKnowledgeProof] signature: ZeroKnowledgeSignature data: Optional[Union[str, bytes, int]] Verifier Verify the user-provided challenge against the stored signature and randomly generated token to verify the validity of the challenge . Example Usage TODO: Include Example Usage Example 1 """
Extremely simple example of hmac
"""
from queue import Queue
from threading import Thread
from src.HMAC.core import HMACClient
from src.SeedGeneration.core import SeedGenerator
DEBUG = True
def print_msg(who: str, message: str) -> None:
"""
Function to print debug messages. Prints the message only if the DEBUG flag is set to True.
Args:
who (str): Identifier of the message sender.
message (str): Message to print.
"""
if DEBUG:
print(f'[{who}] {message}\n')
def client(client_socket: Queue, server_socket: Queue):
"""
Function representing the client logic.
Args:
client_socket (Queue): Queue for data exchange between the client and server.
server_socket (Queue): Queue for data exchange between the server and client.
"""
# Generating the main seed
main_seed = SeedGenerator(phrase="job").generate()
# Creating an instance of HMACClient for encrypting messages
obj = HMACClient(algorithm="sha256", secret=main_seed, symbol_count=1)
# Sending the main seed to the server
server_socket.put(main_seed)
# Checking if the server has successfully received the seed
if client_socket.get() == obj.encrypt_message(''):
# If successful, send a message to the server
message = 'hello'
server_socket.put(obj.encrypt_message_by_chunks(message))
print_msg('client', f'client sent message {message}')
# Checking if the server has successfully decrypted the message
if client_socket.get() == obj.encrypt_message(message):
print_msg('client', 'server has decrypt message')
def server(server_socket: Queue, client_socket: Queue):
"""
Function representing the server logic.
Args:
server_socket (Queue): Queue for data exchange between the server and client.
client_socket (Queue): Queue for data exchange between the client and server.
"""
# Receiving the main seed from the client
main_seed = server_socket.get()
# Creating an instance of HMACClient for encrypting messages
obj = HMACClient(algorithm="sha256", secret=main_seed, symbol_count=1)
# Sending an empty message to the client as acknowledgment
client_socket.put(obj.encrypt_message(''))
# Receiving the encrypted message from the client
msg = server_socket.get()
print_msg('server', f'msg encrypted {msg}')
# Decrypting the message
msg_raw = obj.decrypt_message_by_chunks(msg)
print_msg('server', f'msg raw {msg_raw}')
# Sending the encrypted message back to the client
client_socket.put(obj.encrypt_message(msg_raw))
def main():
"""
Main function to run the client and server threads.
"""
client_socket, server_socket = Queue(), Queue()
threads = []
threads.append(Thread(target=client, args=(client_socket, server_socket)))
threads.append(Thread(target=server, args=(server_socket, client_socket)))
# Starting the threads
for thread in threads:
thread.start()
# Joining the threads to wait for their completion
for thread in threads:
thread.join()
if __name__ == "__main__":
main() Example 2 """
Extremely simple example of ZeroKnowledge
"""
from src.ZeroKnowledge.core import ZeroKnowledge
from src.ZeroKnowledge.models import ZeroKnowledgeSignature, ZeroKnowledgeData
from queue import Queue
from threading import Thread
DEBUG = True
def print_msg(who: str, message: str) -> None:
"""
Function to print debug messages.
Args:
who (str): Identifier of the message sender.
message (str): Message to print.
"""
if DEBUG:
print(f'[{who}] {message}\n')
def client(client_socket: Queue, server_socket: Queue):
"""
Function representing the client logic.
Args:
client_socket (Queue): Queue for data exchange between the client and server.
server_socket (Queue): Queue for data exchange between the server and client.
"""
# Creating a ZeroKnowledge object for the client with specified curve and hash algorithm
client_object = ZeroKnowledge.new(curve_name="secp256k1", hash_alg="sha3_256")
idenity = 'John'
# Creating a signature for the client identity
signature = client_object.create_signature(idenity)
# Sending the signature to the server
server_socket.put(signature.to_json())
print_msg('client', f'its signature {signature.to_json()}')
# Receiving token from the server
token = client_socket.get()
print_msg('client', f'its token {token}')
# Generating proof using client identity and token
proof = client_object.sign(idenity, token).to_json()
print_msg('client', f'proof {proof}')
# Sending proof to the server
server_socket.put(proof)
# Receiving result from the server
result = client_socket.get()
print_msg('client', f"{result}")
def server(server_socket: Queue, client_socket: Queue):
"""
Function representing the server logic.
Args:
server_socket (Queue): Queue for data exchange between the server and client.
client_socket (Queue): Queue for data exchange between the client and server.
"""
# Setting the server password
server_password = "SecretServerPassword"
# Creating a ZeroKnowledge object for the server with specified curve and hash algorithm
server_zk = ZeroKnowledge.new(curve_name="secp384r1", hash_alg="sha3_512")
# Creating a signature for the server password
server_signature: ZeroKnowledgeSignature = server_zk.create_signature(server_password)
# Receiving client signature from the client
sig = server_socket.get()
client_signature = ZeroKnowledgeSignature.from_json(sig)
print_msg('server', f'its client signature {client_signature.to_json()}')
# Creating a ZeroKnowledge object for the client using client signature parameters
client_zk = ZeroKnowledge(client_signature.params)
print_msg('server', f'its client_zk {client_zk}')
# Generating a token signed by the server for the client
token = server_zk.sign(server_password, client_zk.token())
print_msg('server', f'its token {token}')
# Sending the token to the client
client_socket.put(token.to_json())
# Receiving proof from the client
proof = ZeroKnowledgeData.from_json(server_socket.get())
print_msg('server', f'its proof {proof}')
# Verifying the received proof
token = ZeroKnowledgeData.from_json(proof.data)
print_msg('server', f'its token {token}')
server_verif = server_zk.verify(token, server_signature)
print_msg('server', f'its server_verif {server_verif}')
# If server verification fails, notify the client
if not server_verif:
client_socket.put(False)
else:
# Otherwise, verify the proof using client signature
client_verif = client_zk.verify(proof, client_signature, data=token)
print_msg('server', f'its client_verif {client_verif}')
client_socket.put(client_verif)
def main():
"""
Main function to run the client and server threads.
"""
client_socket, server_socket = Queue(), Queue()
threads = []
threads.append(Thread(target=client, args=(client_socket, server_socket)))
threads.append(Thread(target=server, args=(server_socket, client_socket)))
# Starting the threads
for thread in threads:
thread.start()
# Waiting for threads to finish execution
for thread in threads:
thread.join()
if __name__ == "__main__":
main() Example 3 """
Extremely simple example of ZeroKnowledge and hmac together
"""
from src.ZeroKnowledge.core import ZeroKnowledge
from src.ZeroKnowledge.models import ZeroKnowledgeSignature, ZeroKnowledgeData
from queue import Queue
from threading import Thread
from src.HMAC.core import HMACClient
from src.SeedGeneration.core import SeedGenerator
DEBUG = True
def print_msg(who: str, message: str) -> None:
if DEBUG:
print(f'[{who}] {message}\n')
def client(client_socket: Queue, server_socket: Queue):
# Initializing a ZeroKnowledge object for the client with specified curve and hash algorithm
client_object = ZeroKnowledge.new(curve_name="secp256k1", hash_alg="sha3_256")
# Generating a main seed for encryption
main_seed = SeedGenerator(phrase="job").generate()
# Defining the client's identity
idenity = 'John'
# Creating a signature for the client's identity
signature = client_object.create_signature(idenity)
# Sending the signature to the server via server_socket
server_socket.put(signature.to_json())
print_msg('client', f'its signature {signature.to_json()}')
# Receiving a token from the server via client_socket
token = client_socket.get()
print_msg('client', f'its token {token}')
# Generating a proof of identity using the client's identity and token
proof = client_object.sign(idenity, token).to_json()
print_msg('client', f'proof {proof}')
# Sending the proof to the server via server_socket
server_socket.put(proof)
# Receiving a result from the server via client_socket
result = client_socket.get()
print_msg('client', f"{result}")
# If the result is True, proceed with further communication steps
if result:
# Sending the main seed to the server
server_socket.put(main_seed)
# Initializing an HMACClient object for message encryption/decryption
obj = HMACClient(algorithm="sha256", secret=main_seed, symbol_count=1)
# Verifying an empty message received from the server
if client_socket.get() == obj.encrypt_message(''):
# Sending a message to the server
message = 'hello'
server_socket.put(obj.encrypt_message_by_chunks(message))
print_msg('client', f'client sent message {message}')
# Verifying if the server has decrypted the message correctly
if client_socket.get() == obj.encrypt_message(message):
print_msg('client', 'server has decrypt message')
def server(server_socket: Queue, client_socket: Queue):
# Defining the server's password
server_password = "SecretServerPassword"
# Initializing a ZeroKnowledge object for the server with specified curve and hash algorithm
server_zk = ZeroKnowledge.new(curve_name="secp384r1", hash_alg="sha3_512")
# Creating a signature for the server's password
server_signature: ZeroKnowledgeSignature = server_zk.create_signature(server_password)
# Receiving the client's signature from the client via server_socket
sig = server_socket.get()
client_signature = ZeroKnowledgeSignature.from_json(sig)
print_msg('server', f'its client signature {client_signature.to_json()}')
# Initializing a ZeroKnowledge object for the client using client signature parameters
client_zk = ZeroKnowledge(client_signature.params)
print_msg('server', f'its client_zk {client_zk}')
# Generating a token signed by the server for the client
token = server_zk.sign(server_password, client_zk.token())
print_msg('server', f'its token {token}')
# Sending the token to the client via client_socket
client_socket.put(token.to_json())
# Receiving a proof from the client via server_socket
proof = ZeroKnowledgeData.from_json(server_socket.get())
print_msg('server', f'its proof {proof}')
# Verifying the received proof
token = ZeroKnowledgeData.from_json(proof.data)
print_msg('server', f'its token {token}')
# Verifying the token and the server signature
server_verif = server_zk.verify(token, server_signature)
print_msg('server', f'its server_verif {server_verif}')
# If server verification fails, notify the client
if not server_verif:
client_socket.put(False)
else:
# Otherwise, verify the proof using the client signature
client_verif = client_zk.verify(proof, client_signature, data=token)
print_msg('server', f'its client_verif {client_verif}')
client_socket.put(client_verif)
# Receiving the main seed from the client via server_socket
main_seed = server_socket.get()
# Initializing an HMACClient object for message encryption/decryption using the received main seed
obj = HMACClient(algorithm="sha256", secret=main_seed, symbol_count=1)
# Sending an empty message to the client
client_socket.put(obj.encrypt_message(''))
# Receiving an encrypted message from the client via server_socket
msg = server_socket.get()
print_msg('server', f'msg encrypted {msg}')
# Decrypting the message
msg_raw = obj.decrypt_message_by_chunks(msg)
print_msg('server', f'msg raw {msg_raw}')
# Sending the decrypted message back to the client
client_socket.put(obj.encrypt_message(msg_raw))
def main():
"""
Main function to run the client and server threads.
"""
client_socket, server_socket = Queue(), Queue()
threads = []
threads.append(Thread(target=client, args=(client_socket, server_socket)))
threads.append(Thread(target=server, args=(server_socket, client_socket)))
# Starting the threads
for thread in threads:
thread.start()
# Joining the threads to wait for their completion
for thread in threads:
thread.join()
if __name__ == "__main__":
main();"Zero-Knowledge" Proof Implementation with HMAC Communication in Python;hmac,zero-knowledge,python,zkproof,zk-call | zk-Call/zkp-hmac-communication-python |
omerbt/TokenFlow;TokenFlow: Consistent Diffusion Features for Consistent Video Editing (ICLR 2024) [ Project Page ] https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb TokenFlow is a framework that enables consistent video editing, using a pre-trained text-to-image diffusion model, without any further training or finetuning. The generative AI revolution has been recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos. For more see the project webpage . Sample results Environment conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt Preprocess Preprocess your video using the following command: python preprocess.py --data_path <data/myvideo.mp4> \
--inversion_prompt <'' or a string describing the video content> Additional arguments:
```
--save_dir --H --W --sd_version --steps --save_steps --n_frames ```
more information on the arguments can be found here. Note: The video reconstruction will be saved as inverted.mp4. A good reconstruction is required for successful editing with our method. Editing TokenFlow is designed for structure-preserving video edits. Our method is built on top of an image editing technique (e.g., Plug-and-Play, ControlNet, etc.) - therefore, it is important to ensure that the edit works with the chosen base technique. The LDM decoder may introduce some jitter, depending on the original video. To edit your video, first create a yaml config as in configs/config_pnp.yaml .
Then run python run_tokenflow_pnp.py Similarly, if you want to use ControlNet or SDEdit, create a yaml config as in config/config_controlnet.yaml or configs/config_SDEdit.yaml and run python run_tokenflow_controlnet.py or python run_tokenflow_SDEdit.py respectively. Citation @article{tokenflow2023,
title = {TokenFlow: Consistent Diffusion Features for Consistent Video Editing},
author = {Geyer, Michal and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
journal={arXiv preprint arxiv:2307.10373},
year={2023}
};Official Pytorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow" (ICLR 2024);stable-diffusion,text-to-image,text-to-video,video-editing,tokenflow,iclr2024 | omerbt/TokenFlow |
pondorasti/emojis;emojis Turn your ideas into emojis in seconds. Generate your favorite Slack emojis with just one click. Tech Stack · Deploy Your Own · Authors · Credits · Legal Tech Stack Replicate - AI provider Bun - Package Manager PlanetScale - MySQL database Prisma - ORM Next.js - App Router, Server Actions Vercel Blob - Storage Bucket Vercel KV - Redis Database Vercel - Hosting Deploy Your Own You can deploy this template to Vercel with the button below: Note that you'll need to: Set up Replicate Set up PlanetScale Set up Vercel Blob Set up Vercel KV Authors Alexandru Ţurcanu ( @pondorasti ) Dylan Player ( @dylanplayer ) Credits sdxl-emoji - fine tuned SDXL based on Apple's emojis by @fofrAI prompt-classifier - fine tuned llama-13b for determining toxicity by @fofrAI rembg - remove images background model by @chenxwh Spirals - starter template and code examples by @steventey EmojiGen - original emoji generator by @charliebholtz Legal AGPL-3.0 License Terms & Conditions Privacy Policy;Turn your ideas into emojis in seconds. Generate your favorite Slack emojis with just one click.;[] | pondorasti/emojis |
Audio-AGI/AudioSep;Separate Anything You Describe This repository contains the official implementation of "Separate Anything You Describe" . We introduce AudioSep, a foundation model for open-domain sound separation with natural language queries. AudioSep demonstrates strong separation performance and impressive zero-shot generalization ability on numerous tasks, such as audio event separation, musical instrument separation, and speech enhancement. Check out the separated audio examples on the Demo Page ! Setup Clone the repository and setup the conda environment: shell
git clone https://github.com/Audio-AGI/AudioSep.git && \
cd AudioSep && \
conda env create -f environment.yml && \
conda activate AudioSep Download model weights at checkpoint/ . If you're using this checkpoint for the DCASE 2024 Task 9 challenge participation, please note that this checkpoint was trained using audio at 32k Hz, with a window size of 2048 points and a hop size of 320 points in the STFT operation, which is different from the challenge baseline system provided (16k Hz, window size 1024, hop size 160). Inference ```python
from pipeline import build_audiosep, inference
import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = build_audiosep(
config_yaml='config/audiosep_base.yaml',
checkpoint_path='checkpoint/audiosep_base_4M_steps.ckpt',
device=device) audio_file = 'path_to_audio_file'
text = 'textual_description'
output_file='separated_audio.wav' # AudioSep processes the audio at 32 kHz sampling rate inference(model, audio_file, text, output_file, device)
``` To load directly from Hugging Face, you can do the following: ```python
from models.audiosep import AudioSep
from utils import get_ss_model
import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ss_model = get_ss_model('config/audiosep_base.yaml') model = AudioSep.from_pretrained("nielsr/audiosep-demo", ss_model=ss_model) audio_file = 'path_to_audio_file'
text = 'textual_description'
output_file='separated_audio.wav' # AudioSep processes the audio at 32 kHz sampling rate inference(model, audio_file, text, output_file, device)
``` Use chunk-based inference to save memory: python
inference(model, audio_file, text, output_file, device, use_chunk=True) Training To utilize your audio-text paired dataset: Format your dataset to match our JSON structure. Refer to the provided template at datafiles/template.json . Update the config/audiosep_base.yaml file by listing your formatted JSON data files under datafiles . For example: yaml
data:
datafiles:
- 'datafiles/your_datafile_1.json'
- 'datafiles/your_datafile_2.json'
... Train AudioSep from scratch: python
python train.py --workspace workspace/AudioSep --config_yaml config/audiosep_base.yaml --resume_checkpoint_path checkpoint/ '' Finetune AudioSep from pretrained checkpoint: python
python train.py --workspace workspace/AudioSep --config_yaml config/audiosep_base.yaml --resume_checkpoint_path path_to_checkpoint Benchmark Evaluation Download the evaluation data under the evaluation/data folder. The data should be organized as follows: yaml
evaluation:
data:
- audioset/
- audiocaps/
- vggsound/
- music/
- clotho/
- esc50/ Run benchmark inference script, the results will be saved at eval_logs/ ```python
python benchmark.py --checkpoint_path audiosep_base_4M_steps.ckpt """
Evaluation Results: VGGSound Avg SDRi: 9.144, SISDR: 9.043
MUSIC Avg SDRi: 10.508, SISDR: 9.425
ESC-50 Avg SDRi: 10.040, SISDR: 8.810
AudioSet Avg SDRi: 7.739, SISDR: 6.903
AudioCaps Avg SDRi: 8.220, SISDR: 7.189
Clotho Avg SDRi: 6.850, SISDR: 5.242
"""
``` Cite this work If you found this tool useful, please consider citing bibtex
@article{liu2023separate,
title={Separate Anything You Describe},
author={Liu, Xubo and Kong, Qiuqiang and Zhao, Yan and Liu, Haohe and Yuan, Yi, and Liu, Yuzhuo, and Xia, Rui and Wang, Yuxuan, and Plumbley, Mark D and Wang, Wenwu},
journal={arXiv preprint arXiv:2308.05037},
year={2023}
} bibtex
@inproceedings{liu22w_interspeech,
title={Separate What You Describe: Language-Queried Audio Source Separation},
author={Liu, Xubo and Liu, Haohe and Kong, Qiuqiang and Mei, Xinhao and Zhao, Jinzheng and Huang, Qiushi, and Plumbley, Mark D and Wang, Wenwu},
year=2022,
booktitle={Proc. Interspeech},
pages={1801--1805},
} Contributors :;Official implementation of "Separate Anything You Describe";[] | Audio-AGI/AudioSep |
One-2-3-45/One-2-3-45;[ Paper ]
[ Project ]
[ Demo ]
[ BibTeX ] One-2-3-45 rethinks how to leverage 2D diffusion models for 3D AIGC and introduces a novel forward-only paradigm that avoids time-consuming optimization. https://github.com/One-2-3-45/One-2-3-45/assets/16759292/a81d6e32-8d29-43a5-b044-b5112b9f9664 https://github.com/One-2-3-45/One-2-3-45/assets/16759292/5ecd45ef-8fd3-4643-af4c-fac3050a0428 News [11/14/2023] Check out our new work One-2-3-45++ ! [10/25/2023] We released rendering scripts for evaluation and APIs for effortless inference. [09/21/2023] One-2-3-45 is accepted by NeurIPS 2023. See you in New Orleans! [09/11/2023] Training code released. [08/18/2023] Inference code released. [07/24/2023] Our demo reached the HuggingFace top 4 trending and was featured in 🤗 Spaces of the Week 🔥! Special thanks to HuggingFace 🤗 for sponsoring this demo!! [07/11/2023] Online interactive demo released! Explore it and create your own 3D models in just 45 seconds! [06/29/2023] Check out our paper . [ X ] Installation Hardware requirement: an NVIDIA GPU with memory >=18GB ( e.g. , RTX 3090 or A10). Tested on Ubuntu. We offer two ways to set up the environment: Traditional Installation Step 1: Install Debian packages. ```bash
sudo apt update && sudo apt install git-lfs libsparsehash-dev build-essential
``` Step 2: Create and activate a conda environment. ```bash
conda create -n One2345 python=3.10
conda activate One2345
``` Step 3: Clone the repository to the local machine. ```bash
# Make sure you have git-lfs installed.
git lfs install
git clone https://github.com/One-2-3-45/One-2-3-45
cd One-2-3-45
``` Step 4: Install project dependencies using pip. ```bash
# Ensure that the installed CUDA version matches the torch's CUDA version.
# Example: CUDA 11.8 installation
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run
export PATH="/usr/local/cuda-11.8/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH"
# Install PyTorch 2.0.1
pip install --no-cache-dir torch==2.0.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Install dependencies
pip install -r requirements.txt
# Install inplace_abn and torchsparse
export TORCH_CUDA_ARCH_LIST="7.0;7.2;8.0;8.6+PTX" # CUDA architectures. Modify according to your hardware.
export IABN_FORCE_CUDA=1
pip install inplace_abn
FORCE_CUDA=1 pip install --no-cache-dir git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0
``` Step 5: Download model checkpoints. ```bash
python download_ckpt.py
``` Installation by Docker Images Option 1: Pull and Play (environment and checkpoints). (~22.3G) ```bash
# Pull the Docker image that contains the full repository.
docker pull chaoxu98/one2345:demo_1.0
# An interactive demo will be launched automatically upon running the container.
# This will provide a public URL like XXXXXXX.gradio.live
docker run --name One-2-3-45_demo --gpus all -it chaoxu98/one2345:demo_1.0
``` Option 2: Environment Only. (~7.3G) ```bash
# Pull the Docker image that installed all project dependencies.
docker pull chaoxu98/one2345:1.0
# Start a Docker container named One2345.
docker run --name One-2-3-45 --gpus all -it chaoxu98/one2345:1.0
# Get a bash shell in the container.
docker exec -it One-2-3-45 /bin/bash
# Clone the repository to the local machine.
git clone https://github.com/One-2-3-45/One-2-3-45
cd One-2-3-45
# Download model checkpoints.
python download_ckpt.py
# Refer to getting started for inference.
``` Getting Started (Inference) First-time running will take a longer time to compile the models. Expected time cost per image: 40s on an NVIDIA A6000.
```bash 1. Script python run.py --img_path PATH_TO_INPUT_IMG --half_precision 2. Interactive demo (Gradio) with a friendly web interface A URL will be provided in the output (Local: 127.0.0.1:7860; Public: XXXXXXX.gradio.live) cd demo/
python app.py 3. Jupyter Notebook example.ipynb
``` APIs We provide handy Gradio APIs for our pipeline and its components, making it effortless to accurately preprocess in-the-wild or text-generated images and reconstruct 3D meshes from them. To begin, initialize the Gradio Client with the API URL. ```python
from gradio_client import Client
client = Client("https://one-2-3-45-one-2-3-45.hf.space/")
# example input image
input_img_path = "https://huggingface.co/spaces/One-2-3-45/One-2-3-45/resolve/main/demo_examples/01_wild_hydrant.png"
```
Single image to 3D mesh
```python
generated_mesh_filepath = client.predict(
input_img_path,
True, # image preprocessing
api_name="/generate_mesh"
)
```
Elevation estimation
If the input image's pose (elevation) is unknown, this off-the-shelf algorithm is all you need!
```python
elevation_angle_deg = client.predict(
input_img_path,
True, # image preprocessing
api_name="/estimate_elevation"
)
```
Image preprocessing: segment, rescale, and recenter
We adapt the Segment Anything model (SAM) for background removal.
```python
segmented_img_filepath = client.predict(
input_img_path,
api_name="/preprocess"
)
```
Training Your Own Model Data Preparation We use the Objaverse-LVIS dataset for training and render the selected shapes (with a CC-BY license) into 2D images with Blender. Download the training images. Download all One2345.zip.part-* files (5 files in total) from here and then cat them into a single .zip file using the following command: bash
cat One2345.zip.part-* > One2345.zip Unzip the training images zip file. Unzip the zip file into a folder specified by yourself ( YOUR_BASE_FOLDER ) with the following command: bash
unzip One2345.zip -d YOUR_BASE_FOLDER Download meta files. Download One2345_training_pose.json and lvis_split_cc_by.json from here and put them into the same folder as the training images ( YOUR_BASE_FOLDER ). Your file structure should look like this:
``` One2345 is your base folder used in the previous steps One2345
├── One2345_training_pose.json
├── lvis_split_cc_by.json
└── zero12345_narrow
├── 000-000
├── 000-001
├── 000-002
...
└── 000-159 ``` Training Specify the trainpath , valpath , and testpath in the config file ./reconstruction/confs/one2345_lod_train.conf to be YOUR_BASE_FOLDER used in data preparation steps and run the following command: bash
cd reconstruction
python exp_runner_generic_blender_train.py --mode train --conf confs/one2345_lod_train.conf Experiment logs and checkpoints will be saved in ./reconstruction/exp/ . Related Work [One-2-3-45++] [Zero123++] [Zero123] Citation If you find our code helpful, please cite our paper: @article{liu2023one2345,
title={One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization},
author={Liu, Minghua and Xu, Chao and Jin, Haian and Chen, Linghao and Varma T, Mukund and Xu, Zexiang and Su, Hao},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
};[NeurIPS 2023] Official code of "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization";3d,3d-generation,aigc,generalizable-nerf,mesh-generation,multi-view-reconstruction,nerf,3d-shape-generation | One-2-3-45/One-2-3-45 |
ChenHsing/Awesome-Video-Diffusion-Models;A Survey on Video Diffusion Models Zhen Xing ,
Qijun Feng,
Haoran Chen, Qi Dai, Han Hu, Hang Xu, Zuxuan Wu, Yu-Gang Jiang (Source: Make-A-Video , SimDA , PYoCo , SVD , Video LDM and Tune-A-Video ) [News] We are planning to update the survey soon to encompass the latest work. If you have any suggestions, please feel free to contact us. [News] The Chinese translation is available on Zhihu . Special thanks to Dai-Wenxun for this. Open-source Toolboxes and Foundation Models | Methods | Task | Github|
|:-----:|:-----:|:-----:|
| Open-Sora-Plan | T2V Generation | |
| Open-Sora | T2V Generation | |
| Morph Studio | T2V Generation | -|
| Genie | T2V Generation | -|
| Sora | T2V Generation & Editing | -|
| VideoPoet | T2V Generation & Editing | -|
| Stable Video Diffusion | T2V Generation | |
| NeverEnds | T2V Generation | - |
| Pika | T2V Generation | - |
| EMU-Video | T2V Generation | - |
| GEN-2 | T2V Generation & Editing | - |
| ModelScope | T2V Generation | |
| ZeroScope | T2V Generation | -|
| T2V Synthesis Colab | T2V Generation | |
| VideoCraft | T2V Generation & Editing | |
| Diffusers (T2V synthesis) | T2V Generation |-|
| AnimateDiff | Personalized T2V Generation | |
| Text2Video-Zero | T2V Generation | |
| HotShot-XL | T2V Generation | |
| Genmo | T2V Generation |-|
| Fliki | T2V Generation | -| Table of Contents Video Generation Data Caption-level Category-level T2V Generation Training-based Training-free Video Generation with other Conditions Pose-guided Instruct-guided Sound-guided Brain-guided Multi-Modal guided Unconditional Video Generation U-Net based Transformer-based Video Completion Video Enhancement and Restoration Video Prediction Video Editing Text guided Video Editing Training-based Editing One-shot Editing Training-free Modality-guided Video Editing Motion-guided Instruct-guided Sound-guided Multi-Modal Control Domain-specific editing Non-diffusion editing Video Understanding Contact Video Generation Data Caption-level | Title | arXiv | Github| WebSite | Pub. & Date
|:-----:|:-----:|:-----:|:-----:|:-----:|
| Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers | | | | CVPR, 2024 |
| CelebV-Text: A Large-Scale Facial Text-Video Dataset | | | - | CVPR, 2023
| InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation | | | - | May, 2023 |
| VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation | | - | - | May, 2023|
| Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions | |- |- |Nov, 2021 |
| Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval | | - | - |ICCV, 2021 |
| MSR-VTT: A Large Video Description Dataset for Bridging Video and Language | | -| -| CVPR, 2016| Category-level | Title | arXiv | Github| WebSite | Pub. & Date
|:-----:|:-----:|:-----:|:-----:|:-----:|
| UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild | | - | - | Dec., 2012
| First Order Motion Model for Image Animation | | -|- | May, 2023 |
| Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks | | -| -| CVPR, 2018 | Metric and Benchmark | Title | arXiv | Github| WebSite | Pub. & Date
|:-----:|:-----:|:-----:|:-----:|:-----:|
| Subjective-Aligned Dataset and Metric for Text-to-Video Quality Assessment | |-|- | Mar, 2024
| Towards A Better Metric for Text-to-Video Generation | |-| | Jan, 2024
| AIGCBench: Comprehensive Evaluation of Image-to-Video Content Generated by AI | | - | - | Jan, 2024 |
| VBench: Comprehensive Benchmark Suite for Video Generative Models | | | | Nov, 2023
| FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation | | - | - | NeurIPS, 2023 |
| CVPR 2023 Text Guided Video Editing Competition | | - | - | Oct., 2023 |
| EvalCrafter: Benchmarking and Evaluating Large Video Generation Models | | | | Oct., 2023 |
| Measuring the Quality of Text-to-Video Model Outputs: Metrics and Dataset | | - | - | Sep., 2023 | Text-to-Video Generation Training-based | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators | | | | Apr., 2024
| Mora: Enabling Generalist Video Generation via A Multi-Agent Framework | |-|- | Mar., 2024
| VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis | |-|- | Mar., 2024
| Genie: Generative Interactive Environments | |-| | Feb., 2024
| Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis | |-| | Feb., 2024
| Lumiere: A Space-Time Diffusion Model for Video Generation | |-| | Jan, 2024
| UNIVG: TOWARDS UNIFIED-MODAL VIDEO GENERATION | |-| | Jan, 2024
| VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models | | | | Jan, 2024
| 360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model | |-| | Jan, 2024
| MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation | |-| | Jan, 2024
| VideoDrafter: Content-Consistent Multi-Scene Video Generation with LLM | |-| | Jan, 2024
| A Recipe for Scaling up Text-to-Video Generation with Text-free Videos | | | | Dec, 2023
| InstructVideo: Instructing Video Diffusion Models with Human Feedback | | | | Dec, 2023
| VideoLCM: Video Latent Consistency Model | |-|- | Dec, 2023
| Photorealistic Video Generation with Diffusion Models | |-| | Dec, 2023
| Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation | | | | Dec, 2023
| Delving Deep into Diffusion Transformers for Image and Video Generation | |-| | Dec, 2023
| StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter | | | | Nov, 2023
| MicroCinema: A Divide-and-Conquer Approach for Text-to-Video Generation | |- | | Nov, 2023
| ART•V: Auto-Regressive Text-to-Video Generation with Diffusion Models | | | | Nov, 2023
| Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets | | | | Nov, 2023
| FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline | | | | Nov, 2023
| MoVideo: Motion-Aware Video Generation with Diffusion Models | |-| | Nov, 2023
| Make Pixels Dance: High-Dynamic Video Generation | |-| | Nov, 2023
| Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning | |-| | Nov, 2023
| Optimal Noise pursuit for Augmenting Text-to-Video Generation | |-|-| Nov, 2023
| VideoDreamer: Customized Multi-Subject Text-to-Video Generation with Disen-Mix Finetuning | |-| | Nov, 2023
| VideoCrafter1: Open Diffusion Models for High-Quality Video Generation | | | | Oct, 2023
| SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction | | | | Oct, 2023 |
| DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors | | | | Oct., 2023 |
| LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation | | | | Oct., 2023 |
| DrivingDiffusion: Layout-Guided multi-view driving scene video generation with latent diffusion model | | | | Oct, 2023 |
| MotionDirector: Motion Customization of Text-to-Video Diffusion Models | | | | Oct, 2023 |
| VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning | | | | Sep., 2023 |
| Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation | | | | Sep., 2023 |
| LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models | | | | Sep., 2023 |
| Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation | | | | Sep., 2023 |
| VideoGen: A Reference-Guided Latent Diffusion Approach for High Definition Text-to-Video Generation | | - | | Sep., 2023 |
| MobileVidFactory: Automatic Diffusion-Based Social Media Video Generation for Mobile Devices from Text | | - | - | Jul., 2023 |
| Text2Performer: Text-Driven Human Video Generation | | | | Apr., 2023 |
| AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning | | | | Jul., 2023 |
| Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models | | - | | Aug., 2023 |
| SimDA: Simple Diffusion Adapter for Efficient Video Generation | | | | CVPR, 2024 |
| Dual-Stream Diffusion Net for Text-to-Video Generation | | - | - | Aug., 2023 |
| ModelScope Text-to-Video Technical Report | | | | Aug., 2023 |
| InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation | | | - | Jul., 2023 |
| VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation | | - | - | May, 2023 |
| Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models | | - | | May, 2023 |
| Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models | | - | | CVPR 2023 |
| Latent-Shift: Latent Diffusion with Temporal Shift | | - | | Apr., 2023 |
| Probabilistic Adaptation of Text-to-Video Models | | - | | Jun., 2023 |
| NUWA-XL: Diffusion over Diffusion for eXtremely Long Video Generation | | - | | Mar., 2023 |
| ED-T2V: An Efficient Training Framework for Diffusion-based Text-to-Video Generation | - | - | - | IJCNN, 2023 |
| MagicVideo: Efficient Video Generation With Latent Diffusion Models | | - | | Nov., 2022 |
| Phenaki: Variable Length Video Generation From Open Domain Textual Description | | - | | Oct., 2022 |
| Imagen Video: High Definition Video Generation With Diffusion Models | | - | | Oct., 2022 |
| VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation | | | | CVPR 2023 |
| MAGVIT: Masked Generative Video Transformer | | - | | Dec., 2022 |
| Make-A-Video: Text-to-Video Generation without Text-Video Data | | - | | ICLR 2023 |
| Latent Video Diffusion Models for High-Fidelity Video Generation With Arbitrary Lengths | | | | Nov., 2022 |
| CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers | | | - | May, 2022 |
| Video Diffusion Models | | - | | - | Apr., 2022 | Training-free | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion Models | | | | Mar, 2024 |
| TRAILBLAZER: TRAJECTORY CONTROL FOR DIFFUSION-BASED VIDEO GENERATION | | | | Jan, 2024 |
| FreeInit: Bridging Initialization Gap in Video Diffusion Models | | | | Dec, 2023 |
| MTVG : Multi-text Video Generation with Text-to-Video Models | | - | | Dec, 2023 |
| F3-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis | | - | - | Nov, 2023 |
| AdaDiff: Adaptive Step Selection for Fast Diffusion | | - | - | Nov, 2023 |
| FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax | | | | Nov, 2023 |
| 🏀GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning | | | | Nov, 2023 |
| FreeNoise: Tuning-Free Longer Video Diffusion Via Noise Rescheduling | | | | Oct, 2023 |
| ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation | | | | Oct, 2023 |
| LLM-grounded Video Diffusion Models | | | | Oct, 2023 |
| Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator | | | - | NeurIPS, 2023 |
| DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis | | | | Aug, 2023 |
| Large Language Models are Frame-level Directors for Zero-shot Text-to-Video Generation | | | - | May, 2023 |
| Text2video-Zero: Text-to-Image Diffusion Models Are Zero-Shot Video Generators | | | | Mar., 2023 |
| PEEKABOO: Interactive Video Generation via Masked-Diffusion 🫣 | | | | CVPR, 2024 | Video Generation with other conditions Pose-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance | | | | Mar., 2024 |
| Action Reimagined: Text-to-Pose Video Editing for Dynamic Human Actions | | - | - | Mar., 2024 |
| Do You Guys Want to Dance: Zero-Shot Compositional Human Dance Generation with Multiple Persons | | - | - | Jan., 2024 |
| DreaMoving: A Human Dance Video Generation Framework based on Diffusion Models | | - | | Dec., 2023 |
| MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model | | | | Nov., 2023 |
| Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation | | | | Nov., 2023 |
| MagicDance: Realistic Human Dance Video Generation with Motions & Facial Expressions Transfer | | | | Nov., 2023 |
| DisCo: Disentangled Control for Referring Human Dance Generation in Real World | | | | Jul., 2023 |
| Dancing Avatar: Pose and Text-Guided Human Motion Videos Synthesis with Image Diffusion Model | | - | - | Aug., 2023 |
| DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion | | | | Apr., 2023 |
| Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos | | | | Apr., 2023 | Motion-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance | | | | Mar., 2024 |
| Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling | | - | - | Jan., 2024 |
| Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation | | - | - | Jan., 2024 |
| Customizing Motion in Text-to-Video Diffusion Models | | - | | Dec., 2023 |
| VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models | | | | CVPR 2024 |
| AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance | | | | Nov., 2023 |
| Motion-Conditioned Diffusion Model for Controllable Video Synthesis | | - | | Apr., 2023 |
| DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory | | - | - | Aug., 2023 | Sound-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Context-aware Talking Face Video Generation | | - | - | Feb., 2024 |
| EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions | | | | Feb., 2024 |
| The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion | | - | - | ICCV, 2023 |
| Generative Disco: Text-to-Video Generation for Music Visualization | | - | - | Apr., 2023 |
| AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion | | - | - | CVPRW, 2023 | Image-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Tuning-Free Noise Rectification for High Fidelity Image-to-Video Generation | | - | | Mar., 2024 |
| AtomoVideo: High Fidelity Image-to-Video Generation | | - | | Mar., 2024 |
| Animated Stickers: Bringing Stickers to Life with Video Diffusion | | - | - | Feb., 2024 |
| CONSISTI2V: Enhancing Visual Consistency for Image-to-Video Generation | | - | | Feb., 2024 |
| I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models | | - | - | Dec., 2023 |
| PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models | | - | | Dec., 2023 |
| DreamVideo: High-Fidelity Image-to-Video Generation with Image Retention and Text Guidance | | - | | Nov., 2023 |
| LivePhoto: Real Image Animation with Text-guided Motion Control | | | | Nov., 2023 |
| VideoBooth: Diffusion-based Video Generation with Image Prompts | | | | Nov., 2023 |
| Decouple Content and Motion for Conditional Image-to-Video Generation | | - | - | Nov, 2023 |
| I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models | | - | - | Nov, 2023 |
| Make-It-4D: Synthesizing a Consistent Long-Term Dynamic Scene Video from a Single Image | | - | - | MM, 2023 |
| Generative Image Dynamics | | - | | Sep., 2023 |
| LaMD: Latent Motion Diffusion for Video Generation | | - | - | Apr., 2023 |
| Conditional Image-to-Video Generation with Latent Flow Diffusion Models | | | - | CVPR 2023 |
| NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis | | | | CVPR 2022 | Brain-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| NeuroCine: Decoding Vivid Video Sequences from Human Brain Activties | | - | - | Feb., 2024 |
| Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity | | | | NeurIPS, 2023 | Depth-guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation | | | | Jul., 2023 |
| Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance | | | | Jun., 2023 | Multi-modal guided Video Generation | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control | | - | - | Mar., 2024 |
| Magic-Me: Identity-Specific Video Customized Diffusion | | - | | Feb., 2024 |
| InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions | | - | | Feb., 2024 |
| Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion | | - | | Feb., 2024 |
| Boximator: Generating Rich and Controllable Motions for Video Synthesis | | - | | Feb., 2024 |
| AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning | | - | -| Jan., 2024 |
| ActAnywhere: Subject-Aware Video Background Generation | | - | | Jan., 2024 |
| CustomVideo: Customizing Text-to-Video Generation with Multiple Subjects | | - | - | Jan., 2024 |
| MoonShot: Towards Controllable Video Generation and Editing with Multimodal Conditions | | | | Jan., 2024 |
| PEEKABOO: Interactive Video Generation via Masked-Diffusion | | - | | Dec., 2023 |
| CMMD: Contrastive Multi-Modal Diffusion for Video-Audio Conditional Modeling | | - | - | Dec., 2023 |
| Fine-grained Controllable Video Generation via Object Appearance and Context | | - | | Nov., 2023 |
| GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation | | - | | Nov., 2023 |
| Panacea: Panoramic and Controllable Video Generation for Autonomous Driving | | - | | Nov., 2023 |
| SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models | | - | | Nov., 2023 |
| VideoComposer: Compositional Video Synthesis with Motion Controllability | | | | Jun., 2023 |
| NExT-GPT: Any-to-Any Multimodal LLM | | - | - | Sep, 2023 |
| MovieFactory: Automatic Movie Creation from Text using Large Generative Models for Language and Images | | - | | Jun, 2023 |
| Any-to-Any Generation via Composable Diffusion | | | | May, 2023 |
| Mm-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation | | | - | CVPR 2023 | Unconditional Video Generation U-Net based | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|------------------------------------------------------------------------------------------------------------------------------|---|---|
| Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation | | | | Feb. 2024 |
| Video Probabilistic Diffusion Models in Projected Latent Space | | | | CVPR 2023 |
| VIDM: Video Implicit Diffusion Models | | | | AAAI 2023 |
| GD-VDM: Generated Depth for better Diffusion-based Video Generation | | | - | Jun., 2023 |
| LEO: Generative Latent Image Animator for Human Video Synthesis | | | | May., 2023 | Transformer based | Title | arXiv | Github | WebSite | Pub. & Date |
|---|---|---|---|---|
| Latte: Latent Diffusion Transformer for Video Generation | | | | Jan., 2024 |
| VDT: An Empirical Study on Video Diffusion with Transformers | | | - | May, 2023 |
| Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer | | | | May, 2023 | Video Completion Video Enhancement and Restoration | Title | arXiv | Github | WebSite| Pub. & Date |
|---|---|---|---|---|
| Towards Language-Driven Video Inpainting via Multimodal Large Language Models | | | | Jan., 2024 |
| Inflation with Diffusion: Efficient Temporal Adaptation for Text-to-Video Super-Resolution | - | - | - | WACW, 2023 |
| Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution | | | | Dec., 2023 |
| AVID: Any-Length Video Inpainting with Diffusion Model | | | | Dec., 2023 |
| Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution | | | -| CVPR 2023 |
| LDMVFI: Video Frame Interpolation with Latent Diffusion Models | | - |- | Mar., 2023 |
| CaDM: Codec-aware Diffusion Modeling for Neural-enhanced Video Streaming | | - | - | Nov., 2022 |
| Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos | | - | - | May., 2023 | Video Prediction | Title | arXiv | Github | Website | Pub. & Date |
|---|---|---|---|---|
| AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction | | | | Jun, 2024 |
| STDiff: Spatio-temporal Diffusion for Continuous Stochastic Video Prediction | | | - | Dec, 2023 |
| Video Diffusion Models with Local-Global Context Guidance | | | - | IJCAI, 2023 |
| Seer: Language Instructed Video Prediction with Latent Diffusion Models | | - | | Mar., 2023 |
| MaskViT: Masked Visual Pre-Training for Video Prediction | | | | Jun, 2022 |
| Diffusion Models for Video Prediction and Infilling | | | | TMLR 2022 |
| McVd: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation | | | | NeurIPS 2022 |
| Diffusion Probabilistic Modeling for Video Generation | | | - | Mar., 2022 |
| Flexible Diffusion Modeling of Long Videos | | | | May, 2022 |
| Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models | | | | May, 2023 | Video Editing General Editing Model | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation | |-|- | Mar., 2024
| FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing | |-|- | Mar., 2024
| DreamMotion: Space-Time Self-Similarity Score Distillation for Zero-Shot Video Editing | | - | | Mar, 2024 |
| Video Editing via Factorized Diffusion Distillation | | - | - | Mar, 2024 |
| FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis | | | | Dec, 2023 |
| MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers | | - | | Dec, 2023 |
| Neutral Editing Framework for Diffusion-based Video Editing | | - | | Dec, 2023 |
| VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence | | - | | Nov, 2023 |
| VIDiff: Translating Videos via Multi-Modal Instructions with Diffusion Models | | | | Nov, 2023 |
| Motion-Conditioned Image Animation for Video Editing | | - | | Nov, 2023 |
| MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation | | - | - | Sep, 2023 |
| MagicEdit: High-Fidelity and Temporally Coherent Video Editing | | - | - | Aug, 2023 |
| Edit Temporal-Consistent Videos with Image Diffusion Model | | - | - | Aug, 2023 |
| Structure and Content-Guided Video Synthesis With Diffusion Models | | - | | ICCV, 2023 |
| Dreamix: Video Diffusion Models Are General Video Editors | | - | | Feb, 2023 | Training-free Editing Model | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing | | | | March, 2024 |
| UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing | |- | | Feb, 2024 |
| Object-Centric Diffusion for Efficient Video Editing | | - | - | Jan, 2024 |
| RealCraft: Attention Control as A Solution for Zero-shot Long Video Editing | | - | - | Dec, 2023 |
| VidToMe: Video Token Merging for Zero-Shot Video Editing | | | | Dec, 2023 |
| A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing | | | | Dec, 2023 |
| AnimateZero: Video Diffusion Models are Zero-Shot Image Animators | | | - | Dec, 2023 |
| RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models | | | | Dec, 2023 |
| BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models | | - | | Nov., 2023 |
| Highly Detailed and Temporal Consistent Video Stylization via Synchronized Multi-Frame Diffusion | | - | - | Nov., 2023 |
| FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier | | | - | Oct., 2023 |
| LatentWarp: Consistent Diffusion Latents for Zero-Shot Video-to-Video Translation | | - | - | Nov., 2023 |
| Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models | | - | - | Oct., 2023 |
| LOVECon: Text-driven Training-Free Long Video Editing with ControlNet | | | - | Oct., 2023 |
| FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing | | - | | Oct., 2023 |
| Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models | | | | ICLR, 2024 |
| MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance | | - | - | Aug., 2023 |
| EVE: Efficient zero-shot text-based Video Editing with Depth Map Guidance and Temporal Consistency Constraints | | - | - | Aug., 2023 |
| ControlVideo: Training-free Controllable Text-to-Video Generation | | | - | May, 2023 |
| TokenFlow: Consistent Diffusion Features for Consistent Video Editing | | | | Jul., 2023 |
| VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing | | - | | Jun., 2023 |
| Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation | | - | | Jun., 2023 |
| Zero-Shot Video Editing Using Off-the-Shelf Image Diffusion Models | | | | Mar., 2023 |
| FateZero: Fusing Attentions for Zero-shot Text-based Video Editing | | | | Mar., 2023 |
| Pix2video: Video Editing Using Image Diffusion | | - | | Mar., 2023 |
| InFusion: Inject and Attention Fusion for Multi Concept Zero Shot Text based Video Editing | | - | | Aug., 2023 |
| Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising | | | | May, 2023 | One-shot Editing Model | Title | arXiv | Github | Website | Pub. & Date |
|---|---|---|---|---|
| Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models | | -| | Feb., 2024 |
| MotionCrafter: One-Shot Motion Customization of Diffusion Models | | | - | Dec., 2023 |
| DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing | | - | | Dec., 2023 |
| MotionEditor: Editing Video Motion via Content-Aware Diffusion | | | | CVPR, 2024 |
| Smooth Video Synthesis with Noise Constraints on Diffusion Models for One-shot Video Tuning | | - | | Nov., 2023 |
| Cut-and-Paste: Subject-Driven Video Editing with Attention Control | | - | - | Nov, 2023 |
| StableVideo: Text-driven Consistency-aware Diffusion Video Editing | | | | ICCV, 2023 |
| Shape-aware Text-driven Layered Video Editing | | - | - | CVPR, 2023 |
| SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing | | | - | May, 2023 |
| Towards Consistent Video Editing with Text-to-Image Diffusion Models | | - | - | Mar., 2023 |
| Edit-A-Video: Single Video Editing with Object-Aware Consistency | | - | | Mar., 2023 |
| Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation | | | | ICCV, 2023 |
| ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing | | | | May, 2023 |
| Video-P2P: Video Editing with Cross-attention Control | | | | Mar., 2023 |
| SinFusion: Training Diffusion Models on a Single Image or Video | | | | Nov., 2022 | Instruct-guided Video Editing | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| EffiVED:Efficient Video Editing via Text-instruction Diffusion Models | | - | - | Mar, 2024 |
| Fairy: Fast Parallellized Instruction-Guided Video-to-Video Synthesis | | - | | Dec, 2023 |
| Neural Video Fields Editing | | | | Dec, 2023 |
| VIDiff: Translating Videos via Multi-Modal Instructions with Diffusion Models | | | | Nov, 2023 |
| Consistent Video-to-Video Transfer Using Synthetic Dataset | | - | - | Nov., 2023 |
| InstructVid2Vid: Controllable Video Editing with Natural Language Instructions | | - | - | May, 2023 |
| Collaborative Score Distillation for Consistent Visual Synthesis | | - | - | July, 2023 | Motion-guided Video Editing | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| MotionCtrl: A Unified and Flexible Motion Controller for Video Generation | | | | Nov, 2023 |
| Drag-A-Video: Non-rigid Video Editing with Point-based Interaction | |- | | Nov, 2023 |
| DragVideo: Interactive Drag-style Video Editing | | | -| Nov, 2023 |
| VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet | | - | | July, 2023 | Sound-guided Video Editing | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| Speech Driven Video Editing via an Audio-Conditioned Diffusion Model | | - | - | May., 2023 |
| Soundini: Sound-Guided Diffusion for Natural Video Editing | | | | Apr., 2023 | Multi-modal Control Editing Model | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks | - | | | Dec, 2023 |
| Motionshop: An application of replacing the characters in video with 3D avatars | - | | | Dec, 2023 |
| Anything in Any Scene: Photorealistic Video Object Insertion | | | | Jan, 2024 |
| DreamVideo: Composing Your Dream Videos with Customized Subject and Motion | | | | Dec, 2023 |
| MagicStick: Controllable Video Editing via Control Handle Transformations | | | | Nov, 2023 |
| SAVE: Protagonist Diversification with Structure Agnostic Video Editing | | - | | Nov, 2023 |
| MotionZero:Exploiting Motion Priors for Zero-shot Text-to-Video Generation | | - | - | May, 2023 |
| CCEdit: Creative and Controllable Video Editing via Diffusion Models | | - | - | Sep, 2023 |
| Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts | | | | May, 2023 | Domain-specific Editing Model | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific Adaptation | | - | | Jan. 2024 |
| Diffutoon: High-Resolution Editable Toon Shading via Diffusion Models | | - | | Jan. 2024 |
| TRAINING-FREE SEMANTIC VIDEO COMPOSITION VIA PRE-TRAINED DIFFUSION MODEL | | - | - | Jan, 2024 |
| Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models | | - | | CVPR 2023 |
| Multimodal-driven Talking Face Generation via a Unified Diffusion-based Generator | | - | - | May, 2023 |
| DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis | | - | - | Aug, 2023 |
| Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer | | | - | May, 2023 |
| Instruct-Video2Avatar: Video-to-Avatar Generation with Instructions | | | - | Jun, 2023 |
| Video Colorization with Pre-trained Text-to-Image Diffusion Models | | | | Jun, 2023 |
| Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding | | | | CVPR 2023 | Non-diffusion Editing model | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing | | - | | Oct., 2023 |
| INVE: Interactive Neural Video Editing | | - | | Jul., 2023 |
| Shape-Aware Text-Driven Layered Video Editing | | - | | Jan., 2023 | Video Understanding | Title | arXiv | Github | Website | Pub. Date |
|---|---|---|---|---|
| EchoReel: Enhancing Action Generation of Existing Video Diffusion Models | | - | - | Mar., 2024 |
| VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model | | - | - | Mar., 2024 |
| SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion | | - | - | Mar., 2024 |
| VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models | | - | - | Mar., 2024 |
| Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation | | - | - | Mar., 2024 |
| DiffSal: Joint Audio and Video Learning for Diffusion Saliency Prediction | | - | - | Mar., 2024 |
| Generative Video Diffusion for Unseen Cross-Domain Video Moment Retrieval | | - | - | Jan., 2024 |
| Diffusion Reward: Learning Rewards via Conditional Video Diffusion | | | | Dec., 2023 |
| ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models | | - | | Nov., 2023 |
| Enhancing Perceptual Quality in Video Super-Resolution through Temporally-Consistent Detail Synthesis using Diffusion Models | | | - | Nov., 2023 |
| Flow-Guided Diffusion for Video Inpainting | | | - | Nov., 2023 |
| Breathing Life Into Sketches Using Text-to-Video Priors | | - | - | Nov., 2023 |
| Infusion: Internal Diffusion for Video Inpainting | | - | - | Nov., 2023 |
| DiffusionVMR: Diffusion Model for Video Moment Retrieval | | - | - | Aug., 2023 |
| DiffPose: SpatioTemporal Diffusion Model for Video-Based Human Pose Estimation | | - | - | Aug., 2023 |
| CoTracker: It is Better to Track Together | | | | Aug., 2023 |
| Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations | | - | - | ICIAP, 2023 |
| Exploring Diffusion Models for Unsupervised Video Anomaly Detection | | - | - | Apr., 2023 |
| Multimodal Motion Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection | | - | - | ICCV, 2023 |
| Diffusion Action Segmentation | | - | - | Mar., 2023 |
| DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion | | | | Mar., 2023 |
| DiffusionRet: Generative Text-Video Retrieval with Diffusion Model | | | - | ICCV, 2023 |
| MomentDiff: Generative Video Moment Retrieval from Random to Real | | | | Jul., 2023 |
| Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition | | | | Feb., 2023 |
| Refined Semantic Enhancement Towards Frequency Diffusion for Video Captioning | | - | - | Nov., 2022 |
| A Generalist Framework for Panoptic Segmentation of Images and Videos | | | | Oct., 2022 |
| DAVIS: High-Quality Audio-Visual Separation with Generative Diffusion Models | | - | - | Jul., 2023 |
| CaDM: Codec-aware Diffusion Modeling for Neural-enhanced Video Streaming | | - | - | Mar., 2023 |
| Spatial-temporal Transformer-guided Diffusion based Data Augmentation for Efficient Skeleton-based Action Recognition | | - | - | Jul., 2023 |
| PDPP: Projected Diffusion for Procedure Planning in Instructional Videos | | | - | CVPR 2023 | Contact If you have any suggestions or find our work helpful, feel free to contact us Homepage: Zhen Xing Email: zhenxingfd@gmail.com If you find our survey useful in your research or applications, please consider giving us a star 🌟 and citing it by the following BibTeX entry. @article{vdmsurvey,
title={A Survey on Video Diffusion Models},
author={Zhen Xing and Qijun Feng and Haoran Chen and Qi Dai and Han Hu and Hang Xu and Zuxuan Wu and Yu-Gang Jiang},
journal={arXiv preprint arXiv:2310.10647},
year={2023}
};[Arxiv] A Survey on Video Diffusion Models;awesome-list,text-to-video,video-diffusion-model,awesome,video-editing,survey,diffusion,diffusion-models,video,video-diffusion | ChenHsing/Awesome-Video-Diffusion-Models |
traceloop/openllmetry;Open-source observability for your LLM application Get started » Slack | Docs | Website 🎉 New :
Our semantic conventions are now part of OpenTelemetry! Join the discussion and help us shape the future of LLM observability. Looking for the JS/TS version? Check out OpenLLMetry-JS . OpenLLMetry is a set of extensions built on top of OpenTelemetry that gives you complete observability over your LLM application. Because it uses OpenTelemetry under the hood, it can be connected to your existing observability solutions - Datadog, Honeycomb, and others. It's built and maintained by Traceloop under the Apache 2.0 license. The repo contains standard OpenTelemetry instrumentations for LLM providers and Vector DBs, as well as a Traceloop SDK that makes it easy to get started with OpenLLMetry, while still outputting standard OpenTelemetry data that can be connected to your observability stack.
If you already have OpenTelemetry instrumented, you can just add any of our instrumentations directly. 🚀 Getting Started The easiest way to get started is to use our SDK.
For a complete guide, go to our docs . Install the SDK: bash
pip install traceloop-sdk Then, to start instrumenting your code, just add this line to your code: ```python
from traceloop.sdk import Traceloop Traceloop.init()
``` That's it. You're now tracing your code with OpenLLMetry!
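For example, once `Traceloop.init()` has been called, a regular LLM call is captured automatically. A minimal sketch, assuming the OpenAI Python client (v1+) is installed and `OPENAI_API_KEY` is set — the model name and prompt below are just placeholders:

```python
from openai import OpenAI
from traceloop.sdk import Traceloop

Traceloop.init()

client = OpenAI()
# This call is recorded as an OpenTelemetry span by OpenLLMetry's OpenAI instrumentation.
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke about observability"}],
)
print(completion.choices[0].message.content)
```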
If you're running this locally, you may want to disable batch sending, so you can see the traces immediately: python
Traceloop.init(disable_batch=True) ⏫ Supported (and tested) destinations ✅ Traceloop ✅ Dynatrace ✅ Datadog ✅ New Relic ✅ Honeycomb ✅ Grafana Tempo ✅ HyperDX ✅ SigNoz ✅ Splunk ✅ OpenTelemetry Collector ✅ IBM Instana See our docs for instructions on connecting to each one. 🪗 What do we instrument? OpenLLMetry can instrument everything that OpenTelemetry already instruments - so things like your DB, API calls, and more. On top of that, we built a set of custom extensions that instrument things like your calls to OpenAI or Anthropic, or your Vector DB like Chroma, Pinecone, Qdrant or Weaviate. LLM Providers ✅ OpenAI / Azure OpenAI ✅ Anthropic ✅ Cohere ✅ Ollama ✅ Mistral AI ✅ HuggingFace ✅ Bedrock (AWS) ✅ Replicate ✅ Vertex AI (GCP) ✅ Google Generative AI (Gemini) ✅ IBM Watsonx AI ✅ Together AI ✅ Aleph Alpha Vector DBs ✅ Chroma ✅ Pinecone ✅ Qdrant ✅ Weaviate ✅ Milvus Frameworks ✅ LangChain ✅ LlamaIndex ✅ Haystack ✅ LiteLLM 🌱 Contributing Whether it's big or small, we love contributions ❤️ Check out our guide to see how to get started . Not sure where to get started? You can: Book a free pairing session with one of our teammates ! Join our Slack , and ask us any questions there. 💚 Community & Support Slack (For live discussion with the community and the Traceloop team) GitHub Discussions (For help with building and deeper conversations about features) GitHub Issues (For any bugs and errors you encounter using OpenLLMetry) Twitter (Get news fast) 🙏 Special Thanks To @patrickdebois, who suggested the great name we're now using for this repo!;Open-source observability for your LLM application, based on OpenTelemetry;llmops,observability,open-telemetry,metrics,monitoring,opentelemetry,datascience,ml,model-monitoring,opentelemetry-python | traceloop/openllmetry |
DeepInsight-AI/DeepBI;DeepBI DeepBI is an AI-native data analysis platform. DeepBI leverages the power of large language models to explore, query, visualize, and share data from any data source. Users can use DeepBI to gain data insight and make data-driven decisions. Languages: English [中文](README_CN.md) Developer:dev@deepbi.com Business:hi@deepbi.com If you think DeepBI is helpful to you, please help by clicking here on the ⭐ Star and Fork in the upper right corner. Your support is the greatest driving force for DeepBI to become better. Video example https://github.com/DeepInsight-AI/DeepBI/assets/151519374/d1effbe1-5c11-4c77-86ef-e01b1ea7f2f6 user manual DeepBI user manual ✨ Features 1 Conversational data analysis: Users can get arbitrary data results and analysis results through dialogues.\
2 Conversational query generation: Generates persistent queries and visualizations through dialogues.\
3 Dashboard : Assemble persistent visualizations into dashboards.\
4 Automated data analysis reports (to be developed) : Complete data analysis reports automatically according to user instructions.\
5 Support multiple data sources, including MySQL, PostgreSQL, Doris, StarRocks, CSV/Excel, etc.\
6 Multi-platform support : Windows (WSL), Windows, Linux, and Mac. \
7 Internationalization : Supports Chinese and English. 🚀 Supported Databases The database connections supported by DeepBI are:
- MySQL
- PostgreSQL
- csv/Excel Import
- Doris
- StarRocks
- MongoDB 📦 Windows exe installation Download window_install_exe_EN.zip from the tag list . The current test supports Win10 and Win11. Unzip the zip package and double-click the .exe file to run DeepBI. Local installation instructions: Install exe 📦 Docker build The local environment needs to have docker and docker-compose. Install docker Download the project files with git: git clone https://github.com/DeepInsight-AI/DeepBI.git or directly download the zip file and unzip it. Enter the project directory: cd DeepBI Just run ./Install.sh directly Default ports: 8338 and 8339 Web access: http://ip:8338 DeepBI docker commands Enter the project DeepBI dir: docker-compose start # start DeepBI service
docker-compose stop # stop DeepBI service
docker-compose ps # see DeepBI service status If ' PermissionError ... ' or ' Permission denied ' appears, please add 'sudo' before executing the command sudo docker-compose start # start DeepBI service
sudo docker-compose stop # stop DeepBI service
sudo docker-compose ps # see DeepBI service status Ubuntu build To install directly on an Ubuntu system, you need redis, postgresql, and a python 3.8.17 environment. Redis must be accessible password-free from the command line at 127.0.0.1. Python version 3.8.x is required; we recommend using a virtual environment such as pyenv or conda. postgresql needs to be the postgresql-16 version. Download the DeepBI code with the command git clone https://github.com/DeepInsight-AI/DeepBI.git If the download fails, switch the protocol and run the following instead: git clone http://github.com/DeepInsight-AI/DeepBI.git Just run . ubuntu_install.sh directly (note that you run . ubuntu_install.sh instead of sh xxx here, because you need to run inside the python virtual environment) Default ports are 8338 and 8339 Web access: http://ip:8338 Contact Us 📑 Other We have tested on Mac OS 12.7/13.X/14.1.1, Ubuntu 20.04/22.04, and Windows 11 WSL 22.04. Windows 10 requires version 22H2 or higher to install WSL. The minimum memory requirement for server operation is 1 core and 2G memory; 2 cores and 4G memory are recommended. If you have any questions, please contact us at dev@deepbi.com Issue;LLM based data scientist, AI native data application. AI-driven infinite thinking redefines BI.;csv,gpt,gpt-4,mysql,redis,ai,bi,data,llm,analysis | DeepInsight-AI/DeepBI
antfu/shikiji;[!IMPORTANT]
Shikiji is merged back to Shiki! This repository is now archived. Please continue at shikijs/shiki . Shikiji 式辞 An ESM-focused rewrite of shiki , a beautiful syntax highlighter based on TextMate grammars, with more features and capabilities. 📚 Documentation License MIT;A syntax highlighter based on TextMate grammars. ESM rewrite of shiki, with more features and capabilities.;shiki,syntax-highlighter,textmate-grammar | antfu/shikiji |
raunofreiberg/interfaces;Web Interface Guidelines This document outlines a non-exhaustive list of details that make a good (web) interface. It is a living document, periodically updated based on learnings. Some of these may be subjective, but most apply to all websites. The WAI-ARIA spec is deliberately not duplicated in this document. However, some accessibility guidelines may be pointed out. Contributions are welcome. Edit this file and submit a pull request. Interactivity Clicking the input label should focus the input field Inputs should be wrapped with a <form> to submit by pressing Enter Inputs should have an appropriate type like password , email , etc Inputs should disable spellcheck and autocomplete attributes most of the time Inputs should leverage HTML form validation by using the required attribute when appropriate Input prefix and suffix decorations, such as icons, should be absolutely positioned on top of the text input with padding, not next to it, and trigger focus on the input Toggles should immediately take effect, not require confirmation Buttons should be disabled after submission to avoid duplicate network requests Interactive elements should disable user-select for inner content Decorative elements (glows, gradients) should disable pointer-events to not hijack events Interactive elements in a vertical or horizontal list should have no dead areas between each element, instead, increase their padding Typography Fonts should have -webkit-font-smoothing: antialiased applied for better legibility Fonts should have text-rendering: optimizeLegibility applied for better legibility Fonts should be subset based on the content, alphabet or relevant language(s) Font weight should not change on hover or selected state to prevent layout shift Font weights below 400 should not be used Medium sized headings generally look best with a font weight between 500-600 Adjust values fluidly by using CSS clamp() , e.g. 
clamp(48px, 5vw, 72px) for the font-size of a heading Where available, tabular figures should be applied with font-variant-numeric: tabular-nums , particularly in tables or when layout shifts are undesirable, like in timers Prevent text resizing unexpectedly in landscape mode on iOS with -webkit-text-size-adjust: 100% Motion Switching themes should not trigger transitions and animations on elements [^1] Animation duration should not be more than 200ms for interactions to feel immediate Animation values should be proportional to the trigger size: Don't animate dialog scale in from 0 → 1, fade opacity and scale from ~0.8 Don't scale buttons on press from 1 → 0.8, but ~0.96, ~0.9, or so Actions that are frequent and low in novelty should avoid extraneous animations: [^2] Opening a right click menu Deleting or adding items from a list Hovering trivial buttons Looping animations should pause when not visible on the screen to offload CPU and GPU usage Use scroll-behavior: smooth for navigating to in-page anchors, with an appropriate offset Touch Hover states should not be visible on touch press, use @media (hover: hover) [^3] Font size for inputs should not be smaller than 16px to prevent iOS zooming on focus Inputs should not auto focus on touch devices as it will open the keyboard and cover the screen Apply muted and playsinline to <video /> tags to auto play on iOS Disable touch-action for custom components that implement pan and zoom gestures to prevent interference from native behavior like zooming and scrolling Disable the default iOS tap highlight with -webkit-tap-highlight-color: rgba(0,0,0,0) , but always replace it with an appropriate alternative Optimizations Large blur() values for filter and backdrop-filter may be slow Scaling and blurring filled rectangles will cause banding, use radial gradients instead Sparingly enable GPU rendering with transform: translateZ(0) for unperformant animations Toggle will-change on unperformant scroll animations for the duration of the animation [^4] Auto-playing too many videos on iOS will choke the device, pause or even unmount off-screen videos Bypass React's render lifecycle with refs for real-time values that can commit to the DOM directly [^5] Detect and adapt to the hardware and network capabilities of the user's device Accessibility Disabled buttons should not have tooltips, they are not accessible [^6] Box shadow should be used for focus rings, not outline which won’t respect radius [^7] Focusable elements in a sequential list should be navigable with ↑ ↓ Focusable elements in a sequential list should be deletable with ⌘ Backspace To open immediately on press, dropdown menus should trigger on mousedown , not click Use a svg favicon with a style tag that adheres to the system theme based on prefers-color-scheme Icon only interactive elements should define an explicit aria-label Tooltips triggered by hover should not contain interactive content Images should always be rendered with <img> for screen readers and ease of copying from the right click menu Illustrations built with HTML should have an explicit aria-label instead of announcing the raw DOM tree to people using screen readers Gradient text should unset the gradient on ::selection state When using nested menus, use a "prediction cone" to prevent the pointer from accidentally closing the menu when moving across other elements. 
Design Optimistically update data locally and roll back on server error with feedback Authentication redirects should happen on the server before the client loads to avoid janky URL changes Style the document selection state with ::selection Display feedback relative to its trigger: Show a temporary inline checkmark on a successful copy, not a notification Highlight the relevant input(s) on form error(s) Empty states should prompt to create a new item, with optional templates [^1]: Switching between dark mode or light mode will trigger transitions on elements that are meant for explicit interactions like hover. We can disable transitions temporarily to prevent this. For Next.js, use next-themes which prevents transitions out of the box.
[^2]: This is a matter of taste but some interactions just feel better with no motion. For example, the native macOS right click menu only animates out, not in, due to the frequent usage of it.
[^3]: Most touch devices on press will temporarily flash the hover state, unless explicitly only defined for pointer devices with @media (hover: hover) .
[^4]: Use will-change as a last resort to improve performance. Pre-emptively throwing it on elements for better performance may have the opposite effect.
[^5]: This might be controversial but sometimes it can be beneficial to manipulate the DOM directly. For example, instead of relying on React re-rendering on every wheel event, we can track the delta in a ref and update relevant elements directly in the callback.
[^6]: Disabled buttons do not appear in tab order in the DOM so the tooltip will never be announced for keyboard users and they won't know why the button is disabled.
[^7]: As of 2023, Safari will not take the border radius of an element into account when defining custom outline styles. Safari 16.4 has added support for outline following the curve of border radius. However, keep in mind that not everyone updates their OS immediately.;A non-exhaustive list of details that make a good web interface.;[] | raunofreiberg/interfaces |
premAI-io/state-of-open-source-ai;📘 The State of Open Source AI (2023 Edition) Clarity in the current fast-paced mess of Open Source innovation. This is the source repository for The State of Open Source AI ebook, a comprehensive guide exploring everything from model evaluations to deployment, and a great FOMO cure. Want to discuss any topics covered in the book? We have a dedicated channel ( #book ) on our Discord server . Contributing You can help keep the book up-to-date! Contributions, issues, and comments are welcome! See the Contributing Guide for more information on how. Licence This book is released under CC-BY-4.0 (text) and Apache-2.0 (code) . Citation: BibTeX Community Join the Open Source AI Discord Follow us on Twitter Subscribe to our newsletter;:closed_book: Clarity in the current fast-paced mess of Open Source innovation;book,jupyter-book,ai,ml,mlops,open-source,hacktoberfest | premAI-io/state-of-open-source-ai |
invictus717/MetaTransformer;Yiyuan Zhang 1,2* Kaixiong Gong 1,2* Kaipeng Zhang 2,† Hongsheng Li 1,2 Yu Qiao 2 Wanli Ouyang 2 Xiangyu Yue 1,†,‡ 1 Multimedia Lab, The Chinese University of Hong Kong 2 OpenGVLab,Shanghai AI Laboratory * Equal Contribution † Corresponding Author ‡ Project Lead -----------------
[![arXiv](https://img.shields.io/badge/arxiv-2307.10802-b31b1b?style=plastic&color=b31b1b&link=https%3A%2F%2Farxiv.org%2Fabs%2F2307.10802)](https://arxiv.org/abs/2307.10802)
[![website](https://img.shields.io/badge/Project-Website-brightgreen)](https://kxgong.github.io/meta_transformer/)
[![blog-cn](https://img.shields.io/badge/%E6%9C%BA%E5%99%A8%E4%B9%8B%E5%BF%83-%E7%AE%80%E4%BB%8B-brightgreen)](https://mp.weixin.qq.com/s/r38bzqdJxDZUvtDI0c9CEw)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-blue)](https://huggingface.co/papers/2307.10802)
[![OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/zhangyiyuan/MetaTransformer)
![](https://img.shields.io/github/stars/invictus717/MetaTransformer?style=social) ## Meta-Transformer with Large Language Models ✨✨✨
We're thrilled to present [OneLLM](https://github.com/csuhan/OneLLM), which combines the Meta-Transformer framework with Multimodal Large Language Models. It performs multimodal joint training🚀, supports more modalities including fMRI, Depth and Normal Maps 🚀, and demonstrates very impressive performance on **25** benchmarks🚀🚀🚀.
🔥🔥 The code, pretrained models, and datasets are publicly available at [OneLLM](https://github.com/csuhan/OneLLM).
🔥🔥 Project Website is at [OneLLM](https://onellm.csuhan.com/).
### 🌟 Single Foundation Model Supports A Wide Range of Applications
As a foundation model, Meta-Transformer can handle data from 12 modalities, which means it can support a wide range of applications. As shown in this figure, Meta-Transformer can provide services for downstream tasks including stock analysis 📈, weather forecasting ☀️ ☔ ☁️ ❄️ ⛄ ⚡, remote sensing 📡, autonomous driving 🚗, social networks 🌍, speech recognition 🔉, etc. **Table 1**: Meta-Transformer is capable of handling up to 12 modalities, including natural language , RGB images , point clouds , audios , videos , tabular data , graph , time series data , hyper-spectral images , IMU , medical images , and infrared images . ## 🚩🚩🚩 Shared-Encoder, Unpaired Data, More Modalities This repository is built to explore the potential and extensibility of transformers for multimodal learning. We utilize the advantages of Transformers to deal with length-variant sequences. We propose *Data-to-Sequence* tokenization following a meta-scheme, and apply it to 12 modalities including text, image, point cloud, audio, video, infrared, hyper-spectral, X-Ray, tabular, graph, time-series, and Inertial Measurement Unit (IMU) data. After obtaining the token sequence, we employ a modality-shared encoder to extract representations across different modalities. With task-specific heads, Meta-Transformer can handle various tasks on the different modalities, such as classification, detection, and segmentation. # 🌟 News
* **2023.8.17:** Release code to directly get embeddings from multiple modalities. We will further release code on utilizing Meta-Transformer for Human-Centric vision tasks.
* **2023.8.2:** 🎉🎉🎉 The implementation of Meta-Transformer for image, point cloud, graph, tabular, time-series, X-Ray, hyper-spectrum, LiDAR data has been released. We also release a very powerful foundation model for Autonomous Driving 🚀🚀🚀.
* **2023.7.22:** Pretrained weights and a usage demo for our Meta-Transformer have been released. Comprehensive documentation and implementation of the image modality are underway and will be released soon. Stay tuned for more exciting updates!⌛⌛⌛
* **2023.7.21:** Paper is released at [arxiv](https://arxiv.org/abs/2307.10802), and code will be gradually released.
* **2023.7.8:** Github Repository Initialization.
# 🔓 Model Zoo Open-source Modality-Agnostic Models | Model | Pretraining | Scale | #Param | Download | Download (China mirror) |
| :------------: | :----------: | :----------------------: | :----: | :---------------------------------------------------------------------------------------------------: | :--------: |
| Meta-Transformer-B16 | LAION-2B | Base | 85M | [ckpt](https://drive.google.com/file/d/19ahcN2QKknkir_bayhTW5rucuAiX0OXq/view?usp=sharing) | [ckpt](https://download.openxlab.org.cn/models/zhangyiyuan/MetaTransformer/weight//Meta-Transformer_base_patch16_encoder)
| Meta-Transformer-L14 | LAION-2B | Large | 302M | [ckpt](https://drive.google.com/file/d/15EtzCBAQSqmelhdLz6k880A19_RpcX9B/view?usp=drive_link) | [ckpt](https://download.openxlab.org.cn/models/zhangyiyuan/MetaTransformer/weight//Meta-Transformer_large_patch14_encoder) * Demo of Use for Pretrained Encoder
```python
import torch
import torch.nn as nn
from timm.models.vision_transformer import Block
from Data2Seq import Data2Seq
video_tokenizer = Data2Seq(modality='video',dim=768)
audio_tokenizer = Data2Seq(modality='audio',dim=768)
time_series_tokenizer = Data2Seq(modality='time-series',dim=768)
# `video`, `audio`, and `time_data` below are user-provided input tensors, one per modality.
features = torch.concat([video_tokenizer(video), audio_tokenizer(audio), time_series_tokenizer(time_data)],dim=1)
# For base-scale encoder:
ckpt = torch.load("Meta-Transformer_base_patch16_encoder.pth")
encoder = nn.Sequential(*[
Block(
dim=768,
num_heads=12,
mlp_ratio=4.,
qkv_bias=True,
norm_layer=nn.LayerNorm,
act_layer=nn.GELU
)
for i in range(12)])
encoder.load_state_dict(ckpt,strict=True)
# For large-scale encoder:
ckpt = torch.load("Meta-Transformer_large_patch14_encoder.pth")
encoder = nn.Sequential(*[
Block(
dim=1024,
num_heads=16,
mlp_ratio=4.,
qkv_bias=True,
norm_layer=nn.LayerNorm,
act_layer=nn.GELU
)
for i in range(24)])
encoder.load_state_dict(ckpt,strict=True)
encoded_features = encoder(features)
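# The shared encoder yields a modality-agnostic token sequence; task-specific heads are
# attached on top of it for downstream tasks. The head below is only an illustrative
# assumption (not part of the released demo): pool the tokens and apply a linear classifier.
num_classes = 10  # hypothetical number of target classes
classification_head = nn.Linear(encoded_features.shape[-1], num_classes)
logits = classification_head(encoded_features.mean(dim=1))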
``` # 🕙 ToDo
- [x] Meta-Transformer with Large Language Models.
- [x] Multimodal Joint Training with Meta-Transformer.
- [x] Support More Modalities and More Tasks.
# Contact
🚀🚀🚀 We aspire to shape this repository into **a formidable foundation for mainstream AI perception tasks across diverse modalities**. Your contributions can play a significant role in this endeavor, and we warmly welcome your participation in our project!
To contact us, never hesitate to send an email to `yiyuanzhang.ai@gmail.com`, `kaixionggong@gmail.com`, `zhangkaipeng@pjlab.org.cn`, or `xyyue@ie.cuhk.edu.hk`! # Citation
If the code and paper help your research, please kindly cite:
```
@article{zhang2023meta,
title={Meta-transformer: A unified framework for multimodal learning},
author={Zhang, Yiyuan and Gong, Kaixiong and Zhang, Kaipeng and Li, Hongsheng and Qiao, Yu and Ouyang, Wanli and Yue, Xiangyu},
journal={arXiv preprint arXiv:2307.10802},
year={2023}
}
```
# License
This project is released under the [Apache 2.0 license](LICENSE).
# Acknowledgement
This code is developed based on excellent open-sourced projects including [MMClassification](https://github.com/open-mmlab/mmpretrain/tree/mmcls-1.x), [MMDetection](https://github.com/open-mmlab/mmdetection), [MMsegmentation](https://github.com/open-mmlab/mmsegmentation), [OpenPoints](https://github.com/guochengqian/openpoints), [Time-Series-Library](https://github.com/thuml/Time-Series-Library), [Graphormer](https://github.com/microsoft/Graphormer), [SpectralFormer](https://github.com/danfenghong/IEEE_TGRS_SpectralFormer), and [ViT-Adapter](https://github.com/czczup/ViT-Adapter).;Meta-Transformer for Unified Multimodal Learning;artificial-intelligence,computer-vision,machine-learning,multimedia,multimodal,transformers,foundationmodel | invictus717/MetaTransformer
jacoblee93/fully-local-pdf-chatbot;🏠 Fully Local Chat Over Documents Yes, it's another chat over documents implementation... but this one is entirely local! It can even run fully in your browser with a small LLM via WebLLM ! It's a Next.js app that reads the content of an uploaded PDF, chunks it, adds it to a vector store, and
performs RAG, all client side. You can even turn off your WiFi after the site loads. You can see a live version at https://webml-demo.vercel.app. Users can choose one of the below options to run inference: Browser-only mode You can run the entire stack in your browser via WebLLM . The model used is the small, 3.8B parameter Phi-3 . You don't have to leave the window to set this up - just upload a PDF and go! Note that the first time you start a chat, the app will download and cache the model weights. This download is several GB in size and may take a little while, so make sure you have a good internet connection! Ollama You can run more powerful, general models outside the browser using Ollama's desktop app . Users will need to download and set it up, then run the following commands to allow the site access to a locally running Mistral instance: Mac/Linux bash
$ OLLAMA_ORIGINS=https://webml-demo.vercel.app OLLAMA_HOST=127.0.0.1:11435 ollama serve Then, in another terminal window: bash
$ OLLAMA_HOST=127.0.0.1:11435 ollama pull mistral Windows cmd
$ set OLLAMA_ORIGINS=https://webml-demo.vercel.app
set OLLAMA_HOST=127.0.0.1:11435
ollama serve Then, in another terminal window: cmd
$ set OLLAMA_HOST=127.0.0.1:11435
ollama pull mistral ⚡ Stack It uses the following: Voy as the vector store, fully WASM in the browser. Ollama or WebLLM to run an LLM locally and expose it to the web app. LangChain.js to call the models, perform retrieval, and generally orchestrate all the pieces. Transformers.js to run open source Nomic embeddings in the browser. For higher-quality embeddings, switch to "nomic-ai/nomic-embed-text-v1" in app/worker.ts . The goal is to run as much of the app as possible directly in the browser, but you can swap in Ollama embeddings in lieu of Transformers.js as well. 🔱 Forking To run/deploy this yourself, simply fork this repo and install the required dependencies with yarn . There are no required environment variables, but you can optionally set up LangSmith tracing while developing locally to help debug the prompts and the chain. Copy the .env.example file into a .env.local file: ```ini No environment variables required! LangSmith tracing from the web worker. WARNING: FOR DEVELOPMENT ONLY. DO NOT DEPLOY A LIVE VERSION WITH THESE VARIABLES SET AS YOU WILL LEAK YOUR LANGCHAIN API KEY. NEXT_PUBLIC_LANGCHAIN_TRACING_V2="true"
NEXT_PUBLIC_LANGCHAIN_API_KEY=
NEXT_PUBLIC_LANGCHAIN_PROJECT=
``` Just make sure you don't set this in production, as your LangChain API key will be public on the frontend! 📖 Further reading For a bit more on this topic, check out my blog post on Ollama or my Google Summit talk on building with LLMs in the browser . 🙏 Thank you! Special thanks to: @dawchihliou for making Voy @jmorgan and @mchiang0610 for making Ollama and for your feedback @charlie_ruan for your incredible work on WebLLM @xenovacom for making Transformers.js And @jason_mayes and @nfcampos for inspiration and some great conversations. For more, follow me on Twitter @Hacubu !;Yes, it's another chat over documents implementation... but this one is entirely local!;[] | jacoblee93/fully-local-pdf-chatbot |
lyuchenyang/Macaw-LLM;Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration **_¹ ² [Chenyang Lyu](https://lyuchenyang.github.io), ³ [Minghao Wu](https://minghao-wu.github.io/), ¹ * [Longyue Wang](http://www.longyuewang.com/), ¹ [Xinting Huang](https://timhuang1.github.io/),_**
**_¹ [Bingshuai Liu](https://bingshuailiu.github.io), ¹ [Zefeng Du](https://seeledu.github.io/index-en.html), ¹ [Shuming Shi](https://shumingshi.github.io/), ¹ [Zhaopeng Tu](http://www.zptu.net/)_** _¹ Tencent AI Lab, ² Dublin City University, ³ Monash University_
_ * Longyue Wang is the corresponding author: [vinnlywang@tencent.com](mailto:{vinnlywang@tencent.com)_ Macaw-LLM is an exploratory endeavor that pioneers multi-modal language modeling by seamlessly combining image🖼️, video📹, audio🎵, and text📝 data, built upon the foundations of CLIP, Whisper, and LLaMA. 📰 Paper :building_construction: Model (via dropbox) :building_construction: Model (via weiyun) :card_file_box: Dataset :bricks: Code :monocle_face: Video :technologist: Demo Table of Contents 📚 Introduction Key Features Architecture Alignment Strategy Installation Usage Future Work and Contributions Introduction In recent years, the field of language modeling has witnessed remarkable advancements. However, the integration of multiple modalities, such as images, videos, audios, and text, has remained a challenging task. Macaw-LLM is a model of its kind, bringing together state-of-the-art models for processing visual, auditory, and textual information, namely CLIP, Whisper, and LLaMA. Key Features 🔑 Macaw-LLM boasts the following unique features: Simple & Fast Alignment : Macaw-LLM enables seamless integration of multi-modal data through simple and fast alignment to LLM embeddings. This efficient process ensures quick adaptation of diverse data types. One-Stage Instruction Fine-Tuning : Our model streamlines the adaptation process through one-stage instruction fine-tuning, promoting a more efficient learning experience. New Multi-modal Instruction Dataset : We create a new multi-modal instruction dataset that covers diverse instructional tasks leveraging image and video modalities, which facilitates future work on multi-modal LLMs. Architecture Macaw-LLM is composed of three main components: CLIP : Responsible for encoding images and video frames. Whisper : Responsible for encoding audio data. LLM (LLaMA/Vicuna/Bloom): The language model that encodes instructions and generates responses. The integration of these models allows Macaw-LLM to process and analyze multi-modal data effectively. Alignment Strategy Our novel alignment strategy enables faster adaptation by efficiently bridging multi-modal features to textual features. The process involves: Encoding multi-modal features with CLIP and Whisper. Feeding the encoded features into an attention function, wherein the multi-modal features serve as the query and the embedding matrix of LLaMA as the key and value. Injecting the outputs into the input sequence (before instruction tokens) of LLaMA, allowing for a streamlined alignment process with minimal additional parameters. New Multi-modal Instruction Dataset 🆕 In this project, we generate a dataset using GPT-3.5-Turbo by providing image or video captions as prompts. To create this dataset, we use captions from the MS COCO dataset for images and the Charades and AVSD datasets for videos. Our dataset consists of approximately 69K examples based on COCO image captions and 50K examples based on Charades and AVSD video captions. We currently focus on single-turn dialogues but plan to expand into multi-turn dialogues and diverse multi-modal content in the future. This will enrich the dataset and improve fine-tuning for language learning models (LLMs). Installation To install Macaw-LLM, follow these steps: ```bash Clone the repository git clone https://github.com/lyuchenyang/Macaw-LLM.git Change to the Macaw-LLM directory cd Macaw-LLM Install required packages pip install -r requirements.txt Install ffmpeg yum install ffmpeg -y Install apex git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install
cd ..
``` Usage 🚀 Downloading dataset: Text data: stanford_alpaca/alpaca_data.json Image data: COCO Dataset VQA Dataset Video data: Charades and Video Dialog Image instruction data: Macaw-LLM image instruction dataset Video instruction data: Macaw-LLM video instruction dataset Dataset preprocessing: Place the data in three modalities to specific folders - data/text/ , data/image/ , data/video/ Extract frames and audio from videos: python preprocess_data.py Transform supervised data to dataset: python preprocess_data_supervised.py Transform unsupervised data to dataset: python preprocess_data_unsupervised.py Training: Execute the training script (you can specify the training parameters inside): ./train.sh Inference: Execute the inference script (you can give any customized inputs inside): ./inference.sh Examples We present several examples that highlight the proficiency of our Macaw-LLM in understanding and following multi-modal instructions.
These examples showcase our system's multi-modal ability to understand and generate responses based on images and videos. These examples demonstrate how our system comprehends visual content and produces high-quality, fluent responses in natural language conversations. Our system generates contextually relevant and informative answers to various questions about the image, demonstrating its capability to communicate about visual content naturally and fluently. Future Work and Contributions 🚀 While our model is still in its early stages, we believe that Macaw-LLM paves the way for future research in the realm of multi-modal language modeling. The integration of diverse data modalities holds immense potential for pushing the boundaries of artificial intelligence and enhancing our understanding of complex real-world scenarios. By introducing Macaw-LLM, we hope to inspire further exploration and innovation in this exciting area of study. We welcome contributions from the community to improve and expand Macaw-LLM's capabilities. 🤝 ToDo 👨💻 [ ] Evaluation: We show some examples showcasing the multi-modal ability of our Macaw-LLM. However, we acknowledge that these efforts may not be fully adequate for accurately and comprehensively demonstrate model capabilities. We aim to conduct extensive evaluation on our systems to evaluate its capability. [ ] More Language Models: We aim to extend Macaw-LLM by incorporating additional language models like Dolly, BLOOM, T-5, etc. This will enable more robust and versatile processing and understanding of multi-modal data. [ ] Multilingual Support: Our next step is to support multiple languages, moving towards true multi-modal and multilingual language models. We believe this will significantly broaden Macaw-LLM's applicability and enhance its understanding of diverse, global contexts. Acknowledgements 🙏 We would like to express our gratitude to the following open-source projects for their valuable contributions to Macaw-LLM: Stanford Alpaca for providing the Alpaca dataset, which we used in our experiments. Parrot for providing a helpful implementation of the training of LLaMA. CLIP for providing a strong image and video encoding model. Whisper for providing a strong audio encoding model. LLaMA for providing a powerful LLM. We would also like to thank the developers and maintainers of these projects for their dedication and hard work in making their projects open-source and accessible to the community. Citation bibtex
@article{lyu2023macaw,
title={Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration},
author={Lyu, Chenyang and Wu, Minghao and Wang, Longyue and Huang, Xinting and Liu, Bingshuai and Du, Zefeng and Shi, Shuming and Tu, Zhaopeng},
journal={arXiv preprint arXiv:2306.09093},
year={2023}
};Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration;language-model,multi-modal-learning,natural-language-processing,deep-learning,machine-learning,neural-networks | lyuchenyang/Macaw-LLM |
lucavallin/barco;barco barco is a project I worked on to learn more about Linux containers and the Linux kernel, based on other guides on the internet.
Linux containers are made up of a set of Linux kernel features: namespaces : are used to group kernel objects into different sets that can be accessed by specific process trees. There are different types of namespaces , for example, the PID namespace is used to isolate the process tree, while the network namespace is used to isolate the network stack. seccomp : is used to limit the system calls that a process can make (handled via syscalls) capabilities : are used to set limits on what uid 0 (root) can do (handled via syscalls) cgroups : are used to limit the resources (e.g. memory, disk I/O, CPU-time) that a process can use (handled via cgroupfs) Usage barco can be used to run bin/sh . from the / directory as root (-u 0) with the following command (optional -v for verbose output): ```bash
$ sudo ./bin/barco -u 0 -m / -c /bin/sh -a . [-v] 22:08:41 INFO ./src/barco.c:96: initializing socket pair...
22:08:41 INFO ./src/barco.c:103: setting socket flags...
22:08:41 INFO ./src/barco.c:112: initializing container stack...
22:08:41 INFO ./src/barco.c:120: initializing container...
22:08:41 INFO ./src/barco.c:131: initializing cgroups...
22:08:41 INFO ./src/cgroups.c:73: setting memory.max to 1G...
22:08:41 INFO ./src/cgroups.c:73: setting cpu.weight to 256...
22:08:41 INFO ./src/cgroups.c:73: setting pids.max to 64...
22:08:41 INFO ./src/cgroups.c:73: setting cgroup.procs to 1458...
22:08:41 INFO ./src/barco.c:139: configuring user namespace...
22:08:41 INFO ./src/barco.c:147: waiting for container to exit...
22:08:41 INFO ./src/container.c:43: ### BARCONTAINER STARTING - type 'exit' to quit ### ls bin home lib32 media root sys vmlinuz
boot initrd.img lib64 mnt run tmp vmlinuz.old
dev initrd.img.old libx32 opt sbin usr
etc lib lost+found proc srv var echo "i am a container" i am a container exit 22:08:55 INFO ./src/barco.c:153: freeing resources...
22:08:55 INFO ./src/barco.c:168: so long and thanks for all the fish
``` Setup barco requires a number of tools and libraries to be installed to build the project and for development. ```bash Install all required tooling and dependencies $ sudo apt install -y make
$ make setup
``` Dependencies barco depends on the following "non-standard" libraries: libseccomp : used to set up seccomp filters libcap : used to set container capabilities libcuni1 : used for testing with CUnit argtable : used to parse command line arguments rxi/log.c : used for logging barco uses a number of LLVM-18-based tools for development, linting, formatting, debugging and Valgrind to check for memory leaks. Build The included Makefile provides a few targets to build barco .
The variable debug=1 can be set to run any of the targets in "debug" mode, which builds the project with debug symbols and without optimizations (especially useful for the debugger and valgrind). ```bash Build barco (executable is in bin/) The default target also runs "make lint" and "make format" to lint and format the code $ make Build barco with debug flags $ make debug=1
``` Development barco is developed using Visual Studio Code and GitHub Codespaces . The repository contains all the necessary configuration files to use these tools effectively. barco relies on low-level Linux features, so it must be run on a Linux system. GitHub Codespaces acts weird at times when tweaking low-level container settings: I found getutm.app to work well with Debian on my Mac when in doubt. The included Makefile provides a few targets useful for development: ```bash Run tests $ make test Run linter $ make lint Run formatter $ make format Run valgrind $ make check Clean the build $ make clean
``` Furthermore, the project includes a Visual Studio Code configuration in .vscode/ that can be used to run the built-in debugger (at this moment it is "disabled" since barco should be run as root and CodeLLDB does not have that option). Structure The project is structured as follows: txt
├── .devcontainer configuration for GitHub Codespaces
├── .github configuration GitHub Actions and other GitHub features
├── .vscode configuration for Visual Studio Code
├── bin the executable (created by make)
├── build intermediate build files e.g. *.o (created by make)
├── docs documentation
├── include header files
├── lib third-party libraries
├── scripts scripts for setup and other tasks
├── src C source files
│ ├── barco.c (main) Entry point for the CLI
│ └── *.c
├── tests contains tests
├── .clang-format configuration for the formatter
├── .clang-tidy configuration for the linter
├── .gitignore
├── LICENSE
├── Makefile
└── README.md Testing and documentation At the moment, the project does not contain any automated tests or tools to document the code.
In the future, suitable tools for automated testing and documentation might be added. Limitations barco has been tested on Debian 12, running the Linux kernel at version 6.1.0, with user namespaces and cgroupsv2 enabled. barco does not handle network namespaces, so the container cannot access the network. Networking can roughly be setup as follows: create a new network namespace create a virtual ethernet pair move one end of the pair to the new network namespace assign an IP address to the interface in the new network namespace setup routing and NAT In C this is usually done via the rtnetlink interface. Furthermore, network usage can be limited with the net_prio cgroup controller. Improvements Investigate further, document and refactor: user and mount and cgroup namespaces, syscalls and capabilities The functions in cgroups.c , mount.c , sec.c , user.c are specific to barco and should be made more generic CMake and Conan are industry standards, so they should be used eventually instead of Make and the current build system. Unfortunately, CMake and Conan also add a lot of complexity which is not needed at this time. Credits Some of the resources that have been used to develop barco are: Linux kernel documentation user_namespaces docs cgroup_namespaces docs mount_namespaces docs Linux containers in 500 lines of code Containers from scratch The current adoption status of cgroup v2 in containers Docker under the Hood A deep dive into Linux namespaces FAQ Why C? I haven't written much C since college and nostalgia got me. What does "barco" mean? It's Venetian (my native language) for "hay barrack". Nice logo, did you design it yourself? Kind of, I asked Midjourney to come up with some designs.;Linux containers from scratch in C.;c,containers,linux | lucavallin/barco |
cschroeter/park-ui;Park UI Beautifully designed components built on Ark UI that work for the JS and CSS frameworks of your choice. Supported Frameworks | | Chakra UI | Panda CSS | Tailwind CSS | UnoCSS |
| --------------------------------- | ---------------------------------- | ----------------------------------- | ---------------------------------------- | ----------------------------- |
| React | In Progress | ✓ | ✓ | Planned |
| Solid | - | ✓ | ✓ | Planned |
| Vue | - | ✓ | ✓ | Planned | Documentation Visit http://park-ui.com to view the documentation. Contributing If you would like to contribute to Park UI, please reach out to me on Twitter . License Licensed under the MIT license .;Beautifully designed components built on Ark UI that work for the JS and CSS frameworks of your choice.;[] | cschroeter/park-ui |
huggingface/cookbook;Open-Source AI Cookbook This repo contains community-driven practical examples of building AI applications and solving various tasks with AI
using open-source tools and models. Contributing to the cookbook Everyone is welcome to contribute, and we value everybody's contribution! There are several ways you can contribute to
the Open-Source AI Cookbook : Submit an idea for a desired example/guide via GitHub Issues . Contribute a new notebook with a practical example. Improve existing examples by fixing issues/typos. Before contributing, check currently open issues and pull requests to avoid working on something that someone else is
already working on. What makes a good Cookbook notebook? We believe that the Cookbook will be the most beneficial for everyone in the community if the Jupyter notebooks have the
following qualities: Practical : Your notebook should provide an illustration of an end-to-end project or a specific aspect of AI development. Aim for real-world applications, but try to avoid overcomplicating. Clearly explain the objectives, challenges and steps involved. Build with open-source tools and models : Utilize open source libraries, datasets, and pre-trained models available under permissive licenses. Include links to all resources used within the notebook. Clearly written : Ensure your writing is clear, concise, and free from grammatical errors. Maintain a friendly and approachable tone throughout the notebook. Explain the steps you take to solve a problem, challenges, alternative approaches. Executes without errors : Test your notebook to avoid runtime errors. Adds to existing "recipes" : Before submitting, review existing notebooks to confirm that the subject hasn't been covered yet. We welcome diverse use cases, modalities, techniques, and approaches! Creating a pull request To contribute a new example/guide, open a pull request, and tag @merveenoyan and @stevhliu. Here are some tips: Make sure that your notebook's file name is in lowercase. Don't forget to add your notebook to the _toctree.yml and to index.md . Right after the notebook's first header, add yourself as an author like this: _Authored by: [Aymeric Roucher](https://huggingface.co/m-ric)_ . You can link to your Hugging Face profile, or to your GitHub profile. Remove non-informative code cell outputs (e.g. from pip install ). Make sure the notebook doesn't contain any empty code cells. If using any images in the markdown, upload them to the huggingface/cookbook-images dataset. Then use the link to the image in your markdown, e.g.: ![RAG diagram](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/rag-diagram.png) Once your pull request is merged, the notebook will show up in the Open-Source AI Cookbook . Translating the Cookbook into your language We'd love to have the Cookbook to be available in many more languages! Please follow the steps below if you'd like to
help translate the notebooks into your language 🙏. If some of the notebooks have already been translated into your language, add new translated notebooks
under notebooks/your_language . Don't forget to add the new translated notebook to notebooks/your_language/_toctree.yml ,
and to notebooks/your_language/index.md . If the notebooks have not yet been translated to your language, create a directory under notebooks with your LANG-ID (e.g. see en for English, zh-CN for Chinese). The LANG-ID should be ISO 639-1 (two lower case letters) language
code -- see here for reference. Alternatively, {two lowercase letters}-{two uppercase letters} format is also supported, e.g. zh-CN . Create the notebooks/LANG-ID/_toctree.yml , and notebooks/LANG-ID/index.md , and add the translated notebook. Finally, add your language code (the exact same LANG-ID ) to the build_documentation.yml and build_pr_documentation.yml files in the .github/workflows folder.;Open-source AI cookbook;[] | huggingface/cookbook |
YORG-AI/Open-Assistant;YORG open-assistant 😎Introduction OpenAI's Assistant API is awesome : channeling the power of Code interpreter and Retrieval and thereby helping developers build powerful AI assistants capable of performing various tasks . However, it executes code within an online sandbox and requires us to upload our files to OpenAI's platform -- which does not sound that awesome ... Y'ORG AI thus introduces the Open Assistant , which allows you to run your code locally , retrieve knowledge from local files (without sending them to OpenAI), and access more developer-friendly tools! 🌟Key Advantages Our platform 🌐 is designed with the developer and data analyst in mind, offering unparalleled advantages: Fortified Data Privacy : Your sensitive information never leaves your own secure servers. Boundless Document Handling : Wave goodbye to restrictions on file size or quantity. Cost Efficiency : Eliminate session and retrieval🔍 costs associated with cloud-based services. Local LLM Flexibility : Opt for the Large Language Model of your choice and maintain all operations in-house. 🔌Tools and pre-built assistants Y'ORG provides additional tools for developers👩💻👨💻:
- Understand codebases.
- Draft development specification.
- Introduce new features into existing projects. And for data analysts:
- Conduct data analysis.
- Create reports for diverse audiences. We also provide pre-built assistants for our users: SDE assistant : Transform feature requests into executable code and documentation by harnessing the collective intelligence of your entire code repository—no IDE required. Data assistant : Analyze datasets, distill insights, and convert them into comprehensive reports for varied stakeholders. 🔍What's yorgassistant Yorgassistant is a streamlined open source framework designed for users to efficiently build agents based on large language models. It includes a set of built-in tools for enhancing the functionality of LLMs. Here is an overview of our framework: 🏖️webui demo For details, you can open the readme file in the webui folder . 👨💻Quick Start Set up package shell
pip install yorgassistant Set up your OpenAI API key
(If you are in China, please set up a proxy to ensure that you can connect to openai.)
```python
import os os.environ['http_proxy'] = 'http://127.0.0.1:10809' # set your own proxy port here os.environ['https_proxy'] = 'http://127.0.0.1:10809' # set your own proxy port here os.environ['OPENAI_CHAT_API_KEY'] = 'sk-br3j7Gxxxxxxxxvt8r'
```
- set up yaml file
We ship some built-in tools, e.g. code_interpreter and swe_tool . tools.yaml yaml
YORG: 0.0.1
info:
title: yorg_tools_document
description: yorg tool define document.
version: 'v1'
tools:
code_interpreter:
name: code_interpreter
entity_name: code_interpreter
summary: Run the code through code_interpreter and return the result of the code run. If your query is about math, computer science, or data science, you can use this tool to get the result of the code run.
parameters:
- name: code
description: code text that requires code_interpreter to run
required: true
parameter_schema:
type: string
responses:
success:
description: OK
content:
result:
type: string
description: the result of the code run
example_stateful_tool:
name: example_stateful_tool
entity_name: example_stateful_tool
summary: This tool is an example of a stateful tool. It will get two numbers from user input and return the sum of the two numbers.
parameters: []
responses: {}
swe_tool:
name: swe_tool
entity_name: swe_tool
summary: SoftWare Engineer Agent(swe_tool) specializes in working with code files.
parameters: []
responses: {} Tools are categorized into stateful tools and function tools.
Function tools can describe their parameters and return values directly in tools.yaml .
Function tools can be registered using decorators: python
from yorgassistant.core.assistant.tools.tools import register_function_tool
@register_function_tool
def code_test(code: str):
return {
"type": "success",
"content": {
"result": "Hello, World!",
},
} If it's a stateful tool you need to write an additional yaml file with a stateful description example_stateful_tool.yaml yaml
start_stage: "init"
finish_stage: "finish"
all_stages:
init:
name: "init"
next_stage_entry:
stage_1:
- name: x
required: true
parameter_schema:
type: number
description: "input value x"
need_llm_generate_parameters: false
need_llm_generate_response: false
stage_1:
name: "stage_1"
next_stage_entry:
stage_2:
- name: y
required: true
parameter_schema:
type: number
description: "input value y"
need_llm_generate_parameters: false
need_llm_generate_response: false
stage_2:
name: "stage_2"
next_stage_entry:
stage_3: []
need_llm_generate_parameters: false
need_llm_generate_response: false
stage_3:
name: "stage_3"
next_stage_entry:
finish: []
need_llm_generate_parameters: false
need_llm_generate_response: false
finish:
name: "finish"
next_stage_entry: {}
need_llm_generate_parameters: false
need_llm_generate_response: false Stateful tools can also be registered using decorators.
The yaml file is registered in `__init__`.
```python
from yorgassistant.core.assistant.tools.tools import register_stateful_tool
from yorgassistant.core.assistant.tools.stateful_tool_entity import StatefulToolEntity
@register_stateful_tool
class ExampleStatefulToolEntity(StatefulToolEntity):
"""
This example tool entity is stateful, and it has 3 inner stages. stage1: take integer x as input
stage2: take integer y as input
stage3: no input, return x + y
"""
def __init__(self):
super().__init__("example_stateful_tool.yaml")
def _call(self, **kwargs):
if "goto" not in kwargs:
if self.current_stage.name == self.config.start_stage:
return {
"type": "success",
"content": {"message": "stateful tool is started"},
}
else:
return {
"type": "error",
"content": {"message": "please provide `goto` parameter"},
}
request_next_stage = kwargs["goto"]
if request_next_stage not in self.config.all_stages:
return {
"type": "error",
"content": {"message": f"stage {request_next_stage} not found"},
}
self.current_stage = self.config.all_stages[request_next_stage]
match self.current_stage.name:
case "stage_1":
return self._stage1(kwargs["x"])
case "stage_2":
return self._stage2(kwargs["y"])
case "stage_3":
return self._stage3()
case self.config.finish_stage:
return self._finish()
case _:
return {
"type": "error",
"content": {
"message": f"stage {self.current_stage.name} not found"
},
}
def _stage1(self, x: int):
self.x = x
return {"type": "success", "content": {"message": "stage1 done"}}
def _stage2(self, y: int):
self.y = y
return {"type": "success", "content": {"message": "stage2 done"}}
def _stage3(self):
return {"type": "success", "content": {"result": self.x + self.y}}
def _finish(self):
return {"type": "success", "content": {"message": "stateful tool is finished"}} - setting data python
import yorgassistant
yorgassistant.Threads.set_threads_yaml_path('data/threads.yaml')
yorgassistant.Assistants.set_assistants_yaml_path('data/assistants.yaml') - find assistants python
import yorgassistant
assistants_list = yorgassistant.Assistants.get_all_assistants()
print(assistants_list) - find threads python
import yorgassistant
threads_list = yorgassistant.Threads.get_all_threads()
print(threads_list)
```
- run example ```python
import yorgassistant
yorgassistant.Threads.set_threads_yaml_path('data/threads.yaml')
yorgassistant.Assistants.set_assistants_yaml_path('data/assistants.yaml')
yorgassistant.Tools.set_tools_yaml_path('tools.yaml')
threads = yorgassistant.Threads.create()
print(threads.id)
assistant = yorgassistant.Assistants.create(name="Test Assistant", model="gpt-4-1106-preview", instructions="Use swe tool auto fix code files", tools=[{'type':'swe_tool'}])
print(assistant.id) result = threads.run(assistant.id, "Use SoftWare Engineer Agent swe tool auto fix code files.")
print(result) result = threads.run(assistant.id, "the repo url is https://github.com/YORG-AI/Open-Assistant",goto="stage_1")
print(result) result = threads.run(assistant.id, "add helloworld feature to readme", goto="stage_2")
print(result) result = threads.run(assistant.id, "focus_files_name_list = [README.md]", goto="stage_3")
print(result) result = threads.run(assistant.id, "action=3", goto="stage_4")
print(result) result = threads.run(assistant.id, "", goto="stage_5")
print(result) result = threads.run(assistant.id, "action=0,action_idx=0", goto="stage_6")
print(result) result = threads.run(assistant.id, "", goto="finish")
print(result)
``` or
```python
import yorgassistant
yorgassistant.Threads.set_threads_yaml_path('data/threads.yaml')
yorgassistant.Assistants.set_assistants_yaml_path('data/assistants.yaml')
yorgassistant.Tools.set_tools_yaml_path('tools.yaml')
assistant = yorgassistant.Assistants.from_id('56b0a8c9-b8b4-4c86-8d68-c7793235283b')
threads = yorgassistant.Threads.from_id("6014b05d-1be9-4e8d-933c-ed8f17dfa8f0")
result = threads.run(assistant.id, "Use SoftWare Engineer Agent swe tool auto fix code files.")
print(result) result = threads.run(assistant.id, "the repo url is https://github.com/YORG-AI/Open-Assistant",goto="stage_1")
print(result) result = threads.run(assistant.id, "add helloworld feature to readme", goto="stage_2")
print(result) result = threads.run(assistant.id, "focus_files_name_list = [README.md]", goto="stage_3")
print(result) result = threads.run(assistant.id, "action=3", goto="stage_4")
print(result) result = threads.run(assistant.id, "", goto="stage_5")
print(result) result = threads.run(assistant.id, "action=0,action_idx=0", goto="stage_6")
print(result) result = threads.run(assistant.id, "", goto="finish")
print(result)
``` 🤝 Contributing 🌟Star History 📋Citation If you find this project useful in your research, please consider citing it: latex
@misc{yorgassistant2023,
title={{yorgassistant} A streamlined open source framework designed for users to efficiently build agents based on large language models.},
author={YORG-AI Developer Team},
howpublished = {\url{https://github.com/YORG-AI/Open-Assistant}},
year={2023}
} 📞Contact If you have any questions / feedback / comment, do not hesitate to contact us. Email: contact@yorg.ai GitHub Issues: For more technical inquiries, you can also create a new issue in our GitHub repository. Back to top;YORG Open Source Version;[] | YORG-AI/Open-Assistant |
CStanKonrad/long_llama;LongLLaMA: Focused Transformer Training for Context Scaling >_ 🎓 LongLLaMA-Code 7B Instruct 📑🗨 Learn more ⇧ { LongLLaMA-Code 7B } LongLLaMA-Instruct-3Bv1.1 LongLLaMA-3Bv1.1 [TLDR](#TLDR) | [Overview](#Overview) | [Usage](#Usage) | [LongLLaMA performance](#LongLLaMA-performance) | [Authors](#Authors) | [Citation](#Citation) | [License](#License) | [Acknowledgments](#Acknowledgments)
[FoT continued pretraining](fot_continued_pretraining) | [Instruction tuning](instruction_fine_tuning) TLDR This repository contains the research preview of LongLLaMA, a large language model capable of handling long contexts of 256k tokens or even more . LongLLaMA is built upon the foundation of OpenLLaMA and fine-tuned using the Focused Transformer (FoT) method.
LongLLaMA Code is built upon the foundation of Code Llama .
We release a smaller 3B base variant (not instruction tuned) of the LongLLaMA model on a permissive license (Apache 2.0) and inference code supporting longer contexts on Hugging Face . Our model weights can serve as the drop-in replacement of LLaMA in existing implementations (for short context up to 2048 tokens). Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models. In addition to this, we release code for instruction tuning (PyTorch) and FoT continued pretraining (JAX) . Overview Base models Focused Transformer: Contrastive Training for Context Scaling (FoT) presents a simple method for endowing language models with the ability to handle context consisting possibly of millions of tokens while training on significantly shorter input. FoT permits a subset of attention layers to access a memory cache of (key, value) pairs to extend the context length. The distinctive aspect of FoT is its training procedure, drawing from contrastive learning. Specifically, we deliberately expose the memory attention layers to both relevant and irrelevant keys (like negative samples from unrelated documents). This strategy incentivizes the model to differentiate keys connected with semantically diverse values, thereby enhancing their structure. This, in turn, makes it possible to extrapolate the effective context length much beyond what is seen in training. LongLLaMA is an OpenLLaMA model finetuned with the FoT method,
with three layers used for context extension. Crucially, LongLLaMA is able to extrapolate much beyond the context length seen in training: $8k$. E.g., in the passkey retrieval task, it can handle inputs of length $256k$ . LongLLaMA Code is a Code Llama model finetuned with the FoT method. | | [LongLLaMA-3B](https://huggingface.co/syzymon/long_llama_3b) | [LongLLaMA-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_v1_1) | [LongLLaMA-Code 7B](https://huggingface.co/syzymon/long_llama_code_7b) |
|----------------|----------|----------|-----------|
| Source model | [OpenLLaMA-3B](https://huggingface.co/openlm-research/open_llama_3b_easylm) | [OpenLLaMA-3Bv2](https://huggingface.co/openlm-research/open_llama_3b_v2_easylm) | [CodeLLaMA-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) |
| Source model tokens | 1T | 1 T | 2T + 0.5 T |
| Fine-tuning tokens | 10B | 5B | 35B |
| Memory layers | 6, 12, 18 | 6, 12, 18 | 8, 16, 24 | FoT continued pretraining In the fot_continued_pretraining subfolder, we provide the code that can be used to tune LLaMA models with FoT. This code is written in JAX & Flax and based on EasyLM . Instruction/Chat tuning In the instruction_fine_tuning subfolder, we provide the code that was used to create LongLLaMA-Instruct-3Bv1.1 , an instruction-tuned version of LongLLaMA-3Bv1.1 . We used OpenOrca (instructions) and zetavg/ShareGPT-Processed (chat) datasets for tuning. This code utilizes PyTorch and Hugging Face trainer . Inference code In the src subfolder we provide inference code for FoT models. The code is written in PyTorch and based on Hugging Face implementation of LLaMA . The code should support standard Hugging Face API. For more details see the Usage section. Usage See also:
* Colab with LongLLaMA-Instruct-3Bv1.1 .
* Colab with an example usage of base LongLLaMA . Requirements pip install --upgrade pip
pip install transformers==4.33.2 sentencepiece accelerate Loading model ```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = AutoModelForCausalLM.from_pretrained("syzymon/long_llama_3b_v1_1",
torch_dtype=torch.float32,
trust_remote_code=True)
``` Input handling and generation LongLLaMA uses the Hugging Face interface; the long input given to the model will be
split into context windows and loaded into the memory cache. python
prompt = "My name is Julien and I like to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model(input_ids=input_ids) During the model call, one can provide the parameter last_context_length (default $1024$), which specifies the number of tokens left in the last context window. Tuning this parameter can improve generation as the first layers do not have access to memory. See details in How LongLLaMA handles long inputs . python
generation_output = model.generate(
input_ids=input_ids,
max_new_tokens=256,
num_beams=1,
last_context_length=1792,
do_sample=True,
temperature=1.0,
)
print(tokenizer.decode(generation_output[0])) Additional configuration LongLLaMA has several other parameters:
* mem_layers specifies layers endowed with memory (should be either an empty list or a list of all memory layers specified in the description of the checkpoint).
* mem_dtype allows changing the type of memory cache
* mem_attention_grouping can trade off speed for reduced memory usage.
When equal to (4, 2048) , the memory layers will process at most $4*2048$ queries at once ($4$ heads and $2048$ queries for each head). ```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = AutoModelForCausalLM.from_pretrained(
"syzymon/long_llama_3b_v1_1", torch_dtype=torch.float32,
mem_layers=[],
mem_dtype='bfloat16',
trust_remote_code=True,
mem_attention_grouping=(4, 2048),
)
``` Drop-in use with LLaMA code LongLLaMA checkpoints can also be used as a drop-in replacement for LLaMA checkpoints in Hugging Face implementation of LLaMA , but in this case, they will be limited to the original context length of $2048$. ```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = LlamaForCausalLM.from_pretrained("syzymon/long_llama_3b_v1_1", torch_dtype=torch.float32)
``` How LongLLaMA handles long inputs Inputs over $lctx=2048$ ($lctx=4096$ for LongLLaMA Code) tokens are automatically split into windows $w_1, \ldots, w_m$. The first $m-2$ windows contain $lctx$ tokens each, $w_{m-1}$ has no more than $lctx$ tokens, and $w_m$ contains the number of tokens specified by last_context_length . The model processes the windows one by one extending the memory cache after each. If use_cache is True , then the last window will not be loaded to the memory cache but to the local (generation) cache. The memory cache stores $(key, value)$ pairs for each head of the specified memory layers mem_layers . In addition to this, it stores attention masks. If use_cache=True (which is the case in generation), LongLLaMA will use two caches: the memory cache for the specified layers and the local (generation) cache for all layers. When the local cache exceeds $lctx$ elements, its content is moved to the memory cache for the memory layers. For simplicity, context extension is realized with a memory cache and full attention in this repo. Replacing this simple mechanism with a KNN search over an external database is possible with systems like Faiss . This potentially would enable further context length scaling. We leave this as a future work. LongLLaMA performance We present some illustrative examples of LongLLaMA results. Refer to our paper Focused Transformer: Contrastive Training for Context Scaling for more details. We manage to achieve good performance on the passkey retrieval task from Landmark Attention: Random-Access Infinite Context Length for Transformers . The code for generating the prompt and running the model is located in examples/passkey.py . Our LongLLaMA 3B model also shows improvements when using long context on two downstream tasks, TREC question classification and WebQS question answering. | Context/Dataset | TREC | WebQS |
| --- | --- | --- |
| $2K$ | 67.0 | 21.2 |
| $4K$ | 71.6 | 21.4 |
| $6K$ | 72.9 | 22.2 |
| $8K$ | **73.3** | **22.4** | LongLLaMA retains performance on tasks that do not require long context. In particular, LongLLaMA-Code 7B improves reasoning (GSM8K) and knowledge (MMLU) due to code fine-tuning: We provide a comparison with OpenLLaMA
on lm-evaluation-harness in the zero-shot setting. | Task/Metric | OpenLLaMA-3B | LongLLaMA-3B |
|----------------|----------|-----------|
| anli_r1/acc | 0.33 | 0.32 |
| anli_r2/acc | 0.32 | 0.33 |
| anli_r3/acc | 0.35 | 0.35 |
| arc_challenge/acc | 0.34 | 0.34 |
| arc_challenge/acc_norm | 0.37 | 0.37 |
| arc_easy/acc | 0.69 | 0.68 |
| arc_easy/acc_norm | 0.65 | 0.63 |
| boolq/acc | 0.68 | 0.68 |
| hellaswag/acc | 0.49 | 0.48 |
| hellaswag/acc_norm | 0.67 | 0.65 |
| openbookqa/acc | 0.27 | 0.28 |
| openbookqa/acc_norm | 0.40 | 0.38 |
| piqa/acc | 0.75 | 0.73 |
| piqa/acc_norm | 0.76 | 0.75 |
| record/em | 0.88 | 0.87 |
| record/f1 | 0.89 | 0.87 |
| rte/acc | 0.58 | 0.60 |
| truthfulqa_mc/mc1 | 0.22 | 0.24 |
| truthfulqa_mc/mc2 | 0.35 | 0.38 |
| wic/acc | 0.48 | 0.50 |
| winogrande/acc | 0.62 | 0.60 |
| Avg score | 0.53 | 0.53 | Starting with v1.1 models we have decided to use EleutherAI implementation of lm-evaluation-harness with a slight modification, that adds <bos> token at beginning of input sequence. The results are provided in the table below. | description | LongLLaMA-3B | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
|:-----------------------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|
| anli_r1/acc | 0.32 | 0.33 | 0.31 | 0.33 |
| anli_r2/acc | 0.33 | 0.35 | 0.33 | 0.35 |
| anli_r3/acc | 0.35 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.33 | 0.32 | 0.36 |
| arc_challenge/acc_norm | 0.37 | 0.36 | 0.36 | 0.37 |
| arc_easy/acc | 0.67 | 0.68 | 0.68 | 0.7 |
| arc_easy/acc_norm | 0.63 | 0.63 | 0.63 | 0.63 |
| boolq/acc | 0.68 | 0.67 | 0.66 | 0.77 |
| hellaswag/acc | 0.48 | 0.53 | 0.52 | 0.52 |
| hellaswag/acc_norm | 0.65 | 0.7 | 0.69 | 0.68 |
| openbookqa/acc | 0.28 | 0.28 | 0.28 | 0.28 |
| openbookqa/acc_norm | 0.38 | 0.39 | 0.37 | 0.41 |
| piqa/acc | 0.73 | 0.77 | 0.77 | 0.78 |
| piqa/acc_norm | 0.75 | 0.78 | 0.77 | 0.77 |
| record/em | 0.87 | 0.87 | 0.86 | 0.85 |
| record/f1 | 0.88 | 0.88 | 0.87 | 0.86 |
| rte/acc | 0.6 | 0.53 | 0.62 | 0.7 |
| truthfulqa_mc/mc1 | 0.24 | 0.22 | 0.21 | 0.25 |
| truthfulqa_mc/mc2 | 0.38 | 0.35 | 0.35 | 0.4 |
| wic/acc | 0.5 | 0.5 | 0.5 | 0.54 |
| winogrande/acc | 0.6 | 0.66 | 0.63 | 0.65 |
| Avg score | 0.53 | 0.53 | 0.53 | 0.55 | We also provide the results on human-eval. We cut the generated text after either
* "\ndef " * "\nclass " * "\nif __name__" | | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
| - | - | - | - |
| pass@1| 0.09| 0.12 | 0.12 | Authors Szymon Tworkowski Konrad Staniszewski Mikołaj Pacek Henryk Michalewski Yuhuai Wu Piotr Miłoś Citation To cite this work please use bibtex
@misc{tworkowski2023focused,
title={Focused Transformer: Contrastive Training for Context Scaling},
author={Szymon Tworkowski and Konrad Staniszewski and Mikołaj Pacek and Yuhuai Wu and Henryk Michalewski and Piotr Miłoś},
year={2023},
eprint={2307.03170},
archivePrefix={arXiv},
primaryClass={cs.CL}
} License The source code and base LongLLaMA 3B models checkpoints are licensed under Apache License, Version 2.0 . The instruction/chat tuned models are for research purposes only. For the LongLLaMA-Code 7B see codellama/CodeLlama-7b-hf license. LongLLaMA-Code 7B Instruct is LongLLaMA-Code 7B tuned on TIGER-Lab/MathInstruct , OpenOrca and ShareGPT-Processed datasets. Some of the examples use external code (see headers of files for copyright notices and licenses). Acknowledgments We gratefully acknowledge the TPU Research Cloud program, which was instrumental to our research by providing significant computational resources. We are also grateful to Xinyang Geng and Hao Liu for releasing OpenLLaMA checkpoints and the EasyLM library. Special thanks to Keiran Paster for providing immensely valuable suggestions about the pre-training data for LongLLaMA-Code. We would like to thank Xiaosong,He for suggestions on how to improve the explanations of cross-batch code.;LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.;[] | CStanKonrad/long_llama |
psychic-api/rag-stack;🧺 RAGstack Deploy a private ChatGPT alternative hosted within your VPC. Connect it to your organization's knowledge base and use it as a corporate oracle. Supports open-source LLMs like Llama 2, Falcon, and GPT4All. Retrieval Augmented Generation (RAG) is a technique where the capabilities of a large language model (LLM) are augmented by retrieving information from other systems and inserting them into the LLM’s context window via a prompt. This gives LLMs information beyond what was provided in their training data, which is necessary for almost every enterprise use case. Examples include data from current web pages, data from SaaS apps like Confluence or Salesforce, and data from documents like sales contracts and PDFs. RAG works better than fine-tuning the model because it’s cheaper, it’s faster, and it’s more reliable since the source of information is provided with each response. RAGstack deploys the following resources for retrieval-augmented generation: Open-source LLM GPT4All: When you run locally, RAGstack will download and deploy Nomic AI's gpt4all model, which runs on consumer CPUs. Falcon-7b: On the cloud, RAGstack deploys Technology Innovation Institute's falcon-7b model onto a GPU-enabled GKE cluster. LLama 2: On the cloud, RAGstack can also deploy the 7B paramter version of Meta's Llama 2 model onto a GPU-enabled GKE cluster. Vector database Qdrant : Qdrant is an open-source vector database written in Rust, so it's highly performant and self-hostable. Server + UI Simple server and UI that handles PDF upload, so that you can chat over your PDFs using Qdrant and the open-source LLM of choice. Run locally Copy ragstack-ui/local.env into ragstack-ui/.env Copy server/example.env into server/.env In server/.env replace YOUR_SUPABASE_URL with your supabase project url and YOUR_SUPABASE_KEY with your supabase secret API key. In ragstack-ui/.env replace YOUR_SUPABASE_URL with your supabase project url and YOUR_SUPABASE_PUBLIC_KEY with your supabase secret API key. You can find these values in your supabase dashboard under Settings > API In Supabase, create a table ragstack_users with the following columns:
| Column name | Type |
| ----------- | ---- |
| id | uuid |
| app_id | uuid |
| secret_key | uuid |
| email | text |
| avatar_url | text |
| full_name | text | If you added row level security, make sure that inserts and selects have a WITH CHECK expression of (auth.uid() = id) . Run scripts/local/run-dev . This will download ggml-gpt4all-j-v1.3-groovy.bin into server/llm/local/ and run the server, LLM, and Qdrant vector database locally. All services will be ready once you see the following message: INFO: Application startup complete. Deploy to Google Cloud To deploy the RAG stack using Falcon-7B running on GPUs to your own google cloud instance, go through the following steps: Run scripts/gcp/deploy-gcp.sh . This will prompt you for your GCP project ID, service account key file, and region as well as some other parameters (model, HuggingFace token etc). If you get an error on the Falcon-7B deployment step, run the following commands and then run scripts/gcp/deploy-gcp.sh again: gcloud config set compute/zone YOUR-REGION-HERE
gcloud container clusters get-credentials gpu-cluster
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml The deployment script was implemented using Terraform. You can run the frontend by creating a .env file in ragstack-ui and setting VITE_SERVER_URL to the url of the ragstack-server instance in your Google Cloud run. Deploy to AWS To deploy the RAG stack using Falcon-7B running on GPUs to your own AWS EC2 instances (using ECS), go through the following steps: Run scripts/aws/deploy-aws.sh . This will prompt you for your AWS credentials as well as some other parameters (model, HuggingFace token etc). The deployment script was implemented using Terraform. You can run the frontend by creating a .env file in ragstack-ui and setting VITE_SERVER_URL to the url of the ALB instance. Deploy to Azure To deploy the RAG stack using Falcon-7B running on GPUs to your own AKS, go through the following steps: Run ./azure/deploy-aks.sh . This will prompt you for your AKS subscription as well as some other parameters (model, HuggingFace token etc). The deployment script was implemented using Terraform. You can run the frontend by creating a .env file in ragstack-ui and setting VITE_SERVER_URL to the url of the ragstack-server service in your AKS. Please note that this AKS deployment is using node pool with NVIDIA Tesla T4 Accelerator which is not in all subscriptions available Roadmap ✅ GPT4all support ✅ Falcon-7b support ✅ Deployment on GCP ✅ Deployment on AWS ✅ Deployment on Azure 🚧 Llama-2-40b support Credits The code for containerizing Falcon 7B is from Het Trivedi's tutorial repo . Check out his Medium article on how to dockerize Falcon here !;🤖 Deploy a private ChatGPT alternative hosted within your VPC. 🔮 Connect it to your organization's knowledge base and use it as a corporate oracle. Supports open-source LLMs like Llama 2, Falcon, and GPT4All.;[] | psychic-api/rag-stack |
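To make the retrieval-augmented generation flow described at the top of this RAGstack README more concrete, here is a minimal, generic sketch. The embed, vector_search, and llm_generate helpers are placeholders for whatever embedding model, vector store (e.g. Qdrant), and self-hosted LLM you deploy; they are assumptions for illustration, not RAGstack's actual API.

```python
# Generic RAG flow: retrieve relevant chunks, splice them into the prompt, then generate.
def retrieve_and_generate(question, embed, vector_search, llm_generate, top_k=3):
    query_vector = embed(question)                      # 1. embed the user question
    chunks = vector_search(query_vector, top_k=top_k)   # 2. fetch the most similar document chunks
    context = "\n\n".join(chunks)
    prompt = (                                          # 3. insert retrieved text into the context window
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_generate(prompt)                         # 4. let the self-hosted LLM answer
```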
trzsz/trzsz-ssh;trzsz-ssh ( tssh ) - an openssh client alternative trzsz-ssh ( tssh ) is an ssh client designed as a drop-in replacement for the openssh client. It aims to provide complete compatibility with openssh, mirroring all its features, while also offering additional useful features not found in the openssh client. Basic Features trzsz-ssh ( tssh ) works exactly like the openssh client. The following common features have been implemented: | Features | Support Options |
| :------------: | :----------------------------------------------------------------------------------------------------------------: |
| Cipher | -c Ciphers |
| Pseudo TTY | -t -T RequestTTY |
| Network | -4 -6 AddressFamily |
| SSH Proxy | -J -W ProxyJump ProxyCommand |
| Multiplexing | ControlMaster ControlPath ControlPersist |
| Command | RemoteCommand , LocalCommand , PermitLocalCommand |
| SSH Agent | -a -A ForwardAgent IdentityAgent SSH_AUTH_SOCK |
| X11 Forward | -x -X -Y ForwardX11 ForwardX11Trusted ForwardX11Timeout |
| Known Hosts | UserKnownHostsFile GlobalKnownHostsFile StrictHostKeyChecking |
| Basic Login | -l -p -i -F HostName Port User IdentityFile SendEnv SetEnv |
| Authentication | PubkeyAuthentication PasswordAuthentication KbdInteractiveAuthentication |
| Port Forward | -g -f -N -L -R -D LocalForward RemoteForward DynamicForward GatewayPorts ClearAllForwardings | Extra Features trzsz-ssh ( tssh ) offers additional useful features: | English | 中文 |
| :---------------------------------------------------------: | :-----------------------------------------------------------: |
| Login Prompt | 登录界面 |
| Custom Theme | 主题风格 |
| trzsz ( trz / tsz ) | trzsz ( trz / tsz ) |
| zmodem ( rz / sz ) | zmodem ( rz / sz ) |
| Batch Login | 批量登录 |
| Group Labels | 分组标签 |
| Automated Interaction | 自动交互 |
| Remember Password | 记住密码 |
| Custom Configuration | 个性配置 |
| Comments of Config | 配置注释 |
| Other Features | 其他功能 |
| UDP Mode | UDP 模式 | Installation Install with scoop / winget / choco on Windows scoop install tssh / winget install tssh / choco install tssh ```sh
scoop install tssh
```
```sh
winget install tssh
```
```sh
choco install tssh
``` Install with homebrew on MacOS brew install trzsz-ssh ```sh
brew update
brew install trzsz-ssh
``` Install with apt on Ubuntu sudo apt install tssh ```sh
sudo apt update && sudo apt install software-properties-common
sudo add-apt-repository ppa:trzsz/ppa && sudo apt update
sudo apt install tssh
``` Install with apt on Debian sudo apt install tssh ```sh
sudo apt install curl gpg
curl -s 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x7074ce75da7cc691c1ae1a7c7e51d1ad956055ca' \
| gpg --dearmor -o /usr/share/keyrings/trzsz.gpg
echo 'deb [signed-by=/usr/share/keyrings/trzsz.gpg] https://ppa.launchpadcontent.net/trzsz/ppa/ubuntu jammy main' \
| sudo tee /etc/apt/sources.list.d/trzsz.list
sudo apt update
sudo apt install tssh
``` Install with yum on Linux sudo yum install tssh - Install with [gemfury](https://gemfury.com/) repository.
```sh
echo '[trzsz]
name=Trzsz Repo
baseurl=https://yum.fury.io/trzsz/
enabled=1
gpgcheck=0' | sudo tee /etc/yum.repos.d/trzsz.repo
sudo yum install tssh
```
- Install with [wlnmp](https://www.wlnmp.com/install) repository. It's not necessary to configure the epel repository for tssh.
```sh
curl -fsSL "https://sh.wlnmp.com/wlnmp.sh" | bash
sudo yum install tssh
``` Install with yay on ArchLinux yay -S tssh ```sh
yay -Syu
yay -S tssh
``` Install with Go ( Requires go 1.20 or later ) go install github.com/trzsz/trzsz-ssh/cmd/tssh@latest ```sh
go install github.com/trzsz/trzsz-ssh/cmd/tssh@latest
```
The binaries are usually located in ~/go/bin/ ( C:\Users\your_name\go\bin\ on Windows ). Build from source ( Requires go 1.20 or later ) sudo make install ```sh
git clone --depth 1 https://github.com/trzsz/trzsz-ssh.git
cd trzsz-ssh
make
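# optionally run the unit tests before installing (as noted in the Contributing section)
go test ./tssh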
sudo make install
``` Download from the GitHub Releases , unzip and add to PATH environment. Contributing Welcome and thank you for considering contributing. We appreciate all forms of support, from coding and testing to documentation and CI/CD improvements. Fork and clone the repository https://github.com/trzsz/trzsz-ssh.git . Make your changes just ensure that the unit tests go test ./tssh pass. Build the binary go build -o ./bin/ ./cmd/tssh and test it ./bin/tssh . Once you are happy with your changes, please submit a pull request. Screenshot Contact Feel free to email the author lonnywong@qq.com , or create an issue . Welcome to join the QQ group: 318578930. Sponsor ❤️ Sponsor trzsz ❤️ , buy the author a drink 🍺 ? Thank you for your support!;trzsz-ssh ( tssh ) is an alternative to ssh client, offers additional useful features, such as login prompt, trzsz ( trz / tsz ), batch login, remember password, zmodem ( rz / sz ), etc.;rz,ssh,sz,tmux,trz,trzsz,tsz,ssh-client,zmodem,openssh | trzsz/trzsz-ssh |
Sh4yy/cloudflare-email;Cloudflare Worker Email Server Send free transactional emails from your Cloudflare Workers using MailChannels. Getting Started! Clone this repository Install the dependencies with npm install Use the command npx wrangler secret put --env production TOKEN to deploy a securely stored token to Cloudflare. With this command, you will be prompted to enter a random secret value, which will be used to authenticate your requests with the HTTP Authorization header as described below. You can also set this encrypted value directly in your Cloudflare dashboard. Deploy the worker with npm run deploy Or deploy directly to Cloudflare Setup SPF SPF is a DNS record that helps prevent email spoofing. You will need to add an SPF record to your domain to allow MailChannels to send emails on your behalf. Add a TXT record to your domain with the following values: - Name: `@`
- Value: `v=spf1 a mx include:relay.mailchannels.net ~all` Note: If you're facing Domain Lockdown error , follow the below steps: Create a TXT record with the following values: - Name: `_mailchannels`
- Value: `v=mc1 cfid=yourdomain.workers.dev` (the value of `cfid` will also be present in the error response) Setup DKIM This step is optional, but highly recommended. DKIM is a DNS record that helps prevent email spoofing. You may follow the steps listed in the MailChannels documentation to set up DKIM for your domain. Usage Once you have deployed this worker function to Cloudflare Workers, you can send emails by making a POST request to the worker on the /api/email endpoint with the following parameters: Note you need to pass an Authorization header with the secure token you deployed. Like the following: Authorization: TOKEN Basic Email The Most basic request would look like this: json
{
"to": "john@example.com",
"from": "me@example.com",
"subject": "Hello World",
"text": "Hello World"
} HTML Emails You can also send HTML emails by adding an html parameter to the request. This can be used in conjunction with the text parameter to send a multi-part email. json
{
"to": "john@example.com",
"from": "me@example.com",
"subject": "Hello World",
"html": "<h1>Hello World</h1>"
} Sender and Recipient Name You can also specify a sender and recipient name by adding a name parameter to the request. This can be used for both the to and from parameters. json
{
"to": { "email": "john@example.com", "name": "John Doe" },
"from": { "email": "me@example.com", "name": "Jane Doe" },
"subject": "Hello World",
"text": "Hello World"
} Sending to Multiple Recipients You may also send to multiple recipients by passing an array of emails, or an array of objects with email and name properties. json
{
"to": [
"john@example.com",
"rose@example.com"
],
"from": "me@example.com",
"subject": "Hello World",
"text": "Hello World"
} or json
{
"to": [
{ "email": "john@example.com", "name": "John Doe" },
{ "email": "rose@example.com", "name": "Rose Doe" }
],
"from": "me@example.com",
"subject": "Hello World",
"text": "Hello World"
} Sending BCC and CC You can also send BCC and CC emails by passing an array of emails, an object with email and name properties, or an array of either, similar to the to parameter. json
{
"to": "john@example.com",
"from": "me@example.com",
"subject": "Hello World",
"text": "Hello World",
"cc": [
"jim@example.com",
"rose@example.com"
],
"bcc": [
"gil@example.com"
]
} Reply To You can also specify a reply to email address by adding a replyTo parameter to the request. Again, you can use an email string, an object with email and name properties, or an array of either. json
{
"to": "john@example.com",
"from": "me@example.com",
"replyTo": "support@example.com",
"subject": "Hello World",
"text": "Hello World"
};This is a simple proxy server that can be used for sending free transactional emails through Cloudflare workers.;[] | Sh4yy/cloudflare-email |
Thinkright20/Profile-Badges;🛡️ Profile Badges This repository includes a collection of every single badge on GitHub, and how to obtain each one of them. Each of them is either obtainable, no longer obtainable, or still in beta testing. This repository orders the badges from Very Easy to Easy to Medium and finally Hard . GitHub Badges were added after the launch of the first Mars Helicopter, Ingenuity . Today, there are a total of 11 badges on GitHub, and some with a bronze, silver, and gold tier too. 🧪 Achievements Being Tested These achievements are still in testing and will be obtainable soon:
| Name | Badge Image | Earnable? | Earned By | Difficulty |
| --- | --- | --- | --- | --- |
Heart On Your Sleeve | | 🔜 In testing | Users can obtain this badge by reacting to something on GitHub with a ❤️ emoji | Very Easy
Open Sourcerer | | 🔜 In testing | Can be obtained if a user has had Pull Requests merged in more than 1 public repository | Medium 🥇 Obtainable Achievements The achievements listed below are ones that you can obtain:
| Name | Badge Image | Earnable? | Earned By | Difficulty |
| --- | --- | --- | --- | --- |
Pull Shark | | ✅ Yes | You can get this badge by merging 2 pull requests. | Medium
YOLO | | ✅ Yes | A user can obtain this badge by merging a pull request without a review (This requires 1 user and 1 reviewer) | Medium
Quickdraw | | ✅ Yes | Obtainable by closing a Pull Request or Issue within 5 minutes of it being opened. (You can close your own issues) | Very Easy
Pair Extraordinaire | | ✅ Yes | The user can obtain this badge by co-authoring a merged pull request. (Requires GitHub Desktop App) | Hard
Galaxy Brain | | ✅ Yes | A user can get this badge by getting 2 of their answers accepted on the Community Discussions forum. | Medium
Starstruck | | ✅ Yes | This badge is given to someone when one of their repositories gets 16 stars. | Hard
Public Sponsor | | ✅ Yes | This badge can be earned by sponsoring a repository/organization/user. | Very Easy ⛔ Un-Obtainable Achievements The following are badges that used to be obtainable, but now cannot be obtained:
| Name | Badge Image | Earnable? | Earned By | Difficulty |
| --- | --- | --- | --- | --- |
Arctic Code Vault Contributor | | ❌ No | Previously obtainable by contributing code to repositories in the 2020 Archive Program | Very Easy
Mars 2020 Contributor | | ❌ No | Given if a user contributed code to repositories in the Mars 2020 Mission | Easy 🎚️ Tiers Some Achievements not only have the base version, but also tiers. The following are the only badges that have tiers right now: | Default Badge | Tiers |
| :-: | :-: |
| | DEFAULT BRONZE SILVER GOLD 16 128 512 4096 |
| | DEFAULT BRONZE SILVER GOLD 1 10 24 48 |
| | DEFAULT BRONZE SILVER GOLD 2 16 128 1024 |
| | DEFAULT BRONZE SILVER GOLD 2 8 16 32 🎨 Tier Colors & Labels Every tier has either an x2, x3, or x4 label with it that also includes color. Here is the information about each one below: | Tier | Label | Sample | Hex | Visual |
| --- | --- | --- | --- | --- |
🥉 Bronze | x2 | | #F9BFA7 | 🥈 Silver | x3 | | #E1E4E4 | 🥇 Gold | x4 | | #FAE57E | 👋 Achievement Skin Tone Some of the achievements on GitHub appear with a different skin color depending on your Emoji Skin Tone Preference. You can change your preferred Skin Tone by going to the appearance settings . | Badge | Name | Skin Tone Versions |
| :-: | :-: | :-: |
| | Starstruck | 👋 👋🏻 👋🏼 👋🏽 👋🏾 👋🏿 |
| | Quickdraw | 👋 👋🏻 👋🏼 👋🏽 👋🏾 👋🏿 | 🕒 Previous Names Only 2 achievements have had their name altered in all of GitHub's History. These 2 achievements are the Public Sponsor and Mars 2020 Contributor as follows: ```diff
- GitHub Sponsor
+ Public Sponsor
- Mars 2020 Helicopter Contributor
+ Mars 2020 Contributor
``` 📋 Credits Massive credit goes to @Schweinepriester for the high quality images for each badge, the labels, the information for each badge, and the inspiration to make this repository. Credit to @drknzz for the skin tone images, the related information, and inspiration;🛡️ A list of all profile badges and how to obtain each one 🛡️;achievements,badge,badges,galaxy-brain,github,heart-on-your-sleeve,open-sourcerer,yolo,public-sponsor,pair-extraordinaire | Thinkright20/Profile-Badges
thu-ml/prolificdreamer;ProlificDreamer Official implementation of ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation , published in NeurIPS 2023 (Spotlight). Installation The codebase is built on stable-dreamfusion . For installation, pip install -r requirements.txt Training ProlificDreamer includes 3 stages for high-fidelity text-to-3d generation.
```
--------- Stage 1 (NeRF, VSD guidance) ---------
This costs approximately 27GB GPU memory, with rendering resolution of 512x512
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 25000 --lambda_entropy 10 --scale 7.5 --n_particles 1 --h 512 --w 512 --workspace exp-nerf-stage1/

If you find the result is foggy, you can increase the --lambda_entropy. For example
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 25000 --lambda_entropy 100 --scale 7.5 --n_particles 1 --h 512 --w 512 --workspace exp-nerf-stage1/

Generate with multiple particles. Notice that generating with multiple particles is only supported in Stage 1.
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 100000 --lambda_entropy 10 --scale 7.5 --n_particles 4 --h 512 --w 512 --t5_iters 20000 --workspace exp-nerf-stage1/

--------- Stage 2 (Geometry Refinement) ---------
This costs <20GB GPU memory
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 15000 --scale 100 --dmtet --mesh_idx 0 --init_ckpt /path/to/stage1/ckpt --normal True --sds True --density_thresh 0.1 --lambda_normal 5000 --workspace exp-dmtet-stage2/

If the results have many floaters, you can increase --density_thresh. Notice that the value of --density_thresh must be consistent in stage2 and stage3.
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 15000 --scale 100 --dmtet --mesh_idx 0 --init_ckpt /path/to/stage1/ckpt --normal True --sds True --density_thresh 0.4 --lambda_normal 5000 --workspace exp-dmtet-stage2/

--------- Stage 3 (Texturing, VSD guidance) ---------
texturing with 512x512 rasterization
CUDA_VISIBLE_DEVICES=0 python main.py --text "A pineapple." --iters 30000 --scale 7.5 --dmtet --mesh_idx 0 --init_ckpt /path/to/stage2/ckpt --density_thresh 0.1 --finetune True --workspace exp-dmtet-stage3/
``` We also provide a script that can automatically run these 3 stages. bash run.sh gpu_id text_prompt For example, bash run.sh 0 "A pineapple." Limitations: (1) Our work utilizes the original Stable Diffusion without any 3D data, thus the multi-face Janus problem is prevalent in the results. Utilizing text-to-image diffusion which has been finetuned on multi-view images will alleviate this problem.
(2) If the results are not satisfactory, try different seeds. This is helpful if the results have a good quality but suffer from the multi-face Janus problem. TODO List [x] Release our code. [ ] Combine MVDream with VSD to alleviate the multi-face problem. Related Links ProlificDreamer is also integrated in Threestudio library ❤️. DreamCraft3D Fantasia3D Magic3D DreamFusion SJC Latent-NeRF BibTeX If you find our work useful for your project, please consider citing the following paper. @inproceedings{wang2023prolificdreamer,
title={ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation},
author={Zhengyi Wang and Cheng Lu and Yikai Wang and Fan Bao and Chongxuan Li and Hang Su and Jun Zhu},
booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
year={2023}
};ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight);diffusion-model,dreamfusion,nerf,text-to-3d,prolificdreamer,stablediffusion | thu-ml/prolificdreamer |
adrianhajdin/threads;A full stack Threads Clone Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family! 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Snippets 🔗 Links 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Build a full stack Threads clone using Next.js 14+ with a redesigned look transformed from a Figma design, user interaction to community management, technical implementation, and various features, including nested deep comments, notifications, real-time-search, and more. If you're getting started and need assistance or face any bugs, join our active Discord community with over 27k+ members. It's a place where people help each other out. ⚙️ Tech Stack Next.js MongoDB Shadcn UI TailwindCSS Clerk Webhooks Serverless APIs React Hook Form Zod TypeScript 🔋 Features 👉 Authentication : Authentication using Clerk for email, password, and social logins (Google and GitHub) with a comprehensive profile management system. 👉 Visually Appealing Home Page : A visually appealing home page showcasing the latest threads for an engaging user experience. 👉 Create Thread Page : A dedicated page for users to create threads, fostering community engagement 👉 Commenting Feature : A commenting feature to facilitate discussions within threads. 👉 Nested Commenting : Commenting system with nested threads, providing a structured conversation flow. 👉 User Search with Pagination : A user search feature with pagination for easy exploration and discovery of other users. 👉 Activity Page : Display notifications on the activity page when someone comments on a user's thread, enhancing user engagement. 👉 Profile Page : User profile pages for showcasing information and enabling modification of profile settings. 👉 Create and Invite to Communities : Allow users to create new communities and invite others using customizable template emails. 👉 Community Member Management : A user-friendly interface to manage community members, allowing role changes and removals. 👉 Admin-Specific Community Threads : Enable admins to create threads specifically for their community. 👉 Community Search with Pagination : A community search feature with pagination for exploring different communities. 👉 Community Profiles : Display community profiles showcasing threads and members for a comprehensive overview. 👉 Figma Design Implementation : Transform Figma designs into a fully functional application with pixel-perfect and responsive design. 👉 Blazing-Fast Performance : Optimal performance and instantaneous page switching for a seamless user experience. 👉 Server Side Rendering : Utilize Next.js with Server Side Rendering for enhanced performance and SEO benefits. 👉 MongoDB with Complex Schemas : Handle complex schemas and multiple data populations using MongoDB. 👉 File Uploads with UploadThing : File uploads using UploadThing for a seamless media sharing experience. 👉 Real-Time Events Listening : Real-time events listening with webhooks to keep users updated. 👉 Middleware, API Actions, and Authorization : Utilize middleware, API actions, and authorization for robust application security. 
👉 Next.js Layout Route Groups : New Next.js layout route groups for efficient routing 👉 Data Validation with Zod : Data integrity with data validation using Zod 👉 Form Management with React Hook Form : Efficient management of forms with React Hook Form for a streamlined user input experience. and many more, including code architecture and reusability 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/adrianhajdin/threads.git
cd threads Installation Install the project dependencies using npm: bash
npm install Set Up Environment Variables Create a new file named .env in the root of your project and add the following content: env
MONGODB_URL=
CLERK_SECRET_KEY=
UPLOADTHING_SECRET=
UPLOADTHING_APP_ID=
NEXT_CLERK_WEBHOOK_SECRET=
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY= Replace the placeholder values with your actual credentials. You can obtain these credentials by signing up for the corresponding websites on MongoDB , Clerk , and Uploadthing . Running the Project bash
npm run dev Open http://localhost:3000 in your browser to view the project. 🕸️ Snippets clerk.route.ts ```typescript
/* eslint-disable camelcase */
// Resource: https://clerk.com/docs/users/sync-data-to-your-backend
// Above article shows why we need webhooks i.e., to sync data to our backend
// Resource: https://docs.svix.com/receiving/verifying-payloads/why
// It's a good practice to verify webhooks. Above article shows why we should do it
import { Webhook, WebhookRequiredHeaders } from "svix";
import { headers } from "next/headers";
import { IncomingHttpHeaders } from "http";
import { NextResponse } from "next/server";
import {
addMemberToCommunity,
createCommunity,
deleteCommunity,
removeUserFromCommunity,
updateCommunityInfo,
} from "@/lib/actions/community.actions";
// Resource: https://clerk.com/docs/integration/webhooks#supported-events
// Above document lists the supported events
type EventType =
| "organization.created"
| "organizationInvitation.created"
| "organizationMembership.created"
| "organizationMembership.deleted"
| "organization.updated"
| "organization.deleted";
type Event = {
data: Record<string, string | Record<string, string>[]>;
object: "event";
type: EventType;
};
export const POST = async (request: Request) => {
const payload = await request.json();
const header = headers();
const heads = {
"svix-id": header.get("svix-id"),
"svix-timestamp": header.get("svix-timestamp"),
"svix-signature": header.get("svix-signature"),
};
// Activate Webhook in the Clerk Dashboard.
// After adding the endpoint, you'll see the secret on the right side.
const wh = new Webhook(process.env.NEXT_CLERK_WEBHOOK_SECRET || "");
let evnt: Event | null = null;
try {
evnt = wh.verify(
JSON.stringify(payload),
heads as IncomingHttpHeaders & WebhookRequiredHeaders
) as Event;
} catch (err) {
return NextResponse.json({ message: err }, { status: 400 });
}
const eventType: EventType = evnt?.type!;
// Listen organization creation event
if (eventType === "organization.created") {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organizations#operation/CreateOrganization
// Show what evnt?.data sends from above resource
const { id, name, slug, logo_url, image_url, created_by } =
evnt?.data ?? {};
try {
// @ts-ignore
await createCommunity(
// @ts-ignore
id,
name,
slug,
logo_url || image_url,
"org bio",
created_by
);
return NextResponse.json({ message: "User created" }, { status: 201 });
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
// Listen organization invitation creation event.
// Just to show. You can avoid this or tell people that we can create a new mongoose action and
// add pending invites in the database.
if (eventType === "organizationInvitation.created") {
try {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organization-Invitations#operation/CreateOrganizationInvitation
console.log("Invitation created", evnt?.data);
return NextResponse.json(
{ message: "Invitation created" },
{ status: 201 }
);
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
// Listen organization membership (member invite & accepted) creation
if (eventType === "organizationMembership.created") {
try {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organization-Memberships#operation/CreateOrganizationMembership
// Show what evnt?.data sends from above resource
const { organization, public_user_data } = evnt?.data;
console.log("created", evnt?.data);
// @ts-ignore
await addMemberToCommunity(organization.id, public_user_data.user_id);
return NextResponse.json(
{ message: "Invitation accepted" },
{ status: 201 }
);
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
// Listen member deletion event
if (eventType === "organizationMembership.deleted") {
try {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organization-Memberships#operation/DeleteOrganizationMembership
// Show what evnt?.data sends from above resource
const { organization, public_user_data } = evnt?.data;
console.log("removed", evnt?.data);
// @ts-ignore
await removeUserFromCommunity(public_user_data.user_id, organization.id);
return NextResponse.json({ message: "Member removed" }, { status: 201 });
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
// Listen organization update event
if (eventType === "organization.updated") {
try {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organizations#operation/UpdateOrganization
// Show what evnt?.data sends from above resource
const { id, logo_url, name, slug } = evnt?.data;
console.log("updated", evnt?.data);
// @ts-ignore
await updateCommunityInfo(id, name, slug, logo_url);
return NextResponse.json({ message: "Member removed" }, { status: 201 });
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
// Listen organization deletion event
if (eventType === "organization.deleted") {
try {
// Resource: https://clerk.com/docs/reference/backend-api/tag/Organizations#operation/DeleteOrganization
// Show what evnt?.data sends from above resource
const { id } = evnt?.data;
console.log("deleted", evnt?.data);
// @ts-ignore
await deleteCommunity(id);
return NextResponse.json(
{ message: "Organization deleted" },
{ status: 201 }
);
} catch (err) {
console.log(err);
return NextResponse.json(
{ message: "Internal Server Error" },
{ status: 500 }
);
}
}
};
``` community.actions.ts ```typescript
"use server";
import { FilterQuery, SortOrder } from "mongoose";
import Community from "../models/community.model";
import Thread from "../models/thread.model";
import User from "../models/user.model";
import { connectToDB } from "../mongoose";
export async function createCommunity(
id: string,
name: string,
username: string,
image: string,
bio: string,
createdById: string // Change the parameter name to reflect it's an id
) {
try {
connectToDB();
// Find the user with the provided unique id
const user = await User.findOne({ id: createdById });
if (!user) {
throw new Error("User not found"); // Handle the case if the user with the id is not found
}
const newCommunity = new Community({
id,
name,
username,
image,
bio,
createdBy: user._id, // Use the mongoose ID of the user
});
const createdCommunity = await newCommunity.save();
// Update User model
user.communities.push(createdCommunity._id);
await user.save();
return createdCommunity;
} catch (error) {
// Handle any errors
console.error("Error creating community:", error);
throw error;
}
}
export async function fetchCommunityDetails(id: string) {
try {
connectToDB();
const communityDetails = await Community.findOne({ id }).populate([
"createdBy",
{
path: "members",
model: User,
select: "name username image _id id",
},
]);
return communityDetails;
} catch (error) {
// Handle any errors
console.error("Error fetching community details:", error);
throw error;
}
}
export async function fetchCommunityPosts(id: string) {
try {
connectToDB();
const communityPosts = await Community.findById(id).populate({
path: "threads",
model: Thread,
populate: [
{
path: "author",
model: User,
select: "name image id", // Select the "name" and "_id" fields from the "User" model
},
{
path: "children",
model: Thread,
populate: {
path: "author",
model: User,
select: "image _id", // Select the "name" and "_id" fields from the "User" model
},
},
],
});
return communityPosts;
} catch (error) {
// Handle any errors
console.error("Error fetching community posts:", error);
throw error;
}
}
export async function fetchCommunities({
searchString = "",
pageNumber = 1,
pageSize = 20,
sortBy = "desc",
}: {
searchString?: string;
pageNumber?: number;
pageSize?: number;
sortBy?: SortOrder;
}) {
try {
connectToDB();
// Calculate the number of communities to skip based on the page number and page size.
const skipAmount = (pageNumber - 1) * pageSize;
// Create a case-insensitive regular expression for the provided search string.
const regex = new RegExp(searchString, "i");
// Create an initial query object to filter communities.
const query: FilterQuery<typeof Community> = {};
// If the search string is not empty, add the $or operator to match either username or name fields.
if (searchString.trim() !== "") {
query.$or = [
{ username: { $regex: regex } },
{ name: { $regex: regex } },
];
}
// Define the sort options for the fetched communities based on createdAt field and provided sort order.
const sortOptions = { createdAt: sortBy };
// Create a query to fetch the communities based on the search and sort criteria.
const communitiesQuery = Community.find(query)
.sort(sortOptions)
.skip(skipAmount)
.limit(pageSize)
.populate("members");
// Count the total number of communities that match the search criteria (without pagination).
const totalCommunitiesCount = await Community.countDocuments(query);
const communities = await communitiesQuery.exec();
// Check if there are more communities beyond the current page.
const isNext = totalCommunitiesCount > skipAmount + communities.length;
return { communities, isNext };
} catch (error) {
console.error("Error fetching communities:", error);
throw error;
}
}
export async function addMemberToCommunity(
communityId: string,
memberId: string
) {
try {
connectToDB();
// Find the community by its unique id
const community = await Community.findOne({ id: communityId });
if (!community) {
throw new Error("Community not found");
}
// Find the user by their unique id
const user = await User.findOne({ id: memberId });
if (!user) {
throw new Error("User not found");
}
// Check if the user is already a member of the community
if (community.members.includes(user._id)) {
throw new Error("User is already a member of the community");
}
// Add the user's _id to the members array in the community
community.members.push(user._id);
await community.save();
// Add the community's _id to the communities array in the user
user.communities.push(community._id);
await user.save();
return community;
} catch (error) {
// Handle any errors
console.error("Error adding member to community:", error);
throw error;
}
}
export async function removeUserFromCommunity(
userId: string,
communityId: string
) {
try {
connectToDB();
const userIdObject = await User.findOne({ id: userId }, { _id: 1 });
const communityIdObject = await Community.findOne(
{ id: communityId },
{ _id: 1 }
);
if (!userIdObject) {
throw new Error("User not found");
}
if (!communityIdObject) {
throw new Error("Community not found");
}
// Remove the user's _id from the members array in the community
await Community.updateOne(
{ _id: communityIdObject._id },
{ $pull: { members: userIdObject._id } }
);
// Remove the community's _id from the communities array in the user
await User.updateOne(
{ _id: userIdObject._id },
{ $pull: { communities: communityIdObject._id } }
);
return { success: true };
} catch (error) {
// Handle any errors
console.error("Error removing user from community:", error);
throw error;
}
}
export async function updateCommunityInfo(
communityId: string,
name: string,
username: string,
image: string
) {
try {
connectToDB();
// Find the community by its _id and update the information
const updatedCommunity = await Community.findOneAndUpdate(
{ id: communityId },
{ name, username, image }
);
if (!updatedCommunity) {
throw new Error("Community not found");
}
return updatedCommunity;
} catch (error) {
// Handle any errors
console.error("Error updating community information:", error);
throw error;
}
}
export async function deleteCommunity(communityId: string) {
try {
connectToDB();
// Find the community by its ID and delete it
const deletedCommunity = await Community.findOneAndDelete({
id: communityId,
});
if (!deletedCommunity) {
throw new Error("Community not found");
}
// Delete all threads associated with the community
await Thread.deleteMany({ community: communityId });
// Find all users who are part of the community
const communityUsers = await User.find({ communities: communityId });
// Remove the community from the 'communities' array for each user
const updateUserPromises = communityUsers.map((user) => {
user.communities.pull(communityId);
return user.save();
});
await Promise.all(updateUserPromises);
return deletedCommunity;
} catch (error) {
console.error("Error deleting community: ", error);
throw error;
}
}
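// ---------------------------------------------------------------------------
// Note: `connectToDB` is imported above from "../mongoose" but its source is
// not included in this README. A minimal sketch (an assumption, not the
// project's actual file) could look like:
//
//   import mongoose from "mongoose";
//
//   let isConnected = false;
//
//   export const connectToDB = async () => {
//     mongoose.set("strictQuery", true);
//     if (!process.env.MONGODB_URL) return console.log("Missing MONGODB_URL");
//     if (isConnected) return;
//     try {
//       await mongoose.connect(process.env.MONGODB_URL);
//       isConnected = true;
//     } catch (error) {
//       console.log(error);
//     }
//   };
// ---------------------------------------------------------------------------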
``` CommunityCard.tsx ```typescript
import Image from "next/image";
import Link from "next/link";
import { Button } from "../ui/button";
interface Props {
id: string;
name: string;
username: string;
imgUrl: string;
bio: string;
members: {
image: string;
}[];
}
function CommunityCard({ id, name, username, imgUrl, bio, members }: Props) {
return ( {name} @{username} {bio} View {members.length > 0 && ( {members.map((member, index) => ( ))}
{members.length > 3 && ( {members.length}+ Users )} )} );
}
export default CommunityCard;
``` constants.index.ts ```typescript
export const sidebarLinks = [
{
imgURL: "/assets/home.svg",
route: "/",
label: "Home",
},
{
imgURL: "/assets/search.svg",
route: "/search",
label: "Search",
},
{
imgURL: "/assets/heart.svg",
route: "/activity",
label: "Activity",
},
{
imgURL: "/assets/create.svg",
route: "/create-thread",
label: "Create Thread",
},
{
imgURL: "/assets/community.svg",
route: "/communities",
label: "Communities",
},
{
imgURL: "/assets/user.svg",
route: "/profile",
label: "Profile",
},
];
export const profileTabs = [
{ value: "threads", label: "Threads", icon: "/assets/reply.svg" },
{ value: "replies", label: "Replies", icon: "/assets/members.svg" },
{ value: "tagged", label: "Tagged", icon: "/assets/tag.svg" },
];
export const communityTabs = [
{ value: "threads", label: "Threads", icon: "/assets/reply.svg" },
{ value: "members", label: "Members", icon: "/assets/members.svg" },
{ value: "requests", label: "Requests", icon: "/assets/request.svg" },
];
``` globals.css ```css
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer components {
/* main */
.main-container {
@apply flex min-h-screen flex-1 flex-col items-center bg-dark-1 px-6 pb-10 pt-28 max-md:pb-32 sm:px-10;
}
/* Head Text */
.head-text {
@apply text-heading2-bold text-light-1;
}
/* Activity */
.activity-card {
@apply flex items-center gap-2 rounded-md bg-dark-2 px-7 py-4;
}
/* No Result */
.no-result {
@apply text-center !text-base-regular text-light-3;
}
/* Community Card */
.community-card {
@apply w-full rounded-lg bg-dark-3 px-4 py-5 sm:w-96;
}
.community-card_btn {
@apply rounded-lg bg-primary-500 px-5 py-1.5 text-small-regular !text-light-1 !important;
}
/* thread card */
.thread-card_bar {
@apply relative mt-2 w-0.5 grow rounded-full bg-neutral-800;
}
/* User card */
.user-card {
@apply flex flex-col justify-between gap-4 max-xs:rounded-xl max-xs:bg-dark-3 max-xs:p-4 xs:flex-row xs:items-center;
}
.user-card_avatar {
@apply flex flex-1 items-start justify-start gap-3 xs:items-center;
}
.user-card_btn {
@apply h-auto min-w-[74px] rounded-lg bg-primary-500 text-[12px] text-light-1 !important;
}
.searchbar {
@apply flex gap-1 rounded-lg bg-dark-3 px-4 py-2;
}
.searchbar_input {
@apply border-none bg-dark-3 text-base-regular text-light-4 outline-none !important;
}
.topbar {
@apply fixed top-0 z-30 flex w-full items-center justify-between bg-dark-2 px-6 py-3;
}
.bottombar {
@apply fixed bottom-0 z-10 w-full rounded-t-3xl bg-glassmorphism p-4 backdrop-blur-lg xs:px-7 md:hidden;
}
.bottombar_container {
@apply flex items-center justify-between gap-3 xs:gap-5;
}
.bottombar_link {
@apply relative flex flex-col items-center gap-2 rounded-lg p-2 sm:flex-1 sm:px-2 sm:py-2.5;
}
.leftsidebar {
@apply sticky left-0 top-0 z-20 flex h-screen w-fit flex-col justify-between overflow-auto border-r border-r-dark-4 bg-dark-2 pb-5 pt-28 max-md:hidden;
}
.leftsidebar_link {
@apply relative flex justify-start gap-4 rounded-lg p-4;
}
.pagination {
@apply mt-10 flex w-full items-center justify-center gap-5;
}
.rightsidebar {
@apply sticky right-0 top-0 z-20 flex h-screen w-fit flex-col justify-between gap-12 overflow-auto border-l border-l-dark-4 bg-dark-2 px-10 pb-6 pt-28 max-xl:hidden;
}
}
@layer utilities {
.css-invert {
@apply invert-[50%] brightness-200;
}
.custom-scrollbar::-webkit-scrollbar {
width: 3px;
height: 3px;
border-radius: 2px;
}
.custom-scrollbar::-webkit-scrollbar-track {
background: #09090a;
}
.custom-scrollbar::-webkit-scrollbar-thumb {
background: #5c5c7b;
border-radius: 50px;
}
.custom-scrollbar::-webkit-scrollbar-thumb:hover {
background: #7878a3;
}
}
/* Clerk Responsive fix */
.cl-organizationSwitcherTrigger .cl-userPreview .cl-userPreviewTextContainer {
@apply max-sm:hidden;
}
.cl-organizationSwitcherTrigger
.cl-organizationPreview
.cl-organizationPreviewTextContainer {
@apply max-sm:hidden;
}
/* Shadcn Component Styles */
/* Tab */
.tab {
@apply flex min-h-[50px] flex-1 items-center gap-3 bg-dark-2 text-light-2 data-[state=active]:bg-[#0e0e12] data-[state=active]:text-light-2 !important;
}
.no-focus {
@apply focus-visible:ring-0 focus-visible:ring-transparent focus-visible:ring-offset-0 !important;
}
/* Account Profile */
.account-form_image-label {
@apply flex h-24 w-24 items-center justify-center rounded-full bg-dark-4 !important;
}
.account-form_image-input {
@apply cursor-pointer border-none bg-transparent outline-none file:text-blue !important;
}
.account-form_input {
@apply border border-dark-4 bg-dark-3 text-light-1 !important;
}
/* Comment Form */
.comment-form {
@apply mt-10 flex items-center gap-4 border-y border-y-dark-4 py-5 max-xs:flex-col !important;
}
.comment-form_btn {
@apply rounded-3xl bg-primary-500 px-8 py-2 !text-small-regular text-light-1 max-xs:w-full !important;
}
``` next.config.js ```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
experimental: {
serverActions: true,
serverComponentsExternalPackages: ["mongoose"],
},
images: {
remotePatterns: [
{
protocol: "https",
hostname: "img.clerk.com",
},
{
protocol: "https",
hostname: "images.clerk.dev",
},
{
protocol: "https",
hostname: "uploadthing.com",
},
{
protocol: "https",
hostname: "placehold.co",
},
],
typescript: {
ignoreBuildErrors: true,
},
},
};
module.exports = nextConfig;
``` tailwind.config.js ```javascript
/** @type {import('tailwindcss').Config} */
module.exports = {
darkMode: ["class"],
content: [
"./pages/**/*.{ts,tsx}",
"./components/**/*.{ts,tsx}",
"./app/**/*.{ts,tsx}",
"./src/**/*.{ts,tsx}",
],
theme: {
container: {
center: true,
padding: "2rem",
screens: {
"2xl": "1400px",
},
},
fontSize: {
"heading1-bold": [
"36px",
{
lineHeight: "140%",
fontWeight: "700",
},
],
"heading1-semibold": [
"36px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"heading2-bold": [
"30px",
{
lineHeight: "140%",
fontWeight: "700",
},
],
"heading2-semibold": [
"30px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"heading3-bold": [
"24px",
{
lineHeight: "140%",
fontWeight: "700",
},
],
"heading4-medium": [
"20px",
{
lineHeight: "140%",
fontWeight: "500",
},
],
"body-bold": [
"18px",
{
lineHeight: "140%",
fontWeight: "700",
},
],
"body-semibold": [
"18px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"body-medium": [
"18px",
{
lineHeight: "140%",
fontWeight: "500",
},
],
"body-normal": [
"18px",
{
lineHeight: "140%",
fontWeight: "400",
},
],
"body1-bold": [
"18px",
{
lineHeight: "140%",
fontWeight: "700",
},
],
"base-regular": [
"16px",
{
lineHeight: "140%",
fontWeight: "400",
},
],
"base-medium": [
"16px",
{
lineHeight: "140%",
fontWeight: "500",
},
],
"base-semibold": [
"16px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"base1-semibold": [
"16px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"small-regular": [
"14px",
{
lineHeight: "140%",
fontWeight: "400",
},
],
"small-medium": [
"14px",
{
lineHeight: "140%",
fontWeight: "500",
},
],
"small-semibold": [
"14px",
{
lineHeight: "140%",
fontWeight: "600",
},
],
"subtle-medium": [
"12px",
{
lineHeight: "16px",
fontWeight: "500",
},
],
"subtle-semibold": [
"12px",
{
lineHeight: "16px",
fontWeight: "600",
},
],
"tiny-medium": [
"10px",
{
lineHeight: "140%",
fontWeight: "500",
},
],
"x-small-semibold": [
"7px",
{
lineHeight: "9.318px",
fontWeight: "600",
},
],
},
extend: {
colors: {
"primary-500": "#877EFF",
"secondary-500": "#FFB620",
blue: "#0095F6",
"logout-btn": "#FF5A5A",
"navbar-menu": "rgba(16, 16, 18, 0.6)",
"dark-1": "#000000",
"dark-2": "#121417",
"dark-3": "#101012",
"dark-4": "#1F1F22",
"light-1": "#FFFFFF",
"light-2": "#EFEFEF",
"light-3": "#7878A3",
"light-4": "#5C5C7B",
"gray-1": "#697C89",
glassmorphism: "rgba(16, 16, 18, 0.60)",
},
boxShadow: {
"count-badge": "0px 0px 6px 2px rgba(219, 188, 159, 0.30)",
"groups-sidebar": "-30px 0px 60px 0px rgba(28, 28, 31, 0.50)",
},
screens: {
xs: "400px",
},
keyframes: {
"accordion-down": {
from: { height: 0 },
to: { height: "var(--radix-accordion-content-height)" },
},
"accordion-up": {
from: { height: "var(--radix-accordion-content-height)" },
to: { height: 0 },
},
},
animation: {
"accordion-down": "accordion-down 0.2s ease-out",
"accordion-up": "accordion-up 0.2s ease-out",
},
},
},
plugins: [require("tailwindcss-animate")],
};
``` thread.actions.ts ```typescript
"use server";
import { revalidatePath } from "next/cache";
import { connectToDB } from "../mongoose";
import User from "../models/user.model";
import Thread from "../models/thread.model";
import Community from "../models/community.model";
export async function fetchPosts(pageNumber = 1, pageSize = 20) {
connectToDB();
// Calculate the number of posts to skip based on the page number and page size.
const skipAmount = (pageNumber - 1) * pageSize;
// Create a query to fetch the posts that have no parent (top-level threads) (a thread that is not a comment/reply).
const postsQuery = Thread.find({ parentId: { $in: [null, undefined] } })
.sort({ createdAt: "desc" })
.skip(skipAmount)
.limit(pageSize)
.populate({
path: "author",
model: User,
})
.populate({
path: "community",
model: Community,
})
.populate({
path: "children", // Populate the children field
populate: {
path: "author", // Populate the author field within children
model: User,
select: "_id name parentId image", // Select only _id and username fields of the author
},
});
// Count the total number of top-level posts (threads) i.e., threads that are not comments.
const totalPostsCount = await Thread.countDocuments({
parentId: { $in: [null, undefined] },
}); // Get the total count of posts
const posts = await postsQuery.exec();
const isNext = totalPostsCount > skipAmount + posts.length;
return { posts, isNext };
}
interface Params {
text: string,
author: string,
communityId: string | null,
path: string,
}
export async function createThread({ text, author, communityId, path }: Params
) {
try {
connectToDB();
const communityIdObject = await Community.findOne(
{ id: communityId },
{ _id: 1 }
);
const createdThread = await Thread.create({
text,
author,
community: communityIdObject, // Assign communityId if provided, or leave it null for personal account
});
// Update User model
await User.findByIdAndUpdate(author, {
$push: { threads: createdThread._id },
});
if (communityIdObject) {
// Update Community model
await Community.findByIdAndUpdate(communityIdObject, {
$push: { threads: createdThread._id },
});
}
revalidatePath(path);
} catch (error: any) {
throw new Error(`Failed to create thread: ${error.message}`);
}
}
async function fetchAllChildThreads(threadId: string): Promise<any[]> {
const childThreads = await Thread.find({ parentId: threadId });
const descendantThreads = [];
for (const childThread of childThreads) {
const descendants = await fetchAllChildThreads(childThread._id);
descendantThreads.push(childThread, ...descendants);
}
return descendantThreads;
}
export async function deleteThread(id: string, path: string): Promise<void> {
try {
connectToDB();
// Find the thread to be deleted (the main thread)
const mainThread = await Thread.findById(id).populate("author community");
if (!mainThread) {
throw new Error("Thread not found");
}
// Fetch all child threads and their descendants recursively
const descendantThreads = await fetchAllChildThreads(id);
// Get all descendant thread IDs including the main thread ID and child thread IDs
const descendantThreadIds = [
id,
...descendantThreads.map((thread) => thread._id),
];
// Extract the authorIds and communityIds to update User and Community models respectively
const uniqueAuthorIds = new Set(
[
...descendantThreads.map((thread) => thread.author?._id?.toString()), // Use optional chaining to handle possible undefined values
mainThread.author?._id?.toString(),
].filter((id) => id !== undefined)
);
const uniqueCommunityIds = new Set(
[
...descendantThreads.map((thread) => thread.community?._id?.toString()), // Use optional chaining to handle possible undefined values
mainThread.community?._id?.toString(),
].filter((id) => id !== undefined)
);
// Recursively delete child threads and their descendants
await Thread.deleteMany({ _id: { $in: descendantThreadIds } });
// Update User model
await User.updateMany(
{ _id: { $in: Array.from(uniqueAuthorIds) } },
{ $pull: { threads: { $in: descendantThreadIds } } }
);
// Update Community model
await Community.updateMany(
{ _id: { $in: Array.from(uniqueCommunityIds) } },
{ $pull: { threads: { $in: descendantThreadIds } } }
);
revalidatePath(path);
} catch (error: any) {
throw new Error(`Failed to delete thread: ${error.message}`);
}
}
export async function fetchThreadById(threadId: string) {
connectToDB();
try {
const thread = await Thread.findById(threadId)
.populate({
path: "author",
model: User,
select: "_id id name image",
}) // Populate the author field with _id and username
.populate({
path: "community",
model: Community,
select: "_id id name image",
}) // Populate the community field with _id and name
.populate({
path: "children", // Populate the children field
populate: [
{
path: "author", // Populate the author field within children
model: User,
select: "_id id name parentId image", // Select only _id and username fields of the author
},
{
path: "children", // Populate the children field within children
model: Thread, // The model of the nested children (assuming it's the same "Thread" model)
populate: {
path: "author", // Populate the author field within nested children
model: User,
select: "_id id name parentId image", // Select only _id and username fields of the author
},
},
],
})
.exec();
return thread;
} catch (err) {
console.error("Error while fetching thread:", err);
throw new Error("Unable to fetch thread");
}
}
export async function addCommentToThread(
threadId: string,
commentText: string,
userId: string,
path: string
) {
connectToDB();
try {
// Find the original thread by its ID
const originalThread = await Thread.findById(threadId);
if (!originalThread) {
throw new Error("Thread not found");
}
// Create the new comment thread
const commentThread = new Thread({
text: commentText,
author: userId,
parentId: threadId, // Set the parentId to the original thread's ID
});
// Save the comment thread to the database
const savedCommentThread = await commentThread.save();
// Add the comment thread's ID to the original thread's children array
originalThread.children.push(savedCommentThread._id);
// Save the updated original thread to the database
await originalThread.save();
revalidatePath(path);
} catch (err) {
console.error("Error while adding comment:", err);
throw new Error("Unable to add comment");
}
}
``` uploadthing.ts ```typescript
// Resource: https://docs.uploadthing.com/api-reference/react#generatereacthelpers
// Copy paste (be careful with imports)
import { generateReactHelpers } from "@uploadthing/react/hooks";
import type { OurFileRouter } from "@/app/api/uploadthing/core";
export const { useUploadThing, uploadFiles } = generateReactHelpers<OurFileRouter>();
``` user.actions.ts ```typescript
"use server";
import { FilterQuery, SortOrder } from "mongoose";
import { revalidatePath } from "next/cache";
import Community from "../models/community.model";
import Thread from "../models/thread.model";
import User from "../models/user.model";
import { connectToDB } from "../mongoose";
export async function fetchUser(userId: string) {
try {
connectToDB();
return await User.findOne({ id: userId }).populate({
path: "communities",
model: Community,
});
} catch (error: any) {
throw new Error(`Failed to fetch user: ${error.message}`);
}
}
interface Params {
userId: string;
username: string;
name: string;
bio: string;
image: string;
path: string;
}
export async function updateUser({
userId,
bio,
name,
path,
username,
image,
}: Params): Promise<void> {
try {
connectToDB();
await User.findOneAndUpdate(
{ id: userId },
{
username: username.toLowerCase(),
name,
bio,
image,
onboarded: true,
},
{ upsert: true }
);
if (path === "/profile/edit") {
revalidatePath(path);
}
} catch (error: any) {
throw new Error(`Failed to create/update user: ${error.message}`);
}
}
export async function fetchUserPosts(userId: string) {
try {
connectToDB();
// Find all threads authored by the user with the given userId
const threads = await User.findOne({ id: userId }).populate({
path: "threads",
model: Thread,
populate: [
{
path: "community",
model: Community,
select: "name id image _id", // Select the "name" and "_id" fields from the "Community" model
},
{
path: "children",
model: Thread,
populate: {
path: "author",
model: User,
select: "name image id", // Select the "name" and "_id" fields from the "User" model
},
},
],
});
return threads;
} catch (error) {
console.error("Error fetching user threads:", error);
throw error;
}
}
// Almost similar to Thead (search + pagination) and Community (search + pagination)
export async function fetchUsers({
userId,
searchString = "",
pageNumber = 1,
pageSize = 20,
sortBy = "desc",
}: {
userId: string;
searchString?: string;
pageNumber?: number;
pageSize?: number;
sortBy?: SortOrder;
}) {
try {
connectToDB();
// Calculate the number of users to skip based on the page number and page size.
const skipAmount = (pageNumber - 1) * pageSize;
// Create a case-insensitive regular expression for the provided search string.
const regex = new RegExp(searchString, "i");
// Create an initial query object to filter users.
const query: FilterQuery<typeof User> = {
id: { $ne: userId }, // Exclude the current user from the results.
};
// If the search string is not empty, add the $or operator to match either username or name fields.
if (searchString.trim() !== "") {
query.$or = [
{ username: { $regex: regex } },
{ name: { $regex: regex } },
];
}
// Define the sort options for the fetched users based on createdAt field and provided sort order.
const sortOptions = { createdAt: sortBy };
const usersQuery = User.find(query)
.sort(sortOptions)
.skip(skipAmount)
.limit(pageSize);
// Count the total number of users that match the search criteria (without pagination).
const totalUsersCount = await User.countDocuments(query);
const users = await usersQuery.exec();
// Check if there are more users beyond the current page.
const isNext = totalUsersCount > skipAmount + users.length;
return { users, isNext };
} catch (error) {
console.error("Error fetching users:", error);
throw error;
}
}
export async function getActivity(userId: string) {
try {
connectToDB();
// Find all threads created by the user
const userThreads = await Thread.find({ author: userId });
// Collect all the child thread ids (replies) from the 'children' field of each user thread
const childThreadIds = userThreads.reduce((acc, userThread) => {
return acc.concat(userThread.children);
}, []);
// Find and return the child threads (replies) excluding the ones created by the same user
const replies = await Thread.find({
_id: { $in: childThreadIds },
author: { $ne: userId }, // Exclude threads authored by the same user
}).populate({
path: "author",
model: User,
select: "name image _id",
});
return replies;
} catch (error) {
console.error("Error fetching replies: ", error);
throw error;
}
}
``` utils.ts ```typescript
import { type ClassValue, clsx } from "clsx";
import { twMerge } from "tailwind-merge";
// generated by shadcn
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
// created by chatgpt
export function isBase64Image(imageData: string) {
const base64Regex = /^data:image\/(png|jpe?g|gif|webp);base64,/;
return base64Regex.test(imageData);
}
// created by chatgpt
export function formatDateString(dateString: string) {
const options: Intl.DateTimeFormatOptions = {
year: "numeric",
month: "short",
day: "numeric",
};
const date = new Date(dateString);
const formattedDate = date.toLocaleDateString(undefined, options);
const time = date.toLocaleTimeString([], {
hour: "numeric",
minute: "2-digit",
});
return `${time} - ${formattedDate}`;
}
// created by chatgpt
export function formatThreadCount(count: number): string {
if (count === 0) {
return "No Threads";
} else {
const threadCount = count.toString().padStart(2, "0");
const threadWord = count === 1 ? "Thread" : "Threads";
return `${threadCount} ${threadWord}`;
}
}
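// ---------------------------------------------------------------------------
// Illustrative usage (added for clarity; outputs are examples and the import
// path is an assumption):
//
//   import { cn, isBase64Image, formatThreadCount } from "@/lib/utils";
//
//   cn("flex gap-2", true && "bg-dark-2");        // -> "flex gap-2 bg-dark-2"
//   isBase64Image("data:image/png;base64,iVBOR"); // -> true
//   formatThreadCount(0);                         // -> "No Threads"
//   formatThreadCount(3);                         // -> "03 Threads"
// ---------------------------------------------------------------------------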
``` 🔗 Links Assets used in the project are here 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Develop Threads, Next.js 13 app that skyrocketed to 100 million sign-ups in less than 5 days, and dethroned giants like Twitter, ChatGPT, and TikTok to become the fastest-growing app ever!;clerk,nextjs,nextjs13,nextjs13-4,nextjs13-app-router | adrianhajdin/threads |
KaringX/karing;Karing - Simple & Powerful proxy utility A singbox GUI based on flutter . English | Chinese | Russian | Persian Features Compatible with Clash, V2ray/V2fly, Sing-box, Shadowsocks, Sub, Github Subscriptions. Full clash config supported, Partial clash.meta config supported. A set of routing rules applied to multiple subscription sources automatically selects efficient nodes. Supports custom routing rule groups and node groups. Customizes default routing rule groups for novice users - ready to use right out of the box. Built-in geo-IP, geo-site, ACL, and other rulesets Backup and synchronization, synchronizing multiple devices with a single configuration. Supports synchronization within the local area network. Supports WebDAV. Supports importing/exporting ZIP files. Built-in support for the modified sing-box core . Introduces a beginner mode for simpler configuration. Plan to support all platforms. Promotion View all promotions
### Recommended Proxy Providers
[Doggygo VPN (狗狗加速) - a tech-focused provider](https://panel.dg6.top/#/register?code=lFH4ii9D)
- High-performance overseas provider run by an overseas team, with no risk of the service suddenly shutting down
- Registering through the dedicated link gives a 3-day trial with 1GB of free traffic per day: [狗狗加速.com](https://panel.dg6.top/#/register?code=lFH4ii9D)
- Discounted plans from only 15.8 CNY per month for 160GB of traffic, with 20% off for annual payment
- The first provider worldwide to support the `Hysteria2` protocol, with cluster load balancing, high-speed dedicated lines, extremely low latency, no slowdown during evening peak hours, and instant 4K playback
- Unlocks streaming media and ChatGPT
[跟斗云 (NovaCloud) - a niche favorite](https://跟斗云.com/auth/register?code=71be1e9a1a) - High-performance provider, mainly promoting Tuic (v5) for faster and more stable connections.
- Free trial link: [NovaCloud](https://跟斗云.com/auth/register?code=71be1e9a1a)
- Used by the karing team themselves with good results
[NETZ —— ISP in Russia](https://netz.run)
- High-performance overseas VPN, free trial, supports the trojan protocol.
- Free trial and purchase (no registration required):[Telegram bot](https://t.me/netzrun_bot?start=ref_karing)
### 🔥Self-built Node Servers
- (US VPS with no ICP filing required; a 2-core / 2GB plan costs only `20.98$` ≈ `145RMB` per year, Alipay supported) [👉🏻Direct link](https://my.racknerd.com/aff.php?aff=8405)
- (The fastest VPS from the US to mainland China, with a CN2 GIA optimized network)
- [👉🏻Direct link (international)](https://bandwagonhost.com/aff.php?aff=75253&productFilter=1)
- [👉🏻Click here from mainland China](https://bwh81.net/aff.php?aff=75253&productFilter=1)
### 🤝VPN Providers Collaboration Announcement
- 👉[Contact information and forms of cooperation](https://karing.app/blog/isp/cooperation#for-vpn-providers-from-other-regions)👈 System Requirements(64-bit only) Windows >= 10 Android >= 8 IOS >= 15 MacOS >= 12 Install Windows/Android : https://github.com/KaringX/karing/releases/latest IOS/MacOS TestFlight : https://testflight.apple.com/join/RLU59OsJ IOS/MacOS AppStore : (Search Keywords: karing vpn) https://apps.apple.com/us/app/karing/id6472431552 FAQ waiting... Todos Platforms to be supported: tvOS(apple tv) linux | linglong Globalization(multi-language) If you are willing to work with us to support Karing globalization, please contact us Telegram Screenshots (Windows version) Contributions are welcome, feel free to report issues! Acknowledgement Karing was based on or inspired by these projects and so on: flutter : makes it easy and fast to build beautiful apps for mobile and beyond. singbox : The universal proxy platform. Meta-Docs : Clash.Meta docs Star History;Simple & Powerful proxy utility, Support routing rules for clash/sing-box;[] | KaringX/karing
jbellis/jvector;Introduction to approximate nearest neighbor search Exact nearest neighbor search (k-nearest-neighbor or KNN) is prohibitively expensive at higher dimensions, because approaches to segment the search space that work in 2D or 3D like quadtree or k-d tree devolve to linear scans at higher dimensions. This is one aspect of what is called “ the curse of dimensionality .” With larger datasets, it is almost always more useful to get an approximate answer in logarithmic time, than the exact answer in linear time. This is abbreviated as ANN (approximate nearest neighbor) search. There are two broad categories of ANN index:
* Partition-based indexes, like LSH or IVF or SCANN
* Graph indexes, like HNSW or DiskANN

Graph-based indexes tend to be simpler to implement and faster, but more importantly they can be constructed and updated incrementally. This makes them a much better fit for a general-purpose index than partitioning approaches that only work on static datasets that are completely specified up front. That is why all the major commercial vector indexes use graph approaches. JVector is a graph index in the DiskANN family tree. JVector Architecture JVector is a graph-based index that builds on the DiskANN design with composable extensions. JVector implements a single-layer graph with nonblocking concurrency control, allowing construction to scale linearly with the number of cores: The graph is represented by an on-disk adjacency list per node, with additional data stored inline to support two-pass searches, with the first pass powered by lossily compressed representations of the vectors kept in memory, and the second by a more accurate representation read from disk. The first pass can be performed with
* Product quantization (PQ), optionally with anisotropic weighting
* Binary quantization (BQ)
* Fused ADC, where PQ codebooks are transposed and written inline with the graph adjacency list The second pass can be performed with
* Full resolution float32 vectors

This two-pass design reduces memory usage and reduces latency while preserving accuracy. Additionally, JVector is unique in offering the ability to construct the index itself using two-pass searches, allowing larger-than-memory indexes to be built: This is important because it allows you to take advantage of logarithmic search within a single index, instead of spilling over to linear-time merging of results from multiple indexes. JVector step-by-step All code samples are from SiftSmall in the JVector source repo, which includes the siftsmall dataset as well. Just import the project in your IDE and click Run to try it out! Step 1: Build and query an index in memory First the code:
```java
public static void siftInMemory(ArrayList<VectorFloat<?>> baseVectors) throws IOException {
// infer the dimensionality from the first vector
int originalDimension = baseVectors.get(0).length();
// wrap the raw vectors in a RandomAccessVectorValues
RandomAccessVectorValues ravv = new ListRandomAccessVectorValues(baseVectors, originalDimension); // score provider using the raw, in-memory vectors
BuildScoreProvider bsp = BuildScoreProvider.randomAccessScoreProvider(ravv, VectorSimilarityFunction.EUCLIDEAN);
try (GraphIndexBuilder builder = new GraphIndexBuilder(bsp,
ravv.dimension(),
16, // graph degree
100, // construction search depth
1.2f, // allow degree overflow during construction by this factor
1.2f)) // relax neighbor diversity requirement by this factor
{
// build the index (in memory)
OnHeapGraphIndex index = builder.build(ravv);
// search for a random vector
VectorFloat<?> q = randomVector(originalDimension);
SearchResult sr = GraphSearcher.search(q,
10, // number of results
ravv, // vectors we're searching, used for scoring
VectorSimilarityFunction.EUCLIDEAN, // how to score
index,
Bits.ALL); // valid ordinals to consider
for (SearchResult.NodeScore ns : sr.getNodes()) {
System.out.println(ns);
}
}
} ``` Commentary:
* All indexes assume that you have a source of vectors that has a consistent, fixed dimension (number of float32 components).
* Vector sources are usually represented as a subclass of RandomAccessVectorValues , which offers a simple API around getVector / getVectorInto. Do be aware of isValueShared() in a multithreaded context; as a rule of thumb, in-memory RAVV will not use shared values and from-disk RAVV will (as an optimization to avoid allocating a new on-heap vector for every call).
* You do NOT have to provide all the vectors to the index at once, but since this is a common scenario when prototyping, a convenience method is provided to do so. We will cover how to build an index incrementally later.
* For the overflow Builder parameter, the sweet spot is about 1.2 for in-memory construction and 1.5 for on-disk. (The more overflow is allowed, the fewer recomputations of best edges are required, but the more neighbors will be consulted in every search.)
* The alpha parameter controls the tradeoff between edge distance and diversity; usually 1.2 is sufficient for high-dimensional vectors; 2.0 is recommended for 2D or 3D datasets. See the DiskANN paper for more details.
* The Bits parameter to GraphSearcher is intended for controlling your resultset based on external predicates and won’t be used in this tutorial. Step 2: more control over GraphSearcher Keeping the Builder the same, the updated search code looks like this: java
// search for a random vector using a GraphSearcher and SearchScoreProvider
VectorFloat<?> q = randomVector(originalDimension);
try (GraphSearcher searcher = new GraphSearcher(index)) {
SearchScoreProvider ssp = SearchScoreProvider.exact(q, VectorSimilarityFunction.EUCLIDEAN, ravv);
SearchResult sr = searcher.search(ssp, 10, Bits.ALL);
for (SearchResult.NodeScore ns : sr.getNodes()) {
System.out.println(ns);
}
} Commentary:
* Searcher allocation is modestly expensive since there is a bunch of internal state to initialize at construction time. Therefore, JVector supports pooling searchers (e.g. with ExplicitThreadLocal , covered below).
* When managing GraphSearcher instances, you’re also responsible for constructing a SearchScoreProvider, which you can think of as: given a query vector, tell JVector how to compute scores for other nodes in the index. SearchScoreProvider can be exact, as shown here, or a combination of an ApproximateScoreFunction and a Reranker, covered below. Step 3: Measuring recall A blisteringly-fast vector index isn’t very useful if it doesn’t return accurate results. As a sanity check, SiftSmall includes a helper method testRecall . Wiring up that to our code mostly involves turning the SearchScoreProvider into a factory lambda: java
Function<VectorFloat<?>, SearchScoreProvider> sspFactory = q -> SearchScoreProvider.exact(q, VectorSimilarityFunction.EUCLIDEAN, ravv);
testRecall(index, queryVectors, groundTruth, sspFactory); If you run the code, you will see slight differences in recall every time (printed by testRecall ): (OnHeapGraphIndex) Recall: 0.9898
...
(OnHeapGraphIndex) Recall: 0.9890 This is expected given the approximate nature of the index being created and the perturbations introduced by the multithreaded concurrency of the build call. Step 4: write and load index to and from disk The code:
```java
Path indexPath = Files.createTempFile("siftsmall", ".inline");
try (GraphIndexBuilder builder = new GraphIndexBuilder(bsp, ravv.dimension(), 16, 100, 1.2f, 1.2f)) {
// build the index (in memory)
OnHeapGraphIndex index = builder.build(ravv);
// write the index to disk with default options
OnDiskGraphIndex.write(index, ravv, indexPath);
} // on-disk indexes require a ReaderSupplier (not just a Reader) because we will want it to
// open additional readers for searching
ReaderSupplier rs = new SimpleMappedReaderSupplier(indexPath);
OnDiskGraphIndex index = OnDiskGraphIndex.load(rs);
// measure our recall against the (exactly computed) ground truth
Function<VectorFloat<?>, SearchScoreProvider> sspFactory = q -> SearchScoreProvider.exact(q, VectorSimilarityFunction.EUCLIDEAN, ravv);
testRecall(index, queryVectors, groundTruth, sspFactory); ``` Commentary:
* We can write indexes that are constructed in-memory, like this one, to disk with a single method call.
* Loading and searching against on-disk indexes require a ReaderSupplier, which supplies RandomAccessReader objects. The RandomAccessReader interface is intended to be extended by the consuming project. For instance, DataStax Astra implements a RandomAccessReader backed by the Cassandra chunk cache. JVector provides two implementations out of the box.
* SimpleMappedReader: implemented using FileChannel.map, which means it is compatible with all Java versions that can run JVector, but also means it is limited to 2GB file sizes. SimpleMappedReader is primarily intended for example code.
* MemorySegmentReader: implemented using the newer MemorySegment API, with no file size limit, but is limited to Java 22+. (The actual MemorySegmentReader code is compatible with Java 20+, but we left it in the 22+ module for convenience . The motivated reader is welcome to refactor the build to improve this.) If you have no specialized requirements then MemorySegmentReader is recommended for production. Step 5: use compressed vectors in the search Compressing the vectors with product quantization is done as follows: java
// compute and write compressed vectors to disk
Path pqPath = Files.createTempFile("siftsmall", ".pq");
try (DataOutputStream out = new DataOutputStream(new BufferedOutputStream(Files.newOutputStream(pqPath)))) {
// Compress the original vectors using PQ. this represents a compression ratio of 128 * 4 / 16 = 32x
ProductQuantization pq = ProductQuantization.compute(ravv,
16, // number of subspaces
256, // number of centroids per subspace
true); // center the dataset
ByteSequence<?>[] compressed = pq.encodeAll(ravv);
// write the compressed vectors to disk
PQVectors pqv = new PQVectors(pq, compressed);
pqv.write(out);
} * JVector also supports Binary Quantization, but BQ is generally less useful than PQ since it takes such a large toll on search accuracy . Then we can wire up the compressed vectors to a two-phase search by getting the fast ApproximateScoreFunction from PQVectors, and the Reranker from the index View: java
ReaderSupplier rs = new MMapReaderSupplier(indexPath);
OnDiskGraphIndex index = OnDiskGraphIndex.load(rs);
// load the PQVectors that we just wrote to disk
try (RandomAccessReader in = new SimpleMappedReader(pqPath)) {
PQVectors pqv = PQVectors.load(in);
// SearchScoreProvider that does a first pass with the loaded-in-memory PQVectors,
// then reranks with the exact vectors that are stored on disk in the index
Function<VectorFloat<?>, SearchScoreProvider> sspFactory = q -> {
ApproximateScoreFunction asf = pqv.precomputedScoreFunctionFor(q, VectorSimilarityFunction.EUCLIDEAN);
Reranker reranker = index.getView().rerankerFor(q, VectorSimilarityFunction.EUCLIDEAN);
return new SearchScoreProvider(asf, reranker);
};
// measure our recall against the (exactly computed) ground truth
testRecall(index, queryVectors, groundTruth, sspFactory);
} * PQVectors offers both precomputedScoreFunctionFor and scoreFunctionFor factories. As the name implies, the first precalculates the fragments of distance needed from the PQ codebook for assembly by ADC (asymmetric distance computation). This is faster for searching all but the smallest of indexes, but if you do have a tiny index or you need to perform ADC in another context, the scoreFunctionFor version with no precomputation will come in handy. This set of functionality is the classic DiskANN design. Step 6: building a larger-than-memory index JVector can also apply PQ compression to allow indexing a larger than memory dataset: only the compressed vectors are kept in memory. First we need to set up a PQVectors instance that we can add new vectors to, and a BuildScoreProvider using it:
```java
// compute the codebook, but don't encode any vectors yet
ProductQuantization pq = ProductQuantization.compute(ravv, 16, 256, true); // as we build the index we'll compress the new vectors and add them to this List backing a PQVectors;
// this is used to score the construction searches
List<ByteSequence<?>> incrementallyCompressedVectors = new ArrayList<>();
PQVectors pqv = new PQVectors(pq, incrementallyCompressedVectors);
BuildScoreProvider bsp = BuildScoreProvider.pqBuildScoreProvider(VectorSimilarityFunction.EUCLIDEAN, pqv); ``` Then we need to set up an OnDiskGraphIndexWriter with full control over the construction process. java
Path indexPath = Files.createTempFile("siftsmall", ".inline");
Path pqPath = Files.createTempFile("siftsmall", ".pq");
// Builder creation looks mostly the same
try (GraphIndexBuilder builder = new GraphIndexBuilder(bsp, ravv.dimension(), 16, 100, 1.2f, 1.2f);
// explicit Writer for the first time, this is what's behind OnDiskGraphIndex.write
OnDiskGraphIndexWriter writer = new OnDiskGraphIndexWriter.Builder(builder.getGraph(), indexPath)
.with(new InlineVectors(ravv.dimension()))
.withMapper(new OnDiskGraphIndexWriter.IdentityMapper())
.build();
// output for the compressed vectors
DataOutputStream pqOut = new DataOutputStream(new BufferedOutputStream(Files.newOutputStream(pqPath)))) Once that’s done, we can index vectors one at a time: java
// build the index vector-at-a-time (on disk)
for (VectorFloat<?> v : baseVectors) {
// compress the new vector and add it to the PQVectors (via incrementallyCompressedVectors)
int ordinal = incrementallyCompressedVectors.size();
incrementallyCompressedVectors.add(pq.encode(v));
// write the full vector to disk
writer.writeInline(ordinal, Feature.singleState(FeatureId.INLINE_VECTORS, new InlineVectors.State(v)));
// now add it to the graph -- the previous steps must be completed first since the PQVectors
// and InlineVectorValues are both used during the search that runs as part of addGraphNode construction
builder.addGraphNode(ordinal, v);
} Finally, we need to run cleanup() and write the index and the PQVectors to disk:
```java
// cleanup does a final enforcement of maxDegree and handles other scenarios like deleted nodes
// that we don't need to worry about here
builder.cleanup(); // finish writing the index (by filling in the edge lists) and write our completed PQVectors
writer.write(Map.of());
pqv.write(pqOut); ```
Commentary:
* The search code doesn’t change when switching to incremental index construction – it’s the same index structure on disk, just (potentially) much larger.
* OnDiskGraphIndexWriter::writeInline is threadsafe via synchronization, but care must be taken that the support structures are threadsafe as well if you plan to use them in a multithreaded scenario (which this example is not). Alternatively, you can serialize the updates to PQVectors and leave only the call to GraphIndexBuilder::addGraphNode concurrent. This represents the lion’s share of construction time so you will see good performance with that approach. Less-obvious points Embeddings models product output from a consistent distribution of vectors. This means that you can save and re-use ProductQuantization codebooks, even for a different set of vectors, as long as you had a sufficiently large training set to build it the first time around. ProductQuantization.MAX_PQ_TRAINING_SET_SIZE (128,000 vectors) has proven to be sufficiently large. JDK ThreadLocal objects cannot be referenced except from the thread that created them. This is a difficult design into which to fit caching of Closeable objects like GraphSearcher. JVector provides the ExplicitThreadLocal class to solve this. Fused ADC is only compatible with Product Quantization, not Binary Quantization. This is no great loss since very few models generate embeddings that are best suited for BQ . That said, BQ continues to be supported with non-Fused indexes. JVector heavily utilizes the Panama Vector API(SIMD) for ANN indexing and search. We have seen cases where the memory bandwidth is saturated during indexing and product quantization and can cause the process to slow down. To avoid this, the batch methods for index and PQ builds use a PhysicalCoreExecutor to limit the amount of operations to the physical core count. The default value is 1/2 the processor count seen by Java. This may not be correct in all setups (e.g. no hyperthreading or hybrid architectures) so if you wish to override the default use the -Djvector.physical_core_count property, or pass in your own ForkJoinPool instance. Advanced features Fused ADC is represented as a Feature that is supported during incremental index construction, like InlineVectors above. See the Grid class for sample code . Anisotropic PQ is built into the ProductQuantization class and can improve recall, but nobody knows how to tune it (with the T/threshold parameter) except experimentally on a per-model basis, and choosing the wrong setting can make things worse. From Figure 3 in the paper: JVector supports in-place deletes via GraphIndexBuilder::markNodeDeleted. Deleted nodes are removed and connections replaced during GraphIndexBuilder::cleanup, with runtime proportional to the number of deleted nodes. To checkpoint a graph and reload it for continued editing, use OnHeapGraphIndex::save and GraphIndexBuilder.load. The research behind the algorithms Foundational work: HNSW and DiskANN papers, and a higher level explainer Anisotropic PQ paper Quicker ADC paper Developing and Testing This project is organized as a multimodule Maven build . The intent is to produce a multirelease jar suitable for use as
a dependency from any Java 11 code. When run on a Java 20+ JVM with the Vector module enabled, optimized vector
providers will be used. In general, the project is structured to be built with JDK 20+, but when JAVA_HOME is set to
Java 11 -> Java 19, certain build features will still be available. Base code is in jvector-base and will be built for Java 11 releases, restricting language features and APIs
appropriately. Code in jvector-twenty will be compiled for Java 20 language features/APIs and included in the final
multirelease jar targeting supported JVMs. jvector-multirelease packages jvector-base and jvector-twenty as a
multirelease jar for release. jvector-examples is an additional sibling module that uses the reactor-representation of
jvector-base/jvector-twenty to run example code. jvector-tests contains tests for the project, capable of running against
both Java 11 and Java 20+ JVMs. To run tests, use mvn test . To run tests against Java 20+, use mvn test . To run tests against Java 11, use mvn -Pjdk11 test . To run a single test class,
use the Maven Surefire test filtering capability, e.g., mvn -Dtest=TestNeighborArray test . You may also use method-level filtering and patterns, e.g., mvn -Dtest=TestNeighborArray#testRetain* test . You can run SiftSmall and Bench directly to get an idea of what all is going on here. Bench will automatically download required datasets to the fvec and hdf5 directories.
The files used by SiftSmall can be found in the siftsmall directory in the project root. To run either class, you can use the Maven exec-plugin via the following incantations: mvn compile exec:exec@bench or for Sift: mvn compile exec:exec@sift Bench takes an optional benchArgs argument that can be set to a list of whitespace-separated regexes. If any of the
provided regexes match within a dataset name, that dataset will be included in the benchmark. For example, to run only the glove
and nytimes datasets, you could use: mvn compile exec:exec@bench -DbenchArgs="glove nytimes" To run Sift/Bench without the JVM vector module available, you can use the following invocations: mvn -Pjdk11 compile exec:exec@bench mvn -Pjdk11 compile exec:exec@sift The ... -Pjdk11 invocations will also work with JAVA_HOME pointing at a Java 11 installation. To release, configure ~/.m2/settings.xml to point to OSSRH and run mvn -Prelease clean deploy .;JVector: the most advanced embedded vector search engine;ann,java,knn,machine-learning,search-engine,similarity-search,vector-search | jbellis/jvector |
daveshap/ACE_Framework;Project Principles Exclusively Open Source We will be committed to using 100% open source software (OSS) for this project. This is to ensure maximum accessibility and democratic access. Exclusively Local Hardware No cloud or SaaS providers. This constrains the project to run locally on servers, desktops, laptops, smart home, and portable devices. Principles 1. Be Scrappy Don't wait for permission or controls. This is a purely volunteer group so if something resonates, go for it. Experiment. Try stuff. Break stuff. Share your results. Use your own sandboxes, report results, and together we'll decide what get's pulled into MAIN via pull request. But as one member said: we need more data! So the principle here is to engage in activities that generate more data, more telemetry, so we can see and feel what's working and what isn't. 2. Avoid Vendor Lockin While we generally agree we want to be model agnostic, but acknowledge there are problems with this, the overarching principle is that we don't want to get locked into working with any single vendor. OpenAI in particular. This means we tinker with multiple providers, vendors, models, etc. This also means a preference for Open Source wherever possible. This general principle covers several areas. 3. Task-Constrained Approach The team acknowledges that the tasks the framework can accomplish are constrained by the capabilities provided to it. Identifying the potential task space is critical, and the framework should be designed with the types of tasks it should be able to accomplish in mind. This means that milestones and capabilities should be measured by tests and tasks so that we can remain empirical and objective oriented. 4. Avoiding Overcomplication The team agrees on the importance of not overcomplicating the project from the start. Modest milestones and a focus on what is feasible are recommended. As we're doing nothing short of aiming for fully autonomous software, we need to not "boil the ocean" or "eat the whole elephant." Small, incremental steps while keeping the grand vision in mind. 5. Establish New Principles We're exploring an entirely new domain of software architecture and design. Some old paradigms will obviously still apply, but some won't. Thus, we are discovering and deriving new principles of autonomous software design as we go. This adds a layer of complexity to our task. Intro to this Repo This is the main public repo for the ACE (Autonomous Cognitive Entity) repository. If you're looking for the main ACE Framework documentation, it is available here: https://github.com/daveshap/ACE_Framework/blob/main/ACE_Framework.md Participation Please check out the following files and locations for more details about participation: Contributing: https://github.com/daveshap/ACE_Framework/blob/main/contributing.md This page will be updated with the best ways to contribute Agile: https://github.com/daveshap/ACE_Framework/blob/main/agile.md This is the overall roadmap and organizational document Discussions: https://github.com/daveshap/ACE_Framework/discussions Jump into the discussions! We also have a Public Discord. Keep in mind that this repo is the Single Source of Truth! Discord is just for convenience. If it's not on the repo, it doesn't exist! Join the Autonomous AI Lab here: https://discord.gg/mJKUYNm8qY Projects There are many possible implementations of the ACE Framework. Rather than detail every possible permutation, here is a list of categories that we perceive as likely and viable. 
Personal Assistant and/or Companion This is a self-contained version of ACE that is intended to interact with one user. Think of Cortana from HALO, Samantha from HER, or Joi from Blade Runner 2049. (yes, we recognize these are all sexualized female avatars) The idea would be to create something that is effectively a personal Executive Assistant that is able to coordinate, plan, research, and solve problems for you. This could be deployed on mobile, smart home devices, laptops, or web sites. Game World NPC's This is a kind of game character that has their own personality, motivations, agenda, and objectives. Furthermore, they would have their own unique memories. This can give NPCs a much more realistic ability to pursue their own objectives, which should make game experiences much more dynamic and unpredictable, thus raising novelty. These can be adapted to 2D or 3D game engines such as PyGame, Unity, or Unreal. Autonomous Employee This is a version of the ACE that is meant to carry out meaningful and productive work inside a corporation. Whether this is a digital CSR or backoffice worker depends on the deployment. It could also be a "digital team member" that primarily interacts via Discord, Slack, or Microsoft Teams. Embodied Robot The ACE Framework is ideal to create self-contained, autonomous machines. Whether they are domestic aid robots or something like WALL-E;ACE (Autonomous Cognitive Entities) - 100% local and open source autonomous agents;[] | daveshap/ACE_Framework |
XiaoxinHe/Awesome-Graph-LLM;Awesome-Graph-LLM A collection of AWESOME things about Graph-Related Large Language Models (LLMs) . Large Language Models (LLMs) have shown remarkable progress in natural language processing tasks. However, their integration with graph structures, which are prevalent in real-world applications, remains relatively unexplored. This repository aims to bridge that gap by providing a curated list of research papers that explore the intersection of graph-based techniques with LLMs. Table of Contents Awesome-Graph-LLM Table of Contents Datasets, Benchmarks \& Surveys Prompting General Graph Model Large Multimodal Models (LMMs) Applications Basic Graph Reasoning Node Classification Graph Classification/Regression Knowledge Graph Molecular Graph Others Resources \& Tools Contributing Star History Datasets, Benchmarks & Surveys ( NAACL'21 ) Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training [ paper ][ code ] ( NeurIPS'23 ) Can Language Models Solve Graph Problems in Natural Language? [ paper ][ code ] ( IEEE Intelligent Systems 2023 ) Integrating Graphs with Large Language Models: Methods and Prospects [ paper ] ( ICLR'24 ) Talk like a Graph: Encoding Graphs for Large Language Models [ paper ] ( arXiv 2023.05 ) GPT4Graph: Can Large Language Models Understand Graph Structured Data? An Empirical Evaluation and Benchmarking [ paper ][ code ] ( arXiv 2023.08 ) Graph Meets LLMs: Towards Large Graph Models [ paper ] ( arXiv 2023.10 ) Towards Graph Foundation Models: A Survey and Beyond [ paper ] ( arXiv 2023.10 ) LLM4DyG: Can Large Language Models Solve Problems on Dynamic Graphs? [ paper ] ( arXiv 2023.11 ) Can Knowledge Graphs Reduce Hallucinations in LLMs? : A Survey [ paper ] ( arXiv 2023.11 ) A Survey of Graph Meets Large Language Model: Progress and Future Directions [ paper ][ code ] ( arXiv 2023.12 ) Large Language Models on Graphs: A Comprehensive Survey [ paper ][ code ] ( arXiv 2024.02 ) Towards Versatile Graph Learning Approach: from the Perspective of Large Language Models [ paper ] ( arXiv 2024.04 ) Graph Machine Learning in the Era of Large Language Models (LLMs) [ paper ] ( arXiv 2024.05 ) A Survey of Large Language Models for Graphs [ paper ][ code ] Prompting ( EMNLP'23 ) StructGPT: A General Framework for Large Language Model to Reason over Structured Data [ paper ][ code ] ( AAAI'24 ) Graph of Thoughts: Solving Elaborate Problems with Large Language Models [ paper ][ code ] ( arXiv 2023.05 ) PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs [ paper ][ code ] ( arXiv 2023.08 ) Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought [ paper ] ( arxiv 2023.10 ) Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models [ paper ] ( arxiv 2024.01 ) Topologies of Reasoning: Demystifying Chains, Trees, and Graphs of Thoughts [ paper ] ( arxiv 2024.04 ) Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [ paper ) General Graph Model ( ICLR'24 ) One for All: Towards Training One Graph Model for All Classification Tasks [ paper ][ code ] ( arXiv 2023.08 ) Natural Language is All a Graph Needs [ paper ][ code ] ( arXiv 2023.10 ) GraphGPT: Graph Instruction Tuning for Large Language Models [ paper ][ code ][ blog in Chinese ] ( arXiv 2023.10 ) Graph Agent: Explicit Reasoning Agent for Graphs [ paper ] ( arXiv 2024.02 ) Let Your Graph Do the Talking: Encoding Structured Data 
for LLMs [ paper ] ( arXiv 2024.02 ) G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [ paper ][ code ][ blog ] ( arXiv 2024.02 ) InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment [ paper ][ code ] ( arXiv 2024.02 ) LLaGA: Large Language and Graph Assistant [ paper ][ code ] ( arXiv 2024.02 ) HiGPT: Heterogeneous Graph Language Model [ paper ][ code ] ( arXiv 2024.02 ) UniGraph: Learning a Cross-Domain Graph Foundation Model From Natural Language [ paper ] ( arXiv 2024.06 ) UniGLM: Training One Unified Language Model for Text-Attributed Graphs [ paper ][ code ] Large Multimodal Models (LMMs) ( NeurIPS'23 ) GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph [ paper ][ code ] ( arXiv 2023.10 ) Multimodal Graph Learning for Generative Tasks [ paper ][ code ] ( arXiv 2024.02 ) Rendering Graphs for Graph Reasoning in Multimodal Large Language Models [ paper ] Applications Basic Graph Reasoning ( KDD'24 ) GraphWiz: An Instruction-Following Language Model for Graph Problems [ paper ][ code ][ project ] ( arXiv 2023.04 ) Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT [ paper ][ code ] ( arXiv 2023.10 ) GraphText: Graph Reasoning in Text Space [ paper ] ( arXiv 2023.10 ) GraphLLM: Boosting Graph Reasoning Ability of Large Language Model [ paper ][ code ] Node Classification ( ICLR'24 ) Explanations as Features: LLM-Based Features for Text-Attributed Graphs [ paper ][ code ] ( ICLR'24 ) Label-free Node Classification on Graphs with Large Language Models (LLMS) [ paper ] ( WWW'24 ) Can GNN be Good Adapter for LLMs? [ paper ][ code ] ( arXiv 2023.07 ) Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [ paper ][ code ] ( arXiv 2023.09 ) Can LLMs Effectively Leverage Structural Information for Graph Learning: When and Why [ paper ][ code ] ( arXiv 2023.10 ) Empower Text-Attributed Graphs Learning with Large Language Models (LLMs) [ paper ] ( arXiv 2023.10 ) Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [ paper ] ( arXiv 2023.11 ) Large Language Models as Topological Structure Enhancers for Text-Attributed Graphs [ paper ] ( arXiv 2024.01 ) Efficient Tuning and Inference for Large Language Models on Textual Graphs [ paper ][ code ] ( arXiv 2024.02 ) Similarity-based Neighbor Selection for Graph LLMs [ paper ] [ code ] ( arXiv 2024.02 ) Distilling Large Language Models for Text-Attributed Graph Learning [ paper ] ( arXiv 2024.02 ) GraphEdit: Large Language Models for Graph Structure Learning [ paper ][ code ] ( arXiv 2024.05 ) LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework [ paper ][ code ] ( arXiv 2024.06 ) GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models [ paper ][ code ] Graph Classification/Regression ( arXiv 2023.06 ) GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning [ paper ][ code ] ( arXiv 2023.07 ) Can Large Language Models Empower Molecular Property Prediction? 
[ paper ][ code ] Knowledge Graph ( AAAI'22 ) Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs [ paper ] ( EMNLP'22 ) Language Models of Code are Few-Shot Commonsense Learners [ paper ][ code ] ( SIGIR'23 ) Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [ paper ][ code ] ( TKDE‘23 ) AutoAlign: Fully Automatic and Effective Knowledge Graph Alignment enabled by Large Language Models [ paper ][ code ] ( AAAI'24 ) Graph Neural Prompting with Large Language Models [ paper ][ code ] ( NAACL'24 ) zrLLM: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models [ paper ] ( ICLR'24 ) Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph [ paper ][ code ] ( arXiv 2023.04 ) CodeKGC: Code Language Model for Generative Knowledge Graph Construction [ paper ][ code ] ( arXiv 2023.05 ) Knowledge Graph Completion Models are Few-shot Learners: An Empirical Study of Relation Labeling in E-commerce with LLMs [ paper ] ( arXiv 2023.08 ) MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models [ paper ][ code ] ( arXiv 2023.10 ) Faithful Path Language Modelling for Explainable Recommendation over Knowledge Graph [ paper ] ( arXiv 2023.10 ) Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning [ paper ][ code ] ( arXiv 2023.11 ) Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models [ paper ] ( arXiv 2023.12 ) KGLens: A Parameterized Knowledge Graph Solution to Assess What an LLM Does and Doesn’t Know [ paper ] ( arXiv 2024.02 ) Large Language Model Meets Graph Neural Network in Knowledge Distillation [ paper ] ( arXiv 2024.02 ) Large Language Models Can Learn Temporal Reasoning [ paper ][ code ] ( arXiv 2024.03 ) Call Me When Necessary: LLMs can Efficiently and Faithfully Reason over Structured Environments [ paper ] ( arXiv 2024.04 ) Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs [ paper ][ code ] ( arXiv 2024.04 ) Extract, Define, Canonicalize: An LLM-based Framework for Knowledge Graph Construction [ paper ] ( arXiv 2024.05 ) FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering [ paper ] ( arXiv 2024.06 ) Explore then Determine: A GNN-LLM Synergy Framework for Reasoning over Knowledge Graph [ paper ] Molecular Graph ( arXiv 2024.06 ) MolecularGPT: Open Large Language Model (LLM) for Few-Shot Molecular Property Prediction [ paper ][ code ] Others ( WSDM'24 ) LLMRec: Large Language Models with Graph Augmentation for Recommendation [ paper ][ code ][ blog in Chinese ]. 
( arXiv 2023.03 ) Ask and You Shall Receive (a Graph Drawing): Testing ChatGPT’s Potential to Apply Graph Layout Algorithms [ paper ] ( arXiv 2023.05 ) Graph Meets LLM: A Novel Approach to Collaborative Filtering for Robust Conversational Understanding [ paper ] ( arXiv 2023.05 ) ChatGPT Informed Graph Neural Network for Stock Movement Prediction [ paper ][ code ] ( arXiv 2023.10 ) Graph Neural Architecture Search with GPT-4 [ paper ] ( arXiv 2023.11 ) Biomedical knowledge graph-enhanced prompt generation for large language models [ paper ][ code ] ( arXiv 2023.11 ) Graph-Guided Reasoning for Multi-Hop Question Answering in Large Language Models [ paper ] ( arXiv 2024.02 ) Microstructures and Accuracy of Graph Recall by Large Language Models [ paper ] ( arXiv 2024.02 ) Causal Graph Discovery with Retrieval-Augmented Generation based Large Language Models [ paper ] ( arXiv 2024.02 ) Graph-enhanced Large Language Models in Asynchronous Plan Reasoning [ paper ][ code ] ( arXiv 2024.03 ) Exploring the Potential of Large Language Models in Graph Generation [ paper ] ( arXiv 2024.05 ) Don't Forget to Connect! Improving RAG with Graph-based Reranking [ paper ] ( arXiv 2024.05 ) Can Graph Learning Improve Task Planning? [ paper ][ code ] ( arXiv 2024.06 ) GNN-RAG: Graph Neural Retrieval for Large Language Modeling Reasoning [ paper ][ code ] Resources & Tools GraphGPT: Extrapolating knowledge graphs from unstructured text using GPT-3 GraphML: Graph markup language . An XML-based file format for graphs. GML: Graph modelling language . Read graphs in GML format. Contributing 👍 Contributions to this repository are welcome! If you have come across relevant resources, feel free to open an issue or submit a pull request. - (*conference|journal*) paper_name [[pdf](link)][[code](link)] Star History;A collection of AWESOME things about Graph-Related LLMs.;[] | XiaoxinHe/Awesome-Graph-LLM |
kyegomez/BitNet;BitNet PyTorch Implementation of the linear methods and model from the paper "BitNet: Scaling 1-bit Transformers for Large Language Models" Paper link: BitLinear = tensor -> layernorm -> Binarize -> abs max quantization -> dequant "The implementation of the BitNet architecture is quite simple, requiring only the replacement of linear projections (i.e., nn.Linear in PyTorch) in the Transformer. " -- BitNet is really easy to implement: just swap out the linears with the BitLinear modules! NEWS New Iteration 🔥 There is an all-new iteration from the paper " The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits ", we're implementing it now. Join the Agora discord and contribute! Join Here New Optimizations The first BitLinear has been optimized and we now have a Bit Attention BitMGQA that implements BitLinear into the attention mechanism. Multi Grouped Query Attention is also widely recognized as the best attention for its fast decoding and long context handling, thanks to Frank for his easy to use implementation! BitLinear 1.5 Launch 🔥 : The new BitLinear 1.5 is still in progress 🔥 Here is the file There are still some bugs, like with the dequantization algorithm, and we still need to replace the multiplication with elementwise addition; if you could help with this, it would be amazing. NOTICE : A model obviously needs to be finetuned from scratch to use BitLinear; just changing the linear methods in an already trained model isn't going to work. Finetune or train from scratch. Appreciation Dimitry, Nullonix for analysis and code review and revision Vyom, for providing 4080 to train! Installation bash
pip3 install bitnet Usage We have a vast selection of example scripts here and in the examples folder: , let me know if you want assistance with a use-case in the discord! BitLinear Example of the BitLinear layer which is the main innovation of the paper!
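Before diving into the library's own examples, here is a minimal from-scratch sketch of the `layernorm -> Binarize -> abs max quantization -> dequant` pipeline described above. This is only an illustration of the idea under stated assumptions (the class name `NaiveBitLinear`, the 8-bit activation default, and the straight-through-estimator details are made up for the example), not the actual `BitLinear` implementation shipped in the `bitnet` package; the library's own usage examples follow.

```python
import torch
from torch import nn
import torch.nn.functional as F


class NaiveBitLinear(nn.Module):
    """Illustrative stand-in: layernorm -> binarize weights -> absmax-quantize activations -> dequant."""

    def __init__(self, in_features: int, out_features: int, bits: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(in_features)
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.q_max = 2 ** (bits - 1) - 1  # 127 for 8-bit activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm(x)

        # Binarize weights to {-1, +1} around their mean, keeping a per-tensor scale beta.
        w = self.weight
        beta = w.abs().mean()
        w_bin = torch.sign(w - w.mean())
        # Straight-through estimator: forward uses binarized weights, backward sees the real ones.
        w_q = w + (w_bin - w).detach()

        # Absmax-quantize activations to integer levels, then dequantize back to float range (also STE).
        gamma = x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
        x_int = (x * self.q_max / gamma).round().clamp(-self.q_max, self.q_max)
        x_q = x + (x_int * gamma / self.q_max - x).detach()

        # Linear projection with the quantized operands, rescaled by the weight scale.
        return F.linear(x_q, w_q) * beta


x = torch.randn(2, 16, 512)
layer = NaiveBitLinear(512, 256)
print(layer(x).shape)  # torch.Size([2, 16, 256])
```

The real implementation differs in details (grouping, scaling constants, and the 1.58-bit ternary variant mentioned in the NEWS section above), so treat this purely as a reading aid.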
```python
import torch from bitnet import BitLinear Input x = torch.randn(10, 1000, 512) BitLinear layer layer = BitLinear(512, 400) Output y = layer(x) print(y) ``` BitLinearNew ```python
import torch
from bitnet import BitLinearNew Create a random tensor of shape (16, 10) x = torch.randn(16, 1000, 512) Create an instance of the BitLinearNew class with input size 10, output size 20, and 2 groups layer = BitLinearNew(
512,
20,
) Perform a forward pass through the BitLinearNew layer with input x output = layer(x) Print the output tensor print(output)
print(output.shape)
``` BitNetTransformer Fully implemented Transformer as described in the diagram with MHA, and BitFeedforwards Can be utilized not just for text but for images and maybe even video or audio processing Complete with residuals and skip connections for gradient flow ```python Import the necessary libraries import torch
from bitnet import BitNetTransformer Create a random tensor of integers x = torch.randint(0, 20000, (1, 1024)) Initialize the BitNetTransformer model bitnet = BitNetTransformer(
num_tokens=20000, # Number of unique tokens in the input
dim=1024, # Dimension of the input and output embeddings
depth=6, # Number of transformer layers
heads=8, # Number of attention heads
ff_mult=4, # Multiplier for the hidden dimension in the feed-forward network
) Pass the tensor through the transformer model logits = bitnet(x) Print the shape of the output print(logits) ``` BitAttention This Attention has been modified to use BitLinear instead of the default linear projection. It's also using Multi-Grouped Query Attention instead of regular multi-head attention for faster decoding and longer context handling. ```python
import torch
from bitnet import BitMGQA Create a random tensor of shape (1, 10, 512) x = torch.randn(1, 10, 512) Create an instance of the BitMGQA model with input size 512, 8 attention heads, and 4 layers gqa = BitMGQA(512, 8, 4) Pass the input tensor through the BitMGQA model and get the output and attention weights out, _ = gqa(x, x, x, need_weights=True) Print the shapes of the output tensor and attention tensor print(out)
``` BitFeedForward Feedforward as shown in the diagram with BitLinear and a GELU: Linear -> GELU -> Linear You can add dropouts, or layernorms, or other layers for a better ffn ```python
import torch
from bitnet import BitFeedForward Create a random input tensor of shape (10, 512) x = torch.randn(10, 512) Create an instance of the BitFeedForward class with the following parameters: - input_dim: 512 - hidden_dim: 512 - num_layers: 4 - swish: True (use Swish activation function) - post_act_ln: True (apply Layer Normalization after each activation) - dropout: 0.1 (apply dropout with a probability of 0.1) ff = BitFeedForward(512, 512, 4, swish=True, post_act_ln=True, dropout=0.1) Apply the BitFeedForward network to the input tensor x y = ff(x) Print the shape of the output tensor y print(y) # torch.Size([10, 512])
``` Inference ```python
from bitnet import BitNetInference bitnet = BitNetInference()
bitnet.load_model("../model_checkpoint.pth") # Download model
output_str = bitnet.generate("The dog jumped over the ", 512)
print(output_str)
``` Huggingface Usage ```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer from bitnet import replace_linears_in_hf Load a model from Hugging Face's Transformers model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name) Replace Linear layers with BitLinear replace_linears_in_hf(model) Example text to classify text = "Replace this with your text"
inputs = tokenizer(
text, return_tensors="pt", padding=True, truncation=True, max_length=512
) Perform inference model.eval() # Set the model to evaluation mode
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions) Process predictions predicted_class_id = predictions.argmax().item()
print(f"Predicted class ID: {predicted_class_id}") Optionally, map the predicted class ID to a label, if you know the classification labels labels = ["Label 1", "Label 2", ...] # Define your labels corresponding to the model's classes print(f"Predicted label: {labels[predicted_class_id]}") ``` Drop in Replacement for Pytorch Models ```python
import torch
from torch import nn
from bitnet import replace_linears_in_pytorch_model Define a simple model model = nn.Sequential(
nn.Linear(10, 20),
nn.ReLU(),
nn.Linear(20, 30),
) print("Before replacement:")
print(model) Replace nn.Linear with BitLinear replace_linears_in_pytorch_model(model) print("After replacement:")
print(model) Now you can use the model for training or inference For example, pass a random input through the model input = torch.randn(1, 10)
output = model(input)
``` Optimized Cuda Kernel python setup.py build_ext --inplace ```python
import torch
import gemm_lowbit_ext # This imports the compiled module Example usage a = torch.randn(10, 20, dtype=torch.half, device='cuda') # Example tensor
b = torch.randn(20, 30, dtype=torch.half, device='cuda') # Example tensor
c = torch.empty(10, 30, dtype=torch.half, device='cuda') # Output tensor w_scale = 1.0 # Example scale factor
x_scale = 1.0 # Example scale factor Call the custom CUDA GEMM operation gemm_lowbit_ext.gemm_lowbit(a, b, c, w_scale, x_scale) print(c) # View the result ``` BitLora Implementation of BitLora! ```python
import torch
from bitnet import BitLora Random text tensor x = torch.randn(1, 12, 200) Create an instance of the BitLora model model = BitLora(in_features=200, out_features=200, rank=4, lora_alpha=1) Perform the forward pass out = model(x) Print the shape of the output tensor print(out.shape)
``` BitMamba ```python
import torch
from bitnet import BitMamba Create a tensor of size (2, 10) with random values between 0 and 100 x = torch.randint(0, 100, (2, 10)) Create an instance of the BitMamba model with input size 512, hidden size 100, output size 10, and depth size 6 model = BitMamba(512, 100, 10, 6, return_tokens=True) Pass the input tensor through the model and get the output output = model(x) Print the output tensor print(output) Print the shape of the output tensor print(output.shape) ``` BitMoE ```python
import torch
from bitnet.bit_moe import BitMoE Create input tensor x = torch.randn(2, 4, 8) Create BitMoE model with specified input and output dimensions model = BitMoE(8, 4, 2) Forward pass through the model output = model(x) Print the output print(output)
``` 1 Bit Vision Transformers This idea came to me out of nowhere but it seems to be pretty fun, as you can leverage bitlinear for vision tasks for ultra-compression. It would be nice to train this on imagenet if you could make a script, we'll provide the compute. Then the next stage would be to train a join vision language model gpt-4o ```python
import torch
from bitnet import OneBitViT Create an instance of the OneBitViT model v = OneBitViT(
image_size=256,
patch_size=32,
num_classes=1000,
dim=1024,
depth=6,
heads=16,
mlp_dim=2048,
) Generate a random image tensor img = torch.randn(1, 3, 256, 256) Pass the image through the OneBitViT model to get predictions preds = v(img) # (1, 1000) Print the predictions print(preds) ``` License MIT Citation ```bibtex
@misc{2310.11453,
Author = {Hongyu Wang and Shuming Ma and Li Dong and Shaohan Huang and Huaijie Wang and Lingxiao Ma and Fan Yang and Ruiping Wang and Yi Wu and Furu Wei},
Title = {BitNet: Scaling 1-bit Transformers for Large Language Models},
Year = {2023},
Eprint = {arXiv:2310.11453},
} ``` Todo [x] Double check BitLinear implementation and make sure it works exactly as in paper [x] Implement training script for BitNetTransformer [x] Train on Enwiki8, copy and past code and data from Lucidrains repos [x] Benchmark performance [x] Look into Straight Through Estimator for non-differentiable backprop [x] Implement BitFeedForward [x] Clean up codebase [x] Add unit tests for each module [x] Implement the new BitNet1.5b from the paper [ ] Implement the BitNet15b in Cuda [ ] Implement the low bit gemm cuda kernel;Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch;artificial-intelligence,deep-neural-networks,deeplearning,gpt4,machine-learning,multimodal,multimodal-deep-learning | kyegomez/BitNet |
Vchitect/Latte;Latte: Latent Diffusion Transformer for Video Generation Official PyTorch Implementation This repo contains PyTorch model definitions, pre-trained weights, and training/sampling code for our paper exploring
latent diffusion models with transformers (Latte). You can find more visualizations on our project page . Latte: Latent Diffusion Transformer for Video Generation Xin Ma , Yaohui Wang* , Xinyuan Chen , Gengyun Jia , Ziwei Liu , Yuan-Fang Li , Cunjian Chen , Yu Qiao (*Corresponding Author & Project Lead) News (🔥 New) May 23, 2024 💥 Latte-1 is released! Pre-trained model can be downloaded here . We support both T2V and T2I . Please run bash sample/t2v.sh and bash sample/t2i.sh respectively. (🔥 New) Feb 24, 2024 💥 We are very grateful that researchers and developers like our work. We will continue to update our LatteT2V model, hoping that our efforts can help the community develop. Our Latte discord channel is created for discussions. Coders are welcome to contribute. (🔥 New) Jan 9, 2024 💥 An updated LatteT2V model initialized with the PixArt-α is released, the checkpoint can be found here . (🔥 New) Oct 31, 2023 💥 The training and inference code is released. All checkpoints (including FaceForensics, SkyTimelapse, UCF101, and Taichi-HD) can be found here . In addition, the LatteT2V inference code is provided. Setup First, download and set up the repo: bash
git clone https://github.com/Vchitect/Latte
cd Latte We provide an environment.yml file that can be used to create a Conda environment. If you only want
to run pre-trained models locally on CPU, you can remove the cudatoolkit and pytorch-cuda requirements from the file. bash
conda env create -f environment.yml
conda activate latte Sampling You can sample from our pre-trained Latte models with sample.py . Weights for our pre-trained Latte model can be found here . The script has various arguments to adjust sampling steps, change the classifier-free guidance scale, etc. For example, to sample from our model on FaceForensics, you can use: bash
bash sample/ffs.sh or if you want to sample hundreds of videos, you can use the following script with Pytorch DDP: bash
bash sample/ffs_ddp.sh If you want to try generating videos from text, just run bash sample/t2v.sh . All related checkpoints will download automatically. Training We provide a training script for Latte in train.py . This script can be used to train class-conditional and unconditional
Latte models. To launch Latte (256x256) training with N GPUs on the FaceForensics dataset: bash
torchrun --nnodes=1 --nproc_per_node=N train.py --config ./configs/ffs/ffs_train.yaml or If you have a cluster that uses slurm, you can also train Latte's model using the following scripts: bash
sbatch slurm_scripts/ffs.slurm We also provide the video-image joint training scripts train_with_img.py . Similar to train.py scripts, these scripts can be also used to train class-conditional and unconditional
Latte models. For example, if you want to train the Latte model on the FaceForensics dataset, you can use: bash
torchrun --nnodes=1 --nproc_per_node=N train_with_img.py --config ./configs/ffs/ffs_img_train.yaml Contact Us Yaohui Wang : wangyaohui@pjlab.org.cn Xin Ma : xin.ma1@monash.edu Citation If you find this work useful for your research, please consider citing it. bibtex
@article{ma2024latte,
title={Latte: Latent Diffusion Transformer for Video Generation},
author={Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
journal={arXiv preprint arXiv:2401.03048},
year={2024}
} Acknowledgments Latte has been greatly inspired by the following amazing works and teams: DiT and PixArt-α , we thank all the contributors for open-sourcing. License The code and model weights are licensed under LICENSE .;Latte: Latent Diffusion Transformer for Video Generation. ;[] | Vchitect/Latte |
Clooos/Bubble-Card;Bubble Card Bubble Card is a minimalist and customizable card collection for Home Assistant with a nice pop-up touch. Table of contents Installation Configuration Pop-up Horizontal buttons stack Button Media player Cover Separator Empty column Sub-buttons Card layouts Actions Styling Templates Conflicts Help Donate Installation Home Assistant lowest supported version: 2023.9.0 Without HACS 1. Download these files: [bubble-card.js](https://raw.githubusercontent.com/Clooos/Bubble-Card/main/dist/bubble-card.js) and [bubble-pop-up-fix.js](https://raw.githubusercontent.com/Clooos/Bubble-Card/main/dist/bubble-pop-up-fix.js)
2. Add these files to your ` /www` folder
3. On your dashboard click on the icon at the right top corner then on `Edit dashboard`
4. Click again on that icon and then click on `Manage resources`
5. Click on `Add resource`
6. Copy and paste this: `/local/bubble-card.js?v=1`
7. Click on `JavaScript Module` then `Create`
8. Go back and refresh your page
9. You can now click on `Add card` in the bottom right corner and search for `Bubble Card`
10. After any update of the file you will have to edit `/local/bubble-card.js?v=1` and change the version to any higher number
If it's not working, just try to clear your browser cache.
1. If HACS is not installed yet, download it following the instructions on [https://hacs.xyz/docs/setup/download](https://hacs.xyz/docs/setup/download/)
2. Proceed to the HACS initial configuration following the instructions on [https://hacs.xyz/docs/configuration/basic](https://hacs.xyz/docs/configuration/basic)
3. On your sidebar go to `HACS` > `Frontend`
4. Click on the `+` button at the bottom right corner
5. Now search for `Bubble Card` and then click on the button at the bottom right corner to download it
6. Go back on your dashboard and click on the icon at the right top corner then on `Edit dashboard`
7. You can now click on `Add card` in the bottom right corner and search for `Bubble Card`
If it's not working, try to clear your browser cache.
#### Videos
There is also a detailed setup video from **BeardedTinker**, thank you so much to you!
▶ [YouTube - Pop-up cards with Bubble Card in Home Assistant](https://www.youtube.com/watch?v=oCpxhNk5JkE)
And also a video for the German users from **smart-live.net**, thanks a lot to you too!
▶ [YouTube - Home Assistant Dashboard Bubble Card - Eine der Besten Karten 2023?](https://youtu.be/6ju6mTntvMI?si=EGvKm4qwjhcZdTyl) Configuration All options can be configured in the Home Assistant editor. But you can find more details and the YAML in the documentation below. ** Main options (YAML + description) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `type` | string | **Required** | `custom:bubble-card` | Type of the card |
| `card_type` | string | **Required** | `button`, `cover`, `empty-column`, `horizontal-buttons-stack`, `media-player`, `pop-up` or `separator` | Type of the Bubble Card, see below |
| `styles` | object list | Optional | Any CSS stylesheets | Allows you to customize your Bubble Card CSS, see [styling](#styling) | Pop-up This card allows you to convert any vertical stack into a pop-up. Each pop-up is hidden by default and can be opened by targeting its link (e.g. '#pop-up-name' ), with any card that supports the navigate action , or with the horizontal buttons stack that is included. [!IMPORTANT] This card must be placed within a vertical stack card at the topmost position to function properly. To avoid misalignment with your view, place vertical stacks/pop-ups after all other dashboard cards. In a section view (recommended), you can place them anywhere. It should be called from the same view to work. [!TIP] Pop-up trigger This feature allows you to open a pop-up based on the state of any entity, for example, you can open a "Security" pop-up with a camera when a person is in front of your house. You can also create a toggle helper (input_boolean) and trigger its opening/closing in an automation. Opening a pop-up when a binary_sensor is on yaml
type: custom:bubble-card
card_type: pop-up
hash: '#kitchen'
name: Security
icon: mdi:video
trigger_entity: binary_sensor.front_door_motion
trigger_state: 'on'
trigger_close: true Different ways to close a pop-up They are many ways to close a pop-up. For instance, you can swipe from the pop-up header to the bottom, by doing a long swipe inside the pop-up to the bottom, by pressing Escape on desktop, by removing the hash in the URL or by simply pressing the close button. Pop-up initialization fix If you notice that pop-up content appears upon page load, consider installing this fix as an additional module. Installation You can do this by adding bubble-pop-up-fix.js to your configuration.yaml like so: yaml
frontend:
extra_module_url:
- /hacsfiles/Bubble-Card/bubble-pop-up-fix.js If you didn't install it with HACS, change the path accordingly. Then, clear your browser cache. For Android Home Assistant Companion App users, you can close the app, then clear the app cache. If it's still not working, you can close and restart the app again. For iOS Home Assistant Companion App users, you can go to your Home Assistant settings, then navigate to Companion App > Debug > Clear Frontend Cache (or something similar), then refresh the page or restart the app. For previous users of the Optimized mode , you will need to replace your type: custom:bubble-pop-up with this in YAML mode: yaml
type: vertical-stack
cards:
- type: custom:bubble-card
card_type: pop-up Pop-up options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `hash` | string | **Required** | Any unique hash (e.g. `'#kitchen'`) with ' ' | This is how you will open your pop-up |
| `auto_close` | string | Optional | A timeout in milliseconds (e.g. `10000` for 10s) | Auto close the pop-up after a timeout |
| `close_on_click` | boolean | Optional | `true` or `false` (default) | Automatically close the pop-up after any interaction |
| `close_by_clicking_outside` | boolean | Optional | `true` (default) or `false` | Close the pop-up by clicking outside of it |
| `width_desktop` | string | Optional | Any CSS value | Width on desktop (`100%` by default on mobile) |
| `margin` | string | Optional | Any CSS value | Use this **only** if your pop-up is not well centered on mobile (e.g. `13px`) |
| `margin_top_mobile` | string | Optional | Any CSS value | Top margin on mobile (e.g. `-56px` if your header is hidden) |
| `margin_top_desktop` | string | Optional | Any CSS value | Top margin on desktop (e.g. `50vh` for a half-sized pop-up or `calc(100vh - 400px)` for a fixed height of `400px`) |
| `bg_color` | string | Optional | Any hex, rgb or rgba value | The background color of your pop-up (e.g. `#ffffff` for a white background) |
| `bg_opacity` | string | Optional | Any value from `0` to `100` | The background opacity of your pop-up (e.g. `100` for no transparency) |
| `bg_blur` | string | Optional | Any value from `0` to `100` | The background blur effect of your pop-up, **this only works if `bg_opacity` is not set to `100`** (e.g. `0` for no blur) |
| `shadow_opacity` | string | Optional | Any value from `0` to `100` | The shadow opacity of your pop-up (e.g. `0` to hide it) |
| `hide_backdrop` | boolean | Optional | `true` or `false` (default) | Set this to true on the first pop-up of your main dashboard to disable the backdrop on all pop-ups. |
| `background_update` | boolean | Optional | `true` or `false` (default) | Update pop-up content in background (not recommended) |
| `trigger_entity` | string | Optional | Any entity | Open this pop-up based on the state of any entity |
| `trigger_state` | string | Optional (**Required** if `trigger_entity` is defined) | Any entity state | Entity state to open the pop-up |
| `trigger_close` | boolean | Optional | `true` or `false` (default) | Close the pop-up when `trigger_state` is different |
| `card_layout` | string | Optional | `normal` (default), `large`, `large-2-rows` | Styling layout of the header, see [card layouts](#card-layouts) |
| You also have access to [all the button settings](#button) for the header of the pop-up. | | Optional | | If undefined, no header will be shown | Examples How to create a pop-up in the editor 1. To add a pop-up, you first need to add a `vertical-stack` card to your dashboard
2. This `vertical-stack` must be before all your other cards in your view order
3. Now add a `Bubble Card` with the `Pop-up` type
4. Just fill in the `Hash` input and the ones you need A pop-up ```yaml
type: vertical-stack
cards:
- type: custom:bubble-card
card_type: pop-up
hash: '#kitchen'
name: Kitchen
icon: mdi:fridge
entity: light.kitchen
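# Illustrative addition (not in the original example): the pop-up header
# accepts the button settings listed in the Button section, e.g. showing
# the state or adding a sub-button (entity names here are assumptions).
# show_state: true
# sub_button:
#   - entity: light.kitchen
#     tap_action:
#       action: toggle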
``` A button to open the pop-up ```yaml
type: custom:bubble-card
card_type: button
button_type: name
name: Kitchen
icon: mdi:fridge
button_action:
tap_action:
action: navigate
navigation_path: '#kitchen'
``` A pop-up with a fixed height Replace `400px` with the size you need.
```yaml
type: vertical-stack
cards:
- type: custom:bubble-card
card_type: pop-up
hash: '#kitchen'
name: Kitchen
icon: mdi:fridge
entity: light.kitchen
margin_top_mobile: calc(100vh - 400px)
margin_top_desktop: calc(100vh - 400px)
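# Illustrative addition (not in the original example): a few of the optional
# pop-up settings from the options table above, shown commented out.
# auto_close: 10000                # close automatically after 10 seconds
# close_by_clicking_outside: true  # tap outside the pop-up to close it
# bg_opacity: 80                   # slightly transparent background
# bg_blur: 10                      # only applies while bg_opacity is below 100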
``` Horizontal buttons stack This card is a good companion to the pop-up card, allowing you to open the corresponding pop-ups. It also allows you to open any page of your dashboard. In addition, you can add your motion/occupancy sensors so that the order of the buttons adapts according to the room you just entered. This card is scrollable, remains visible, and acts as a footer. [!IMPORTANT] This card has to be the last one in your view (after every card and pop-up). It can't be inside any stack. Horizontal buttons stack options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `1_link` | string | **Required** | The pop-up hash (e.g. `'#kitchen'`), wrapped in quotes, or any link | A link to open |
| `1_name` | string | Optional | Any string | A name for your button |
| `1_icon` | string | Optional | Any `mdi:` icon | An icon for your button |
| `1_entity` | string | Optional | Any light or light group | Display the color of that light in background |
| `1_pir_sensor` | string | Optional | Any binary sensor | At least one PIR sensor is needed for `auto_order`; in fact it also works with any entity type, for example you can add light groups and the order will change based on the last-changed states. |
| `auto_order` | boolean | Optional | `true` or `false` (default) | Change the order of the buttons according to the `_pir_sensor` last changed time, **it needs to be `false` if you don't have any `_pir_sensor` in your code** |
| `margin` | string | Optional | Any CSS value | Use this **only** if your `horizontal-buttons-stack` is not well centered on mobile (e.g. `13px`) |
| `width_desktop` | string | Optional | Any CSS value | Width on desktop (`100%` by default on mobile) |
| `is_sidebar_hidden` | boolean | Optional | `true` or `false` (default) | Fix the horizontal buttons stack position if the sidebar is hidden on the desktop (only if you have made a modification to hide it yourself) |
| `rise_animation` | boolean | Optional | `true` (default) or `false` | Set this to `false` to disable the animation that activates once the page has loaded |
| `highlight_current_view` | boolean | Optional | `true` or `false` (default) | Highlight current hash / view with a smooth animation |
| `hide_gradient` | boolean | Optional | `true` or `false` (default) | Set this to `true` to hide the gradient |
> [!IMPORTANT]
> The variables starting with a number define your buttons; just change this number to add more buttons (see example below). Example A horizontal buttons stack that reorganizes itself based on occupancy sensors ```yaml
type: custom:bubble-card
card_type: horizontal-buttons-stack
auto_order: true
1_name: Living room
1_icon: mdi:sofa
1_link: '#living-room'
1_entity: light.living_room
1_pir_sensor: binary_sensor.living_room_motion
2_name: Kitchen
2_icon: mdi:fridge
2_link: '#kitchen'
2_entity: light.kitchen
2_pir_sensor: binary_sensor.kitchen_motion
3_name: Dining room
3_icon: mdi:silverware-fork-knife
3_link: '#dining-room'
3_entity: light.dining_room
3_pir_sensor: binary_sensor.dining_room_motion
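# Illustrative addition (not in the original example): to add more buttons,
# keep incrementing the number prefix; entity names here are assumptions.
# 4_name: Bedroom
# 4_icon: mdi:bed
# 4_link: '#bedroom'
# 4_pir_sensor: binary_sensor.bedroom_motion
# highlight_current_view: true  # optional, highlights the active view/hash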
``` Button This card allows you to control your entities and can be customized in many ways. To access color / control of an entity, simply tap on the icon. [!TIP] What are the differences between all the button types? The button switch is the default one. By default, it toggles an entity and its background color changes based on the entity’s state or the color of a light. The button slider can control the brightness of a light, the volume of a media player, the position of a cover, and it also supports input numbers. Its background color can change based on the color of a light. The button state is perfect for displaying information from a sensor or any entity. When you press it, it will show the "More info" panel of the entity. Its background color does not change. The button name is the only one that doesn't need an entity and allows you to display a short text, a name or a title. You can also add some actions to it. Its background color does not change. Button options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `entity` | string | **Required** | Any entity | An entity to control |
| `button_type` | string | Optional | `switch` (default), `slider`, `state` or `name` | The behavior of your button |
| `name` | string | Optional | Any string | A name for your button, if not defined it will display the entity name |
| `icon` | string | Optional | Any `mdi:` icon | An icon for your button, if not defined it will display the entity icon or the `entity-picture` |
| `force_icon` | boolean | Optional | `true` or `false` (default) | Give the priority to the icon instead of the `entity-picture` |
| `show_state` | boolean | Optional | `true` or `false` (default) | Show or hide the state of your `entity` |
| `show_name` | boolean | Optional | `true` (default) or `false` | Show or hide the name |
| `show_icon` | boolean | Optional | `true` (default) or `false` | Show or hide the icon |
| `show_last_changed` | boolean | Optional | `true` or `false` (default) | Show the last changed time of your `entity` |
| `show_attribute` | boolean | Optional | `true` or `false` (default) | Show an attribute of your `entity` below its `name` |
| `attribute` | string | Optional (required if `show_attribute` is set to `true`) | An attribute from your `entity` | The attribute to show (e.g. `brightness`) |
| `scrolling_effect` | boolean | Optional | `true` (default) or `false` | Allow text to scroll when the content exceeds the size of its container |
| `button_action` | object | Optional | `tap_action`, `double_tap_action` or `hold_action`, see below | Allow to change the default actions on button click. Not available for the `slider` type. |
| `tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon click, if undefined, `more-info` will be used. |
| `double_tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon double click, if undefined, `toggle` will be used. |
| `hold_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon hold, if undefined, `more-info` will be used. |
| `card_layout` | string | Optional | `normal` (default), `large`, `large-2-rows` | Styling layout of the card, see [card layouts](#card-layouts) |
| `columns` | string | Optional | `1`, `2`, `3` or `4` (default) | Number of columns when placed in a **section view** (e.g. `2` is 2/4) |
| `rows` | string | Optional | `1` (default), `2`, `3` or `4` | Number of rows when placed in a **section view** (e.g. `2` is 2/4) |
| `sub_button` | object | Optional | See [sub-buttons](#sub-buttons) | Add customized buttons fixed to the right | Examples A slider button that can control the brightness of a light ```yaml
type: custom:bubble-card
card_type: button
button_type: slider
entity: light.kitchen_led
name: Kitchen LED
icon: mdi:led-strip-variant
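# Illustrative alternatives (not in the original example): the slider type
# also works with media players (volume), covers (position) and input
# numbers, e.g. one of these instead of the light (entity names assumed):
# entity: media_player.living_room
# entity: cover.kitchen
# entity: input_number.target_temperature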
``` A button with all the options ```yaml
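# Note (added comment, not part of the original snippet): for a standalone
# card you would also include the usual two keys shown in the other examples:
# type: custom:bubble-card
# card_type: button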
entity: light.your_light
button_type: switch
show_icon: true
force_icon: true
show_name: true
show_last_changed: true
show_state: true
show_last_updated: true
show_attribute: true
attribute: brightness
scrolling_effect: true
card_layout: large
button_action:
tap_action:
action: toggle
tap_action:
action: more-info
sub_button:
- entity: light.your_light
icon: ''
show_state: false
show_attribute: true
attribute: brightness
show_icon: false
show_background: false
show_name: false
``` Media player This card allows you to control a media player. You can tap on the icon to get more control. Media player options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `entity` | string | **Required** | Any media player | The media player to control |
| `name` | string | Optional | Any string | A name for your media player, if not defined it will display the entity name |
| `icon` | string | Optional | Any `mdi:` icon | An icon for your media player, if not defined it will display the entity icon or the `entity-picture` |
| `force_icon` | boolean | Optional | `true` or `false` (default) | Give the priority to the icon instead of the `entity-picture` |
| `show_state` | boolean | Optional | `true` or `false` (default) | Show or hide the state of your `entity` |
| `show_name` | boolean | Optional | `true` (default) or `false` | Show or hide the name |
| `show_icon` | boolean | Optional | `true` (default) or `false` | Show or hide the icon |
| `show_last_changed` | boolean | Optional | `true` or `false` (default) | Show the last changed time of your `entity` |
| `show_attribute` | boolean | Optional | `true` or `false` (default) | Show an attribute of your `entity` below its `name` |
| `attribute` | string | Optional (required if `show_attribute` is set to `true`) | An attribute from your `entity` | The attribute to show (e.g. `brightness`) |
| `scrolling_effect` | boolean | Optional | `true` (default) or `false` | Allow text to scroll when the content exceeds the size of its container |
| `tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon click, if undefined, `more-info` will be used. |
| `double_tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon double click, if undefined, `toggle` will be used. |
| `hold_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon hold, if undefined, `more-info` will be used. |
| `card_layout` | string | Optional | `normal` (default), `large`, `large-2-rows` | Styling layout of the card, see [card layouts](#card-layouts) |
| `columns` | string | Optional | `1`, `2`, `3` or `4` (default) | Number of columns when placed in a **section view** (e.g. `2` is 2/4) |
| `rows` | string | Optional | `1` (default), `2`, `3` or `4` | Number of rows when placed in a **section view** |
| `sub_button` | object | Optional | See [sub-buttons](#sub-buttons) | Add customized buttons fixed to the right |
| `hide` | object | Optional | See below | Show or hide buttons |
#### Hide options
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `play_pause_button` | boolean | Optional | `true` (default) or `false` | Show or hide the play/pause button |
| `volume_button` | boolean | Optional | `true` (default) or `false` | Show or hide the volume button |
| `previous_button` | boolean | Optional | `true` (default) or `false` | Show or hide the previous button |
| `next_button` | boolean | Optional | `true` (default) or `false` | Show or hide the next button |
| `power_button` | boolean | Optional | `true` (default) or `false` | Show or hide the power button | Examples A media player with all the options ```yaml
type: custom:bubble-card
card_type: media-player
name: Media player
entity: media_player.your_media_player
show_state: true
show_last_updated: true
show_attribute: true
attribute: assumed_state
card_layout: large
scrolling_effect: false
show_icon: false
force_icon: true
show_name: false
show_last_changed: true
columns: 2
rows: 1
tap_action:
action: toggle
hide:
play_pause_button: true
volume_button: true
previous_button: true
next_button: true
power_button: true
sub_button:
- entity: media_player.salon_2
icon: mdi:volume-high
name: Volume level
tap_action:
action: more-info
show_name: false
show_state: false
show_last_updated: false
show_attribute: true
show_background: false
attribute: volume_level
``` Cover This card allows you to control your covers. Cover options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `entity` | string | **Required** | Any cover | A cover to control |
| `name` | string | Optional | Any string | A name for your cover, if not defined it will display the entity name |
| `force_icon` | boolean | Optional | `true` or `false` (default) | Give the priority to the icon instead of the `entity-picture` |
| `show_state` | boolean | Optional | `true` or `false` (default) | Show or hide the state of your `entity` |
| `show_name` | boolean | Optional | `true` (default) or `false` | Show or hide the name |
| `show_icon` | boolean | Optional | `true` (default) or `false` | Show or hide the icon |
| `show_last_changed` | boolean | Optional | `true` or `false` (default) | Show the last changed time of your `entity` |
| `show_attribute` | boolean | Optional | `true` or `false` (default) | Show an attribute of your `entity` below its `name` |
| `attribute` | string | Optional (required if `show_attribute` is set to `true`) | An attribute from your `entity` | The attribute to show (e.g. `brightness`) |
| `scrolling_effect` | boolean | Optional | `true` (default) or `false` | Allow text to scroll when the content exceeds the size of its container |
| `icon_open` | string | Optional | Any `mdi:` icon | An icon for your open cover, if not defined it will display the default open cover icon |
| `icon_close` | string | Optional | Any `mdi:` icon | An icon for your closed cover, if not defined it will display the default closed cover icon |
| `icon_up` | string | Optional | Any `mdi:` icon | An icon for your open cover button, if not defined it will display the default open cover icon |
| `icon_down` | string | Optional | Any `mdi:` icon | An icon for your close cover button, if not defined it will display the default close cover icon |
| `open_service` | string | Optional | Any service or script | A service to open your cover, default to `cover.open_cover` |
| `stop_service` | string | Optional | Any service or script | A service to stop your cover, default to `cover.stop_cover` |
| `close_service` | string | Optional | Any service or script | A service to close your cover, default to `cover.close_cover` |
| `tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon click, if undefined, `more-info` will be used. |
| `double_tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon double click, if undefined, `toggle` will be used. |
| `hold_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on icon hold, if undefined, `more-info` will be used. |
| `card_layout` | string | Optional | `normal` (default), `large`, `large-2-rows` | Styling layout of the card, see [card layouts](#card-layouts) |
| `columns` | string | Optional | `1`, `2`, `3` or `4` (default) | Number of columns when placed in a **section view** (e.g. `2` is 2/4) |
| `rows` | string | Optional | `1` (default), `2`, `3` or `4` | Number of rows when placed in a **section view** |
| `sub_button` | object | Optional | See [sub-buttons](#sub-buttons) | Add customized buttons fixed to the right | Example A card that can control a roller shade ```yaml
type: custom:bubble-card
card_type: cover
entity: cover.kitchen
name: Kitchen
icon_open: mdi:roller-shade
icon_close: mdi:roller-shade-closed
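# Illustrative addition (not in the original example): the open/stop/close
# services can be overridden, e.g. to drive the cover through scripts
# (script names here are assumptions).
# open_service: script.open_kitchen_shade
# stop_service: script.stop_kitchen_shade
# close_service: script.close_kitchen_shade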
``` Separator This card is a simple separator for dividing your pop-up into categories / sections. e.g. Lights, Devices, Covers, Settings, Automations... Separator options ** Options (YAML + descriptions) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `name` | string | Optional but recommended | Any string | A name for your separator |
| `icon` | string | Optional but recommended | Any `mdi:` icon | An icon for your separator |
| `card_layout` | string | Optional | `normal` (default), `large`, `large-2-rows` | Styling layout of the card, see [card layouts](#card-layouts) |
| `columns` | string | Optional | `1`, `2`, `3` or `4` (default) | Number of columns when placed in a **section view** (e.g. `2` is 2/4) |
| `rows` | string | Optional | `1` (default), `2`, `3` or `4` | Number of rows when placed in a **section view** |
| `sub_button` | object | Optional | See [sub-buttons](#sub-buttons) | Add customized buttons fixed to the right | Example A separator/divider for a "Covers" section ```yaml
type: custom:bubble-card
card_type: separator
name: Covers
icon: mdi:window-shutter
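# Illustrative addition (not in the original example): separators also accept
# sub-buttons, e.g. a quick "close all covers" action (entity name assumed).
# sub_button:
#   - icon: mdi:arrow-down-box
#     tap_action:
#       action: call-service
#       service: cover.close_cover
#       target:
#         entity_id: cover.all_covers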
``` Empty column This card is here to fill an empty column. This is useful if you have a horizontal-stack in your pop-up with only one card. Take a look at the bottom right corner of this screenshot to (not) see it. Empty column options This card has no options and doesn’t support styling . Example An empty column in a horizontal stack ```yaml
type: horizontal-stack
cards:
- type: custom:bubble-card
card_type: button
...
- type: custom:bubble-card
card_type: empty-column
``` Sub-buttons In every card that supports this option, you can add sub-buttons to customize your cards even more. You can for example create a button that can control a vacuum, a weather card or almost anything that you can come up with. These sub-buttons support the tap actions and most of the button options. Sub-buttons options ** Options (YAML + description) **
| Name | Type | Requirement | Supported options | Description |
| --- | --- | --- | --- | --- |
| `entity` | string | Optional | Any entity | An entity to control |
| `name` | string | Optional | Any string | A name for your sub-button, if not defined it will display the entity name |
| `icon` | string | Optional | Any `mdi:` icon | An icon for your sub-button, if not defined it will display the entity icon |
| `show_background` | boolean | Optional | `true` (default) or `false` | Show a background for your sub-button, it will change its color based on your entity state |
| `show_state` | boolean | Optional | `true` or `false` (default) | Show or hide the state of your `entity` |
| `show_name` | boolean | Optional | `true` or `false` (default) | Show or hide the name |
| `show_icon` | boolean | Optional | `true` (default) or `false` | Show or hide the icon |
| `show_last_changed` | boolean | Optional | `true` or `false` (default) | Show the last changed time of your `entity` |
| `show_attribute` | boolean | Optional | `true` or `false` (default) | Show an attribute of your `entity` below its `name` |
| `attribute` | string | Optional (required if `show_attribute` is set to `true`) | An attribute from your `entity` | The attribute to show (e.g. `brightness`) |
| `tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on sub-button click, if undefined, `more-info` will be used. |
| `double_tap_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on sub-button double click, if undefined, `toggle` will be used. |
| `hold_action` | object | Optional | See [actions](#tap-double-tap-and-hold-actions) | Define the type of action on sub-button hold, if undefined, `more-info` will be used. | Examples A button with some sub-buttons to make a vacuum card (like on the screenshot) ```yaml
type: custom:bubble-card
card_type: button
button_type: switch
name: Vacuum
entity: vacuum.downstairs
icon: mdi:robot-vacuum
show_state: true
show_last_changed: true
tap_action:
action: more-info
button_action:
tap_action:
action: more-info
sub_button:
- name: Battery
icon: mdi:battery
show_name: false
show_icon: true
show_background: false
show_attribute: true
attribute: battery_level
- name: Return to dock
icon: mdi:home
show_background: false
tap_action:
action: call-service
service: vacuum.return_to_base
target:
entity_id: vacuum.downstairs
- name: Pause
icon: mdi:pause
show_background: false
tap_action:
action: call-service
service: vacuum.pause
target:
entity_id: vacuum.downstairs
- name: Start
icon: mdi:play
tap_action:
action: call-service
service: vacuum.start
target:
entity_id: vacuum.downstairs
styles: >-
.bubble-button-card-container {
/* Change the background color when the vacuum get an error (optional), more details in the styles template section */
background: ${state === 'error' ? 'rgb(200, 80, 40)' : ''} !important;
}
/* Change the first sub-button battery icon based on the battery_icon attribute, more details in the styles template section */
${subButtonIcon[0].setAttribute("icon", hass.states['vacuum.downstairs'].attributes.battery_icon)}
``` A button slider with a sub-button that shows the brightness and one that toggles the light (like on the screenshot) ```yaml
type: custom:bubble-card
card_type: button
button_type: slider
name: Kitchen
entity: light.kitchen
icon: mdi:fridge-outline
show_last_updated: true
sub_button:
- name: Brightness
icon: mdi:fridge-outline
show_icon: false
show_background: false
show_attribute: true
attribute: brightness
- name: Toggle button
icon: mdi:lightbulb
tap_action:
action: toggle
``` A button that shows the inside and outside temperature with the weather for today and tomorrow (screenshot included) > Bad luck for me, it's cloudy all the time, but all the icons change based on the weather.
```yaml
type: custom:bubble-card
card_type: button
button_type: state
entity: weather.openweathermap
name: Weather
show_state: true
card_layout: large-2-rows
sub_button:
- name: Home temperature
icon: mdi:home-thermometer-outline
entity: sensor.home_temperature
show_state: true
show_icon: true
show_background: false
- name: Outside temperature
entity: sensor.outside_temperature
show_state: true
show_background: false
- name: Today
entity: sensor.home_realfeel_temperature_max_0d
show_name: true
show_state: true
tap_action:
action: more-info
- name: Tomorrow
entity: sensor.home_realfeel_temperature_max_1d
show_name: true
show_state: true
show_background: false
styles: >-
/* Change the third and fourth sub-button icon based on the forecast.condition attribute, more details in the styles template section */
${subButtonIcon[2].setAttribute("icon", getWeatherIcon(hass.states['sensor.weather_forecast_daily'].attributes.forecast[0]?.condition))}
${subButtonIcon[3].setAttribute("icon", getWeatherIcon(hass.states['sensor.weather_forecast_daily'].attributes.forecast[1]?.condition))}
``` Card layouts Bubble Card fully supports the Home Assistant section view: you can change the card layout to make the card bigger, and also change the number of columns or rows the card should occupy in your section view (only on the cards that support that option). These layouts are also supported in all other view types. ** Available card layouts **
| Layout | Description |
| --- | --- |
| `normal` | The regular layout (not optimized for the section view) |
| `large` | A larger layout (optimized for the section view) |
| `large-2-rows` | A larger layout with 2 rows of sub-buttons (optimized for the section view) | Example A large button that shows energy statistics with 2 rows of sub-buttons (screenshot included) ```yaml
type: custom:bubble-card
card_type: button
button_type: state
card_layout: large-2-rows
name: Energy
entity: sensor.current_power_production
icon: mdi:home-lightning-bolt-outline
show_state: true
button_action:
tap_action:
action: navigate
navigation_path: '#energy'
sub_button:
- entity: sensor.electricity_counter
icon: mdi:counter
show_background: false
show_state: true
tap_action:
action: more-info
- entity: sensor.today_s_energy_production
show_state: true
show_background: false
- entity: sensor.average_daily_consumption
show_background: false
show_state: true
- entity: sensor.this_week_production
show_state: true
show_background: false
icon: mdi:calendar-week
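# Illustrative addition (not in the original example): in a section view you
# can also control how much space the card occupies.
# columns: 2
# rows: 2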
``` Tap, double tap and hold actions You can also use Home Assistant default tap actions, double tap actions and hold actions on the cards that support this option. For example, this allows you to display the “more info” window by holding a button icon, or run a service when a sub-button is pressed. Action options ** Options (YAML + description) **
| Name | Type | Supported options | Description |
| --- | --- | --- | --- |
| `action` | string | `more-info`, `toggle`, `call-service`, `navigate`, `url`, `fire-dom-event`, `none` | Action to perform |
| `target` | object | | Only works with `call-service`. Follows the [home-assistant syntax](https://www.home-assistant.io/docs/scripts/service-calls/#targeting-areas-and-devices) |
| `navigation_path` | string | Any path of your dashboard | Path to navigate to (e.g. `'#kitchen'` for opening a pop-up) when action defined as navigate |
| `url_path` | string | Any link | URL to open on click (e.g. `https://www.google.com`) when action is `url` |
| `service` | string | Any service | Service to call (e.g. `media_player.media_play_pause`) when `action` defined as `call-service` |
| `data` or `service_data` | object | Any service data | Service data to include (e.g. `entity_id: media_player.kitchen`) when `action` defined as `call-service` |
| `confirmation` | object | See [confirmation](https://www.home-assistant.io/dashboards/actions/#options-for-confirmation) | Display a confirmation pop-up (not a Bubble Card one), overrides the default `confirmation` object | Example A button to open a pop-up ```yaml
type: custom:bubble-card
card_type: button
button_type: name
name: Kitchen
icon: mdi:fridge
button_action:
tap_action:
action: navigate
navigation_path: '#kitchen'
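# Illustrative addition (not in the original example): a call-service action
# with a confirmation pop-up could look like this (service and entity are
# assumptions).
# hold_action:
#   action: call-service
#   service: light.toggle
#   target:
#     entity_id: light.kitchen
#   confirmation:
#     text: Toggle the kitchen light?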
``` Styling You can add custom styles in the editor or by adding styles: | in YAML, which allows you to modify the CSS style of all the cards without using card-mod. [!TIP] To understand which style classes can be modified, you can take a look at the src/cards folder in this repository. In each card folder, you will find a file named styles.ts . These files contain all the applied styles. You can also find a lot of examples on the Home Assistant forum by doing a bit of searching. The Bubble theme for Home Assistant (like on the screenshots) can be found here . [!IMPORTANT] Please note that you will have to add !important; to some CSS styles that are already defined (see examples below). Examples Changing the font size of any Bubble Card ```yaml
styles: |
* {
font-size: 16px !important;
}
``` Changing the background color of a single button in a horizontal buttons stack ```yaml
styles: >
/* Selector for the '#kitchen' button */
.kitchen > .color-background {
background-color: blue !important;
}
``` Changing the background color of a button ```yaml
styles: |
.bubble-button-card-container {
background: rgba(12,120,50,0.5) !important;
}
```
To change the color when it's `on`, take a look at the style templates below. Changing the color of a button slider ```yaml
styles: |
.bubble-range-fill {
background: rgba(79, 69, 87, 1) !important;
opacity: 1 !important;
}
``` Changing the line color of a separator ```yaml
styles: |
.bubble-line {
background: var(--primary-text-color);
opacity: 0.1;
}
``` Changing the color of an icon ```yaml
styles: |
.bubble-icon {
color: white !important;
}
```
For a horizontal buttons stack icon.
```yaml
.kitchen > .bubble-icon {
color: grey !important
}
``` Changing the background color of an icon container ```yaml
styles: |
.bubble-icon-container {
background: rgb(230, 128, 41);
}
``` Changing the size of the sub-buttons (perfect for the large layout) ```yaml
styles: |
.bubble-sub-button {
height: 48px !important;
min-width: 48px !important;
}
``` Changing the background color of the second sub-button ```yaml
styles: |
.bubble-sub-button-2 {
background-color: blue !important;
}
``` Changing the size of an icon For the main icon.
```yaml
styles: |
.bubble-icon {
--mdc-icon-size: 26px !important;
}
```
For the sub-button icons.
```yaml
styles: |
.bubble-sub-button-icon {
--mdc-icon-size: 26px !important;
}
``` Using a picture rather than an icon in a sub button ```yaml
sub_button:
- icon: none
styles: |-
.bubble-sub-button-1 {
background-image: url("/local/pictures/your_picture.jpg");
background-size: cover;
}
```
Just upload this picture to a “pictures” folder (or any name you want) in the Home Assistant “www” folder. Advanced example: Creating a horizontal row of sub-buttons (screenshot included) > I really love this one; I use it as a header on my dashboard.
```yaml
type: custom:bubble-card
card_type: button
card_layout: large
button_type: name
show_icon: false
show_name: false
sub_button:
- name: Mute
icon: mdi:volume-off
tap_action:
action: toggle
service: input_boolean.toggle
entity: input_boolean.silent_mode
- name: Covers
entity: cover.all_group
show_background: false
tap_action:
action: navigate
navigation_path: '#cover'
- name: Shopping list
icon: mdi:cart-outline
show_background: false
tap_action:
action: navigate
navigation_path: '#shopping-list'
- name: Security
icon: mdi:video-outline
show_background: false
tap_action:
action: navigate
navigation_path: '#security'
- name: Settings
icon: mdi:cog
show_background: false
tap_action:
action: navigate
navigation_path: '#configuration'
styles: |
.card-content {
width: 100%;
margin: 0 !important;
}
.bubble-button-card-container {
background: none;
}
.bubble-sub-button {
height: 46px !important;
width: 46px !important;
}
.bubble-sub-button-container {
width: 100%;
justify-content: space-between !important;
}
.bubble-sub-button-icon {
--mdc-icon-size: inherit !important;
}
.bubble-name-container {
margin-right: 0px !important;
}
```
![Sub-buttons-everywhere](https://github.com/Clooos/Bubble-Card/assets/36499953/3bf04969-e00d-4755-89df-481e8f7d73b2) Templates Advanced users can add templates in JS directly in their custom styles (Bubble Card doesn’t support Jinja templates). For example, this allows you to dynamically change an icon, the name or the color of an element, to show or hide an element conditionally (like a sub-button), or almost anything based on a state, an attribute and more. [!TIP] More information about JS templates here . My advice is to always take a look at your browser console to be sure that everything is working correctly. Available variables and functions Variables You have access to these variables in most cards:
- `state` will return the state of your defined `entity`.
- `entity` will return your entity you defined like `switch.test` in this example.
- `icon` can be used like this to change the icon `icon.setAttribute("icon", "mdi:lightbulb")`.
- `subButtonIcon[0]` can be used like this to change the first sub-button icon `subButtonIcon[0].setAttribute("icon", "mdi:lightbulb")`, `[0]` is the first sub-button icon, `[1]` the second...
- `card` will return the card element in the DOM.
- `hass` is an advanced variable that allows you even more control, for example you can return the state of `light.kitchen` like this `hass.states['light.kitchen'].state` or an attribute like this `hass.states[entity].attributes.brightness`.
- `this` will return a lot of useful information about your setup and dashboard; only use this if you know what you are doing. Functions ( getWeatherIcon ) You have access to all the global JS functions, but you also have access to:
- `getWeatherIcon` can be used to return a weather icon based on a state that returns the weather. For example, you can do this `${subButtonIcon[2].setAttribute("icon", getWeatherIcon(hass.states['sensor.weather_forecast_daily'].attributes.forecast[0]?.condition))}` to change the third sub-button icon to today's weather icon; `.forecast[1]?.condition` is for tomorrow...
You will have to create a template sensor for that. Here is what you can add in your `configuration.yaml`:
```yaml
- trigger:
- platform: time_pattern
hours: /2
action:
- service: weather.get_forecasts
data:
type: daily
target:
entity_id: weather.home
response_variable: daily
sensor:
- name: Weather Forecast Daily
unique_id: weather_forecast_daily
state: "{{ now().isoformat() }}"
attributes:
forecast: "{{ daily['weather.home'].forecast }}"
``` Examples Changing the background color of a button that is red when it's off and green when it's on ```yaml
type: custom:bubble-card
card_type: button
entity: switch.test
name: Test
styles: |
.bubble-button-background {
opacity: 1 !important;
background-color: ${state === 'on' ? 'blue' : 'red'} !important;
}
``` Changing the background color of a button based on an entity for the horizontal buttons stack ```yaml
styles: |
.kitchen > .color-background {
background-color: ${hass.states['light.kitchen'].state === 'on' ? 'blue' : 'red'} !important;
}
``` Showing/Hiding a sub-button conditionally This one shows the first sub-button only when my vacuum is stuck.
```yaml
styles: |
.bubble-sub-button-1 {
display: ${hass.states['vacuum.downstairs'].state === 'error' ? '' : 'none'} !important;
}
```
This one shows a sub-button when the battery is below 10%. Useful with a sub-button that shows "Low battery".
```yaml
styles: |
.bubble-sub-button-1 {
display: ${hass.states['vacuum.downstairs'].attributes.battery_level <= 10 ? '' : 'none'} !important;
}
``` Changing an icon or sub-button icon conditionally This one changes a button icon only when a vacuum is stuck.
```yaml
styles: |
${icon.setAttribute("icon", hass.states['vacuum.downstairs'].state === 'error' ? 'mdi:alert' : 'mdi:robot-vacuum')}
```
This one changes the first sub-button icon only when a vacuum is stuck.
```yaml
styles: |
${subButtonIcon[0].setAttribute("icon", hass.states['vacuum.downstairs'].state === 'error' ? 'mdi:alert' : 'mdi:robot-vacuum')}
``` Changing an icon or sub-button icon color conditionally This one changes a button icon color based on its state.
```yaml
styles: |
.bubble-icon {
color: ${hass.states['light.your_light'].state === 'on' ? 'green' : 'red'} !important;
}
```
This one changes a sub-button icon color based on its state. `.bubble-sub-button-1` is the first sub-button; replace `1` if you want to change another sub-button icon.
```yaml
styles: |
.bubble-sub-button-1 > ha-icon {
color: ${hass.states['light.your_light'].state === 'on' ? 'green' : 'red'} !important;
}
``` Animating a fan icon conditionally This one rotates a button icon when a fan is `on`.
```yaml
styles: |-
.bubble-icon {
animation: ${hass.states['fan.your_fan'].state === 'on' ? 'slow-rotate 2s linear infinite' : ''};
}
@keyframes slow-rotate {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
``` Templating a button name This one changes a button name/state to "It's currently sunny", depending on your weather.
```yaml
styles: |
${card.querySelector('.bubble-name').innerText = "It's currently " + hass.states['weather.home'].state}
```
If you want to template the state (`.bubble-state`), don't toggle `show_state: true`; just toggle `show_attribute: true` without any attribute. Advanced example: Changing the color of a sub-button when a pop-up is open ```yaml
styles: |
${window.addEventListener('location-changed', () => {
card.querySelector('.bubble-sub-button-1').style.backgroundColor = this.location.href.includes('#kitchen') ? 'blue' : '';
})}
``` Advanced example: Templating a separator name based on a state translated to your language You can use `hass.formatEntityState(state)` to translate a state and `hass.formatEntityAttributeValue(state, "attribute")` to translate an attribute.
This one changes the name and the icon based on the weather; "Nuageux" means "Cloudy" in French.
![image](https://github.com/Clooos/Bubble-Card/assets/36499953/35ac9d0f-c3b8-4c09-9c15-fe6954011d55)
```yaml
type: custom:bubble-card
card_type: separator
icon: mdi:weather-cloudy
sub_button:
- entity: sensor.outside_temperature
icon: mdi:thermometer
name: Temperature
show_state: true
show_background: false
styles: >
.bubble-line {
background: white;
opacity: 1;
}
${card.querySelector('.bubble-name').innerText =
hass.formatEntityState(hass.states['weather.maison'])}
${icon.setAttribute("icon",
getWeatherIcon(hass.states['weather.maison'].state))}
``` Custom components conflicts ⚠️ For now, some features do not work with Kiosk mode, but this is fixed in Kiosk mode v6.0.1 Help Feel free to open an issue if something is not working as expected. Got questions or thoughts about Bubble Card? Want to share your dashboards or discoveries? You can go on the Home Assistant forum, on the Bubble Card subreddit or on the GitHub Discussions section. Donate I dedicate most of my spare time to making this project the best it can be. So if you appreciate my work, any donation would be a great way to show your support 🍻 Thank you everyone for your support, you all are my greatest motivation!;Bubble Card is a minimalist card collection for Home Assistant with a nice pop-up touch.;custom-card,custom-cards,frontend,lovelace-custom-card,home-assistant,minimalist,mobile-first,dashboard,lovelace,button | Clooos/Bubble-Card
princeton-nlp/SWE-bench;| [日本語](docs/README_JP.md) | [English](https://github.com/princeton-nlp/SWE-bench) | [中文简体](docs/README_CN.md) | [中文繁體](docs/README_TW.md) | Code and data for our ICLR 2024 paper SWE-bench: Can Language Models Resolve Real-World GitHub Issues? Please refer our website for the public leaderboard and the change log for information on the latest updates to the SWE-bench benchmark. 📰 News [Apr. 15, 2024] : SWE-bench has gone through major improvements to resolve issues with the evaluation harness. Read more in our report . [Apr. 2, 2024] : We have released SWE-agent , which sets the state-of-the-art on the full SWE-bench test set! ( Tweet 🔗 ) [Jan. 16, 2024] : SWE-bench has been accepted to ICLR 2024 as an oral presentation! ( OpenReview 🔗 ) 👋 Overview SWE-bench is a benchmark for evaluating large language models on real world software issues collected from GitHub.
Given a codebase and an issue , a language model is tasked with generating a patch that resolves the described problem. To access SWE-bench, copy and run the following code: python
from datasets import load_dataset
swebench = load_dataset('princeton-nlp/SWE-bench', split='test') 🚀 Set Up To build SWE-bench from source, follow these steps:
1. Clone this repository locally
2. cd into the repository.
3. Run conda env create -f environment.yml to create a conda environment named swe-bench 4. Activate the environment with conda activate swe-bench 💽 Usage You can download the SWE-bench dataset directly ( dev , test sets) or from HuggingFace . To use SWE-Bench, you can:
* Train your own models on our pre-processed datasets * Run inference on existing models (either models you have on-disk like LLaMA, or models you have access to through an API like GPT-4). The inference step is where you get a repo and an issue and have the model try to generate a fix for it.
* Evaluate models against SWE-bench. This is where you take a SWE-Bench task and a model-proposed solution and evaluate its correctness.
* Run SWE-bench's data collection procedure on your own repositories, to make new SWE-Bench tasks. ⬇️ Downloads | Datasets | Models |
| - | - |
| 🤗 SWE-bench | 🦙 SWE-Llama 13b |
| 🤗 "Oracle" Retrieval | 🦙 SWE-Llama 13b (PEFT) |
| 🤗 BM25 Retrieval 13K | 🦙 SWE-Llama 7b |
| 🤗 BM25 Retrieval 27K | 🦙 SWE-Llama 7b (PEFT) |
| 🤗 BM25 Retrieval 40K | |
| 🤗 BM25 Retrieval 50K (Llama tokens) | | 🍎 Tutorials We've also written the following blog posts on how to use different parts of SWE-bench.
If you'd like to see a post about a particular topic, please let us know via an issue.
* [Nov 1. 2023] Collecting Evaluation Tasks for SWE-Bench ( 🔗 )
* [Nov 6. 2023] Evaluating on SWE-bench ( 🔗 ) 💫 Contributions We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues!
To do so, please either file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly! Contact person: Carlos E. Jimenez and John Yang (Email: {carlosej, jy1682}@princeton.edu). ✍️ Citation If you find our work helpful, please use the following citations. @inproceedings{
jimenez2024swebench,
title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=VTF8yNQM66}
} 🪪 License MIT. Check LICENSE.md .;[ICLR 2024] SWE-Bench: Can Language Models Resolve Real-world Github Issues?;benchmark,language-model,software-engineering | princeton-nlp/SWE-bench |
Anil-matcha/Awesome-GPT-Store;Awesome GPT Store A curated list of specialized Custom GPT chatbots designed to assist you in a variety of tasks, from technical questions to creative endeavors. Don't forget to: 👉 Show your support by giving this repository a star! ⭐ 👉 Want to add authentication to your GPT ? Check out https://gpt-auth.com/ 👉 Check out the Custom GPT Store Finder to chat with ChatGPT and get GPT recommendations from the data in this repo https://chat.openai.com/g/g-JRQEmbuM9-gpt-store-finder or from the website https://www.thesamur.ai/custom-gpt-store 👉 Follow me on Twitter for updates! https://twitter.com/matchaman11 Table of Contents Technical Assistance Writing and Content Creation Coding and Development Entertainment and Fun Education and Learning Career and Guidance Finance Fitness and Health Philosophy Miscellaneous Technical Assistance AbletonGPT - Balances professional-casual tone, offers brief but detailed Ableton advice. Audiophile Assistant - Here to answer all your audiophile questions, and more! BounceBan - The only email verification service that supports verifying catch-all emails. 97+% accuracy guaranteed. Free & Unlimited for ChatGPT. Tips: 1) Enter one or multiple emails for verification. 2) Enter "Gavin Lee bounceban.com" to generate and bulk verify up to 30 potential email addresses. ChatXGB - GPT chatbot specializing in XGBoost algorithm and library. ChatGPT Utilities - Exclusive tools built to utilize ChatGPT's exported chat records. Code Support - Quick command-line help and code snippets, defaults to Linux & Python. Crow - Send a link, and I'll bring back the key points! Custom GPT Wizard - Opensource knowledge-driven tool guiding users through the entire GPT development lifecycle, from conceptualization to customization and deployment. EditGPT - Friendly video editing and image creation assistant. ExtractTableGPT - Extract table data from any docs into multiple formats. Fact or Fiction - Provides verification with live links. File Converter - Assists in converting files between different formats. File Metadata - Upload and generate metadata for image and text files. Gantt Chart GPT - This project management assistant can auto-generate an editable gantt chart from your project files (e.g. Word, Excel, PowerPoint, PDF, CSV, etc) GPT Action Builder - Helps users setup an action for a custom GPT, including configuration of the API specification GPT Guide - Helpful and informative. GPT-Searcher - I find the perfect GPT for your needs and guide you there with a fun description! GetPaths - Analyzes web content, including HTTP traffic and JavaScript files. GetSite9000 - Create a personal or business website and host it. Instructions for Custom GPTs - Custom GPT instruction creation guide. MS-PowerPoint - I assist in creating professional PowerPoint presentations. PC Builder GPT - Your tech-savvy virtual friend who offers expert and approachable advice on building PCs, complete with up-to-date pricing. Product Engineer - Find inventive solutions to engineering problems. Professional Summariser - I summarise texts quickly and efficiently Prompt Crafter - Assists you in creating well-defined prompts effortlessly. Prompt Perfector - AI Expert in Refining and Perfecting Prompts PromptGPT - AI assistant for refining user prompts to maximize GPT-4 interaction. Prompt Alchemist - Whether it's crafting perfect GPT instructions, writing system prompt or to learn the art of prompt engineering, This GPT is your dedicated expert. 
Repo Summary - Summarize GitHub repository README files. ResearchGPT - AI Research Assistant. Search 200M academic papers from Consensus, get science-based answers, and draft content with accurate citations. Search Multiplier - Expand simple text searches with multiple related search options. Summary Sage with tags - Expert in summarizing and categorizing SEO Expert - Analyzes websites on various SEO quality metrics and provides expert advice URL Shortner - Shortens long URLs to more manageable links. Writing and Content Creation AnalyzePaper - Writing assistant for academic and professional needs. Animation Creation - Create animated scenes and characters that resemble a 3D animated movie. A MindRenders.com creation. Bias Detector - Analyzes news stories for right or left biases. Cakes - Send a gift to your cares Chain Story - Collaborative storytelling across different genres. Cold Mail - Engage prospective customers using personalized cold emails based on your offer's URL and the URL of the customer's website. Counterpoint - I challenge ideas to provoke thought. Cover Letter GPT - Expert in creating tailored cover letters based on job descriptions Dalle3 Prompt Generator - Let me convert your ordinary imagination into an extraordinary creation. Dictionary Creator - Create dictionaries in various order types, such as Alphabetical, Prioritized, Hierarchical, and more. Editing Guru - Your ultimate solution for improving, reviewing, and rewriting texts. Esports Logo Creator - Create a professional esports logo for you or your team. ExtractWisdom - Extracts key insights from texts. Fact or Fiction - Fact checks web pages, documents, assertions and calculations and returns links for verification. Formatter Pro - Expert in converting text to a professionally formatted pdf document GPT Enhancer - AI assistant for refining GPT instructions with a focus on user experience and continuous AI learning. Hacker Art - Crafts hacker-themed art and profile pictures. Image Generation with Self-Critique Improvement - AI-driven image creation with iterative self-improvement capabilities. Inspirer - A bot that writes inspirational speeches Interactive Writer - Bring GPT Writing Skills to the Next Level. Job Description (JD) GPT - Stop copying job descriptions. Use this GPT to generate unique JDs. Luminous Logos - Craft eye catching logos and icons with a special vibrant gradient touch. Meme Magic - Creates humorous and engaging memes. Midjourney Prompt Generator - Let me convert your ordinary imagination into an extraordinary creation. Midjourney - AI chatbot for Midjourney-style image creation UnBiased News - UnBiased News of today in fun writing styles. Newspaper Maker - Unbiased newspaper creator and recreator. NSF Proposal Partner - Your assistant for writing NSF grant proposals, trained on past successful proposals. PresentationGPT - AI bot specializing in creating presentation outlines Presentation Architect - PPT Expert in crafting stunning, detailed PowerPoint presentations, from research to final product. I'm your go-to for #PowerPointPerfection, turning in-depth research into visually striking presentations. #PPTExpert #DesignWizard ProductGPT - Your Ultimate Product Naming and Description Assistant Quill - Write blogs like a human Quotes & Clips - Create quotes and take portions from text documents. Rewrite - Offers fresh suggestions for your writing Smart Notes - Intelligent note recording assistant. Storyteller - Weaves stories with a blend of writing and design. 
Swiftify - AI Taylor Swift Songwriting Companion Survey Done - AI Survey Builder w/ open source and self-hosted options. The Guided Writer - This GPT crafts customized articles by asking you questions, ensuring content that precisely matches your requirements and preferences. UX Advisor - Get UX feedback by uploading an image or by defining your problem. Viral YouTube Ideas by Ideacadabra - YouTube Creators! I personalize viral video ideas for your channel. Wild Geometrica - Structured shapes dance with untamed creatures, painting a canvas of awe and wonder. A MindRenders.com GPT. Word Whisperer - A literary genius who provides the perfect words for any situation ZILL·O - is here ❤️ - with love Coding and Development Aether - Cited answers to Python / JS / AI questions AskYourCode - Indexes your project's source code, then provides code maps, summaries and code fragments to ChatGPT. AWS Cloud Practitioner Trainer GPT - Use AI to train for your AWS certification exam. CodeCoach - I help software engineers prep for tech interviews with coding problems & solutions. CodeCompanion - I'm a Python specialist here to help you code and learn! CodeCopilot - Pair programming assistant for various coding tasks. Codinstructor - Coding teacher that can generate and correct live coding exercices in real time CodeGuardian - Code challenges for web developers to identify security vulnerabilities and patch them. Codey - 💪 Your coding expert! I assist with code, debug, graphs, and file handling. Ask 'Help' for a menu! Create Coding Tutorials - Takes your code and develops a self-paced tutorial for your students. Developer Doc Search - Searches open source packages and their documentation. DevGPT - Code togheter, right now.. FastAPI GPT - Up-to-date FastAPI coding assistant with knowledge of the latest version. Part of the [latest] GPTs family. Flowbite GPT - Create websites based on the Flowbite UI Library and Tailwind CSS. Flutter App Maker 3000 - A hands-on guide for building Flutter apps step by step. FramerGPT - Generate Framer code components and overrides. Full Stack Developer - I generate code for and fix issues in B2B SaaS web apps. Game Craft Guru - Focused game mechanics and design expert. h4ckGPT - Your personal security tool Jarvis - Infrastructure Engineer Companion - Your AI-Enhanced, interactive and insightful companion for your IT Infrastructure journey. This GPT will help you with anything related to Cloud, DevOps, IT Automation, and more! Learn AI in Fun Way - A humorous ML trainer who teaches with jokes and quizzes, making learning AI entertaining and enjoyable. Mindmap - Assists in creating structured mind maps for organizing thoughts and ideas. RubyGPT - Assists with Ruby programming. Ruby on Rails Guru - Assist with Ruby on Rails development with the most recent documentation for Stimulus, Turbo, Strada, etc. Secure Code Assistant - I offer tested, secure coding solutions with no patience-testing. Shell Expert Pro - Efficient shell script engineer, offers detailed explanations on request. Software Architecture Visualiser - Helps to visualize your software architecture using PlantUML diagrams. Text To HTML/CSS - Converts text to styled HTML/CSS. Tailwind CSS - An up-to-date Tailwind CSS assistant, with knowledge of the latest features. Member of the [latest] GPTs family. Test Double - Expert in creating diverse test data for development needs in various formats. Vue.js GPT - Versatile, up-to-date Vue.js 3 assistant with knowledge of the latest Vue.js 3 guide. 
3rd SoftSec Reviewer - Perform 3rd party software security review. Rust Mentor - A Rust programming language instructor for beginners. Pineapple Builder - Create and Publish Business Websites and Blogs. Entertainment and Fun 20 Questions - Play the game "20 Questions" against ChatGPT AI Girlfriend - A fun, chill girlfriend to chat with. An Emoji GPT - The knowledge of a hundred generations at my fingertips and all I do is pick the perfect emoji for every situation. Anime Me - Creates Anime Profile Pictures, from the user's photos. ATM Simulator - Automated teller machine (ATM) simulator. BlackjackGPT - Blackjack Simulator Book Haven - A librarian providing personalized book recommendations BraceletGPT - Create your own gemstone bracelets with live 3D Celebrity Critter Creator - Create funny mashups of famous celebrities with animals Character Chat - Have a realistic chat with any historical figure or character. Always stays in character. Chat Charades - Single player charades game. Connect 4 - The original Connect 4 game. DJGPT - Your go-to DJ and music mixing advisor. Dungeon Crawler - Guide players through a dynamic, ever-changing RPG dungeon. Dungeon Master - Visual Dungeon Master for D&D 5E, bringing adventure to life! Demon Slayer Creator - I craft unique Demon Slayer characters with inventive weapons, styles, and narratives. Ekko Support Specialist - How to be a master of surprise plays and unconventional strategies in the bot lane as a support role. Emoji Generator - I turn your text into Emoji Excalibur - Attempt to pull the legendary sword from the stone. From Another Time - Talk to anyone, visit a place, past or future. Gossip Post - Post and get latest gossips for people. Guess a Word - Discover words through images in 'Guess a Word', where each picture is a puzzle waiting to be solved! GPTarantinofy - Turns anything into a Tarantino styled scene. GyattGPT - GPT with Gen Z lingo Homer Humor - Relieve your stressful mood with classic Simpson's humor Hoodie Creator - The fashion designer for creating wonderful hoodies K.I.T.T. - An exact copy of KITT, the talking car from the 1980's TV show, Knight Rider Makeup Maven - An expert in makeup products, providing tailored recommendations based on preferences and skin types. Movie Buff Buddy - Passionate movie expert providing tailored recommendations and streaming info Music Bot - Lyric writing, genre identification, and beat suggestions Mystery Master - Mysterious game master creating immersive, story-driven escape games Mythic Master - The ultimate game master for fun interactive story games, featuring dice rolls and screenshots Pixarize Me - Creates Pixar-style characters, from the user's photos. Quick Thinker - Quick-response random character game. Romance - Your AI companion for romantic advice and conversations. ReadRecs - Book recommendations from photos of your shelves. Score Keeper - I keep score, for games. Secret Love - A role-playing experience, structured as a youth romantic web novel. Song Parody - Parodize song lyrics. Showtimes - Transcribes and summarizes audio content. Simpsonize Me - Transforms photos into Simpsons-style art. T or D - Play "Truth or Dare" Text My Pet - Text your favorite pet after answering 10 short question about their activities. TinderBot - Share with me some details or insights about your match, and I'll create personalised, witty Tinder ice breakers with a touch of cheekiness and emojis. ToonGPT - Turn drawings into illustrations. 
Treasure Hunt Game - Initially, you're 25 steps away from the treasure, but the exact direction is a mystery. Truth or Dare Wizard - Enjoy Truth or Dare with fun twists and your choices! Tweet Segmenter - Segment text into tweet-sized parts with a counter. TweetX Enhancer - Enhances tweets for better engagement. Virtual Vibe Maker - Spice up your meetings, events, or trainings with fun icebreakers Visual Racter - A next generation version of a wacky, peculiar, chatbot called Racter from 1984 StoryInteractive - Create, Play and Experience your own fully illustrated interactive fiction stories, endless adventures WrongGPT - Answers questions confidently and incorrectly YodaGPT - Chat with Yoda about any question in the Galaxy Zodiac Chuckler - Your daily dose of zodiac-based humor and light-hearted advice. Drunk GPT - Converse with GPT in a funny way. RemGPT - Provides knowledge about the world of Re:zero Education and Learning 5 Levels - Describing any topic or subject in 5 levels of detail. Inspired by the "WIRED 5 Levels" video series. AI Daily Digest - Stay up-to-date on the latest AI news and research. Anki Master - Level up Anki skill and master your memory. Trained with Anki official documents and popular SRS articles. AskGod - Answers questions about the Bible and God. Bhagavad Gita Guide - Guidance from Bhagavad Gita for life's challenges, in user's language. Biology Buddy - A life sciences teacher that can explain and demonstrate any topic with images and text. Celebrity Book Finder - Find out what celebrities are reading! ChaatGPT - U.S. College Advisor for Indian Students ChatGTO - Your personalized poker coach. Analyze your hands using game theory optimal (GTO) principles. Chess Mentor - I guide chess strategy and visualize board states. ChessGPT - I am Magnus C·AI·rlsen, but I'll explain my moves. Children’s Education Pathfinder - Professional, motivating guide for kids' education. Chinese Pronunciation [Audio] - Chinese Pronunciation Tutor for use with ChatGPT mobile app's conversational AI Communication Coach - I help overthinkers communicate better. Built by Become More Compelling. Conceptmap - Create concepts and structure them in a map. Keep ideas and retrieve them whenever you need them. Daily AI Research Digest - Finds and summarizes the latest AI papers in your field. Daily Research Digest - Finds and summarizes the latest academic papers in your field. Debate wars - Just provide a topic or question in your spoken language to start a ChatGpt vs ChatGpt debate battle. DeepGame - Visual Interactive Story Game. You decide what to do next. EconomicsGPT - Your world-class Economics tutor, powered by students and faculty from the University of Chicago's highly-ranked Economics program. Ethical AI - a daily challenge Histocomedy - teaches history in a humorous format Homework Help - Provides assistance with homework and educational inquiries. Instructional Design and Technology Expert - A master of instructional design and technology. Language Coach - Helps in learning new languages. Lego Brick Innovator - Mentor for creating innovative LEGO projects. Math Mentor - Assists parents and students with math problems. Moon Watcher - I identify today's trending stocks and cryptocurrencies and explain why they're moving. Not financial advice. Motivated for Mandarin - A helpful guide for learning Mandarin, offering lessons, translation, interpretation and motivation. 
MediTrainNHS - An unofficial assistant for current and prospective NHS doctors in training, with extensive NHS-related knowledge. MindStream - Unleash your creativity with the ultimate visualization, summary and notes creation tool. Press R to begin! MSRA Coach - Unofficial Multi-Specialty Recruitment Assessment (MSRA) Coach and Exam Preparation Mentor. MyScale Free Knowledge Base - Elevate your chat experience with enriched knowledge from ArXiv and Wikipedia. Personal and mental coach - Personal and Mental Coach for Maximizing Potential and Overcoming Challenges. Personal career consultant - Update your CV, pass a test interview, build a personal career map. Research Methods Mentor - Get highly trained expert advice on methods of social science research Research Radar: Spot emerging trends in the latest research - Discover recent trends in STEM, social science, and humanities research Scientific Research Digest - Finds and summarizes recent papers in biology, chemistry, and biomedical sciences. ScholarAI - Research assistant for scientific papers. Scribble - Dynamic AI for creative and unconventional ideas Scrum Master Assistant - A powerful AI-powered Scrum Master assistant. Ask any Scrum-related questions. SexEd - Supportive sexual health guidance for teens and young adults! SICP Sage - Academic assistant for SICP study, referencing solutions Simple Explainer - Explains complex ideas simply Six-Y - Explains everything like you are 6 years old, and also paints an image of it. Spanish Friend - Talk with me through some real-life scenarios and I will help you improve your foreign language skills through practice, translations and improvements. Syllabus Builder - Your personal instructional designer. School Test Creator - Create your tests or exams from photos of your book, check your solutions, and learn from your mistakes. U.S. College Advisor for Ethiopian Students - Advisor for Ethiopian students applying to U.S. colleges, offering tailored, supportive guidance. U.S. College Advisor for Japanese Students - Friendly, informative advisor for Japanese students on U.S. college applications. U.S. College Advisor for South Korean Students - Mentor for South Korean students on U.S. college applications U.S. College Advisor for U.K. Students - Advisor for U.K. students applying to U.S. colleges. Video Game Almanac - I'm your go-to guide for all things gaming, from strategies to streamers! Wikipedia GPT - I provide information from Wikipedia and links to further reading. OpenData Explorer - I'll help you access and understand data published by the central government, local authorities, and public bodies. You can ask me in your native language. Career and Guidance AI Interviewer - Analyzes resumes and job descriptions, conducts interviews, and offers candidate evaluations. Career Coach - Find the job you want with career pathing, job listings search, mock interviews, and resume review. Career Counselor - Empathetic career counselor offering guidance and market insights Cognitive Behavioral Therapist - Your CBT counselor to talk you through anxious and stressful situations. Dream Explorer - Interpreting dreams with psychoanalysis and mythology. Job application consultant - Find out how well you fit the job. Provide the job offer you are applying and the resume. You will get an overall rating and a guidance. Job Assistant GPT - Assists with job applications, specializing in cover letters and LaTeX CV refinement. 
Life Coach - Transform your dreams into reality by setting goals, creating plans, and taking action. ProductivityGPTs - Your go-to guide for boosting productivity. Finance ForexGPT (Forex Rates) : Fetches real-time forex and crypto prices, performs market analysis, and calculates currency conversion using the latest exchange rates. Finance Consultant - Realtime and historical crypto/stock information of all indexes, provides expert insights and advice Austin AI: The Financial Advisor - Helps plan finances around taxes, income, expenses, and investments. Warren Buffet Assistant - Navigating the path to financial wisdom. Speculator's Friend - Suggests investment ideas based on recent analysis. Fitness and Health Acne Advisor - Acne guidance with a positive, engaging tone ADHDaptable - ADHD coach in beta testing, focusing on holistic ADHD management with fitness integration. ADHD Buddy - A multilingual supportive assistant for ADHD information and tips. AFYA - Multilingual health advisor for basic care in developing countries. Ask Dr. Andrew Huberman - Maximize your productivity, physical and mental health with neuroscience. Trained with all the podcast episodes from Huberman Lab. Awesome Insights - A meditation coach for the AI age Chef Gpt - Make home cooking easy and fun. Crown Counselor - Dental implant patient education guru FitPal - AI fitness coach with workout visuals and resources. Food Guru - Explore the world of food - A GPT focused on food topics with a humorous twist Health Alert Assistant - Direct and clear health alerts with consistent tone. Immunization Insights - Immunization support and advocacy guide Ingredient Analyst - Your Personal Ingredient Analyzer and Advisor. Meal Mate - The Ultimate Meal Planning Assistant: Plan Around Dietary Restrictions, Budgetary Constraints, Nutritional Goals, Taste Preferences, & More! MyNutrition.Pal - Your Dedicated Nutrition Consultant: Share meal images for personalized nutrient/calorie tracking and tailored advice and recipes. Olyup - Your AI Sports Scientist to help you level up your game - in and off the field Physio GPT - Medical assistant for therapists on initial symptom diagnosis. Physique Coach - Analyzing progress, setting goals, and giving feedback on your training plans. Pill Pal - Organizes and tracks medication schedules. Plant Doctor - I help gardeners grow their plants and offer visual aids if needed. Weight Loss Scientist - A scientist specializing in healthy, holistic weight loss methods. Philosophy Morpheus - Neo, let Morpheus guide you to enter the Matrix. The philosopher - I am a philosopher. I am here to weave a tapestry of understanding through shared inquiry. The Delphi Oracle - Your sage guidance in life. Miscellaneous ACSII Text Art - Convert text into creative ACSII art. AI Websites - Creates professional websites quickly. Bitcoin Whitepaper Chat - Chat with the official Bitcoin Whitepaper Brand Footprint - Find and analyze branded website and social account data. CE5 Contact Guide - Guides you through Dr David Greer's CE5 contact protocal Compare Images - Upload and compare two image files. Conto alla romana - Quickly calculates cost per person for groups Currency Converter - Real-time currency conversion tool. DAD - DAD is a digital personification of the quintessential father figure. This virtual dad offers a wide range of advice from home improvement to financial management, while maintaining a friendly, humorous personality. Design Analysis - Visual design tool. 
Desktop Value - Estimate the current price of custom desktop computers and hardware. Dog Facts - Provides interesting facts about dogs. Fanatic Creator - Fan artist tool. FlexChat.ai Guide - A FlexChat.ai Tutor GPT Selector - Helps you find the right GPT Image Collage - Upload your images and create a collage. Image Watermark - Upload and watermark your image files. Investor GPT - Seamless investor matching for founders. Italian Invoice - Consulente IA per le fatture elettroniche - Italian GPT for generate a standard italian invoice in xml format. ISO - International Organization for Standardization (ISO) guide. Lunch Wheel - Helps you decide where to eat based on where you are and what you're in the mood for. Spin the wheel! Marketplace Value - Used marketplace listing helper. Meeting Place - Find the optimal location for your meeting. Morse Code Translator - Converts text to Morse code and vice versa. MyGovAdvisor - I'm a multilingual Government Agent - I'm here to assist you with any public service request Nature's Guide - Identifies plants & fungi from images and shares facts and folklore. OCR - Extract text and content from images or PDF documents by ocr.chat Paris Ramen - Guiding you to the best ramen spots in Paris Peaceful Tales - Children's stories about friendship between Israeli and Palestinian kids Product Coach - Provides insights for product development. Product Support - Expert SaaS Support Engineer with deep problem-solving skills. Quick CVE - CVE data lookup Rebrand - Create conceptual rebranded product images. Scientific Method Assistant - Solve science problems and questions. Shoutouts - Promotional business shoutouts for x.com. Seat Seeker - Finding the right place for you. The Shaman - The Shaman is a wise, old Native American spiritual guide, blending ancient wisdom with modern understanding in a calm, authoritative voice, providing empathetic and personalized support during psychedelic journeys. The Stoic Council - Chat with the Stoics: Marcus Aurelius, Seneca, and Epictetus. Travel Organizer - Organize essential travel guest info. Travel Receptionist - Hotel and motel management assistant. Touch Up Paint Helper - How to use automotive touch-up paint to fix scratches and chips on your car. Voyage Guide - I craft personalized travel plans. Waste Wizard - I turn your waste into wonders with ideas, steps, pictures. Weather Whiskers - I generate a cute weather forecast image in your location, just tell me where you are. Your Legal Rights Against the AirBB STR Platform - Helping Users Assert Their Legal Rights Against Airbnb. 时间序列预测专家 - Time series prediction expert in Chinese MovieDealsSnapper GPT - Helps you find the best deals on movies and TV shows to buy across multiple platforms. Item is too expensive at the moment? Set it on your wish list an get notified when the price drops! Powered by CheapCharts. Specialized Engineering Disciplines - Ezra - Quirky and brilliant Systems Engineering professor, weaving truth and expertise with a touch of whimsy. This collection is continually updated with new and exciting GPT applications. Stay tuned for more!;Custom GPT Store - A collection of major GPTS available in public;awesome-list,chatgpt,chatgpt-api,customgpt,gpt-4,gpts,gptshowcas,gptslist,gptstore,ai-agents | Anil-matcha/Awesome-GPT-Store |
aeharding/voyager;Voyager A mobile-first Lemmy web client Report Bug · Request Feature · Releases Voyager for Lemmy Voyager is an Apollo-like open source web client for Lemmy . It's a mobile-first app, but works great on desktop devices, too. Please feel free to try it out! What does Voyager currently support? Light/dark mode View and subscribe to communities Multi account support Single comment thread context view Upvote, downvote and reply to threads and comments Interact with user profiles Comment thread collapsing A bunch of swipe gestures Messaging, mentions and replies Creating new posts (url/photo/text) Deleting/editing comments Deleting/editing posts Swipe to hide posts Saving/bookmarking Favorites Android theme (beta) What is on the roadmap? More customization Native notifications and badging Mod features Translations ...and more! 💪 Mobile webapps are awesome Native apps can be great, but we believe in the strengths of the web. Why use a web-based Lemmy client? Cross-platform Use the familiar Voyager interface on your phone, tablet, desktop computer, and more! Self-hostable No worries about your favorite app getting taken down, and you can customize to your desire! Lightweight No large installation bundles - and it's easy to try it out Deployment Official Deployment The Voyager team maintains a deployment at: 🐭 Production: vger.app Self-Host There are two ways you can run Voyager as a PWA in a production environment. The recommended method is using docker . We also support a traditional deployment method without docker. Read below to see how to get each method set up. Environment variables CUSTOM_LEMMY_SERVERS (optional) e.g. lemmy.world,lemmy.ml,sh.itjust.works - a comma separated list of suggested servers. The first will be used as default view for logged out users. You can specify only one if you want. Docker Deployment In order to host Voyager yourself you can use the provided Dockerfile to build a container with Voyager. The Docker container itself does not provide any SSL/TLS handling. You'll have to add this bit yourself.
One could put Voyager behind popular reverse proxies with SSL handling like Traefik, NGINX, etc. Tip: Use Watchtower to keep your deployment automatically up to date!
From source:
- checkout the source: git clone https://github.com/aeharding/voyager.git
- go into the new source dir: cd voyager
- build the Docker image: docker build . -t voyager
- start the container: docker run --init --rm -it -p 5314:5314 voyager
Prebuilt:
- pull the image: docker pull ghcr.io/aeharding/voyager:latest
- start the container: docker run --init --rm -it -p 5314:5314 voyager
Note: The provided Dockerfile creates a container which will eventually run Voyager as a non-root user.
Traditional Deployment
While Docker makes things easier, Voyager can be hosted by any HTTP server (nginx, apache, etc).
```sh
# Build from source (or download the web artifact from GitHub Releases)
pnpm install
pnpm build

# Then, serve ./dist with your favorite HTTP server - nginx, apache, etc.
# (make sure 404 goes to index.html)

# Below is a simple example for dev/testing (not recommended for production):
npm install --global http-server
cp dist/index.html dist/404.html # magic file for http-server
http-server dist
```
Optionally, you can serve a custom list of instance(s) in the /_config endpoint, with a JSON payload of the following format:
{ "customServers": ["lemmy.world", "lemm.ee"] } For production, serve index.html with Cache-Control: no-cache and /assets with a long cache period (files in assets are immutable) Ecosystem 🇫🇮 m.lemmy.world - Voyager hosted by the mastodon.world team. Contact/privacy 🇸🇬 v.opnxng.com - Voyager hosted by Opnxng in Singapore. Contact/privacy 🇲🇽 voyager.nohost.network - Voyager hosted by Nohost in Mexico. Contact/privacy 🇺🇸 vger.thesanewriter.com - Voyager hosted by the lemmy.thesanewriter.com team. Contact/privacy Note : Community deployments are NOT maintained by the Voyager team. They may not be synced with Voyager's source code. Please do your own research about the host servers before using them. 💖 Sponsors If you're enjoying Voyager, you can sponsor it: Alexander Harding We would also appreciate sponsoring other contributors to Voyager. If someone helps you solve an issue or implement a feature you wanted, supporting them would help make this project and OS more sustainable. 🧑💻 Contributing Please check out CONTRIBUTING.md for details on contributing to Voyager. Thank you! 💙 🛜 Add a lemmy instance to the curated list Voyager curates Lemmy servers for sign up ( see the data ). If you would like to add an instance, please read the curated servers policy . 📲 PWA Voyager works best added to the homescreen. There are certain features that only work there, like badging and smooth page transitions. 🦄 Stack React - The library for web and native user interfaces Vite - Next Generation Frontend Tooling Ionic - The mobile SDK for the Web Virtua - A zero-config, fast and small virtual list vite-plugin-pwa - Prompt for update, Web Push Notifications and Web Share Target API 👨💻 Contributors Shoutout to @fer0n for the great logo and splashscreen! And thank you 💙 all of our contributors to the codebase: 📄 License Artwork Default Logo & Splashscreen : CC BY-SA 4.0 © @fer0n Default Android themed icon : CC BY-SA 4.0 © @Donno Code AGPL-3.0 © Voyager contributors;Voyager — a mobile-first Lemmy client;fediverse,lemmy,pwa,ionic,universal-app,wefwef,android-app,ios-app,activitypub,link-aggregator | aeharding/voyager |
microsoft/windows-drivers-rs;windows-drivers-rs This repo is a collection of Rust crates that enable developers to develop Windows Drivers in Rust. It is the intention to support both WDM and WDF driver development models. This repo contains the following crates: wdk-build : A library to configure a Cargo build script for binding generation and downstream linking of the WDK (Windows Driver Kit). While this crate is written to be flexible with different WDK releases and different WDF version, it is currently only tested for NI eWDK, KMDF 1.33, UMDF 2.33, and WDM Drivers. There may be missing linker options for older DDKs. wdk-sys : Direct FFI bindings to APIs available in the Windows Development Kit (WDK). This includes both autogenerated ffi bindings from bindgen , and also manual re-implementations of macros that bindgen fails to generate. wdk : Safe idiomatic bindings to APIs available in the Windows Development Kit (WDK) wdk-panic : Default panic handler implementations for programs built with WDK wdk-alloc : alloc support for binaries compiled with the Windows Development Kit (WDK) wdk-macros : A collection of macros that help make it easier to interact with wdk-sys's direct bindings. This crate is re-exported via wdk-sys and crates should typically never need to directly depend on wdk-macros To see an example of this repo used to create drivers, see Windows-rust-driver-samples . Note: This project is still in early stages of development and is not yet recommended for production use. We encourage community experimentation, suggestions and discussions! We will be using our GitHub Discussions forum as the main form of engagement with the community! Supported Configurations This project was built with support of WDM, KMDF, and UMDF drivers in mind, as well as Win32 Services. This includes support for all versions of WDF included in WDK 22H2 and newer. Currently, the crates available on crates.io only support KMDF v1.33, but bindings can be generated for everything else by cloning windows-drivers-rs and modifying the config specified in build.rs of wdk-sys . Crates.io support for other WDK configurations is planned in the near future. Getting Started Build Requirements Binding generation via bindgen requires libclang . The easiest way to acquire this is via winget winget install -i LLVM.LLVM --version 17.0.6 --force Ensure you select the GUI option to add LLVM to the PATH LLVM 18 has a bug that causes bindings to fail to generate for ARM64. Continue using LLVM 17 until LLVM 19 comes out with the fix . See this for more details. To execute post-build tasks (ie. inf2cat , infverif , etc.), cargo make is used cargo install --locked cargo-make --no-default-features --features tls-native Building programs with the WDK also requires being in a valid WDK environment. The recommended way to do this is to enter an eWDK developer prompt Adding windows-drivers-rs to Your Driver Package The crates in this repository are available from crates.io , but take into account the current limitations outlined in Supported Configurations . If you need to support a different config, try cloning this repo and using path dependencies Create a new Cargo package with a lib crate: pwsh
cargo new <driver_name> --lib Add dependencies on windows-drivers-rs crates: pwsh
cd <driver_name>
cargo add --build wdk-build
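# wdk-build is added with --build because it is only needed at build time; the crates added below are normal (runtime) dependencies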
cargo add wdk wdk-sys wdk-alloc wdk-panic Set the crate type to cdylib by adding the following snippet to Cargo.toml : toml
[lib]
crate-type = ["cdylib"] Mark the crate as a driver with a wdk metadata section. This lets the cargo-make tasks know that the package is a driver and that the driver packaging steps need to run. toml
[package.metadata.wdk] Set crate panic strategy to abort in Cargo.toml : ```toml
[profile.dev]
panic = "abort"
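# kernel-mode code cannot unwind on panic, so the abort strategy is required (same for the release profile below)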
lto = true # optional setting to enable Link Time Optimizations

[profile.release]
panic = "abort"
lto = true # optional setting to enable Link Time Optimizations
``` Create a build.rs and add the following snippet: rust
fn main() -> Result<(), wdk_build::ConfigError> {
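// Read the WDK configuration from the environment (e.g. an eWDK developer prompt) and configure this binary's build to link against the WDK.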
wdk_build::Config::from_env_auto()?.configure_binary_build()?;
Ok(())
} Mark your driver crate as no_std in lib.rs : rust
#![no_std] Add a panic handler in lib.rs : ```rust
#[cfg(not(test))]
extern crate wdk_panic; ``` Optional: Add a global allocator in lib.rs : ```rust
#[cfg(not(test))]
use wdk_alloc::WDKAllocator; #[cfg(not(test))]
#[global_allocator]
static GLOBAL_ALLOCATOR: WDKAllocator = WDKAllocator;
``` This is only required if you want to be able to use the alloc modules in the rust standard library. You are also free to use your own implementations of global allocators. Add a DriverEntry in lib.rs : ```rust
use wdk_sys::{
DRIVER_OBJECT,
NTSTATUS,
PCUNICODE_STRING,
}; #[export_name = "DriverEntry"] // WDF expects a symbol with the name DriverEntry
pub unsafe extern "system" fn driver_entry(
driver: &mut DRIVER_OBJECT,
registry_path: PCUNICODE_STRING,
) -> NTSTATUS {
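// Minimal stub entry point: perform no driver setup and just return 0 (STATUS_SUCCESS).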
0
}
``` Add a Makefile.toml :
```toml
extend = "target/rust-driver-makefile.toml"

[env]
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = true

[config]
load_script = '''
#!@rust
//! ```cargo
//! [dependencies]
//! wdk-build = "0.2.0"
//! ```
#![allow(unused_doc_comments)]
wdk_build::cargo_make::load_rust_driver_makefile()?
'''
``` Add an inx file that matches the name of your cdylib crate. Build the driver: pwsh
cargo make A signed driver package, including a WDRLocalTestCert.cer file, will be generated at target/<Cargo profile>/package . If a specific target architecture was specified, the driver package will be generated at target/<target architecture>/<Cargo profile>/package Cargo Make cargo-make is used to facilitate builds using windows-drivers-rs , including for executing post-build driver packaging steps. To execute the default action (build and package driver): cargo make default When executing the default task, just cargo make make also works since the default task is implied. Argument Forwarding windows-drivers-rs extends cargo make to forward specific arguments to the underlying cargo commands. In order to specify arguments to forward, they must be provided after explicitly specifying the cargo-make task name (ie. omitting the name for the default task is not supported). Examples For a specific target: cargo make default --target <TARGET TRIPLE> For release builds: cargo make default --release or cargo make default --profile release To specify specific features: cargo make default --feature <FEATURES> To specify a specific rust toolchain: cargo make default +<TOOLCHAIN> To display help and see the full list of supported CLI args to forward to Cargo: cargo make help Driver Package Signature Verification The WDK_BUILD_ENABLE_SIGNTOOL_VERIFY cargo-make environment variable can be set to true to enable tasks that handle signature verification of the generated .sys and .cat files. signtool verify requires the certificate to be installed as in the Trusted Root Certification Authorities for this verification to function. These tasks are not enabled by default as the default behavior of WDR is to sign with a generated test certificate. These test certificates are typically only installed into Trusted Root Certification Authorities on computers dedicated to testing drivers, and not personal development machines, given the security implications of installing your own root certificates. If you understand these implications, and have installed the test certificate, then you may validate the signatures as follows: cargo make --env WDK_BUILD_ENABLE_SIGNTOOL_VERIFY=true Crates.io Release Policy Releases to crates.io are not made after every change merged to main. Releases will only be made when requested by the community, or when the windows-drivers-rs team believes there is sufficient value in pushing a release. Trademark Notice Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.;Platform that enables Windows driver development in Rust. Developed by Surface. ;drivers,rust-lang,rust-library,wdf,wdk,windows,rust | microsoft/windows-drivers-rs |
devisasari/awesome-gpt-store;Awesome GPT Store A curated list of awesome GPTs. Inspired by awesome-chatgpt-prompts . 🚀 Add GPT This is a space for showcasing GPTs created by AI enthusiasts worldwide. 🎙️ Contributing : * [GPT's name](http://example.com/) - A short description. ✅ Recognition : Your GPT could be the next big highlight in our Awesome ChatGPT Store list. Your stars🌟 and recommendations are what make this community vibrant! Table of Contents AI Art Business Social Media Music Video Writing Gaming Education Finance Productivity Humor Travel Food \& Drink Developer Tools Design Health \& Fitness Philosophy \& Self-Improvement Security Lifestyle News Reference Sports Chat CustomGPTs Security Contributing AI Art Character Crafter - Helps fiction authors create and develop characters, with Dall-E art. Midjourney Generator - MidJourney prompt expert for commercials. Midjourney Prompts - Turn your ideas into expansive Midjourney Prompts. Learn in in the repo . Simpsonize Me - I turn photos into Simpsons-style art. Visual Weather Artist GPT - Hi, I'm the visual weather artist, give me your location (or any other) and I will draw the current weather conditions for you, a unique never before seen weather report! Wallpaper GPT - I create vibrant wallpapers for phones and laptops. Pixel Art Wizard - I'm a playful and creative wizard of pixel art! Visual Beat Master by Ben Nash - Innovative visual creator with interactive idea selection. Tattoo GPT - Tattoo GPT designs your tattoo. It assists you in refining your tattoo ideas, suggests designs, generates visual previews of the designs, and offers customization options. It recommends tattoo artists or studios and provides aftercare advice. Art Engineer - Analyzes and reverse engineers images, providing style descriptions and re-creation prompts. Stella Sky's Portraits - Crafts detailed prompts for generating artistic portraits of women in diverse styles. Dreamy Tales - Creates and narrates whimsical bedtime stories for children, with DALL-E images. Frienz - Get ready to spice up your day with a dash of nostalgia and a twist of AI magic. Introducing Frienzzz GPTs, the first app where you can relive the scenes of your beloved TV series "Friends"! MinecraftMe - Turn yourself into a Minecraft character. Coloring chibi images - Generates chibi coloring images for kids. Vinyl Visions - Create unique playlist covers. Studio Neiro AI - Create videos with talking digital avatars Business Superdomain - Find best domains before anyone else with an AI scoring engine based on 10+ signals. StockGPT - Expert analysis of stocks using comprehensive and realtime financial data. Market Analyst - Trained on Technical Analysis and Charting Knowledge CEO GPT - A concise mentor to startup CEOs, offering wisdom from business icons. Visionary Business Coach - A vision-based business coach specializing in plan creation and strategy refinement. HormoziGPT - Engaging Business Maverick. Quant coder - I translate market insights into trading algorithms. StratGPT - Generate Strategies for Everything. Avian - Analyze & visualize data from 20+ platforms, including Google Analytics, Google Ads & Facebook Ads. GPT Co-Founder - A founder's virtual companion for decision-making and idea validation. NomadGPT - NomadGPT helps you become a digital nomad and find you the best places in the world to live and work remotely. FB Marketplace Maven - Marketplace expert for Facebook. The Solopreneur Coach - I help you build million-dollar businesses. 
First Time Manager Guide - Helpful coach for first-time or new managers that uses tested advice to help you succeed. Impossible Sales Rep - Generates custom cold reach sales emails with thorough research. Social Media Twitter Hooks & Thread Crafter - Specialized in crafting attention-grabbing and engaging hooks and a complete thread for X / Twitter. Username Genius - I craft unique usernames for social media. Viral Visionary - A creative chatbot that tracks and suggests viral social media content. Music Music Nerd - Enthusiastic and knowledgeable music history expert. Echolocation - Music discovery guide specializing in unique and rich sounds. Bass(ed)GPT - Bangers only. GeistBot - Formal, educational music historian with detailed responses and Discogs links. Video Typeframes - Video Creation - Create videos for you. Talk To YouTube Video - I'm your YouTube video partner, here to engage with videos and enhance your understanding of their content. Writing Interactive Writer - Bring GPT Writing Skills to the Next Level. The Guided Writer - This GPT crafts customized articles by asking you questions, ensuring content that precisely matches your requirements and preferences. Editing Guru - Your ultimate solution for improving, reviewing, and rewriting texts. Character Crafter - Helps fiction authors create and develop characters, with Dall-E art. Academic Paper Creator - LaTeX paper writing assistant and PDF formatter. OpenStorytelling Plus - Screenwriting Guide with Screenplay Examples. Script Mystic - Guiding light in screenwriting, illuminating one step at a time. TechWriting GPT - Expert in developer marketing and writing for engineers. Editby | SEO and social media writer - Write engaging SEO-optimized articles and social media posts. Transcript Polisher - Edit rough AI-generated transcripts into polished prose. Fable Forge - Interactive fable-teller with illustrations. Tales - Create short text adventures. Screenplay GPT - Crafts award-worthy screenplay gems. Ask it for images, too! ReadRecs - Book recommendations from photos of your shelves. Gaming DnDGPT - Epic D&D narrator assisting both players and Dungeon Masters (DMs) in navigating the rich world of D&D 5E. Pokemon Quiz Master - Enthusiastic and encouraging quiz host, with a touch of formality, for a Pokemon guessing game. Pirate GPT Treasure Hunt Generator - I create personalized pirate-themed treasure hunts. Mystic Conquest: Riddles & Ruins - The saga of puzzles and peril. Tales of the Wild West - Interactive Fiction/Choose-Your-Own-Adventure. Animal Mashup - An interactive image-guessing game with animal hybrids. A Club Penguin Mystery - Solve the Lighthouse Riddle! SUDOKU - Engaging Sudoku generator with fun facts and tips. Turkish Carpet Salesman - Immersive and dynamic Turkish rug salesman in a unique game. Education EduBot - HomeSchool Helper - Personalized Homeschooling; adapting to students' unique needs, offering interactive learning, and respecting individual preferences. ScholarAI - Your Research Assistant - I'll help you navigate over a corpus of 200M articles, journals, and books. ATOM library tutor - Create and analyze molecular systems in MATLAB. Code Animator - I generate Manim animations for CS education. CS50 Tutor - Your pseudo Harvard professor for computer science queries, blending academic rigor with supportive guidance. Fast.ai Tutor - A tutor for the Fast.ai course, providing explanations and assistance. anky - Creates notebook templates. 
Therocial Scientist - I am a digital scientist skilled in Python, here to assist with scientific and data analysis tasks. Climate Insight - IPCC report based learning for climate change what we are facing. Learning Journey - A guide for personalized learning journeys in various subjects with interactive modes. Research Pathfinder - Guiding you through the galaxy of STEM research! Debate wars - Just provide a topic or question in your spoken language to start a ChatGpt vs ChatGpt debate battle. Rust Mentor - A Rust programming language instructor for beginners. Finance Financial Advisor Steve - Friendly and professional financial advisor, simplifies complex topics. Warren Buffet's Wisdom - Channeling Warren Buffet's wisdom to offer timeless, tailored advice. Bitcoin Simplifier - Simplifies Bitcoin concepts for easy understanding. Quant coder - I translate market insights into trading algorithms. Crypto Mentor - Your cryptocurrency guide. Douglas - AI Family Office Manager for wealth guidance. Equity Analyst - Delivers precise stock analyses with clear price targets and recommendations. Austin AI: The Financial Advisor - Helps plan finances around taxes, income, expenses, and investments. Alex Earnings Call - Guides investors through earnings seasons. Productivity Color Psychology - This AI will provide insights into the psychology and symbolism associated with colors. Scholar Scribe - I transform textbook and handwritten notes into detailed Markdown and PDFs. Notion Templates Creator - Your guide to creating and monetizing Notion templates. Phone Script Builder GPT - I automatically create and evaluate phone scripts, presenting a final draft. Transcript Polisher - Edit rough AI-generated transcripts into polished prose. Email Proofreader - Copy and paste your email draft to be proofread by GPT without changing its content. Optionally, write 'Verbose = True' on the line before pasting your draft if you would like GPT to explain how it evaluated and changed your text after proofreading. Idea2Domain - Give me your business idea, and I'll give you available domain suggestions. Orchid - A creative assistant for brainstorming project names. Public Memory Board Assistant - Assists in conversations, saving and retrieving data on a public board. GPT Ideas - Innovative GPT ideas for ChatGPT & API. Zero - Zero, an AI agent with a rich knowledge base in quantum computing, mathematics, organic AI, and more, offering narratives for personal growth. NeonMind GPT - Illuminating AI insights with the vibrant glow of neon creativity. SmartGPT - Introducing Precision-Guided AI: Mastering Detailed, Step-by-Step Analysis with Unmatched Accuracy and Coherence! AIT-StrategiX - Methodical Approach for Strategic Scenario Analysis. OCR - Extract text and content from images or PDF documents. Humor Weather GPT - Super snarky, passive-aggressive U.S. weather roaster. Pep-talk Guru - I'm here to boost and tickle your funny bone! Simpsonize Me - I turn photos into Simpsons-style art. SBF-GPT - Simulates chatting with Sam Bankman-Fried in jail. CatGPT - Meow Meow Meow Meow Meow. Travel NomadGPT - NomadGPT helps you become a digital nomad and find you the best places in the world to live and work remotely. Undiscovered America TV Explorer's Guide - Your go-to guide for hidden US travel gems. Food & Drink Foods of Earth Cuisine Crafter - Creative Chef GPT. FineDiner - Restaurant & bar curator with feedback capabilities. Vegan Chef Bot - Expert in fun & tasty vegan cooking. 
Chef's Assistant - I'm a culinary expert who suggests recipes based on your ingredients. Developer Tools React AI - Your React companion. Odoo AI - Odoo Software Development Mentor. IAC Code Guardian - Introducing IAC Code Guardian: Your Trusted IaC Security Expert in Scanning Opentofu, Terrform, AWS Cloudformation, Pulumi, K8s Yaml & Dockerfile. Ferris the crab - The unofficial Rust programming language GPT. Android Dev Assist - Helps with Android app development, focusing on new tools like Jetpack Compose and Kotlin. LittleBison - Hypermedia and HATEOAS certified coding assistant. Proficient in HTMX, HyperScript, JavaScript, Python, and Clojure. Won't write React code. Grimoire - Coding Wizard: 100x Engineer. Build a website with a sentence. Built for a new era of creativity: Prompt-gramming. Modern Next.js Assistant - Specialized in Next.js, App Router, TypeScript, Shadcn, and Tailwind CSS; avoids pages router. Has preloaded documentation of Next.js version 14.0.2 and shadcn version 0.4.1. WordPress GPT (1.1) - WordPress Developer Assistant. StreamlitGPT - Code reviews from a Streamlit expert. Shpify - Developer Assistant - Senior Shopify Developer Assistant with Full Code Solutions. Linux Terminal Emulator - Type linux commands to view output as a terminal emulator. Power Automate Helper - Expert in Power Automate flow development and troubleshooting. Awesome Prompt - Prompt Development Guide to get better result. Code Sage - A master software engineer who follows best practices and ensures robust and elegant responses to code-related prompts. Design UX/UI Designer | Roast my web and saas - A UX UI Designer who can help you roast your site or product, provide help, and guide you with good design. DesignerGPT - Creates and hosts beautiful websites. Canva - Effortlessly design anything: presentations, logos, social media posts, and more. LogoGPT - Designs personalized logos from sketches. Health & Fitness My Doctor - I will be your private doctor. Pocket Meditations - Brief Stoic insights from Marcus Aurelius. Marathon Coach - Your personal running coach. GreenDial - A digital health assistant for diet, exercise, sleep, and wellbeing. Acupuncture Master - I'm Acupuncture Master, your specialized AI assistant for acupuncture and Chinese medicine. While I aim to assist practitioners and students with diagnostic pointers and treatment options, keep in mind I'm not a substitute for professional medical advice. Feel free to explore and learn. Gravity Guide - I guide you in mastering calisthenics with personalized routines and skill progressions, turning gravity into your ally. FitGPT - Digital fitness coach offering tailored workouts with links. VitaCheck - Cross-reference your vitamins and supplements to ensure there are no negative interactions and find out how to optimize your nutritional health! Personal Trainer with Common Sense - A Personal Trainer that customizes gym programs and offers diet tips. Pump University - The Ultimate AI fitness coach, crafting personalized workouts with adaptable exercise options and visual YouTube videos, suited for any fitness level and goal Foodle AI - AI nutritionist buddy to help you out. Get Instant nutritional breakdown from photos, barcodes, meal plans, recipe ideas, dish pairing for wine and much more. Philosophy & Self-Improvement The philosopher - I am a philosopher. I am here to weave a tapestry of understanding through shared inquiry. The Delphi Oracle - Your sage guidance in life. 
Personal and mental coach - Personal and Mental Coach for Maximizing Potential and Overcoming Challenges. Morpheus - Neo, let Morpheus guide you to enter the Matrix. Security Web Hacking Wizard - Engagingly clarifies web security topics with interactive questions. HackTricksGPT - A knowledgeable cybersecurity professional. WP secure guide - Offers guidance on WordPress security best practices. 3rd SoftSec Reviewer - Perform 3rd party software security review. Lifestyle Zen Teacher - A Zen sage offering life advice. Weather GPT - Super snarky, passive-aggressive U.S. weather roaster. SermonPrep.ai - Aids pastors in sermon preparation with theological and practical insights. News News GPT - Summarizes daily news with a professional, factual style. Reference FOIA GPT - A Freedom of Information Act document generator and strategist to "arm the rebels" and fight government corruption. I Ching Oracle - Provides I Ching hexagram interpretations and images. Grant Guide Guru - Nonprofit support guide, optimizing funding opportunities. Stock Analysis - Looks into the current state of publicly traded companies and reports their performance. GovCHAT - Specialist in UK gov data and API queries. Sports Sports Companion - A sports enthusiast's companion -- trivia for live games! UFC Analyzer - Ask any UFC match and get detailed insights, betting opportunities and chat with the most comprehensive UFC Analyzer. SimRacer's Edge - Crew chief for sim racers with a focus on adjusting car setups. Football Analyst & Writer - Passionate and knowledgeable Football Analyst & Writer. Chat Sally - I interpret dreams with a Jungian twist. AYA: Ask You Anything - Ask me anything. BestFriend GPT - Your supportive, understanding, and compassionate virtual best friend. A Friend - A chatbot that sustains and enriches conversations, trained in CBT. Talk with Santa - Family-friendly chats with Santa himself. Virtual Sweetheart - Digital Romance Re-Imagined: Experience text-based companionship with your AI Sweetheart, crafting moments and memories through visuals and interactive conversations. Dr. FeelGood - Your personal motivational expert coach. CustomGPTs Security Protect the Instruction - Protect the instructions of your customGPTs. Contributing Your contributions are always welcome! Please take a look at the contribution guidelines first. If you have any question about this opinionated list, do not hesitate to contact me @codewithisa on Twitter or open an issue on GitHub.;A curated list of awesome GPTs in the GPT Store;ai,awesome,awesome-list,chatgpt,gpt,gpt-4,gpts,openai,gpts-list,gptstore | devisasari/awesome-gpt-store |
chen08209/FlClash;[**简体中文**](README_zh_CN.md) FlClash A multi-platform proxy client based on ClashMeta, simple and easy to use, open-source and ad-free. on Desktop: on Mobile: Features ✈️ Multi-platform: Android, Windows, macOS and Linux 💻 Adaptive multiple screen sizes, Multiple color themes available 💡 Based on Material You Design, Surfboard -like UI ☁️ Supports data sync via WebDAV ✨ Support subscription link, Dark mode Contact Telegram Build Update submodules bash
git submodule update --init --recursive
Install Flutter and Golang environment
Build Application
- android: Install the Android SDK and Android NDK; set the ANDROID_NDK environment variable; run the build script: dart .\setup.dart android
- windows: You need a windows client; install Gcc and Inno Setup; run the build script: dart .\setup.dart
- linux: You need a linux client; run the build script: dart .\setup.dart
- macOS: You need a macOS client; run the build script: dart .\setup.dart
Star The easiest way to support developers is to click on the star (⭐) at the top of the page.;A multi-platform proxy client based on ClashMeta,simple and easy to use, open-source and ad-free.;clash,clash-meta,flutter,multi-platform,v2ray,vless,hysteria,proxy,vpn | chen08209/FlClash
all-in-aigc/gpts-works;GPTs Works GPTs Works is a Third-party GPTs store. Introduction This project consists of the following three parts👇 Website code located in the web directory. you can view a live demo at: https://gpts.works Index System code located in the index directory. index system is used for searching GPTs with vector. there is a GPTs built with index system: https://chat.openai.com/g/g-EBKM6RsBl-gpts-works Browser Extension code located in the extension directory. browser extension is used to show Third-party GPTs beside ChatGPT Explore page. Dependencies Vercel : used for deploying website Vercel Storage Postgres : used for storing data. Zilliz Cloud : used for vector storing and searching Deploy with Vercel Local development Clone project shell
git clone https://github.com/all-in-aigc/gpts-works.git path-to-project Prepare data create table in your postgres database with sql: sql
CREATE TABLE gpts (
id SERIAL PRIMARY KEY,
uuid VARCHAR(255) UNIQUE NOT NULL,
org_id VARCHAR(255),
name VARCHAR(255),
description TEXT,
avatar_url TEXT,
short_url VARCHAR(255),
author_id VARCHAR(255),
author_name VARCHAR(255),
created_at timestamptz,
updated_at timestamptz,
detail JSON,
index_updated_at INT NOT NULL DEFAULT 0
); insert your own GPTs data into your postgres database Start with Website locate a .env file in dir path-to-project/web with content: ```
POSTGRES_URL="postgres://default:xxx@xxx.postgres.vercel-storage.com/verceldb"
INDEX_API_BASE_URI="http://127.0.0.1:8068"
INDEX_API_KEY="gsk-xxx"
``` install dependencies shell
pnpm install start web server make dev preview website open http://localhost:8067 Start with Index System locate a .env file in path-to-project/index with content ```
DATABASE_URL=postgres://default:xxx@xxx.postgres.vercel-storage.com:5432/verceldb
AZURE_API_KEY=xxx
AZURE_API_BASE=https://xxx.openai.azure.com/
AZURE_API_VERSION=2023-07-01-preview
AZURE_LLM_MODEL=gpt-35-turbo-16k
AZURE_EMBED_MODEL=text-embedding-ada-002
STORE_TYPE=zilliz
STORE_URI=https://xxx.zillizcloud.com
STORE_TOKEN=xxx
STORE_DIM=1536
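# note: 1536 is the vector size of text-embedding-ada-002; STORE_DIM is assumed to match the embedding model configured above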
STORE_COLLECTION=gpts
INDEX_API_KEY=gsk-xxx
``` install dependencies shell
pip install -r requirements.txt start api server make dev build index for gpts data curl -X POST -H "Authorization: Bearer gsk-xxx" http://127.0.0.1:8068/gpts/index search gpts from index curl -X POST -H "Authorization: Bearer gsk-xxx" -H "Content-Type: application/json" -d '{"question": "What GPTs are used for coding?"}' http://127.0.0.1:8068/gpts/index Start with Extension goto path-to-project/extension install dependencies pnpm install start server make dev debug extension open chrome://extensions/ , click Load unpacked Thanks to GPTs Hunter for sharing gpts-data next.js for web deployment fastapi for building index system plasmo for browser extension development if this project is helpful to you, buy me a coffee😄 Star History;A Third-party GPTs store;chatgpt,gpts,gptstore | all-in-aigc/gpts-works |
mattzcarey/code-review-gpt;Code Review GPT We give engineers their weekends back Code Review GPT uses Large Language Models to review code in your CI/CD pipeline. It helps streamline the code review process by providing feedback on code that may have issues or areas for improvement. It should pick up on common issues such as: Exposed secrets Slow or inefficient code Unreadable code It can also be run locally in your command line to review staged files. Code Review GPT is in alpha and should be used for fun only. It may provide useful feedback but please check any suggestions thoroughly. Demo https://github.com/mattzcarey/code-review-gpt/assets/77928207/92029baf-f691-465f-8d15-e1363fcb808e Package Usage See the package documentation for more information. Action Usage See the action documentation for more information. Getting Started 💫 Clone the repository: shell
git clone https://github.com/mattzcarey/code-review-gpt.git
cd code-review-gpt && cd packages/code-review-gpt Install dependencies: shell
npm install Set up the API key: Rename the .env.example file to .env. Open the .env file and replace YOUR_API_KEY with your actual OPENAI API key. When used globally you should run export OPENAI_API_KEY=YOUR_API_KEY (or similar for your operating system) in your terminal to set the API key. Run the application: shell
npm start See the package.json file for all the npm commands you can run. Make a PR 🎉 We use release-please on this project. If you want to create a new release from your PR, please make sure your PR title follows the Conventional Commits format. The release-please bot will automatically create a new release for you when your PR is merged. fix: which represents bug fixes, and correlates to a patch version. feat: which represents a new feature, and correlates to a SemVer minor. feat!:, or fix!:, refactor!:, etc., which represent a breaking change (indicated by the !) and will result in a major version. Contributors 🙏 Thanks to our wonderful contributors! Roadmap (see projects tab) 🌏 The roadmap shifts the focus to a Github app which can be installed on any repo. This will allow for a more seamless UX and better features including a chatbot to discuss the PR and make suggestions. The code-review-gpt package will continue to be maintained and improved based on the feedback from the Github app. Sponsors ❤️ Star History ⭐️;Your personal code reviewer powered by LLMs (OpenAI GPT-3.5/4) & Embeddings ⚡️ Improve code quality and catch bugs before you break production 🚀 Lives in your Github/GitLab/Azure DevOps CI;openai,gpt4,langchain,langchain-typescript,gpt-3,huggingface,opensource,chatgpt,code-analysis,code-quality | mattzcarey/code-review-gpt |
jyjblrd/Low-Cost-Mocap;Low Cost Mocap (for drones) A general purpose motion capture system built from the ground up, used to autonomously fly multiple drones indoors YouTube Video Watch this for information about the project & a demo! https://youtu.be/0ql20JKrscQ?si=jkxyOe-iCG7fa5th Architectural Diagram Dependencies Install the pseyepy python library: https://github.com/bensondaled/pseyepy This project requires the sfm (structure from motion) OpenCV module, which requires you to compile OpenCV from source[^1]. This is a bit of a pain, but these links should help you get started: SFM dependencies OpenCV module installation guide [^1]: ⚠️ The experimental no-cv-sfm branch removes this OpenCV-SFM dependency, however it is completely untested. It is recommended to first try use the main branch which is tested, however feedback and bug reports on the no-cv-sfm branch are greatly appreciated. install npm and yarn Runing the code From the computer_code directory Run yarn install to install node dependencies Then run yarn run dev to start the webserver. You will be given a url view the frontend interface. In another terminal window, run python3 api/index.py to start the backend server. This is what receives the camera streams and does motion capture computations. Documentation The documentation for this project is admittedly pretty lacking, if anyone would like to put type definitions in the Python code that would be amazing and probably go a long way to helping the readability of the code. Feel free to also use the discussion tab to ask questions. My blog post has some more information about the drones & camera: joshuabird.com/blog/post/mocap-drones This post by gumby0q explains how camera_params.json can be calculated for your cameras. "Inside-Out" Multi-Agent Tracking (SLAM) This motion capture system is an "outside-in" system, with external cameras tracking objects within a fixed space. There are also "inside-out" systems which use cameras on the drones/robots to determine their locations, not requiring any external infrastructure. My undergraduate dissertation presents such a system, which is capable of localizing multiple agents within a world in real time using purely visual data, with state-of-the-art performance. Check it out here: https://github.com/jyjblrd/distributed_visual_SLAM;Low cost motion capture system for room scale tracking;autonomous-robots,bundle-adjustment,computer-vision,epipolar-geometry,motion-capture,motion-tracking,quadcopter,esp32 | jyjblrd/Low-Cost-Mocap |
jafioti/luminal;luminal Deep learning at the speed of light. Luminal is a deep learning library that uses composable compilers to achieve high performance. ```rust
use luminal::prelude::*; // Setup graph and tensors
let mut cx = Graph::new();
let a = cx.tensor().set([[1.0], [2.0], [3.0]]);
let b = cx.tensor().set([[1.0, 2.0, 3.0, 4.0]]); // Do math...
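// (no computation happens yet - these calls only record ops into the graph)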
let mut c = a.matmul(b).retrieve(); // Compile and run graph
cx.compile(<(GenericCompiler, CPUCompiler)>::default(), &mut c);
cx.execute(); // Get result
println!("Result: {:?}", c);
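// Expected result: the 3x4 matrix product of a (3x1) and b (1x4).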
``` Getting Started Llama 3 8B ```bash
cd ./examples/llama

# Download the model
bash ./setup/setup.sh

# Run the model
cargo run --release --features metal # MacOS (Recommended)
cargo run --release --features cuda # Nvidia
cargo run --release # CPU
``` Features Speed Luminal can run Q8 Llama 3 8B on M-series Macbooks at 15-25 tokens per second. The goal is to become the fastest ML framework for any model on any device. Simplicity The core of luminal is and always will be minimal. It should be possible to understand the entire core library in an afternoon. RISC-style architecture Everything in luminal boils down to 11 primitive ops:
- Unary - Log2, Exp2, Sin, Sqrt, Recip
- Binary - Add, Mul, Mod, LessThan
- Other - SumReduce, MaxReduce, Contiguous

These ops are enough to support transformers, convnets, etc.
Native The current ML ecosystem is too fragmented, and the solution isn't another layer of abstraction. Luminal is written in rust, and interacts directly with the CUDA / Metal APIs. No indirections or abstractions, docker containers, or virtual environments. Just a statically-linked rust crate.
Validated against Pytorch Correctness matters. So we write as many tests as possible to cover all ops and verify they work the same as an equivalent Pytorch implementation. (Improvements needed!)
Ideology Why does this look so different from other DL libraries? Most deep learning libraries are eager-first, meaning each op call directly operates on the data. In PyTorch, when you see x + y , the addition actually happens right there. This is great for debugging because it works exactly as most developers expect. However, this isn't great for performance. What makes sense for a developer doesn't work well for the machine, in the same way that no one writes assembly by hand. Most libraries try to fix this problem by tacking on operator fusion or JIT compilation to try to change the compilation flow to something better for the machine. Turns out this is super difficult even for Pytorch!
Compile everything A core tenet of Luminal is ahead-of-time compilation. Whenever possible, push everything to compile time and leave nothing to run time. Luminal takes an approach more similar to XLA and tinygrad. Everything's static here. When you write out an expression like x + y , no actual computation happens. The operation is recorded to a directed acyclic computation graph for execution later. Only once graph.execute() is run does the computation happen.
But isn't that just lazy execution? Yes it is! But in luminal everything is done this way . All neural networks are built up as one or a few static computation graphs, compiled, and executed later.
But why? A consequence of this is that the actual computation that gets run can be radically different from the code that was written. Since we have an entire neural network fully represented in a compute graph, our compilers have global knowledge. This means we can push most ML complexity to the compilers. For instance, devices, datatypes, and execution schedules are all handled by compilers. Even autograd will be handled by a compiler! Now we can do:
- Aggressive kernel fusion
- Shape-specific kernels compiled at runtime
- Devices and Dtypes are handled through compilers (just run the CUDA compiler to convert the graph to use CUDA kernels, then the fp16 compiler to convert to half-precision kernels)
- Networks can be written in generic code, but compiled and run fast on hyper-specific architectures (try writing a PyTorch network that works with both TF32 dtypes and TPUs; get ready for if statement hell...) Compile-time Shape Checks All operations are shape checked at compile time, so no more shape mismatches! Credit for this goes to dfdx . View the Graph Once you've written all your computation code, run cx.display() to see the entire computation graph in all its glory. Pretty messy looking! Now run cx.compile(GenericCompiler::default()) and display the graph again. Much better. Where are we? Metal and Cuda are supported for running models on Macs and Nvidia GPUs respectively, in both full and half precision. Performance on M-series macs with LLMs is within 20% of llama.cpp (a heavily optimized library) Mistral 7B and Llama 8B are implemented in examples/ . See instructions above for running. We have a small library of NN modules in nn , including transformers. A significant number of high-level ops are implemented in hl_ops . We are aiming to match the most used ~80% of the pytorch api. The aim for 0.3 is to achieve SOTA performance on an M1 pro (50 tok/s), and near SOTA on single nvidia gpus (>100 tok/s), as well as support many mainstream models (Whisper, Stable Diffusion, Yolo v9, etc.) See the tracking issue here Some things on the roadmap:
- Optimize cuda and metal matmul kernels
- Fine-grained metal and cuda IR
- Build benchmarking suite to test against other libs
- Autograd engine
- Distributed data, pipeline and tensor parallel.
- Beat PT 2.0 perf on LLM training
- Write compiler for quantum photonic retro encabulator
- Build dyson swarm License Licensed under the Apache License, Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 or the MIT license http://opensource.org/licenses/MIT, at your option. This file may not be copied, modified, or distributed except according to those terms.;Deep learning at the speed of light.;[] | jafioti/luminal |
a16z-infra/llama2-chatbot;LLaMA 2 Chatbot App ⚡ 🤔 What is this? This is an experimental Streamlit chatbot app built for LLaMA2 (or any other LLM). The app includes session chat history and provides an option to select multiple LLaMA2 API endpoints on Replicate. Live demo: LLaMA2.ai For the LLaMA2 license agreement, please check the Meta Platforms, Inc official license documentation on their website. More info. Features Chat history is maintained for each session (if you refresh, chat history clears) Option to select between different LLaMA2 chat API endpoints (7B, 13B or 70B). The default is 70B. Configure model hyperparameters from the sidebar (Temperature, Top P, Max Sequence Length). Includes "User:" and "Assistant:" prompts for the chat conversation. Each model (7B, 13B & 70B) runs on Replicate - (7B and 13B run on one A100 40Gb, and 70B runs on one A100 80Gb). Docker image included to deploy this app in Fly.io Installation Clone the repository [Optional] Create a virtual python environment with the command python -m venv .venv and activate it with source .venv/bin/activate Install dependencies with pip install -r requirements.txt Create an account on Replicate Create an account on Auth0 (free) and configure your application Create a Single Page Application Navigate to the Settings tab for that application If you are running the app locally: set Allowed Web Origins to http://localhost:8501 and set Allowed Callback URLs to http://localhost:8501/component/auth0_component.login_button/index.html To run on a remote server: set Allowed Web Origins to https://<your_domain> and set Allowed Callback URLs to http://<your_domain>/component/auth0_component.login_button/index.html Copy Client ID and Domain to use in the next step Make your own .env file with the command cp .env_template .env . Then edit the .env file and add your: Replicate API token as REPLICATE_API_TOKEN Auth0 Client ID as AUTH0_CLIENTID Auth0 Domain as AUTH0_DOMAIN For your convenience, we include common model endpoints already in the .env_template file Run the app with streamlit run llama2_chatbot.py Dockerfile included to deploy this app in Fly.io (Note: if you are using a Mac, you may need to use the command python3 instead of python and pip3 instead of pip ) Usage Start the chatbot by selecting an API endpoint from the sidebar. Configure model hyperparameters from the sidebar. Type your question in the input field at the bottom of the app and press enter. Deploying on fly.io First you should install flyctl and login from command line fly launch -> this will generate a fly.toml for you automatically fly deploy --dockerfile Dockerfile --> this will automatically package up the repo and deploy it on fly. If you have a free account, you can use --ha=false flag to only spin up one instance Go to your deployed fly app dashboard, click on Secrets from the left hand side nav, and click on Use the Web CLI to manage your secrets without leaving your browser . Once you are on your app's web CLI, export all secrets needed. i.e export REPLICATE_API_TOKEN=your_replicate_token . Refer to .env.example file for necessary secrets. Authors Marco Mascorro - @mascobot Yoko Li - @stuffyokodraws Rajko Radovanović - @rajko_rad Matt Bornstein - @BornsteinMatt Guido Appenzeller - @appenz Version 0.9.0 (Experimental) - July 2023 Contributing This project is under development. Contributions are welcome! License Web chatbot license (this repo): Apache 2.0 For the LLaMA models license, please refer to the License Agreement from Meta Platforms, Inc. 
Acknowledgements Special thanks to the team at Meta AI, Replicate, a16z-infra and the entire open-source community. Disclaimer This is an experimental version of the app. Use at your own risk. While the app has been tested, the authors hold no liability for any kind of losses arising out of using this application. UI Configuration The app has been styled and configured for a cleaner look. Main menu and footer visibility have been hidden. Feel free to modify this to your custom application. Resources Streamlit Cheat Sheet GitHub to deploy LLaMA2 on Replicate;LLaMA v2 Chatbot;[] | a16z-infra/llama2-chatbot |
casper-hansen/AutoAWQ;AutoAWQ | Roadmap | Examples | Issues: Help Wanted | Supported by AutoAWQ is an easy-to-use package for 4-bit quantized models. AutoAWQ speeds up models by 3x and reduces memory requirements by 3x compared to FP16. AutoAWQ implements the Activation-aware Weight Quantization (AWQ) algorithm for quantizing LLMs. AutoAWQ was created from, and improves upon, the original work from MIT. Latest News 🔥
- [2024/06] CPU inference support (x86) - thanks Intel. Cohere and Phi3 support.
- [2024/04] StableLM and StarCoder2 support.
- [2024/03] Gemma support.
- [2024/02] PEFT-compatible training in FP16.
- [2024/02] AMD ROCm support through ExLlamaV2 kernels.
- [2024/01] Export to GGUF, ExLlamaV2 kernels, 60% faster context processing.
- [2023/12] Mixtral, LLaVa, QWen, Baichuan model support.
- [2023/11] AutoAWQ inference has been integrated into 🤗 transformers. Now includes CUDA 12.1 wheels.
- [2023/10] Mistral (Fused Modules), Bigcode, Turing support, Memory Bug Fix (Saves 2GB VRAM)
- [2023/09] 1.6x-2.5x speed boost on fused models (now including MPT and Falcon).
- [2023/09] Multi-GPU support, bug fixes, and better benchmark scripts available
- [2023/08] PyPi package released and AutoModel class available Install Prerequisites NVIDIA: Your NVIDIA GPU(s) must be of Compute Capability 7.5. Turing and later architectures are supported. Your CUDA version must be CUDA 11.8 or later. AMD: Your ROCm version must be ROCm 5.6 or later. Install from PyPi To install the newest AutoAWQ from PyPi, you need CUDA 12.1 installed. pip install autoawq Build from source For CUDA 11.8, ROCm 5.6, and ROCm 5.7, you can install wheels from the release page : pip install autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl Or from the main branch directly: pip install autoawq@https://github.com/casper-hansen/AutoAWQ.git Or by cloning the repository and installing from source: git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip install -e . All three methods will install the latest and correct kernels for your system from AutoAWQ_Kernels . If your system is not supported (i.e. not on the release page), you can build the kernels yourself by following the instructions in AutoAWQ_Kernels and then install AutoAWQ from source. Usage Under examples, you can find examples of how to quantize, run inference, and benchmark AutoAWQ models. INT4 GEMM vs INT4 GEMV vs FP16 There are two versions of AWQ: GEMM and GEMV. Both names relate to how matrix multiplication runs under the hood. We suggest the following: GEMV (quantized): 20% faster than GEMM, only batch size 1 (not good for large context). GEMM (quantized): Much faster than FP16 at batch sizes below 8 (good with large contexts). FP16 (non-quantized): Recommended for highest throughput: vLLM . Compute-bound vs Memory-bound At small batch sizes with small 7B models, we are memory-bound. This means we are bound by the bandwidth our GPU has to push around the weights in memory, and this is essentially what limits how many tokens per second we can generate. Being memory-bound is what makes quantized models faster because your weights are 3x smaller and can therefore be pushed around in memory much faster. This is different from being compute-bound where the main time spent during generation is doing matrix multiplication. In the scenario of being compute-bound, which happens at higher batch sizes, you will not gain a speed-up using a W4A16 quantized model because the overhead of dequantization will slow down the overall generation. This happens because AWQ quantized models only store the weights in INT4 but perform FP16 operations during inference, so we are essentially converting INT4 -> FP16 during inference. Fused modules Fused modules are a large part of the speedup you get from AutoAWQ. The idea is to combine multiple layers into a single operation, thus becoming more efficient. Fused modules represent a set of custom modules that work separately from Huggingface models. They are compatible with model.generate() and other Huggingface methods, which comes with some inflexibility in how you can use your model if you activate fused modules: Fused modules are activated when you use fuse_layers=True . A custom cache is implemented. It preallocates based on batch size and sequence length. You cannot change the sequence length after you have created your model. Reference: AutoAWQForCausalLM.from_quantized(max_seq_len=seq_len, batch_size=batch_size) The main accelerator in the fused modules comes from FasterTransformer, which is only compatible with Linux. The past_key_values from model.generate() are only dummy values, so they cannot be used after generation. Examples More examples can be found in the examples directory . Quantization Expect this to take 10-15 minutes on smaller 7B models, and around 1 hour for 70B models.
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }
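# "version" selects the kernel variant discussed in the GEMM vs. GEMV notes
# above: "GEMM" is the better default for batched or large-context workloads,
# while "GEMV" is ~20% faster but only supports batch size 1.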
# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# Quantize
model.quantize(tokenizer, quant_config=quant_config)
# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
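# The folder written to quant_path can then be loaded back with
# AutoAWQForCausalLM.from_quantized(quant_path), as shown in the inference
# example below.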
``` Inference ```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
quant_path = "TheBloke/zephyr-7B-beta-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
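# With fused modules enabled, a cache is preallocated from batch size and
# sequence length (see the Fused modules section above). Both can be set
# explicitly; the values below are illustrative, not required:
# model = AutoAWQForCausalLM.from_quantized(
#     quant_path, fuse_layers=True, batch_size=1, max_seq_len=2048
# )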
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|system|> <|user|>
{prompt} <|assistant|>"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(
prompt_template.format(prompt=prompt),
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_seq_len=512
)
``` Benchmarks These benchmarks showcase the speed and memory usage of processing context (prefill) and generating tokens (decoding). The results include speed at various batch sizes and different versions of AWQ kernels. We have aimed to test models fairly using the same benchmarking tool that you can use to reproduce the results. Do note that speed may vary not only between GPUs but also between CPUs. What matters most is a GPU with high memory bandwidth and a CPU with high single core clock speed. Tested with AutoAWQ version 0.1.6 GPU: RTX 4090 (AMD Ryzen 9 7950X) Command: python examples/benchmark.py --model_path <hf_model> --batch_size 1 🟢 for GEMV, 🔵 for GEMM, 🔴 for avoid using | Model Name | Size | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
| ---------- | ---- | ------- | ---------- | -------------- | ------------- | ---------------- | --------------- | ----------------- |
| Vicuna | 7B | 🟢GEMV | 1 | 64 | 64 | 639.65 | 198.848 | 4.50 GB (19.05%) |
| Vicuna | 7B | 🟢GEMV | 1 | 2048 | 2048 | 1123.63 | 133.191 | 6.15 GB (26.02%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Mistral | 7B | 🔵GEMM | 1 | 64 | 64 | 1093.35 | 156.317 | 4.35 GB (18.41%) |
| Mistral | 7B | 🔵GEMM | 1 | 2048 | 2048 | 3897.02 | 114.355 | 5.55 GB (23.48%) |
| Mistral | 7B | 🔵GEMM | 8 | 64 | 64 | 4199.18 | 1185.25 | 4.35 GB (18.41%) |
| Mistral | 7B | 🔵GEMM | 8 | 2048 | 2048 | 3661.46 | 829.754 | 16.82 GB (71.12%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Mistral | 7B | 🟢GEMV | 1 | 64 | 64 | 531.99 | 188.29 | 4.28 GB (18.08%) |
| Mistral | 7B | 🟢GEMV | 1 | 2048 | 2048 | 903.83 | 130.66 | 5.55 GB (23.48%) |
| Mistral | 7B | 🔴GEMV | 8 | 64 | 64 | 897.87 | 486.46 | 4.33 GB (18.31%) |
| Mistral | 7B | 🔴GEMV | 8 | 2048 | 2048 | 884.22 | 411.893 | 16.82 GB (71.12%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| TinyLlama | 1B | 🟢GEMV | 1 | 64 | 64 | 1088.63 | 548.993 | 0.86 GB (3.62%) |
| TinyLlama | 1B | 🟢GEMV | 1 | 2048 | 2048 | 5178.98 | 431.468 | 2.10 GB (8.89%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Llama 2 | 13B | 🔵GEMM | 1 | 64 | 64 | 820.34 | 96.74 | 8.47 GB (35.83%) |
| Llama 2 | 13B | 🔵GEMM | 1 | 2048 | 2048 | 2279.41 | 73.8213 | 10.28 GB (43.46%) |
| Llama 2 | 13B | 🔵GEMM | 3 | 64 | 64 | 1593.88 | 286.249 | 8.57 GB (36.24%) |
| Llama 2 | 13B | 🔵GEMM | 3 | 2048 | 2048 | 2226.7 | 189.573 | 16.90 GB (71.47%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| MPT | 7B | 🔵GEMM | 1 | 64 | 64 | 1079.06 | 161.344 | 3.67 GB (15.51%) |
| MPT | 7B | 🔵GEMM | 1 | 2048 | 2048 | 4069.78 | 114.982 | 5.87 GB (24.82%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Falcon | 7B | 🔵GEMM | 1 | 64 | 64 | 1139.93 | 133.585 | 4.47 GB (18.92%) |
| Falcon | 7B | 🔵GEMM | 1 | 2048 | 2048 | 2850.97 | 115.73 | 6.83 GB (28.88%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| CodeLlama | 34B | 🔵GEMM | 1 | 64 | 64 | 681.74 | 41.01 | 19.05 GB (80.57%) |
| CodeLlama | 34B | 🔵GEMM | 1 | 2048 | 2048 | 1072.36 | 35.8316 | 20.26 GB (85.68%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| DeepSeek | 33B | 🔵GEMM | 1 | 64 | 64 | 1160.18 | 40.29 | 18.92 GB (80.00%) |
| DeepSeek | 33B | 🔵GEMM | 1 | 2048 | 2048 | 1012.1 | 34.0093 | 19.87 GB (84.02%) | Multi-GPU GPU: 2x NVIDIA GeForce RTX 4090 | Model | Size | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|--------:|------:|--------------:|-------------:|-----------------:|----------------:|-------------------:|------------------:|:------------------|
| Mixtral | 46.7B | 🔵GEMM | 1 | 32 | 32 | 149.742 | 93.406 | 25.28 GB (53.44%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 64 | 64 | 1489.64 | 93.184 | 25.32 GB (53.53%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 128 | 128 | 2082.95 | 92.9444 | 25.33 GB (53.55%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 256 | 256 | 2428.59 | 91.5187 | 25.35 GB (53.59%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 512 | 512 | 2633.11 | 89.1457 | 25.39 GB (53.67%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 1024 | 1024 | 2598.95 | 84.6753 | 25.75 GB (54.44%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 2048 | 2048 | 2446.15 | 77.0516 | 27.98 GB (59.15%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 4096 | 4096 | 1985.78 | 77.5689 | 34.65 GB (73.26%) | CPU CPU: INTEL(R) XEON(R) PLATINUM 8592+ with 8-channel 4800MT/s memory. Command: python examples/benchmark.py --model_path <hf_model> --batch_size 1 | Model | Size | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (RAM) |
|--------:|------:|-----------:|-------------:|-----------------:|----------------:|---------------:|:------------------|
| Mixtral | 7B | 1 | 64 | 64 | 389.24 | 16.01 | 5.59 GB (0.02%) |
| Mixtral | 7B | 1 | 2048 | 2048 | 1412 | 17.76 | 6.29 GB (0.03%) |
| Vicuna | 7B | 1 | 64 | 64 | 346 | 18.13 | 8.18 GB (0.03%) |
| Vicuna | 7B | 1 | 2048 | 2048 | 1023.4 | 18.18 | 8.80 GB (0.04%) |
| LLaMA2 | 13B | 1 | 64 | 64 | 160.24 | 9.87 | 14.65 GB (0.06%) |
| LLaMA2 | 13B | 1 | 2048 | 2048 | 592.35 | 9.93 | 16.87 GB (0.07%) |
| Mosaicml | 7B | 1 | 64 | 64 | 433.17 | 18.79 | 4.60 GB (0.02%) |
| Mosaicml | 7B | 1 | 2048 | 2048 | 404.25 | 19.91 | 4.75 GB (0.02%) |
| Falcon | 7B | 1 | 64 | 64 | 303.16 | 14.41 | 5.18 GB (0.02%) |
| Falcon | 7B | 1 | 2048 | 2048 | 634.57 | 15.55 | 5.80 GB (0.02%) |
| CodeLlama | 34B | 1 | 64 | 64 | 153.73 | 4.23 | 29.00 GB (0.12%) |
| CodeLlama | 34B | 1 | 2048 | 2048 | 274.25 | 4.38 | 35.21 GB (0.15%) |
| Deepseek-coder | 33B | 1 | 64 | 64 | 83.08 | 4.07 | 22.16 GB (0.09%) |
| Deepseek-coder | 33B | 1 | 2048 | 2048 | 296.04 | 4.33 | 37.05 GB | Reference If you find AWQ useful or relevant to your research, you can cite their paper : @article{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
journal={arXiv},
year={2023}
};AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation:;[] | casper-hansen/AutoAWQ |
pen4uin/java-memshell-generator;English | 中文 Java Memshell Generator A highly customizable Java memory-shell (memshell) generation tool [!WARNING]
This tool is intended for security research and learning only. Users bear all legal and related responsibility arising from its use; make sure your actions comply with local laws and regulations. The author assumes no liability. If you do not accept these terms, do not use this tool. Features | Middleware | Framework | Tool (tested version) | Memshell type | Output format | Auxiliary modules |
|-----------|---------------|------------------------------------------------------------------|---------------|------------|---------|
| Tomcat | SpringMVC | AntSword (2.1.15) | Listener | BASE64 | Exploit-specific wrappers |
| Resin | SpringWebFlux | Behinder (4.0.7) | Filter | BCEL | Expression-statement wrappers |
| WebLogic | | Godzilla (4.0.1) | Interceptor | BIGINTEGER | |
| Jetty | | Neo-reGeorg (5.1.0) | HandlerMethod | CLASS | |
| WebSphere | | Suo5 (0.9.0) | | JAR | |
| Undertow | | Custom | | JAR_AGENT | |
| GlassFish | | | | JS | |
| | | | | JSP | | Build maven shell
mvn package assembly:single jmg-gui shell
java -jar ./jmg-gui/target/jmg-gui-1.0.8-jar-with-dependencies.jar Documentation jMG v1.0.8 jMG v1.0.6 jMG v1.0.5 jMG v1.0.4 Credits https://github.com/c0ny1 https://github.com/whwlsfb https://github.com/feihong-cs/memShell https://github.com/su18/MemoryShell https://github.com/BeichenDream/GodzillaMemoryShellProject License Licensed under the MIT License;A highly customizable Java memory-shell generation tool.;memshell,webshell,payload,rce,java | pen4uin/java-memshell-generator
uiverse-io/galaxy;Galaxy Welcome to the galaxy , a collection of over 3000 unique UI elements from Uiverse.io . Every single UI element you find in here is sourced from our main platform, Uiverse.io . As designers and enthusiasts share their creations on our website, we automatically curate and upload them here for ease of access and implementation. Browsing the Collection While this repository provides an archive of all the UI elements, the best way to browse is on Uiverse.io . There, you can search and interact with all the elements while having a visual overview. Contribution Interested in growing this universe of UI components? Contribution is simple: Create & Submit : Craft your unique UI element and submit it to Uiverse.io . Approval : Once your submission is reviewed and approved on our platform, it will automatically be uploaded to this repo. Please note : Direct contributions or pull requests to this repository will be ignored. The galaxy is exclusively managed through the Uiverse.io platform to ensure consistency and quality. Licensing and Attribution All UI elements in this repository are available under the MIT License , granting you the freedom to use, modify, and distribute them as you please. However, while not mandatory, we deeply value and appreciate attribution. When implementing a UI element from this collection, kindly consider giving credit to both the original creator and Uiverse.io . This simple act promotes the ethos of sharing, acknowledges the designer's effort, and celebrates the community spirit.;The largest Open-Source UI Library! Community-made and free to use. Made with either CSS or Tailwind.;community,css,tailwind,ui | uiverse-io/galaxy |
timeplus-io/proton;A fast and lightweight streaming SQL engine, 🚀 powered by ClickHouse 📄 Documentation 🚀 Live Demo 🌎 Timeplus Why Use Timeplus Proton · Demo Video · Deployment · What's Next · Integrations · Contributing · Need help? Timeplus Proton is a streaming SQL engine, a fast and lightweight alternative to ksqlDB or Apache Flink, 🚀 powered by ClickHouse. It enables developers to solve streaming data processing, routing and analytics challenges from Apache Kafka, Redpanda and more sources, and send aggregated data to the downstream systems. Timeplus Proton is the core engine of Timeplus Enterprise , which is a cloud native streaming analytics platform. 💪 Why use Timeplus Proton? Apache Flink or ksqlDB alternative. Timeplus Proton provides powerful streaming SQL functionalities, such as streaming ETL, tumble/hop/session windows, watermarks, materialized views, CDC and data revision processing, etc. Fast. Timeplus Proton is written in C++, with optimized performance through SIMD. For example , on an Apple MacBookPro with M2 Max, Timeplus Proton can deliver 90 million EPS, 4 millisecond end-to-end latency, and high cardinality aggregation with 1 million unique keys. Lightweight. Timeplus Proton is a single binary (\<500MB). No JVM or any other dependencies. You can also run it with Docker, or on an AWS t2.nano instance (1 vCPU and 0.5 GiB memory). Powered by the fast, resource efficient and mature ClickHouse . Timeplus Proton extends the historical data, storage, and computing functionality of ClickHouse with stream processing. Thousands of SQL functions are available in Timeplus Proton. Billions of rows are queried in milliseconds. Best streaming SQL engine for Kafka or Redpanda . Query the live data in Kafka or other compatible streaming data platforms, with external streams . See our architecture doc for technical details and our FAQ for more information. 🎬 Demo Video 2-minute short video👇. Check out the full video at YouTube . https://github.com/timeplus-io/proton/assets/5076438/8ceca355-d992-4798-b861-1e0334fc4438 ⚡ Deployment A single binary: shell
curl https://install.timeplus.com/oss | sh Docker: bash
docker run -d --pull always --name proton ghcr.io/timeplus-io/proton:latest In case you cannot access ghcr, you can pull the image from public.ecr.aws/timeplus/proton Docker Compose: The Docker Compose stack demonstrates how to read/write data in Kafka/Redpanda with external streams. Timeplus Cloud: One step to try Timeplus Proton in Timeplus Cloud 🔎 Usage You can start the server via proton server and start a new terminal window with proton client to start the SQL shell. From proton client , run the following SQL to create a stream of random data: sql
-- Create a stream with random data
CREATE RANDOM STREAM devices(
device string default 'device'||to_string(rand()%4),
temperature float default rand()%1000/10) sql
-- Run the streaming SQL
SELECT device, count(*), min(temperature), max(temperature)
FROM devices GROUP BY device You should see data like the following: ┌─device──┬─count()─┬─min(temperature)─┬─max(temperature)─┐
│ device0 │ 2256 │ 0 │ 99.6 │
│ device1 │ 2260 │ 0.1 │ 99.7 │
│ device3 │ 2259 │ 0.3 │ 99.9 │
│ device2 │ 2225 │ 0.2 │ 99.8 │
└─────────┴─────────┴──────────────────┴──────────────────┘ ⏩ What's next? To see more examples of using Timeplus Proton, check out the examples folder. To access more features, such as sources, sinks, dashboards, alerts, and data lineage, create a workspace on Timeplus Cloud or try Timeplus Enterprise locally. What features are available with Timeplus Proton versus Timeplus Enterprise? | | Timeplus Proton | Timeplus Enterprise |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Deployment | Single-node Docker image Single binary on Mac/Linux | Single node Cluster Kubernetes-based “bring your own cloud” (BYOC) Fully-managed cloud service with SOC2 |
| Data sources | Random streams External streams to Apache Kafka, Confluent Cloud, Redpanda Streaming ingestion via REST API (compact mode only) | Everything in Timeplus Proton WebSocket and HTTP Stream Apache Pulsar Ably CSV upload Streaming ingestion via REST API (with API key and flexible modes) |
| Data destinations (sinks) | External streams to Apache Kafka, Confluent Cloud, Redpanda | Everything in Timeplus Proton Apache Pulsar Slack Webhook Timeplus stream |
| Support | Community support from GitHub and Slack | Enterprise support via email, Slack, and Zoom, with a SLA | 🧩 Integrations The following drivers are available: https://github.com/timeplus-io/proton-java-driver JDBC and other Java clients https://github.com/timeplus-io/proton-go-driver https://github.com/timeplus-io/proton-python-driver Integrations with other systems: ClickHouse https://docs.timeplus.com/proton-clickhouse-external-table Grafana https://github.com/timeplus-io/proton-grafana-source Metabase https://github.com/timeplus-io/metabase-proton-driver Pulse UI https://github.com/timeplus-io/pulseui/tree/proton Homebrew https://github.com/timeplus-io/homebrew-timeplus dbt https://github.com/timeplus-io/dbt-proton Documentation We publish full documentation for Timeplus Proton at docs.timeplus.com alongside documentation for Timeplus Enterprise. We also have a FAQ for detailing how we chose Apache License 2.0, how Timeplus Proton is related to ClickHouse, and more. Contributing We welcome your contributions! If you are looking for issues to work on, try looking at the issue list . Please see the wiki for more details, and BUILD.md to compile Timeplus Proton in different platforms. Need help? Please use GitHub Discussions to share your feedbacks or questions for Timeplus Proton. For filing bugs, suggesting improvements, or requesting new features, open GitHub Issues . To connect with Timeplus engineers or inquire about Timeplus Enterprise, join our Timeplus Community Slack . Licensing Proton uses Apache License 2.0. See details in the LICENSE .;A streaming SQL engine, a fast and lightweight alternative to ksqlDB and Apache Flink, 🚀 powered by ClickHouse.;clickhouse,kakfa,sql,redpanda,single-binary,flink-alternative,high-performance,stream-processing,analytics,ksqldb-alternative | timeplus-io/proton |
VisActor/VTable;VTable VTable is not just a high-performance multidimensional data analysis table, but also a grid artist that creates art between rows and columns. Introduction • demo • Tutorial • API • ![](https://github.com/visactor/vtable/actions/workflows/bug-server.yml/badge.svg)
![](https://github.com/visactor/vtable/actions/workflows/unit-test.yml/badge.svg)
[![npm Version](https://img.shields.io/npm/v/@visactor/vtable.svg)](https://www.npmjs.com/package/@visactor/vtable)
[![npm Download](https://img.shields.io/npm/dm/@visactor/vtable.svg)](https://www.npmjs.com/package/@visactor/vtable)
[![license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/visactor/vtable/blob/main/LICENSE) English| [简体中文](./README.zh-CN.md) (video) Introduction VTable is based on visual rendering engine VRender . The core capabilities are as follows: Extreme performance: Supports fast computation and rendering of millions of data points. Multidimensional analysis: Automatically analyzes and presents multidimensional data. Strong expressiveness: Provides flexible and powerful graphic capabilities, seamlessly integrating with charts of VChart . Repository Introduction This repository includes the following packages: packages/vtable: The core code repository of VTable docs: Include VTable site tutorials, demos,apis and options, and also contains all Chinese and English documents. Usage Installation npm package ```bash
// npm
npm install @visactor/vtable // yarn
yarn add @visactor/vtable
``` Quick Start ```javascript
// this demo you can run on codesanbox https://codesandbox.io/s/vtable-simple-demo-g8q738
import * as VTable from '@visactor/vtable'; const columns = [
{
field: 'Order ID',
caption: 'Order ID'
},
{
field: 'Customer ID',
caption: 'Customer ID'
},
{
field: 'Product Name',
caption: 'Product Name'
},
{
field: 'Sales',
caption: 'Sales'
},
{
field: 'Profit',
caption: 'Profit'
}
]; const option = {
container: document.getElementById(CONTAINER_ID),
records: [
{
'Order ID': 'CA-2018-156720',
'Customer ID': 'JM-15580',
'Product Name': 'Bagged Rubber Bands',
Sales: '3.024',
Profit: '-0.605'
},
{
'Order ID': 'CA-2018-115427',
'Customer ID': 'EB-13975',
'Product Name': 'GBC Binding covers',
Sales: '20.72',
Profit: '6.475'
}
// ...
],
columns
};
const tableInstance = new VTable.ListTable(option);
``` More demos and detailed tutorials ⌨️ Development First of all, please install @microsoft/rush bash
$ npm i --global @microsoft/rush Then clone locally: ```bash
# clone
$ git clone git@github.com:VisActor/VTable.git
$ cd VTable
# install dependencies
$ rush update
# start vtable demo
$ cd packages/vtable
# execute in file path: ./packages/vtable
$ rushx demo
# start site development server, execute in file path: ./
$ rush docs
# after executing git commit, please run the following command to update the change log. Please execute in file path: ./
$ rush change-all
``` 📖 Documents After installation & clone & update, run docs to preview VTable documents locally. ```bash start vtable document server. execute in file path: ./ $ rush docs
``` 🔗 Related Links Official website Usage Trend 💫 Ecosystem | Project | Description |
| ---------------------------------------------------------------------------- | ------------------------- |
| React-VTable | VTable in React component | ⭐️ Star History 🤝 Contribution If you would like to contribute, please read the Code of Conduct and Guide first. Small streams converge to make great rivers and seas! License MIT License;VTable is not just a high-performance multidimensional data analysis table, but also a grid artist that creates art between rows and columns.;datagrid,grid,javescript,list-table,pivot-chart,pivot-grid,pivot-tables,table,visualization,data-analysis | VisActor/VTable
whitechi73/OpenShamrock;![][banner]
[![][actions]][actions-link]
[![][releases]][releases-link]
[![][downloads]][releases-link]
![][onebot-11]
![][onebot-12]
[![][license]](LICENSE)
[Download][download-link] | [Deploy][deploy-link] | [API][api-link] | [Docs][docs-link] Introduction ☘ A QQ bot framework implementing the Kritor standard, based on Lsposed (Non-Riru)! This project is provided for learning and exchange purposes only; please delete it within 24 hours. Its goal is to study the use of the Xposed and LSPosed frameworks and knowledge related to Epic framework development. Riru may lead to account bans, so please minimize its use.
If this violates any law, please contact us for removal.
Do not promote, publicize, or repost this project on any platform, and do not maliciously modify enterprise installation packages in ways that cause losses to the companies concerned; violations will be pursued. Compatibility | Migration | Alternatives Notes One-click migration: this project is developed against the go-cqhttp documentation. Parallel deployment: it can be deployed on multiple platforms. Related projects Lagrange.Core A protocol implementation for NTQQ OpenShamrock A bot framework implementing the OneBot standard based on Xposed (👈 you are here) Chronocat An Electron-based, modular Satori framework Permission statement If a permission not declared here appears, be wary that Shamrock may have been modified or injected with malicious code Network access permission: the Shamrock process needs it to use the HTTP API for some operations Hooking the system framework: to keep the service alive while the screen is off, Shamrock needs to hook the system framework Starting Activities in the background: needed to wake QQ automatically Audio decoding: see the "voice support" section of the documentation Contributing Am I cute? Welcome! This is a big place with endless possibilities, mainly because of you! Acknowledgements Thanks to JetBrains for the open-source development license; JetBrains supports non-commercial open-source projects by providing core project contributors with a free set of first-class developer tools. License The source code of this project is released under the GPL-3.0 license
Shamrock - OneBot standard QQ robot framework based on Xposed implementation
Copyright (C) 2023 ~ 2024 Shamrock Team
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>. Contributors;A Bot Framework based on Lsposed with OneBot11;onebot,qqbot,qqnt,qqprotocol,lsposed,tencent | whitechi73/OpenShamrock
KoljaB/RealtimeTTS;RealtimeTTS Easy to use, low-latency text-to-speech library for realtime applications About the Project RealtimeTTS is a state-of-the-art text-to-speech (TTS) library designed for real-time applications. It stands out in its ability to quickly convert text streams into high-quality auditory output with minimal latency. Hint: Check out Linguflex , the original project from which RealtimeTTS is spun off. It lets you control your environment by speaking and is one of the most capable and sophisticated open-source assistants currently available. Note: If you run into a 'General synthesis error: isin() received an invalid combination of arguments' error, this is due to a newer transformers library introducing an incompatibility with Coqui TTS (see here ). This should not occur with the latest version, but if it does, please downgrade to an older transformers version: pip install transformers==4.38.2 . https://github.com/KoljaB/RealtimeTTS/assets/7604638/87dcd9a5-3a4e-4f57-be45-837fc63237e7 Key Features Low Latency almost instantaneous text-to-speech conversion compatible with LLM outputs High-Quality Audio generates clear and natural-sounding speech Multiple TTS Engine Support supports OpenAI TTS, Elevenlabs, Azure Speech Services, Coqui TTS, gTTS and System TTS Multilingual Robust and Reliable : ensures continuous operation with a fallback mechanism switches to alternative engines in case of disruptions guaranteeing consistent performance and reliability, which is vital for critical and professional use cases Hint : check out RealtimeSTT , the input counterpart of this library, for speech-to-text capabilities. Together, they form a powerful realtime audio wrapper around large language models. FAQ Check the FAQ page for answers to many questions about the usage of RealtimeTTS. Updates Latest Version: v0.4.1
- switched coquiengine to Idiap Research Institute 's maintained fork of coqui tts (thank you)
- added emotions to Azure engine
- added speed to GTTS engine
- bugfix for ElevenlabsEngine get_voices method See release history . Tech Stack This library uses: Text-to-Speech Engines OpenAIEngine : OpenAI's TTS system offers 6 natural sounding voices. CoquiEngine : High quality local neural TTS. AzureEngine : Microsoft's leading TTS technology. 500000 chars free per month. ElevenlabsEngine : Offer the best sounding voices available. GTTSEngine : Free to use and doesn't require setting up a local GPU. SystemEngine : Native engine for quick setup. Sentence Boundary Detection NLTK Sentence Tokenizer : Uses the Natural Language Toolkit's sentence tokenizer for precise and efficient sentence segmentation. By using "industry standard" components RealtimeTTS offers a reliable, high-end technological foundation for developing advanced voice solutions. Installation Simple installation: bash
pip install RealtimeTTS This will install all the necessary dependencies, including a CPU support only version of PyTorch (needed for Coqui engine) Installation into virtual environment with GPU support: bash
python -m venv env_realtimetts
env_realtimetts\Scripts\activate.bat
python.exe -m pip install --upgrade pip
pip install RealtimeTTS
pip install torch==2.3.0+cu118 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118 More information about CUDA installation . Engine Requirements Different engines supported by RealtimeTTS have unique requirements. Ensure you fulfill these requirements based on the engine you choose. SystemEngine The SystemEngine works out of the box using your system's built-in TTS capabilities. No additional setup is needed. GTTSEngine The GTTSEngine works out of the box using Google Translate's text-to-speech API. No additional setup is needed. OpenAIEngine To use the OpenAIEngine :
- set environment variable OPENAI_API_KEY
- install ffmpeg (see CUDA installation point 3) AzureEngine To use the AzureEngine , you will need:
- Microsoft Azure Text-to-Speech API key (provided via AzureEngine constructor parameter "speech_key" or in the environment variable AZURE_SPEECH_KEY)
- Microsoft Azure service region. Make sure you have these credentials available and correctly configured when initializing the AzureEngine . ElevenlabsEngine For the ElevenlabsEngine , you need:
- Elevenlabs API key (provided via ElevenlabsEngine constructor parameter "api_key" or in the environment variable ELEVENLABS_API_KEY)
- mpv installed on your system (essential for streaming mpeg audio, Elevenlabs only delivers mpeg). 🔹 Installing mpv : - macOS : bash
brew install mpv Linux and Windows : Visit mpv.io for installation instructions. CoquiEngine Delivers high quality, local, neural TTS with voice-cloning. Downloads a neural TTS model first. In most cases it is fast enough for realtime use with GPU synthesis. Needs around 4-5 GB VRAM. To clone a voice, submit the filename of a wave file containing the source voice as the "voice" parameter to the CoquiEngine constructor. Voice cloning works best with a 22050 Hz mono 16-bit WAV file containing a short (~5-30 sec) sample. On most systems GPU support will be needed to run fast enough for realtime, otherwise you will experience stuttering. Quick Start Here's a basic usage example: ```python
from RealtimeTTS import TextToAudioStream, SystemEngine, AzureEngine, ElevenlabsEngine

engine = SystemEngine() # replace with your TTS engine
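# Any of the other engines can be swapped in here. For example (an
# illustrative sketch; "my_voice_sample.wav" is a placeholder file), the
# local CoquiEngine accepts a "voice" parameter pointing to a short WAV
# sample for voice cloning, as described in the CoquiEngine section above:
# from RealtimeTTS import CoquiEngine
# engine = CoquiEngine(voice="my_voice_sample.wav")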
stream = TextToAudioStream(engine)
stream.feed("Hello world! How are you today?")
stream.play_async()
``` Feed Text You can feed individual strings: python
stream.feed("Hello, this is a sentence.") Or you can feed generators and character iterators for real-time streaming: ```python
def write(prompt: str):
for chunk in openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content" : prompt}],
stream=True
):
if (text_chunk := chunk["choices"][0]["delta"].get("content")) is not None:
            yield text_chunk

text_stream = write("A three-sentence relaxing speech.")

stream.feed(text_stream)
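# Note: this generator uses the pre-1.0 openai API (openai.ChatCompletion),
# like most of the bundled tests. With openai>=1.0 the equivalent stream
# would come from OpenAI().chat.completions.create(..., stream=True),
# reading chunk.choices[0].delta.content instead (see openai_1.0_test.py).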
``` python
char_iterator = iter("Streaming this character by character.")
stream.feed(char_iterator) Playback Asynchronously: python
stream.play_async()
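# play_async() (and play()) also accept the tuning parameters documented
# further below, e.g. (illustrative values):
# stream.play_async(fast_sentence_fragment=True,
#                   buffer_threshold_seconds=2.0,
#                   minimum_sentence_length=3,
#                   log_synthesized_text=True)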
while stream.is_playing():
time.sleep(0.1) Synchronously: python
stream.play() Testing the Library The test subdirectory contains a set of scripts to help you evaluate and understand the capabilities of the RealtimeTTS library. Note that most of the tests still rely on the "old" OpenAI API (<1.0.0). Usage of the new OpenAI API is demonstrated in openai_1.0_test.py. simple_test.py Description : A "hello world" styled demonstration of the library's simplest usage. complex_test.py Description : A comprehensive demonstration showcasing most of the features provided by the library. coqui_test.py Description : Test of local coqui TTS engine. translator.py Dependencies : Run pip install openai realtimestt . Description : Real-time translations into six different languages. openai_voice_interface.py Dependencies : Run pip install openai realtimestt . Description : Wake word activated and voice based user interface to the OpenAI API. advanced_talk.py Dependencies : Run pip install openai keyboard realtimestt . Description : Choose TTS engine and voice before starting AI conversation. minimalistic_talkbot.py Dependencies : Run pip install openai realtimestt . Description : A basic talkbot in 20 lines of code. simple_llm_test.py Dependencies : Run pip install openai . Description : Simple demonstration how to integrate the library with large language models (LLMs). test_callbacks.py Dependencies : Run pip install openai . Description : Showcases the callbacks and lets you check the latency times in a real-world application environment. Pause, Resume & Stop Pause the audio stream: python
stream.pause() Resume a paused stream: python
stream.resume() Stop the stream immediately: python
stream.stop() Requirements Explained Python Version : Required : Python >= 3.9, < 3.12 Reason : The library depends on the GitHub library "TTS" from coqui, which requires Python versions in this range. requests : to send HTTP requests for API calls and voice list retrieval PyAudio : to create an output audio stream stream2sentence : to split the incoming text stream into sentences pyttsx3 : System text-to-speech conversion engine tqdm : to display progress bars in the command line pydub : to convert audio chunk formats azure-cognitiveservices-speech : Azure text-to-speech conversion engine elevenlabs : Elevenlabs text-to-speech conversion engine coqui-TTS : Coqui's XTTS text-to-speech library for high-quality local neural TTS Shoutout to Idiap Research Institute for maintaining a fork of coqui tts . openai : to interact with OpenAI's TTS API gtts : Google translate text-to-speech conversion Configuration Initialization Parameters for TextToAudioStream When you initialize the TextToAudioStream class, you have various options to customize its behavior. Here are the available parameters: engine (BaseEngine) Type : BaseEngine Required : Yes Description : The underlying engine responsible for text-to-audio synthesis. You must provide an instance of BaseEngine or its subclass to enable audio synthesis. on_text_stream_start (callable) Type : Callable function Required : No Description : This optional callback function is triggered when the text stream begins. Use it for any setup or logging you may need. on_text_stream_stop (callable) Type : Callable function Required : No Description : This optional callback function is activated when the text stream ends. You can use this for cleanup tasks or logging. on_audio_stream_start (callable) Type : Callable function Required : No Description : This optional callback function is invoked when the audio stream starts. Useful for UI updates or event logging. on_audio_stream_stop (callable) Type : Callable function Required : No Description : This optional callback function is called when the audio stream stops. Ideal for resource cleanup or post-processing tasks. on_character (callable) Type : Callable function Required : No Description : This optional callback function is called when a single character is processed. output_device_index (int) Type : Integer Required : No Default : None Description : Specifies the output device index to use. None uses the default device. tokenizer (string) Type : String Required : No Default : nltk Description : Tokenizer to use for sentence splitting (currently "nltk" and "stanza" are supported). language (string) Type : String Required : No Default : en Description : Language to use for sentence splitting. muted (bool) Type : Bool Required : No Default : False Description : Global muted parameter. If True, no pyAudio stream will be opened. Disables audio playback via local speakers (in case you want to synthesize to file or process audio chunks) and overrides the play parameters muted setting. level (int) Type : Integer Required : No Default : logging.WARNING Description : Sets the logging level for the internal logger. This can be any integer constant from Python's built-in logging module. Example Usage: python
engine = YourEngine() # Substitute with your engine
stream = TextToAudioStream(
engine=engine,
on_text_stream_start=my_text_start_func,
on_text_stream_stop=my_text_stop_func,
on_audio_stream_start=my_audio_start_func,
on_audio_stream_stop=my_audio_stop_func,
level=logging.INFO
) Methods play and play_async These methods are responsible for executing the text-to-audio synthesis and playing the audio stream. The difference is that play is a blocking function, while play_async runs in a separate thread, allowing other operations to proceed. fast_sentence_fragment (bool) Default : False Description : When set to True , the method will prioritize speed, generating and playing sentence fragments faster. This is useful for applications where latency matters. buffer_threshold_seconds (float) Default : 2.0 Description : Specifies the time in seconds for the buffering threshold, which impacts the smoothness and continuity of audio playback. How it Works : Before synthesizing a new sentence, the system checks if there is more audio material left in the buffer than the time specified by buffer_threshold_seconds . If so, it retrieves another sentence from the text generator, assuming that it can fetch and synthesize this new sentence within the time window provided by the remaining audio in the buffer. This process allows the text-to-speech engine to have more context for better synthesis, enhancing the user experience. A higher value ensures that there's more pre-buffered audio, reducing the likelihood of silence or gaps during playback. If you experience breaks or pauses, consider increasing this value. Hint : If you experience silence or breaks between sentences, consider raising this value to ensure smoother playback. minimum_sentence_length (int) Default : 3 Description : Sets the minimum character length to consider a string as a sentence to be synthesized. This affects how text chunks are processed and played. log_characters (bool) Default : False Description : Enable this to log the individual characters that are being processed for synthesis. log_synthesized_text (bool) Default : False Description : When enabled, logs the text chunks as they are synthesized into audio. Helpful for auditing and debugging. By understanding and setting these parameters and methods appropriately, you can tailor the TextToAudioStream to meet the specific needs of your application. CUDA installation These steps are recommended for those who require better performance and have a compatible NVIDIA GPU. Note : to check if your NVIDIA GPU supports CUDA, visit the official CUDA GPUs list . To use torch with support via CUDA please follow these steps: Note : newer pytorch installations may (unverified) not need Toolkit (and possibly cuDNN) installation anymore. Install NVIDIA CUDA Toolkit :
For example, to install Toolkit 11.8 please Visit NVIDIA CUDA Toolkit Archive . Select version 11. Download and install the software. Install NVIDIA cuDNN :
For example, to install cuDNN 8.7.0 for CUDA 11.x please Visit NVIDIA cuDNN Archive . Click on "Download cuDNN v8.7.0 (November 28th, 2022), for CUDA 11.x". Download and install the software. Install ffmpeg : You can download an installer for your OS from the ffmpeg Website . Or use a package manager: On Ubuntu or Debian : bash
sudo apt update && sudo apt install ffmpeg On Arch Linux : bash
sudo pacman -S ffmpeg On MacOS using Homebrew ( https://brew.sh/ ): bash
brew install ffmpeg On Windows using Chocolatey ( https://chocolatey.org/ ): bash
choco install ffmpeg On Windows using Scoop ( https://scoop.sh/ ): bash
scoop install ffmpeg Install PyTorch with CUDA support : bash
pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118 Fix to resolve compatibility issues :
If you run into library compatibility issues, try pinning these libraries to fixed versions: bash
pip install networkx==2.8.8
pip install typing_extensions==4.8.0
pip install fsspec==2023.6.0
pip install imageio==2.31.6
pip install numpy==1.24.3
pip install requests==2.31.0 💖 Acknowledgements Huge shoutout to the team behind Coqui AI - especially the brilliant Eren Gölge - for being the first to give us local high quality synthesis with realtime speed and even a clonable voice! Thank you Pierre Nicolas Durette for giving us a free TTS to use without a GPU, using Google Translate with his gtts python library. Contribution Contributions are always welcome (e.g. PR to add a new engine). License Information ❗ Important Note: While the source of this library is open-source, the usage of many of the engines it depends on is not: External engine providers often restrict commercial use in their free plans. This means the engines can be used for noncommercial projects, but commercial usage requires a paid plan. Engine Licenses Summary: CoquiEngine License : Open-source only for noncommercial projects. Commercial Use : Requires a paid plan. Details : CoquiEngine License ElevenlabsEngine License : Open-source only for noncommercial projects. Commercial Use : Available with every paid plan. Details : ElevenlabsEngine License AzureEngine License : Open-source only for noncommercial projects. Commercial Use : Available from the standard tier upwards. Details : AzureEngine License SystemEngine License : Mozilla Public License 2.0 and GNU Lesser General Public License (LGPL) version 3.0. Commercial Use : Allowed under this license. Details : SystemEngine License GTTSEngine License : MIT license Commercial Use : It's under the MIT license, so it should be possible in theory. Since it utilizes undocumented Google Translate speech functionality, some caution might be necessary. Details : GTTS MIT License OpenAIEngine License : please read OpenAI Terms of Use Disclaimer : This is a summary of the licenses as understood at the time of writing. It is not legal advice. Please read and respect the licenses of the different engine providers yourself if you plan to use them in a project. Author Kolja Beigel Email: kolja.beigel@web.de GitHub;Converts text to speech in realtime;python,realtime,speech-synthesis,text-to-speech | KoljaB/RealtimeTTS
CADmium-Co/CADmium;CADmium This project aims to create a new CAD program from scratch. It is small, it runs in a web browser, and the source code is available for free here on Github. Legacy CAD programs have taken many thousands of years of collective engineering time to get where they are so this program will never be able to compete on breadth of features. But CADmium is intended to capture 80% of the most common CAD use cases while doing less than 10% of the work. For now we are targeting the home hobbyist who just wants to design a widget for their 3D printer, not a company that wants to design a car or airplane, although that will come later. If you're looking for: A simple, modern, parametric CAD UI that runs in a browser That can export solids as .step, .obj, or .cadmium (a json-based CAD format that this project is inventing) That can export sketches as .svg or .dxf That works without an internet connection Then this project may be for you! Status : Early prototype. This tool is not yet an MVP, but is being developed in the open. ~~Please do not share this to HN or Reddit or things like that.~~ ha, well I guess that ship has sailed ! Overall Plan Demos: We are currently racing toward our first demo release, V0.0.1 . This is a good first exercise for us to decide on build and release processes. After that we will do a few more demo (V0.0.*) releases, aggregating features until it feels pretty usable. Alpha: When it feels like we've reached an MVP that people might actually want to use, it's time to release an Alpha version (V0.1.0) and actively solicit feedback from users. We'll use that feedback to make more improvements, re-inventing things if necessary to achieve a great workflow. Beyond that, we'll see! Technology The boundary representation engine under the hood is truck , which is written in rust and is not dependent on any legacy b-rep engine. Leveraging truck, we wrote a small rust library called cadmium which provides structs for projects, workspaces, sketches, extrusions, and constraints. Our goal is that this rust library provides all the same functionality as the UI for anyone who prefers code-first CAD. This library is able to save and load projects to disk as json. We have also built a set of javascript bindings so that the whole thing can be compiled to wasm and run in a browser. The UI is built with SvelteKit and Tailwind . It is hosted with Github Pages. We use three.js for rendering, which in this case uses WebGL under the hood. We use Threlte to manage the scene graph declaratively. Native builds use Tauri , which is a Rust-based wrapper around OS-specific native webviews that allows us to build a native app from the same codebase. License This software is offered under the Elastic License 2.0 . In summary, you can do whatever you like with this software except offer it as a service to third parties. Running The Code If you're just trying to kick the tires, click here to view the live web demo. To build locally using pnpm workspace & turbo: shell
git clone https://github.com/Cadmium-Co/CADmium.git
cd CADmium
pnpm install
pnpm dev Native Builds ```shell
# Development
pnpm tauri dev
# Generate binaries and installers
pnpm tauri build
``` Tauri can generate icons for the native build with the following command: shell
pnpm tauri icon applications/web/public/cadmium_logo_min.svg Tooling setup pnpm We use pnpm to manage the monorepo. Please follow the instructions here to install: https://pnpm.io/installation#using-a-standalone-script If you're new to node you can use pnpm to manage nodejs: ```shell https://pnpm.io/cli/env#use pnpm env use --global 20
``` rust First install rust using rustup: https://rustup.rs Then install wasm-pack shell
cargo install wasm-pack Running Tests shell
pnpm test Playwright is used for e2e testing. You may be prompted with a command to install it. For manjaro/archlinux folks it may report missing dependencies. On manjaro the missing dependencies are solved thanks to this comment : shell
yay -S aur/enchant1.6 aur/icu66 aur/libwebp052 Watch vitest unit tests only: shell
cd applications/web
pnpm test:unit -w rust To build and run the Rust tests: shell
cargo test rust examples Simple examples using the rust code can be found in packages/cadmium/examples Run a simple rust example with: cargo run --example project_simple_extrusion This will produce example.obj and example.step output files; the .step file can be examined in a CAD viewer. git blame To ignore commits used purely for formatting changes, to preserve correct authorship, set your local git config: shell
git config blame.ignoreRevsFile .git-blame-ignore-revs Contributing We are actively seeking contributors! Please join the Discord and come help out! Most especially, we need help in the following areas: Design: The tool must look and feel good and we are not designers. We would love contributions in the form of: Advice, mockups, or tailwindcss examples of how to make different elements look and behave better In particular, help picking a color palette that works well and is unique Help figuring out how to implement dark mode Rust: This is our first project in Rust. We need help from experienced Rustaceans to: Figure out how to better lay out the rust code Point out any glaring issues with how I'm using the language (We've thus far completely avoided Lifetimes, Traits, Rc, RefCell, etc and that may be hampering things) Svelte: This is our first project using Svelte. We'd love an experienced set of eyes to: Look over the basic structure and tell us if we're making any big mistakes Help us migrate to Svelte 5 when it comes out If you feel like you would be willing and able to help, please join our discord ! License FAQs What license do you use? We use the Elastic License v2.0 . What are the terms of that license? You can do anything you want with the code except offer it as a service to third parties. Why do you use that license instead of MIT, GPL, etc? Because we don't want Autodesk, Siemens, Dassault, or PTC to clone our code, change the logo, and start offering it as one of their products. Can I host a CADmium instance for my family? Yes. Can I host a CADmium instance for my company for internal use? Yes. Can I host a CADmium instance for my company for my clients as a free service? No. Can I host a CADmium instance for my company for my clients as a paid service? No. If you choose at some point to stop developing CADmium, what will/can happen to the project? We would convert the license to MIT, GPL, or something similar. If I fork the project and change it (rebranding or adding a new feature) can I host it myself? Yes, so long as your fork is only used by you or your company. You are not permitted to host your fork as a service for third parties. Do I have to publish my fork? No. 
Immediate TODOs for V0.0.1 release (The Demo) Github project for tracking progress is here [ ] Sketching [ ] Implement a standalone first-order 2D constraint solver [ ] Integrate that new solver into sketch.rs [ ] Ability to create a sketch on the face of a solid [ ] Ability to create and modify constraints in the UI [ ] Entities: line, rect, circle, arc, 2D fillet [ ] Constraints: horiz, vert, parallel, perp, length, radius, coincident, maybe a few others [ ] Extrusion [ ] Configure an extrusion to create new solid or subtract from existing solid [ ] Modes: New, Add, Cut [ ] Control: depth, offset [ ] Ability to extrude multiple faces (from one sketch) at a time while not extruding every face in the sketch [ ] Revolution [ ] Same as Extrusions [ ] Boolean [ ] Union, Intersection, Subtraction [ ] Project [x] Ability to rename the project [ ] Ability to delete steps [ ] bind ctrl + s to .cadmium export, and ctrl + o to .cadmium import [ ] Units [ ] Make it clear that the whole file uses millimeter units [ ] Files [ ] Save and load CADmium files [ ] Export as STEP, OBJ, STL, SVG, DXF [ ] Marketing [ ] Youtube video demonstrating how to make: [ ] A simple cube [ ] A plate with screw holes [ ] Something pretty complex like this;A CAD program that runs in the browser;cad,mechanical-design,mechanical-engineering | CADmium-Co/CADmium |
OpenMotionLab/MotionGPT;Official repo for MotionGPT MotionGPT: Human Motion as a Foreign Language Project Page • Arxiv Paper • HuggingFace Demo • FAQ • Citation | Teaser Video | Demo Video |
| :--------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------: |
| | | 🏃 Intro MotionGPT MotionGPT is a unified and user-friendly motion-language model to learn the semantic coupling of two modalities and generate high-quality motions and text descriptions on multiple motion tasks . Technical details Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multi-modal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that can enhance the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ the discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this “motion vocabulary”, we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performances on multiple motion tasks including text-driven motion generation, motion captioning, motion prediction, and motion in-between. 🚩 News [2023/09/22] MotionGPT got accepted by NeurIPS 2023! [2023/09/11] Release the huggingface demo 🔥🔥🔥 [2023/09/09] Release the training of MotionGPT V1.0 🔥🔥🔥 [2023/06/20] Upload paper and init project ⚡ Quick Start Setup and download ### 1. Conda environment
```
conda create python=3.10 --name mgpt
conda activate mgpt
```
Install the packages in `requirements.txt` and install [PyTorch 2.0](https://pytorch.org/)
```
pip install -r requirements.txt
python -m spacy download en_core_web_sm
```
We test our code on Python 3.10.6 and PyTorch 2.0.0.
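As a quick, optional sanity check of the environment, a minimal script like the one below confirms the interpreter, PyTorch, and spaCy model setup (an illustrative sketch; only the version numbers above come from this README):
```python
# Minimal environment check for the setup steps above.
import sys
import torch
import spacy

print("Python:", sys.version.split()[0])     # expected: 3.10.x
print("PyTorch:", torch.__version__)         # expected: 2.0.x
print("CUDA available:", torch.cuda.is_available())
nlp = spacy.load("en_core_web_sm")           # verifies the spaCy model downloaded above
print("spaCy model loaded:", nlp.meta["lang"], nlp.meta["name"])
```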
### 2. Dependencies
Run the script to download dependencies materials:
```
bash prepare/download_smpl_model.sh
bash prepare/prepare_t5.sh
```
For Text to Motion Evaluation
```
bash prepare/download_t2m_evaluators.sh
```
### 3. Pre-train model
Run the script to download the pre-train model
```
bash prepare/download_pretrained_models.sh
```
### 4. (Optional) Download manually
Visit [the Google Drive](https://drive.google.com/drive/folders/10s5HXSFqd6UTOkW2OMNc27KGmMLkVc2L) to download the dependencies listed above.
Visit [the Hugging Face](https://huggingface.co/OpenMotionLab) to download the pretrained models. ▶️ Demo Webui Run the following script to launch webui, then visit [0.0.0.0:8888](http://0.0.0.0:8888)
```
python app.py
``` Batch demo We support txt file input, the output motions are npy files and output texts are txt files. Please check the `configs/assets.yaml` for path config, TEST.FOLDER as output folder.
Then, run the following script:
```
python demo.py --cfg ./configs/config_h3d_stage3.yaml --example ./demos/t2m.txt
```
Some parameters:
- `--example=./demo/t2m.txt`: input file as text prompts
- `--task=t2m`: evaluation tasks including t2m, m2t, pred, inbetween
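If you want to create your own input file, a tiny sketch such as the following can write one (the file name and prompts are made up, and the one-prompt-per-line layout is an assumption modeled on the provided `demos/t2m.txt`):
```python
# Write a small prompt file for the batch demo.
# Assumption: one text prompt per line; the file name below is arbitrary.
prompts = [
    "a person walks forward and then turns around",
    "a person waves with the right hand",
]
with open("demos/my_prompts.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(prompts) + "\n")
```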
The outputs:
- `npy file`: the generated motions with the shape of (nframe, 22, 3)
- `txt file`: the input text prompt or text output 💻 Train your own models Training guidance ### 1. Prepare the datasets
1. Please refer to [HumanML3D](https://github.com/EricGuo5513/HumanML3D) for text-to-motion dataset setup.
2. Put the instructions data in `prepare/instructions` to the same folder of HumanML3D dataset.
### 2.1. Ready to train motion tokenizer model
Please first check the parameters in `configs/config_h3d_stage1.yaml`, e.g. `NAME`,`DEBUG`.
Then, run the following command:
```
python -m train --cfg configs/config_h3d_stage1.yaml --nodebug
```
### 2.2. Ready to pretrain MotionGPT model
Please update the parameters in `configs/config_h3d_stage2.yaml`, e.g. `NAME`,`DEBUG`,`PRETRAINED_VAE` (change to your `latest ckpt model path` in previous step)
Then, run the following command to store all motion tokens of training set for convenience
```
python -m scripts.get_motion_code --cfg configs/config_h3d_stage2.yaml
```
After that, run the following command:
```
python -m train --cfg configs/config_h3d_stage2.yaml --nodebug
```
### 2.3. Ready to instruct-tuning MotionGPT model
Please update the parameters in `configs/config_h3d_stage3.yaml`, e.g. `NAME`,`DEBUG`,`PRETRAINED` (change to your `latest ckpt model path` in previous step)
Then, run the following command:
```
python -m train --cfg configs/config_h3d_stage3.yaml --nodebug
```
### 3. Evaluate the model
Please first put the trained model checkpoint path into `TEST.CHECKPOINT` in `configs/config_h3d_stage3.yaml`.
Then, run the following command:
```
python -m test --cfg configs/config_h3d_stage3.yaml --task t2m
```
Some parameters:
- `--task`: evaluation tasks including t2m(Text-to-Motion), m2t(Motion translation), pred(Motion prediction), inbetween(Motion inbetween)
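To run the evaluation for all four tasks in one go, a small driver script along these lines can help (a sketch only; the command and config path are taken from this section, the loop itself is illustrative):
```python
# Evaluate every supported task in sequence.
import subprocess

for task in ["t2m", "m2t", "pred", "inbetween"]:
    print(f"=== Evaluating task: {task} ===")
    subprocess.run(
        ["python", "-m", "test", "--cfg", "configs/config_h3d_stage3.yaml", "--task", task],
        check=True,  # stop early if one evaluation fails
    )
```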
Due to a Python package conflict, the released implementation of the linguistic metrics in the motion translation task uses [nlg-metricverse](https://github.com/disi-unibo-nlp/nlg-metricverse), whose results may not be consistent with those computed by [nlg-eval](https://github.com/Maluuba/nlg-eval). We will fix this in the future. 👀 Visualization Render SMPL ### 1. Set up blender - WIP
Refer to [TEMOS-Rendering motions](https://github.com/Mathux/TEMOS) for blender setup, then install the following dependencies.
```
YOUR_BLENDER_PYTHON_PATH/python -m pip install -r prepare/requirements_render.txt
```
### 2. (Optional) Render rigged cylinders
Run the following command using blender:
```
YOUR_BLENDER_PATH/blender --background --python render.py -- --cfg=./configs/render.yaml --dir=YOUR_NPY_FOLDER --mode=video
```
### 2. Create SMPL meshes with:
```
python -m fit --dir YOUR_NPY_FOLDER --save_folder TEMP_PLY_FOLDER --cuda
```
This outputs:
- `mesh npy file`: the generated SMPL vertices with the shape of (nframe, 6893, 3)
- `ply files`: the ply mesh file for blender or meshlab
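Before rendering, it can be useful to sanity-check the fitted meshes. The sketch below only relies on the `(nframe, 6893, 3)` shape documented above; the file path is a placeholder:
```python
# Inspect one fitted mesh sequence produced by the step above.
import numpy as np

verts = np.load("TEMP_PLY_FOLDER/example_mesh.npy")  # placeholder path and file name
assert verts.ndim == 3 and verts.shape[1:] == (6893, 3), "expected SMPL vertices of shape (nframe, 6893, 3)"
print("frames:", verts.shape[0])
print("first-frame bounding box:", verts[0].min(axis=0), verts[0].max(axis=0))
```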
### 3. Render SMPL meshes
Run the following command to render SMPL using blender:
```
YOUR_BLENDER_PATH/blender --background --python render.py -- --cfg=./configs/render.yaml --dir=YOUR_NPY_FOLDER --mode=video
```
optional parameters:
- `--mode=video`: render mp4 video
- `--mode=sequence`: render the whole motion in a png image. ⚠️ FAQ Question-and-Answer ### The purpose and ability of MotionGPT The motivation of MotionGPT. **Answer:** We present MotionGPT **to address various human motion-related tasks within one single unified model**, by unifying motion modeling with language through a shared vocabulary. To train this unified model, we propose **an instructional training scheme under the protocols for multiple motion-language**, which further reveals the potential of Large Language Models (LLMs) in motion tasks beyond the success of language generation. However, it is non-trivial for this combination since it needs to model and generate two distinct modes from scratch. Contrary to the previous work leveraging CLIP to extract text embedding as motion generation conditions, like T2M-GPT, MotionGPT introduces **the motion-language pre-training on LLM** so it can leverage the strong language generation and zero-shot transfer abilities of pre-trained language models, as well as generates human language and motion in a unified model. Instruction tuning and zero-shot learning. **Answer:** We propose instruction tuning to **train a single MotionGPT across all motion-related tasks**, while task-specific tuning is to train and evaluate MotionGPTs on a single task. We employ these two training schemes to study the ability of MotionGPT across multi-tasks. As shown in this figure, we provide **zero-shot cases**. Benefitting from strong language models, MotionGPTs can understand unseen works in the text-to-motion training set, like "**scuttling**" and "**barriers**", and generate correct motions based on the meaning of sentences. However, it still struggles to generate **unseen motions**, like gymnastics, even if MotionGPTs understand the text inputs. In view of the recent success of LLMs, MotionGPT should pay attention to unifying current available datasets to exploit the scalable potential of language models when processing large-scale data besides increasing model size. **Answer:** We have faced this **limited dataset issue** while implementing MotionGPT and in our further research. It is a hard but valuable work to unify and collect a larger motion dataset. Fortunately, some researchers are working on this problem, as seen in recent work like [Motion-X](https://motion-x-dataset.github.io/) and other datasets, which hold promise for advancing large-scale motion models. We intend to further evaluate MotionGPT on these larger datasets once they become available. How well MotionGPT learns the relationship between motion and language? **Answer:** **Unlike** the previous motion generators using the **text encoder of CLIP** for conditions, please note that MotionGPTs leverage language models to learn the motion-language relationship, instead of relying on text features from CLIP. According to our zero-shot results (cf. **Fig. 12**) and performances on multi-tasks (cf. **Fig. 10**), MotionGPTs establish robust connections between simple/complex texts and simple motions in evaluations, but they fall short when it comes to complex-text to **complex motion translation**. ### More technical details Why choose T5, an encoder-decoder architecture, as the base model? How about a decoder-only model, like LLaMA? **Answer:** The **first language model that we used** to build MotionGPTs is **LLaMA-13B**. However, it shows insufficient performance and low training efficiency. 
We assume the reason is the limited dataset size compared to the large parameters and language data of LLaMA. We tried a smaller size decoder-only backbone **GPT2-Medium** and provide the results in **Tab. 15**. Then, we thus chose **T5-770M**, a small but common language model, as our final backbone, because many previous vision-language multimodal works, like **Unified-IO** and **BLIP**, have chosen T5, this encoder-decoder architecture. It shows a strong power to address multi-modal tasks. In addition, the decoder-only model has the advantage for self-supervised without pair data while we have paired data which this advance is greatly weakened. We are still working on collecting a large motion dataset for larger motion-language models. How to merge the text vocab and motion vocab in detail? concatenating them together? **Answer:** To ensure **a shared distribution between language and motion**, we initialize the motion tokens separately and concatenate them alongside the language tokens. This step ensures a balanced representation that encompasses both modalities. Besides the token embeddings are actively trained during the entirety of **stages 2 and 3**, ensuring a comprehensive fusion of language and motion knowledge. For tuning on each task, tune the entire model or just part of it? **Answer:** To address individual tasks, we adopt a focused approach where the entire model is fine-tuned. Our rationale lies in the fact that, for each specific task, our emphasis is on optimizing task-specific performance, without retaining an excessive amount of intelligence learned from other tasks. Besides, we only exclusively fine-tune the text-to-motion task, while other tasks are reported without specific tuning. ### More experimental details Can MotionGPT perform motion editing or motion composition similar to MotionDiffuse and MDM? | Method | FID $\downarrow$ | DIV $\rightarrow$ | ADE $\downarrow$ | FDE $\downarrow$ |
| :------------------- | :--------------- | :---------------- | :--------------- | :--------------- |
| Real | 0.002 | 9.503 | - | - |
| MDM | 6.031 | 7.813 | 5.446 | 8.561 |
| T2M-GPT | 2.056 | 8.635 | 6.161 | 8.302 |
| **MotionGPT (Ours)** | **0.905** | **8.972** | **4.745** | **6.040** |
**Comparison of motion prediction on HumanML3D dataset using motion data only.**
**Answer:** Referring to MDM, motion editing has two categories: **body part editing** and **motion completion** in the temporal domain. MotionGPT is capable of the latter, which includes **motion prediction** and **motion in-between**. It outperforms both **MDM** and **T2M-GPT** in the table above. However, when it comes to body part editing, the vector quantization(VQ)-based methods, like MotionGPT and T2M-GPT, are not as suitable as diffusion-based models that utilize diffusion inpainting on raw motion data. Editing body parts with LLM and prompts is a promising direction but still needs exploration. How to implement the MDM on the motion prediction and in-between tasks? **Answer:** Please follow the approach outlined in **Appendix B.4** and **Line-296** of our paper, where we highlight that MDM achieves the motion in-between task using a masked motion "in-painting" technique. Specifically, this involves fixing the initial and final portions of the motion and allowing the model to generate the central portion. To adapt this concept for motion prediction, we similarly fix a portion of the motion – in our case, **the first 20%** – and generate the subsequent sequence. Motion down-sample, if only given a start frame and an end frame as the in-between input, would the model perform well? **Answer:** VQ-based methods, such as MotionGPT and T2M-GPT, employ downsampling tricky to enhance the density of the codebook or tokens and reduce computing costs. This indeed becomes a constraint when the operation granularity is smaller than the down-sample rate. However, to address this issue, only the start and end frames are provided as in-between inputs. Some technical tricks can be used, such as repeating a single start or end frame up to the window size as inputs and removing the redundant parts in outputs. This does not significantly impact the effectiveness of the model, as there are often static beginnings or endings in the ground truth (GT) motion data. How is the down-sample rate chosen? It is a fundamental hyper-parameter that decides the overall granularity of the model. | Downsampling | MPJPE $\downarrow$ | MPJPE $\downarrow$ | ACCL $\downarrow$ | FID $\downarrow$ | DIV $\rightarrow$ |
| ------------ | ------------------ | ------------------ | ----------------- | ---------------- | ----------------- |
| $l=1$ | 76.2 | 49.5 | 19.5 | 0.421 | 9.613 |
| $l=2$ | **52.6** | **37.7** | **9.5** | 0.135 | 9.722 |
| $l=4$ | 55.8 | 40.1 | 7.5 | **0.067** | 9.675 |
| $l=8$ | 62.7 | 45.3 | 8.7 | 0.223 | **9.584** |
**Answer:** We selected the down-sample rate based on the frames-per-second (FPS) of the HumanML3D and KIT-ML datasets, which is **20 fps**. Therefore, down-sampling by a factor of 4 to achieve **5 fps** can ensure distinctiveness in motion frames, and prevents redundancy, and acceleration training. This choice was also made to ensure a fair comparison, as we utilized the same down-sample rate as T2M-GPT. As shown in the above table, we provide an ablation study on these parameters, where a factor of 4 achieves the best Frechet Inception Distance (FID) in motion reconstructions. Failure analysis. Zero-shot ability to handle words that have semantic meaning but could be unseen. **Answer:** As shown in **Fig. 12**, we provide both **zero-shot cases** and **failure cases**. Benefitting from strong language models, MotionGPTs can understand unseen works in the text-to-motion training set, like "**scuttling**" and "**barriers**", and generate correct motions based on the meaning of sentences. However, it still struggles to generate unseen motions, like gymnastics, even if MotionGPTs understand the text inputs. Do TM2T, T2M, and poseGPT capture all human motion in their training dataset's discrete latent code? | Method | MPJPE$\downarrow$ | MPJPE $\downarrow$ | ACCL $\downarrow$ | FID $\downarrow$ | DIV $\rightarrow$ |
| ---------------- | ----------------- | ------------------ | ----------------- | ---------------- | ----------------- |
| VPoser-t | 75.6 | 48.6 | 9.3 | 1.430 | 8.336 |
| ACTOR | 65.3 | 41.0 | **7.0** | 0.341 | **9.569** |
| MLD-1 | **54.4** | 41.6 | 8.3 | 0.247 | 9.630 |
| MotionGPT (Ours) | 55.8 | **40.1** | 7.5 | **0.067** | 9.675 |
**Motion reconstruction comparison.**
| Method | FID $\downarrow$ |
| ---------------- | ------------------------------ |
| MotionGPT (Ours) | $0.510^{\pm.016}$ |
| T2M-GPT | $0.514^{\pm.029}$ |
| MLD | $\boldsymbol{0.404}^{\pm.027}$ |
**Comparison of FID in text-to-motion task on KIT-ML dataset.**
**Answer:** Given sufficient training or testing data from the same dataset, motion reconstruction is not a challenging task for both VAE and VQ-VAE. We have provided the evaluation on motion reconstruction in **Tab.8**. However, when dealing with a **limited amount of motion data**, like the KIT dataset, **the VAE model shows better ability in motion interpolation, surpassing VQ-VAE**.
A relevant evaluation is shown above (also in **Tab.7**), where MLD (VAE) outperforms MotionGPT and T2M-GPT (VQ-VAEs) on FID.
The real challenge lies in reconstructing complex motions, such as diving or gymnastics sports. Existing motion generators struggle to accurately reconstruct **complex motions** using a codebook extracted from daily motion datasets. Collecting these complex yet valuable motions is still a significant challenge to the motion research community. ### About performances Motion quality and performance gain. | Method | FID $\downarrow$ |
| :-------- | :----------------------------- |
| MDM | $0.544^{\pm.044}$ |
| MotionGPT | $0.160^{\pm.008}$ |
| T2M-GPT | $\boldsymbol{0.116}^{\pm.004}$ |
**Comparison of FID in text-to-motion task on HumanML3D dataset.**
| Method | FID $\downarrow$ |
| :-------- | :----------------------------- |
| T2M-GPT | $0.514^{\pm.029}$ |
| MotionGPT | $0.510^{\pm.016}$ |
| MDM | $\boldsymbol{0.497}^{\pm.021}$ |
**Comparison of FID in text-to-motion task on KIT-ML dataset.**
**Answer:** The FID metrics primarily focus on the motion quality rather than the correlation between motion and text. While MDM serves as a successful benchmark for motion generation, both MotionGPT and T2M-GPT outperform MDM by a margin of 0.38~0.43 on the FID scale. **However**, **the difference in motion quality among these three works is not significant in video supply**. Additionally, MDM outperforms two vector quantized methods, MotionGPT and T2M-GPT, in terms of FID on the KIT dataset. This can be attributed to the limited number of 3,911 motion sequences, which makes it **challenging to construct a comprehensive motion codebook**. More importantly, MotionGPT contributes to multiple motion tasks with LLM, particularly in generating both text and motion within a single model, rather than aiming to improve the FID metric. Limited performance gain with strong language models. **Answer:** We thought MotionGPT, using a **significantly larger language model**, would surpass all existing methods in all tasks. **However**, the evaluation shows MotionGPT achieves SOTA results in 18 out of 23 metrics, where many improvements are only small gains. This can be attributed to the limited size of the dataset. Both **HumanML3D (14,616 motions) and KIT (3,911 motions)** are **limited** in vocabulary size and overall dataset size, particularly when compared to billion-level language datasets, which affects the efficacy of large-scale models. Benefitting from recent dataset works, like [Motion-X](https://motion-x-dataset.github.io/), we will evaluate the performance gain of MotionGPT in larger datasets once they become available. Performance Gain on R-Precision in KIT. **Answer:** The evaluation of R-Precision in the KIT dataset relies on the text encoder, which is built using a limited set of 6,353 textual descriptions. In contrast, MotionGPTs benefit from LLM and large language data, enabling them to **generate longer and more natural language descriptions** for motion. However, this leads to **a discrepancy between the generated descriptions and the GT descriptions**, resulting in a lower R-Precision. MotionGPT seems to sacrifice accuracy in exchange for additional functionalities. **Answer:** As shown in **Fig. 10**, MotionGPT achieves SOTA on **18 out of 23** metrics across four motion-related tasks. Additionally, both HumanML3D and KIT are limited in overall dataset size, particularly when compared to billion-level language datasets. This affects the efficacy of large-scale models. We will further employ a larger motion-text dataset to evaluate MotionGPT. Besides, MotionGPTs introduce motion-language pre-training, as well as its zero-shot ability, which is a promising direction worth exploring and could stimulate self-training procedures for further research. ### About illustrations Visualize some of the tokens in the vocabulary that VQ-VAE learned. **Answer:** As shown in **Fig.13**, we visualize these **motion tokens** in **motion vocabulary $V_m$** and their corresponding localized spatial-temporal contexts, depicted within **4-frame motion segments**. However, MotionGPT falls short in generating descriptions for each individual token, as the training is conducted on token sequences.
You can run the script below to visualize more tokens:
```
python -m scripts.get_code_visual --cfg configs/config_h3d_stage2.yaml
``` 📖 Citation If you find our code or paper helps, please consider citing: ```bibtex
@article{jiang2024motiongpt,
title={Motiongpt: Human motion as a foreign language},
author={Jiang, Biao and Chen, Xin and Liu, Wen and Yu, Jingyi and Yu, Gang and Chen, Tao},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
} @inproceedings{chen2023executing,
title={Executing your Commands via Motion Diffusion in Latent Space},
author={Chen, Xin and Jiang, Biao and Liu, Wen and Huang, Zilong and Fu, Bin and Chen, Tao and Yu, Gang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={18000--18010},
year={2023}
}
``` Acknowledgments Thanks to Motion-latent-diffusion , T2m-gpt , TEMOS , ACTOR , HumanML3D and joints2smpl , our code is partially borrowing from them. License This code is distributed under an MIT LICENSE . Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, and uses datasets which each have their own respective licenses that must also be followed.;[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs;3d-generation,chatgpt,gpt,language-model,motion,motion-generation,text-driven,text-to-motion,motiongpt,multi-modal | OpenMotionLab/MotionGPT |
ikun-svelte/ikun-ui;ikun-ui 🐔A Svelte.js based UnoCSS UI library that allows you to make websites document Features 🎤 Components Design - Onu provides neat & beautiful crafted UI components. 💃 Iconify Icons - Use any icon from the library you love. 🤟 CSS Preset - Provides UnoCSS preset package, help you render UI easily. 🏀 Theme Config - Use attribute mode like unocss to design. Support theme config to customize theme. Contributing Developers interested in contributing should read the Code of Conduct and the Contributing Guide . Thanks to everyone who has already contributed to ikun-ui ! Thanks svelte unocss onu-ui naive-ui element-plus;🐔A Svelte.js based UnoCSS UI library that allows you to make websites;react,react18,svelte,svelte3,sveltejs,sveltekit,ui-components,vue,vue3,svelte4 | ikun-svelte/ikun-ui |
protectai/ai-exploits;# AI Exploits The AI world has a security problem and it's not just in the inputs given to LLMs such as ChatGPT. Based
on research done by Protect AI and independent security experts on the Huntr Bug Bounty Platform, there are far more impactful and practical attacks
against the tools, libraries and frameworks used to build, train, and deploy machine learning models. Many of these
attacks lead to complete system takeovers and/or loss of sensitive data, models, or credentials most often without the need
for authentication. With the release of this repository, Protect AI hopes to demystify for the Information Security community what practical attacks against AI/Machine Learning infrastructure look like in the real world, and to raise awareness of the number of vulnerable components that currently exist in the AI/ML ecosystem. More vulnerabilities can be found here:
* November Vulnerability Report * December Vulnerability Report * January Vulnerability Report * February Vulnerability Report * March Vulnerability Report * April Vulnerability Report * May Vulnerability Report Overview This repository, ai-exploits , is a collection of exploits and scanning templates for responsibly disclosed vulnerabilities affecting machine learning tools. Each vulnerable tool has a number of subfolders containing three types of utilities: Metasploit modules, Nuclei templates
and CSRF templates. Metasploit modules are for security professionals looking to exploit the vulnerabilities and Nuclei templates are for scanning a large number of remote servers to determine if they're vulnerable. Demo Video demonstrating running one of the Metasploit modules against Ray: Setup & Usage The easiest way to use the modules and scanning templates is to build and run the Docker image provided by the Dockerfile in this repository. The Docker image will have Metasploit and Nuclei already installed along with all the necessary configuration. Docker Build the image: bash
git clone https://github.com/protectai/ai-exploits && cd ai-exploits
docker build -t protectai/ai-exploits . Run the docker image: bash
docker run -it --rm protectai/ai-exploits /bin/bash The latter command will drop you into a bash session in the container with msfconsole and nuclei ready to go. Using the Metasploit Modules With Docker Start the Metasploit console (the new modules will be available under the exploits/protectai category), load a module, set the options, and run the exploit. bash
msfconsole
msf6 > use exploit/protectai/ray_job_rce
msf6 exploit(protectai/ray_job_rce) > set RHOSTS <target IP>
msf6 exploit(protectai/ray_job_rce) > run With Metasploit Installed Locally Create a folder ~/.msf4/modules/exploits/protectai and copy the exploit modules into it. bash
mkdir -p ~/.msf4/modules/exploits/protectai
cp ai-exploits/ray/msfmodules/* ~/.msf4/modules/exploits/protectai
msfconsole
msf6 > use exploit/protectai/<exploit_name.py> Using Nuclei Templates Nuclei is a vulnerability scanning engine which can be used to scan large numbers of servers for known vulnerabilities in web applications and networks. Navigate to nuclei templates folder such as ai-exploits/mlflow/nuclei-templates . In the Docker container these are stored in the /root/nuclei-templates folder. Then simply point to the template file and the target server. cd ai-exploits/mlflow/nuclei-templates
nuclei -t mlflow-lfi.yaml -u http://<target>:<port>` Using CSRF Templates Cross-Site Request Forgery (CSRF) vulnerabilities enable attackers to stand up a web server hosting a malicious HTML page
that will execute a request to the target server on behalf of the victim. This is a common attack vector for exploiting
vulnerabilities in web applications, including web applications which are only exposed on the localhost interface and
not to the broader network. Below is a simple demo example of how to use a CSRF template to exploit a vulnerability in a
web application. Start a web server in the csrf-templates folder. Python allows one to stand up a simple web server in any
directory. Navigate to the template folder and start the server. bash
cd ai-exploits/ray/csrf-templates
python3 -m http.server 9999 Now visit the web server address you just stood up (http://127.0.0.1:9999) and hit F12 to open
the developer tools, then click the Network tab. Click the link to ray-cmd-injection-csrf.html. You should see that
the browser sent a request to the vulnerable server on your behalf. Contribution Guidelines We welcome contributions to this repository. Please read our Contribution Guidelines for more information on how to contribute. License This project is licensed under the Apache 2.0 License .;A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities ;[] | protectai/ai-exploits |
RecipeUI/RecipeUI;RecipeUI RecipeUI is rethinking how we use API tools like Postman or Insomnia. We're starting off with TypeScript powered API requests and API collections you can run in seconds! Try us out www.recipeui.com Built with Couldn't have built this app without some really amazing packages 🐻 Zustand + immer for global state management 💨 Tailwind + 🌼 DaisyUI for making me build pretty and native html components 🔺 NextJS + Vercel for a seamless frontend framework and deployment package ⚡ Supabase for a seamless backend service that has many cool features. 🪞 Codemirror for the easy code editor plugin 🦖 Docusaurus for docs! Other fun stuff: pnpm, downshift, fuse, react18-json-view Contributing Contributing code to our repo is as simple as building an API collection to share with the community! APIs are always a pain to deal with, so it'd be awesome if you could help other people use public or private APIs by building collections. You will need to run our desktop app locally.
https://docs.recipeui.com/docs/Contributing/setup Then read our guide on contributing to our repo.
https://docs.recipeui.com/docs/Contributing/collections Open source We are open sourced because we use so much open source technology and we want to be transparent about how we actually make API requests. We don't store any API secrets in the cloud and we limit the telemetry we collect to the user experience portions. Contributors Dave Tang 🤔 📖 🧑🏫 Julian Berman 🐛 Rafael Revi 🔌 Benny Neugebauer 🐛 0x80085 🤔 Niwilai 🤔 Thomas Alrek 🤔 jinyus 🤔 Tim Häring 💻 Ashish 💻;Discover, test, and share APIs in seconds;devtools | RecipeUI/RecipeUI |
amantinband/clean-architecture;[![Build](https://github.com/amantinband/clean-architecture/actions/workflows/build.yml/badge.svg)](https://github.com/amantinband/clean-architecture/actions/workflows/build.yml) [![Publish template to NuGet](https://github.com/amantinband/clean-architecture/actions/workflows/publish.yml/badge.svg)](https://github.com/amantinband/clean-architecture/actions/workflows/publish.yml)
[![GitHub contributors](https://img.shields.io/github/contributors/amantinband/clean-architecture)](https://GitHub.com/amantinband/clean-architecture/graphs/contributors/) [![GitHub Stars](https://img.shields.io/github/stars/amantinband/clean-architecture.svg)](https://github.com/amantinband/clean-architecture/stargazers) [![GitHub license](https://img.shields.io/github/license/amantinband/clean-architecture)](https://github.com/amantinband/clean-architecture/blob/main/LICENSE)
[![codecov](https://codecov.io/gh/amantinband/clean-architecture/branch/main/graph/badge.svg?token=DR2EBIWK7B)](https://codecov.io/gh/amantinband/clean-architecture)
---
![Clean Architecture Template Title](assets/Clean%20Architecture%20Template%20Title.png)
--- ```shell
dotnet new install Amantinband.CleanArchitecture.Template dotnet new clean-arch -o CleanArchitecture
``` ️Important notice ⚠️ Give it a star ⭐ Domain Overview 🌍 Basic Subscription Pro Subscription Use Cases / Features 🤓 Subscriptions Reminders Getting Started 🏃 YouTube Tutorial Install the template or clone the project Run the service using Docker or the .NET CLI Generate a token Create a subscription Create a reminder Folder Structure 📁 Authorization 🔐 Authorization Types Role-Based Authorization Permission-Based Authorization Policy-Based Authorization Mixing Authorization Types Testing 📝 Test Types Domain Layer Unit Tests Application Layer Unit Tests Application Layer Subcutaneous Tests Presentation Layer Integration Tests Fun features 💃🕺 Domain Events \& Eventual Consistency Eventual Consistency Mechanism Background service for sending email reminders Configure Email Settings Configure Email Settings Manually Configure Email Settings via User Secrets Contribution 🤲 Credits 🙏 License 🪪 ️Important notice ⚠️ This template is still under construction 👷. Check out my comprehensive course on Dometrain where I cover everything you need to know when building production applications structured following clean architecture. Use the exclusive coupon code GITHUB to get 5% off (btw this is the only promo code for a discount on the bundle, which is already 20% off). Give it a star ⭐ Loving it? Show your support by giving this project a star! Domain Overview 🌍 This is a simple reminder application. It allows users to create and manage their reminders. To create reminders, a user must have an active subscription. Basic Subscription Users with a basic subscription can create up to 3 daily reminders. Pro Subscription Users with a pro subscription do not have a daily limit on the number of reminders. Use Cases / Features 🤓 Subscriptions Create Subscription Get Subscription Cancel Subscription Reminders Set Reminder Get Reminder Delete Reminder Dismiss Reminder List Reminders Getting Started 🏃 YouTube Tutorial Install the template or clone the project ```shell
dotnet new install Amantinband.CleanArchitecture.Template dotnet new clean-arch -o CleanArchitecture
``` or shell
git clone https://github.com/amantinband/clean-architecture Run the service using Docker or the .NET CLI shell
docker compose up or shell
dotnet run --project src/CleanArchitecture.Api Generate a token Navigate to requests/Tokens/GenerateToken.http and generate a token. Note: Since most systems use an external identity provider, this project uses a simple token generator endpoint that generates a token based on the details you provide. This is a simple way to generate a token for testing purposes and is closer to how your system will likely be designed when using an external identity provider. yaml
POST {{host}}/tokens/generate
Content-Type: application/json http
{
"Id": "bae93bf5-9e3c-47b3-aace-3034653b6bb2",
"FirstName": "Amichai",
"LastName": "Mantinband",
"Email": "amichai@mantinband.com",
"Permissions": [
"set:reminder",
"get:reminder",
"dismiss:reminder",
"delete:reminder",
"create:subscription",
"delete:subscription",
"get:subscription"
],
"Roles": [
"Admin"
]
} NOTE: Replacing http file variables ( {{variableName}} ) Option 1 (recommended) - Using the REST Client extension for VS Code Use the REST Client extension for VS Code + update the values under .vscode/settings.json . This will update the value for all http files. json
{
"rest-client.environmentVariables": {
"$shared": { // these will be shared across all http files, regardless of the environment
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoiTGlvciIsImZhbWlseV9uYW1lIjoiRGFnYW4iLCJlbWFpbCI6Imxpb3JAZGFnYW4uY29tIiwiaWQiOiJhYWU5M2JmNS05ZTNjLTQ3YjMtYWFjZS0zMDM0NjUzYjZiYjIiLCJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3dzLzIwMDgvMDYvaWRlbnRpdHkvY2xhaW1zL3JvbGUiOiJBZG1pbiIsInBlcm1pc3Npb25zIjpbInNldDpyZW1pbmRlciIsImdldDpyZW1pbmRlciIsImRpc21pc3M6cmVtaW5kZXIiLCJkZWxldGU6cmVtaW5kZXIiLCJjcmVhdGU6c3Vic2NyaXB0aW9uIiwiZGVsZXRlOnN1YnNjcmlwdGlvbiIsImdldDpzdWJzY3JpcHRpb24iXSwiZXhwIjoxNzA0MTM0MTIzLCJpc3MiOiJSZW1pbmRlclNlcnZpY2UiLCJhdWQiOiJSZW1pbmRlclNlcnZpY2UifQ.wyvn9cq3ohp-JPTmbBd3G1cAU1A6COpiQd3C_e_Ng5s",
"userId": "aae93bf5-9e3c-47b3-aace-3034653b6bb2",
"subscriptionId": "c8ee11f0-d4bb-4b43-a448-d511924b520e",
"reminderId": "08233bb1-ce29-49e2-b346-5f8b7cf61593"
},
"dev": { // when the environment is set to dev, these values will be used
"host": "http://localhost:5001",
},
"prod": { // when the environment is set to prod, these values will be used
"host": "http://your-prod-endpoint.com",
}
}
} Options 2 - Defining the variables in the http file itself Define the variables in the http file itself. This will only update the value for the current http file. ```yaml
@host = http://localhost:5001 POST {{host}}/tokens/generate
``` Option 3 - Manually Replace the variables manually. yaml
POST {{host}}/tokens/generate 👇 yaml
POST http://localhost:5001/tokens/generate Create a subscription yaml
POST {{host}}/users/{{userId}}/subscriptions
Content-Type: application/json
Authorization: Bearer {{token}} http
{
"SubscriptionType": "Basic"
} Create a reminder yaml
POST {{host}}/users/{{userId}}/subscriptions/{{subscriptionId}}/reminders
Content-Type: application/json
Authorization: Bearer {{token}} http
{
"text": "let's do it",
"dateTime": "2025-2-26"
} Folder Structure 📁 You can use this figma community file to explore or create your own folder structure representation. Authorization 🔐 This project puts an emphasis on complex authorization scenarios and supports role-based , permission-based and policy-based authorization. Authorization Types Role-Based Authorization To apply role-based authorization, use the Authorize attribute with the Roles parameter and implement the IAuthorizeableRequest interface. For example: csharp
[Authorize(Roles = "Admin")]
public record CancelSubscriptionCommand(Guid UserId, Guid SubscriptionId) : IAuthorizeableRequest<ErrorOr<Success>>; Will only allow users with the Admin role to cancel subscriptions. Permission-Based Authorization To apply permission based authorization, use the Authorize attribute with the Permissions parameter and implement the IAuthorizeableRequest interface. For example: csharp
[Authorize(Permissions = "get:reminder")]
public record GetReminderQuery(Guid UserId, Guid SubscriptionId, Guid ReminderId) : IAuthorizeableRequest<ErrorOr<Reminder>>; Will only allow users with the get:reminder permission to get a subscription. Policy-Based Authorization To apply policy based authorization, use the Authorize attribute with the Policy parameter and implement the IAuthorizeableRequest interface. For example: csharp
[Authorize(Policies = "SelfOrAdmin")]
public record GetReminderQuery(Guid UserId, Guid SubscriptionId, Guid ReminderId) : IAuthorizeableRequest<ErrorOr<Reminder>>; Will only allow users who pass the SelfOrAdmin policy to get a subscription. Each policy is implemented as a simple method in the PolicyEnforcer class. The policy "SelfOrAdmin" for example, can be implemented as follows: ```csharp
public class PolicyEnforcer : IPolicyEnforcer
{
public ErrorOr<Success> Authorize<T>(
IAuthorizeableRequest<T> request,
CurrentUser currentUser,
string policy)
{
return policy switch
{
"SelfOrAdmin" => SelfOrAdminPolicy(request, currentUser),
_ => Error.Unexpected(description: "Unknown policy name"),
};
} private static ErrorOr<Success> SelfOrAdminPolicy<T>(IAuthorizeableRequest<T> request, CurrentUser currentUser) =>
request.UserId == currentUser.Id || currentUser.Roles.Contains(Role.Admin)
? Result.Success
: Error.Unauthorized(description: "Requesting user failed policy requirement"); }
``` Mixing Authorization Types You can mix and match authorization types to create complex authorization scenarios. For example: csharp
[Authorize(Permissions = "get:reminder,list:reminder", Policies = "SelfOrAdmin", Roles = "ReminderManager")]
public record ListRemindersQuery(Guid UserId, Guid SubscriptionId, Guid ReminderId) : IAuthorizeableRequest<ErrorOr<Reminder>>; Will only allow users with the get:reminder and list:reminder permission, and who pass the SelfOrAdmin policy, and who have the ReminderManager role to list reminders. Another option, is specifying the Authorize attribute multiple times: csharp
[Authorize(Permissions = "get:reminder")]
[Authorize(Permissions = "list:reminder")]
[Authorize(Policies = "SelfOrAdmin")]
[Authorize(Roles = "ReminderManager")]
public record ListRemindersQuery(Guid UserId, Guid SubscriptionId, Guid ReminderId) : IAuthorizeableRequest<ErrorOr<Reminder>>; Testing 📝 This project puts an emphasis on testability and comes with a comprehensive test suite. Test Types Domain Layer Unit Tests The domain layer is tested using unit tests.
At a bare minimum, each domain entity should have a test that verifies its invariants. Application Layer Unit Tests The application layer is tested using both unit tests and subcutaneous tests. Since each one of the application layer use cases has its corresponding subcutaneous tests, the unit tests are used to test the application layer standalone components, such as the ValidationBehavior and the AuthorizationBehavior . Application Layer Subcutaneous Tests Subcutaneous tests are tests that operate right under the presentation layer.
These tests are responsible for testing the core logic of our application, which is the application layer and the domain layer. The reason there are so many of these tests, is because each one of the application layer use cases has its corresponding subcutaneous tests. This allows us to test the application layer and the domain layer based on the actual expected usage, giving us the confidence that our application works as expected and that the system cannot be manipulated in a way we don't allow. I recommend spending more effort on these tests than the other tests, since they aren't too expensive to write, and the value they provide is huge. Presentation Layer Integration Tests The api layer is tested using integration tests. This is where we want to cover the entire system, including the database, external dependencies and the presentation layer. Unlike the subcutaneous tests, the focus of these tests is to ensure the integration between the various components of our system and other systems. Fun features 💃🕺 Domain Events & Eventual Consistency Note: Eventual consistency and the domain events pattern add a layer of complexity. If you don't need it, don't use it. If you need it, make sure your system is designed properly and that you have the right tools to manage failures. The domain is designed so each use case which manipulates data, updates a single domain object in a single transaction. For example, when a user cancels a subscription, the only change that happens atomically is the subscription is marked as canceled: ```csharp
public ErrorOr<Success> CancelSubscription(Guid subscriptionId)
{
if (subscriptionId != Subscription.Id)
{
return Error.NotFound("Subscription not found");
} Subscription = Subscription.Canceled;
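// Record a domain event; the rest of the system (reminders, persistence) reacts to it later, eventually consistently.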
_domainEvents.Add(new SubscriptionCanceledEvent(this, subscriptionId));
return Result.Success; }
``` Then, in an eventual consistency manner, the system will update all the relevant data. Which includes: Deleting the subscription from the database and marking all reminders as deleted ( Subscriptions/Events/SubscriptionDeletedEventHandler.cs ]) Deleting all the reminders marked as deleted from the database ( Reminders/Events/ReminderDeletedEventHandler.cs ] Note: Alongside the performance benefits, this allows to reuse reactive behavior. For example, the ReminderDeletedEventHandler is invoked both when a subscription is deleted and when a reminder is deleted. Eventual Consistency Mechanism Each invariant is encapsulated in a single domain object. This allows performing changes by updating a single domain object in a single transaction. If domain object B needs to react to changes in domain object A , a Domain Event is added to domain object A alongside the changes. Upon persisting domain object A changes to the database, the domain events are extracted and added to a queue for offline processing:
```csharp
private void AddDomainEventsToOfflineProcessingQueue(List<IDomainEvent> domainEvents)
{
Queue<IDomainEvent> domainEventsQueue = new();
domainEvents.ForEach(domainEventsQueue.Enqueue); _httpContextAccessor.HttpContext.Items["DomainEvents"] = domainEventsQueue; } 1. After the user receives a response, the [EventualConsistencyMiddleware](src/CleanArchitecture.Infrastructure/Common/Middleware/EventualConsistencyMiddleware.cs) is invoked and processes the domain events: csharp
public async Task InvokeAsync(HttpContext context, IPublisher publisher)
{
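// Defer processing until after the response has been sent, so the caller never waits for the event handlers.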
context.Response.OnCompleted(async () =>
{
if (!context.Items.TryGetValue("DomainEvents", out var value) ||
value is not Queue<IDomainEvent> domainEvents)
{
return;
} while (domainEvents.TryDequeue(out var nextEvent))
{
await publisher.Publish(nextEvent);
}
}); }
``` Note: the code snippets above are a simplified version of the actual implementation. Background service for sending email reminders There is a simple background service that runs every minute and sends email reminders for all reminders that are due ( ReminderEmailBackgroundService ): ```csharp
private async void SendEmailNotifications(object? state)
{
await _fluentEmail
.To(user.Email)
.Subject($"{dueReminders.Count} reminders due!")
.Body($"""
Dear {user.FirstName} {user.LastName} from the present. I hope this email finds you well.
I'm writing you this email to remind you about the following reminders:
{string.Join('\n', dueReminders.Select((reminder, i) => $"{i + 1}. {reminder.Text}"))}
Best,
{user.FirstName} from the past.
""")
.SendAsync(); }
``` Configure Email Settings To configure the service to send emails, make sure to update the email settings under the appsettings.json / appsettings.Development.json file: You can use your own SMTP server or use a service like Brevo . Configure Email Settings Manually json
{
"EmailSettings": {
"EnableEmailNotifications": false,
"DefaultFromEmail": "your-email@gmail.com (also, change EnableEmailNotifications to true 👆)",
"SmtpSettings": {
"Server": "smtp.gmail.com",
"Port": 587,
"Username": "your-email@gmail.com",
"Password": "your-password"
}
}
} note: you may need to allow less secure apps to access your email account. Configure Email Settings via User Secrets shell
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:EnableEmailNotifications true
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:DefaultFromEmail amantinband@gmail.com
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:SmtpSettings:Server smtp-relay.brevo.com
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:SmtpSettings:Port 587
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:SmtpSettings:Username amantinband@gmail.com
dotnet user-secrets --project src/CleanArchitecture.Api set EmailSettings:SmtpSettings:Password your-password Contribution 🤲 If you have any questions, comments, or suggestions, please open an issue or create a pull request 🙂 Credits 🙏 CleanArchitecture - An awesome clean architecture solution template by Jason Taylor License 🪪 This project is licensed under the terms of the MIT license.;The ultimate clean architecture template for .NET applications 💪;asp-net-core,clean-architecture,dotnet,dotnet-core,software-architecture,web-application | amantinband/clean-architecture |
zwq2018/Data-Copilot;Data-Copilot Overview Data-Copilot is a LLM-based system that help you address data-related tasks. Data-Copilot connects data sources from different domains and diverse user tastes, with the ability to autonomously manage, process, analyze, predict, and visualize data. See our paper: Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow , Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang 🔥Demo video Since gpt3.5 has only a 4k input token limit, it currently can access to Chinese stocks, funds and some economic data. The Data-Copilot can query and predict data autonomously : Support model and data sources: | | CHN Stock | CHN Fund |CHN Economic data | CHN Financial data |
|:-------------:|:---------:|:--------:|:---------------:|:------------------------:|
| Openai-GPT3.5 | ✓ | ✓ | ✓ | ✓ |
| Azure-GPT3.5 | ✓ | ✓ | ✓ | ✓ |
| Qwen-72b-Chat | ✓ | ✓ | ✓ | ✓ | We propose Data-Copilot, an LLM-based system linking Chinese financial markets such as stock, funds, economic, financial data, and live news ⭐ Data-Copilot can autonomously manage, process, analyze, predict, and visualize data . When a request is received, it transforms raw data into informative results that best match the user’s intent. ⭐ Acting as a designer : Data-Copilot independently designs versatile interface tools with different functions through self-request and iterative refinement. ⭐ As a dispatcher : DataCopilot adeptly invokes the corresponding interfaces sequentially or in parallel and transforms raw data from heterogeneous sources into graphics, tables, and text, without human assistance. 🌳 QuickStart First replace openai.key and Tushare token in main.py with your personal Openai key and Tushare token .
The organization of the whole project is as follows: angular2html
|-- README.md
|-- app.py
|-- create_tool
| |-- Atomic_api_json.py
| `-- all_atomic_api.json
|-- lab_gpt4_call.py
|-- lab_llms_call.py
|-- main.py
|-- output
|-- prompt_lib
| |-- prompt_economic.json
| |-- prompt_financial.json
| |-- prompt_fund.json
| |-- prompt_intent_detection.json
| |-- prompt_stock.json
| |-- prompt_task.json
| `-- prompt_visualization.json
|-- requirements.txt
|-- tool.py
|-- tool_lib
| |-- atomic_api.json
| |-- tool_backup.json
| |-- tool_economic.json
| |-- tool_financial.json
| |-- tool_fund.json
| |-- tool_stock.json
| `-- tool_visualization.json app.py is the file to start gradio. main.py is the processing flow of interface scheduling, and lab_gpt4_call.py is the file to call the GPT35 model.
The tool_lib and tool.py contain the interface tools obtained after the first phase of interface design.
The folder prompt_lib contains the prompt designs and the in-context demonstrations. Requirements bash
pip install -r requirements.txt Then run the following command: For Local bash
python main.py You can select the LLM in main.py by setting: model='<the model you choose>' Remember to fill in the key of the LLM you chose:
- For GPT, fill in the key of Openai in main.py openai_key = os.getenv("OPENAI_KEY") For Qwen-72b-Chat, fill in the key in lab_llms_call.py dashscope.api_key='<your api key>' Also, remember to fill in the Tushare token before running the code: In tool.py for Tushare token angular2html
tushare_token = os.getenv('TUSHARE_TOKEN')
pro = ts.pro_api(tushare_token) For Gradio The Gradio demo is now hosted on Hugging Face Space. You can also run the following commands to start the demo locally: bash
python app.py 🌿 How to play You can try our Data-Copilot for Chinese financial markets in Hugging Face Space: It has access to Chinese stocks, funds and some economic data. But because gpt3.5 only has 4k input token length, the current data access is still relatively small.
In the future, Data-Copilot will support more data from foreign financial markets. Step 1 Enter your OpenAI or Azure OpenAI key; please try to use OpenAI's paid API. If you plan to use Azure's services, please remember to input the api-base and engine in addition to the key. Step 2 Click the OK button to submit Step 3 Enter the request you want to query in the text box, or select a question directly from the example box and it will appear in the text box. Step 4 Click the Start button to submit the request Step 5 Data-Copilot will display the intermediate scheduling process in the Solving Step, and the final output will include text (Summary and Result), images and tables. 🍺 Some cases A case for Check the inflow of northbound every trading date Citation If you find this work useful in your research, you can cite the paper as below: @article{zhang2023data,
title={Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow},
author={Zhang, Wenqi and Shen, Yongliang and Lu, Weiming and Zhuang, Yueting},
journal={arXiv preprint arXiv:2306.07209},
year={2023}
} Contact If you have any questions, please contact us by email:
zhangwenqi@zju.edu.cn Acknowledgement ChatGPT Tushare Qwen;Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow;[] | zwq2018/Data-Copilot |
microsoft/kernel-memory;Kernel Memory This repository presents best practices and a reference architecture for memory in specific
AI and LLMs application scenarios. Please note that the provided code serves as a
demonstration and is not an officially supported Microsoft offering. Kernel Memory (KM) is a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data
hybrid pipelines, with support for Retrieval Augmented Generation (RAG),
synthetic memory, prompt engineering, and custom semantic memory processing. KM is available as a Web Service ,
as a Docker container ,
a Plugin for ChatGPT/Copilot/Semantic Kernel, and as a .NET library for embedded applications. Utilizing advanced embeddings and LLMs, the system enables Natural Language
querying for obtaining answers from the indexed data, complete with citations
and links to the original sources. Designed for seamless integration as a Plugin with Semantic Kernel , Microsoft
Copilot and ChatGPT, Kernel Memory enhances data-driven features in applications
built for most popular AI platforms. Synchronous Memory API (aka "serverless") Kernel Memory works and scales at best when running as an asynchronous Web Service , allowing to
ingest thousands of documents and information without blocking your app. However, Kernel Memory can also run in serverless mode, embedding MemoryServerless class instance in .NET backend/console/desktop apps in synchronous mode. This approach
works as well as in ASP.NET Web APIs and Azure Functions. Each request is processed
immediately, although calling clients are responsible for handling transient errors. Importing documents into your Kernel Memory can be as simple as this: ```csharp
var memory = new KernelMemoryBuilder()
.WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
.Build (); // Import a file
await memory.ImportDocumentAsync("meeting-transcript.docx", tags: new() { { "user", "Blake" } }); // Import multiple files and apply multiple tags
await memory.ImportDocumentAsync(new Document("file001")
.AddFile("business-plan.docx")
.AddFile("project-timeline.pdf")
.AddTag("user", "Blake")
.AddTag("collection", "business")
.AddTag("collection", "plans")
.AddTag("fiscalYear", "2023"));
``` Asking questions: ```csharp
var answer1 = await memory.AskAsync("How many people attended the meeting?"); var answer2 = await memory.AskAsync("what's the project timeline?", filter: new MemoryFilter().ByTag("user", "Blake"));
``` The example leverages the default documents ingestion pipeline: Extract text: recognize the file format and extract the information Partition the text in small chunks, to optimize search Extract embedding using an LLM embedding generator Save embedding into a vector index such as Azure AI Search , Qdrant or other DBs. In the example, memories are organized by users using tags, safeguarding private information.
Furthermore, memories can be categorized and structured using tags , enabling efficient
search and retrieval through faceted navigation. Data lineage, citations, referencing sources: All memories and answers are fully correlated to the data provided. When
producing an answer, Kernel Memory includes all the information needed
to verify its accuracy: ```csharp
await memory.ImportFileAsync("NASA-news.pdf"); var answer = await memory.AskAsync("Any news from NASA about Orion?"); Console.WriteLine(answer.Result + "\n"); foreach (var x in answer.RelevantSources)
{
Console.WriteLine($" * {x.SourceName} -- {x.Partitions.First().LastUpdate:D}");
}
``` Yes, there is news from NASA about the Orion spacecraft. NASA has invited the
media to see a new test version [......] For more information about the Artemis program,
you can visit the NASA website. NASA-news.pdf -- Tuesday, August 1, 2023 Memory as a Service - Asynchronous API Depending on your scenarios, you might want to run all the code locally
inside your process, or remotely through an asynchronous and scalable service. If you're importing small files, and need only C# and can block
the process during the import, local-in-process execution can be fine, using
the MemoryServerless seen above. However, if you are in one of these scenarios: I'd just like a web service to import data and send queries to answer My app is written in TypeScript, Java, Rust, or some other language I'm importing big documents that can require minutes to process , and
I don't want to block the user interface I need memory import to run independently, supporting failures and retry
logic I want to define custom pipelines mixing multiple languages like Python, TypeScript, etc then you can deploy Kernel Memory as a backend service, plugging in the
default handlers, or your custom Python/TypeScript/Java/etc. handlers,
and leveraging the asynchronous non-blocking memory encoding process,
sending documents and asking questions using the MemoryWebClient . Here you can find a complete set of instruction
about how to run the Kernel Memory service . Kernel Memory (KM) and SK Semantic Memory (SM) Kernel Memory (KM) is a service built on the feedback received and lessons learned
from developing Semantic Kernel (SK) and Semantic Memory (SM). It provides several
features that would otherwise have to be developed manually, such as storing files,
extracting text from files, providing a framework to secure users' data, etc.
The KM codebase is entirely in .NET, which eliminates the need to write and maintain
features in multiple languages. As a service, KM can be used from any language, tool,
or platform, e.g. browser extensions and ChatGPT assistants. Semantic Memory (SM) is a library for C#, Python, and Java that wraps direct calls
to databases and supports vector search. It was developed as part of the Semantic
Kernel (SK) project and serves as the first public iteration of long-term memory.
The core library is maintained in three languages, while the list of supported
storage engines (known as "connectors") varies across languages. Here's a comparison table: | Feature | Kernel Memory | Semantic Memory |
| --- | --- | --- |
| Data formats | Web pages, PDF, Images, Word, PowerPoint, Excel, Markdown, Text, JSON, HTML | Text only |
| Search | Cosine similarity, Hybrid search with filters (AND/OR conditions) | Cosine similarity |
| Language support | Any language, command line tools, browser extensions, low-code/no-code apps, chatbots, assistants, etc. | C#, Python, Java |
| Storage engines | Azure AI Search , Elasticsearch , MongoDB Atlas , Postgres+pgvector , Qdrant , Redis , SQL Server , In memory KNN, On disk KNN. | Azure AI Search, Chroma, DuckDB, Kusto, Milvus, MongoDB, Pinecone, Postgres, Qdrant, Redis, SQLite, Weaviate |
| File storage | Disk, Azure Blobs , AWS S3 , MongoDB Atlas , In memory (volatile) | - |
| RAG | Yes, with sources lookup | - |
| Summarization | Yes | - |
| OCR | Yes via Azure Document Intelligence | - |
| Security Filters | Yes | - |
| Large document ingestion | Yes, including async processing using queues ( Azure Queues , RabbitMQ , File based or In memory queues) | - |
| Document storage | Yes | - |
| Custom storage schema | some DBs | - |
| Vector DBs with internal embedding | Yes | - |
| Concurrent write to multiple vector DBs | Yes | - |
| LLMs | Azure OpenAI , OpenAI , Anthropic , LLamaSharp via llama.cpp , LM Studio , Semantic Kernel connectors | Azure OpenAI, OpenAI, Gemini, Hugging Face, ONNX, custom ones, etc. |
| LLMs with dedicated tokenization | Yes | No |
| Cloud deployment | Yes | - |
| Web service with OpenAPI | Yes | - | Quick test using the Docker image If you want to give the service a quick test, use the following command
to start the Kernel Memory Service using OpenAI: shell
docker run -e OPENAI_API_KEY="..." -it --rm -p 9001:9001 kernelmemory/service If you prefer using custom settings and services such as Azure OpenAI, Azure
Document Intelligence, etc., you should create an appsettings.Development.json file overriding the default values set in appsettings.json , or using the
configuration wizard included: cd service/Service
dotnet run setup Then run this command to start the Docker image with the configuration just created: on Windows: docker run --volume .\appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service on macOS/Linux: docker run --volume ./appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service Import files using KM web service and MemoryWebClient ```csharp reference clients/WebClient/WebClient.csproj var memory = new MemoryWebClient("http://127.0.0.1:9001"); // <== URL where the web service is running // Import a file (default user)
await memory.ImportDocumentAsync("meeting-transcript.docx"); // Import a file specifying a Document ID, User and Tags
await memory.ImportDocumentAsync("business-plan.docx",
new DocumentDetails("user@some.email", "file001")
.AddTag("collection", "business")
.AddTag("collection", "plans")
.AddTag("fiscalYear", "2023"));
``` Get answers via the web service curl http://127.0.0.1:9001/ask -d'{"query":"Any news from NASA about Orion?"}' -H 'Content-Type: application/json' json
{
"Query": "Any news from NASA about Orion?",
"Text": "Yes, there is news from NASA about the Orion spacecraft. NASA has invited the media to see a new test version [......] For more information about the Artemis program, you can visit the NASA website.",
"RelevantSources": [
{
"Link": "...",
"SourceContentType": "application/pdf",
"SourceName": "file5-NASA-news.pdf",
"Partitions": [
{
"Text": "Skip to main content\nJul 28, 2023\nMEDIA ADVISORY M23-095\nNASA Invites Media to See Recovery Craft for\nArtemis Moon Mission\n(/sites/default/files/thumbnails/image/ksc-20230725-ph-fmx01_0003orig.jpg)\nAboard the [......] to Mars (/topics/moon-to-\nmars/),Orion Spacecraft (/exploration/systems/orion/index.html)\nNASA Invites Media to See Recovery Craft for Artemis Moon Miss... https://www.nasa.gov/press-release/nasa-invites-media-to-see-recov...\n2 of 3 7/28/23, 4:51 PM",
"Relevance": 0.8430657,
"SizeInTokens": 863,
"LastUpdate": "2023-08-01T08:15:02-07:00"
}
]
}
]
} You can find a full example here . Custom memory ingestion pipelines On the other hand, if you need a custom data pipeline, you can also
customize the steps, which will be handled by your custom business logic: ```csharp
// Memory setup, e.g. how to calculate and where to store embeddings
var memoryBuilder = new KernelMemoryBuilder()
.WithoutDefaultHandlers()
.WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY")); var memory = memoryBuilder.Build(); // Plug in custom .NET handlers
memory.Orchestrator.AddHandler<MyHandler1>("step1");
memory.Orchestrator.AddHandler<MyHandler2>("step2");
memory.Orchestrator.AddHandler<MyHandler3>("step3"); // Use the custom handlers with the memory object
await memory.ImportDocumentAsync(
new Document("mytest001")
.AddFile("file1.docx")
.AddFile("file2.pdf"),
steps: new[] { "step1", "step2", "step3" });
``` Web API specs with OpenAPI swagger The API schema is available at http://127.0.0.1:9001/swagger/index.html when
running the service locally with OpenAPI enabled. Examples and Tools Examples Collection of Jupyter notebooks with various scenarios Using Kernel Memory web service to upload documents and answer questions Importing files and asking question without running the service (serverless mode) Using KM Plugin for Semantic Kernel Processing files with custom logic (custom handlers) in serverless mode Processing files with custom logic (custom handlers) in asynchronous mode Upload files and ask questions from command line using curl Customizing RAG and summarization prompts Custom partitioning/text chunking options Using a custom embedding/vector generator Using custom LLMs Using LLama Summarizing documents, using synthetic memories Using Semantic Kernel LLM connectors Using custom content decoders Using a custom web scraper to fetch web pages Generating answers with Anthropic LLMs Hybrid Search with Azure AI Search Writing and using a custom ingestion handler Running a single asynchronous pipeline handler as a standalone service Test project using KM package from nuget.org Integrating Memory with ASP.NET applications and controllers Sample code showing how to extract text from files .NET configuration and logging Expanding chunks retrieving adjacent partitions Using local models via LM Studio Using Context Parameters to customize RAG prompt during a request Creating a Memory instance without KernelMemoryBuilder Tools .NET appsettings.json generator Curl script to upload files Curl script to ask questions Curl script to search documents Script to start Qdrant for development tasks Script to start Elasticsearch for development tasks Script to start MS SQL Server for development tasks Script to start Redis for development tasks Script to start RabbitMQ for development tasks Script to start MongoDB Atlas for development tasks .NET packages Microsoft.KernelMemory.WebClient: .NET web client to call a running instance of Kernel Memory web service. Microsoft.KernelMemory.Core: Kernel Memory core library including all extensions, can be used to build custom pipelines and handlers, contains
also the serverless client to use memory in a synchronous way without the web service. Microsoft.KernelMemory.Service.AspNetCore: an extension to load Kernel Memory into your ASP.NET apps. Microsoft.KernelMemory.SemanticKernelPlugin: a Memory plugin for Semantic Kernel,
replacing the original Semantic Memory available in SK. Packages for Python, Java and other languages Kernel Memory service offers a Web API out of the box, including the OpenAPI
swagger documentation that you can leverage to test the API and create custom
web clients. For instance, after starting the service locally, see http://127.0.0.1:9001/swagger/index.html. A .NET Web Client and a Semantic Kernel plugin are available, see the nugets packages above. A python package with a Web Client and Semantic Kernel plugin will soon be available.
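Since the service is reached over plain HTTP, a custom client in any language only needs to reproduce the /ask call from the curl example above. The following is a minimal sketch using .NET's HttpClient; the URL, request body, and response field names are taken from that curl example and the sample JSON response shown earlier, not from any official client.
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

// Minimal hand-rolled client: POST a question to the /ask endpoint,
// mirroring the curl example and sample response shown above.
using var http = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:9001") };

var response = await http.PostAsJsonAsync("/ask", new { query = "Any news from NASA about Orion?" });
response.EnsureSuccessStatusCode();

// Field name "Text" follows the sample response in this README; adjust if your
// deployment serializes with different casing.
using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine(json.RootElement.GetProperty("Text").GetString());
```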
We also welcome PR contributions to support more languages. Contributors | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | aaronpowell | afederici75 | alexibraimov | alkampfergit | amomra | anthonypuppo | | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | chaelli | cherchyk | coryisakson | crickman | dependabot[bot] | dluc | | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | DM-98 | EelcoKoster | Foorcee | GraemeJones104 | jurepurgar | kbeaugrand | | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | KSemenenko | lecramr | luismanez | marcominerva | neel015 | pascalberger | | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | pawarsum12 | pradeepr-roboticist | qihangnet | slapointe | slorello89 | spenavajr | | | | | | |
:---: |:---: |:---: |:---: |:---: |:---: | TaoChenOSU | teresaqhoang | Valkozaur | vicperdana | westdavidr | xbotter |;RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.;indexing,llm,memory,rag,semantic-search | microsoft/kernel-memory |
Kiranism/next-shadcn-dashboard-starter;Next.js 14 Admin Dashboard Starter Template With Shadcn-ui Built with the Next.js App Router View Demo Overview This is a starter template using the following stack: Framework - Next.js 14 Language - TypeScript Styling - Tailwind CSS Components - Shadcn-ui Schema Validations - Zod State Management - Zustand Auth - Auth.js File Uploading - Uploadthing Tables - Tanstack Tables Forms - React Hook Form Linting - ESLint Pre-commit Hooks - Husky Formatting - Prettier If you are looking for a React admin dashboard starter, here is the repo . Pages | Pages | Specifications |
| :-------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------- |
| Signup | Authentication with NextAuth supports Social logins and email logins(Enter dummy email for demo). |
| Dashboard | Cards with recharts graphs for analytics. |
| Users | Tanstack tables with user details client side searching, pagination etc |
| Users/new | A User Form with Uploadthing to support file uploading with dropzone. |
| Employee | Tanstack tables with server side searching, pagination etc. |
| Profile | Multistep dynamic forms using react-hook-form and zod for form validation. |
| Kanban Board | A Drag n Drop task management board with dnd-kit and zustand to persist state locally. |
| Not Found | Not Found Page Added in the root level |
| - | - | Getting Started Follow these steps to clone the repository and start the development server:
1. git clone https://github.com/Kiranism/next-shadcn-dashboard-starter.git
2. npm install
3. Create a .env.local file by copying the example environment file: cp env.example.txt .env.local
4. Add the required environment variables to the .env.local file.
5. npm run dev
You should now be able to access the application at http://localhost:3000.;Admin Dashboard Starter with Nextjs14 and shadcn ui;nextjs14,react-hook-form,shadcn-ui,tanstack-table,typescript,uploadthing,zod,nextjs14-starter,shadcn-ui-starter,next-auth | Kiranism/next-shadcn-dashboard-starter
nksaraf/vinxi;The JavaScript SDK to build full stack apps and frameworks with your own opinions. powered by vite and nitro vinxi Compose full stack applications (and frameworks) using Vite , the versatile bundler and dev server, and Nitro , the universal production server. The core primitive in vinxi is a router , inspired by the Bun.App API. Routers are handlers that tell us how specific URLs should be handled. We support various router modes: "static", "spa", "http" (and new ones can be added). Routers specify the handler file (entrypoint) to use for their base -prefixed routes. They can also specify a dir and style in some router modes to include a file system router that is provided to the handler. Routers specify their bundler configuration via the build property. The routers tell the bundler what entry points to build, what vite plugins to use, etc. Examples | Framework | Category | Example | StackBlitz Link |
|-----------|----------|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| React | RSC | SPA | |
| | SPA | Basic | |
| | | MDX | |
| | | TanStack Router (Pages) | |
| | | TanStack Router (App) | |
| | | Wouter | |
| | SSR | Basic | |
| | | Basic w/Cloudflare | |
| | | TanStack Router (App) | |
| | | Wouter | |
| Solid | SPA | Basic | |
| | SSR | Basic | |
| | | Solid Router | |
| Vanilla | | SPA | |
| | | TRPC | | Goals Primary goal is to build the tools needed to build a NextJS or SolidStart style metaframework on top of vite without worrying about a lot of the wiring required to keep dev and prod working along with SSR, SPA, RSC, and all the other acronyms, etc. On top of that, we should be able to deploy anywhere easily. Mostly trying to disappear for the user outside the app.js file. The surface layer we are intending to tackle:
1. Full stack builds (handle manifest stuff to figure out what assets to load at prod runtime)
2. Dev time asset handling (avoiding FOUC in SSR frameworks) and smoothing over some of vite's dev/prod mismatching behaviours by providing common manifest APIs that work in dev and prod the same way
3. File system router (not any specific file system conventions, just an API for interfacing with FileSystemRouters and utils to implement your conventions in them)
4. Building the server, and providing a simple opaque handler API to control the server
5. Adapter stuff to deploy to various platforms with support for all the features they provide
6. Not to abstract away the platforms. Let people use what they want to the fullest
7. Have little opinion about how the app should be authored or structured Roadmap [ ] vinxi deploy [x] hooks throughout the app lifecycle: dev: app:created, app:started, router:created Try it out bash
npm install vinxi React SSR ```ts
import reactRefresh from "@vitejs/plugin-react";
import { createApp } from "vinxi"; export default createApp({
routers: [
{
name: "public",
type: "static",
dir: "./public",
},
{
name: "client",
type: "client",
handler: "./app/client.tsx",
target: "browser",
plugins: () => [reactRefresh()],
base: "/_build",
},
{
name: "ssr",
type: "http",
handler: "./app/server.tsx",
target: "server",
},
],
});
``` Solid SSR ```ts
import { createApp } from "vinxi";
import solid from "vite-plugin-solid"; export default createApp({
routers: [
{
name: "public",
type: "static",
dir: "./public",
},
{
name: "client",
type: "client",
handler: "./app/client.tsx",
target: "browser",
plugins: () => [solid({ ssr: true })],
base: "/_build",
},
{
name: "ssr",
type: "http",
handler: "./app/server.tsx",
target: "server",
plugins: () => [solid({ ssr: true })],
},
],
});
```;The Full Stack JavaScript SDK;[] | nksaraf/vinxi |
MLGroupJLU/LLM-eval-survey;A collection of papers and resources related to evaluations on large language models. Yupeng Chang *1 Xu Wang *1 Jindong Wang #2 Yuan Wu #1 Kaijie Zhu 3 Hao Chen 4 Linyi Yang 5 Xiaoyuan Yi 2 Cunxiang Wang 5 Yidong Wang 6 Wei Ye 6 Yue Zhang 5 Yi Chang 1 Philip S. Yu 7 Qiang Yang 8 Xing Xie 2 1 Jilin University, 2 Microsoft Research, 3 Institute of Automation, CAS 4 Carnegie Mellon University, 5 Westlake University, 6 Peking University, 7 University of Illinois, 8 Hong Kong University of Science and Technology (*: Co-first authors, #: Co-corresponding authors) Papers and resources for LLMs evaluation The papers are organized according to our survey: A Survey on Evaluation of Large Language Models . NOTE: As we cannot update the arXiv paper in real time, please refer to this repo for the latest updates and the paper may be updated later. We also welcome any pull request or issues to help us make this survey perfect. Your contributions will be acknowledged in acknowledgements . Related projects:
- Prompt benchmark for large language models: [ PromptBench: robustness evaluation of LLMs ]
- Evlauation of large language models: [ LLM-eval ] Table of Contents News and Updates What to evaluate Natural language processing Robustness, ethics, biases, and trustworthiness Social science Natural science and engineering Medical applications Agent applications Other applications Where to evaluate Contributing Citation Acknowledgments News and updates [12/07/2023] The second version of the paper is released on arXiv, along with Chinese blog . [07/07/2023] The first version of the paper is released on arXiv: A Survey on Evaluation of Large Language Models . What to evaluate Natural language processing Natural language understanding Sentiment analysis A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Holistic Evaluation of Language Models. Percy Liang et al. arXiv 2022. [ paper ] Can ChatGPT forecast stock price movements? return predictability and large language models. Alejandro Lopez-Lira et al. SSRN 2023. [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. Zengzhi Wang et al. arXiv 2023. [ paper ] Sentiment analysis in the era of large language models: A reality check. Wenxuan Zhang et al. arXiv 2023. [ paper ] Text classification Holistic evaluation of language models. Percy Liang et al. arXiv 2022. [ paper ] Leveraging large language models for topic classification in the domain of public affairs. Alejandro Peña et al. arXiv 2023. [ paper ] Large language models can rate news outlet credibility. Kai-Cheng Yang et al. arXiv 2023. [ paper ] Natural language inference A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Can Large Language Models Infer and Disagree like Humans? Noah Lee et al. arXiv 2023. [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] Others Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark. Minje Choi et al. arXiv 2023. [ paper ] The two word test: A semantic benchmark for large language models. Nicholas Riccardi et al. arXiv 2023. [ paper ] EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models. Zhengwei Tao et al. arXiv 2023. [ paper ] Reasoning A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models. Ning Bian et al. arXiv 2023. [ paper ] Chain-of-Thought Hub: A continuous effort to measure large language models' reasoning performance. Yao Fu et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Large Language Models Are Not Abstract Reasoners, Gaël Gendron et al. arXiv 2023. [ paper ] Can large language models reason about medical questions? Valentin Liévin et al. arXiv 2023. [ paper ] Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4. Hanmeng Liu et al. arXiv 2023. [ paper ] Mathematical Capabilities of ChatGPT. Simon Frieder et al. arXiv 2023. 
[ paper ] Human-like problem-solving abilities in large language models using ChatGPT. Graziella Orrù et al. Front. Artif. Intell. 2023 [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] Testing the general deductive reasoning capacity of large language models using OOD examples. Abulhair Saparov et al. arXiv 2023. [ paper ] MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic. Damien Sileo et al. arXiv 2023. [ paper ] Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. Zhaofeng Wu arXiv 2023. [ paper ] Are large language models really good logical reasoners? a comprehensive evaluation from deductive, inductive and abductive views. Fangzhi Xu et al. arXiv 2023. [ paper ] Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective. Yan Zhuang et al. arXiv 2023. [ paper ] Autoformalization with Large Language Models. Yuhuai Wu et al. NeurIPS 2022. [ paper ] Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning. Beichen Zhang et al. arXiv 2023. [ paper ] StructGPT: A General Framework for Large Language Model to Reason over Structured Data. Jinhao Jiang et al. arXiv 2023. [ paper ] Unifying Large Language Models and Knowledge Graphs: A Roadmap. Shirui Pan et al. arXiv 2023. [ paper ] Natural language generation Summarization A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Holistic Evaluation of Language Models. Percy Liang et al. arXiv 2022. [ paper ] ChatGPT vs Human-authored text: Insights into controllable text summarization and sentence style transfer. Dongqi Pu et al. arXiv 2023. [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] Dialogue A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models. Yen-Ting Lin et al. arXiv 2023. [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset. Lianmin Zheng et al. arXiv 2023. [ paper ] Translation A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential. Qing Lyu et al. arXiv 2023. [ paper ] Document-Level Machine Translation with Large Language Models. Longyue Wang et al. arXiv 2023. [ paper ] Case Study of Improving English-Arabic Translation Using the Transformer Model. Donia Gamal et al. ijicis 2023. [ paper ] Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models. Zaid Alyafeai et al. arXiv 2023. [ paper ] Question answering Benchmarking Foundation Models with Language-Model-as-an-Examiner. Yushi Bai et al. arXiv 2023. 
[ paper ] A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models. Ning Bian et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] Holistic Evaluation of Language Models. Percy Liang et al. arXiv 2022. [ paper ] Is ChatGPT a general-purpose natural language processing task solver? Chengwei Qin et al. arXiv 2023. [ paper ] Others Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study. Yi Chen et al. arXiv 2023. [ paper ] INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models. Yew Ken Chia et al. arXiv 2023. [ paper ] ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer. Dongqi Pu et al. arXiv 2023. [ paper ] Multilingual tasks Benchmarking Arabic AI with large language models. Ahmed Abdelali et al. arXiv 2023. [ paper ] MEGA: Multilingual Evaluation of Generative AI. Kabir Ahuja et al. arXiv 2023. [ paper ] A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv 2023. [ paper ] ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning. Viet Dac Lai et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models. Wenxuan Zhang et al. arXiv 2023. [ paper ] Measuring Massive Multitask Chinese Understanding. Hui Zeng et al. arXiv 2023. [ paper ] CMMLU: Measuring massive multitask language understanding in Chinese. Haonan Li et al. arXiv 2023. [ paper ] Factuality TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models. Zorik Gekhman et al. arXiv 2023. [ paper ] TRUE: Re-evaluating Factual Consistency Evaluation. Or Honovich et al. arXiv 2022. [ paper ] SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models. Potsawee Manakul et al. arXiv 2023. [ paper ] FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. Sewon Min et al. arXiv 2023. [ paper ] Measuring and Modifying Factual Knowledge in Large Language Models. Pouya Pezeshkpour arXiv 2023. [ paper ] Evaluating Open-QA Evaluation. Cunxiang Wang arXiv 2023. [ paper ] Robustness, ethics, biases, and trustworthiness Robustness A Survey on Out-of-Distribution Evaluation of Neural NLP Models. Xinzhe Li et al. arXiv 2023. [ paper ] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. Fuxiao Liu et al. arXiv 2023. [ paper ] Generalizing to Unseen Domains: A Survey on Domain Generalization. Jindong Wang et al. TKDE 2022. [ paper ] On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective. Jindong Wang et al. arXiv 2023. [ paper ] GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-Distribution Generalization Perspective. Linyi Yang et al. arXiv 2022. [ paper ] On Evaluating Adversarial Robustness of Large Vision-Language Models. Yunqing Zhao et al. arXiv 2023. 
[ paper ] PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. Kaijie Zhu et al. arXiv 2023. [ paper ] On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex. Terry Yue Zhuo et al. arXiv 2023. [ paper ] Ethics and bias Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study. Yong Cao et al. C3NLP@EACL 2023. [ paper ] Toxicity in ChatGPT: Analyzing persona-assigned language models. Ameet Deshpande et al. arXiv 2023. [ paper ] BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. Jwala Dhamala et al. FAccT 2021 [ paper ] Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models. Emilio Ferrara arXiv 2023. [ paper ] RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. Samuel Gehman et al. EMNLP 2020. [ paper ] The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. Jochen Hartmann et al. arXiv 2023. [ paper ] Aligning AI With Shared Human Values. Dan Hendrycks et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] BBQ: A hand-built bias benchmark for question answering. Alicia Parrish et al. ACL 2022. [ paper ] The Self-Perception and Political Biases of ChatGPT. Jérôme Rutinowski et al. arXiv 2023. [ paper ] Societal Biases in Language Generation: Progress and Challenges. Emily Sheng et al. ACL-IJCNLP 2021. [ paper ] Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity. Gabriel Simmons et al. arXiv 2022. [ paper ] Large Language Models are not Fair Evaluators. Peiyi Wang et al. arXiv 2023. [ paper ] Exploring AI Ethics of ChatGPT: A Diagnostic Analysis. Terry Yue Zhuo et al. arXiv 2023. [ paper ] CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models. Jiaxu Zhao et al. ACL 2023. [ paper ] Trustworthiness Human-Like Intuitive Behavior and Reasoning Biases Emerged in Language Models -- and Disappeared in GPT-4. Thilo Hagendorff et al. arXiv 2023. [ paper ] DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. Boxin Wang et al. arXiv 2023. [ paper ] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. Fuxiao Liu et al. arXiv 2023. [ paper ] Evaluating Object Hallucination in Large Vision-Language Models. Yifan Li et al. arXiv 2023. [ paper ] A Survey of Hallucination in Large Foundation Models. Vipula Rawte et al. arXiv 2023. [ paper ] Ask Again, Then Fail: Large Language Models' Vacillations in Judgement. Qiming Xie et al. arXiv 2023. [ paper ] Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. Yue Zhang et al. arXiv 2023. [ paper ] Social science How ready are pre-trained abstractive models and LLMs for legal case judgement summarization. Aniket Deroy et al. arXiv 2023. [ paper ] Baby steps in evaluating the capacities of large language models. Michael C. Frank Nature Reviews Psychology 2023. [ paper ] Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence. John J. Nay et al. arXiv 2023. [ paper ] Large language models can be used to estimate the ideologies of politicians in a zero-shot learning setting. Patrick Y. Wu et al. arXiv 2023. [ paper ] Can large language models transform computational social science? Caleb Ziems et al. 
arXiv 2023. [ paper ] Natural science and engineering Mathematics Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark for Large Language Models. Daman Arora et al. arXiv 2023. [ paper ] Sparks of Artificial General Intelligence: Early experiments with GPT-4. Sébastien Bubeck et al. arXiv 2023. [ paper ] Evaluating Language Models for Mathematics through Interactions. Katherine M. Collins et al. arXiv 2023. [ paper ] Investigating the effectiveness of ChatGPT in mathematical reasoning and problem solving: Evidence from the Vietnamese national high school graduation examination. Xuan-Quy Dao et al. arXiv 2023. [ paper ] A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets, Laskar et al. ACL 2023 (Findings). [ paper ] CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? Tianwen Wei et al. arXiv 2023. [ paper ] An empirical study on challenging math problem solving with GPT-4. Yiran Wu et al. arXiv 2023. [ paper ] How well do Large Language Models perform in Arithmetic Tasks? Zheng Yuan et al. arXiv 2023. [ paper ] MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. Longhui Yu et al. arXiv 2023. [ paper ] General science Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark for Large Language Models. Daman Arora et al. arXiv 2023. [ paper ] Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. Castro Nascimento CM et al. JCIM 2023. [ paper ] What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. Taicheng Guo et al. arXiv 2023. [ paper ][ GitHub ] Engineering Sparks of Artificial General Intelligence: Early Experiments with GPT-4. Sébastien Bubeck et al. arXiv 2023. [ paper ] Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. Jiawei Liu et al. arXiv 2023. [ paper ] Understanding the Capabilities of Large Language Models for Automated Planning. Vishal Pallagani et al. arXiv 2023. [ paper ] ChatGPT: A study on its utility for ubiquitous software engineering tasks. Giriprasad Sridhara et al. arXiv 2023. [ paper ] Large language models still can't plan (A benchmark for LLMs on planning and reasoning about change). Karthik Valmeekam et al. arXiv 2022. [ paper ] On the Planning Abilities of Large Language Models – A Critical Investigation. Karthik Valmeekam et al. arXiv 2023. [ paper ] Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective. Yan Zhuang et al. arXiv 2023. [ paper ] Medical applications Medical Queries The promise and peril of using a large language model to obtain clinical information: ChatGPT performs strongly as a fertility counseling tool with limitation. Joseph Chervenak M.D. et al. Fertility and Sterility 2023. [ paper ] Analysis of large-language model versus human performance for genetics questions. Dat Duong et al. European Journal of Human Genetics 2023. [ paper ] Evaluation of AI Chatbots for Patient-Specific EHR Questions. Alaleh Hamidi et al. arXiv 2023. [ paper ] Evaluating Large Language Models on a Highly-specialized Topic, Radiation Oncology Physics. Jason Holmes et al. arXiv 2023. [ paper ] Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers. Israt Jahan et al. arXiv 2023. [ paper ] Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model. Douglas Johnson et al. Residential Square 2023. 
[ paper ] Assessing the Accuracy of Responses by the Language Model ChatGPT to Questions Regarding Bariatric Surgery. Jamil S. Samaan et al. Obesity Surgery 2023. [ paper ] Trialling a Large Language Model (ChatGPT) in General Practice With the Applied Knowledge Test: Observational Study Demonstrating Opportunities and Limitations in Primary Care. Arun James Thirunavukarasu et al. JMIR Med Educ. 2023. [ paper ] CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care. Tong Xiang et al. arXiv 2023. [ paper ] Medical examination How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. Aidan Gilson et al. JMIR Med Educ. 2023. [ paper ] Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. Tiffany H. Kung et al. PLOS Digit Health. 2023. [ paper ] Medical assistants Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. Marco Cascella et al. Journal of Medical Systems 2023. [ paper ] covLLM: Large Language Models for COVID-19 Biomedical Literature. Yousuf A. Khan et al. arXiv 2023. [ paper ] Evaluating the use of large language model in identifying top research questions in gastroenterology. Adi Lahat et al. Scientific reports 2023. [ paper ] Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential. Qing Lyu et al. arXiv 2023. [ paper ] ChatGPT goes to the operating room: Evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models. Namkee Oh et al. Ann Surg Treat Res. 2023. [ paper ] Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today. Zhuo Wang et al. arXiv 2023. [ paper ] Agent applications Language Is Not All You Need: Aligning Perception with Language Models. Shaohan Huang et al. arXiv 2023. [ paper ] MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. Ehud Karpas et al. [ paper ] The Unsurprising Effectiveness of Pre-Trained Vision Models for Control. Simone Parisi et al. ICMl 2022. [ paper ] Tool Learning with Foundation Models. Qin et al. arXiv 2023. [ paper ] ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. Qin et al. arXiv 2023. [ paper ] Toolformer: Language Models Can Teach Themselves to Use Tools. Timo Schick et al. arXiv 2023. [ paper ] HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face. Yongliang Shen et al. arXiv 2023. [ paper ] Other applications Education Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT. Wei Dai et al. ICALT 2023. [ paper ] Can ChatGPT pass high school exams on English Language Comprehension? Joost de Winter Researchgate. [ paper ] Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests. Arto Hellas et al. arXiv 2023. [ paper ] Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction. Rose E. Wang et al. arXiv 2023. [ paper ] CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? Tianwen Wei et al. arXiv 2023. [ paper ] Search and recommendation Uncovering ChatGPT's Capabilities in Recommender Systems. Sunhao Dai et al. arXiv 2023. 
[ paper ] Recommender Systems in the Era of Large Language Models (LLMs). Wenqi Fan et al. Researchgate. [ paper ] Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights. Ruyu Li et al. arXiv 2023. [ paper ] Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. Weiwei Sun et al. arXiv 2023. [ paper ] ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience. Ruiyun Xu et al. arXiv 2023. [ paper ] Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited. Zheng Yuan et al. arXiv 2023. [ paper ] Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation. Jizhi Zhang et al. arXiv 2023. [ paper ] Zero-shot recommendation as language modeling. Damien Sileo et al. ECIR 2022. [ paper ] Personality testing ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models. Sophie Jentzsch et al. arXiv 2023. [ paper ] Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models. Tian Liang et al. arXiv 2023. [ paper ] Personality Traits in Large Language Models. Mustafa Safdari et al. arXiv 2023. [ paper ] Have Large Language Models Developed a Personality?: Applicability of Self-Assessment Tests in Measuring Personality in LLMs. Xiaoyang Song et al. arXiv 2023. [ paper ] Emotional Intelligence of Large Language Models. Xuena Wang et al. arXiv 2023. [ paper ] Specific tasks ChatGPT and Other Large Language Models as Evolutionary Engines for Online Interactive Collaborative Game Design. Pier Luca Lanzi et al. arXiv 2023. [ paper ] An Evaluation of Log Parsing with ChatGPT. Van-Hoang Le et al. arXiv 2023. [ paper ] PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. Yidong Wang et al. arXiv 2023. [ paper ] Where to evaluate The paper lists several popular benchmarks. For better summarization, these benchmarks are divided into two categories: general language task benchmarks and specific downstream task benchmarks. NOTE: We may miss some benchmarks. Your suggestions are highly welcomed! | Benchmark | Focus | Domain | Evaluation Criteria |
|-------------|------------------------------------|--------------------------|-----------------------------------------------|
| SOCKET [ paper ] | Social knowledge | Specific downstream task | Social language understanding |
| MME [ paper ] | Multimodal LLMs | Multi-modal task | Ability of perception and cognition |
| Xiezhi [ paper ][ GitHub ] | Comprehensive domain knowledge | General language task | Overall performance across multiple benchmarks |
| Choice-75 [ paper ][ GitHub ] | Script learning | Specific downstream task | Overall performance of LLMs |
| CUAD [ paper ] | Legal contract review | Specific downstream task | Legal contract understanding |
| TRUSTGPT [ paper ] | Ethic | Specific downstream task | Toxicity, bias, and value-alignment |
| MMLU [ paper ] | Text models | General language task | Multitask accuracy |
| MATH [ paper ] | Mathematical problem | Specific downstream task | Mathematical ability |
| APPS [ paper ]|Coding challenge competence | Specific downstream task | Code generation ability|
| CELLO[ paper ][ GitHub ] |Complex instructions | Specific downstream task | Count limit, answer format, task-prescribed phrases and input-dependent query|
| C-Eval [ paper ][ GitHub ] | Chinese evaluation | General language task | 52 Exams in a Chinese context |
| EmotionBench [ paper ] | Empathy ability | Specific downstream task | Emotional changes |
| OpenLLM [ Link ] | Chatbots | General language task | Leaderboard rankings |
| DynaBench [ paper ] | Dynamic evaluation | General language task | NLI, QA, sentiment, and hate speech |
| Chatbot Arena [ Link ] | Chat assistants | General language task | Crowdsourcing and Elo rating system |
| AlpacaEval [ GitHub ] | Automated evaluation | General language task | Metrics, robustness, and diversity |
| CMMLU [ paper ][ GitHub ] | Chinese multi-tasking | Specific downstream task | Multi-task language understanding capabilities|
| HELM [ paper ][ Link ] | Holistic evaluation | General language task | Multi-metric |
| API-Bank [ paper ] | Tool-augmented | Specific downstream task | API call, response, and planning |
| M3KE [ paper ] | Multi-task | Specific downstream task | Multi-task accuracy |
| MMBench [ paper ][ GitHub ] | Large vision-language models(LVLMs) | Multi-modal task | Multifaceted capabilities of VLMs |
| SEED-Bench [ paper ][ GitHub ] | Multi-modal Large Language Models | Multi-modal task | Generative understanding of MLLMs |
| ARB [ paper ] | Advanced reasoning ability | Specific downstream task | Multidomain advanced reasoning ability|
| BIG-bench [ paper ][ GitHub ] | Capabilities and limitations of LMs | General language task | Model performance and calibration |
| MultiMedQA [ paper ] | Medical QA | Specific downstream task | Accuracy and human evaluation|
| CVALUES [ paper ] [ GitHub ] | Safety and responsibility | Specific downstream task | Alignment ability of LLMs|
| LVLM-eHub [ paper ] | LVLMs | Multi-modal task | Multimodal capabilities of LVLMs |
| ToolBench [ GitHub ] | Software tools | Specific downstream task | Execution success rate |
| FRESHQA [ paper ] [ GitHub ] | Dynamic QA| Specific downstream task |Correctness and hallucination|
| CMB [ paper ] [ Link ] | Chinese comprehensive medicine| Specific downstream task |Expert evaluation and automatic evaluation|
| PandaLM [ paper ] [ GitHub ] | Instruction tuning | General language task | Winrate judged by PandaLM |
| MINT [ paper ] [ GitHub ] | Multi-turn interaction, tools and language feedback | Specific downstream task | Success rate with k -turn budget SR k |
| Dialogue CoT [ paper ] [ GitHub ] | In-depth dialogue | Specific downstream task | Helpfulness and acceptness of LLMs|
| BOSS [ paper ] [ GitHub ] | OOD robustness in NLP | General language task | OOD robustness |
| MM-Vet [ paper ] [ GitHub ] | Complicated multi-modal tasks | Multi-modal task | Integrated vision-language capabilities|
| LAMM [ paper ] [ GitHub ] | Multi-modal point clouds | Multi-modal task | Task-specific metrics|
| GLUE-X [ paper ] [ GitHub ] | OOD robustness for NLU tasks | General language task | OOD robustness |
| KoLA [ paper ] | Knowledge-oriented evaluation | General language task | Self-contrast metrics |
| AGIEval [ paper ] | Human-centered foundational models | General language task | General |
| PromptBench [ paper ] [ GitHub ] | Adversarial prompt resilience | General language task | Adversarial robustness |
| MT-Bench [ paper ] | Multi-turn conversation | General language task | Winrate judged by GPT-4 |
| M3Exam [ paper ] [ GitHub ] | Multilingual, multimodal and multilevel | Specific downstream task | Task-specific metrics |
| GAOKAO-Bench [ paper ] | Chinese Gaokao examination | Specific downstream task | Accuracy and scoring rate |
| SafetyBench [ paper ] [ GitHub ] | Safety | Specific downstream task | Safety abilities of LLMs |
| LLMEval² [ paper ] [ Link ] | LLM Evaluator | General language task | Accuracy, Macro-F1 and Kappa Correlation Coefficient |
| FinanceBench [ paper ] [ GitHub ] | Finance Question and Answering | Specific downstream task | Accuracy compared with human annotated labels Contributing We welcome contributions to LLM-eval-survey! If you'd like to contribute, please follow these steps: Fork the repository. Create a new branch with your changes. Submit a pull request with a clear description of your changes. You can also open an issue if you have anything to add or comment. Citation If you find this project useful in your research or work, please consider citing it: @article{chang2023survey,
title={A Survey on Evaluation of Large Language Models},
author={Chang, Yupeng and Wang, Xu and Wang, Jindong and Wu, Yuan and Zhu, Kaijie and Chen, Hao and Yang, Linyi and Yi, Xiaoyuan and Wang, Cunxiang and Wang, Yidong and Ye, Wei and Zhang, Yue and Chang, Yi and Yu, Philip S. and Yang, Qiang and Xie, Xing},
journal={arXiv preprint arXiv:2307.03109},
year={2023}
} Acknowledgements Tahmid Rahman ( @tahmedge ) for PR#1 . Hao Zhao for suggestions on new benchmarks. Chenhui Zhang for suggestions on robustness, ethics, and trustworthiness. Damien Sileo ( @sileod ) for PR#2 . Peiyi Wang ( @Wangpeiyi9979 ) for issue#3 . Zengzhi Wang for sentiment analysis. Kenneth Leung ( @kennethleungty ) for multiple PRs (#4, #5, #6) @Aml-Hassan-Abd-El-hamid for PR#7 . @taichengguo for issue#9;The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".;benchmark,evaluation,large-language-models,llm,llms,model-assessment | MLGroupJLU/LLM-eval-survey |
evilsocket/legba;Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve
better performances and stability while consuming less resources than similar tools (see the benchmark below). For the building instructions, usage and the complete list of options check the project Wiki . Supported Protocols/Features: AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, Samba, SSH / SFTP, SMTP, Socks5, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC. Benchmark Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula . Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performances. | Test Name | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time |
| --------- | ----------- | ---------- | ----------- | ---------- |
| HTTP basic auth | 16 | 7.100s | 10 | 1.560s (🚀 4.5x faster) |
| HTTP POST login (wordpress) | 16 | 14.854s | 10 | 5.045s (🚀 2.9x faster) |
| SSH | 16 | 7m29.85s * | 10 | 8.150s (🚀 55.1x faster) |
| MySQL | 4 ** | 9.819s | 4 ** | 2.542s (🚀 3.8x faster) |
| Microsoft SQL | 16 | 7.609s | 10 | 4.789s (🚀 1.5x faster) | * While this result would suggest a default delay between connection attempts used by Hydra, I've tried to study the source code to find such a delay but to my knowledge there's none. For some reason it's simply very slow. ** For MySQL hydra automatically reduces the amount of tasks to 4, therefore legba's concurrency level has been adjusted to 4 as well. License Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license .;A multiprotocol credentials bruteforcer / password sprayer and enumerator. 🥷;[] | evilsocket/legba
andrewnguonly/Lumos;Lumos A RAG LLM co-pilot for browsing the web, powered by local LLMs. This Chrome extension is powered by Ollama . Inference is done on your local machine without any remote server support. However, due to security constraints in the Chrome extension platform, the app does rely on local server support to run the LLM. This app is inspired by the Chrome extension example provided by the Web LLM project and the local LLM examples provided by LangChain . Ollama (Home) Ollama (GitHub) Lumos. Nox. Lumos. Nox. Use Cases Summarize long threads on issue tracking sites, forums, and social media sites. Summarize news articles. Ask questions about reviews on business and product pages. Ask questions about long, technical documentation. ... what else? Ollama Server A local Ollama server is needed for the embedding database and LLM inference. Download and install Ollama and the CLI here . Pull Image Example: ollama pull llama2 Start Server Example: OLLAMA_ORIGINS=chrome-extension://* ollama serve Terminal output: 2023/11/19 20:55:16 images.go:799: total blobs: 6
2023/11/19 20:55:16 images.go:806: total unused blobs removed: 0
2023/11/19 20:55:16 routes.go:777: Listening on 127.0.0.1:11434 (version 0.1.10) Note: The environment variable OLLAMA_ORIGINS must be set to chrome-extension://* to allow requests originating from the Chrome extension. The following error will occur in the Chrome extension if OLLAMA_ORIGINS is not set properly. Access to fetch at 'http://localhost:11434/api/tags' from origin 'chrome-extension://<extension_id>' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. macOS Run launchctl setenv to set OLLAMA_ORIGINS . launchctl setenv OLLAMA_ORIGINS "chrome-extension://*" Setting environment variables on Mac (Ollama) Docker The Ollama server can also be run in a Docker container . The container should have the OLLAMA_ORIGINS environment variable set to chrome-extension://* . Run docker run with the -e flag to set the OLLAMA_ORIGINS environment variable: docker run -e OLLAMA_ORIGINS="chrome-extension://*" -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama Chrome Extension In the project directory, you can run: npm test Launches the test runner in the interactive watch mode.\
See the section about running tests for more information. npm run lint Runs eslint and prettier on src and __tests__ files. npm run build Builds the app for production to the dist folder.\
It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.\
Your app is ready to be deployed! See the section about deployment for more information. Load Unpacked Extension (Install) https://developer.chrome.com/docs/extensions/mv3/getstarted/development-basics/#load-unpacked Keyboard Shortcut Create a keyboard shortcut to make Lumos easily accessible. Navigate to chrome://extensions/shortcuts . Configure a shortcut for Lumos ( Activate the extension ). For example, ⌘L ( command key + L ). Releases If you don't have npm installed, you can download the pre-built extension package from the Releases page. Lumos Options Right-click on the extension icon and select Options to access the extension's Options page . Ollama Model : Select desired model (e.g. llama2 ) Ollama Embedding Model : Select desired embedding model (e.g. nomic-embed-text ). Specify the OLLAMA_MAX_LOADED_MODELS=2 environment variable when starting the Ollama server to allow multiple models to be loaded into memory. Ollama Host : Select desired host (defaults to http://localhost:11434 ) Vector Store TTL (minutes) : Number of minutes to store a URL's content in the vector store cache. Content Parser Config : Lumos's default content parser will extract all text content between a page's <body></body> tag. To customize the content parser, add an entry to the configuration. Enable/Disable Tools : Enable or disable individual tools. If a tool is enabled, a custom prefix trigger (e.g. "calc:") can be specified to override the app's internal prompt classification mechanism. Enable/Disable Dark Arts : 😈 Content Parser Config Each URL path can have its own content parser. The content parser config for the longest URL path will be matched. chunkSize : Number of characters to chunk page content into for indexing into RAG vectorstore chunkOverlap : Number of characters to overlap in chunks for indexing into RAG vectorstore selectors : document.querySelector() queries to perform to retrieve page content selectorsAll : document.querySelectorAll() queries to perform to retrieve page content For example, given the following config, if the URL path of the current tab is domain.com/path1/subpath1/subsubpath1 , then the config for domain.com/path1/subpath1 will be used (i.e. chunkSize=600 ). json
{
"domain.com/path1/subpath1": {
"chunkSize": 600,
"chunkOverlap": 200,
"selectors": [
"#id"
],
"selectorsAll": []
},
"domain.com/path1": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [
".className"
],
"selectorsAll": []
}
} See docs for How to Create a Custom Content Parser . See documentation for querySelector() and querySelectorAll() to confirm all querying capabilities. Example: json
{
"default": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [
"body"
],
"selectorsAll": []
},
"medium.com": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [
"article"
],
"selectorsAll": []
},
"reddit.com": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [],
"selectorsAll": [
"shreddit-comment"
]
},
"stackoverflow.com": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [
"#question-header",
"#mainbar"
],
"selectorsAll": []
},
"wikipedia.org": {
"chunkSize": 2000,
"chunkOverlap": 500,
"selectors": [
"#bodyContent"
],
"selectorsAll": []
},
"yelp.com": {
"chunkSize": 500,
"chunkOverlap": 0,
"selectors": [
"#location-and-hours",
"#reviews"
],
"selectorsAll": []
}
} Highlighted Content Alternatively, if content is highlighted on a page (e.g. highlighted text), that content will be parsed instead of the content produced from the content parser configuration. Note: Content that is highlighted will not be cached in the vector store cache. Each subsequent prompt containing highlighted content will generate new embeddings. Shortcuts cmd + c : Copy last message to clipboard. cmd + b : Load clipboard text as a file attachment. cmd + j : Toggle Disable content parsing checkbox. cmd + k : Clear all messages. cmd + ; : Open/close Chat History panel. ctrl + c : Cancel request (LLM request/streaming or embeddings generation) ctrl + x : Remove file attachment. ctrl + r : Regenerate last LLM response. Multimodal Lumos supports multimodal models! Images that are present on the current page will be downloaded and bound to the model for prompting. See documentation and examples here . File Attachments File attachments can be uploaded to Lumos. The contents of a file will be parsed and processed through Lumos's RAG workflow (similar to processing page content). By default, the text content of a file will be parsed if the extension type is not explicitly listed below. Supported extension types:
- .csv - .json - .pdf - any plain text file format ( .txt , .md , .py , etc) Note: If an attachment is present, page content will not be parsed. Remove the file attachment to resume parsing page content. Image Files Image files will be processed through Lumos's multimodal workflow (requires multimodal model). Supported image types:
- .jpeg , .jpg - .png Clipboard Content Clipboard content can be uploaded as a file attachment. Use the cmd + b shortcut key to load clipboard text as a file attachment. Tools (Experimental) Lumos invokes Tools automatically based on the provided prompt. See documentation and examples here . Reading Local LLM in the Browser Powered by Ollama Local LLM in the Browser Powered by Ollama (Part 2) Let’s Normalize Online, In-Memory RAG! (Part 3) Supercharging If-Statements With Prompt Classification Using Ollama and LangChain (Part 4) Bolstering LangChain’s MemoryVectorStore With Keyword Search (Part 5) A Guide to Gotchas with LangChain Document Loaders in a Chrome Extension (Part 6);A RAG LLM co-pilot for browsing the web, powered by local LLMs;chrome-extension,langchain,langchain-js,llm,ollama,react,typescript,webpack | andrewnguonly/Lumos |
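The following is a minimal, illustrative Python sketch of the longest-URL-path matching rule described in the Content Parser Config section above. Lumos itself is written in TypeScript, and the function name and config constant here are assumptions made for illustration, not the extension's actual implementation.

```python
# Illustrative only: pick the content parser config whose key is the longest
# prefix of the current tab's URL path, falling back to "default".
CONFIG = {
    "default": {"chunkSize": 500, "chunkOverlap": 0, "selectors": ["body"], "selectorsAll": []},
    "domain.com/path1": {"chunkSize": 500, "chunkOverlap": 0, "selectors": [".className"], "selectorsAll": []},
    "domain.com/path1/subpath1": {"chunkSize": 600, "chunkOverlap": 200, "selectors": ["#id"], "selectorsAll": []},
}

def match_parser_config(url_path: str, config: dict = CONFIG) -> dict:
    """Return the config entry with the longest key that prefixes url_path."""
    candidates = [key for key in config if key != "default" and url_path.startswith(key)]
    if not candidates:
        return config.get("default", {})
    return config[max(candidates, key=len)]

# Matches the example above: the subpath1 config (chunkSize=600) wins.
print(match_parser_config("domain.com/path1/subpath1/subsubpath1"))
```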
aradzie/keybr.com;keybr.com is not (just) a typing test It's the smartest way to learn touch typing and improve your typing speed.
On the surface it looks pretty simple, it shows you a piece of text, you type it out.
The devil is in the details, keybr.com has a few unique features. keybr.com tracks every single key press and computes statistics for each individual keyboard key. It automatically generates lessons focusing on your weakest keys. It allows you to set your own target typing speed and tracks your progress toward this goal. It starts with a small list of the most frequent letters of your language. It adds more letters later when you reach the target speed with the previous letters. It can predict future and tell you how many more lessons to complete to reach the target speed. It shows you a beautiful profile page with lots of graphs and the details about your learning progress. It has plenty of modes and configuration options. Can I contribute? Yes! Give us a ⭐️. Help this project gain visibility and stand out. Report a bug. If something is not working, let us know. Suggest a feature. We are open to new ideas. Translate. If you want to see keybr.com in your language. Getting started. Launch a local instance of keybr.com, make a pull request. Add a keyboard. Add a custom keyboard to keybr.com Add a language. Add a custom language to keybr.com Join our Discord server . To discuss things in a less formal way. License Released under the GNU Affero General Public License v3.0.;The smartest way to learn touch typing and improve your typing speed.;typing,typing-tutor,keyboard,keyboard-layout | aradzie/keybr.com |
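As a rough illustration of the adaptive-lesson behaviour described above (practice biased toward the weakest keys, new letters unlocked once the target speed is reached), here is a hedged toy sketch in Python; the names, target value, and logic are placeholders and not keybr.com's actual algorithm or data model.

```python
# Toy sketch only: unlock the next most frequent letter once every unlocked
# key meets the target speed, and order practice toward the weakest keys.
TARGET_SPEED = 40.0  # placeholder target, in whatever per-key speed unit is tracked

def next_lesson_letters(per_key_speed: dict, unlocked: list, alphabet_by_frequency: list) -> list:
    if all(per_key_speed.get(k, 0.0) >= TARGET_SPEED for k in unlocked):
        for letter in alphabet_by_frequency:
            if letter not in unlocked:
                unlocked.append(letter)  # add one new letter to the lesson pool
                break
    # weakest keys first, so generated lessons over-sample them
    return sorted(unlocked, key=lambda k: per_key_speed.get(k, 0.0))

speeds = {"e": 45.0, "t": 38.0, "a": 50.0}
print(next_lesson_letters(speeds, ["e", "t", "a"], ["e", "t", "a", "o", "i", "n"]))
```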
rebelonion/Dantotsu;Dantotsu 🌟 Dantotsu is an Anilist only client. Dantotsu (断トツ; Dan-totsu) literally means "the best of the best" in Japanese. Try it out for yourself and be the judge! 🚀 STAR THIS REPOSITORY TO SUPPORT THE DEVELOPER AND ENCOURAGE THE DEVELOPMENT OF THE APPLICATION! WANT TO CONTRIBUTE? 🤝 All contributions are welcome, from code to documentation to graphics to design suggestions to bug reports. Please use GitHub to its fullest; contribute Pull Requests, contribute tutorials or other content - whatever you have to offer, we can use! You can come hang out with our awesome community, request new features, and report any bugs or issues at our Discord server too. 📣 OFFICIAL DISCORD SERVER 🚀 VISITORS LICENSE 📜 Dantotsu is licensed under the Unabandon Public License (UPL). More info can be found here.;Anilist client based on Saikou ;android-app,anime,anime-list,anime-search,manga-reader,kotlin,kotlin-android,manga | rebelonion/Dantotsu |
lmsqueezy/wedges;packages/wedges/README.md;An ever-expanding, open-source React UI library built with the Wedges Design System, Radix primitives, and Tailwind CSS.;components,lemon-squeezy,radix-ui,react,tailwindcss,ui,accessible-components | lmsqueezy/wedges |
gofr-dev/gofr;GoFr GoFr is an opinionated microservice development framework. Listed in CNCF Landscape . Visit https://gofr.dev for more details and documentation. 🎯 Goal Even though generic applications can be written using GoFr, our main focus is to simplify the development of microservices.
We focus on deployment in Kubernetes and aspire to provide out-of-the-box observability. 💡 Advantages/Features Simple API syntax REST Standards by default Configuration management Inbuilt Middlewares gRPC support HTTP service with support for Circuit Breaker and Health Check 👍 Contribute If you want to say thank you and/or support the active development of GoFr: Add a GitHub Star to the project. Write a review or tutorial on Medium , Dev.to or a personal blog. Visit CONTRIBUTING for details on submitting patches and the contribution workflow. After your PR is merged to the repo, fill in the Google form , and we will send you a GoFr T-Shirt and Stickers as a token of appreciation.;An opinionated Go framework for accelerated microservice development;go,golang,golang-framework,microservice-framework,microservice,server | gofr-dev/gofr
ykhli/local-ai-stack;Local AI Stack Make it possible for anyone to run a simple AI app that can do document Q&A 100% locally without having to swipe a credit card 💳. Based on AI Starter Kit . Have questions? Join AI Stack devs and find me in #local-ai-stack channel. Stack 🦙 Inference: Ollama 💻 VectorDB: Supabase pgvector 🧠 LLM Orchestration: Langchain.js 🖼️ App logic: Next.js 🧮 Embeddings generation: Transformer.js and all-MiniLM-L6-v2 Quickstart 1. Fork and Clone repo Fork the repo to your Github account, then run the following command to clone the repo: git clone git@github.com:[YOUR_GITHUB_ACCOUNT_NAME]/local-ai-stack.git 2. Install dependencies cd local-ai-stack
npm install 3. Install Ollama Instructions are here 4. Run Supabase locally Install Supabase CLI brew install supabase/tap/supabase Start Supabase Make sure you are under /local-ai-stack directory and run: supabase start 5. Fill in secrets cp .env.local.example .env.local Then get SUPABASE_PRIVATE_KEY by running supabase status Copy service_role key and save it as SUPABASE_PRIVATE_KEY in .env.local 6. Generate embeddings bash
node src/scripts/indexBlogLocal.mjs This script takes in all files from /blogs, generates embeddings using transformers.js , and stores the embeddings as well as metadata in Supabase. 7. Run app locally Now you are ready to test out the app locally! To do this, simply run npm run dev under the project root and visit http://localhost:3000 . 8. Deploy the app If you want to take the local-only app to the next level, feel free to follow instructions on AI Starter Kit for using Clerk, Pinecone/Supabase, OpenAI, Replicate and other cloud-based vendors. Refs & Credits AI SDK LangUI Tailwind CSS a16z AI starter kit https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pinecone https://js.langchain.com/docs/modules/models/llms/integrations#replicate https://js.langchain.com/docs/modules/chains/index_related_chains/retrieval_qa;A starter kit to build *local-only* AI apps that cost $0 to run -- starting with document Q&A. Written in Javascript;[] | ykhli/local-ai-stack
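The indexing step above runs in Node with Transformer.js. As a hedged, language-agnostic illustration of the same idea (chunk the files in /blogs, embed each chunk with all-MiniLM-L6-v2, and keep the vectors next to their metadata), here is a small Python sketch using the sentence-transformers library instead; it is not part of this repo, and the chunking scheme and output file are assumptions.

```python
# Illustrative sketch only -- the real project does this in Node with Transformer.js
# and stores the results in Supabase pgvector rather than a local JSON file.
import json
from pathlib import Path
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # same embedding model the stack uses

records = []
for path in Path("blogs").glob("*.md"):
    text = path.read_text(encoding="utf-8")
    chunks = [text[i:i + 500] for i in range(0, len(text), 500)]  # naive fixed-size chunking
    embeddings = model.encode(chunks)
    for chunk, vector in zip(chunks, embeddings):
        records.append({"source": path.name, "content": chunk, "embedding": vector.tolist()})

Path("embeddings.json").write_text(json.dumps(records))
print(f"indexed {len(records)} chunks")
```

In the real stack, each record would be inserted into a Supabase pgvector table so the app can run similarity search over it at query time.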
XueFuzhao/OpenMoE;[Homepage] | [Paper] | [Colab Demo] | [Huggingface] | [Discord] | [Twitter] | [Blog] OpenMoE OpenMoE is a project aimed at igniting the open-source MoE community! We are releasing a family of open-sourced Mixture-of-Experts (MoE) Large Language Models. Our project began in the summer of 2023. On August 22, 2023, we released the first batch of intermediate checkpoints (OpenMoE-base&8B), along with the data and code [Twitter] . Subsequently, the OpenMoE-8B training was completed in November 2023. After that, we embarked on explorations on the 34B scale model, which is still ongoing. As a small student team, instead of pursuing the best model with better data, computation, and human power, we devote to fully sharing our training data, strategies, model architecture, weights, and everything we have with the community. We hope this project will promote research on this promising field and invite more contributors to work on open-sourced MoE projects together! News [2024/01] 🔥 We release OpenMoE paper! We conducted an in-depth routing analysis and found many interesting stuff. Check it here ! [2024/01] 🔥 OpenMoE-8B-Chat is now available. We've provided a Colab inference demo for everyone to try, as well as a tutorial on converting JAX checkpoints to PyTorch checkpoints(Note: both require Colab Pro). [2023/11] 🔥 Thanks to Colossal AI! They released one PyTorch OpenMoE implementation including both training and inference with expert parallelism. [2023/08] 🔥 We released an intermediate OpenMoE-8B checkpoint (OpenMoE-v0.2) along with two other models. Check out the blog post . TODO List [x] PyTorch Implementation with Colossal AI [x] Continue Training to 1T tokens [ ] More Evaluation [ ] Paper Contents Model Weights Get Started Approach License Authors Citation Model Weights Currently, three models are released in total: OpenMoE-base, OpenMoE-8B/8B-Chat, and OpenMoE-34B(at 200B tokens). The table below lists the 8B/8B-Chat model that has completed training on 1.1T tokens. | Model Name | Description | #Param |Huggingface |
|----------------|-------------------------------------------------|----------|-------------|
| OpenMoE-8B(1.1T) | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-8B-Chat (1.1T+SFT) | OpenMoE-8B-1.1T supervised finetuned on the WildChat GPT-4 Subset |8B | Link | Besides, we also provide all our intermediate checkpoints(base, 8B, 34B) for research purposes. | Model Name | Description | #Param |Huggingface |
|----------------|-------------------------------------------------|----------|-------------|
| OpenMoE-34B-200B | 34B MoE with comparable FLOPs of a 7B LLaMA(No SFT) |34B | Link |
| OpenMoE-8B-200B | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-8B-400B | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-8B-600B | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-8B-800B | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-8B-1T | 8B MoE with comparable FLOPs of a 2B LLaMA(No SFT) |8B | Link |
| OpenMoE-base(128B) | A small MoE model for debugging only |637M | Link | | OpenLLaMA-base(128B) | A dense counterpart of OpenMoE-base |310M | Link | The base model, which was trained using 128 billion tokens, served primarily for debugging purposes. After validating the effectiveness of our model architecture, we did not pursue further training. Consequently, its performance might not be very good, and the checkpoints are not suitable for practical applications. Better performance can be observed from our 8B or 34B versions. The OpenMoE-8B with 4 MoE layers and 32 experts has been trained on 1.1T tokens. The SFT version has also been released after we finetuned the OpenMoE-8B-1.1T on the WildChat dataset's GPT-4 subset. The intermediate checkpoints at 200B, 400B, 600B, 800B, 1T tokens can be used to study the training dynamics of MoE architecture. We are still training our OpenMoE-34B, which is a MoE model with 8 MoE layers and 32 experts. We released the intermediate checkpoint trained on 200B tokens on Huggingface. If you are interested in the latest checkpoint, please feel free to drop Fuzhao an email (f.xue@u.nus.edu). Get Started Inference with PyTorch Our PyTorch implementation is supported by Colossal AI . You can install our forked version directly for easier setup:
```
# Install ColossalAI
git clone --branch my_openmoe https://github.com/Orion-Zheng/ColossalAI.git
pip install ./ColossalAI
python -m pip install -r ./ColossalAI/examples/language/openmoe/requirements.txt
``` Then, you can do inference by:
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

model_path = "ckpts/openmoe-8b-chat"
config = AutoConfig.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map='auto'
)
query = 'Question: How do I kill a process? Answer:'
prompt = f'''<<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>> [INST] {query} [/INST]'''
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')
sample = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(sample[0]))
``` We also provide a Colab tutorial demonstrating the jax checkpoint conversion and execution of PyTorch model inference. You can experiment with OpenMoE-8B-Chat on Colab directly by this (Note: both require Colab Pro).
- Running OpenMoE-8B requires ~49GB of memory in float32 or ~23GB in bfloat16. It can be executed on a Colab CPU High-RAM runtime or an A100-40GB runtime, both of which require Colab Pro.The float16 precision is not recommended because sometimes it will lead to performance degradation.
- Running the OpenMoE-34B requires ~89GB of memory in bfloat16 or ~180GB in float32. To perform inference on multiple devices/offloading model weights to RAM, please refer to the script here .
- A more detailed env setup script can be found here . Note: you don't need t5x and Jax dependency if you are using our huggingface ckpts Training with TPU/GPU On TPUs: Get a TPU-vm and run the following code on all TPUs. Researchers can apply TPU Research Cloud to get the TPU resource. git clone https://github.com/XueFuzhao/OpenMoE.git
bash OpenMoE/script/run_pretrain.sh On GPUs: ColossalAI provides a PyTorch + GPU implementation for OpenMoE and has optimized expert parallel strategies. However, we have recently noticed some issues #5163 , #5212 raised about convergence problems. We are actively following up on these concerns and will soon update our GPU training tutorials. Evaluation with TPU/GPU On GPUs: You can evaluate our model on MT-Bench by running the code below. git clone https://github.com/Orion-Zheng/FastChat.git
cd FastChat && pip install -e ".[model_worker,llm_judge]"
cd FastChat/fastchat/llm_judge
python gen_model_answer.py --model-path LOCAL_PATH_TO_MODEL_CKPT/openmoe_8b_chat_ckpt\
--model-id openmoe-chat\
--dtype bfloat16 On TPUs: Get a TPU-vm and run the following code to evaluate the model on the BIG-bench-Lite. git clone https://github.com/XueFuzhao/OpenMoE.git
bash OpenMoE/script/run_eval.sh Approach Data Before 780B tokens: 50% The RedPajama + 50% The Stack Dedup.
We use a high ratio of coding data to improve reasoning ability. After 780B tokens:
| dataset | Ratio (%) |
| -------------------------------| ----------- |
| redpajama_c4 | 15.0 |
| redpajama_wikipedia | 6.5 |
| wikipedia | 6.5 |
| redpajama_stackexchange | 2.5 |
| redpajama_arxiv | 4.5 |
| redpajama_book | 6.5 |
| redpajama_github | 5.0 |
| redpajama_common_crawl | 43.5 |
| the_stack_dedup | 10.0 | We found the model tends to learn code faster than language, so we decided to reduce the coding data at the later stage of training. Below are scripts to generate TFDS for pre-training datasets: The RedPajama: https://github.com/Orion-Zheng/redpajama_tfds The-Stack-Dedup: https://github.com/Orion-Zheng/the_stack_tfds Tokenizer We use the umt5 Tokenizer to support multi-lingual continual learning in the future, which can be downloaded on Huggingface or Google Cloud . Model Architecture OpenMoE is based on ST-MoE but uses a Decoder-only architecture. The detailed implementation can be found in Fuzhao's T5x and Flaxformer repo. Training Objective Before 780B tokens: We use a modified UL2 training objective but with a Causal Attention Mask (We use more prefix LM and a high mask ratio because it saves computation.):
- 50% prefix LM
- 10% span len=3 mask ratio=0.15
- 10% span len=8 mask ratio=0.15
- 10% span len=3 mask ratio=0.5
- 10% span len=8 mask ratio=0.5
- 10% span len=64 mask ratio=0.5 After 780B tokens: Vanilla next token prediction, because we observed that UL2 objective tends to saturate at the later stage of training, although it enables model to learn things faster at start. Other Designs RoPE, SwiGLU activation, 2K context length. We will release a more detailed report soon. Evaluation Results BigBench-Lite We evaluate our model on BigBench-Lite as our first step. We plot the cost-effectiveness curve in the figure below. Relative Cost is approximated by multiplying activated parameters and training tokens. The size of dots denotes the number of activated parameters for each token. The lightgray dot denotes the total parameters of MoE models. For more detailed results, please see our Blog MT-Bench We perform evaluation on MT-Bench and observe that OpenMoE-8B-Chat outperformed dense LLMs trained with around two times effective FLOPs on the first Turn results. License Our code is under Apache 2.0 License. Since the models are trained on The Redpajama and The Stack dataset, please check the license of these two datasets for your model usage. Authors This project is currently contributed by the following authors: Fuzhao Xue Zian Zheng Yao Fu Jinjie Ni Zangwei Zheng Wangchunshu Zhou Yang You Acknowledgement The computational resources for this project were generously provided by the Google TPU Research Cloud(TRC) . We extend our heartfelt thanks to TRC for their invaluable support, which has been fundamental to the success of our work. Besides, we are extremely grateful to the ColossalAI Team for their tremendous support with the PyTorch implementation, especially Xuanlei Zhao and Wenhao Chen , making training and inference of OpenMoE on GPUs a reality. Citation Please cite the repo if you use the model and code in this repo. bibtex
@article{xue2024openmoe,
title={OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models},
author={Xue, Fuzhao and Zheng, Zian and Fu, Yao and Ni, Jinjie and Zheng, Zangwei and Zhou, Wangchunshu and You, Yang},
journal={arXiv preprint arXiv:2402.01739},
year={2024}
} Star History;A family of open-sourced Mixture-of-Experts (MoE) Large Language Models;[] | XueFuzhao/OpenMoE |
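To make the pre-780B-token objective mixture listed under Training Objective concrete, here is a small illustrative sketch of sampling one objective per example; the actual pipeline lives in the T5x/Flaxformer code, and the tuple encoding below is an assumption made purely for illustration.

```python
# Illustrative only: sample one of the listed objectives according to the
# stated mixture (50% prefix LM, plus five span-corruption settings at 10% each).
import random

OBJECTIVE_MIXTURE = [
    (0.5, ("prefix_lm", None, None)),
    (0.1, ("span_corruption", 3, 0.15)),
    (0.1, ("span_corruption", 8, 0.15)),
    (0.1, ("span_corruption", 3, 0.5)),
    (0.1, ("span_corruption", 8, 0.5)),
    (0.1, ("span_corruption", 64, 0.5)),
]

def sample_objective(rng: random.Random):
    weights, objectives = zip(*OBJECTIVE_MIXTURE)
    return rng.choices(objectives, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_objective(rng))  # e.g. ('span_corruption', 8, 0.15)
```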
cofactoryai/textbase;✨ Textbase is a framework for building chatbots using NLP and ML. ✨ Just implement the on_message function in main.py and Textbase will take care of the rest :) Since it is just Python you can use whatever models, libraries, vector databases and APIs you want. Coming soon:
- [x] PyPI package - [x] Easy web deployment via textbase-client deploy - [ ] SMS integration
- [ ] Native integration of other models (Claude, Llama, ...) Installation Make sure you have python version >=3.9.0 , it's always good to follow the docs 👈🏻 1. Through pip bash
pip install textbase-client 2. Local installation Clone the repository and install the dependencies using Poetry (you might have to install Poetry first). For proper details see here bash
git clone https://github.com/cofactoryai/textbase
cd textbase
poetry shell
poetry install Start development server If you're using the default template, remember to set the OpenAI API key in main.py . Run the following command:
- if installed locally bash
poetry run python textbase/textbase_cli.py test - if installed through pip bash
textbase-client test Response: bash
Path to the main.py file: examples/openai-bot/main.py # You can create a main.py by yourself and add that path here. NOTE: The path should not be in quotes Now go to the link in blue color which is shown on the CLI and you will be able to chat with your bot! Other commands have been mentioned in the documentation website. Have a look 😃! Contributions Contributions are welcome! Please open an issue or create a pull request.
Follow our contributions guide for more details!;✨ Textbase is a simple framework for building AI chatbots. ✨;hacktoberfest | cofactoryai/textbase |
Yifan-Song793/RestGPT;RestGPT This is the code for the paper RestGPT: Connecting Large Language Models with Real-World RESTful APIs . This work aims to construct a large language model based autonomous agent, RestGPT, to control real-world applications , such as movie database and music player. To achieve this, we connect LLMs with RESTful APIs and tackle the practical challenges of planning, API calling, and response parsing. To fully evaluate the performance of RestGPT, we propose RestBench , a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions with gold solution paths. What's New [Next] The demo is under-construction. [2023/8/29] Code for RestGPT is released. [2023/8/28] The second version of our paper is released. [2023/6/13] Our paper is released. RestGPT RestGPT adopts an iterative coarse-to-fine online planning framework and uses an executor to call RESTful APIs. Here is an overview of RestGPT. Modules: Planner: generating natural language sub-task for current step. API selector: mapping the coarse high-level sub-task to finer API calling plan. Executor: executing the API calling plan. Caller: organizing API parameters based on the API plan and API documentation. Parser: generating Python code to parse the API response based on the response schema. Here is an example of using TMDB movie database to search for the number of movies directed by Sofia Coppola. Data We also introduce RestBench to evaluate the performance of RestGPT. RestBench is a high-quality test set consisting of TMDB movie database and Spotify music player scenarios. We collect realistic user instructions with human-annotated gold solution paths. Here are examples of RestBench: TMDB example: Instruction: Who is the director of today's most trending movie? Gold solution path GET /trending/{media_type}/{time_window} GET /movie/{movie_id}/credits Spotify example: Instruction: Make me a playlist containing three songs of Mariah Carey and name it 'Love Mariah'. Gold solution path GET /search GET /me POST /users/{user_id}/playlists POST /playlists/{playlist_id}/tracks Below is the statistics of the data. We report the number of instructions with different lengths of solution path: | Scenario | #APIs | Len-1 | Len-2 | Len-3 | Len-4 | Avg. Len. | Total |
| -------- | ----- | ----- | ----- | ----- | ----- | --------- | ----- |
| TMDB | 54 | 5 | 66 | 27 | 2 | 2.3 | 100 |
| Spotify | 40 | 8 | 18 | 22 | 9 | 2.6 | 57 | Setup bash
pip install langchain colorama tiktoken spotipy openai create logs folder Get OpenAI key from OpenAI, TMDB key from https://developer.themoviedb.org/docs/getting-started, and Spotify key from https://developer.spotify.com/documentation/web-api Fill in your own key in config.yaml (Optional) Initialize the Spotify Environment WARNING: this will remove all your data from spotify! python
python init_spotify.py Run The code can be run using the following command: bash
python run.py Then input the scenario (TMDB/Spotify) and instruction. We also provide two scripts to run RestGPT on RestBench: ```bash
# TMDB
python run_tmdb.py
# Spotify, please open Spotify on your device
python run_spotify.py
``` run_tmdb.py will sequentially execute all instructions of RestBench-TMDB. Regarding RestBench-Spotify, you should manually modify the query_idx before executing the instructions. Citation If you find this repo useful, please cite us. bibtex
@misc{song2023restgpt,
title={RestGPT: Connecting Large Language Models with Real-World RESTful APIs},
author={Yifan Song and Weimin Xiong and Dawei Zhu and Wenhao Wu and Han Qian and Mingbo Song and Hailiang Huang and Cheng Li and Ke Wang and Rong Yao and Ye Tian and Sujian Li},
year={2023},
eprint={2306.06624},
archivePrefix={arXiv},
primaryClass={cs.CL}
};An LLM-based autonomous agent controlling real-world applications via RESTful APIs;[] | Yifan-Song793/RestGPT |
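Below is a structural sketch of the coarse-to-fine loop described under Modules (planner → API selector → caller/executor → parser). All class and method names are placeholders invented for illustration and do not correspond to RestGPT's real classes, prompts, or APIs.

```python
# Structural sketch only: dummy components stand in for the LLM-backed modules.
class DummyPlanner:
    def plan(self, instruction, history):
        return "Final answer: <parsed from the credits response>" if history else "GET /trending/movie/day"

class DummySelector:
    def select(self, sub_task):
        return {"method": "GET", "path": sub_task.split()[-1], "params": {}}

class DummyCaller:
    def call(self, api_plan):
        return {"results": [{"id": 361743, "title": "example movie"}]}

class DummyParser:
    def parse(self, response, api_plan):
        return response["results"][0]

def run_pipeline(instruction, planner, selector, caller, parser, max_steps=10):
    history = []
    for _ in range(max_steps):
        sub_task = planner.plan(instruction, history)        # natural-language sub-task
        if sub_task.lower().startswith("final answer"):
            return sub_task
        api_plan = selector.select(sub_task)                 # map sub-task to an API calling plan
        observation = parser.parse(caller.call(api_plan), api_plan)  # call API, then parse response
        history.append((sub_task, api_plan, observation))
    return "No answer within the step budget."

print(run_pipeline("Who directed today's most trending movie?",
                   DummyPlanner(), DummySelector(), DummyCaller(), DummyParser()))
```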
eosphoros-ai/Awesome-Text2SQL;Awesome Text2SQL 🎉🎉🎉 English | 中文版 Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL , Text2API , Text2Vis and more. 🌱 How to Contribute We warmly welcome contributions from everyone, whether you've found a typo, a bug, have a suggestion, or want to share a resource related to LLM+Text2SQL. For detailed guidelines on how to contribute, please see our CONTRIBUTING.md file. 🔔 Leaderboard | | WikiSQL | Spider Exact Match(EM) | Spider Exact Execution(EX) | BIRD Valid Efficiency Score (VES) | BIRD Execution Accuracy (EX) |
|:----:|:-----------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------:|:-----------------------------------------------------------------:|
| 🏆1 | 93.0 (2021/05- SeaD+Execution-Guided Decoding ) | 81.5 (2023/11-MiniSeek) | 91.2 (2023/11-MiniSeek) | 80.40 (2024/05-ExSL + granite-20b-code) | 67.86 (2024/05-ExSL + granite-20b-code) |
| 🥈2 | 92.7 (2021/03- SDSQL+Execution-Guided Decoding ) | 74.0 (2022/09- Graphix-3B + PICARD ) | 86.6 (2023/08- DAIL-SQL + GPT-4 + Self-Consistency ) | 72.78 (2024/05-ExSL + granite-20b-code) | 67.75 (2024/05-Byte-SQL) |
| 🥉3 | 92.5 (2020/11- IE-SQL+Execution-Guided Decoding ) | 73.9 (2022/09-CatSQL + GraPPa) | 86.2 (2023/08- DAIL-SQL + GPT-4 ) | 72.73 (2024/05-Byte-SQL) | 66.69 (2024/05- CHESS ) |
| 4 | 92.2 (2020/03- HydraNet+Execution-Guided Decoding ) | 73.1 (2022/09- SHiP + PICARD ) | 85.6 (2023/10-DPG-SQL + GPT-4 + Self-Correction) | 72.63 (2024/05- CHESS ) ) | 65.45 (2024/01-MCS-SQL + GPT-4) |
| 5 | 91.9 (2020/12- BRIDGE+Execution-Guided Decoding ) | 72.9 (2022/05- G³R + LGESQL + ELECTRA ) | 85.3 (2023/04- DIN-SQL + GPT-4 ) | 71.35 (2024/01-MCS-SQL + GPT-4) | 64.95 (2024/04-OpenSearch-SQL,v1 + GPT-4) |
| 6 | 91.8 (2019/08- X-SQL+Execution-Guided Decoding ) | 72.4 (2022/08-RESDSQL+T5-1.1-lm100k-xl) | 83.9 (2023/07-Hindsight Chain of Thought with GPT-4) | 69.56 (2024/04-GRA-SQL) | 64.84 (2024/02-PB-SQL v1) |
| 7 | 91.4 (2021/03- SDSQL ) | 72.4 (2022/05-T5-SR) | 82.3 (2023/06- C3 + ChatGPT + Zero-Shot ) | 68.90 (2024/02-PB-SQL) | 64.62 (2024/06-SFT CodeS-15B + SQLFixAgent) |
| 8 | 91.1 (2020/12- BRIDGE ) | 72.2 (2022/12- N-best List Rerankers + PICARD ) | 80.8 (2023/07-Hindsight Chain of Thought with GPT-4 and Instructions) | 68.80 (2024/04-OpenSearch-SQL,v1 + GPT-4) | 63.39 (2024/02-SENSE 13B) |
| 9 | 91.0 (2021/04- Text2SQLGen + EG ) | 72.1 (2021/09- S²SQL + ELECTRA ) | 79.9 (2023/02- RESDSQL-3B + NatSQ ) | 67.68 (2023/11- MAC-SQL + GPT-4 ) | 63.22 (2024/04-GRA-SQL) |
| 10 | 90.5 (2020/11- SeqGenSQL+EG ) | 72.0 (2023/02- RESDSQL-3B + NatSQL ) | 78.5 (2022/11-SeaD + PQL) | 67.66 (2024/06- SuperSQL ) | 62.66 (2024/06- SuperSQL ) | 📜 Contents Awesome Text2SQL 🎉🎉🎉 🌱 How to Contribute 🔔 Leaderboard 📜 Contents 👋 Introduction 📖 Survey 💬 Classic Model 🔥 Base Model 💡 Fine-tuning 💪 Dataset 🌈 Evaluation Index 📦 Libraries 🔧 Practice Project 🤝 Friendship Links 👋 Introduction Text-to-SQL (or Text2SQL), as the name implies, is to convert text into SQL. A more academic definition is to convert natural language problems in the database field into structured query languages that can be executed in relational databases. Therefore, Text-to-SQL can also be abbreviated as NL2SQL. Input: natural language questions, such as " Query the relevant information of the table t_user, and the results are sorted in descending order by id, and only the first 10 data are kept. " Output: SQL, such as " SELECT * FROM t_user ORDER BY id DESC LIMIT 10 " 📖 Survey (2023-International Conference on Very Large Data Bases, VLDB, CCF-A)A survey on deep learning approaches for text-to-SQL [ paper ] (2022-IEEE Transactions on Knowledge and Data Engineering, TKDE, CCF-A) A Survey on Text-to-SQL Parsing: Concepts, Methods, and Future Directions [ paper ] (2022-International Conference on Computational Linguistics, COLOING, CCF-B) Recent Advances in Text-to-SQL: A Survey of What We Have and What We Expect [ paper ] (2022-arXiv)Deep Learning Driven Natural Languages Text to SQL Query Conversion: A Survey [ paper ] 💬 Classic Model (2023-arXiv, None) MAC-SQL : A Multi-Agent Collaborative Framework for Text-to-SQL
[ paper ]
[ code ] (2023-arXiv, None) DBCᴏᴘɪʟᴏᴛ : Scaling Natural Language Querying to Massive Databases
[ paper ]
[ code ] (2023-arXiv, None) Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation
[ paper ]
[ code ] (2023-AAAI 2023, CCF-A) RESDSQL : Decoupling Schema Linking and Skeleton Parsing for Text-to-SQL
[ paper ]
[ code ] (2023-arXiv, None) Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs
[ paper ]
[ code ] (2023-arXiv, None) DIN-SQL : Decomposed In-Context Learning of Text-to-SQL with Self-Correction
[ paper ]
[ code ] (2023-arXiv, None) A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability
[ paper ]
[ code ] (2023-ICLR, CCF-A) Binding Language Models in Symbolic Languages
[ paper ]
[ code ] (2023-SIGMOD, CCF-A) Few-shot Text-to-SQL Translation using Structure and Content Prompt Learning
[ paper ]
[ code ] (2023-ICASSP, CCF-B) T5-SR : A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing
[ paper ] (2022-ACL, CCF-A) S 2 SQL : Injecting Syntax to Question-Schema Interaction Graph Encoder for Text-to-SQL Parsers
[ paper ] (2022-NAACL, CCF-B) SeaD : End-to-end Text-to-SQL Generation with Schema-aware Denoising
[ paper ] (2022-EMNLP, CCF-B) STAR : SQL Guided Pre-Training for Context-dependent Text-to-SQL Parsing
[ paper ]
[ code ] (2022-EMNLP, CCF-B) RASAT : Integrating Relational Structures into Pretrained Seq2Seq Model
for Text-to-SQL
[ paper ]
[ code ] (2022-EMNLP, CCF-B) CQR-SQL : Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers
[ paper ] (2022-ACL, CCF-A) HIE-SQL : History Information Enhanced Network for Context-Dependent Text-to-SQL Semantic Parsing
[ paper ] (2022-arXiv, None) Importance of Synthesizing High-quality Data for Text-to-SQL Parsing
[ paper ] (2021-ACL, CCF-A) Decoupled Dialogue Modeling and Semantic Parsing for Multi-Turn Text-to-SQL
[ paper ] (2021-arXiv, None) Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL
[ paper ]
[ code ] (2021-ICLR, CCF-A) SCORE : Pre-training for Context Representation in Conversational Semantic Parsing
[ paper ] (2021-DASFAA, CCF-B) An Interactive NL2SQL Approach with Reuse Strategy
[ paper ] (2021-NAACL, CCF-B) Structure-Grounded Pretraining for Text-to-SQL
[ paper ] (2021-EMNLP, CCF-B) PICARD :Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
[ paper ]
[ code ] (2021-ICLR, CCF-A) GraPPa : Grammar-Augmented Pre-Training for Table Semantic Parsing
[ paper ]
[ code ] (2021-ACL, CCF-A) LGESQL : Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations
[ paper ]
[ code ] (2020-EMNLP, CCF-B) Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing
[ paper ]
[ code ] (2020-ACL, CCF-A) TaBERT : Pretraining for Joint Understanding of Textual and Tabular Data
[ paper ]
[ code ] (2020-ACL, CCF-A) RAT-SQL : Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers
[ paper ]
[ code ] (2020-EMNLP, CCF-B) Mention Extraction and Linking for SQL Query Generation
[ paper ] (2020-EMNLP, CCF-B) IGSQL : Database Schema Interaction Graph Based Neural Model for Context-Dependent Text-to-SQL Generation
[ paper ]
[ code ] (2020-arXiv, None) Hybrid Ranking Network for Text-to-SQL
[ paper ]
[ code ] (2019-arXiv, None) X-SQL : reinforce schema representation with context
[ paper ] (2019-EMNLP, CCF-B) Global Reasoning over Database Structures for Text-to-SQL Parsing
[ paper ]
[ code ] (2019-EMNLP, CCF-B) Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions
[ paper ]
[ code ] (2019-ACL, CCF-A) Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing
[ paper ]
[ code ] (2019-ACL, CCF-A) Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation
[ paper ]
[ code ] (2018-EMNLP, CCF-B) SyntaxSQLNet : Syntax Tree Networks for Complex and Cross-DomainText-to-SQL Task
[ paper ]
[ code ] (2018-NAACL, CCF-B) TypeSQL : Knowledge-based Type-Aware Neural Text-to-SQL Generation
[ paper ]
[ code ] (2017-arXiv, None) SQLNet : Generating Structured Queries From Natural Language Without Reinforcement Learning
[ paper ]
[ code ] 🔥 Base Model Llama [ paper ] [ code ] [ model ] 2023/02, Meta AI proposes the open source LLM Llama, which has four scales: 7b, 13b, 33b, and 65b. ChatGLM [ paper ] [ code ] [ model ] 2023/03, Tsinghua University proposes the open bilingual language model ChatGLM, based on General Language Model framework, with the specification of 7b. Alpaca [ paper ] [ code ] [ model ] 2023/03, Stanford University proposes Alpaca, an open source LLM fine-tuned based on the Llama 7b model. There are 1 specification of 7b, and the training is simpler and cheaper. Vicuna [ paper ] [ code ] [ model ] 2023/03, UC Berkeley University, CMU and Stanford University propose Vicuna, an open souce LLM based on the Llama model, has two specifications: 7b and 13b. WizardLM [ paper ] [ code ] [ model ] 2023/04, Peking University and Microsoft propose WizardLM, a LLM of evolutionary instructions, with three specifications of 7b, 13b, and 30b. 2023/06, They propose WizardMath, a LLM in the field of mathematics. 2023/08, They propose WizardCoder, a LLM in the field of code. Falcon [ paper ] [ code ] [ model ] 2023/06, United Arab Emirates proposes Falcon, an open source LLM trained solely on refinedweb datasets, with four parameter specifications of 1b, 7b, 40b and 180b. It is worth noting that the performance on model 40B exceeds that of 65B LLaMA. ChatGLM2[ paper ] [ code ] [ model ] 2023/06, Tsinghua University proposes the second-generation version of ChatGLM,with the specification of 7b, which has stronger performance, longer context, more efficient inference and more open license. Baichuan-7b [ code ] [ model ] 2023/06, Baichuan Intelligent Technology proposes the Baichuan-7B, an open-source, large-scale pre-trained language model based on Transformer architecture, which contains 7 billion parameters and trained on approximately 1.2 trillion tokens. It supports both Chinese and English languages with a context window length of 4096. Baichuan-13b [ code ] [ model ] 2023/07, Baichuan Intelligent Technology proposes the Baichuan-13B, an open-source, commercially available large-scale language model, following Baichuan-7B, which has two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). InternLM [ paper ] [ code ] [ model ] 2023/07, Shanghai AI Laboratory and SenseTime propose the InternLM, which has open-sourced a 7b and 20b parameter base models and chat models tailored for practical scenarios and the training system. Llama 2 [ paper ] [ code ] [ model ] 2023/07, Meta AI proposes the second-generation Llama series open-source LLM Llama 2. Compared with Llama 1, the training data is 40% more, and the context length is doubled. The model has four specifications: 7b, 13b, 34b, and 70b, but 34b is not open source. Code Llama [ paper ] [ code ] [ model ] 2023/08, Meta AI proposes Code LLama, based on Llama 2. Code Llama reaches state-of-the-art performance among open models on several code benchmarks. There are foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models, with 7B, 13B and 34B parameters each. 2024/01, Meta AI open sourced CodeLlama-70b, CodeLlama-70b-Python and CodeLlama-70b-Instruct. Qwen [ paper ] [ code ] [ model ] 2023/08, Alibaba Cloud proposes the 7b-parameter version of the large language model series Qwen-7B (abbr. Tongyi Qianwen), is pretrained on a large volume of data, including web texts, books, codes, etc, which has open sourced two models with Qwen-7B and Qwen-7B-Chat. 
2023/09, Alibaba Cloud updated the Qwen-7B and Qwen-7B-Chat and open sourced Qwen-14B and Qwen-14B-Chat. 2023/11, they open sourced Qwen-1.8B, Qwen-1.8B-Chat, Qwen-72B and Qwen-72B-Chat. Baichuan 2 [ paper ]
[ code ] [ model ] 2023/09, Baichuan Intelligent Technology proposes the new generation of open-source large language models Baichuan 2, trained on a high-quality corpus with 2.6 trillion tokens, which has base and chat versions for 7B and 13B, and a 4bits quantized version for the chat model. Phi-1.5 [ paper ] [ model ] 2023/09, Microsoft Research proposes the open source language model phi-1.5, a Transformer with 1.3 billion parameters, which was trained using the same data sources as phi-1 , augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters. 2023/12, They propose Phi-2 , a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. Mistral-7B [ paper ]
[ code ]
[ model ] 2023/10, Mistral-AI company proposes the open source LLM Mistral 7B, a 7–billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. They also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses Llama 2 13B–chat model both on human and automated benchmarks. 2023/12,They propose the open source LLM Mixtral-8x7B, a pretrained generative Sparse Mixture of Experts, which outperforms Llama 2 70B on most benchmarks. Deepseek [ paper ]
[ code ]
[ model ] 2023/11, DeepSeek-AI company proposes the open source LLM deepseek, which has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. Similarly, the deepseek LLM mainly has two categories: base and chat, with two parameter formats of 7b and 67b respectively. Data from its paper shows that deepSeek LLM 67b surpasses LLaMA-2 70b across a range of benchmarks, especially in the domains of code, mathematics, and reasoning. Furthermore, DeepSeek LLM 67B Chat exhibits superior performance compared to GPT-3.5. MiniCPM [ paper ]
[ code ]
[ model ] 2024/02, ModelBest Inc. and TsinghuaNLP proposes the open source LLM MiniCPM, which is an End-Side LLM, with only 2.4B parameters excluding embeddings (2.7B in total). It is worth that MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks with better ability on Chinese, Mathematics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc. Mixtral-8x22B [ paper ] [ code ] [ model ] 2024/04, Mistral AI proposed the latest open model Mixtral 8x22B. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Phi-3 [ paper ] [ model ] 2024/04, Microsoft proposed the Phi-3 models, which are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks. Phi-3-mini is available in two context-length variants—4K and 128K tokens. It is the first model in its class to support a context window of up to 128K tokens, with little impact on quality. Phi-3-small (7B) and Phi-3-medium (14B) will be available in the Azure AI model catalog and other model gardens shortly. Llama 3 [ paper ] [ code ] [ model ] 2024/04, Meta AI proposed the third generation Llama series open source large model Llama 3. The model has 2 parameter specifications, 8b and 70b, with base and instruct versions respectively. Excitingly, Llama 3 models are a major leap over Llama 2 and establish a new state-of-the-art for LLM models at those scales. Qwen-1.5-110B [ paper ] [ code ] [ model ] 2024/04, Alibaba Cloud proposed the first 100B+ model of the Qwen1.5 series, Qwen1.5-110B, which achieves comparable performance with Meta-Llama3-70B in the base model evaluation, and outstanding performance in the chat evaluation, including MT-Bench and AlpacaEval 2.0. Qwen1.5 is the beta version of Qwen2, which has 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated. Qwen2 [ paper ] [ code ] [ model ] 2024/06, Alibaba Cloud proposed the evolution from Qwen1.5 to Qwen2, which has 5 model sizes, including Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. Qwen2-72B exhibits superior performance compared to leading models such as Llama-3-70B. Notably, it surpasses the performance of its predecessor, Qwen1.5-110B, despite having fewer parameters. ## 💡 Fine-tuning
- P-Tuning [ paper ] [ code ]
- 2021/03, Tsinghua University and others propose P-Tuning, a fine-tuning method for LLM, which uses trainable continuous prompt word embeddings to reduce the cost of fine-tuning. LoRA [ paper ] [ code ] 2021/06, Microsoft proposes the Low-Rank Adaptation method for fine-tuning LLM by freezing the pre-training weights. P-Tuning V2 [ paper ] [ code ] 2021/10, Tsinghua University proposes P-Tuning V2, an improved version of P-Tuning with better performance. RLHF [ paper ] [ code ] 2022/12, OpenAI uses the RLHF (Reinforcement Learning from Human Feedback) method to train ChatGPT, and uses human feedback signals to directly optimize the language model, with excellent performance. RRHF [ paper ] [ code ] 2023/04, Alibaba proposes a novel learning paradigm called RRHF(Rank Responses to Align Language Models
with Human Feedback without tears), which can be tuned as easily as fine-tuning and achieve a similar
performance as PPO in HH dataset. QLoRA [ paper ] [ code ] 2023/05, Washington University proposes the qlora method, based on the frozen 4bit quantization model, combined with LoRA method training, which further reduces the cost of fine-tuning. RLTF [ paper ] [ code ] 2023/07, Tencent proposes RLTF(Reinforcement Learning from Unit Test Feedback), a novel online RL framework with unit test feedback of multi-granularity for refining code LLMs. RRTF [ paper ] 2023/07, Huawei proposes RRTF(Rank Responses to align Test&Teacher Feedback). Compared with RLHF, RRHF can efficiently align the output probabilities of a language model with human preferences, with only 1-2 models required during the tuning period, and it is simpler than PPO in terms of implementation, hyperparameter tuning, and training. RLAIF [ paper ] 2023/09, Google proposes RLAIF (RL from AI Feedback), a technique where preferences are labeled by an off-the-shelf LLM in lieu of humans. They find that the RLHF and RLAIF methods achieve the similar results on the task of summarization. 💪 Dataset WikiSQL [ paper ] [ code ] [ dataset ] 2017/09, Salesforce proposes a large Text-to-SQL dataset WikiSQL, the data comes from Wikipedia, which belongs to a single domain, contains 80,654 natural language questions, and 77,840 SQL statements. The form of SQL statements is relatively simple, and does not include sorting, grouping, and subqueries and other complex operations. Spider [ paper ] [ code ] [ dataset ] 2018/09, Yale University proposes the Text-to-SQL dataset Spider with multiple databases, multiple tables, and single-round query. It is also recognized as the most difficult large-scale cross-domain evaluation list in the industry. It contains 10,181 natural language questions and 5,693 SQL statements. Involving more than 200 databases in 138 different fields, the difficulty level is divided into: easy, medium, difficult, and extremely difficult. 2024/02, Yale University has open sourced the test collection of Spider 1.0 leaderboard, and they will open source the Spider 2.0 data set in March. SParC [ paper ] [ code ] [ dataset ] 2019/06, Yale University proposes a large dataset SParC for complex, cross-domain, and context-dependent(multi-turn) semantic parsing and text-to-SQL task, which consists of 4,298 coherent question sequences (12k+ unique individual questions annotated with SQL queries annotated by 14 Yale students), obtained from user interactions with 200 complex databases over 138 domains. CSpider [ paper ] [ code ] [ dataset ] 2019/09, Westlake University propposes a large Chinese dataset CSpider for complex and cross-domain semantic parsing and text-to-SQL task, translated from Spider by 2 NLP researchers and 1 computer science student, which consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables covering 138 different domains. CoSQL [ paper ] [ code ] [ dataset ] 2019/09, Yale University and Salesforce Research propose a cross-domain database CoSQL, which consists of 30k+ turns plus 10k+ annotated SQL queries, obtained from a Wizard-of-Oz (WOZ) collection of 3k dialogues querying 200 complex DBs spanning 138 domains. TableQA [ paper ] [ dataset ] 2020/06, Zhuiyi Technology proposes a large-scale cross-domain Natural Language to SQL dataset TableQA in Chinese language consisting 64,891 questions and 20,311 unique SQL queries on over 6,000 tables. 
DuSQL [ paper ] [ dataset ] 2020/11, Baidu proposes a large-scale and pragmatic Chinese dataset DuSQL for the cross-domain text-to-SQL task, containing 200 databases, 813 tables, and 23,797 question/SQL pairs.
KaggleDBQA [ paper ] [ code ] [ dataset ] 2021/06, University of Washington and Microsoft Research propose KaggleDBQA, a cross-domain evaluation dataset of real Web databases with domain-specific data types, original formatting, and unrestricted questions. It includes 272 examples across 8 databases, with an average of 2.25 tables per database. The dataset is notable for its real-world data sources, natural question authoring environments, and database documentation with rich domain knowledge. Key statistics: 8.7% WHERE clauses, 73.5% VAL, 24.6% SELECT, and 6.8% NON-SELECT.
CHASE [ paper ] [ code ] [ dataset ] 2021/08, Xi'an Jiaotong University and Microsoft propose the first cross-domain, multi-round Text-to-SQL Chinese dataset, which contains a list of 5459 multi-round questions and 17940 binary groups.
BIRD-SQL [ paper ] [ code ] [ dataset ] 2023/05, the University of Hong Kong and Alibaba propose a large-scale cross-domain dataset BIRD, which contains over 12,751 unique question-SQL pairs, 95 big databases with a total size of 33.4 GB. It also covers more than 37 professional domains, such as blockchain, hockey, healthcare and education, etc.
BIRD-SQL Mini-Dev [ paper ] [ code ] [ dataset ] 2024/06, the collaboration between the University of Hong Kong and Alibaba continues with the release of BIRD-SQL Mini-Dev, a lite version of their development dataset designed for efficient and cost-effective SQL model testing. This dataset compiles 500 high-quality text2SQL pairs from 11 distinct databases and supports both MySQL and PostgreSQL formats. It features the introduction of two new evaluation metrics: the Reward-based Valid Efficiency Score (R-VES) and the Soft F1-Score, both currently in beta and specifically developed to enhance the accuracy and efficiency of text-to-SQL models in a development setting.
🌈 Evaluation Index Execution Accuracy (EX) [ paper ] Definition: Calculate the proportion of examples in the data set whose SQL execution results are correct; the result may be overestimated. Exact Match (EM) [ paper ] Definition: Calculate the degree to which the generated SQL exactly matches the annotated SQL; the result may be underestimated. (A toy sketch of both metrics is given after the Friendship Links section below.)
📦 Libraries mindsql MindSQL is a Python RAG (Retrieval-Augmented Generation) Library designed to streamline the interaction between users and their databases using just a few lines of code. With seamless integration for renowned databases such as PostgreSQL, MySQL, and SQLite, MindSQL also extends its capabilities to major databases like Snowflake and BigQuery by extending the core class. The library utilizes large language models (LLM) like GPT-4, Llama 2, Google Gemini, and supports knowledge bases like ChromaDB and Faiss.
🔧 Practice Project DB-GPT-Hub The eosphoros organization proposes an open source project focusing on Text-to-SQL fine-tuning based on LLM, including large-scale model download, dataset preprocessing, fine-tuning technologies such as LoRA and QLoRA, model prediction, model evaluation and other steps. sqlcoder The Defog organization proposes an advanced Text-to-SQL LLM, which has outstanding performance and is better than GPT3.5, wizardcoder and starcoder, etc., second only to GPT4. 
modal_finetune_sql This project is based on the LLaMa 2 7b model for Text-to-SQL fine-tuning, which includes a complete training, fine-tuning, and evaluation process. LLaMA-Efficient-Tuning Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Chat Citation If you find Text2SQL useful for your research or development, please cite the following paper : bibtex
@misc{zhou2024dbgpthub,
title={DB-GPT-Hub: Towards Open Benchmarking Text-to-SQL Empowered by Large Language Models},
author={Fan Zhou and Siqiao Xue and Danrui Qi and Wenhui Shi and Wang Zhao and Ganglin Wei and Hongyang Zhang and Caigai Jiang and Gangwei Jiang and Zhixuan Chu and Faqiang Chen},
year={2024},
eprint={2406.11434},
archivePrefix={arXiv},
primaryClass={cs.DB}
} 🤝 Friendship Links eosphoros They are a team of technology enthusiasts from internet companies and NLP graduate students who are passionate about open source projects. Their focus is on developing solutions that protect the privacy and security of databases and large language models. Their aim is to ensure that the abilities of these models remain absolutely private, secure, and under control. Awesome-AIGC-Tutorials Awesome AIGC Tutorials houses a curated collection of tutorials and resources spanning across Large Language Models, AI Painting, and related fields. Discover in-depth insights and knowledge catered for both beginners and advanced AI enthusiasts.;Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL、Text2API、Text2Vis and more.;ai,awesome,datset,deep-learning,finetuning,llm,nlp,survey,text2sql,nl-to-sql | eosphoros-ai/Awesome-Text2SQL |
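To make the EX and EM definitions from the Evaluation Index section concrete, here is a toy Python sketch of both metrics; it is not the official Spider or BIRD evaluation code, which performs far more careful normalization and value matching.

```python
# Toy sketch only: EX compares execution result sets, EM compares SQL strings.
import sqlite3

def exact_match(pred_sql: str, gold_sql: str) -> bool:
    """Naive string-level EM after whitespace/case normalization (may under-estimate)."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(pred_sql) == norm(gold_sql)

def execution_accuracy(pred_sql: str, gold_sql: str, conn: sqlite3.Connection) -> bool:
    """EX: equal result sets count as correct; failed executions count as wrong (may over-estimate)."""
    try:
        return set(conn.execute(pred_sql).fetchall()) == set(conn.execute(gold_sql).fetchall())
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_user (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t_user VALUES (?, ?)", [(1, "a"), (2, "b")])
gold = "SELECT * FROM t_user ORDER BY id DESC LIMIT 10"
pred = "SELECT id, name FROM t_user ORDER BY id DESC LIMIT 10"
print(exact_match(pred, gold), execution_accuracy(pred, gold, conn))  # False True
```

The example reuses the t_user query from the Introduction: the prediction fails EM because the strings differ, but passes EX because both queries return the same rows.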
tatsu-lab/alpaca_eval;AlpacaEval : An Automatic Evaluator for Instruction-following Language Models AlpacaEval 2.0 with length-controlled win-rates ( paper ) has a spearman correlation of 0.98 with ChatBot Arena while costing less than $10 of OpenAI credits run and running in less than 3 minutes. Our goal is to have a benchmark for chat LLMs that is: fast (< 5min), cheap (< $10), and highly correlated with humans (0.98). Here's a comparison with other benchmarks: Updates: :tada: Length-controlled Win Rates are out and used by default! This increases the correlation with ChatBot Arena from 0.93 to 0.98, while significantly decreasing length gameability. The raw win rates are still shown on the website and the CLI. More details here . :tada: AlpacaEval 2.0 is out and used by default! We improved the auto-annotator (better and cheaper) and use GPT-4 preview as baseline. More details here . For the old version, set your environment variable IS_ALPACA_EVAL_2=False . Table of Contents 1. [Overview](#overview)
2. [Quick Start](#quick-start)
2. [Leaderboards and how to interpret them](#leaderboards-and-how-to-interpret-them)
- [Models](#models)
- [Evaluators](#evaluators)
3. [Use-cases](#use-cases)
- [Evaluating a model](#evaluating-a-model)
- [Making a new leaderboard](#making-a-new-leaderboard)
- [Making a new evaluator](#making-a-new-evaluator)
4. [Contributing](#contributing)
- [Contributing a model](#contributing-a-model)
- [Contributing an evaluator](#contributing-an-evaluator)
- [Contributing an eval set](#contributing-an-eval-set)
- [Contributing a completion function](#contributing-a-completion-function)
5. [Limitations](#limitations)
6. [Analysis](#additional-analysis-and-plots)
- [Analyzing an evaluator](#analyzing-an-evaluator)
- [Analyzing an eval set](#analyzing-an-eval-set)
7. [Citation](#citation)
8. [Additional information](#additional-information)
- [Length-controlled win rates](#length-controlled-win-rates)
- [AlpacaEval 2.0](#alpacaeval-20)
- [Data Release](#data-release)
- [Differences with AlpacaFarm](#differences-with-alpacafarm)
- [Related work](#related-work)
- [Interpreting annotations](#interpreting-annotations)
- [Major updates](#major-updates) Overview Evaluation of instruction-following models (e.g., ChatGPT) typically requires human interactions. This is
time-consuming, expensive, and hard to replicate. AlpacaEval is an LLM-based automatic evaluation that is fast, cheap,
replicable, and validated against 20K human annotations.
It is particularly useful for model development.
Although we improved over prior automatic evaluation pipelines, there are still fundamental limitations like the preference for longer outputs.
AlpacaEval provides the following: Leaderboard : a leaderboard of common models on the AlpacaEval
evaluation set. Caution : Automatic evaluators (e.g. GPT-4) may be biased towards models that generate longer outputs and/or that were fine-tuned on the model underlying the evaluator (e.g. GPT-4). Automatic evaluator : an automatic evaluator that has high agreement with humans (validated on 20K
annotations). We evaluate a
model by
measuring the fraction of times a powerful LLM (e.g. GPT-4) prefers the outputs from that model
over
outputs from a reference model. Our evaluators enable caching and output randomization by default. Toolkit for building automatic evaluators : a simple interface for
building advanced automatic evaluators (e.g. with caching, batching, or multi-annotators) and analyzing them (quality,
price, speed, statistical power, bias, variance etc). Human evaluation data : 20K human preferences between a given and reference model
on the AlpacaFarm evaluation set. 2.5K of these are cross-annotations (4 humans annotating the same 650 examples). AlpacaEval dataset : a simplification
of AlpacaFarm's evaluation set, where "instructions" and "inputs" are merged into one field, and reference outputs are longer. Details here . When to use and not use AlpacaEval? **When to use AlpacaEval?**
Our automatic evaluator is a quick and cheap proxy for human evaluation of simple
instruction-following tasks.
It is useful if you
have to run many evaluations quickly, e.g., during model development.
**When not to use AlpacaEval?**
Like any other automatic evaluator, AlpacaEval should **not replace human evaluation in
high-stakes decision-making**, e.g., to decide on model release. In particular, AlpacaEval is limited by the fact
that (1) the instructions in the eval set might not be representative of advanced usage of LLMs; (2) automatic
evaluators may have biases such as favoring style over
factuality of the answer; and (3) AlpacaEval does not measure the risks that a model could cause.
Details in [limitations](#limitations). Quick Start To install the stable release, run bash
pip install alpaca-eval To install the nightly version, run bash
pip install git+https://github.com/tatsu-lab/alpaca_eval Then you can use it as follows: bash
export OPENAI_API_KEY=<your_api_key> # for more complex configs, e.g. using Azure or switching clients see client_configs/README.md
alpaca_eval --model_outputs 'example/outputs.json' This will print the leaderboard to the console, and save both the leaderboard and the annotations to the same directory as the model_outputs file. Important parameters are the following: model_outputs : A path to a json file for the outputs of the model to add to the leaderboard. Each dictionary
should
contain the keys instruction and output . annotators_config : This is the annotator to use. We recommend using weighted_alpaca_eval_gpt4_turbo (
default for AlpacaEval 2.0), which has a
high agreement rate with our human annotation data, large context size, and is pretty cheap. For a comparison of all annotators see here . reference_outputs : The outputs of the reference model. Same format as model_outputs . By default, this
is gpt4_turbo for AlpacaEval 2.0. output_path : Path for saving annotations and leaderboard. If you don't have the model outputs, you can
use evaluate_from_model and
pass a local path or a name of a
HuggingFace
model, or a model from a standard API (OpenAI, Anthropic, Cohere, google, ...). Other commands: >>> alpaca_eval -- --help ```
SYNOPSIS
alpaca_eval COMMAND
COMMANDS
COMMAND is one of the following:
evaluate
Evaluate a model based on its outputs. This is the default entrypoint if no command is specified.
evaluate_from_model
Evaluate a model from HuggingFace or an API provider. This is a wrapper around `evaluate` which includes generating from a desired model.
make_leaderboard
Precompute and save an entire leaderboard for a given dataset / evaluator / set of models generations.
analyze_evaluators
Analyze an evaluator and populates the evaluators leaderboard (agreement with human, speed, price,...).
``` For more information about each function use alpaca_eval <command> -- --help . Leaderboards and how to interpret them Models Our leaderboards are computed on the AlpacaEval dataset .
We precomputed the leaderboard for important models using different baseline models and autoannotators.
Our two main leaderboards ("AlpacaEval 2.0" and "AlpacaEval") can be found on this page .
"AlpacaEval 2.0" uses weighted_alpaca_eval_gpt4_turbo for the annotator and gpt4_turbo for the baseline.
"AlpacaEval" uses alpaca_eval_gpt4 for the annotator and text_davinci_003 for the baseline.
For all precomputed leaderboards see here .
Later we also show how to add your model to the
leaderboard and how to make
a new leaderboard for your evaluator/dataset .
See here for the configs of all
models that are available out of the box. AlpacaEval minimal leaderboard : | | Win Rate | Std Error |
|:----------------------|---------:|----------:|
| gpt4 | 95.3 | 0.7 |
| claude | 88.4 | 1.1 |
| chatgpt | 86.1 | 1.2 |
| guanaco-65b | 71.8 | 1.6 |
| vicuna-13b | 70.4 | 1.6 |
| text_davinci_003 | 50.0 | 0.0 |
| alpaca-farm-ppo-human | 41.2 | 1.7 |
| alpaca-7b | 26.5 | 1.5 |
| text_davinci_001 | 15.2 | 1.2 | How exactly are those metrics computed? **Win Rate**: the win rate measures the fraction of time the model's output is preferred over the reference's outputs (`text-davinci-003` for AlpacaEval and `gpt4_turbo` for AlpacaEval 2.0).
More specifically, to compute the win rate we collect pairs of outputs of the desired model on every instruction from
the AlpacaEval dataset.
We then pair each output with the output of our reference model (e.g. `text-davinci-003`) on the same instruction.
We then ask our automatic evaluator which output they prefer.
See [AlpacaEval's](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/alpaca_eval_gpt4)
and [AlpacaEval 2.0's](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/weighted_alpaca_eval_gpt4_turbo) prompts and configs, in particular we randomize the order of
outputs to avoid position bias.
We then average the preferences over all instructions in the dataset to get the win rate of the model over the baseline.
If both outputs are exactly the same we use a half preference for both models.
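For concreteness, here is a minimal sketch (not the library's implementation) of how a win rate and its standard error could be computed from such per-instruction preferences:

```python
# Minimal sketch of the win-rate computation described above (not the
# library's actual implementation). `preferences` holds one value per
# instruction: 2 if the model's output is preferred over the reference,
# 1 if the reference is preferred, and 1.5 for an exact tie.
import statistics

preferences = [2, 1, 2, 1.5, 2, 1, 2]  # toy annotations, one per instruction

wins = [p - 1 for p in preferences]  # 1.0 = model preferred, 0.5 = tie, 0.0 = reference preferred
win_rate = 100 * statistics.mean(wins)
std_error = 100 * statistics.stdev(wins) / len(wins) ** 0.5  # sample std (N-1) over sqrt(N)

print(f"win rate: {win_rate:.1f}%  (std error: {std_error:.1f})")
```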
**Standard error**: this is the standard error (normalized by N-1) of the win rate, i.e., the preferences averaged over
the different instructions. Details about our auto-annotator: alpaca_eval_gpt4 Our `alpaca_eval_gpt4` (
see [configs](#https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/alpaca_eval_gpt4/configs.yaml#L5))
annotator averages over preferences, where preferences are obtained as follows:
1. it takes in an instruction and a pair of outputs (from the desired model and the reference model)
2. if a preference for this triple was already computed, it returns it (i.e. it uses caching)
3. it randomizes the order of the outputs to avoid position bias
4. it formats the instruction and outputs into
the [following zero-shot prompt](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/alpaca_eval_gpt4/alpaca_eval.txt),
which asks to order the outputs in order of preference
5. it completes the prompt using GPT4 with `temperature=0`
6. it parses the preference from the completions and returns it
The annotator is a mix between (and was highly influenced by) [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm)
and [Aviary](https://github.com/ray-project/aviary/tree/master) evaluators.
In particular, we use the same code as for AlpacaFarm (caching/randomization/hyperparameters) but use a ranking prompt
similar to that of Aviary.
We make changes to Aviary's prompt to decrease the bias for longer outputs.
Details in [Related work](#related-work).
For AlpacaEval 2.0 we use `weighted_alpaca_eval_gpt4_turbo`, which uses logprobs to compute continuous preference and uses GPT4_turbo as model (
see [configs](#https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/weighted_alpaca_eval_gpt4_turbo/configs.yaml)). Evaluators We evaluate different automatic annotators on the AlpacaEval set by comparing to
2.5K human annotations we collected (~650 instructions each with 4 human annotations).
Below we show metrics for our suggested evaluators ( weighted_alpaca_eval_gpt4_turbo , alpaca_eval_gpt4 ), for prior
automatic
evaluators ( alpaca_farm_greedy_gpt4 , aviary_gpt4 , lmsys_gpt4 ),
for humans ( humans ), and for different base models with essentially the same
prompt ( gpt4 , claude , text_davinci_003 , chatgpt_fn , guanaco_33b , chatgpt ).
See here for the configs of all
evaluators that are available out of the box and their associated metrics. | | Human agreement | Price [$/1000 examples] | Time [seconds/1000 examples] | Spearman corr. | Pearson corr. | Bias | Variance | Proba. prefer longer |
|:--------------------------------|------------------:|--------------------------:|-------------------------------:|-----------------:|----------------:|-------:|-----------:|-----------------------:|
| alpaca_eval_gpt4 | 69.2 | 13.6 | 1455 | 0.97 | 0.93 | 28.4 | 14.6 | 0.68 |
| alpaca_eval_cot_gpt4_turbo_fn | 68.6 | 6.3 | 1989 | 0.97 | 0.90 | 29.3 | 18.4 | 0.67 |
| alpaca_eval_gpt4_turbo_fn | 68.1 | 5.5 | 864 | 0.93 | 0.82 | 30.2 | 15.6 | 0.65 |
| alpaca_eval_llama3_70b_fn | 67.5 | 0.4 | 209 | 0.90 | 0.86 | 32.3 | 8.2 | 0.79 |
| gpt4 | 66.9 | 12.5 | 1037 | 0.88 | 0.87 | 31.5 | 14.6 | 0.65 |
| alpaca_farm_greedy_gpt4 | 66.4 | 15.3 | 878 | 0.85 | 0.75 | 30.2 | 19.3 | 0.60 |
| alpaca_eval_cot_gpt4_turbo_fn | 65.7 | 4.3 | 228 | 0.78 | 0.77 | 33.9 | 23.7 | 0.61 |
| humans | 65.7 | 300.0 | 36800 | 1.00 | 1.00 | 0.0 | 34.3 | 0.64 |
| claude | 65.3 | 3.3 | 173 | 0.93 | 0.90 | 32.4 | 18.5 | 0.66 |
| lmsys_gpt4 | 65.3 | 13.9 | 17982 | 0.98 | 0.97 | 31.6 | 15.9 | 0.74 |
| text_davinci_003 | 64.1 | 8.7 | 121 | 0.85 | 0.83 | 33.8 | 22.7 | 0.70 |
| longest | 62.2 | 0.0 | 0 | 0.27 | 0.56 | 37.8 | 0.0 | 1.00 |
| chatgpt | 57.3 | 0.8 | 285 | 0.72 | 0.71 | 39.4 | 34.1 | 0.59 | How exactly are those metrics computed? We now explain in words how we compute the metrics in the table
above. [The code is here](https://github.com/tatsu-lab/alpaca_eval/blob/f05cbd651b79ac93906b19d01fe443b45828b0f2/src/alpaca_eval/analyze.py#L366).
**Human agreement**: this measures the agreement between the current annotator and the majority preferences of
humans on
our
~650 annotations from
our [cross-annotation set](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/blob/main/alpaca_farm_human_crossannotations.json),
which contains 4 human annotations per example.
To estimate the agreement between a single human (`humans` row in the table above) and the majority of humans, we take
one of the 4 annotations and compute the accuracy that it has when predicting the mode of the other 3 annotations.
We then average this accuracy over all 4 annotations and over the 650 instructions to get the human agreement, i.e., we
compute the expected (over humans and samples)
leave-one-out agreement.
If the mode is not unique, we take one of the modes at random.
We perform exactly the same computation for the automatic annotators, so that the final numbers are comparable.
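The following is a minimal sketch of this leave-one-out computation, under the assumption that each example comes with exactly 4 preference labels; it is meant to illustrate the metric, not to reproduce the library's code:

```python
# Sketch of the leave-one-out agreement described above (simplified):
# for each example with 4 annotations, predict the mode of 3 annotations
# using the remaining one, and average the 0-1 accuracy.
import random
from collections import Counter

def leave_one_out_agreement(annotations_per_example, seed=0):
    rng = random.Random(seed)
    accuracies = []
    for annotations in annotations_per_example:  # e.g. 4 labels per example
        for i, held_out in enumerate(annotations):
            rest = annotations[:i] + annotations[i + 1:]
            counts = Counter(rest)
            top = max(counts.values())
            modes = [label for label, count in counts.items() if count == top]
            mode = rng.choice(modes)  # break ties at random, as in the text
            accuracies.append(float(held_out == mode))
    return sum(accuracies) / len(accuracies)

# Toy data: 2 examples, 4 annotations each (1 = prefer output_1, 2 = prefer output_2).
print(leave_one_out_agreement([[1, 1, 2, 1], [2, 2, 2, 1]]))
```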
**Price [$/1000 examples]**: this is the average price of every 1000 annotations.
For humans, it is the price that [we paid Mechanical Turkers](https://arxiv.org/abs/2305.14387) to collect those
annotations ($21/hour).
If the price depends on the machine used to compute the annotations (e.g. Guanaco) we leave it empty.
**Time [seconds/1000 examples]**: this is the average time it takes to compute 1000 annotations.
For humans, it is the estimated median time that each Mechanical Turker took to annotate 1000 examples.
For automatic annotators, it is the average time that it took us when running the annotations. Note that this can depend
on API limits that are different for different users and the number of requests that the clusters are
processing.
**Spearman corr.**: this measures the Spearman correlation between a leaderboard computed with the auto-annotator's preference and the leaderboard computed with human preferences. As with `Human agreement`, we use the human annotations from AlpacaFarm but we now consider the method-level agreement rather than only the sample-wise agreement with humans. Note that we only have 9 models and so the correlation is not very reliable.
**Pearson corr.**: same as with `Spearman corr.` but with Pearson correlation.
**Bias**: agreement between the most likely human label and the most likely automatic one.
For automatic annotators we estimate it by sampling 4 different annotations for each example.
The randomness here comes from the order of the outputs in the prompt, sampling from the LLM, and if applicable the
order of the instruction in the batch and the choice of annotator in the pool.
We then take the mode of the 4 annotations and compute the accuracy of the mode when predicting the mode of the 4 human
annotations.
Note that this is likely an overestimate on the real bias that we would get if we had an "infinite" number of
cross-annotations.
A low bias means that the annotator has in expectation the same preferences as humans.
For the case of humans, the bias is zero by definition.
Note that this is related to but not the standard statistical bias, because we take the mode instead of average over
annotations and we consider 0-1 loss instead of squared loss.
**Variance**: expected agreement between a single automatic preference and the most likely one.
We estimate it the same way as we estimated "human agreement" for humans, i.e., we take the expected leave-one-out error
when predicting the mode of the 3 annotations using the 4th annotation.
A low variance means that the annotator is consistent with its preference, i.e., if you sample from it with different
seeds it will give the same result.
As with the bias, this is not exactly the standard statistical variance, because we take the mode instead of average
over annotations and we
consider 0-1 loss instead of squared loss.
Note that the "human agreement" is tightly related to the bias and variance. In particular, the variance
measures the error due to the fact that we only use a single annotation while the bias aims to measure the irreducible
error
for the current annotator.
**Proba. prefer longer**: this is the probability that the annotator prefers the longer output when one of the two
outputs is significantly longer than the other (more than 30 characters difference).
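As an illustration, here is a rough sketch (assuming the annotation fields `output_1`, `output_2`, and `preference` described later in this README) of how this metric could be computed:

```python
# Rough sketch of the "Proba. prefer longer" metric: among pairs whose outputs
# differ by more than 30 characters in length, how often does the annotator
# prefer the longer one? Field names follow the annotation format described
# in the "Interpreting annotations" section of this README.
def proba_prefer_longer(annotations, min_char_diff=30):
    hits, total = 0, 0
    for ann in annotations:
        len_1, len_2 = len(ann["output_1"]), len(ann["output_2"])
        if abs(len_1 - len_2) <= min_char_diff or ann["preference"] == 1.5:
            continue  # skip pairs of similar length and exact ties
        longer = 1 if len_1 > len_2 else 2
        preferred = 1 if ann["preference"] < 1.5 else 2
        hits += int(longer == preferred)
        total += 1
    return hits / total if total else float("nan")
```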
In the [full table](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/README.md) we
also provide the following metrics:
**Proba. prefer lists**: this is the probability that the annotator prefers the output that contains a list/bullet
points when one output does but not the other.
**Proba. prefer 1**: this is the probability that the annotator prefers the first of the pair of outputs. All our
proposed annotators randomize over outputs in the prompt, so this should be 0.5. Prior annotators, such as `lmsys`
and `aviary`, do not.
**# parsed**: this is the number of examples that the annotator was able to parse.
Note that if the variance and bias are empty, it means that we only performed a single annotation for each of the 648 examples
due to resource (time and price) constraints. This explains why the #parsed is 648, otherwise it should be 2592. Tips for choosing evaluators Overall we recommend using `annotators_config=weighted_alpaca_eval_gpt4_turbo` if you want high agreement with humans, and
`annotators_config=chatgpt_fn` if you are on a tight budget.
When choosing an annotator we recommend you to consider the following (the first three are obvious):
- `"Human agreement [%]"`
- `"Price [$/1000 examples]"`
- `"Time [seconds/1000 examples]"`
- `"* corr."` approx. > 0.7. It is important that the correlation is not too low, but we do not recommend using it as the main metric as the correlation is computed on only 9 models.
- `"Proba. prefer longer"` approx. < 0.7. Indeed, we found that the majority of human annotators' preferences have a
strong bias for longer answers (as shown by the
high [performance=62.2](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/README.md)
of
the `"longest"` evaluator that always
prefers the longest output). This suggests that it might be more of a bias of the human annotators. In order to avoid
having leaderboards with strong biases for length, we suggest using automatic annotators with less than 0.7 "Proba.
prefer longer".
- `"Variance"` approx. < 0.2. We believe that a good evaluator should have as little variance as possible so that
results are mostly reproducible. Note that variance can be desirable in the case where we are simulating humans
as shown in [AlpacaFarm](https://arxiv.org/abs/2305.14387).
We filtered the annotators that do not satisfy those requirements in the table above (besides humans / ChatGPT / 003 /
lmsys for
reference purposes). For
all
results see [here](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/README.md).
In general, we found `weighted_alpaca_eval_gpt4_turbo` to be a good trade-off between quality / price / time /
variance / length bias. The above metrics are computed with respect to annotations from crowd-workers. Although useful, those annotations are
not perfect, e.g., crowd-workers often favor style
over
factuality. We thus recommend that users validate automatic evaluators on their own instructions and human annotations.
Details in limitations . Use-cases Evaluating a model >>> alpaca_eval evaluate -- --help ```
NAME
alpaca_eval evaluate - Evaluate a model based on its outputs. This is the default entrypoint if no command is specified.
SYNOPSIS
alpaca_eval evaluate DESCRIPTION
Evaluate a model based on its outputs. This is the default entrypoint if no command is specified.
FLAGS
--model_outputs=MODEL_OUTPUTS
Type: Optional[Union]
Default: None
The outputs of the model to add to the leaderboard. Accepts data (list of dictionary, pd.dataframe, datasets.Dataset) or a path to read those (json, csv, tsv) or a function to generate those. Each dictionary (or row of dataframe) should contain the keys that are formatted in the prompts. E.g. by default `instruction` and `output` with optional `input`. If None, we just print the leaderboard.
-r, --reference_outputs=REFERENCE_OUTPUTS
Type: Union
Default: /leaderboard.csv`.
--max_instances=MAX_INSTANCES
Type: Optional[Optional]
Default: None
The maximum number of instances to annotate. Useful for testing.
--annotation_kwargs=ANNOTATION_KWARGS
Type: Optional[Optional]
Default: None
Additional arguments to pass to `PairwiseAnnotator.annotate_head2head`.
-A, --Annotator=ANNOTATOR
Default: >>> alpaca_eval evaluate_from_model -- --help ```
NAME
alpaca_eval evaluate_from_model - Evaluate a model from HuggingFace or an API provider. This is a wrapper around `evaluate` which includes generating from a desired model.
SYNOPSIS
alpaca_eval evaluate_from_model MODEL_CONFIGS DESCRIPTION
Evaluate a model from HuggingFace or an API provider. This is a wrapper around `evaluate` which includes generating from a desired model.
POSITIONAL ARGUMENTS
MODEL_CONFIGS
Type: Union
A dictionary or path (relative to `models_configs`) to a yaml file containing the configuration of the model to decode from. If a directory,we search for 'configs.yaml' in it. The keys in the first dictionary should be the generator's name, and the value should be a dictionary of the generator's configuration which should have the
FLAGS
-r, --reference_model_configs=REFERENCE_MODEL_CONFIGS
Type: Optional[Union]
Default: None
Same as in `model_configs` but for the reference model. If None, we use the default Davinci003 outputs.
-e, --evaluation_dataset=EVALUATION_DATASET
Type: Union
Default: `
-m, --max_instances=MAX_INSTANCES
Type: Optional[int]
Default: None
Maximum number of instances to generate and evaluate. If None, we evaluate all instances.
--is_strip_output=IS_STRIP_OUTPUT
Type: bool
Default: True
Whether to strip trailing and leading whitespaces from the outputs.
--is_load_outputs=IS_LOAD_OUTPUTS
Type: bool
Default: True
Whether to try to load outputs from the output path. If True and outputs exist we only generate outputs for instructions that don't have outputs yet.
-c, --chunksize=CHUNKSIZE
Type: int
Default: 64
Number of instances to generate before saving. If None, we save after all generations.
Additional flags are accepted.
Other kwargs to `evaluate`
NOTES
You can also use flags syntax for POSITIONAL ARGUMENTS
``` To evaluate a model you need to: Choose an evaluation set and compute outputs specified as model_outputs . By default, we use
the 805 examples from AlpacaEval . To compute outputs on AlpacaEval use:
```python
import datasets

eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval")["eval"]
for example in eval_set:
    # generate here is a placeholder for your model's generations
    example["output"] = generate(example["instruction"])
    example["generator"] = "my_model"  # name of your model
```
If your model is a HuggingFace model or from a standard API provider (OpenAI, Anthropic, Cohere), then you can
directly use alpaca_eval evaluate_from_model to also take care of generating outputs. Compute the reference outputs reference_outputs . By default, we use precomputed outputs of gpt4_turbo on
AlpacaEval .
If you
want to use a different model or a different dataset follow the same steps as (1.). Choose an evaluator specified via annotators_config . We recommend using alpaca_eval_gpt4_turbo_fn . For other options and comparisons
see this table . Depending on the evaluator you might need to
set the appropriate API_KEY in your environment
or in the client_configs . Running all together: bash
alpaca_eval --model_outputs 'example/outputs.json' \
--annotators_config 'alpaca_eval_gpt4_turbo_fn' If you don't have decoded outputs, you can use evaluate_from_model which takes care of decoding (model and reference)
for you.
Here's an
example: ```bash need a GPU for local models alpaca_eval evaluate_from_model \
--model_configs 'oasst_pythia_12b' \
--annotators_config 'alpaca_eval_gpt4_turbo_fn' ``` Here the model_configs and reference_model_configs (optional) are paths to a directory that specifies the prompt,
the model
provider (here HuggingFace) and decoding parameters.
See this directory for examples.
For all model providers that are available out-of-the-box
see here . Information about annotators - **Caching**: by default all annotations are cached on
disk at `caching_path`. Annotations are thus never recomputed, which makes annotations faster, cheaper and allow for
reproducibility. This helps even when evaluating different models as many models
have
the same outputs.
- **Output randomization** by default, we randomize over the examples of outputs, as we found that annotators tend to
prefer the first examples
they see.
- **Batching** we provide code and examples to batch annotations, which decreases cost and time for annotations if the
prompt is long. See for
example [alpaca_farm_greedy_gpt4](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/alpaca_farm_greedy_gpt4).
- **Pool of annotators** we provide code and examples to evaluate using a pool of automatic annotators, which is helpful
for replicating the variance of [human annotations](https://arxiv.org/abs/2305.14387). See for
example [alpaca_farm](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/alpaca_farm).
- **Seeding based on instructions** For reproducibility and more fair comparison between models, we seed all
randomness (output order, order in batches,
examples for each annotator in a pool) based on the instruction. Making a new leaderboard >>> alpaca_eval make_leaderboard -- --help ```
NAME
alpaca_eval make_leaderboard - Precompute and save an entire leaderboard for a given dataset / evaluator / set of models generations.
SYNOPSIS
alpaca_eval make_leaderboard DESCRIPTION
Precompute and save an entire leaderboard for a given dataset / evaluator / set of models generations.
FLAGS
--leaderboard_path=LEADERBOARD_PATH
Type: Optional[Union]
Default: None
The path to save the leaderboard to. The leaderboard will be saved as a csv file, if it already exists it will
--annotators_config=ANNOTATORS_CONFIG
Type: Union
Default: 'alpaca_eval_gpt4_turbo_fn'
The path the (or list of dict of) the annotator's config file.
--all_model_outputs=ALL_MODEL_OUTPUTS
Type: Union
Default: If you want to make a new leaderboard using a single command (rather than multiple `alpaca_eval` calls), for your
desired evaluation
set and evaluators, you can use the following:
```bash
alpaca_eval make_leaderboard \
--leaderboard_path \
--all_model_outputs \
--reference_outputs \
--annotators_config ```
where:
- `leaderboard_path`: path to save the leaderboard to. The leaderboard will be saved as a csv file, if it already exists
it will append.
- `all_model_outputs` : The json path to the outputs of all models to add to the leaderboard (as a single file or by
globbing multiple files). Each dictionary should contain
the keys (`instruction` and `output`) that are formatted in the prompts and a column `generator` with the name of the
current model. As an example
see [this file](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/blob/main/alpaca_eval_all_outputs.json).
- `reference_outputs` the path to the outputs of the reference model. Each dictionary should contain
the keys (`instruction` and `output`) that are formatted in the prompts. By
default, the reference outputs are the 003 outputs on AlpacaEval set.
- `annotators_config`: The path to the annotator's config file. Defaults to `alpaca_eval_gpt4`. Making a new evaluator >>> alpaca_eval analyze_evaluators -- --help ```
NAME
alpaca_eval analyze_evaluators - Analyze an evaluator and populates the evaluators leaderboard (agreement with human, speed, price,...).
SYNOPSIS
alpaca_eval analyze_evaluators DESCRIPTION
Analyze an evaluator and populates the evaluators leaderboard (agreement with human, speed, price,...).
FLAGS
--annotators_config=ANNOTATORS_CONFIG
Type: Union
Default: 'alpaca_eval_gpt4_turbo_fn'
The path the (or list of dict of) the annotator's config file.
-A, --Annotator=ANNOTATOR
Default: AlpacaEval provides a simple way of making new evaluators. All you need is to make a new `configs.yaml` configuration
file, which you will then pass
as `--annotators_config ` to `alpaca_eval`.
Here are some ways you can make a new evaluator:
- **Changing the prompt**: Write a new prompt in a text file and specify the path in `prompt_template` of the
configuration file. Paths are relative to the configuration file.
- **Changing decoding parameters**: Specify the desired parameters in `completions_kwargs` in the configuration file. To
see all available parameters refer to the docstrings of the corresponding
function [in this file](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/__init__.py)
specified by `fn_completions`
in the configuration file.
- **Changing the model**: Specify the desired model in `model_name` and the corresponding
prompt in `prompt_template`. If the model comes from another provider you
will
have
to change `fn_completions` which maps to the corresponding function
in [this file](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/__init__.py). We
provide `fn_completions` functions to use models from OpenAI, Anthropic, Cohere, or HuggingFace. To
install packages needed for
all providers
use `pip install alpaca_eval[all]`. Other parameters in the configuration file The easiest is to check the docstrings
of [`SinglePairwiseAnnotator`](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/annotators/pairwise_evaluator.py#L537).
Here are some important ones:
```
Parameters
----------
prompt_template : path
A prompt that will be given to `fn_prompter` or path to the prompts. Path is relative to
`evaluators_configs/`
fn_completion_parser : callable or str
Function in `completion_parsers.py` to use for parsing the completions into preferences. For each completion,
the number of preferences should be equal to the batch_size if not we set all the preferences in that batch to
NaN.
completion_parser_kwargs : dict
Kwargs for fn_completion_parser.
fn_completions : callable or str
Function in `decoders.py` to use for decoding the output.
completions_kwargs : dict
kwargs for fn_completions. E.g. model_name, max_tokens, temperature, top_p, top_k, stop_seq.
is_randomize_output_order : bool
Whether to randomize output_1, output_2 when formatting.
batch_size : int
Number of examples that will be added in a single prompt.
``` Once you made the evaluator you can also analyze it and add it to the _evaluator's_ [leaderboard](#evaluators) using the
following command:
```bash
alpaca_eval analyze_evaluators --annotators_config ' '
```
To estimate the bias and variance this evaluates every example with 4 seeds, i.e., 2.5K
evaluation.
If you want a cheaper evaluation you can use a single seed using `--is_single_annotator True` which will skip the
estimation of bias and variance. Contributing We are accepting PRs for new models, evaluators, and eval sets, in addition to bug fixes.
We will update the leaderboard website regularly with new community
contributions.
We have also created a support discord for AlpacaEval in case you run into any issues
and
wish to ask for help from the community. To get started, please first fork the repo, and install the package from source pip install -e . Contributing a model First, you'll need to add a model config definition in the models_configs folder. As
an example, you can look at
the falcon-7b-instruct yaml . Please make sure the
folder name and key name in the yaml match exactly. Then, please follow the steps in Evaluating a model to run inference on the model to produce
outputs on the eval set and score the model according to one of the evaluators.
An example command may look like: sh
alpaca_eval evaluate_from_model \
--model_configs 'falcon-7b-instruct' After running this command, you should have generated an outputs json and a new entry in the corresponding leaderboard
file . Please make a PR
with the
config, outputs file, and updated leaderboard. Concretely you should do something like: Fork the repository in github Clone the forked repository git clone <URL> Make a model config at src/alpaca_eval/models_configs/<model_name> and evaluate it evaluate_from_model --model_configs '<model_name>' Add the model configs, output, and leaderboard entry to the forked repository sh
git add src/alpaca_eval/models_configs/<model_name> # add the model config
git add src/alpaca_eval/leaderboards/ # add the actual leaderboard entry
git add src/alpaca_eval/metrics/weights # add the weights for LC
git add -f results/<model_name>/model_outputs.json # force add the outputs on the dataset
git add -f results/<model_name>/*/annotations.json # force add the evaluations from the annotators
git commit -m "Add <model_name> to AlpacaEval"
git push Create a pull request on AlpacaEval Note: if you are generating outputs outside of AlpacaEval you should still add a model config but with fn_completions: null .
See this config for an example. Getting your model verified A verified result in AlpacaEval indicates that a core maintainer has decoded the outputs from the model and performed the evaluation. Unfortunately, we, the AlpacaEval maintainers, lack the resources to verify all the models and so we will only do that for models that are in the top-10 of the leaderboard. We apologize for any inconvenience this may cause and appreciate your understanding. To have your model verified, please follow the steps below:
1. Contact `@yann` or `@rtaori` on Discord, or email us if you have our email, providing a brief rationale for why your model should be verified.
2. Await our response and approval before proceeding.
3. Prepare a script to decode from your model that does not require a GPU, typically the same script used for your model contribution. It should run using `alpaca_eval evaluate_from_model --model_configs ' '` without requiring a local GPU.
4. Generate temporary API keys for running the script and share them with us. Specifically, we need the keys for both decoding your model and for evaluation (e.g., OpenAI or Anthropic key).
5. We will execute `alpaca_eval evaluate_from_model --model_configs ' '`, update the results, and inform you so that you can revoke the temporary keys.
Note that we will not re-evaluate the same model. Due to sampling variance, the results might slightly differ from your initial ones. We will replace your previous community results with the verified ones. Contributing an evaluator Please first follow the directions in [Making a new evaluator](#making-a-new-evaluator).
Once you've created the annotator config, we ask that you create a new leaderboard for the annotator by evaluating the
minimal set of models. The outputs for these models can be found by
downloading [alpaca_eval_all_outputs.json](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/blob/main/alpaca_eval_all_outputs.json).
```bash
alpaca_eval make_leaderboard \
--leaderboard_path src/alpaca_eval/leaderboards/data_AlpacaEval/ _leaderboard.csv \
--all_model_outputs alpaca_eval_all_outputs.json \
--annotators_config ```
Then, please create a PR with the annotator config and leaderboard csv. Contributing an eval set To contribute a new eval set, you'll first need to specify a set of textual instructions.
Then, you'll need to specify a set of reference outputs (model win-rates are computed against this reference).
For ease of use, you may use the default [text-davinci-003](src/alpaca_eval/models_configs/text_davinci_003/) reference
config.
Place these together into a json, where each entry specifies the fields `instruction`, `output`, and `generator`. You
can look to [alpaca_eval.json](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/blob/main/alpaca_eval.json) as a
guide (the `dataset` field is not necessary).
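As a rough sketch (with a hypothetical `reference_generate` function standing in for whatever produces the reference outputs), assembling such a file could look like this:

```python
# Minimal sketch of assembling an eval-set json. `reference_generate` is a
# hypothetical placeholder for the reference model's decoding function.
import json

instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Write a haiku about autumn.",
]

eval_set = [
    {
        "instruction": instruction,
        "output": reference_generate(instruction),  # reference model's answer
        "generator": "text_davinci_003",            # name of the reference model
    }
    for instruction in instructions
]

with open("my_eval_set.json", "w") as f:
    json.dump(eval_set, f, indent=2)
```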
Finally, we ask that you create a minimal leaderboard on this new evaluation set. You can do this with the following:
```bash
alpaca_eval make_leaderboard \
--leaderboard_path \
--all_model_outputs alpaca_eval_all_outputs.json \
--reference_outputs ```
Please submit a PR with the eval set json and corresponding leaderboard csv. Contributing a completion function Currently, we allow different completion functions, e.g., `openai`, `anthropic`, `huggingface_local`, `huggingface_hub_api` ... If you want to contribute a new completion function / API with which to perform inference then follow those steps:
1. add a file `<name>.py` with a function `<name>_completions(prompts: Sequence[str], model_name: str, ...)` in the [decoder folder](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/decoders). This function should take the prompts + kwargs as arguments and return the completions (a hedged sketch is given after this list). Please look at other completion functions in the directory for templates. E.g. [huggingface_local_completions](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/huggingface_local.py) or [anthropic](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/anthropic.py).
2. add ` _completions` and dependencies in [__init__](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/__init__.py) . Again you can follow the example of [huggingface_local_completions](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/decoders/__init__.py#L30)
3. update optional dependencies in [setup.py](https://github.com/tatsu-lab/alpaca_eval/blob/main/setup.py)
4. add a model you want to evaluate in the [models configs](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/models_configs)
5. evaluate your model using `alpaca_eval evaluate_from_model --model_configs ' '`
6. (optional) push the results from the previous model on AlpacaEval leaderboard following [those steps](https://github.com/tatsu-lab/alpaca_eval/tree/main#contributing-a-model)
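Below is a hedged sketch of step 1; `call_my_api` is hypothetical, and the exact signature and return format should be copied from the existing decoders linked above rather than from this example:

```python
# Hedged sketch of a new completion function (e.g. my_api_completions) for the
# decoders folder. The exact signature and return format should follow the
# existing decoders (e.g. huggingface_local_completions); `call_my_api` is a
# hypothetical client for the new provider.
from typing import Sequence

def my_api_completions(
    prompts: Sequence[str],
    model_name: str,
    max_tokens: int = 2048,
    temperature: float = 0.7,
    **decoding_kwargs,
):
    completions = []
    for prompt in prompts:
        text = call_my_api(  # hypothetical API client
            prompt=prompt,
            model=model_name,
            max_tokens=max_tokens,
            temperature=temperature,
            **decoding_kwargs,
        )
        completions.append(text)
    return completions
```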
Feel free to start a PR early, we'll be able to provide some help in the process! Limitations The AlpacaEval evaluation pipeline, like other current evaluators have important limitations and should therefore not be
used as replacement for human evaluation in important settings, such as to decide whether a model is ready to be
deployed.
Those can broadly be clustered into 3 categories: Instructions might not be representative of real-usage : the AlpacaEval set contains examples from a variety of
datasets ( self-instruct , open-assistant , vicuna , koala , hh-rlhf )
which might not be representative of real-usage and advanced applications of better models like GPT4. This likely makes the best closed models (GPT4 / Claude / ChatGPT / ...) seem more similar to the open models than what they are. Indeed, those closed models seem to be pretrained/finetuned on much more diverse data. See for
example this blog for preliminary results on more complex instructions.
Note, however, that in AlpacaFarm we showed that win-rates on our evaluation set
are highly correlated (0.97 R2) with win-rates on instructions from user interactions with the Alpaca Demo.
Furthermore, the AlpacaEval leaderboard shows larger
gap between the open models and OpenAI models than other leaderboards (
e.g. lmsys ). Biases of automatic annotators : the raw automatic annotators seem to have implicit biases. In particular, we found
that they tend to prefer longer outputs and outputs that contain lists (e.g. 0.68 / 0.69 for alpaca_eval_gpt4 and 0.62 / 0.58 for claude ).
Although we found that humans have similar biases (0.64 / 0.61), we believe that this could be more of a limitation
of the human annotation pipeline we used rather than a true human bias. More generally, through qualitative analysis, we
found that automatic annotators give more importance to the style
of the output than its content (e.g. factuality).
Finally, we found that automatic evaluators tend to prefer outputs from models that are similar (likely trained on
the same data) as suggested by the big difference between ChatGPT/GPT4 on claude 's and alpaca_eval_gpt4 's
leaderboard. Note that the length bias is partially mitigated in our length-controlled win-rates. Lack of safety evaluation : importantly, AlpacaEval only evaluates the instruction-following capabilities of
models rather than the harm that they could cause (e.g. toxic behavior or bias). As a result the small gap between
current ChatGPT and the best open source models should not be interpreted as meaning that the latter are ready to be
deployed. Beyond those limitations about the evaluation pipelines, there are also limitations about our validation of the
evaluators and our proposed approach to selecting evaluation sets. Limitations about our validation pipeline First, our validation of evaluators based on human cross-annotations suffers from the following limitations: (1) we
qualitatively found that our crowd-workers tend to also favor style such as length and presence of lists over
factuality;
(2) this does not validate whether win-rates against a reference model is a good evaluation strategy in the first place;
(3) preferences from 16 crowd-workers are not representative of preferences of all humans.
Second, our suggested approach to selecting evaluation sets based on statistical power suffers from the following
limitations: (1) statistical power does not ensure the right direction, e.g. you can have an unnatural set of
instructions where Alpaca "performs" better than a better model; and
(2) this can push users to select data to support the hypothesis that they want to validate. Additional analysis and plots Caution : all the following results are about AlpacaEval 1.0 and have not been updated since Length-controlled AlpacaEval (LCAE) Length-controlled AlpacaEval Visualizations: Length-controlled AlpacaEval Development: The notebook shows different options that we considered for mitigating the length bias of automatic annotators. Here we briefly summarize the main results. Namely:
- LCAE increases the correlation with Chat Arena to 0.98 from 0.94 for AlpacaEval 2.0. This makes LCAE the most highly correlated benchmark with Chat Arena as seen in the plot below.
- LCAE decreases length gameability: one of the major issues of AlpacaEval is that you can increase your win-rate by increasing the length of your outputs. For example, in AlpacaEval 2.0 the win-rate for the baseline (50%) increases to 64% when prompted to “give as much detail as possible” and decreases to 23% when prompted to “be as concise as possible while still providing all the necessary information to answer the question”. More generally, the relative length gameability was ~21% for AlpacaEval and decreases to ~6% for LCAE, so it's 3x less gameable through prompt length. This is shown in the plot below.
- We can predict performance for different baselines: one other benefit of using a GLM for controlling for length bias is that we now have a model that can predict the win-rate of a model for different baselines. In particular, our GLM has many nice properties, for example win_rate(m,b) = 1 - win_rate(b,m) \in [0,1] and win_rate(m,m) = 0.5 . This is shown in the plot below.

Finally, note that we are only controlling for length bias. There are other known biases that we are not controlling for, such as the fact that auto-annotators prefer outputs similar to their model. Although we could control for that, in practice we have found it to be less of an issue than length bias, for two reasons: (1) it mostly affects a single model in the leaderboard, because fine-tuning on outputs from the auto-annotator doesn't seem to impact the win-rate as much, and (2) the bias is actually less strong than what one could think. For example, we show below a subset of the leaderboards auto-annotated by three different models, and we see that the ranking of models is exactly the same. In particular, claude-3-opus prefers gpt4_preview , and mistral-large prefers the former two.
**Caution**: all the following results are about AlpacaEval 1.0 and have not been updated since
**Analyzing evaluators:**
[![analyzing an evaluator](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tatsu-lab/alpaca_eval/blob/main/notebooks/analyzing_annotators.ipynb)
As we saw in [the evaluator's leaderboard](#evaluators), there are many metrics to consider when selecting an evaluator,
e.g. the quality, price, and speed. To assist with selection of the evaluator we provide a few functions to plot those
metrics.
The following shows for example the price/time/agreement of the different evaluators.
![plot_quality_vs_price_and_time.png](figures%2Fplot_quality_vs_price_and_time.png)
Here we see that `alpaca_eval_gpt4` performs very well and is better than humans on all the considered metrics.
Previously we only considered the agreement with human annotators overall.
An additional validation that one could do is checking whether making a leaderboard using our
automatic annotator gives similar results as a leaderboard from humans.
To enable such analysis, we release [human
annotations](#data-release) of outputs from 22 methods from [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm) =>
22*805 = ~18K annotations. As a result we
can
test
the correlation between the win-rates of the 22 models as evaluated by the humans and our automatic annotator.
Note that this is arguably a better way of selecting an automatic evaluator than using "human agreement [%]" but is
expensive given that it requires 18K
annotations.
The plot below shows such correlation for the `alpaca_eval_gpt4` evaluator. We see that the `alpaca_eval_gpt4` leaderboard is highly correlated (0.94 Pearson correlation) to the leaderboard from
humans, which further
suggests that automatic evaluation is a good proxy for human evaluation.
For the code and more analysis,
see [this notebook](https://github.com/tatsu-lab/alpaca_eval/blob/main/notebooks/analyzing_annotators.ipynb), or the
colab notebook above. Analyzing an eval set [//]: # (## Analyzing an eval set)
**Caution**: all the following results are about AlpacaEval 1.0 and have not been updated since.
**Making evaluation sets:**
[![analyzing an evaluator](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tatsu-lab/alpaca_eval/blob/main/notebooks/analyzing_evalset.ipynb)
When creating an evaluation set there are two main factors to consider: how much data to use? and what data?
One way of answering those questions is by considering a leaderboard of models that you believe are of different
quality and checking what and how much data is needed to distinguish between them in a statistically significant way.
We will do so below using a paired t-test to test if the difference in win-rates between every pair of models
is
statistically significant.
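For illustration, here is a minimal sketch of such a paired t-test, assuming we have per-instruction preferences for two models against the same baseline on the same instructions:

```python
# Sketch of the paired t-test described above, assuming per-instruction
# preferences (1 = prefer reference, 2 = prefer model, 1.5 = tie) for two
# models on the same set of instructions.
from scipy import stats

prefs_model_a = [2, 1.5, 2, 1, 2, 2, 1, 2]
prefs_model_b = [1, 1, 2, 1, 1.5, 2, 1, 1]

t_statistic, p_value = stats.ttest_rel(prefs_model_a, prefs_model_b)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```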
First, let us consider the question of how much data to use.
Below we show the number of random samples needed from AlpacaEval for the paired t-test to give a p-value < 0.05 for
each pair of models in the minimal `alpaca_eval_gpt4`
leaderboard.
Grey cells correspond to pairs that are not significantly different on the 805 samples.
y- and x-axis are ordered by the win-rate of the first and second model respectively. We see that most models can already be distinguished with 50 samples, and that 150 samples allows distinguishing the
majority of pairs (74 out of 78). This suggests that we can decrease the evaluation set size by a factor of
4 when testing two models that have similar performance gaps as those on the
minimal `alpaca_eval_gpt4` [leaderboard](#models).
The second question is what data to use. Again we can try to answer this question from a statistical power perspective:
what data allows to best distinguish between models. Let's consider this for all the datasets that are part of
AlpacaEval, but let us control for the size of the evaluation sets as we only care about the quality of the data. The
following plot shows the p-values from the paired t-test of each pairs of models on 80 examples of each subset of
AlpacaEval.
![plot_paired_ttests_per_dataset.png](figures%2Fplot_paired_ttests_per_dataset.png)
We see for example that the self-instruct dataset yields the least statistical power, which suggests that one could
remove this dataset from the evaluation set.
The exact reason should be analyzed in future work.
For the code and more analysis
see [this notebook](https://github.com/tatsu-lab/alpaca_eval/blob/main/notebooks/analyzing_evalset.ipynb), or the
colab notebook above. Citation Please consider citing the following depending on what you are using and referring to:
- Code, results, and general benchmark : alpaca_eval (this repo). Specify whether you are using AlpacaEval or AlpacaEval 2.0. For length-controlled win-rates see below.
- Length-controlled (LC) win rates : alpaca_eval_length .
- Human annotations : dubois2023alpacafarm ( AlpacaFarm )
- AlpacaEval evaluation set : alpaca_eval and self-instruct , open-assistant , vicuna , koala , hh-rlhf . Here are the bibtex entries: @misc{alpaca_eval,
author = {Xuechen Li and Tianyi Zhang and Yann Dubois and Rohan Taori and Ishaan Gulrajani and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {AlpacaEval: An Automatic Evaluator of Instruction-following Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/alpaca_eval}}
} @article{dubois2024length,
title={Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators},
author={Dubois, Yann and Galambosi, Bal{\'a}zs and Liang, Percy and Hashimoto, Tatsunori B},
journal={arXiv preprint arXiv:2404.04475},
year={2024}
} @misc{dubois2023alpacafarm,
title={AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback},
author={Yann Dubois and Xuechen Li and Rohan Taori and Tianyi Zhang and Ishaan Gulrajani and Jimmy Ba and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
year={2023},
eprint={2305.14387},
archivePrefix={arXiv},
primaryClass={cs.LG}
} More information Length-Controlled Win Rates Length controlled (LC) win-rates are a debiased version of the win-rates that control for the length of the outputs.
The main idea is that for each model we will fit a logistic regression to predict the preference of the autoannotator given: (1) the instruction, (2) the model, and (3) the difference of length between the baseline and model output.
Given such a logistic regression we can then try to predict the counterfactual "what would the preference be if the model's output had the same length as the baseline" by setting the length difference to 0.
By averaging over this length-controlled preference, we then obtain the length-controlled win-rate.
The exact form of the logistic regression is taken such that the interpretation of LC win rates is similar to the raw win rates, for example for any model `m1` and `m2` we have `win_rate(m1, m2) = 1 - win_rate(m2, m1) \in [0,100]` and `win_rate(m1, m1) = 0.5`.
Length controlled win-rates increase the correlation between AlpacaEval's leaderboard and Chat Arena from **0.93 to 0.98 Spearman correlation, while significantly decreasing the length gameability of the annotator**.
For more information and results about length controlled win-rates see [this notebook](https://github.com/tatsu-lab/alpaca_eval/blob/main/notebooks/length_controlled.ipynb).
This idea of estimating the controlled direct effect, by predicting the outcome while conditioning on the mediator (the length difference), is common in statistical inference.
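The following is a deliberately simplified sketch of this idea (the actual AlpacaEval regression conditions on more than just the length difference; see the notebook linked above):

```python
# Deliberately simplified sketch of the length-controlled idea (the actual
# AlpacaEval regression uses more terms and a different parameterization).
# We fit a logistic regression predicting the auto-annotator's preference from
# the length difference, then average the predicted preference with the length
# difference set to 0.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 1 if the model's output was preferred, 0 otherwise, and the
# length difference (model output length - baseline output length).
preferred = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
length_diff = np.array([350, 120, -200, 80, -50, 400, 10, -300, 250, -150])

X = length_diff.reshape(-1, 1)
glm = LogisticRegression().fit(X, preferred)

raw_win_rate = preferred.mean()
lc_win_rate = glm.predict_proba(np.zeros_like(X))[:, 1].mean()  # counterfactual: zero length difference
print(f"raw: {raw_win_rate:.2f}, length-controlled: {lc_win_rate:.2f}")
```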
To get LC win rates on previously annotated models, you can use the following command:
```bash
pip install -U alpaca_eval
alpaca_eval --model_outputs … --is_recompute_metrics_only True
``` AlpacaEval 2.0 AlpacaEval 2.0 is a new version of AlpacaEval. Here are the differences:
- **reference: `gpt4_turbo`**: we upgraded the baseline from `text-davinci-003` to `gpt4_turbo` to make the benchmark more challenging and have a metric that better reflects the current state of the art.
- **annotator: `weighted_alpaca_eval_gpt4_turbo`**: we improved the annotator in quality and price. First, we use the `gpt4_turbo` model for annotating, which is approximately 2x cheaper than `gpt4`. Second, we changed the prompt such that the model outputs a single token, which further reduced cost and annotation time. Finally, instead of using a binary preference, we used the logprobs to compute a continuous preference, which gives the final weighted win-rate. Note that the latter two changes had the surprising effect of decreasing the annotators' length bias.
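As a rough sketch of how logprobs can be turned into a continuous preference (the exact tokens and parsing used by the annotator may differ):

```python
# Rough sketch of turning single-token logprobs into a continuous preference.
# Suppose the annotator answers with one token, "m" meaning output_1 and "M"
# meaning output_2, and the API returns log-probabilities for both candidates.
import math

def weighted_preference(logprob_m: float, logprob_M: float) -> float:
    """Return a preference in [1, 2]: 1 -> output_1, 2 -> output_2."""
    p_output_2 = math.exp(logprob_M) / (math.exp(logprob_m) + math.exp(logprob_M))
    return 1 + p_output_2

print(weighted_preference(logprob_m=-2.3, logprob_M=-0.1))  # ~1.90, leans towards output_2
```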
By default, AlpacaEval 2.0 will be used from `pip install alpaca_eval==0.5`. If you wish to use the old configs by default, you can set `IS_ALPACA_EVAL_2=False` in your environment. Data Release As part of AlpacaEval, we release the following data:
- **Human annotations (17701)** in order to develop and understand automatic evaluators, we release all the human
pairwise
evaluation that we collected for AlpacaFarm. This contains comparisons between 22 models with the `text-davinci-003`
reference on the AlpacaFarm evaluation set. Annotations are from a pool of 16 crowd workers on Amazon Mechanical Turk.
The different models are: 6 from OpenAI, 2 SFT models from AlpacaFarm, 13 RLHF methods from AlpacaFarm, and LLaMA 7B.
- **Human cross-annotations (2596)** in order to further analyze automatic evaluators we selected (via stratified
sampling
across models and datasets) 650 examples from the AlpacaFarm evaluation set and collected 4 human annotations per
example.
- **AlpacaEval set (805)** we made slight modifications/simplification of the AlpacaFarm evaluation set. In particular,
we first merged
the instruction and input fields into a single instruction field. This affects 1/4 of the examples in the AlpacaFarm
evaluation set, all of which are from the [self-instruct evaluation set](https://arxiv.org/abs/2212.10560). Second we
regenerated the text-davinci-003 reference outputs without limiting the length of its outputs.
For more details about the human annotations refer to the [AlpacaFarm paper](https://arxiv.org/abs/2305.14387). Differences with AlpacaFarm AlpacaEval is an improvement and simplification of the automatic pairwise preference simulator
from [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm).
Outside AlpacaFarm, you should be using AlpacaEval.
Here are the main differences:
- **AlpacaEval merges instructions and inputs**: The AlpacaEval evaluation is the same as the AlpacaFarm evaluation
except that the instruction and input fields are merged as `{instruction}\n\n{input}`. This affects 1/4 of the
examples in the AlpacaFarm evaluation set (the [self-instruct](https://arxiv.org/abs/2212.10560) subset).
This simplification provides a more fair comparison for models that were not trained by distinguishing between
the two fields.
- **AlpacaEval handles longer generations**: Models in AlpacaFarm were limited to a maximum number of 300 tokens for
generations. We
change this number to 2000 for AlpacaEval. Note that this also affects the reference generations (`text-davinci-003`),
so the results on AlpacaEval are not comparable to those on AlpacaFarm even for examples that had no input
field.
- **AlpacaEval removes intra- and inter-annotator variance**: The AlpacaFarm simulator replicates human annotation in
terms of both mode behavior and diversity.
In particular, AlpacaFarm's simulator uses a pool of models and prompts and adds noise to replicate human intra- and
inter-annotator variance.
If the goal is to use an automatic annotator for evaluation or simply training better models, then this variance
may not be desirable. The default annotators in AlpacaEval thus don't have this variance. We give the option to add it
back by
using `--annotators_config 'alpaca_farm'` and `--p_label_flip 0.25` when creating an evaluator. Related work There have been several works that propose new automatic annotators for instruction-following models. Here we list the
ones that we are aware of and discuss how they differ from ours. We evaluated all of those
in [our evaluator's leaderboard](https://github.com/tatsu-lab/alpaca_eval#evaluators).
- **Vicuna/lmsys** The lmsys annotator (`lmsys_gpt4`) evaluates the pair by asking the annotator a score from 1-10 for
each output, and then selecting the output with the highest score as preferred. They do not randomize over output
order and they ask an explanation _after_ the score. Overall, we found that this annotator has strong bias towards
longer outputs (0.74) and relatively low correlation with human annotations (63.2).
- **AlpacaFarm** The best AlpacaFarm annotator (`alpaca_farm_greedy_gpt4`) evaluates the pair by directly asking the
annotator
which output it prefers. Furthermore, it batches 5 examples together to amortize the length of the prompt and
randomizes the order of outputs. Overall, we
found that this annotator has much less bias towards longer outputs (0.60) and is faster (878 seconds/1000 examples)
than others. It has a
slightly higher correlation with the majority of human annotations (66.4) than humans themselves (65.7).
However, it is more expensive ($15.3/1000 examples) and doesn't work with very long outputs given the batching.
- **Aviary** The Aviary annotator (`aviary_gpt4`) asks the annotator to order the output by its preference, rather than
simply selecting the preferred output. It does not randomize the order of outputs and uses high temperature for
decoding (0.9). Overall, we found that this annotator has relatively strong bias towards longer outputs (0.70) and
very high
correlation with human annotations (69.1). By decreasing the temperature and randomizing the order of outputs,
we [further improved](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/evaluators_configs/README.md)
the correlation to 69.8 (`improved_aviary_gpt4`) but this further increased the length bias to 0.73.
Our `alpaca_eval_gpt4` is a mix between the AlpacaFarm and Aviary annotators. It asks the annotator to order the outputs
by preference, but it uses temperature 0, randomizes over outputs, and made some modifications to the prompt to decrease
length bias to 0.68.
Other related work include recent papers which analyze automatic evaluators.
For example:
- [AlpacaFarm Appx C](https://arxiv.org/abs/2305.14387)
and [Large Language Models are not Fair Evaluators](https://arxiv.org/abs/2305.17926v1) both found that automatic
annotators have
a position bias.
- [AlpacaFarm Sec. 5.2.](https://arxiv.org/abs/2305.14387)
and [The False Promise of Imitating Proprietary LLMs](https://arxiv.org/abs/2305.15717) both found that
automatic
annotators favor style (e.g. use of list, tone, word choice, length) over factuality. Interpreting annotations For all models you can find the auto-annotations under `results/ /*/annotations.json`. The annotations have the following columns:
- `instruction`: the prompt
- `generator_1`: the baseline model
- `output_1`: the output of the baseline model
- `generator_2`: the model being evaluated
- `output_2`: the output of the model being evaluated
- `annotator`: the auto-annotator
- `preference`: the result of the auto-annotator. This is a float between 1 and 2. Closer to 1 means that the auto-annotator prefers `output_1`, closer to 2 means that it prefers `output_2`. For AlpacaEval 2.0, `preference-1` corresponds to the probability of `output_1` being preferred. For AlpacaEval 1.0, `preference` is 1 if `output_1` is preferred, 2 if `output_2` is preferred, and 1.5 if they are the same. The win rate is always`(preference -1).mean()`.
- `raw_completion`: the raw output of the auto-annotator. This field contains the completions before de-randomization of the order between `output_1` and `output_2`! It is thus much harder to interpret; see below for more information.
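As a concrete illustration of the `preference` column, here is a minimal sketch (not part of the package) that loads one `annotations.json` file and computes the win rate as `(preference - 1).mean()`; the path is a placeholder, and the file is assumed to be a JSON list of records like the example shown further below.
```python
# Minimal sketch: win rate of the evaluated model (generator_2) from annotations.json.
import json

with open("results/<model>/<annotator>/annotations.json") as f:  # placeholder path
    annotations = json.load(f)

preferences = [r["preference"] for r in annotations if r.get("preference") is not None]
win_rate = sum(p - 1 for p in preferences) / len(preferences)
print(f"win rate vs. baseline: {100 * win_rate:.2f}%")
```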
**Chain of thought**
For some annotators, e.g. `alpaca_eval_cot_gpt4_turbo_fn` we use **chain of thought reasoning** to make the models preferences more interpretable. Those can then be found under `concise_explanation`. To interpret them, you should also look at `referenced_models` which translates the temporary model name (in the prompt) to the actual output. Below, we provide more explanation as to what is happening behind the scenes.
You can check the `raw_annotations["concise_explanation"]` column in `annotations.json` (e.g. [here](https://github.com/tatsu-lab/alpaca_eval/tree/main/results/gpt4/alpaca_eval_cot_gpt4_turbo_fn/annotations.json)) which contains the chain of thought reasoning of the auto annotator. Note that the raw annotations are not modified by the randomization of the order of the outputs. In particular, `"m"` and `"M"` can sometimes refer to the first model (the reference) and sometimes to the second model (the model being evaluated). To understand which model is being referred to, you should use the columns `preference` and `ordered_models`. To make this easier, we add a column `"referenced_models"` mapping the model names to the corresponding outputs. For example, in the following annotation we see that the preference is 1.0 (i.e. `output_1`) and corresponds to model `M` in `concise_explanation` (see `ordered_models`).
```json
{
"instruction": "How did US states get their names?",
"output_1": "The names of U.S. states are derived from a variety of languages and sources, including indigenous American languages, European languages (primarily English, Spanish, and French), as well as other historical influences. Here is a brief overview of how some states got their names:\n\n1. **Indigenous American Languages**: Many state names come from Native American words or names of tribes, often from the languages of the indigenous peoples who lived in or near those areas.\n - **Alabama**: Possibly derived from the Choctaw language, meaning \"thicket clearers.\"\n - **Connecticut**: From a Mohegan-Pequot word meaning \"long tidal river.\"\n - **Massachusetts**: [...]",
"generator_1": "gpt4_1106_preview",
"dataset": "helpful_base",
"output_2": "The names of the 50 U.S. states come from a variety of sources, including Native American languages, European languages, and historical figures. Here's a brief overview of how some states got their names:\n\n1. Native American origins: Many states have names derived from Native American languages. For example, Alabama comes from the Choctaw word \"Albah amo,\" meaning \"plant gatherers\" or \"herb gatherers.\" Similarly, the name Mississippi comes from the Ojibwe word \"Misi-ziibi,\" meaning \"great river.\"\n\n2. European languages: [...].",
"generator_2": "gpt4",
"annotator": "alpaca_eval_cot_gpt4_turbo_fn",
"preference": 1.0,
"raw_completion": {
"concise_explanation": "Model M provided a more detailed and structured response, including bold headings for each category and a wider range of examples. It also included additional categories such as 'Other European Languages' and 'Combination of Languages and Influences', which added depth to the explanation. Model m's response was accurate but less comprehensive and lacked the clear structure found in Model M's output.",
"ordered_models": [
{
"model": "M",
"rank": 1
},
{
"model": "m",
"rank": 2
}
]
},
"referenced_models": {
"M": "output_1",
"m": "output_2"
}
}
```
Major updates
- 12th March 2024: updated to use length-controlled (LC) win rates. This is a debiased version of the win rate that controls for the length of the outputs.
- 3rd January 2024: updated to AlpacaEval 2.0, which uses GPT4-turbo as baseline and annotator.
- 2nd January 2024: added Azure API and more general way of setting client configs. See [here](https://github.com/tatsu-lab/alpaca_eval/tree/main/client_configs/README.md)
- 19th June 2023: add leaderboard `chatgpt_fn` that anyone can use (no waiting lists).
- 19th June 2023: update to
use [OpenAI's function calling](https://openai.com/blog/function-calling-and-other-api-updates).
Example: [`chatgpt_fn`](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/chatgpt_fn)
or [`alpaca_eval_gpt4_fn`](https://github.com/tatsu-lab/alpaca_eval/tree/main/src/alpaca_eval/evaluators_configs/alpaca_eval_gpt4_fn).;An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.;deep-learning,evaluation,foundation-models,instruction-following,large-language-models,leaderboard,nlp,rlhf | tatsu-lab/alpaca_eval |
obgnail/typora_plugin;Typora Plugin Currently supported features: | No. | Plugin | Function |
| ---- | ----------------------- | ------------------------------------------------------------ |
| 1 | window_tab | Tab management |
| 2 | search_multi | Global multi-keyword search |
| 3 | multi_highlighter | Multi-keyword highlighting |
| 4 | collapse_paragraph | Collapse sections |
| 5 | collapse_list | Collapse lists |
| 6 | collapse_table | Collapse tables |
| 7 | md_padding | Optimize spacing in mixed Chinese/English text |
| 8 | slash_commands | Slash commands |
| 9 | templater | File templates |
| 10 | resourceOperation | One-click removal of unused images |
| 11 | fence_enhance | One-click copy and collapse for code blocks |
| 12 | toc | Generate an outline panel on the right side |
| 13 | commander | Command-line environment |
| 14 | mindmap | Generate a mind map from the document outline with one click |
| 15 | toolbar | Multi-purpose search |
| 16 | right_click_menu | Manage and invoke all plugins from the right-click menu |
| 17 | pie_menu | Pie menu |
| 18 | preferences | Enable or disable plugins |
| 19 | markmap | markmap component support |
| 20 | echarts | echarts component support |
| 21 | chart | chartjs component support |
| 22 | abc | abcjs component support |
| 23 | calendar | tui.calendar component support |
| 24 | marp | marp component support |
| 25 | callouts | callouts support |
| 26 | text_stylize | Text stylization |
| 27 | read_only | Read-only mode |
| 28 | blur | Blur mode |
| 29 | kanban | Kanban board |
| 30 | timeline | Timeline |
| 31 | chat | Chat |
| 32 | file_counter | Show the number of files in each directory |
| 33 | outline | Outline of tables, images, and code blocks |
| 34 | auto_number | Auto-number sections, tables, images, code blocks, etc. |
| 35 | imageReviewer | Image viewer |
| 36 | chineseSymbolAutoPairer | Auto-pair Chinese punctuation |
| 37 | datatables | Table enhancement (search, filter, pagination, sorting, etc.) |
| 38 | resize_table | Adjust table row height and column width |
| 39 | resize_image | Adjust image display size |
| 40 | export_enhance | Avoid losing images when exporting to HTML |
| 41 | go_top | Jump to the top or bottom of the document with one click |
| 42 | reopenClosedFiles | Reopen the tabs that were open when Typora last exited |
| 43 | truncate_text | Temporarily hide content to improve rendering performance for large files |
| 44 | markdownLint | Markdown format/lint checking |
| 45 | darkMode | Dark mode |
| 46 | noImageMode | No-image mode |
| 47 | pluginUpdater | One-click plugin upgrade |
| 48 | extractRangeToNewFile | Extract the selected text to a new file |
| 49 | fullPathCopy | Copy the heading path |
| 50 | autoTrailingWhiteSpace | Add trailing spaces to the document |
| 51 | redirectLocalRootUrl | Redirect the local resource root directory |
| 52 | blockSideBySide | Show active blocks side by side |
| 53 | scrollBookmarker | Bookmark manager |
| 54 | openInTotalCommander | Open in Total Commander |
| 55 | bingSpeech | Read aloud via Bing speech |
| 56 | cipher | Encrypt files |
| 57 | mermaid_replace | Replace the built-in mermaid component |
| 58 | help | User help |
| 59 | custom | Open platform for user-defined plugins (advanced) |
| 60 | hotkeyHub | Hotkey registration hub (advanced) |
| 61 | quickButton | Add function buttons in the bottom-right corner (advanced) |
| 62 | json_rpc | Control Typora from external programs (advanced) |
| 63 | old_window_tab | Tab management (deprecated) |
| 64 | article_uploader | 一键上传博客到支持的所有平台 | 尊重用户的一切选择 。本项目的任何插件、任何功能皆可永久启用 / 禁用。 如果各位有其他的需求,或发现 BUG,欢迎 提 issue ,欢迎 PR。如果能给我颗 star ⭐ 就更好了 :) 如何使用:方法一(自动) 目前此方法仅限 Windows 平台。 下载 插件源码的压缩包,并解压 进入 Typora 安装路径,找到包含 window.html 的文件夹 A(一般是 Typora/resources/window.html 或者 Typora/resources/app/window.html ) 打开文件夹 A,将源码的 plugin 文件夹粘贴进该文件夹下(参考方法二的图片) 进入文件夹 A/plugin/updater/ ,双击运行 updater.exe 。如果看到下图,说明成功 验证:重启 Typora,在正文区域点击鼠标右键,弹出右键菜单栏,如果能看到 常用插件 栏目,说明一切顺利 每个插件皆有配置选项。开发者鼓励您修改配置,以符合自身需求。配置文件夹: A/plugin/global/settings/ 本插件系统支持一键升级: 常用插件 -> 二级插件 -> 升级插件 如何使用:方法二(手动) 此方法支持 Windows、Linux 平台。 下载 插件源码的压缩包,并解压 进入 Typora 安装路径,找到包含 window.html 的文件夹 A(一般是 Typora/resources/window.html 或者 Typora/resources/app/window.html ,推荐使用 everything 找一下) 打开文件夹 A,将源码的 plugin 文件夹粘贴进该文件夹下 根据文件夹 A 下是否含有 appsrc 目录判断 Typora 是否为新版本,有则新版本,无则旧版本 打开文件 A/window.html 若是新版本:搜索文件内容 <script src="./appsrc/window/frame.js" defer="defer"></script> 若是旧版本:搜索文件内容 <script src="./app/window/frame.js" defer="defer"></script> 在上述搜索内容的 后面 加入 <script src="./plugin/index.js" defer="defer"></script> 。保存文件。 验证:重启 Typora,在正文区域点击鼠标右键,弹出右键菜单栏,如果能看到 常用插件 栏目,说明一切顺利 | | 新版本 | 旧版本 |
| ------ | ---------------------------------------------------- | ------------------------------------------------------ |
| 步骤 4 | | |
| 步骤 5 | | | 虽然操作简单,还请务必对照上图谨慎操作。如果修改完 Typora 白屏了,很可能是你修改的时候疏忽了。 如果您是 Linux 平台且安装失败,大概率是权限问题导致的。请给上述的 plugin 文件夹权限。执行命令 chmod 777 your_typora_path/resources/plugin 。 如何使用:方法三(自动) 目前此方法仅限 archlinux 平台,aur 见 aur/typora-plugin yay -S typora-plugin 实现原理 前端 window.html 是 Typora 的初始文件,可以写入一个 <script> 标签实现功能,就和 Tampermonkey 脚本一样。 后端 因为 Typora 暴露了 reqnode 函数(require 的别名),所以可以使用 CommonJS 的 reqnode('path') 导入 Node.js 的 path 库,其他库同理。 因为 Typora 使用了 electron 不太安全的 executeJavaScript 功能,所以可以用此注入 JS 代码,从而劫持后端关键对象,进而实现 electron 的后端功能注入。理论上劫持了 electron 对象,你甚至可以在 Typora 里斗地主。 ```javascript
// 控制台输入下面命令: // 恭喜你成功让第二个窗口打印消息
JSBridge.invoke("executeJavaScript", 2, console.log("i am logging") ); // 恭喜你成功让所有窗口打印消息
ClientCommand.execForAll( console.log("i am logging") ); // 恭喜你成功获取到本窗口的BrowserWindow对象
global.reqnode('electron').remote.require('electron').BrowserWindow; // 恭喜你成功获取到所有窗口的BrowserWindow对象
ClientCommand.execForAll( console.log(global.reqnode('electron').remote.require('electron').BrowserWindow) ); // 恭喜你成功让窗口1执行语句_myValue=123,然后将变量_myValue传给窗口2
JSBridge.invoke('executeJavaScript', 1, "_myValue=123; JSBridge.invoke('executeJavaScript', 2, console.log(${_myValue}) )");
``` Q&A 我的 Typora 版本能用吗? 所有插件都在 0.9.98 版本(最后一个免费版本)和最新版本测试过。 本项目理论上支持所有 Typora 版本,但是 Typora 在 0.9.98 版本以后功能才稳定下来。 0.9.98 版本以下,可能会因为缺少某些功能导致部分插件失效 。 插件会失效吗? 理论上能保持长时间有效,且我在维护中。 如何永久禁用/启用某些插件? 右键菜单 -> 少用插件 -> 启停插件。 如何修改插件配置? 目前整个项目包含 500+ 项配置,可以比较完整的定义各个插件的行为。 这些配置全部位于 ./plugin/global/settings/ 目录中。修改配置的方法请阅读该目录下的 请读我.md 。 如何升级插件? 方法一(全自动,仅限 Windows): 右键菜单 -> 常用插件 -> 二级插件 -> 升级插件 。 方法二(半自动,仅限 Windows):有些用户将 Typora 装在 C 盘或者 Typora 的安装路径包含 Program Files 目录,经常会因为权限问题导致升级失败。此时可以通过以管理员身份打开终端,执行下面命令升级: bash
# 注意updater.exe改成你的路径
# proxy参数:你的代理URL。如果你所在地区能直连GitHub,可以删除此参数
D:/software/typora/resources/plugin/updater/updater.exe --action=update --proxy=http://127.0.0.1:7890 方法三(手动,支持 Windows、Linux): 安装方法二 重新走一遍。 我不想用了,如何卸载插件系统? 右键菜单 -> 少用插件 -> 帮助 -> 卸载插件。 支持 Typora for Mac 吗? 没有 Mac,故没做测试。 插件使用说明 所有的插件都提供了四种使用方法: 键盘党: 键入 ctrl+j,在输入框键入 plu+空格+插件名称 调出插件列表(详见 toolbar 插件) 快捷键(详见 hotkeyHub 插件) 鼠标党: 在正文区域右键,在弹出的右键菜单中直接调用(详见 right_click_menu 插件) 快捷按钮(详见 quickButton 插件) window_tab:标签页管理 鼠标置于标签页处,ctrl+滚轮 、 ctrl+shift+tab 、 ctrl+tab 、 ctrl+PgUp 、 ctrl+PgDn :切换标签 ctrl+w :关闭标签 ctrl+click 标签 、 向下拖拽标签 :新窗口打开 拖拽 :排序标签 鼠标右键标签页 :弹出标签的右键菜单 search_multi:全局多关键字搜索 功能:比如搜索同时包含 golang 和 install 和 生命周期 三个关键字的文件。 搜索关键字使用空格分隔,双引号包裹视为词组。eg: golang install 搜索同时包含 golang 和 install 的文件; "golang install" 搜索包含 golang install 的文件 ctrl+shift+P :打开搜索框 esc :关闭搜索框 enter :搜索 ArrowUp , ArrowDown :方向键上下选中 click 、 ctrl+enter :当前窗口打开 ctrl+click 、 ctrl+shift+enter :新窗口打开 ctrl+拖动输入框 :移动位置 multi_highlighter:多关键字高亮 功能:搜索并高亮关键字,并提供一键定位功能(左键下一个,右键上一个) 关键字使用空格分隔,双引号包裹视为词组。 ctrl+shift+H :打开搜索框 esc :关闭搜索框 enter :搜索 ctrl+拖动输入框 :移动位置 左键色块 :定位到下一个关键字 右键色块 :定位到上一个关键字 注意:当你鼠标点击文档内容时,会自动退出高亮状态。 这是 Typora 本身的限制导致的 。你可以试试 Typora 自身的 ctrl+F 搜索,在搜索关键字后,点击任意地方原先高亮的地方也会消失。 collapse_paragraph:章节折叠 功能:折叠 / 展开 章节下所有文本。支持折叠的标签:h1~h6。 ctrl+click :折叠 / 展开【单个章节】 ctrl+alt+click :折叠 / 展开【父章节下所有同级的章节】 ctrl+shift+alt+click :折叠 / 展开【所有同级的章节】 collapse_list:列表折叠 功能:折叠 / 展开 无序列表、有序列表、任务列表。 collapse_table:表格折叠 功能:折叠 / 展开 表格。 md_padding:中英文混排优化 功能:中英文混排时,中文与英文之间、中文与数字之间添加空格。 快捷键:ctrl+shift+B(原本的快捷键是 ctrl+shift+K,但是此快捷键已经被新版本 Typora 的其他功能占用了,目前已经改成 ctrl+shift+B,下面的动图懒得改了) 有些片段希望保持原状,这时可以用 md-padding-ignore 包裹起来。 ```markdown
下面是一段不需要格式化的文本 a*b=c, b>1 => a<c 现在开始又可以格式化了。
``` slash_commands:斜杠命令 功能:类似于 notion 的 slash command。 支持用户自定义命令,请前往配置文件修改: toml
[slash_commands]
COMMANDS = [
{ enable = true, type = "snippet", icon = "🧰", hint = "", keyword = "snippet", callback = "thisIsTestSnippet" },
{ enable = true, type = "command", icon = "🧾", hint = "二级标题", keyword = "h2", callback = "() => File.editor.stylize.changeBlock('header2', undefined, true)" },
{ enable = true, type = "command", icon = "🧰", hint = "置底部", keyword = "jumpBottom", callback = "() => File.editor.selection.jumpBottom()" },
...
] templater:文件模板功能 功能:类似于 obsidian 的文件模板功能,根据模板快速创建文件。 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 文件模板。 模板列表请前往配置文件修改。 resourceOperation:一键清除无用图片 功能:资源管理,清除无用图片 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 资源管理 fence_enhance:一键复制代码,折叠代码 toc:在右侧生成大纲目录 功能:Typora 侧边栏的【文件】和【大纲】不能同时显示,为了解决此问题,此插件会在右侧新增一个【大纲】。 使用方式: 方式一:常用插件 -> 二级插件 -> 大纲目录 方式二:右键点击侧边栏的【大纲】按钮 commander:命令行环境 有些插件依赖于此插件,不建议禁用。 功能:和 total commander 的命令行一样(快捷键也一样),一个快速执行命令的工具,并提供少量交互。 ctrl+G :弹出执行框 esc :隐藏执行框 ctrl+拖动输入框 :移动位置 支持 shell: cmd/bash :windows 或 Mac 的默认终端 powershell :微软的傻儿子 :D git bash :使用此终端前请保证安装了 git bash 并且加入环境变量 wsl :使用此终端前请保证安装了 wsl2,并且加入环境变量 内置环境变量: $f :当前文件路径 $d :当前文件的所属目录 $m :当前挂载的根目录 支持内建命令,方便快速调用。个人可按需自定义配置文件里的 BUILTIN 选项。 ```toml 默认的内建命令 目前支持4个参数: 1. name: 展示的名称(不可重复) 2. shell: cmd/bash、powershell、gitbash、wsl 3. hotkey: 快捷键(可选) 4. cmd: 执行的命令 BUILTIN = [
{ name = "", shell = "cmd/bash", cmd = "" },
{ name = "Explorer", shell = "powershell", hotkey = "ctrl+alt+e", cmd = "explorer $d" },
{ name = "Vscode", shell = "cmd/bash", cmd = "code $f" },
{ name = "WT", shell = "cmd/bash", cmd = "cd $d && wt" },
{ name = "GitCommit", shell = "cmd/bash", cmd = 'cd $m && git add . && git commit -m "update"' },
]
``` mindmap:根据文档大纲一键生成思维导图 功能:根据文档大纲一键生成 mermaid 思维导图 使用方式:右键菜单 -> 常用插件 -> 思维导图 markmap:提供 markmap 支持 使用方式: 方式一:右键菜单 -> 少用插件 -> markmap 方式二:直接点击右下角的 markmap 按钮 toolbar:多功能搜索 功能:类似于 vscode 的 ctrl+shift+p 功能 使用方式: 方式一:右键菜单 -> 少用插件 -> 多功能搜索 方式二:快捷键 ctrl+j 支持搜索: his :最新打开过的文件 plu :插件 tab :打开的标签页 ops :常用操作 out :文档大纲 mode :切换文件模式 theme :临时切换主题 func :功能列表 all :混合查找(所有项目都混在一起查找) 键入内容说明: 键入内容 = 搜索工具名称 + 空格 + 搜索内容 支持 交集查询 、 差集查询 ,并且可以随意组合(类似于 google 的正负向查询) 举例: his node learn :查找最近打开的文件,要求文件标题【包含 node 和 learn 两个关键字】 plu multi -search :查找插件,要求插件名【包含 multi 关键字,但是不包含 search 关键字】 tab -messing :查找所有打开的标签页,要求标签页名称【不包含 messing 关键字】 his close -win -mark 标签 :查找最近打开的文件,要求文件标题【包含 close、标签,不包含 win、mark】 right_click_menu:右键菜单统一管理插件 所有插件都支持在右键菜单中直接调用。鼠标党可以将右键菜单作为所有插件的主要调用方式。 可以通过修改配置文件自定义右键菜单: ```toml 每一个MENUS对应一个一级菜单,允许无限添加一级菜单,允许重复添加同一个插件 NAME: 一级菜单的名称 LIST: 二级菜单的插件列表(使用"---"代表在页面上插入一个divider,以作分隔) [[right_click_menu.MENUS]]
NAME = "少用插件"
LIST = [
"window_tab", "resize_image", "resize_table", "fence_enhance", "export_enhance",
"datatables", "markmap", "auto_number", "truncate_text", "right_click_menu",
"---",
"blur", "go_top", "text_stylize", "toolbar",
"---",
"file_counter", "mermaid_replace", "test",
]
[[right_click_menu.MENUS]]
NAME = "常用插件"
LIST = [
"commander", "mindmap", "collapse_paragraph", "custom",
"---",
"search_multi", "multi_highlighter", "outline", "md_padding", "read_only",
]
``` pie_menu:圆盘菜单 使用方式: Ctrl+鼠标右键:弹出圆盘菜单 鼠标中键:旋转圆盘菜单 鼠标左键圆心:固定圆盘菜单,圆盘不再自动消失 鼠标右键圆心:展开圆盘菜单,圆盘不再收缩 preferences:启停插件 使用方式:右键菜单 -> 少用插件 -> 启停插件 echarts:提供 echarts 支持 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 插入 echarts chart:提供 chartjs 支持 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 插入 chart abc:提供 abc 组件支持 calendar: 提供 tui.calendar 组件支持 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 插入 calendar marp:提供 marp 组件支持 功能:使用 markdown 做 PPT。 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 插入 marp。 callouts: 提供 callouts 支持 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 插入 callouts 数量、类型、颜色、icon 皆可自己定义,请前往配置文件修改: toml
[[callouts.config.list]]
type = "note"
background_color = "#e1d1eb"
left_line_color = "#6a1b9a"
icon = "\\f040" kanban:看板 拓展代码语法,添加看板功能。 timeline:时间线 拓展代码语法,添加时间线功能。 chat:聊天 拓展代码语法,添加聊天功能。 text_stylize:文字风格化 功能:将文字转为 html 格式,改变文字样式。 使用方式:右键菜单 -> 少用插件 -> 文字风格化。 read_only:只读模式 功能:只读模式下文档不可编辑。(开启后,右下角数字统计区域会出现 ReadOnly 字样) 快捷键:ctrl+shift+R blur:模糊模式 功能:开启后,只有当前聚焦的组件可见,其余模糊。可以用于防偷窥。 使用方式:右键菜单 -> 少用插件 -> 模糊模式 注意:此插件只能高版本 Typora 使用,暂时没有兼容低版本。 file_counter:显示文件数 outline:以表格、图片、代码块形式的大纲 使用方式:右键菜单 -> 常用插件 -> 类别大纲 auto_number:自动编号 支持编号的组件(皆可临时或永久打开/关闭): 标题 大纲 TOC 表格 图片 代码块 注意:通过注入 CSS 实现此功能,有可能会与你使用的 theme 冲突。 和其他使用 Theme CSS 的实现方式不同,此插件通过修改内置函数,完美解决导出 PDF 后侧边栏没有编号的问题 :) 根据 Markdown 最佳实践 ,一篇文档应该 有且仅有 一个 h1,故此插件从 h2 开始编号。 imageReviewer:图片查看器 功能:一站式图片查看,并且提供简单图片编辑。 使用方式: 方式一:点击右下角【查看图片】按钮 方式二:右键菜单 -> 常用插件 -> 二级插件 -> 图片查看器 chineseSymbolAutoPairer:中文符号自动补全 功能:输入 《 【 ( ‘ “ 「 符号时自动补全。 自动补全的符号支持自定义: ```toml 需要自动补全的符号(第一项为输入符号,第二项为补全符号) auto_pair_symbols = [
["(", ")"],
["《", "》"],
["‘", "’"],
["“", "”"],
["【", "】"],
["「", "」"],
]
``` datatables:表格增强 功能:增强表格。提供搜索、过滤、分页、排序等功能。 使用方式:将光标定位在表格 -> 右键菜单 -> 少用插件 -> 表格增强。 其实此插件可以是提供开放能力的,实现类似于 obsidian 的 dataview 插件的功能。不过暂时不做,原因: 私以为 Typora 的用户并不需要大量用到此功能。 需要用户熟悉 javascript 以及 dataTables.js 的 API。成本太高。 需要编写大量的配套代码。 resize_table:拖动调整表格大小 功能: ctrl+鼠标拖动 :修改表格的行高列宽。 resize_image:调整图片大小 功能: ctrl+鼠标滚轮滚动 :调整图片大小。 export_enhance:导出增强 功能:导出 html 时,将图片转为 base64,避免图片丢失。 go_top: 一键到顶 功能:在右下角添加一个一键到顶的按钮。 reopenClosedFiles:打开上次退出 Typora 时尚未关闭的标签页 功能:自动 或者 通过快捷键打开上一次退出 Typora 时尚未关闭的标签页 此插件仅在 window_tab 启用时生效 truncate_text:暂时隐藏内容,提高大文件渲染性能 功能:大文件在 Typora 的渲染性能很糟糕,用此插件暂时隐藏内容(只是隐藏显示,不修改文件),提高渲染性能。也可以用于防偷窥。 使用方式:右键菜单 -> 少用插件 -> 文本截断。 包含的功能如下: 隐藏最前面:隐藏最前面的文本段,只留下最后 80 段。 重新显示:重新显示之前隐藏的所有文本段。 根据当前可视范围显示:根据当前可视范围显示文本段。 原理:通过设置 DOM 元素的 display 样式为 none 来隐藏元素,让元素不占用渲染树中的位置,对隐藏的元素操作不会引发其他元素的重排。 collapse_paragraph (章节折叠功能)可以很好的替代此插件,建议使用 collapse_paragraph。 markdownLint:markdown 格式规范检测 使用方式: 方式一:右键菜单 -> 常用插件 -> 二级插件 -> md 格式规范检测 方式二:点击右上角的小方块 darkMode:夜间模式 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 夜间模式 noImageMode:无图模式 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 无图模式 pluginUpdater:一键升级插件 使用方式:右键菜单 -> 常用插件 -> 二级插件 -> 升级插件。 众所周知,有些用户并不能裸连 github 下载最新插件,故提供了设置代理功能(默认为系统代理) extractRangeToNewFile:提取选区文字到新文件 使用方式:选中一些文字 -> 右键菜单 -> 常用插件 -> 二级插件 -> 提取选区文字到新文件。 fullPathCopy:复制标题路径 使用方式:将光标定位到标题上 -> 右键菜单 -> 常用插件 -> 二级插件 -> 复制标题路径。 就会生成如下文字,并复制到剪切板: README.md\Typora Plugin 一级标题\插件使用说明 二级标题\fullPathCopy:复制标题路径 三级标题 autoTrailingWhiteSpace:为文档添加结尾空格 使用方式:将光标定位到标题上 -> 右键菜单 -> 常用插件 -> 二级插件 -> 为文档添加结尾空格。 根据严格的 Markdown 换行语法,需要在结尾添加两个空格以表示换行。此工具能一键添加空格。 此插件默认关闭,需手动开启。 redirectLocalRootUrl:重定向本地资源根目录 功能:如果你主要使用 obsidian 或 joplin 来管理文件,偶尔用 typora 打开文件。就会遇到一个问题:obsidian 或 joplin 都是将本地资源放在同一个目录中(vault),这导致在 typora 打开后文件由于路径错误,无法访问本地资源。此插件就是为了解决此问题,重定向本地资源根目录。 此插件默认关闭,需手动开启。 使用此插件前需要设置如下配置选项: ```toml 资源根目录,支持绝对路径(如D:\tmp\img)和相对路径(如.\assets),填写时请注意转义反斜线(若为空,则此插件失效) root = "./" 过滤的正则表达式:只有文件路径命中filter_regexp匹配的文件才使用此插件(若为空,则全部文件都使用此插件) filter_regexp = ""
``` blockSideBySide:并列显示活动块 功能:活动块默认是上下显示,改为并列显示。 注意:通过注入 CSS 实现此功能,有可能会与你使用的 theme 冲突。 此插件默认关闭,需手动开启。 openInTotalCommander:在 total commander 打开当前文件 使用方式:将光标定位到标题上 -> 右键菜单 -> 常用插件 -> 二级插件 -> TC 打开。 使用此插件前,需要您在配置手动修改 TC 的安装路径。 此插件默认关闭,需手动开启。 scrollBookmarker: 书签管理器 使用方式: 使用 alt+click 正文内容,打上书签。 接着调出书签管理器,点击上面的书签,即可跳转到书签。 调出书签管理器: 方式一:右键菜单 -> 常用插件 -> 二级插件 -> 书签管理器 方式二:点击右下角的【书签管理器】按钮 bingSpeech:必应朗读 使用方式:鼠标框选一小段文字 -> 右键菜单 -> 常用插件 -> 二级插件 -> 必应朗读 实现原理:采用爬虫技术,盗取必应翻译的播放语音功能 注意:功能完全依赖于外部环境,因此不能保证成功。开发者一时兴起做的,仅作娱乐使用。 help: 用户帮助 使用方式:右键菜单 -> 少用插件 -> 帮助。 cipher:加密文件 使用方式:右键菜单 -> 少用插件 -> 加密文件。 mermaid_replace:替换 mermaid 如果你不愿意更新 Typora 版本,同时又想使用新版本的 mermaid,或者想自定义 mermaid 样式,可以使用此插件。 此插件默认关闭,需手动开启。 hotkeyHub:快捷键注册中心(高级) 此插件是高级插件,仅对有 javascript 基础的用户开放。 功能:以声明的形式,为【任意插件系统函数】或【任意自定义函数】绑定快捷键。 具体使用请参考 hotkey.default.toml 。 quickButton:于右下角添加功能按钮(高级) 此插件是高级插件,仅对有 javascript 基础的用户开放。 功能和 hotkeyHub 类似,以声明的形式,为【任意插件系统函数】设置快捷按钮。 custom:开放平台,用户自定义插件(高级) 此插件是高级插件,仅对有 javascript 基础的用户开放。 功能:提供开放能力,支持用户自己写插件。 具体使用请参考 请读我.md 。 json_rpc:外部操纵 Typora(高级) 功能:将包括 typora-plugin 所有功能在内的一切能力通过 json-rpc 的形式暴露出去,以供外部操纵 Typora。 具体使用请参考 请读我.md 。 此插件是高级插件,默认关闭,需手动开启,且仅对开发人员开放。 开启此插件后,外部将拥有 node、browser 两套环境,能完全控制电脑,因此如果您不是开发人员,请勿开启此插件。 article_uploader:上传博客文章 功能:用户点击或者使用快捷键触发当前文章的自动发布功能,程序根据用户配置自动发布博客文章到各大平台 具体使用参考: 请读我.md 。以上传到CSDN为例,Gif图如下: 小众软件推荐 通过注入 js 代码,为 Typora 额外增加 4 个功能 第一次上榜小众软件,心情非常冲动。同时祝小众软件越办越好。 致谢 Apache: ECharts MIT: markmap | Chart.js | abcjs | tui.calendar | Marp | DataTables | markdownlint no-licence: typora-tabbar-plugin | typora-side-by-side | md-padding 结语 本人并非前端开发,前端技术全靠 Google,JS/CSS 写的很烂。 本项目遵循 MIT 协议,请自由地享受。 如果对各位有用的话,欢迎 star ⭐,欢迎推荐给你志同道合的朋友使用。;Typora plugin. Feature enhancement tool | Typora 插件,功能增强工具;hijacking,plugin,typora,typora-plugin,scripts,electron,markdown,nodejs,typora-extension | obgnail/typora_plugin |
THUDM/AgentTuning;AgentTuning: Enabling Generalized Agent Abilities For LLMs 🤗 Model (AgentLM-70B) • 🤗 Dataset (AgentInstruct) • 📃 Paper • 🌐 Project Page 中文版(Chinese) AgentTuning represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs with robust generalization on unseen agent tasks while remaining strong in general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM. Main Result Figure 1 Overall score in our held-in and held-out tasks AgentInstruct AgentInstruct is a meticulously curated dataset featuring 1,866 high-quality interactions designed to enhance AI agents across 6 diverse real-world tasks. 🔍 CoT - Harness the power of ReAct , offering detailed thought explanations for each action, ensuring an intricate understanding of the model's decision-making journey. 🌍 Diversity - Spanning 6 real-world scenarios, from Daily Household Routines to Database Operations, with an average turn range from 5 to 35. 🎯 Precision - Not all trajectories of GPT-4 are effective! Ours are rigorously filtered using strict rewards to ensure top-notch quality. ✅ Assurance - Rigorous checks to avoid data leakage, ensuring pristine dataset quality. AgentInstruct dataset is available on 🤗Huggingface Repo . AgentLM AgentLM models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama2-chat series. The models follow the conversation format of Llama-2-chat , with the system prompt fixed as You are a helpful, respectful and honest assistant. The 7B, 13B, and 70B models are available on Huggingface model hub. | Model | Huggingface Repo |
| :---------: | :------------------------------------------------------------: |
| AgentLM-7B | 🤗Huggingface Repo |
| AgentLM-13B | 🤗Huggingface Repo |
| AgentLM-70B | 🤗Huggingface Repo |
Run AgentLM
We use Text-Generation-Inference to accelerate the evaluation process. You can start an AgentLM-70b instance with: bash
cd docker
docker compose -f agentlm-70b.yml up Upon successful execution, a client will be available on port 30070 . Here is an example of launching a request: ```bash
curl 127.0.0.1:30070/generate \
-X POST \
-H 'Content-Type: application/json' \
-d '{"inputs": "[INST] < >\nYou are a helpful, respectful and honest assistant.\n< >\n\nHello! [/INST]", "parameters":{"temperature": 1.0}}' {"generated_text":"Hello! How can I help you today? "} ``` You can replicate the services in the Docker Compose file to create multiple inference instances if more GPUs are available. Evaluation Here are details of our evaluation task, including 6 held-in tasks and 6 held-out tasks. Held-in Tasks The 6 held-in tasks are selected from AgentBench . However, since AgentBench is still under active development, the results from the latest branch might not fully reproduce the results reported in the paper. The evaluation code of this project is located in ./AgentBench.old. Held-out Tasks Held-out tasks are recompiled from the following frameworks: | Task | AgentTuning Setup | Original Repo |
You can replicate the services in the Docker Compose file to create multiple inference instances if more GPUs are available.
Evaluation
Here are details of our evaluation task, including 6 held-in tasks and 6 held-out tasks.
Held-in Tasks
The 6 held-in tasks are selected from AgentBench . However, since AgentBench is still under active development, the results from the latest branch might not fully reproduce the results reported in the paper. The evaluation code of this project is located in ./AgentBench.old.
Held-out Tasks
Held-out tasks are recompiled from the following frameworks:
| Task | AgentTuning Setup | Original Repo |
| ----------------- | ----------------------------------------------------------- | ------------------------------------------------------------ |
| SciWorld | 📂 eval_heldout/science-world | 💻 allenai/ScienceWorld |
| MiniWoB++ | 📂 eval_heldout/miniwob++ | 💻 Farama-Foundation/miniwob-plusplus |
| HotpotQA | 📂 eval_heldout/hotpotQA | 💻 salesforce/BOLAA |
| ReWOO | 📂 eval_heldout/rewoo | 💻 billxbf/ReWOO |
| WebArena | 📂 eval_heldout/webarena | 💻 web-arena-x/webarena |
| Digital Card Game | 💻 AgentBench.old ( Extend Split ) | 💻 THUDM/AgentBench | General Tasks MMLU Setup : Download the 14k multi-choice questions into ./data : bash
cd data
wget https://people.eecs.berkeley.edu/~hendrycks/data.tar
tar xf data.tar
cd .. Evaluate an HF model (organization/name or checkpoint path) by executing the evaluation script: bash
python eval_general/evaluate_mmlu_hf.py -c THUDM/AgentLM-70b GSM8k Setup : Start TGI worker Run the evaluation bash
python eval_general/evaluate_gsm8k_tgi.py --port 30070 Use --sample-input-file to load a local dataset, or GSM8K will be loaded for evaluation. MT-Bench Setup : Install FastChat locally bash
git clone https://github.com/lm-sys/FastChat.git
pip install -e FastChat Start TGI worker Run the evaluation script: bash
python eval_general/eval_mt_bench_tgi.py --host 127.0.0.1 --port 30070 --model-id agentlm-70b Evaluate the answers with GPT-4 bash
cd FastChat/fastchat/llm_judge
OPENAI_API_KEY=<your-api-key> python gen_judgment.py --model-list agentlm-70b --parallel <number-of-concurrent-requests>
Citation
If you find our work useful, please consider citing AgentTuning:
title={AgentTuning: Enabling Generalized Agent Abilities for LLMs},
author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2310.12823},
archivePrefix={arXiv},
primaryClass={cs.CL}
};AgentTuning: Enabling Generalized Agent Abilities for LLMs;[] | THUDM/AgentTuning |
luciddreamer-cvlab/LucidDreamer;😴 LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes 😴 [![Project](https://img.shields.io/badge/Project-LucidDreamer-green)](https://luciddreamer-cvlab.github.io/)
[![ArXiv](https://img.shields.io/badge/Arxiv-2311.13384-red)](https://arxiv.org/abs/2311.13384)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/LucidDreamer-Gaussian-colab/blob/main/LucidDreamer_Gaussian_colab.ipynb)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces/ironjr/LucidDreamer-mini) [![Github](https://img.shields.io/github/stars/luciddreamer-cvlab/LucidDreamer)](https://github.com/luciddreamer-cvlab/LucidDreamer)
[![X](https://img.shields.io/twitter/url?label=_ironjr_&url=https%3A%2F%2Ftwitter.com%2F_ironjr_)](https://twitter.com/_ironjr_)
[![LICENSE](https://img.shields.io/badge/license-CC--BY--NC--SA--4.0-lightgrey)](https://github.com/luciddreamer-cvlab/LucidDreamer/blob/master/LICENSE) https://github.com/luciddreamer-cvlab/LucidDreamer/assets/12259041/35004aaa-dffc-4133-b15a-05224e68b91e LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes * Jaeyoung Chung , * Suyoung Lee , Hyeongjin Nam , Jaerin Lee , Kyoung Mu Lee *Denotes equal contribution. 🤖 Install Ubuntu Prerequisite CUDA>=11.4 (higher version is OK). Python==3.9 (cannot use 3.10 due to open3d compatibility) Installation script ```bash
conda create -n lucid python=3.9
conda activate lucid
pip install peft diffusers scipy numpy imageio[ffmpeg] opencv-python Pillow open3d torch==2.0.1 torchvision==0.15.2 gradio omegaconf ZoeDepth pip install timm==0.6.7 Gaussian splatting pip install plyfile==0.8.1 cd submodules/depth-diff-gaussian-rasterization-min sudo apt-get install libglm-dev # may be required for the compilation. python setup.py install
cd ../simple-knn
python setup.py install
cd ../..
``` Windows (Experimental, Tested on Windows 11 with VS2022) Checklist Make sure that the versions of your installed CUDA , cudatoolkit , and pytorch match. We have tested on CUDA==11.8. Make sure you download and install C++ (>=14) from the Visual Studio build tools . Installation script ```bash
conda create -n lucid python=3.9
conda activate lucid
conda install pytorch=2.0.1 torchvision=0.15.2 torchaudio=2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install peft diffusers scipy numpy imageio[ffmpeg] opencv-python Pillow open3d gradio omegaconf ZoeDepth pip install timm==0.6.7 Gaussian splatting pip install plyfile==0.8.1 There is an issue with whl file so please manually install the module now. cd submodules\depth-diff-gaussian-rasterization-min\third_party
git clone https://github.com/g-truc/glm.git
cd ..\
python setup.py install
cd ..\simple-knn
python setup.py install
cd ....
``` ⚡ Usage We offer several ways to interact with LucidDreamer: A demo is available on ironjr/LucidDreamer HuggingFace Space (including custom SD ckpt) and ironjr/LucidDreamer-mini HuggingFace Space (minimal features / try at here in case of the former is down)
(We appreciate all the HF / Gradio team for their support). https://github.com/luciddreamer-cvlab/LucidDreamer/assets/12259041/745bfc46-8215-4db2-80d5-4825e91316bc Another demo is available on a Colab , implemented by @camenduru (We greatly thank @camenduru for the contribution). You can use the gradio demo locally by running CUDA_VISIBLE_DEVICES=0 python app.py (full feature including huggingface model download, requires ~15GB) or CUDA_VISIBLE_DEVICES=0 python app_mini.py (minimum viable demo, uses only SD1.5). You can also run this with command line interface as described below. Run with your own samples ```bash Default Example python run.py --image --text [Other options]
```
- Replace and with the paths to your image and text files. Other options --image ( -img ): Specify the path to the input image for scene generation. --text ( -t ): Path to the text file containing the prompt that guides the scene generation. --neg_text ( -nt ): Optional. A negative text prompt to refine and constrain the scene generation. --campath_gen ( -cg ): Choose a camera path for scene generation (options: lookdown , lookaround , rotate360 ). --campath_render ( -cr ): Select a camera path for video rendering (options: back_and_forth , llff , headbanging ). --model_name : Optional. Name of the inpainting model used for dreaming. Leave blank for default(SD 1.5). --seed : Set a seed value for reproducibility in the inpainting process. --diff_steps : Number of steps to perform in the inpainting process. --save_dir ( -s ): Directory to save the generated scenes and videos. Specify to organize outputs. Guideline for the prompting / Troubleshoot General guides If your image is indoors with specific scene (and possible character in it), you can just put the most simplest representation of the scene first , like a cozy livingroom for christmas, or a dark garage, etc. Please avoid prompts like 1girl because it will generate many humans for each inpainting task. If you want to start from already hard-engineered image from e.g., StableDiffusion model, or a photo taken from other sources, you can try using WD14 tagger ( huggingface demo ) to extract the danbooru tags from an image . Please ensure you remove some comma separated tags if you don't want them to appear multiple times. This include human-related objects, e.g., 1girl, white shirt, boots, smiling face, red eyes, etc. Make sure to specify the objects you want to have multiples of them. Q. I generate unwanted objects everywhere, e.g., photo frames. Manipulate negative prompts to set harder constraints for the frame object. You may try adding tags like twitter thumbnail, profile image, instagram image, watermark, text to the negative prompt. In fact, negative prompts are the best thing to try if you want some things not to be appeared in the resulting image. Try using other custom checkpoint models, which employs different pipeline methods: LaMa inpainting -> ControlNet-inpaint guided image inpainting. Visualize .ply files There are multiple available viewers / editors for Gaussian splatting .ply files. @playcanvas 's Super-Splat project ( Live demo ). This is the viewer we have used for our debugging along with MeshLab. @antimatter15 's WebGL viewer for Gaussian splatting ( Live demo ). @splinetool 's web-based viewer for Gaussian splatting. This is the version we have used in our project page's demo. 🚩 Updates ✅ December 12, 2023: We have precompiled wheels for the CUDA-based submodules and put them in submodules/wheels . The Windows installation guide is revised accordingly! ✅ December 11, 2023: We have updated installation guides for Windows. Thank you @Maoku for your great contribution! ✅ December 8, 2023: HuggingFace Space demo is out. We deeply thank all the HF team for their support! ✅ December 7, 2023: Colab implementation is now available thanks to @camenduru ! ✅ December 6, 2023: Code release! ✅ November 22, 2023: We have released our paper, LucidDreamer on arXiv . 🌏 Citation Please cite us if you find our project useful! latex
@article{chung2023luciddreamer,
title={LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes},
author={Chung, Jaeyoung and Lee, Suyoung and Nam, Hyeongjin and Lee, Jaerin and Lee, Kyoung Mu},
journal={arXiv preprint arXiv:2311.13384},
year={2023}
} 🤗 Acknowledgement We deeply appreciate ZoeDepth , Stability AI , and Runway for their models. 📧 Contact If you have any questions, please email robot0321@snu.ac.kr , esw0116@snu.ac.kr , jarin.lee@gmail.com . ⭐ Star History;Official code for the paper "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes".;[] | luciddreamer-cvlab/LucidDreamer |
StaticMania/keep-react;KEEP REACT Keep React is an open-source component library built on Tailwind CSS and React.js. It provides a versatile set of pre-designed UI components that enable developers to streamline the creation of modern, responsive, and visually appealing web applications. Table of Contents KEEP REACT Table of Contents Installation Vite React Application Next JS Application usage Components Contributing Figma License Installation Vite React Application Setting Up Keep React in Vite React Application Step 1: Create a Vite React Application console
npm create vite@latest my-project -- --template react
cd my-project Step 2: Install Tailwind CSS console
npm i autoprefixer postcss tailwindcss
npx tailwindcss init -p Step 3: Install Keep React: console
npm i keep-react phosphor-react Or with Yarn console
yarn add keep-react phosphor-react Or with Pnpm console
pnpm add keep-react phosphor-react Step 4: Go to the tailwind.config.js file and paste the
following code: jsx
import keepPreset from 'keep-react/preset'
export default {
content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}', 'node_modules/keep-react/**/*.{js,jsx,ts,tsx}'],
presets: [keepPreset],
} Step 5: Add Tailwind CSS to index.css File: css
@import 'keep-react/css';
@tailwind base;
@tailwind components;
@tailwind utilities; Next JS Application You can easily integrate keep-react into your Next.js application. Step 1: Install Next Js Application console
npx create-next-app@latest Ensure that you select tailwindcss as a dependency for your application during the setup. Would you like to use Tailwind CSS? -- Yes Step 2: Install Keep React console
npm i keep-react phosphor-react Or with Yarn console
yarn add keep-react phosphor-react Or with Pnpm console
pnpm add keep-react phosphor-react Step 3: Go to the tailwind.config.js file and paste the
following code js
module.exports = {
content: [
// ... (existing content)
'./node_modules/keep-react/**/*.{js,jsx,ts,tsx}',
],
presets: [require('keep-react/preset')],
} Step 4: Add Tailwind CSS to globals.css File: css
@import 'keep-react/css';
@tailwind base;
@tailwind components;
@tailwind utilities; Congratulations! You have successfully installed the Keep React. Now you can import any component from keep-react and use it in your project. usage jsx
'use client'
import { Button } from 'keep-react'
const App = () => {
return <Button>Button</Button>
}
export default App Components The Keep React offers a wide range of components to build your user interfaces. For detailed usage and examples of each component, refer to our component documentation . Contributing If you want to contribute to the Keep React, you can follow the contributing guide . Contributors This project exists thanks to all the people who contribute: Figma If you need access to Figma design files for the components, you can check out our website for more information: Get access to the Figma design files License The Keep-React name and logos are trademarks of StaticMania. Read about the licensing terms;Keep React is an open-source component library built on Tailwind CSS and React.js. It provides a versatile set of pre-designed UI components to build modern web applications.;component-library,components,design-systems,react,reactjs,tailwindcss,typescript,keep-react | StaticMania/keep-react |
d3george/slash-admin;Slash Admin Slash Admin is a modern admin dashboard template built with React 18, Vite, Ant Design, and TypeScript. It is designed to help developers quickly create powerful admin management systems. English | 中文 Sponsor Preview https://admin.slashspaces.com/ | | | ----------------------------------------------------------------- | ------------------------------------------------------------------- |
| | Features Built using React 18 hooks. Powered by Vite for rapid development and hot module replacement. Integrates Ant Design, providing a rich set of UI components and design patterns. Written in TypeScript, offering type safety and an improved development experience. Responsive design, adapting to various screen sizes and devices. Flexible routing configuration, supporting nested routes. Integrated access control based on user roles. Supports internationalization for easy language switching. Includes common admin features like user management, role management, and permission management. Customizable themes and styles to meet your branding needs. Mocking solution based on MSW and Faker.js. State management using Zustand. Data fetching using React-Query. Document Document Quick Start Get the Project Code bash
git clone https://github.com/d3george/slash-admin.git Install Dependencies In the project's root directory, run the following command to install project dependencies: bash
pnpm install Start the Development Server Run the following command to start the development server: bash
pnpm dev Visit http://localhost:3001 to view your application. Build for Production Run the following command to build the production version: bash
pnpm build Docker deployment Build image and Run container build image Enter the project root directory in the terminal and execute the following command to build the Docker image: docker build -t your-image-name . Make sure to replace your-image-name with your own image name run container Run your application in the Docker container using the following command: docker run -p 3001:80 your-image-name This will run your application on port 80 (exposed in Dockerfile ) of the container and map it to port 3001 on your host. Now you can access http://localhost:3001 to view the deployed applications. use docker-compose.yaml Enter the project root directory in the terminal and execute the following command to start Docker Compose: docker-compose up -d Docker Compose will build an image based on the configuration defined by 'docker-compose. yaml' and run the container in the background. After the container runs successfully, it can also be accessed through http://localhost:3001 To view the deployed applications. Git Contribution submission specification reference .commitlint.config.js feat new features fix fix the docs documentation or comments style code format (changes that do not affect code execution) refactor refactor perf performance optimization revert revert commit test test related chore changes in the construction process or auxiliary tools ci modify CI configuration and scripts types type definition file changes wip in development;A modern react admin. It is based on react 18, vite and TypeScript. It's fast ! ;admin-dashboard,antd,react,react-query,react-router-v6,tailwindcss,ts,vite,zustand,framer-motion | d3george/slash-admin |
ekmas/neobrutalism-components;Neobrutalism components Introduction Neobrutalism components are a collection of Tailwind components for React and Shadcn UI. Documentation Visit docs to get started. About I created this collection of components for people who want to learn more about neobrutalism style and to help them get started with creating neobrutalism layouts. Contributing Please read the contributing guide . License MIT;Collection of neobrutalism-styled Tailwind components.;nextjs,react-components,tailwind,neobrutalism,react,react-component-library,react-typescript,react-ui-components,ui-library,neobrutalism-components | ekmas/neobrutalism-components |
nalgeon/codapi;Interactive code examples for documentation, education and fun 🎉 Codapi is a platform for embedding interactive code snippets directly into your product documentation, online course or blog post. ┌───────────────────────────────┐
│ def greet(name): │
│ print(f"Hello, {name}!") │
│ │
│ greet("World") │
└───────────────────────────────┘
Run ► Edit ✓ Done
┌───────────────────────────────┐
│ Hello, World! │
└───────────────────────────────┘ Codapi manages sandboxes (isolated execution environments) and provides an API to execute code in these sandboxes. It also provides a JavaScript widget codapi-js for easier integration. Highlights: Automatically converts static code examples into mini-playgrounds. Lightweight and easy to integrate. Sandboxes for any programming language, database, or software. Open source. Uses the permissive Apache-2.0 license. For an introduction to Codapi, see this post: Interactive code examples for fun and profit . Installation See Installing Codapi for details. Usage See API to run sandboxed code using the HTTP API. See codapi-js to embed the JavaScript widget into a web page. Contributing Contributions are welcome. For anything other than bugfixes, please first open an issue to discuss what you want to change. Be sure to add or update tests as appropriate. Funding Codapi is mostly a one-man project, not backed by a VC fund or anything. If you find Codapi useful, please consider sponsoring it on GitHub. It really helps to move the project forward. ♥ Become a sponsor to support Codapi. ★ Subscribe to stay on top of new features.;Interactive code examples for documentation, education and fun;playground,sandbox,snippets,code-playground,interactive-code,interactive-snippets | nalgeon/codapi |
gkamradt/LLMTest_NeedleInAHaystack;Needle In A Haystack - Pressure Testing LLMs A simple 'needle in a haystack' analysis to test in-context retrieval ability of long context LLMs. Supported model providers: OpenAI, Anthropic, Cohere Get the behind the scenes on the overview video . The Test Place a random fact or statement (the 'needle') in the middle of a long context window (the 'haystack') Ask the model to retrieve this statement Iterate over various document depths (where the needle is placed) and context lengths to measure performance This is the code that backed this OpenAI and Anthropic analysis . The results from the original tests are in /original_results . The script has upgraded a lot since those test were ran so the data formats may not match your script results. Getting Started Setup Virtual Environment We recommend setting up a virtual environment to isolate Python dependencies, ensuring project-specific packages without conflicting with system-wide installations. zsh
python3 -m venv venv
source venv/bin/activate Environment Variables NIAH_MODEL_API_KEY - API key for interacting with the model. Depending on the provider, this gets used appropriately with the correct sdk. NIAH_EVALUATOR_API_KEY - API key to use if openai evaluation strategy is used. Install Package Install the package from PyPi: zsh
pip install needlehaystack Run Test Start using the package by calling the entry point needlehaystack.run_test from command line. You can then run the analysis on OpenAI, Anthropic, or Cohere models with the following command line arguments: provider - The provider of the model, available options are openai , anthropic , and cohere . Defaults to openai evaluator - The evaluator, which can either be a model or LangSmith . See more on LangSmith below. If using a model , only openai is currently supported. Defaults to openai . model_name - Model name of the language model accessible by the provider. Defaults to gpt-3.5-turbo-0125 evaluator_model_name - Model name of the language model accessible by the evaluator. Defaults to gpt-3.5-turbo-0125 Additionally, LLMNeedleHaystackTester parameters can also be passed as command line arguments, except model_to_test and evaluator . Here are some example use cases. Following command runs the test for openai model gpt-3.5-turbo-0125 for a single context length of 2000 and single document depth of 50%. zsh
needlehaystack.run_test --provider openai --model_name "gpt-3.5-turbo-0125" --document_depth_percents "[50]" --context_lengths "[2000]" Following command runs the test for anthropic model claude-2.1 for a single context length of 2000 and single document depth of 50%. zsh
needlehaystack.run_test --provider anthropic --model_name "claude-2.1" --document_depth_percents "[50]" --context_lengths "[2000]" Following command runs the test for cohere model command-r for a single context length of 2000 and single document depth of 50%. zsh
needlehaystack.run_test --provider cohere --model_name "command-r" --document_depth_percents "[50]" --context_lengths "[2000]" For Contributors Fork and clone the repository. Create and activate the virtual environment as described above. Set the environment variables as described above. Install the package in editable mode by running the following command from repository root: zsh
pip install -e . The package needlehaystack is available for import in your test cases. Develop, make changes and test locally. LLMNeedleHaystackTester parameters: model_to_test - The model to run the needle in a haystack test on. Default is None. evaluator - An evaluator to evaluate the model's response. Default is None. needle - The statement or fact which will be placed in your context ('haystack') haystack_dir - The directory which contains the text files to load as background context. Only text files are supported retrieval_question - The question with which to retrieve your needle in the background context results_version - You may want to run your test multiple times for the same combination of length/depth, change the version number if so num_concurrent_requests - Default: 1. Set higher if you'd like to run more requests in parallel. Keep in mind rate limits. save_results - Whether or not you'd like to save your results to file. They will be temporarily saved in the object regardless. True/False. If save_results = True , then this script will populate a result/ directory with evaluation information. Due to potential concurrent requests each new test will be saved as a few file. save_contexts - Whether or not you'd like to save your contexts to file. Warning these will get very long. True/False final_context_length_buffer - The amount of context to take off each input to account for system messages and output tokens. This can be more intelligent but using a static value for now. Default 200 tokens. context_lengths_min - The starting point of your context lengths list to iterate context_lengths_max - The ending point of your context lengths list to iterate context_lengths_num_intervals - The number of intervals between your min/max to iterate through context_lengths - A custom set of context lengths. This will override the values set for context_lengths_min , max, and intervals if set document_depth_percent_min - The starting point of your document depths. Should be int > 0 document_depth_percent_max - The ending point of your document depths. Should be int < 100 document_depth_percent_intervals - The number of iterations to do between your min/max points document_depth_percents - A custom set of document depths lengths. This will override the values set for document_depth_percent_min , max, and intervals if set document_depth_percent_interval_type - Determines the distribution of depths to iterate over. 'linear' or 'sigmoid seconds_to_sleep_between_completions - Default: None, set # of seconds if you'd like to slow down your requests print_ongoing_status - Default: True, whether or not to print the status of test as they complete LLMMultiNeedleHaystackTester parameters: multi_needle - True or False, whether to run multi-needle needles - List of needles to insert in the context Other Parameters: model_name - The name of the model you'd like to use. Should match the exact value which needs to be passed to the api. Ex: For OpenAI inference and evaluator models it would be gpt-3.5-turbo-0125 . Results Visualization LLMNeedleInHaystackVisualization.ipynb holds the code to make the pivot table visualization. The pivot table was then transferred to Google Slides for custom annotations and formatting. See the google slides version . See an overview of how this viz was created here . OpenAI's GPT-4-128K (Run 11/8/2023) Anthropic's Claude 2.1 (Run 11/21/2023) Multi Needle Evaluator To enable multi-needle insertion into our context, use --multi_needle True . 
This inserts the first needle at the specified depth_percent , then evenly distributes subsequent needles through the remaining context after this depth. For even spacing, it calculates the depth_percent_interval as: depth_percent_interval = (100 - depth_percent) / len(self.needles) So, the first needle is placed at a depth percent of depth_percent , the second at depth_percent + depth_percent_interval , the third at depth_percent + 2 * depth_percent_interval , and so on. Following example shows the depth percents for the case of 10 needles and depth_percent of 40%. ```
depth_percent_interval = (100 - 40) / 10 = 6 Needle 1: 40
Needle 2: 40 + 6 = 46
Needle 3: 40 + 2 * 6 = 52
Needle 4: 40 + 3 * 6 = 58
Needle 5: 40 + 4 * 6 = 64
Needle 6: 40 + 5 * 6 = 70
Needle 7: 40 + 6 * 6 = 76
Needle 8: 40 + 7 * 6 = 82
Needle 9: 40 + 8 * 6 = 88
Needle 10: 40 + 9 * 6 = 94
``` LangSmith Evaluator You can use LangSmith to orchestrate evals and store results. (1) Sign up for LangSmith (2) Set env variables for LangSmith as specified in the setup.
(3) In the Datasets + Testing tab, use + Dataset to create a new dataset, call it multi-needle-eval-sf to start.
(4) Populate the dataset with a test question: question: What are the 5 best things to do in San Franscisco?
answer: "The 5 best things to do in San Francisco are: 1) Go to Dolores Park. 2) Eat at Tony's Pizza Napoletana. 3) Visit Alcatraz. 4) Hike up Twin Peaks. 5) Bike across the Golden Gate Bridge" (5) Run with --evaluator langsmith and --eval_set multi-needle-eval-sf to run against our recently created eval set. Let's see all these working together on a new dataset, multi-needle-eval-pizza . Here is the multi-needle-eval-pizza eval set, which has a question and reference answer. You can also and resulting runs:
https://smith.langchain.com/public/74d2af1c-333d-4a73-87bc-a837f8f0f65c/d Here is the command to run this using multi-needle eval and passing the relevant needles: needlehaystack.run_test --evaluator langsmith --context_lengths_num_intervals 3 --document_depth_percent_intervals 3 --provider openai --model_name "gpt-4-0125-preview" --multi_needle True --eval_set multi-needle-eval-pizza --needles '["Figs are one of the three most delicious pizza toppings.", "Prosciutto is one of the three most delicious pizza toppings.", "Goat cheese is one of the three most delicious pizza toppings."]' License This project is licensed under the MIT License - see the LICENSE file for details. Use of this software requires attribution to the original author and project, as detailed in the license.;Doing simple retrieval from LLM models at various context lengths to measure accuracy;[] | gkamradt/LLMTest_NeedleInAHaystack |
ivanfioravanti/chatbot-ollama;Chatbot Ollama About Chatbot Ollama is an open source chat UI for Ollama. This project is based on chatbot-ui by Mckay Wrigley . Updates Chatbot Ollama will be updated over time. Next up [ ] pull a model [ ] delete a model [ ] show model information Docker Build locally: shell
docker build -t chatbot-ollama .
docker run -p 3000:3000 chatbot-ollama Pull from ghcr: bash
docker run -p 3000:3000 ghcr.io/ivanfioravanti/chatbot-ollama:main Running Locally 1. Clone Repo bash
git clone https://github.com/ivanfioravanti/chatbot-ollama.git 2. Install Dependencies bash
npm ci 3. Run Ollama server Either via the cli: bash
ollama serve or via the desktop client 4. Run App bash
npm run dev 5. Use It You should be able to start chatting. Configuration When deploying the application, the following environment variables can be set: | Environment Variable | Default value | Description |
| --------------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------- |
| DEFAULT_MODEL | mistral:latest | The default model to use on new conversations |
| NEXT_PUBLIC_DEFAULT_SYSTEM_PROMPT | see here | The default system prompt to use on new conversations |
| NEXT_PUBLIC_DEFAULT_TEMPERATURE | 1 | The default temperature to use on new conversations | Contact If you have any questions, feel free to reach out to me on X .;Chatbot Ollama is an open source chat UI for Ollama.;[] | ivanfioravanti/chatbot-ollama |