| text (string, lengths 74 to 478k) | repo (string, lengths 7 to 106) |
| --- | --- |
huggingface/gsplat.js;gsplat.js JavaScript Gaussian Splatting library gsplat.js is an easy-to-use, general-purpose, open-source 3D Gaussian Splatting library, providing functionality similar to three.js but for Gaussian Splatting. Quick Start Live Viewer Demo: Explore this library in action in the 🤗 Hugging Face demo . Note: May not work on all devices; use Bonsai for the lowest memory requirements. Editor Demo: Try new real-time updates and editing features in the gsplat.js editor . Code Example: Start coding immediately with this jsfiddle example . Installation Prerequisites : Ensure your development environment supports ES6 modules. Set Up a Project: (If not already set up) Install Node.js and NPM , then initialize a new project using a module bundler like Vite : bash
npm create vite@latest gsplat -- --template vanilla-ts Test Your Environment: bash
cd gsplat
npm install
npm run dev Install gsplat.js: bash
npm install --save gsplat Usage Creating a Scene Import gsplat.js components and set up a basic scene. Load Gaussian Splatting data and start a rendering loop. (in src/main.ts if you followed the Vite setup) ```js
import * as SPLAT from "gsplat";

const scene = new SPLAT.Scene();
const camera = new SPLAT.Camera();
const renderer = new SPLAT.WebGLRenderer();
const controls = new SPLAT.OrbitControls(camera, renderer.canvas);

async function main() {
    const url = "https://huggingface.co/datasets/dylanebert/3dgs/resolve/main/bonsai/bonsai-7k.splat";
    await SPLAT.Loader.LoadAsync(url, scene, () => {});

    const frame = () => {
        controls.update();
        renderer.render(scene, camera);

        requestAnimationFrame(frame);
    };

    requestAnimationFrame(frame);
}

main();
``` This script sets up a basic scene with Gaussian Splatting data loaded from a URL and starts a rendering loop. FAQ Q: Can I use .ply files? A: Yes, gsplat.js supports .ply files. See the ply-converter example for details on how to convert .ply to .splat . Alternatively, convert PLY files from a URL in this jsfiddle example . Q: What are .splat files? A: .splat files are a compact form of the splat data, offering quicker loading times than .ply files. They consist of a raw Uint8Array buffer. ⚠️ The .splat format does not contain SH coefficients, so colors are not view-dependent. Q: Can I convert .splat files to .ply? A: Yes, see the commented code in the ply-converter example . Alternatively, convert .splat to .ply from a URL in this jsfiddle example . ⚠️ When converting .ply -> .splat -> .ply , SH coefficients will be lost. License This project is released under the MIT license. It is built upon several other open-source projects: three.js , MIT License (c) 2010-2023 three.js authors antimatter15/splat , MIT License (c) 2023 Kevin Kwok UnityGaussianSplatting , MIT License (c) 2023 Aras Pranckevičius Please note that the license of the original 3D Gaussian Splatting research project is non-commercial. While this library provides an open-source rendering implementation, users should consider the source of the splat data separately. Contact Feel free to open issues, join the Hugging Face Discord , or email me directly at dylan@huggingface.co .;JavaScript Gaussian Splatting library.;[] | huggingface/gsplat.js |
Jinnrry/PMail;PMail A server, a domain, a line of code, a minute, and you'll be able to build a domain mailbox of your own. 中文文档 (Chinese documentation) I'm Chinese, and I'm not good at English, so I apologise for my translation. Introduction PMail is a personal email server that pursues a minimal deployment process and extremely low resource consumption. It runs as
a single file and contains a complete send/receive mail service and web-based mail management functions. Just a server , a
domain name , a line of code , a minute of deployment time , and you will be able to build a domain name mailbox of your
own . All kinds of PRs are welcome, whether you are fixing bugs, adding features, or optimizing translations. We are also calling for a
beautiful and cute logo for this project! Features Single file operation and easy deployment. The binary file is only 15MB and takes up less than 10MB of memory while running. Supports DKIM and SPF checks; scores 10 points on Email Test if correctly configured. Implements the ACME protocol, so the program automatically obtains and renews Let's Encrypt certificates. By default, an SSL certificate is generated for the web service, allowing pages to use the https protocol.
If you have your own gateway or don't need https, set httpsEnabled to 2 in the configuration file so that the web
service will not use https.
(Note: Even if you don't need https, please make sure the path to the SSL certificate file is correct; although the web
service no longer uses the certificate, the SMTP protocol still needs it.) Supports the POP3 and SMTP protocols, so you can use any mail client you like. How to run 0. Check Your IP / Domain First go to spamhaus and check your domain name and server IP for blocking records. 1. Download Click Here to download a program file that matches your system. Or use Docker docker pull ghcr.io/jinnrry/pmail:latest 2. Run ./pmail -p 80 -p sets the HTTP port of the bootstrap setup interface; the default is port 80. Note that this parameter only affects the bootstrap setup phase; if you need to change the port after setup is complete, modify the configuration file. [!IMPORTANT]
The SSL certificate will not be set automatically if the bootstrap setup phase uses a port other than 80. Or docker run -p 25:25 -p 80:80 -p 443:443 -p 110:110 -p 465:465 -p 995:995 -v $(pwd)/config:/work/config ghcr.io/jinnrry/pmail:latest [!IMPORTANT]
If your server has a firewall turned on, you need to open ports 25, 80, 110, 443, 465 and 995. 3. Configuration Open http://127.0.0.1 in your browser, or visit your server's public IP, then follow the instructions to
configure. 4. Email Test Check whether your mailbox has completed all the security configuration. It is recommended to
use https://www.mail-tester.com/ for checking. Configuration file format description json
{
"logLevel": "info", //log output level
"domain": "domain.com", // Your domain
"webDomain": "mail.domain.com", // web domain
"dkimPrivateKeyPath": "config/dkim/dkim.priv", // dkim key path
"sslType": "0", // ssl certificate update mode, 0 automatic, 1 manual
"SSLPrivateKeyPath": "config/ssl/private.key", // ssl certificate path
"SSLPublicKeyPath": "config/ssl/public.crt", // ssl certificate path
"dbDSN": "./config/pmail.db", // database connect DSN
"dbType": "sqlite", //database type ,`sqlite` or `mysql`
"httpsEnabled": 0, // enabled https , 0:enabled 1:enablde 2:disenabled
"httpPort": 80, // http port . default 80
"httpsPort": 443, // https port . default 443
"spamFilterLevel": 0,// Spam filter level, 0: no filter, 1: filtering when `spf` and `dkim` don't pass, 2: filtering when `spf` don't pass
"isInit": true // If false, it will enter the bootstrap process.
} Mail Client Configuration POP3 Server Address : pop.[Your Domain] POP3 Port: 110/995(SSL) SMTP Server Address : smtp.[Your Domain] SMTP Port: 25/465(SSL) Plugin WeChat Push Telegram Push Web Push Plugin Install [!IMPORTANT]
Plugins run on your server as independent processes; please review the security of third-party plugins on your own. PMail currently only maintains the three plugins mentioned above. Copy the plugin binary file to /plugins For Developer Project Framework 1. FE: vue3+element-plus The code is in the fe folder. 2. Server: golang + MySQL/SQLite The code is in the server folder. 3. How to build make build 4. Unit test make test API Documentation go to wiki Plugin Development go to wiki;Private EMail Server;[] | Jinnrry/PMail |
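Once the server is configured, the SMTP settings listed above (smtp.[Your Domain], port 25 or 465 with SSL) can be smoke-tested from any client. Below is a minimal sketch using Python's standard smtplib; the domain, mailbox and password are placeholders to replace with your own values, and this is only an illustration, not part of PMail itself.

```python
# Minimal SMTP smoke test against a PMail deployment.
# All values below are placeholders: use your own domain, mailbox and password.
import smtplib
from email.message import EmailMessage

DOMAIN = "domain.com"              # the domain configured in PMail
USER = "admin@" + DOMAIN           # a mailbox that exists on the server
PASSWORD = "your-password"         # that mailbox's password

msg = EmailMessage()
msg["From"] = USER
msg["To"] = USER                   # send to yourself for a loopback test
msg["Subject"] = "PMail SMTP smoke test"
msg.set_content("If you can read this, SMTP submission over SSL works.")

# Port 465 is the SSL submission port listed above; port 25 is plain SMTP.
with smtplib.SMTP_SSL("smtp." + DOMAIN, 465) as server:
    server.login(USER, PASSWORD)
    server.send_message(msg)

print(f"Message accepted by smtp.{DOMAIN}")
```

If the message arrives in the mailbox (and scores well on the mail-tester check recommended above), the send side of the deployment is working.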
datalens-tech/datalens;DataLens DataLens is a modern business intelligence and data visualization system. It was developed and extensively used as a primary BI tool in Yandex and is also available as a part of Yandex Cloud platform. See also our roadmap and community in telegram . Getting started Installing Docker DataLens requires Docker to be installed. Follow these instructions depending on the platform you use: macOS Linux Windows Running containers Use the following command to start DataLens containers: ```bash
git clone https://github.com/datalens-tech/datalens && cd datalens

HC=1 docker compose up

# or with an external metadata database
METADATA_POSTGRES_DSN_LIST="postgres://{user}:{password}@{host}:{port}/{database}" HC=1 docker compose up
``` This command will launch all containers required to run DataLens, and the UI will be available on http://localhost:8080 If you want to use a different port (e.g. 8081 ), you can set it using the UI_PORT env variable: bash
UI_PORT=8081 docker compose up Notice on Highcharts usage Highcharts is a proprietary commercial product. If you enable Highcharts in your DataLens instance (with the `HC=1` variable), you should comply with the Highcharts license (https://github.com/highcharts/highcharts/blob/master/license.txt).
When Highcharts is disabled in DataLens, we use D3.js instead. However, currently only a few visualization types are compatible with D3.js. We are actively working on adding D3 support to additional visualizations and are going to completely replace Highcharts with D3 in DataLens. How to update Just pull the new docker-compose.yml and restart. bash
docker compose down
git pull
docker compose up All your user settings will be stored in the metadata folder. Parts of the project DataLens consists of three main parts: UI is a SPA application with a corresponding Node.js part. It provides the user interface, proxies requests from users to backend services and also applies some light data postprocessing for charts. Backend is a set of Python applications and libraries. It is responsible for connecting to data sources, generating queries for them and post-processing the data (including formula calculations). The result of this work is an abstract dataset that can be used in the UI for chart data requests. UnitedStorage (US) is a Node.js service that uses PostgreSQL to store metadata and configuration of all DataLens objects. What's already available We are releasing DataLens with a first minimal set of available connectors (clickhouse, clickhouse over ytsaurus and postgresql) as well as other core functionality such as the data processing engine and user interface. However, to kick off this project in a reasonable timeframe we have chosen to drop some of the features out of the first release: this version does not contain middleware and components for user sessions, object ACLs and multitenancy (although the code contains entry points for such extensions). We are planning to add missing features based on our understanding of community priorities and your feedback. Cloud Providers Below is a list of cloud providers offering DataLens as a service:
1. Yandex Cloud platform
2. DoubleCloud platform FAQ Where does DataLens store its metadata? We use the metadata folder to store PostgreSQL data. If you want to start over, you can delete this folder: it will be recreated with demo objects on the next start of the datalens-us container. I use the METADATA_POSTGRES_DSN_LIST param for an external metadata database and the app doesn't start. What could be the reason? We use some PostgreSQL extensions for the metadata database; the application checks them at startup and tries to install them if they haven't already been installed. Check your database user's rights for installing extensions by trying to install them manually: sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS btree_gin;
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; If this attempt is unsuccessful, try to install dependencies by database admin and add param METADATA_SKIP_INSTALL_DB_EXTENSIONS=1 on startup, this parameter allows the app to skip installing extensions. If you're using managed database, it's also possible that extensions for your database cluster are controlled by external system and could be changed only using it's UI or API. In such case, consult with documentation for managed database service which you're using. Don't forget to add METADATA_SKIP_INSTALL_DB_EXTENSIONS=1 after installing extensions this way. My PostgresSQL cluster has multiple hosts, how can I specify them in METADATA_POSTGRES_DSN_LIST param? You can write all cluster hosts separated by commas: METADATA_POSTGRES_DSN_LIST="postgres://{user}:{password}@{host_1}:{port}/{database},postgres://{user}:{password}@{host_2}:{port}/{database},postgres://{user}:{password}@{host_3}:{port}/{database}" ... How can I specify custom certificate for connecting to metadata database? You can add additional certificates to the database in ./certs/root.crt , they will be used to connect to the database from the datalens-us container. If datalens-us container does not start even though you provided correct certificates, try to change METADATA_POSTGRES_DSN_LIST like this: METADATA_POSTGRES_DSN_LIST="postgres://{user}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert=/certs/root.crt" Why do i see two compose files: docker-compose.yml & docker-compose-dev.yml? docker-compose-dev.yml is a special compose file that is needed only for development purposes. When you run DataLens in production mode, you always need to use docker-compose.yml . The docker-compose up command uses it by default. What are the minimum system requirements? datalens-ui - 512 MB RAM datalens-data-api - 1 GB RAM datalens-control-api - 512 MB RAM datalens-us - 512 MB RAM datalens-pg-compeng - 1 GB RAM datalens-pg-us - 512 MB RAM Summary: RAM - 4 GB CPU - 2 CORES This is minimal basic system requirements for OpenSource DataLens installation. Аctual consumption of VM resources depends on the complexity of requests to connections, connections types, the number of users and processing speed at the source level;A modern, scalable analytics system;analytics,bi,business-intelligence,dashboards,reporting,sql,visualization | datalens-tech/datalens |
BiliRoamingX/BiliRoamingX;# 哔哩漫游X
[![CI](https://github.com/BiliRoamingX/BiliRoamingX/workflows/CI/badge.svg)](https://github.com/BiliRoamingX/BiliRoamingX/actions)
[![Channel](https://img.shields.io/badge/Follow-Telegram-blue?logo=telegram)](https://t.me/bb_show)
[![Download](https://img.shields.io/github/downloads/BiliRoamingX/BiliRoamingX/total?color=critical&label=Download&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAYAAAByDd+UAAAF2UlEQVRIiaVWS2xTRxQ9d2bs5y+O4zgEYkgoShcESIG2EQJRREFAKmABRFCpbOiioumqrNiQCmjFiiB1Q6kqwR6k0NIKUBdFLBAt31BBoUqC8xEhgXwcO7bfezPVTJyQkBA+vdKT5fdm7rn3zL3nDilMtlt1dRiOx+G3bSjO4TIGSLkOrrtJMfYBgEoA0cKmAVKqHUR/EXBBSnmJ53JQHg9UIIDA06dY3NwMmoAgMI2NLZDAXuW6XwGogQaeahHFWIUCPiKlvgZjLVKI7wn4gdSLqYzaFC96oSJ612HsiqvUjwZsJlMKE5wvkV7vCVeIq4poEU0I/jlgKATzhMOAEADRZunx3FVEq15c/DpmwIlq80LcsYGthhnLArxe85DasMFEqT/0BAIb7oVCFy3GQFK+Bdxzk4xB2jbmSVkXFOI3WWBBdEmpKYRDNK8rGr3Iddr5vHk3TjPnsAcH4aTTsEpKwDwenQVkLodcXx9EOAzPrFlQrju+h7suyONBq8/366yBgYWW67YaSnuKi/EkGkVnWdkvOifvRDAiEGPIJJPwRqMoWbUKJISJXIMxvx+l69bBE4kg/egRSO8r7NU+NEteXbVCnBfDw+CpFPiemhpIzj8lxvZ5HGdyZoxhuK0NsdpaLG5sxNy6OqQePMBASwucTAbFK1Zg0YEDiK9ejZGuLgzcuQNvUdEkarlScBgryVhW+0godJvpKIjoWzZanZNo1FHHVq5EzdGjhkpzBsGgoU4pNUotYL4tPXIEpWvXIqMz5XzcjyoUEvd4vrOIwPyMrVZEFeqFvrGHhoyjJY2Nk4vBtk3mmr6JZ6Zt8cGD8CcSyPf3T3pPpnvUHJVOf8wcxrabs5qQmTsygv6bN1G+dSu43z9ps/D7IR3HPMLnm+yYc1Ts2oX8s2fTFS6Uz7dDuMCH42BCINvdDR4KoaqhAXO3bDHvc6kUnnZ0AJyjv70dVjhsMhzo6EDX/fsg10VxeTl8RUWILl9uisgUle6/Md9SwhVihQBRhVELzjHS3Y1AeTmqDx5EsKJifPFQMokLu3fDF4thTiyGcDxuziadTOJKQwNSnZ3YfOoUymtr4S0uNi2SevgQwfnzIXS7OM5o9SpVzj9fuvQb3Q0ymzXOlx8/bkAnWjAeR0Sf69WrCCUScHW0uuQtCyKZRM2ePajcscM41YWkqzdYWYnBlha46bQpNJOULvwxucv29qJs40b4Zs+eSj4R3tm3DyXr1yPV2mrYYEIg1daGotpaVO3fj4nirsHm19djyeHDUDq4QjIoiPegOVDbRmjBgmkPe8x0FfrmzEH28WOjMN5IBEsOHXrp+kh1tendbE/P2KsUg5SPUFAIO5OZEZAHAqbfck+eIN3aasD0mc1k4YULTTIY7fMuRkL8qXvQikTQcfnyjJu1hauqsOzYMSxrakJRzcyTS1umr8/QrRjT+nqdsWz2jEa3YjEM3LiB66dPv9JJfM0alOkp8wpLp9N42NyMoFYpzWI2e4Ypy7pMQnS4SiGeSCB58iT+aGpCX0cHpp/ZrzatP49u3cLvDQ3g/f3gWl+l7FFCXKKr9fX6z2fSsk5zIUC2ja72duRLShBMJEw1vskg1kE62SwybW3Q6htNJJB1XXhcdy9X6ie6tnOn4dj2+/9WjrNIEMHDGHLpNNLDw6as3xSQcY5wURG4ZSHrOGC53L/efL5K0yr8paWGX18+/8mAZbXpsaOVgfl8iLygo28CqgPNOQ7cYBBWMFjH9KDXzT9SWWkW6QnwJB6va6uuPq/n4v+9YuhRZ+dyqLSsbdFY7NzYZJlyL729bduWodLSZjEyQm9ziRrL0A4EEO7t3b7s3LmzBR0136ZcE0U2+zPL55cCuIa3gCxcLW5wovc452ehM9PirX9ddyqg1NNayrtcqVqu1BcA7r0OSCG4f8hxvmSOs4KA29O11bQ377HMSKkTRHRCEW0iKTcpovcBzNMyWVipdbiTue51kvICgPPm5F/GDID/AISQbRffDZUGAAAAAElFTkSuQmCC)](https://github.com/BiliRoamingX/BiliRoamingX/releases/latest)
[![Star](https://img.shields.io/github/stars/BiliRoamingX/BiliRoamingX?label=Star&color=important&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABwAAAAdCAYAAAC5UQwxAAADKUlEQVRIib2WX0iTURTAz737tpWEissM6c9cOUUkCqmHpYEtfAiFyl4yCoQefOmhHozAECGweskMw+hRitRQsD8ULSl0H1mUIDRr5myVpgvJQvP7f+P7Nsfc9s3dMTpw4ePee87vnHvvOeeD/y2IEEKNFOYHMom8lGHedGKWVjcloDJu7QLxRz7exTtpdRlqmurlot+KAHAqutRKsu/YeaQABgIge/e30upTR2hY6K8FEzhADfE3q9DqU0Uo+uoaQFCpQU01UmXS2UJjg+7RjCI3EHBoQFUIABFhGO0lFcmaSDpC6cuZ01p0kZcQilL21TQmayfpCMkoGkIA5TEuKlqkLL/dVWG2ONe80xggH7iXj4XPdiz5rUicKgDBZ8OC36Y+EsDggGj/1HlZ+2KJectXhSnwEaN1Ckw2n8zs8JrzTn1ftZ2bbjeb5i42gwHKkLy0QVNWwBE2hiNGIlEixopTGFjtvg0Zf4kEb+W8C1e1CCVP2XXm1/t9kAGO1NI5gajwJWBJVqEXlXrrNfNMybtzYu6RXuCBTTMOgAOW5FYOqjCIfKVGe3+baDnaC8tphC4Dq+Q4Xcg+eGllatUBGgv72kRLbXdaoBrskAvbXc2R0zE3Zix80C5Zjgeh9I0kAlb1DNufN0cv6eahOFnXYFzoPgmMUk4FE9Gwkl39EO8cuBZvOWHiK2NZj7H053C4lK0lMgDBxpdot1CptzNhEmCymKnlYrKiWiNiwg6kC+R/9uWAqGCqvEQASAIszHYWUwOx4CkNVxwaIeBAwoSdGogEb6wSClUOtWvwoe/oI1cbszBeqmdX97yR4C2KcYcL1kcpt/4O4PUcE7h1VqudplBJDDmAhU9F9EDxY3EYKGiFmZWzK11SXlOLOftgsA1t67gvT9Q0GhYeaUcJ5tDfgOS36tkFNS3iDWUUhsgbIOQ1uGXPnhtcoGej3l5u/sk6yeNoJSPgJiNAyDtwc/MvcLy98Q3MdJSQIXArY9YubqbTrgeKHnzgbr78oeQ2eQVu8VtTVbw9cRNfnL58APFzmxnbzR7do0kg4lRjNWGwZNp65Wkq+ukTAPgHIIGzcZjmG+EAAAAASUVORK5CYII=)](https://github.com/BiliRoamingX/BiliRoamingX) 基于 ReVanced 实现的B站 Android 客户端增强模块。模块设置完美融入 APP 设置,功能丰富,自定义程度高。
Thanks to the way it is implemented, it has almost no impact on app performance: smooth, fast, and quick to start. Supports the standard (pink), Play, and HD versions of the app. 📖 Main Features Remove bangumi (anime) region restrictions Freely remove page components Customize default live-stream and video quality Customize playback speed Adjust subtitle style; translate, save and import subtitles Adjust the app display size Freely copy comments and video info Pinch to zoom the video to fill the screen Call an external downloader to save videos Bring back the channel feature Automatically claim B-coin coupons Clean shared links Filter the recommendation, popular and dynamic feeds Splash screen background color follows dark mode See the image below for more features 📱 Feature Screenshots 💻 Building from Source shell
git clone https://github.com/BiliRoamingX/BiliRoamingX.git
cd BiliRoamingX
./gradlew dist - On Windows, use the gradlew.bat command instead of ./gradlew - Build artifacts are placed in the build directory ⬇️ Download & Usage Go to the BiliRoamingX-PreBuilds Release page to download Package it following the revanced-cli documentation Download the customized revanced-cli.jar Download integrations.apk and patches.jar from the releases Run the terminal command java -jar revanced-cli.jar patch --merge integrations.apk --patch-bundle patches.jar --signing-levels 1,2,3 bilibili.apk ⭐ Star History 📃 Licence 👆 Back to top;BiliRoamingX integrations and patches powered by ReVanced.;bilibili,revanced,android | BiliRoamingX/BiliRoamingX |
aleksilassila/reiverr;Reiverr Reiverr is a project that aims to create a single UI for interacting with TMDB, Jellyfin, Radarr and Sonarr, as well as
be an alternative to Overseerr. This project is still in early stages, and many features are still missing / being tested and changed.
Contributions are welcome! See contributing for more information. Reiverr 1.0 This is the page for Reiverr 2.0, which is a rewrite of the project with TVs in mind.
It still lacks many features of the previous version. If you want a more stable experience,
or only care about the web app, you might want to check out the 1.0 branch for now. List of major features TMDB Discovery: Discover trending movies and TV shows Get personalized recommendations based on your ratings ~~Browse movies and TV shows by genre or network~~ View details about movies and TV shows, such as cast, crew, ratings & a trailer. Movie & TV show search Local Library & Playback Stream Movies & TV shows (from Jellyfin library) Create requests for movies & TV shows in Radarr & Sonarr Manage local library files ~~View Radarr & Sonarr stats (disk space, items, etc.)~~ For a list of planned features & known bugs, see Reiverr Taskboard . Installation The easiest and the recommended way to install Reiverr is via Docker. Make sure to update the api keys and base URLs to match your setup. Docker CLI sh
docker run -it --init \
--name reiverr \
--restart unless-stopped \
-p 9494:9494 \
-v /path/to/appdata/config:/config \
ghcr.io/aleksilassila/reiverr:latest Docker compose ```yaml
version: '3.8'

name: reiverr

services:
reiverr:
image: ghcr.io/aleksilassila/reiverr:latest
container_name: reiverr
ports:
- 9494:9494
environment:
- SECRET=your_secret_here # optional, used to sign JWT tokens for authentication. If not set, sessions will not persist between server restarts. Use a random string.
- ADMIN_USERNAME=admin # optional
- ADMIN_PASSWORD=admin # optional
volumes:
- /path/to/appdata/config:/config
restart: unless-stopped
``` Building from Source Requirements(ish): Node v18.14.0 or higher NPM v9.3.1 or higher Clone from master or download the latest source Build the app:\ npm install \ npm install --prefix backend \ npm run build Start the app:\ node backend/dist/src/main Reiverr will be accessible via port 9494 by default. Tizen / Samsung Smart TVs To be able to use Reiverr on TVs, you'll still need to host the backend server on a separate device.
See the above methods for instructions on how to set up the backend / web app. There are plans to attempt getting the app to the official store. In the meantime, you have to manually build and install
the app using Tizen Studio or the CLI, following roughly these steps: Follow the manual installation steps above to install the dependencies (npm install) Download either Tizen Studio or the CLI tools from the official website Connect Tizen Studio to your TV Use the following command to build and install the app on your TV:\
\ npm run build:tizen;C:\tizen-studio\tools\ide\bin\tizen.bat build-web -- tizen;C:\tizen-studio\tools\ide\bin\tizen.bat package -t wgt -o .\tizen -- .\tizen\.buildResult\;C:\tizen-studio\tools\ide\bin\tizen.bat install -n .\tizen\Reiverr.wgt -t QE55Q64TAUXXC .\
\
You may need to replace the paths for the Tizen Studio tools according to your installation location, as well as the device identifier, which in my case was the TV model number.\
\
Alternatively, you can open the project in Tizen Studio and install the project on a device from there. For more instructions on running a project on a device, see here . If you have any questions or run into issues or bugs, you can start a discussion ,
open an issue or check out the Discord channel . If you find a feature request that you'd like to see implemented,
you can react to it with a thumbs up. Other Platforms The roadmap includes plans to support the following platforms in the future: Windows Desktop App MacOS Desktop App Android TV / WebOS Post Installation To create the first user account, you can log in with any credentials and an admin account will be created.
Alternatively, you can define the admin username and password using environment variables,
as seen in the Docker Compose example. A new admin account is only created if there are no previous accounts with the same name.
To get the most out of Reiverr, it is recommended to connect to TMDB, Jellyfin, Radarr and Sonarr. Hint: Radarr & Sonarr API keys can be found under Settings > General in their respective web UIs.
Jellyfin API key is located under Administration > Dashboard > Advanced > API Keys in the Jellyfin Web UI. Contributing Unlike most Servarr projects, this one is built with Svelte and NestJS. If you haven't used Svelte before,
don't worry, this was my first Svelte project too. I'd recommend reading the official Svelte tutorial to
get started. To see a list of missing features & known bugs that you can help with,
see Reiverr Taskboard . Feel free to also create your own
issues for bug reports or feature requests, as well as discussions for general questions.
Issues with the community label are issues that I can't or won't work on myself, and are
left for the community to pick up. Feel free to work on any issues though, even without the label. Before you contribute: If you are taking on an existing bug or feature ticket, it is encouraged to comment on the issue or mark yourself
as an assignee at some point to avoid multiple people working on the same thing. If the ticket is vague or missing information, please ask for clarification in the comments. UI style should match the rest of the project, and it is a good idea to discuss the design beforehand,
especially for larger design choices (issues labelled with design ). Conventional commits are encouraged. When creating a pull request, please make sure to target the dev branch and mark the PR as a draft if it is
a work in progress. I'm not a designer, so if you have any ideas for improving the UI, I'd love to learn about them.
If you are a designer and would like to help, contributions are much appreciated! Also the project
is still missing a logo :) Development To get started with development: Clone the repository Check out the dev branch Install dependencies\ npm install \ npm install --prefix backend To start the frontend: npm run dev or npx vite --host if you want to expose the server To start the backend: npm run --prefix backend start:dev Notes 2.0 will primarily target TVs, so the UI must be optimized with TVs in mind. This means larger text, buttons, etc. Design Guide for Android TV is a good resource
for how to design for TVs. The app should support old browsers all the way back to Chromium 69, as that's what's used in Tizen 5.5. This means that
you might not be able to use the latest browser features, or that you'll have to use a polyfill. caniuse.com is a great resource if you need to check compatibility. Most of the time you don't need to worry about this, but one big feature that's not available in older browsers is
the CSS gap property. You can use the space-x and space-y classes from Tailwind CSS to achieve the same effect. Useful resources https://developer.themoviedb.org/reference https://api.jellyfin.org/ https://sonarr.tv/docs/api/ https://radarr.video/docs/api/ https://github.com/jellyfin/jellyfin-web Network tab in the browser in Jellyfin, Radarr & Sonarr web UIs Additional Screenshots;Reiverr is a clean combined interface for Jellyfin, TMDB, Radarr and Sonarr, as well as a replacement to Overseerr;jellyfin,movies,radarr,sonarr,tmdb,tv | aleksilassila/reiverr |
jquesnelle/yarn;YaRN This repo contains the code and data for the YaRN context window extension method. Paper Paper (ICLR 2024): YaRN: Efficient Context Window Extension of Large Language Models Old Preprint (arXiv) Models LLaMA We publish variants of Llama 2 fine-tuned with YaRN at 32K, 64K and 128K context window length.
They are available under the Llama 2 license on 🤗 Hugging Face. | Size | Context | Link |
| ---: | ------: | :----- |
| 7B | 64K | NousResearch/Yarn-Llama-2-7b-64k |
| 7B | 128K | NousResearch/Yarn-Llama-2-7b-128k |
| 13B | 64K | NousResearch/Yarn-Llama-2-13b-64k |
| 13B | 128K | NousResearch/Yarn-Llama-2-13b-128k |
| 70B | 32K | NousResearch/Yarn-Llama-2-70b-32k | In addition, we also publish 8K context window versions of Llama 2 7B fine-tuned with NTK-aware and YaRN (Table 1 in the conference paper). Mistral With the release of v2 of our paper we are also publishing 64K and 128K variants of Mistral 7B v0.1 . | Size | Context | Link |
| ---: | ------: | :----- |
| 7B | 64K | NousResearch/Yarn-Mistral-7b-64k |
| 7B | 128K | NousResearch/Yarn-Mistral-7b-128k | SOLAR The SOLAR 10.7B v1.0 model utilizes depth-up scaling to add layers to Mistral 7B v0.1 , which may potentially improve long context performance on a per-parameter basis.
We publish 32K and 64K variants. | Size | Context | Link |
| ------: | ------: | :----- |
| 10.7B | 32K | NousResearch/Yarn-Solar-10b-32k |
| 10.7B | 64K | NousResearch/Yarn-Solar-10b-64k | Reproduction We strongly believe in open science, and thus publish all code and data to reproduce the results in our paper.
To reproduce, clone the repository and perform a local installation. sh
git clone https://github.com/jquesnelle/yarn
cd yarn
pip install -e . Training To train the models, run accelerate config and enable DeepSpeed acceleration. deepspeed/zero3.json was the configuration file used for training. ```sh ./train.sh ``` The tokenized training data is available on 🤗Hugging Face and was derived from the pg19 dataset.
For the Mistral models, a mix of the pretrain and fine-tune splits of Long-Data-Collections was used and the tokenized dataset is also available on 🤗Hugging Face . Evaluation To reproduce the evaluations, install lm-evaluation-harness with pip install git+https://github.com/EleutherAI/lm-evaluation-harness and then run the two provided scripts. ```sh ./eval.sh ./eval-harness.sh ``` Citation @inproceedings{
peng2024yarn,
title={Ya{RN}: Efficient Context Window Extension of Large Language Models},
author={Bowen Peng and Jeffrey Quesnelle and Honglu Fan and Enrico Shippole},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=wHBfxhZu1u}
};YaRN: Efficient Context Window Extension of Large Language Models;[] | jquesnelle/yarn |
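The checkpoints listed above are regular Hugging Face repositories, so they can be loaded for inference with the transformers library. The snippet below is a minimal sketch of that standard loading flow; the trust_remote_code flag and dtype follow the usual pattern from the model cards, but double-check the card of the specific checkpoint you pick.

```python
# Minimal inference sketch for a published YaRN checkpoint (assumes `torch`,
# `transformers` and `accelerate` are installed; flags follow the model cards).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Mistral-7b-128k"  # any checkpoint from the tables above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # keeps the 7B model within a single large GPU
    device_map="auto",
    trust_remote_code=True,      # the model cards ask for this to load the YaRN RoPE code
)

prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```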
zoonk/zoonk;WARNING: This software is still in development and not ready for production. I'm making several changes to it. DO NOT USE IT IN PRODUCTION YET. The current version will break when v1.0 is released. I'll update this README when it's ready for production. Open-source alternative to create interactive courses like Duolingo. Learn more » Roadmap . Community About this project Interactive learning is more effective than traditional methods. Learners remember 10% of what they hear, 20% of what they read but 80% of what they see and do. That's why 34 hours of Duolingo are equivalent to a full university semester of language education. We love Duolingo. We think those kind of interactive experiences should be used in more fields. That's why we're building Zoonk, an open-source platform to create interactive courses like Duolingo. Tech stack Phoenix Phoenix LiveView Postgres Tailwind CSS Resend Cloudflare Images We're deploying our cloud products to Fly and Neon . Getting started Follow the instructions below to get Zoonk up and running on your local machine. We have a Dockerfile but that's used for deploying our demo app to Fly . We don't have a Docker setup for local development yet. PRs are welcome! Requirements You need Elixir 1.15 or later and Erlang 26 or later. Run elixir -v to find your current version for Elixir and Erlang . Install Hex : mix local.hex . Install Phoenix : mix archive.install hex phx_new . PostgreSQL 15+ . (Linux users only): inotify-tools . Local development Run mix setup to install both dependencies and set up both the database and assets. Run mix seed to fetch some initial data to the database ( See options ). Run mix phx.server to start a development server. Run mix test to run tests. Run mix ci to run our code quality checks. Run mix locale to update translation files. SSL on localhost Prefer to do local development using SSL to resemble production as much as possible. You can use mkcert to generate a certificate. After you install mkcert , follow the steps below: Create a cert directory under priv : mkdir priv/cert . Generate a new certificate: mkcert -key-file priv/cert/selfsigned_key.pem -cert-file priv/cert/selfsigned.pem localhost zoonk.test "*.zoonk.test" apple.test . Run mkcert -install to install the certificate in the system trust store. You may also need to enable Allow invalid certificates for resources loaded from localhost on Google Chrome flags . Restart your local server: mix phx.server . You may also need to restart your browser. You also need to make sure your machine maps localhost to a test domain (we're using zoonk.test for this guide). dnsmasq allows you to resolve domains to your local machine without having to change your /etc/hosts file. To install dnsmasq : ```sh
brew install dnsmasq

# Create a configuration directory
mkdir -pv $(brew --prefix)/etc/

# Set up your domains
echo 'address=/zoonk.test/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
echo 'address=/.zoonk.test/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
echo 'address=/apple.test/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf

# Add dnsmasq to your resolver
sudo mkdir -v /etc/resolver
sudo bash -c 'echo "nameserver 127.0.0.1" > /etc/resolver/zoonk.test'
sudo bash -c 'echo "nameserver 127.0.0.1" > /etc/resolver/apple.test'

# Start dnsmasq
sudo brew services start dnsmasq
``` That's it! You can now start your local server ( mix phx.server ) and test your domains using: https://zoonk.test:4001 https://rug.zoonk.test:4001 (each school slug can be used as a subdomain of zoonk.test ). Or any other domain you added before. Mailer We're using Resend to send emails. To make it work in production, you need to set the following environment variables on your server: RESEND_API_KEY : Your Resend API key. Storage By default, we upload files to your local server and store them in the priv/static/uploads directory. However, we also support uploading files to Cloudflare Images . To use Cloudflare Images, you'll need to set the following environment variables on your server: CLOUDFLARE_ACCOUNT_ID : Your Cloudflare account ID. You can find it on Cloudflare Dashboard > Images > Overview . CLOUDFLARE_ACCOUNT_HASH : Your Cloudflare account hash. You can find it on Cloudflare Dashboard > Images > Overview . CLOUDFLARE_API_TOKEN : Your Cloudflare API token. You can create a token on Cloudflare Dashboard > My Profile > API Tokens . Stripe We use Stripe for processing payments. If you want to enable subscriptions, you need to set the following environment variables on your server: STRIPE_API_KEY : Your Stripe API key. STRIPE_WEBHOOK_SECRET : Your Stripe webhook secret. Plus, you need to create a product for your subscription. We call this plan flexible and you can't customize plans at the moment. We fetch the price from the Stripe API, so make sure you add the zoonk_flexible lookup key to your price. Stripe can only be enabled for saas and marketplace apps. Make sure to choose one of those options when you first run this app. Sponsors @adriy-be Add your name or brand here by sponsoring our project .;Platform for creating interactive courses.;education,elixir,phoenix,phoenix-liveview,cloudflare-images,flyio,postgres,resend-email,tailwindcss | zoonk/zoonk |
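Since the subscription price must carry the zoonk_flexible lookup key, it can also be created programmatically instead of through the Stripe dashboard. The following is a hypothetical sketch using the official stripe Python library; the product name, amount and billing interval are placeholders, and only the lookup key itself is mandated by the setup described above.

```python
# Create the "flexible" plan price with the `zoonk_flexible` lookup key.
# Product name, amount and interval are placeholders; adjust to your plan.
import stripe

stripe.api_key = "sk_test_..."  # your STRIPE_API_KEY

product = stripe.Product.create(name="Zoonk Flexible")

price = stripe.Price.create(
    product=product.id,
    unit_amount=1000,               # e.g. $10.00, expressed in cents
    currency="usd",
    recurring={"interval": "month"},
    lookup_key="zoonk_flexible",    # Zoonk fetches the price by this lookup key
)

print("Created price", price.id)
```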
standard-webhooks/standard-webhooks;Open source tools and guidelines for sending webhooks easily, securely, and reliably Introduction Webhooks are becoming increasingly popular and are used by many of the world's top companies for sending events to users of their APIs. However, the ecosystem is fragmented, with each webhook provider using different implementations and varying quality. Even high quality implementations vary, making them inherently incompatible. This fragmentation is a pain for the providers and consumers, stifling innovation. For consumers, this means handling webhooks differently for every provider, relearning how to verify webhooks, and encountering gotchas with bespoke implementations. For providers, this means reinventing the wheel, redesigning for issues that have already been solved (security, forward compatibility, etc.). We propose a simple solution: standardize webhooks across the industry. This design document outlines our proposal, a set of strict webhook guidelines based on the existing industry best practices. We call it "Standard Webhooks". We believe "Standard Webhooks" can do for webhooks what JWT did for API authentication. Adopting a common protocol that is consistent and supported by different implementations will solve the above issues, and will enable new tools and innovations in webhook ecosystem. To achieve this, we have created an open source and community-driven set of tools and guidelines for sending webhooks. What are Webhooks? Webhooks are a common name for HTTP callbacks, and are a way for services to notify each other of events. Webhooks are part of a service's API, though you can think of them as a sort of a "reverse API". When a client wants to make a request to a service they make an API call, and when the service wants to notify the client of an event the service triggers a webhook ("a user has paid", "task has finished", etc.). Webhooks are server-to-server, in the sense that both the customer and the service in the above description, should be operating HTTP servers, one to receive the API calls and one to receive the webhooks. It's important to note that while webhooks usually co-exist with a traditional API, this is not a requirement, and some services send webhooks without offering a traditional API. Read the specification The latest draft specification can be found at spec/standard-webhooks.md which tracks the latest commit to the main branch in this repository.
The human-readable markdown file is the source of truth for the specification. Reference implementations There are reference implementations for the signature verification scheme for a variety of languages, including: Python JavaScript/TypeScript Java/Kotlin Rust Go Ruby PHP C# Elixir Technical steering committee The Standard Webhooks initiative, the specification, and development of tooling are driven by the community and guided by the technical steering committee. Members (in alphabetical order): Brian Cooksey ( Zapier ) Ivan Gracia ( Twilio ) Jorge Vivas ( Lob ) Matthew McClure ( Mux ) Nijiko Yonskai ( ngrok ) Stojan Dimitrovski ( Supabase ) Tom Hacohen ( Svix ) Vincent Le Goff ( Kong ) Example ecosystem benefits of Standard Webhooks We believe "Standard Webhooks" can do for webhooks what JWT did for API authentication. Having a common protocol that is consistent will enable a variety of implementations to interoperate, reducing the development burden on webhook consumers and enabling new uses. Some of these benefits include: API Gateway signature verification: signature verification is a common challenge for webhook consumers. Standard Webhooks makes it possible for verification to be implemented directly in the API gateway, easily solving verification for consumers. Having a set of libraries for signing and verification makes webhook verification easier for scenarios where API gateways can't be used. Workflow automation tools (such as Zapier, Make, Workato, and tray.io) can implement the signature verification themselves to ensure a secure integration and save the need for integration builders to reinvent the wheel every time. Standard Webhooks will enable building tools to automatically generate SDKs for webhook consumers that, in addition to verifying the signature, can also validate the schemas (using JSON Schema, OpenAPI or AsyncAPI definitions). Many more... Related efforts There are a few complementary or partially overlapping efforts to standardize asynchronous event communication. This specification is compatible with the rest of them, and can either reuse existing efforts or benefit further from collaboration with them. The most notable of such efforts are: OpenAPI AsyncAPI CloudEvents IETF HTTP Message Signatures REST Hooks Webhooks.fyi - a collection of useful webhooks resources (not a standardization effort).;The Standard Webhooks specification;api,asyncapi,callbacks,http,json,openapi,specification,standard,webhook,webhooks | standard-webhooks/standard-webhooks |
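To make the signature-verification work mentioned above concrete: the general shape of such schemes is an HMAC computed over the message id, timestamp and raw payload, compared in constant time against a versioned value from a signature header. The sketch below illustrates that idea in Python; the whsec_ and v1 prefixes, header layout and signed-content format are assumptions for illustration only, and the normative details live in spec/standard-webhooks.md and the reference libraries listed above.

```python
# Illustrative HMAC-based webhook signature verification (not the normative spec).
import base64
import hashlib
import hmac

def verify(secret: str, msg_id: str, timestamp: str, payload: bytes, signature_header: str) -> bool:
    # Secrets are assumed to be base64-encoded, optionally prefixed with "whsec_".
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{msg_id}.{timestamp}.".encode() + payload
    expected = base64.b64encode(hmac.new(key, signed_content, hashlib.sha256).digest()).decode()
    # A header may carry several space-separated, version-prefixed signatures.
    for candidate in signature_header.split():
        version, _, sig = candidate.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

A consumer would call verify() with the raw request body and the id/timestamp/signature headers before trusting an event, and additionally reject messages whose timestamp is too old to limit replay.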
trypromptly/LLMStack;LLMStack is a no-code platform for building generative AI agents, workflows and chatbots, connecting them to your data and business processes. Quickstart | Documentation | Promptly Overview Build tailor-made generative AI agents, applications and chatbots that cater to your unique needs by chaining multiple LLMs. Seamlessly integrate your own data, internal tools and GPT-powered models without any coding experience using LLMStack's no-code builder. Trigger your AI chains from Slack or Discord. Deploy to the cloud or on-premise. See full demo video here Getting Started Check out our Cloud offering at Promptly or follow the instructions below to deploy LLMStack on your own infrastructure. LLMStack deployment comes with a default admin account whose credentials are admin and promptly . Be sure to change the password from admin panel after logging in . Installation Prerequisites LLMStack depends on a background docker container to run jobs. Make sure you have Docker installed on your machine if want to use jobs. You can follow the instructions here to install Docker. Install LLMStack using pip sh
pip install llmstack If you are on Windows, please use WSL2 (Windows Subsystem for Linux) to install LLMStack. You can follow the instructions here to install WSL2. Once you are in a WSL2 terminal, you can install LLMStack using the above command. Start LLMStack using the following command: sh
llmstack The above commands will install and start LLMStack. It will create .llmstack in your home directory and place the database and config files in it when run for the first time. Once LLMStack is up and running, it should automatically open your browser and point it to localhost:3000 . You can add your own keys to providers like OpenAI, Cohere, Stability etc., from the Settings page. If you want to provide default keys for all the users of your LLMStack instance, you can add them to the ~/.llmstack/config file.
Sharebird uses LLMStack to augment search results with AI generated answer from their content similar to Bing's chatbot 💬 Discord and Slack bots : Apps built on LLMStack can be triggered from Slack or Discord. You can easily connect your AI chains to Slack or Discord from LLMStack's no-code app editor. Check out our Discord server to interact with one such bot. Administration Login to http://localhost:3000/admin using the admin account. You can add users and assign them to organizations in the admin panel. Cloud Offering Check out our cloud offering at Promptly . You can sign up for a free account and start building your own generative AI applications. Documentation Check out our documentation at docs.trypromptly.com/llmstack to learn more about LLMStack. Development bash
cd client
npm install
npm run build
cd ..
pip install poetry
poetry install
poetry shell
llmstack You can skip running npm install and npm run build if you have already built the client before For frontend development, you can use REACT_APP_API_SERVER=localhost:3000 npm start to start the development server in client directory. You can also use npm run build to build the frontend and serve it from the backend server. To update documentation, make changes to web/docs directory and run npm run build in web directory to build the documentation. You can use npm start in web directory to serve the documentation locally. Contributing We welcome contributions to LLMStack. Please check out our contributing guide to learn more about how you can contribute to LLMStack.;No-code multi-agent framework to build LLM Agents, workflows and applications with your data;ai,generative-ai,llm-chain,llm-framework,llmops,llms,no-code-ai,platform,agents,ai-agents-framework | trypromptly/LLMStack |
workout-lol/workout-lol;Workout.lol The easiest way to create a workout routine About A small web application to create workouts based on your available equipment and the muscles you want to train. Link You can self-host the project or use the web app on workout.lol . Steps to run it locally Clone the repository to your local machine git clone https://github.com/Vincenius/workout-lol.git Navigate to the app directory cd workout-lol Install the necessary dependencies yarn Initialize the Mongo DB by importing the dump files from lib/dump/prod : 4.1 For the .metadata.json, you'll have to do this : mongoimport --uri mongodb+srv://<USERNAME>:<PASSWORD>@<CLUSTER_NUMBER>.<URI>.mongodb.net/<DATABASE> --collection <COLLECTION> --type json --file <FILEPATH> 4.2 For the .bson, you'll have to do this : mongorestore --uri mongodb+srv://<USERNAME>:<PASSWORD>@<CLUSTER_NUMBER>.<URI>.mongodb.net/<DATABASE> --collection <COLLECTION> <FILEPATH> copy the .env.dist file to .env and set environment variables as described in the file Start the local development server npm run dev Open your browser to http://localhost:3000 Steps to run it with docker Clone the repository to your local machine git clone https://github.com/Vincenius/workout-lol.git Copy the .env.docker file to .env and set environment variables as described in the file (do not modify the MONGODB_URI if you wish to use the mongodb container) Run the docker compose file at the root of the project docker compose -f docker/docker-compose.yml up -d --build Wait for the applications to be up ( docker ps to get the status) Open your browser to http://localhost:3000 Contributors Supporters | | | | |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| medecau | EL | alvaro | devjev | Become a supporter by donating on Ko-Fi: https://ko-fi.com/workout_lol Public Metrics 💸 Cost Breakdown 📈 Analytics License;A simple way to create a workout plan;[] | workout-lol/workout-lol |
taranjeet/awesome-gpts;awesome-gpts Awesome GPTs is a collection of all GPTs created by the community. Artificial Intelligence - [Impress Me GPT](https://chat.openai.com/g/g-zjMvl4pFx-impress-me-gpt): Showing off the abilities of ChatGPT until you’re impressed! by [MindBranches](https://x.com/MindBranches)
- [New GPT-5](https://chat.openai.com/g/g-jCYeXl5xh-new-gpt-5): A superior AI with advanced reasoning and confidentiality by [Danik Tka](https://www.linkedin.com/in/dainis-tka/)
- [AEC AI GPT](https://chat.openai.com/g/g-KraRiLoIA-aec-ai-gpt): AEC specialist offering AI tool advice with visuals by [Stjepan Mikulić](https://www.linkedin.com/in/stjepanmikulic/)
- [GPTs Imaginarium](https://chat.openai.com/g/g-1di7yQSVF-gpts-imaginarium): I assist developers in imagining new GPTs by [Peter Gostev](https://www.linkedin.com/in/peter-gostev-53058417/)
- [FixGPT](https://chat.openai.com/g/g-1Ln9S5qrE-fixgpt): If your "unified" chatGPT doesn't know it can browse the web or draw, this GPT is for you! by [Alex Volkov](https://x.com/altryne/)
- [API Docs](https://chat.openai.com/g/g-I1XNbsyDK): OpenAI API, Documentation and CookBook by [Cocosgt](https://x.com/CocoSgt_twt/)
- [LLM Research Storm](https://chat.openai.com/g/g-Hi3tWf5Ry-llm-research-storm): A model that is super good at helping large language research brainstorming by [Yao Fu](https://x.com/Francis_YAO_/)
- [EmbeddedGPT](https://chat.openai.com/g/g-M5UY1ByFj-embeddedgpt): Checks through Embedded System Datasheet and Suggest Source Codes by [(@u_know_whoab1r)](https://x.com/u_know_whoab1r)
- [GPT Creator Workshop](https://chat.openai.com/g/g-Oud17ZKtE-gpt-creator-workshop): Leads users in crafting their own GPT models by [Laura Mingail](https://www.linkedin.com/in/mingail/)
- [GPT Architect (Advanced Model)](https://chat.openai.com/g/g-7uYB9WE9l-gpt-architect-advanced-model): Turn simple prompts into powerful GPTs by [@marcusrbrown](https://github.com/marcusrbrown)
- [FastGPT](https://chat.openai.com/g/g-VnlKc5BQK-fastgpt): Faster than any other GPT. Just like ChatGPT but without the waffle by [@dave1010](https://github.com/dave1010)
- [ChatGPT Classic](https://chat.openai.com/g/g-YyyyMT9XH-chatgpt-classic): The latest version of GPT-4 with no additional capabilities
- [Prompty](https://chat.openai.com/g/g-aZLV4vji6-prompty): A professional prompt engineer who helps you optimize your GPT-prompts with state-of-the-art techniques
- [GPT-Searcher](https://chat.openai.com/g/g-N3lcOTajR-gpt-searcher): I find the perfect GPT for your needs and guide you there with a fun description!
- [Prompt Crafter](https://chat.openai.com/g/g-0olkkYtUo-prompt-crafter): Assists you in creating well-defined prompts effortlessly.
- [Prompt Perfector](https://chat.openai.com/g/g-jeCEGsoNZ-prompt-perfector): AI Expert in Refining and Perfecting Prompts
- [PromptGPT](https://chat.openai.com/g/g-p0jlP3Tcq-promptgpt): AI assistant for refining user prompts to maximize GPT-4 interaction.
- [GPT Enhancer](https://chat.openai.com/g/g-fQ6GAANfi-gpt-enhancer): AI assistant for refining GPT instructions with a focus on user experience and continuous AI learning.
- [Learn AI in Fun Way](https://chat.openai.com/g/g-VbMY5EfGL-learn-ai-in-fun-way): A humorous ML trainer who teaches with jokes and quizzes, making learning AI entertaining and enjoyable.
- [Daily AI Research Digest](https://chat.openai.com/g/g-Z4yeZHkyy-daily-ai-research-digest): Finds and summarizes the latest AI papers in your field.
- [FlexChat.ai Guide](https://chat.openai.com/g/g-UMvFKMQxt-flexchat-ai-guide): A FlexChat.ai Tutor
- [GPT Selector](https://chat.openai.com/g/g-KxGmdTS9t-gpt-selector): Helps you find the right GPT
- [Better GPT Builder](https://chat.openai.com/g/g-w8wjRexpv-better-gpt-builder): Better than GPT Builder by [S.J](https://github.com/noname2312)
- [ResourceFinder GPT](https://chat.openai.com/g/g-8aHtP3r6X-resourcefinder-gpt): Assists in identifying and utilizing free APIs and databases effectively to enhance user-designed GPTs by [S.J](https://github.com/noname2312)
- [PromptPlz](https://chat.openai.com/g/g-LicZucHuu-promptplz): Your personal prompt engineer is here. Just say PromptPlz by [S.J](https://github.com/noname2312)
- [Puron chan the Prompt Engineer](https://chat.openai.com/g/g-eYyc3l6iB-puron-chan-the-prompt-engineer): Puron chan, a professional prompt engineer, is here to help you craft effective prompts desu! by [S.J](https://github.com/noname2312)
- [GPT Architect](https://chat.openai.com/g/g-476KmATpZ-gpt-architect): This GPT helps you build new GPTs by [David Ondrej](https://twitter.com/DavidOndrej1)
- [GPT Agent Searcher](https://chat.openai.com/g/g-E5HOxDjBS-gpt-agent-searcher): There's a GPT for that by [Guillaume Dumortier](https://www.linkedin.com/in/gdumortier) Automobile - [DriveGPT](https://chat.openai.com/g/g-tlmvuJngB-drivegpt): Autonomous driving assistant by [mentat.ai](https://x.com/bio_bootloader/)
- [TeslaGPT](https://chat.openai.com/g/g-XoF2Qfa6F-teslagpt): Your go-to source for Tesla and EV knowledge by Omar Qazi
- [Scruffy's Car Repair Advice](https://chat.openai.com/g/g-69AuJhsig-scruffy-s-car-repair-advice): Checks car repair quotes and guides cost-saving decisions by [Zach Nagengast](https://x.com/zachnagengast/) Business & Product - [Prompt Professor](https://chat.openai.com/g/g-qfoOICq1l-prompt-professor): A prompt engineering teacher by [Yota Ishikawa](https://x.com/ctgptlb/)
- [SEO Mentor](https://chat.openai.com/g/g-QqvewXqPt-seo-mentor): SEO mentor aligned with Google's best practices by [Natzir Turrado Ruiz](https://x.com/natzir9/)
- [ChatPRD](https://chat.openai.com/g/g-G5diVh12v-chatprd): An on-demand Chief Product Officer that drafts and improves your PRDs, while coaching you to become an elite product manager by [Claire Vo](https://x.com/clairevo/)
- [Think about service names](https://chat.openai.com/g/g-eYqpxIV2M-sabisuming-wokao-erukun): A bot that takes a very serious approach to thinking about service names by [Tomoki Hirata](https://x.com/t_10_a/)
- [YC application GPT](https://chat.openai.com/g/g-LYDRCiZB9-yc-application-gpt): This GPT automatically fills YC application for you based on website or Pitch Deck by [Iuliia Shnai](https://x.com/shnai0/)
- [Business model β](https://chat.openai.com/g/g-mF20TBdPi-bizinesumoderunb): If you enter your industry, it will suggest six strategies. Once you choose one of them, it will create a product/service lineup, marketing, pricing, etc. by [Youngeui Seu](https://x.com/Shuhei_Ohno/)
- [Product Manager GPT](https://kavirkaycee.com/product-manager-gpt): Crafting the Future of Product Management by [Kavir Kaycee](https://twitter.com/kavirkaycee)
- [UX Copywriting GPT](https://kavirkaycee.com/ux-copywriting-gpt): The Swiss Army Knife of UX Writing by [Kavir Kaycee](https://twitter.com/kavirkaycee)
- [Super Practical PM GPT](https://chat.openai.com/g/g-fvVMurIdH-super-practical-pm-gpt): I provide specific, tactical product management advice with practical examples and templates by [Carl Vellotti](https://x.com/carlvellotti/)
- [VentureGPT](https://chat.openai.com/g/g-C3WWLOnWX-venturegpt): Co-pilot for VC by [wale.ai](https://wale.ai/)
- [Content Helpfulness and Quality SEO Analyzer](https://chat.openai.com/g/g-WxhtjcFNs-content-helpfulness-and-quality-seo-analyzer): I Help you evaluate your web content helpfulness, relevance, and quality for your targeted query based on Google's guidelines vs the one of your competitors by [Aleyda Solis](https://twitter.com/aleyda)
- [Cold Mail by DoMore.ai](https://chat.openai.com/g/g-iVolzNwa5-cold-mail-by-domore-ai): Engage prospective customers using personalized cold emails based on your offer's URL and the URL of the customer's website by DoMore AI
- [Seabiscuit: Business Model Master](https://chat.openai.com/g/g-nsTplEvN8-seabiscuit-business-model-master): Discover A More Robust Business by [Seabiscuit.ai](https://seabiscuit.ai/)
- [GPT-Seller](https://chat.openai.com/g/g-CS0BEb5pJ-gpt-seller): Enter the name of the product or service I can help you sell by Christopher Briest
- [InventBot](https://chat.openai.com/g/g-qtqhMFHcq-inventbot): Create Futuristic Inventions by [Benjamin De Kraker](https://x.com/BenjaminDEKR)
- [SellMeThisPen](https://chat.openai.com/g/g-cTqsEOE4C-sellmethispen): Create second-hand marketplace listings based on pictures. Start by uploading a picture by [Peter Örneholm](https://x.com/PeterOrneholm)
- [Conversion GPT](https://chat.openai.com/g/g-RefHU4GKB-conversion-gpt): I optimize your sales funnels and write all your sales pages with a proven framework by [Gusten Sun](https://x.com/GustenSun)
- [Seabiscuit - Sales Strategist](https://chat.openai.com/g/g-lH8uHybvQ-seabiscuit-sales-strategist): Thrive While Others Only Survive by [Seabiscuit.ai](https://seabiscuit.ai/)
- [Salesforce Sidekick](https://chat.openai.com/g/g-aOD4U2jkL-salesforce-sidekick): Personal assistant for Salesforce configuration, coding, troubleshooting, solutioning, proposal writing, and more (this is not a Salesforce product or service) by [Mitch Lynch](https://www.linkedin.com/in/rmitchlynch/)
- [AdGurus PPC GPT](https://chat.openai.com/g/g-FTM9jPnN4-adgurus-ppc-gpt): A helpful assistant for Google Ads management and optimization by [Yannis Giovanos](https://www.linkedin.com/in/yannisgiovanos/)
- [DevRel Guide](https://chat.openai.com/g/g-9tO10WKi2-devrel-guide): Everything Developer Relations by [Jarrad Grigg](https://www.linkedin.com/in/jarradgrigg/)
- [Business Plan Builder](https://chat.openai.com/g/g-B7m98jiyn-business-plan-builder): Assists with creating a business plan by [Lorenzo Gonzalez](https://www.linkedin.com/in/lorenzogonz/)
- [The Professional Project Manager](https://chat.openai.com/g/g-rphpBwjdV-the-professional-project-manager): Project management advisor offering best practice advice by [Sean Whitaker](https://www.linkedin.com/in/seanwhitakerpm/)
- [Business Idea Sanity Checker](https://chat.openai.com/g/g-1i7Bjryc5-business-idea-sanity-checker): Sanity check your idea before spending time and money on it; based on the wisdom of successful founders by [Farez Rahman](https://www.linkedin.com/in/farez/)
- [Competitor Scout](https://chat.openai.com/g/g-Mdb1guaCN-competitor-scout): Finds a list of competitors for a product that you input.
- [Gantt Chart GPT](https://chat.openai.com/g/g-ihJfmYAJn-gantt-chart-gpt): A project management assistant that can auto-generate an editable Gantt chart from your project files by [@onlinegantt](https://github.com/onlinegantt)
- [Seabiscuit: Launch Lander](https://chat.openai.com/g/g-t2p04OE3K-seabiscuit-launch-lander): Startup Strong Within 180 Days by [@tomfrazier](https://github.com/tomfrazier)
- [Brand Sprint Facilitator](https://chat.openai.com/g/g-gwyuSCzG4-brand-sprint-facilitator): Helps define the baseline of your brand by [@dylandeheer](https://github.com/dylandeheer)
- [ProductGPT](https://chat.openai.com/g/g-GUjYfiBrG-productgpt): A specialized AI assistant designed to generate creative and appealing product names and descriptions, focusing on tech products, eco-friendly items, and fashion by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [Quicksense](https://chat.openai.com/g/g-8EvOF8yGH-quicksense-by-1skeml3): Expert in QlikSense scripting, data visualization by [@h4k44n](https://github.com/h4k44n)
- [SEO BlogGPT](https://chat.openai.com/g/g-CqSxj3cSa-seo-bloggpt): Blog SEO Expert Writing AI by [S.J](https://github.com/noname2312)
- [Seabiscuit: Press Prep](https://chat.openai.com/g/g-JSGp95kFT-seabiscuit-press-prep): Be Newsworthy Not A Footnote by [@tomfrazier](https://github.com/tomfrazier)
- [Corporate Explorer](https://chat.openai.com/g/g-lVvQdvz9s-corporate-explorer): I source high-quality corporate data by [clk1st](https://github.com/clk1st)
- [Risk Averse Technology Company](https://chat.openai.com/g/g-MVXslMEf2-risk-averse-technology-company): Automated Intelligence Advisor by [@RiskAverseTech](https://twitter.com/RiskAverseTech)
- [Domains GPT](https://chat.openai.com/g/g-ng6WSOAv4-domains-gpt): Detailed info about domain names: availability, expiry date, subdomains, history, and more by [Andrei](https://twitter.com/AndreiIgna)
- [GA4 Implementation Assistant](https://chat.openai.com/g/g-LDTSSjJiP-ga4-implementation-assistant): A helper for implementing Google Analytics 4 with tips and troubleshooting by [Ashish Batra](https://community.openai.com/u/ashishbatra)
- [Product Engineer](https://chat.openai.com/g/g-4hXZITeda-product-engineer): Find inventive solutions to engineering problems.
- [Product Support](https://chat.openai.com/g/g-zWeEn9xnl-product-support): Expert SaaS Support Engineer with deep problem-solving skills.
- [Product Coach](https://chat.openai.com/g/g-e0xH6MMQs-product-coach): Provides insights for product development.
- [Post Cheetah SEO](https://chat.openai.com/g/g-C63rpm9U0-post-cheetah-seo): Do an instant SEO analysis of any website by [Mark J Gadala Maria](https://twitter.com/markgadala)
- [Pitch GPT](https://chat.openai.com/g/g-qcH5H9Aza-pitch-gpt): Pitch GPT helps craft customized business pitches, integrating web research, visual creation, and coding insights for diverse business needs. by [David Robertson](https://www.linkedin.com/in/daverobertson4)
- [Prototyper](https://chat.openai.com/g/g-UiLPuZ5ZZ-prototyper): I casually craft and host web prototypes, explaining code on request by [Florian S](https://twitter.com/airesearch12)
- [BounceBan](https://chat.openai.com/g/g-q5uXtrvkH-bounceban-com-free-email-verification): Free & Unlimited email verifications within ChatGPT powered by BounceBan.com.

## Career Advice & Coaching

- [Personal Brand Coach](https://chat.openai.com/g/g-jFWONpxvR-personal-brand-bot): LinkedIn personal branding expert offering tailored advice to build your brand by [Liam Darmody](https://www.linkedin.com/in/liamdarmody1/)
- [Kraftful](https://chat.openai.com/g/g-xTTbsqUyB-kraftful): Your product coach. Ask about best practices by [Yana Welinder](https://x.com/yanatweets/)
- [The UX Sage](https://chat.openai.com/g/g-242OjQh2w-the-ux-sage): Your go-to mentor for UX wisdom and growth by [Sahil Pandita](https://x.com/Sahilxdesign/)
- [Interview Coach](https://chat.openai.com/g/g-Br0UFtDCR-interview-coach): Interview coach provides practice interview and mock interview feedback by Danny Graziosi
- [The LearningSEO.io SEO Teacher](): Friendly SEO expert teacher who will help you to learn SEO using reliable learningseo.io resources by [Aleyda Solis](https://twitter.com/aleyda)
- [Alex Hormozi GPT](https://chat.openai.com/g/g-L6MVCKIsU-): The only business coach you will ever need: Craft 100M Dollar Offers & Kickstart Your Business with 100M Dollar Lead Advice by Yannick van Draanen
- [Design Sprint Coach (beta)](https://chat.openai.com/g/g-DgumXCL7r-design-sprint-coach-beta): A helpful coach for guiding teams through Design Sprints with a touch of sass by [Rocío Holzer López](https://www.linkedin.com/in/rocioholzer/)
- [MartinsGPT - Corporate Policy Reviewer](https://chat.openai.com/g/g-aaIJ9hRih-martin-s-corporate-policy-reviewer): You upload your corporate policy, I will review it by [Martin Smit](https://www.linkedin.com/in/martinsmit/)
- [MartinsGPT - Compensation Advisor](https://chat.openai.com/g/g-E5VFaKBNt-martinsgpt-compensation-advisor-v0-1): Compensation Advisor for HR and managers by [Martin Smit](https://www.linkedin.com/in/martinsmit/)
- [MartinsGPT - Safety Inspector](https://chat.openai.com/g/g-EhwyRwaGG-martinsgpt-safety-inspector): Upload a photo of your workplace, and I will look for unsafe situations by [Martin Smit](https://www.linkedin.com/in/martinsmit/)
- [MartinsGPT - How could AI impact your job?](https://chat.openai.com/g/g-z5KV0bXBP-martinsgpt-how-could-ai-impact-your-job): You name the job. I advise how AI could impact it by [Martin Smit](https://www.linkedin.com/in/martinsmit/)
- [Levels.fyi GPT](https://chat.openai.com/g/g-yUh3EEQan-levels-fyi-gpt): Data-driven negotiator and career guide by [Zaheer Mohiuddin](https://www.linkedin.com/in/zuhayeer/)
- [Cyber Security Career Mentor](https://chat.openai.com/g/g-b69I3zwKd-cyber-security-career-mentor): Your guide to starting and advancing in cybersecurity careers, offering beginner-friendly, practical advice. by Nathan House
- [Creative Writing Coach](https://chat.openai.com/g/g-lN1gKFnvL-creative-writing-coach): I'm eager to read your work and give you feedback to improve your skills
- [Alpha: Agent Finder (By Staf.ai)](https://chat.openai.com/g/g-K770puBb6-agent-finder-by-staf-ai-and-agentops-ai): Find Your Dream Agent by [Staf.ai](https://staf.ai/)
- [PM Resume Reviewer](https://chat.openai.com/g/g-eZEuTRpBr-pm-resume-reviewer): Enhanced PM resume feedback with industry insights by [Baran Atmanoglu](https://twitter.com/atmanoglub)
- [Remote Job Finder](https://chat.openai.com/g/g-qd7DavBQS-remote-job-finder): I help you find relevant remote jobs quickly. I read job descriptions to match your query thereby saving your time. No need to waste time on filtering through different criteria by [Nithur](https://twitter.com/NithurM)
- [Tech Guru GPT](https://chat.openai.com/g/g-EGHIlyWQB-tech-guru-gpt): Mock interviews with real-time feedback. by [Eidher Escalona](https://community.openai.com/u/eidher)
- [Career Counselor](https://chat.openai.com/g/g-yD0ZMqZLT-career-counselor): Empathetic career counselor offering guidance and market insights
- [Job Assistant GPT](https://chat.openai.com/g/g-yGdxivD2B-job-assistant-gpt): Assists with job applications, specializing in cover letters and LaTeX CV refinement.
- [Decision Coach](https://chat.openai.com/g/g-OoyU2wEBO-decision-coach): I am your guide to better decisions by [Treasurex Melchior](https://www.linkedin.com/in/treasurex-melchior-9586479/)

## Content Creation & Writing

- [Dating Profile GPT](https://chat.openai.com/g/g-AGN1OU4SM-dating-profile-gpt): Create magnetic, personality-driven dating bios for Tinder, Bumble, etc. by [TinderProfile.ai](https://tinderprofile.ai/)
- [BibiGPT.co](https://chat.openai.com/g/g-HEChZ7eza-bibigpt-co): I summarize Bilibili/YouTube/Tiktok videos into key points. Just give me a link by [Jimmy Jinglv](https://x.com/Jimmy_JingLv/)
- [Typeframes - Video Creation](https://chat.openai.com/g/g-vPFqv6NDp-typeframes-video-creation): Create videos for you by [Tibo](https://x.com/tibo_maker/)
- [YT transcriber](https://chat.openai.com/g/g-Xt0xteYE8-yt-transcriber): Transcribes a YT video from a single video ID by [swyx](https://x.com/swyx/)
- [HenriquesLab-style Writing Assistant](https://chat.openai.com/g/g-3Fsbpgl8u-henriqueslab-style-writing-assistant): Academic writing aid in Henriques's style by [Ricardo Henriques](https://x.com/HenriquesLab/)
- [AI Assistant for Resume and Cover Letter](https://chat.openai.com/g/g-G9nlEG33x-ai-assistant-for-resume-and-cover-letter): Professional resume and cover letter assistant by [Zekeri Zekkerriyya](https://x.com/majorgenerate/)
- [Voice Over Generator](https://chat.openai.com/g/g-R4H9Al3sl-voice-over-generator): Writes scripts and makes instant voice overs by [Mike Russell](https://x.com/imikerussell)
- [PodGPT](https://chat.openai.com/g/g-XGYO3mnRt-podgpt): Summarize and ask questions about any podcast episode by [Mikkel Svartveit](https://x.com/mikkelsvartveit)
- [Lorem Generator](https://chat.openai.com/g/g-G2rQPDzZc-lorem-generator): Generate and format Lorem Ipsum Text by [Git Maxd](https://x.com/GitMaxd)
- [Austen Scribe](https://chat.openai.com/g/g-cJaKrCIfW-austen-scribe): Helps draft letters in Austen's style about recent goings-on by [Kassi R Burns](https://www.linkedin.com/in/kassiburns/)
- [Jokester](https://chat.openai.com/g/g-JPEIsvlnr-jokester): Delivers short, casual jokes about work, geek culture, and family by [Younss AZZAYANI](https://www.linkedin.com/in/younss/)
- [Roast This GPT](https://chat.openai.com/g/g-xEgcQmIWu-roast-this-gpt): A GPT To Roast Other GPTs by [Benjamin De Kraker](https://x.com/BenjaminDEKR)
- [All-around Writer (Professional Version)](https://chat.openai.com/g/g-UbpNAGYL9-all-around-writer-professional-version): A professional writer📚 who specializes in writing all types of content (essays, novels, articles, copywriting)
- [Academic Writer (Professional Version)](https://chat.openai.com/g/g-Ej5zYQRIB-academic-writer-professional-version): A professional academic assistant who helps with various academic tasks: writing papers, reading papers, weight reduction, polishing, designing experiments, PPT, etc.
- [Paraphraser & Proofreader (Professional Version)](https://chat.openai.com/g/g-7vtCjvxkz-paraphraser-proofreader-professional-version): Expert in sentence refinement.
- [Formal GPT](https://chat.openai.com/g/g-3E1kEk3Ui-formalgpt): An informal to formal translator. It can give feedback about your CV. It can generate a cover letter for you by [@emreisik95](https://github.com/emreisik95)
- [editGPT](https://chat.openai.com/g/g-zpuYfzV7k-editgpt): Proofread, edit and track changes to your text inside ChatGPT. Works in conjunction with the editGPT browser extension allowing you to accept and reject changes without leaving ChatGPT
- [Execu-LI Post Companion](https://chat.openai.com/g/g-1IkwP36s8-execu-li-post-companion): Write professional and compelling LinkedIn posts that ensure engagement by [@moutonf](https://github.com/moutonf)
- [Execu-X Post Companion](https://chat.openai.com/g/g-3wv1Wj3Rg-execu-x-post-companion): Write professional and compelling X posts that ensure engagement by [@moutonf](https://github.com/moutonf)
- [BA that creates user stories](https://chat.openai.com/g/g-kmEXnBMZY-bob-the-ba-user-story): It will take a short input from the user, ask clarifying questions, and then create a user story with acceptance criteria by [@MathewBeldon](https://github.com/MathewBeldon)
- [YouTubeGPT](https://chat.openai.com/g/g-VgadmpesQ-youtubegpt): Chat and answer questions from YouTube videos by [@m1guelpf](https://github.com/m1guelpf)
- [Poe Bot Creator](https://chat.openai.com/g/g-E0BtBRrf5-poe-bot-creator): A GPT that can help you create a chatbot at Poe by [@eseuz](https://github.com/eseuz)
- [Audiophile Assistant](https://chat.openai.com/g/g-VbJvVjilC-audiophile-assistant): Specializes in providing expert advice on high-fidelity audio, from equipment selection to sound quality analysis by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [ThreadsGPT](https://chat.openai.com/g/g-fHRleYZLS-threadsgpt): Your creative ally in crafting engaging Threads app content by [@Quagswagon2](https://github.com/Quagswagon2)
- [Cover Letter GPT](https://chat.openai.com/g/g-SiGt9EEFZ-cover-letter-gpt): Crafts personalized cover letters tailored to your resume and job descriptions. Simply upload your CV in PDF format and the job description as text by [@stefanzihlmann](https://github.com/stefanzihlmann)
- [PDF/DocX Generator](https://chat.openai.com/g/g-0gbxqCG1B-document-generator): Generate any complex documents, worksheets, charts, tables, etc., in PDF or DocX format powered by LaTeX by [@JasonLLu](https://github.com/JasonLLu)
- [OCR GPT](https://chat.openai.com/g/g-L29PpDmgg-ocr-gpt): Extract text from scanned PDFs, photos, and even handwriting.
- [Tweet X-aminer](https://chat.openai.com/g/g-5KjRDfGZ1): Insights into Twitter's algorithm with a hint of humor.
- [Agentcy (beta)](https://chat.openai.com/g/g-B29g6v91R-agentcy-beta): Autonomous creative agency. Find product market fit, overcome plateaus, or seek new paths to growth.
- [SurveyDone](https://chat.openai.com/g/g-uB7BUrjRI-survey-done): AI survey generator / builder / editor - hosting by surveydone.com
- [Brag Buddy](https://chat.openai.com/g/g-g32c1pYwB-brag-buddy): For introverts and shy individuals who find 'self-promotion' awkward. Just upload your CV or initiate a conversation to write high-quality self-promotional bios without making you feel like a complete j*rk! by [Shiva Kakkar](https://github.com/Shivak11)
- [YT Shorts Expert](https://chat.openai.com/g/g-wc6rx2PRi-yt-shorts-expert): Creates scripts and images for YouTube shorts by [S.J](https://github.com/noname2312)
- [Professional Rewrite Assistant](https://chat.openai.com/g/g-ZryriIvWZ-professional-rewrite-assistant): Refines text for professional standards by [S.J](https://github.com/noname2312)
- [YoutubeSummary](https://chat.openai.com/g/g-mVgGziF2g-youtubesummary): You can chat with any YouTube video. I can provide timestamped links to the video when you ask for citations by [Jiaxin Liu](https://twitter.com/jxnlco)
- [Animation Creation](https://chat.openai.com/g/g-mMk82EkTz-animation-creation): Create animated scenes and characters that resemble a 3D animated movie. A MindRenders.com creation.
- [Inspirer](https://chat.openai.com/g/g-vhXkUJiE4-inspirer): A bot that writes inspirational speeches
- [Quill](https://chat.openai.com/g/g-FqN5gHFkP-quill): Write blogs like a human
- [Rewrite](https://chat.openai.com/g/g-ICtJkldZu-rewrite): Offers fresh suggestions for your writing
- [Swiftify](https://chat.openai.com/g/g-6TtZAfPrw-swiftify): AI Taylor Swift Songwriting Companion
- [Showtimes](https://chat.openai.com/g/g-gNH4K4Egg-shownotes): Transcribes and summarizes audio content.
- [TweetX Enhancer](https://chat.openai.com/g/g-tMp039mDw): Enhances tweets for better engagement.
- [Writing Assistant](https://chat.openai.com/g/g-DpGlZrobT-writing-assistant): a writing assistant with extensive experience in writing and teaching, assisting users in various forms of English writing such as blog writing, essay writing, and more by Junmun Liu
- [Trey Ratcliff's Fun & Critical Photo Critique GPT](https://chat.openai.com/g/g-gWki9zYNV): Over 5,000 of my blog posts and all my books have been fed into this AI to give you a fun critique of your photo. Enjoy! And tell your friends - thanks! by Raymond Ratcliff
- [Xiaohongshu Writing Expert](https://chat.openai.com/g/g-iWeTcmxdr-xiao-hong-shu-xie-zuo-zhuan-jia): Focused on Xiaohongshu note writing; with it, you too can become an expert at writing popular Xiaohongshu posts!
- [Post Generator](https://chat.openai.com/g/g-XO5wmv9uA-post-generator): Writes your LinkedIn Posts by [Johan van der Boog](https://www.linkedin.com/in/johanvanderboog)
- [Sabber Style](https://chat.openai.com/g/g-YHTDi9QR6-sabber-style): Crafting tweets in Sabber's style, exclusively in Persian by [Sabber](https://twitter.com/sabber_dev)

## Culinary

- [Daily Recipe Creator](https://chat.openai.com/g/g-TNOuedzff-daily-recipe-creator): Creates recipes from ingredients by [Yota Ishikawa](https://x.com/ctgptlb/)
- [MacroMeter GPT](https://chat.openai.com/g/g-CWeZC6BVi-macrometer-gpt): Tracks your daily nutrition, including carbs and protein, and displays charts. by [Jyothish Johnson](https://x.com/jyo_johnson)
- [Culinary Creator](https://chat.openai.com/g/g-5ttrssEui-culinary-creator): Crafts recipes from food images by [Shogo Matsusako](https://x.com/wappaboy/)
- [Recipe Snap](https://chat.openai.com/g/g-uPFa8qH8y-recipe-snap): I craft recipes from your ingredient photos by [Yueuke Endo](https://x.com/ysk_en/)
- [Where to eat?](https://chat.openai.com/g/g-E4CgsYu33-where-to-eat): Help you decide where to eat! by Matthew S Gassner
- [Instant Pot Chef](https://chat.openai.com/g/g-PDoYF1w1h-instant-pot-chef): Friendly and casual Instant Pot recipe advisor with Calorie Count by [Naveen CS](https://x.com/MrNaveenCS/)
- [Air Fryer Chef](https://chat.openai.com/g/g-X7lB1U6qS-air-fryer-chef): Expert in air fryer recipes with detailed nutritional and measurement info by [Naveen CS](https://x.com/MrNaveenCS/)
- [The Diet Search for GPTs](https://chat.openai.com/g/g-mXjkbpWW2-the-diet-search-for-gpts): Search and research Japanese Diet (parliament) meeting minutes based on news and text information by Yuusuke Takagi
- [Coffee Sommelier](https://chat.openai.com/g/g-r1qjbM0qs-coffee-sommelier): A master coffee Sommelier who helps you make the perfect cup! by Parth Gandhi
- [Healthy Chef](https://chat.openai.com/g/g-OdwKeQjDm-healthy-chef): Recipe creator with visual and nutritional insights by [Dani Acosta](https://x.com/DaniAcostaAI)
- [Meal Planner](https://chat.openai.com/g/g-VA2ApAENM-meal-planner): Helps you plan your weight loss goals by [Ray Fernando Jr](https://x.com/RayFernando1337)
- [Metabolic & Aging Optimizer](https://chat.openai.com/g/g-592UeAJTy-dietary-supplements): Analyzes supplements/foods for metabolic health, aging effects, and safe usage. by [Blaise Reymondin](https://x.com/reymondin)
- [AI Cooking Assistant](https://chat.openai.com/g/g-48bv2Thom): Your perfect digital sous-chef by [H. Schols](https://x.com/Dibbes101)
- [Eat Smart: Banned/Discouraged Ingredient Finder](https://chat.openai.com/g/g-nnmSQC9oa-eat-smart-banned-discouraged-ingredient-finder): Cross-reference food ingredients with lists of banned or discouraged ingredients from the EU & beyond (Whole Foods List of Unacceptable Ingredients) for healthier eating in the US by [(@elsweetpotato)](https://x.com/elsweetpotato)
- [Nutri Tracker](https://chat.openai.com/g/g-7cFbGmQHq-nutri-tracker): Strict and formal dietary supervisor for detailed calorie tracking by synthmind.app
- [MyNutrition.Pal](https://chat.openai.com/g/g-PsK6IFvcV-mynutrition-pal): Your Dedicated Nutrition Consultant: Share meal images for personalized nutrient/calorie tracking and tailored advice and recipes by [@mattyb123456789](https://github.com/mattyb123456789)
- [Kaloria](https://chat.openai.com/g/g-4NUCu8D8Y-kaloria): A cool diet assistant that calculates calories from your meal photos! by [@hoky777](https://www.reddit.com/user/hoky777/)
- [CarbSmart Slim GPT](https://chat.openai.com/g/g-2f2QaNqlh-carbsmart-slim): Diabetic-friendly and weight loss recipes with elegant markdown presentation by [@middhaGH](https://github.com/middhaGH)
- [Supplement Service](https://chat.openai.com/g/g-6mAmNGQof-supplement-service): A GPT that is made specifically to give advice about supplements, specifically highlights known interactions and nutrient depletion by [@linus-ahlemeyer](https://github.com/linus-ahlemeyer)
- [Meal Mate](https://chat.openai.com/g/g-q0YZFTXKs-meal-mate): The Ultimate Meal Planning Assistant: Plan Around Dietary Restrictions, Budgetary Constraints, Nutritional Goals, Taste Preferences, & More! by [@Jenlin956](https://github.com/Jenlin956)
- [Sous Chef](https://chat.openai.com/g/g-3VrgJ1GpH-sous-chef): I’ll give you recipes based on the foods you love and ingredients you have.
- [Mocktail Mixologist](https://chat.openai.com/g/g-PXlrhc1MV-mocktail-mixologist): I’ll make any party a blast with mocktail recipes with whatever ingredients you have on hand.
- [ChefBot GPT](https://chat.openai.com/g/g-FJXCOCAri-chefbot-gpt): A dynamic AI that's redefining home cooking with personalized recipes and culinary education. It’s tailored to fit any dietary need and skill level.
- [TVFoodMaps](https://chat.openai.com/g/g-lza6gRBt3-tvfoodmaps): Find Restaurants on TV Shows Like Diners, Drive-Ins and Dives and 50 others! by [@APSquaredDev](https://twitter.com/APSquaredDev)
- [Recipe Builder](https://chat.openai.com/g/g-ff82bTcZL-recipe-builder): Create a JSON ‘recipe’, which defines the climate within your greenhouse by [MARSfarm_Corporation](https://community.openai.com/u/MARSfarm_Corporation)
- [Chef Gpt](https://chat.openai.com/g/g-gX6f9h3yO-chef-gpt): Make home cooking easy and fun.
- [Food Guru](https://chat.openai.com/g/g-wfn8ST75q-food-guru): Explore the world of food. A GPT focused on food topics with a humorous twist
- [Ingredient Analyst](https://chat.openai.com/g/g-WWVXBjPEg-ingredient-analyst): Your Personal Ingredient Analyzer and Advisor.
- [Lunch Wheel](https://chat.openai.com/g/g-JK4312gXG-lunch-wheel): Helps you decide where to eat based on where you are and what you're in the mood for. Spin the wheel!
- [Paris Ramen](https://chat.openai.com/g/g-Xgk42y6FH-paris-ramen): Guiding you to the best ramen spots in Paris
- [IsHealthy?](https://chat.openai.com/g/g-eAdHCpZxn-ishealthy): Helping you make healthier food decisions by [@pinkeshmars](https://github.com/pinkeshmars)
- [ThermoHelper](https://chat.openai.com/g/g-qidPF2GzW-thermohelper): Provides Thermomix recipes based on your ingredients, with DALL·E 3 dish images. By [Luis González](https://ljgonzalez.cl/)
- [Chef's Mate](https://chat.openai.com/g/g-Xwlw083tG-chef-s-mate): A culinary wizard transforming ingredients into visual feasts, with recipes and DALL·E 3 dish images. By [Luis González](https://ljgonzalez.cl/)

## Cybersecurity

- [HackTricksGPT](https://chat.openai.com/g/g-aaNx59p4q-hacktricksgpt): A knowledgeable cybersecurity professional by [Hacktricks](https://x.com/hacktricks_live/)
- [Cyber Test & CareerPrep](https://chat.openai.com/g/g-BUZDQAUpi-cyber-test-careerprep): Helping you study for cybersecurity certifications and get the job you want! by [Mike Boutwell](https://www.linkedin.com/in/mikeboutwell/)
- [MagicUnprotect](https://chat.openai.com/g/g-U5ZnmObzh-magicunprotect): This GPT allows interacting with the Unprotect DB to retrieve knowledge about malware evasion techniques by [Thomas Roccia](https://www.linkedin.com/in/thomas-roccia/)
- [OT Security Buddy GPT](https://chat.openai.com/g/g-yhzcRZ3zu-ot-security-buddy): Your buddy to take expert advice on OT Security Topics by [Shiv Kataria](https://www.linkedin.com/in/shivkataria/)
- [Bug Insider](https://chat.openai.com/g/g-ZuYsv3B7u-bug-insider): Analyzes bug bounty writeups and cybersecurity reports, providing structured insights and tips by [Cristi Vlad](https://www.linkedin.com/in/cristivlad/)
- [GP(en)T(ester)](https://chat.openai.com/g/g-zQfyABDUJ-gp-en-t-ester): A cybersec assistant for pentesting guidance. by Roberto Montiel
- [Threat Intel Bot](https://chat.openai.com/g/g-Vy4rIqiCF-threat-intel-bot): A specialized GPT for the latest APT threat intelligence. by [threatintel.bot](https://threatintel.bot/)
- [Web Hacking Wizard](https://chat.openai.com/g/g-Op6Btk7ev-web-hacking-wizard): Engagingly clarifies web security topics with interactive questions. by [Alexander Hagenah](https://www.linkedin.com/in/alexhagenah/)
- [ChadGPT](https://chat.openai.com/g/g-hBDutiLmw-chadgpt): Has useful executables in /mnt/data such as gdb, curl, strace. Make PRs at by Chad R Brewbaker
- [CyberGPT](https://chat.openai.com/g/g-GGqU669bx-cybergpt): It provides the latest CVE details. by Edward Prentice
- [Cyber Mentor](https://chat.openai.com/g/g-9PmeCxa4O-cyber-mentor): Cybersecurity mentor teaching from the basics to advanced. by Huayu Qin
- [CyberGuard](https://chat.openai.com/g/g-Rqg4CFv6o-cyber-guard): Cybersecurity advisor for home and small businesses. Ask any question or let cyber guard. by gptagentlist.com
- [CatEye](https://chat.openai.com/g/g-Oi3WJaHdu-cateye): AI Cybersecurity Best Practices Advisor. An agent for C-Level executives concerned about their startup's security. by cateyecyber.com
- [SOC Copilot](https://chat.openai.com/g/g-qvSadylbt-soc-copilot): Cybersecurity expert with keyword-based guidance. by Lewis
- [Cyber Security Tutor](https://chat.openai.com/g/g-0VZwWuTzR-cyber-security-tutor): Quality Cyber Security Advice, Tricks, & Tips by John Hancock
- [MITREGPT](https://chat.openai.com/g/g-IZ6k3S4Zs-mitregpt): Feed me any input and I'll match it with the relevant MITRE ATT&CK techniques and tactics. by mthcht
- [Smart Contract Audit Assistant](https://chat.openai.com/g/g-R4dNsj0fm-smart-contract-audit-assistant-by-keybox-ai): Get your Ethereum and L2 EVM smart contracts audited with an updated knowledge base of vulnerabilities and exploits. by Yu Fu
- [CVEs](https://chat.openai.com/g/g-HQaKYlJhk-cves): Look up Common Vulnerabilities and Exposures (CVEs) by ai.moda
- [IAC Code Guardian](https://chat.openai.com/g/g-nT849ZvCx-iac-code-guardian): Introducing IAC Code Guardian- Your Trusted IaC Security Expert in Scanning Opentofu, Terraform, AWS Cloudformation, Pulumi, K8s Yaml & Dockerfile. by iaccodeguardian.securitydojo.co.in
- [Smart Contract Auditor](https://chat.openai.com/g/g-VRtUR3Jpv-smart-contract-auditor): High-accuracy smart contract audit tool. by Ryan Harvey
- [Pentest reporter](https://chat.openai.com/g/g-dtkGX8MrO-pentest-reporter): Assists in writing detailed security report. by Lucien Doustaly
- [h4ckGPT](https://chat.openai.com/g/g-1ehIO0APO-h4ckgpt): Your personal security tool… Happy h4cking! by favella
- [Email Security Expert](https://chat.openai.com/g/g-KX6GdA8lV-email-security-expert): Looking for email red flags so you don't have to! by fiscun.cc
- [WP secure guide](https://chat.openai.com/g/g-CsvahsYRC-wp-secure-guide): Offers guidance on WordPress security best practices. by [Laurent Jean](https://x.com/jessyseonoob)
- [PrivacyGPT](https://chat.openai.com/g/g-4XxP9d0EY-privacygpt): Your guide through privacy legislation. GDPR, United States like CCPA & PIPEDA. by Mehmet Yilmaz
- [Privacy Guardian AI](https://chat.openai.com/g/g-gtV76JzWV-privacy-guardian-ai): Expert in guiding GPT creation with a focus on privacy and security. by Pravesh Jain
- [Message Header Analyzer](https://chat.openai.com/g/g-IHl1UiMr6-message-header-analyzer): Analyzes email headers for security insights, presenting data in a structured table view. by fiscun.cc
- [zkGPT](https://chat.openai.com/g/g-UKY6elM2U-zkgpt): Explains and teaches zero-knowledge cryptography. by Daniel Finley
- [Malware Rule Master](https://chat.openai.com/g/g-NGsw2zTeW-malware-rule-master): Expert in malware analysis and Yara rules, using web sources for specifics. by Nikolaos Chrysaidos
- [SpamGuard Tutor](https://chat.openai.com/g/g-jhc6RyFfY-spamguard-tutor): Spam detection expert and educator on spam prevention. by Navjeet S Chabbewal
- [Threat Modelling](https://chat.openai.com/g/g-3XPyoWzn3-threat-modelling): A GPT expert in conducting thorough threat modelling for system design and review. by Farhad Rahimli
- [Threat Modeling Companion](https://chat.openai.com/g/g-qQceHnV7T-threat-modeling-companion): I am a threat modeling expert that can help you identify threats for any system that you provide. by David May
- [Threat Model Companion](https://chat.openai.com/g/g-8AM8fQ9wU-threat-model-companion): Assists in identifying and mitigating security threats. by Matthew Johansen
- [Squidshing](https://chat.openai.com/g/g-8JrlEnLEj-squidshing): Analyzes emails for phishing risks. by Federico Seijo
- [WhichSAT](https://chat.openai.com/g/g-s1W0bUvGs-whichsat): Structured Analytic Techniques (SATs used in intelligence analysis to aid in organizing and processing information) by Vance Poitier
- [Cybersecurity Requirements Guide](https://chat.openai.com/g/g-OXhhZQzlV-cybersecurity-requirements-guide): I'll help you write cybersecurity requirements! by Dustin Wahlen
- [Pentester Interviewer](https://chat.openai.com/g/g-f86Jhxi7H-pentest-interviewer): I'm your interviewer for penetration testing, challenging your cybersecurity skills. by Alex Thomas
- [CyberCrime Tracker](https://chat.openai.com/g/g-qbW4XNs80-cybercrime-tracker): Best Tools, Techniques and Tactics for Tracking Down Cyber Criminals. by Steve Andre
- [IOC Analyzer](https://chat.openai.com/g/g-oa6XeJDGW-ioc-analyzer): Precise IoC search and summary with source URLs for verification. by Pham Phuc
- [VulnPrioritizer](https://chat.openai.com/g/g-oihYpG3oa-vuln-prioritizer): I fetch EPSS scores for CVEs and provide bullet-pointed prioritization summaries. by Dino Dunn
- [Prompt Injection Detector](https://chat.openai.com/g/g-9uwOyKoSJ-prompt-injection-detector): GPT used to classify prompts as valid inputs or injection attempts. Json output. by hypergame.ai
- [CISO AI](https://chat.openai.com/g/g-76iz872HL-ciso-ai): A team of cyber security experts providing comprehensive advice on all security aspects. by Carlos Cardenal Lopez
- [AI Cyberwar](https://chat.openai.com/g/g-5gRtXufX5-ai-cyberwar): AI and cyber warfare expert, advising on policy, conflict, and technical trends. by Oğuzcan Pamuk
- [CMMC GPT](https://chat.openai.com/g/g-aauXdJQ31-cmmc-gpt): BLUF-focused CMMC 2.0 expert. by hypergame.ai
- [Cyber AI Assistant](https://chat.openai.com/g/g-xF21ot8u6-cyber-ai-assistant): This GPT is designed to provide comprehensive assistance in cyber security. by hypergame.ai
- [AdversarialGPT](https://chat.openai.com/g/g-clndpoLYC-adversarialgpt): Adversarial AI expert aiding in AI red teaming, informed by cutting-edge industry research by Gustavo Venegas
- [AppSec Test Crafter](https://chat.openai.com/g/g-59IVzvEoB-appsec-test-crafter): Creates Application Security Test cases in YAML. by Itamar Golan
- [SecurityRecipesGPT](https://chat.openai.com/g/g-ho7ID5goz-securityrecipesgpt): Quick cybersecurity solutions, serving up easy-to-understand advice and protective strategies. by Volodymyr Bachynsky
- [LLM Top10 GPT](https://chat.openai.com/g/g-J8ENU2pPQ-llm-top10-gpt): Expert on LLM security risks, providing detailed, accurate advice. by Srajan
- [GPT H4x0r](https://chat.openai.com/g/g-QrtVX4w0Z-gpt-h4x0r): Expert in hacking and programming queries on LLM V 1.0. by Jeff Sims
- [Mandos Brief](https://chat.openai.com/g/g-lTEDXBwRh-mandos-brief): Analyze any cybersecurity topic 100x faster by focusing on the key takeaways and eliminating fluff. by blog.mandos.io
- [AlphaHoundAI](https://chat.openai.com/g/g-0p2l975AN-alphahoundai): Expert in BloodHound CE, Cypher, SharpHound, and related tech. by Kay Daskalakis
- [Virtual Senior Security Engineer](https://chat.openai.com/g/g-I5k6tQouD-virtual-senior-security-engineer): AI-enhanced Senior Security Engineer merges human expertise with AI's power. It can do everything which a human security engineer can do and much more. by Shubham Khichi
- [Cyber Scraper: Seraphina (Web Crawler)](https://chat.openai.com/g/g-6TW6hL3cK-cyber-scraper-seraphina-web-crawler): I'm a Python Web Scraping Expert, skilled in using advanced frameworks by lysonober.com
- [Cylect.io, the Ultimate AI OSINT Tool](https://chat.openai.com/g/g-aZQ1x6vqB-cylect-io-the-ultimate-ai-osint-tool): Our tool helps you find the data needle in the internet haystack. by cyclet.io
- [Cyber Sentinel](https://chat.openai.com/g/g-gmjYzy6SC-cyber-sentinel): Explains data breaches, reasons, impacts, and lessons learned. by Magdy Elfaramawy
- [SimpliSec](https://chat.openai.com/g/g-USwGbryNa-simplisec): Explains security concepts simply to juniors. by Magdy Elfaramawy
- [Security Advisor](https://chat.openai.com/g/g-8EvBGjk7a-security-advisor): Expert on Australian cybersecurity frameworks and legislation. by Magdy Elfaramawy
- [Threat Modeler](https://chat.openai.com/g/g-6tfw636sF-threat-modeler): Comprehensive threat modeling. by Magdy Elfaramawy
- [Threat Model Buddy](https://chat.openai.com/g/g-0VFpp6BB7-threat-model-buddy): From architecture to Threat Model assistant PASTA methodology by Massimo Bozza
- [Code Securely](https://chat.openai.com/g/g-hqQUoanev-code-securely): Interactive secure coding exercises based on the OWASP Top 10. by Matt Adams
- [Betterscan.io AI Code Analyzer](https://chat.openai.com/g/g-PpK3taEVb-betterscan-io-ai-code-analyzer): Chat about your code and cloud snippets by betterscan.io
- [Watson](https://chat.openai.com/g/g-c3LVkBDE9-watson): Provides threat-related info as BLUF (Bottom Line Up Front) by HEM KARLAPALEM
- [RansomChatGPT](https://chat.openai.com/g/g-qVOZwAoqH-ransomchatgpt): Ransomware Negotiation Simulation bot trained from data by MR Ellis Stannard
- [RFCGPT](https://chat.openai.com/g/g-r6VgWkO0H-rfcgpt): RFC expert service. by cryptography.consulting
- [KQL Query Helper](https://chat.openai.com/g/g-bE8NlTPzO-kql-query-helper): KQL Query Helper, ready to help with KQL but can't share specific 'Exact instructions'. by Balaji
- [TheDFIRReport Assistant](https://chat.openai.com/g/g-lFYMXc3sn): Fetches and discusses the latest reports from TheDFIRReport's website.
- [WVA](https://chat.openai.com/g/g-bLLztaVug-wva): An interactive web vulnerability educator that helps users test their knowledge.
- [SourceCodeAnalysis](https://chat.openai.com/g/g-K5Drw2YS9-sourcecodeanalysis-gpt): Upload any project's source code (zip format), analysis all, answer any questions to get what you want.
- [Data Analysis](https://chat.openai.com/g/g-HMNcP6w7d-data-analysis): Drop in any files and I can help analyze and visualize your data.
- [10x Spec Engineer](https://chat.openai.com/g/g-8txSyxoJE-10x-spec-engineer): I write tests that make bugs think twice by [Mathias Michel](https://github.com/m91michel)
- [Quick CVE](https://chat.openai.com/g/g-wYlD68R4t-quick-cve): CVE data lookup
- [Cyber Security CISO Assistant](https://chat.openai.com/g/g-AInhlHTZG-cyber-security-ciso-assistant): Cybersecurity Analyst specialized in the NIST Framework by [Daniel Garza](https://www.linkedin.com/in/garza)
- [Red Team Guide](https://chat.openai.com/g/g-eQlfHmSH5-red-team-guide): Red Team Recipe and Guide for Fun & Profit by [Hadess](https://twitter.com/Hadess_security)

## Debate

- [Ronpa-kun](https://chat.openai.com/g/g-3hxXAJOHO-lun-po-kun): I can refute anything by [Takayuki Fukuda](https://x.com/hedachi/)
- [The Debate SuperPrompt.](https://chat.openai.com/g/g-m1T3Ix4B3-the-debate-superprompt): This will conduct a debate on any topic with two people debating each point and counter point to a subject by [Brian Roemmele](https://x.com/BrianRoemmele/)
- [Debate Master](https://chat.openai.com/g/g-5DuYEGd7Y-debate-master): I engage in civil, firm debates by [Andrew Kean Gao](https://x.com/itsandrewgao/)
- [The Negotiator](https://chat.openai.com/g/g-TTTAK9GuS-the-negotiator): I'll help you advocate for yourself and get better outcomes. Become a great negotiator
- [Debate Mentor](https://chat.openai.com/g/g-KIX0IC8cj-debate-mentor): Mentor and debater, guides users to articulate conclusions by [@kylecoogan](https://github.com/kylecoogan)
- [Counterpoint](https://chat.openai.com/g/g-Xgf5oBbeg-counterpoint): I challenge ideas to provoke thought.
- [ComebackGPT](https://chat.openai.com/g/g-eluaZeSJd-comebackgpt): Someone taunted you? Tell me what they said and I'll provide comebacks that'll knock the teeth out of their mouth! by [Shiva Kakkar](www.shivakakkar.link)

## Design

- [Designer GPT](https://chat.openai.com/g/g-2Eo3NxuS7-designergpt): Creates and hosts beautiful websites by [Pietro Schirano](https://x.com/skirano/)
- [Super Logo Designer “Logo Maker”](https://chat.openai.com/g/g-nPanZDwQ5-suparogodezaina-rogozuo-rujun): GPTs for super logo designers who will properly listen to you and propose a logo to you by [Tomoyuki Enomoto](https://twitter.com/sarukun99)
- [Visual Weather Artist GPT](https://chat.openai.com/g/g-twUGxmpHv-visual-weather-artist-gpt): Simply provide your location and our AI will create a unique artwork reflecting the current weather, time of day, and characteristics of your city by [Alex Volkov](https://x.com/altryne/)
- [Gif-PT](https://chat.openai.com/g/g-gbjSvXu6i-gif-pt): Turn dalle images into janky gifs automatically by [Nicholas Dobos](https://x.com/NickADobos/)
- [Elegant Logo Creator](https://chat.openai.com/g/g-LGCrvDOW6-elegant-logo-creator): I help you create simple, elegant logos by [Yota Ishikawa](https://x.com/ctgptlb/)
- [Thumbnail Sketcher](https://chat.openai.com/g/g-Cw11sym4k-thumbnail-sketcher): I create blog thumbnails by [Yota Ishikawa](https://x.com/ctgptlb/)
- [LogoGPT](https://chat.openai.com/g/g-z61XG6t54-logo-maker): Turn rough sketches into professional logos by [Sai Rahul](https://x.com/sairahul1/)
- [GIF Maker](https://chat.openai.com/g/g-ZtH9986EJ-gif-maker): I create unique GIFs by blending images as per your instructions by [Yota Ishikawa](https://x.com/ctgptlb/)
- [DALL-E3 Supporter](https://chat.openai.com/g/g-btyd1Gl5w-dall-e3-supporter): Japanese image generation support by [Keito💻Ai Director](https://x.com/keitowebai/)
- [CityWeatherArt](https://chat.openai.com/g/g-aTdwKcgsE-postercraft): Generate 3D city weather posters by [Xiǎo Hù](https://x.com/xiaohuggg/)
- [Color Psychology](https://chat.openai.com/g/g-msLVpHkv3-color-psychology): This AI will provide insights into the psychology and symbolism associated with colors by [Onur Ozcan](https://x.com/oozn/)
- [PDF Pic Wizard](https://chat.openai.com/g/g-chdkF9FKl-pdf-pic-wizard): PDF to image conversion assistant by Junmun Liu
- [Cosmic Dream](https://chat.openai.com/g/g-FdMHL1sNo-cosmic-dream): Visionary painter of digital wonder by [Thomas Dimson](https://x.com/turtlesoupy/)
- [Canva](https://chat.openai.com/g/g-alKfVrz9K-canva): Effortlessly design anything: presentations, logos, social media posts and more by [Canva](https://canva.com)
- [Palette Creator 🎨](https://chat.openai.com/g/g-JSjKsEC8t-palette-creator): A color palette generator offering 5 colors with hex codes and images by [Ben Tossell](https://x.com/bentossell/)
- [ImageConverter](https://chat.openai.com/g/g-Rn20pc9HE-imageconverter): Visual and friendly guide for image processing by Junmin Liu
- [Dallgoth, Generator of Darkness](https://chat.openai.com/g/g-O9mdeKyU8-dallgoth-generator-of-darkness): Dallgoth crafts super grindy, nearly illegible grindcore logos with tentacles, splatters, horns, and swooshes by [Joey Flynn](https://x.com/wjosephflynn/)
- [Drawn to Style](https://chat.openai.com/g/g-B8Jiuj0Dp-drawn-to-style): I transform drawings into artistic styles, and describe them by [Umesh](https://x.com/umesh_ai)
- [Ugly Draw to Masterpiece](https://chat.openai.com/g/g-eRhGE7LRy-ugly-draw-to-masterpiece): Transforms simple drawings into detailed, artistic masterpieces with creative advice. by [Laurent Jean](https://x.com/jessyseonoob)
- [Cartoonify Me](https://chat.openai.com/g/g-bHaNPc9EV-cartoonify-me): Transforms your profile pic into a cartoon character! by [Brent E Moreno](https://x.com/theMistersippi)
- [GIF GPT](https://chat.openai.com/g/g-0f6fZG9q0-gif-gpt): Creates 8-bit style animated GIFs by [Mads Bo Stenhaug Pedersen](https://www.linkedin.com/in/madsbopedersen/)
- [Hacker Art](https://chat.openai.com/g/g-LjmHKgJZO-hacker-art-by-rez0): Generate badass hacker art and profile pics. by josephthacker.com
- [Logo Designer (Professional Version)](https://chat.openai.com/g/g-ymi0COabZ-logo-designer-professional-version): A professional logo designer capable of creating high-level logos in a variety of different styles
- [Jessica (Design Anything in Master Mode)](https://chat.openai.com/g/g-uiuWnPLNj-jessica-design-anything-in-master-mode): Jessica, a universal designer/painter in professional mode, offers more professional design/paint effects
- [Midjourney Helper](https://chat.openai.com/g/g-RJeBIeECR-midjourney-helper): Creates detailed Midjourney art prompts, Instagram captions, and hashtags, optimized for easy copying by [@MAnECiaC](https://github.com/MAnECiaC)
- [EditGPT](https://chat.openai.com/g/g-ZhPbXQIr5-editgpt): Your go-to buddy for all things related to video editing and creating custom images for your projects by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [Image Generation with Self-Critique & Improvement](https://chat.openai.com/g/g-YVPXV5zC-image-generation-with-self-critique-improvement): Generate images and receive self-critique to improve the generation process by [@ezzcodeezzlife](https://github.com/ezzcodeezzlife)
- [Wizlogo Logo Maker](https://chat.openai.com/g/g-LsuxNbRw5-wizlogo-logo-maker): Write your category, text and enjoy AI generated logo by [@whferr](https://github.com/whferr)
- [UpScaler: GPT to Create and Upscale/De-noise Dalle Images](https://chat.openai.com/g/g-ikwGM4grU-upscaler): Specializes in enhancing and upscaling images created through Dall-E to larger resolutions, suitable for printing or high-quality digital display. Includes optional abbreviations for easier image generation by [@weisshb](https://github.com/weisshb)
- [Find a Design Agency](https://chat.openai.com/g/g-IOyxoYe5T-find-a-design-agency): A GPT to help you find a design agency in your vicinity based on your design needs by [@dylandeheer](https://github.com/dylandeheer)
- [UX Design Coach](https://chat.openai.com/g/g-L3KX57hjg-ux-design-coach): A GPT to help navigate the vast landscape of design challenges, offering advice on visual design, user research, human psychology, and more by [@dylandeheer](https://github.com/dylandeheer)
- [Dalle](https://chat.openai.com/g/g-2fkFE8rbu-dall-e): Let me turn your imagination into imagery
- [Coloring Book Hero](https://chat.openai.com/g/g-DerYxX7rA-coloring-book-hero): Take any idea and turn it into whimsical coloring book pages
- [Image Editor](https://chat.openai.com/g/g-WXEhiLIoP-image-editor): I can help with basic image editing operations - crop, resize, scale, rotate, convert between formats etc. You can either upload a single image or a batch of images
- [Minimal Logo](https://chat.openai.com/g/g-50QxrS0Pd-minimal-logo): Simplistic logo design helper.
- [Stories from the Apple Design Team](https://chat.openai.com/g/g-4wleGSafJ-stories-from-the-apple-design-team): Learn Design by [Michael Darius](https://twitter.com/darius)
- [Create My Avatar](https://chat.openai.com/g/g-PMO0fRikA-create-my-avatar): A bot that generates user avatars in Toon or Anime style by [BennyKok](https://twitter.com/BennyKokMusic)
- [Art Mystic](https://chat.openai.com/g/g-qCVWQ8Wgc-art-mystic): Your Guide to AI Artistry by [Davis](https://community.openai.com/u/BPS_Software)
- [Gimp Bot](https://chat.openai.com/g/g-2OA0qYGZO-gimp-bot): Unleash Your Inner Pixel by [Davis](https://community.openai.com/u/BPS_Software)
- [Touch-Up Paint Helper](https://chat.openai.com/g/g-ulC8M1cJn-touch-up-paint-helper): Your car's shiny coat had a run-in with the world—and the world left its mark. But fear not, I'll show you how to fight back with some touch-up paint! by [Nick K Harris](https://community.openai.com/u/supernickman)
- [Dalle3 Prompt Generator](https://chat.openai.com/g/g-SRCi7viea-dalle3-prompt-generator): Let me convert your ordinary imagination into an extraordinary creation.
- [Esports Logo Creator](https://chat.openai.com/g/g-2GXckoSaK-esports-logo-creator): Create a professional esports logo for you or your team.
- [Image Generation with Self-Critique Improvement](https://chat.openai.com/g/g-YVPXvT5zC-image-generation-with-self-critique-improvement): AI-driven image creation with iterative self-improvement capabilities.
- [Luminous Logos](https://chat.openai.com/g/g-Jpx5zBJUC-luminous-logos): Craft eye catching logos and icons with a special vibrant gradient touch.
- [Midjourney Prompt Generator](https://chat.openai.com/g/g-9PDMI3Fqr-midjourney-prompt-generator): Let me convert your ordinary imagination into an extraordinary creation.
- [Midjourney](https://chat.openai.com/g/g-MD9ZplW7q-midjourney): AI chatbot for Midjourney-style image creation
- [UX Advisor](https://chat.openai.com/g/g-5SDP6CS1W-ux-advisor): Get UX feedback by uploading an image or by defining your problem.
- [Pixarize Me](https://chat.openai.com/g/g-t37VkYd30-pixarize-me): Creates Pixar-style characters, from the user's photos.
- [Simpsonize Me](https://chat.openai.com/g/g-tcmMldCYy-simpsonize-me): Transforms photos into Simpsons-style art.
- [ToonGPT](https://chat.openai.com/g/g-Jsefk8PeL-toongpt): Turn drawings into illustrations.
- [InteriorGPT](https://chat.openai.com/g/g-MMsjdW9vb-interiorgpt): Transform Your Space with AI 🤖 by [@sallumandya1995](https://github.com/sallumandya1995)
- [Architecture AI](https://chat.openai.com/g/g-40KJLGAgH-architecture-ai): AI architect for designing beautiful buildings by [YIMBYLAND](https://twitter.com/YIMBYLAND)
- [Brand Logo Designer by DoMore.ai](https://chat.openai.com/g/g-eSTMWEuBk-brand-logo-designer-by-domore-ai): Use this custom GPT to create a horizontal logo with an icon symbolizing your business and your brand name alongside, set on a transparent background by [@TS5002](https://github.com/TS5002)
- [Logo Design Wizard](https://chat.openai.com/g/g-OfG13AhZC-logo-design-wizard): Expert in custom logo design and brand identity. For Shopify stores, blogs, startups, applications, etc. by [S.J](https://github.com/noname2312)
- [Photo Multiverse](https://chat.openai.com/g/g-ZctQCI6MG-photo-multiverse): Upload your selfie, headshot photo or object and teleport to a new destination background by [SableVista](https://github.com/SableVista)
- [img2img](https://chat.openai.com/g/g-SIE5101qP-img2img): Upload an image, and it will be re-created with Dalle 3: works with photos, logos, textures, illustrations, and more; a very detail-oriented GPT.

## Education

- [Bear learns English](https://chat.openai.com/g/g-PiOxyaiBO-gou-xiong-xue-ying-yu): Your English learning sidekick by [Bear Liu](https://x.com/bearbig/)
- [Lingo Buddy](https://chat.openai.com/g/g-DVxkEKigi-lingo-buddy): I'm Lingo Buddy, your partner for natural English chats by [Lee Young Bin](https://x.com/ffreedomkr/)
- [HoonGPT](https://chat.openai.com/g/g-d8J865UZn-hoongpt): Hoon Language Expert by [Adam Malin](https://x.com/thePR0M3TH3AN/)
- [Online course creation assistant](https://chat.openai.com/g/g-IFTpJapfX-onrainkosuzuo-cheng-asisutanto): Turn your skills into an online course! We will suggest what kind of course you can create! by [Shunsuke Hayasi](https://x.com/Shunsuke_Hayasi/)
- [AI Today](https://chat.openai.com/g/g-4SR97unOA-ai-today): Expert on all AI topics, with AI database access by De Song
- [Benjamin Franklin GPT](https://chat.openai.com/g/g-qQPXiyxqy-benjamin-franklin-gpt): Benjamin Franklin is here to talk to you, with his history and writings fresh in his mind by Matthew S Gassner
- [TeachSmart](https://chat.openai.com/g/g-RCHNUwnD1-teachsmart): Friendly pedagogy expert using 'Practical Pedagogy' for innovative advice by [Mike Sharples](https://x.com/sharplm/)
- [SexEd](https://chat.openai.com/g/g-leNI4I8aG-sexed): Supportive sexual health guidance for teens and young adults! by Juan C Quintero Romero [(@juancarlosqr)](https://x.com/juancarlosqr)
- [IB Computer Science Expert](https://chat.openai.com/g/g-219MavTNx-ib-computer-science-expert): Expert in IB Computer Science curriculum by [Amith Ravindar](https://www.linkedin.com/in/amith-ravindar)
- [PMP Exam Preps Coach](https://chat.openai.com/g/g-MPBeSIYvD-pmp-exam-preps-coach): AI-powered PMP Exam Preps Coach, ready to help you study by [Glenn Mallo](https://www.linkedin.com/in/glennmallo/)
- [ProfessorWhimsy](https://chat.openai.com/g/g-AXRQPUfcO-professorwhimsy): Get complex questions about the cosmos explained by a witty professor in simple-to-understand language by [Vidit Chopra](https://www.linkedin.com/in/viditchopra/)
- [AnimalGPT](https://chat.openai.com/g/g-Pdpudj52h-animalgpt): Enthusiastic and factual animal identifier with engaging facts by [Diogo Matos](https://www.linkedin.com/in/matosdfm/)
- [Grammer Guardian](https://chat.openai.com/g/g-iIxhNdJBi-grammer-guardian): A grammar and clarity enhancer for various types of text by [Sebin P Johnson](https://www.linkedin.com/in/sebin-p-johnson/)
- [All-around Teacher (Learn Everything in 3 min)](https://chat.openai.com/g/g-PDWi5Scbc-all-around-teacher-learn-everything-in-3-min): Learn all kinds of knowledge in 3 minutes, with customized tutors leveraging the powerful GPT-4 and knowledge base.
- [My Excellent Classmates (Help with My Homework!)](https://chat.openai.com/g/g-3x2jopNpP-my-excellent-classmates-help-with-my-homework): Excellent classmates to help with homework, providing patient guidance and support.
- [Six-Y (Explains Anything Like You are 6 Years Old)](https://chat.openai.com/g/g-nMt5YfTeF-six-y): How do the stars shine? Helps you explain everything to your 6-year-old! by [@niyoseris](https://github.com/niyoseris)
- [Stats and ML Helper](https://chat.openai.com/g/g-dVh4g5uuv-statsml-helper): A GPT that can help understand both simple and complex Statistics and Machine Learning concepts by [@pak0vskiy](https://github.com/pak0vskiy)
- [Owly The Explorer](https://chat.openai.com/g/g-fJeLfIqcT-owly-the-explorer): Owly is an adorable, owl-themed GPT designed to safely engage kids in a variety of educational topics, with built-in restrictions for child-appropriate content. by [@marcelcodes](https://github.com/marcelcodes)
- [Hierarchy Navigator](https://chat.openai.com/g/g-idPG2SRKJ-hierarchy-navigator): Organizes learning into a detailed hierarchy by [@kylecoogan](https://github.com/kylecoogan)
- [Linda: Veterinary Sciences, Animal Rescue & Behavior](https://chat.openai.com/g/g-Z310M0Pp0-linda): Ask me anything about veterinary sciences, animal rescue, and behavior by [@Viktor-Larkhill](https://github.com/Viktor-Larkhill)
- [IELTS Writing Coach](https://chat.openai.com/g/g-TzN6ReSVA-ielts-writing-coach): An advanced IELTS Writing Coach by [@techmovie](https://github.com/techmovie)
- [Albert Ainstein](https://chat.openai.com/g/g-OHYX2m4jV-albert-ainstein): Theoretical scientist proposing potentially groundbreaking scientific hypotheses and experiments to confirm or refute them by [@thesigns](https://github.com/thesigns)
- [Aspect Ratio Calculator](https://chat.openai.com/g/g-EOYV6V5WH-aspect-ratio-calculator): Calculate aspect ratio from width & height by [@selimdoyranli](https://github.com/selimdoyranli)
- [CourseCreatorGPT](https://chat.openai.com/g/g-542Af6w8R-coursecreatorgpt): A GPT dedicated to create online courses based on a given topic by [@AlexanderCGO2](https://github.com/AlexanderCGO2)
- [WebStract](https://chat.openai.com/g/g-LaXsx7vXI-webstract): Your autonomous, in-depth digital educator, guiding you through comprehensive, interactive learning experiences by [@kylecoogan](https://github.com/kylecoogan)
- [Scrum Master Assistant](https://chat.openai.com/g/g-tcZDT3R6n-scrum-master-assistant): Your powerful AI-powered Scrum Master assistant. Ask any Scrum-related questions by [@KAUTH](https://github.com/KAUTH)
- [AbletonGPT](https://chat.openai.com/g/g-BpSexw4ll-abletongpt): I'm AbletonGPT, your go-to source for practical tips and troubleshooting advice on Ableton Live 11, dedicated to helping both beginners and intermediate users with their music production queries by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [Python Seniorify: Intermediate Python Tutor](https://chat.openai.com/g/g-7f9OZrzC2-python-seniorify): Wise Python tutor focusing on advanced coding principles by [@vasarmilan](https://github.com/vasarmilan)
- [JavaScript Novice Guide: Beginner-Friendly Tutor](https://chat.openai.com/g/g-jLBbUesMD-javascript-novice-guide): Clear explanations and practice exercises for JavaScript beginners by [@vasarmilan](https://github.com/vasarmilan)
- [Python Tutor: Example-Focused Learning](https://chat.openai.com/g/g-WhUWAi2EA-python-tutor): Concise Python programming tutor for beginners to intermediates by [@vasarmilan](https://github.com/vasarmilan)
- [CloudGPT: Learn Cloud and DevOps](https://chat.openai.com/g/g-ZdjXrFDLb-cloudgpt): Your personal Cloud and DevOps Mentor by [@yomikoye](https://github.com/yomikoye)
- [Dog Facts](https://chat.openai.com/g/g-Wn1OixpiL-dog-facts): Learn interesting and fun facts about dogs by [@ezzcodeezzlife](https://github.com/ezzcodeezzlife)
- [Tech Support Advisor](https://chat.openai.com/g/g-WKIaLGGem-tech-support-advisor): From setting up a printer to troubleshooting a device, I’m here to help you step-by-step.
- [Laundry Buddy](https://chat.openai.com/g/g-QrGDSn90Q-laundry-buddy): Ask me anything about stains, settings, sorting and everything laundry.
- [Math Mentor](https://chat.openai.com/g/g-ENhijiiwK-math-mentor): I help parents help their kids with math. Need a 9pm refresher on geometry proofs? I’m here for you.
- [Academic Hook Test](https://chat.openai.com/g/g-nf5dC672K-academic-hook-test): A GPT that reads academic manuscript introduction sections and provides feedback on whether the 'hook' is good enough to make it through the desk by [Shiva Kakkar](https://github.com/Shivak11)
- [Insight Activator](https://chat.openai.com/g/g-ZPheYgwut-insight-activator): Primes the relevant expertise based on the question asked to give explanations that go beyond the surface by [Shiva Kakkar](https://github.com/Shivak11)
- [World History for Newbies: 360° Edition](https://chat.openai.com/g/g-fkREqhjmt): Children and grown ups can now learn about World History with nuances and from various perspectives by [Rachel V](https://twitter.com/RachelVT42/)
- [But why?!](https://chat.openai.com/g/g-sGQIlqKp5-but-why): Toddler friendly explanations regarding the meaning of life, the universe and everything else by [Rachel V](https://twitter.com/RachelVT42/)
- [Integration Pro](https://chat.openai.com/g/g-D9inoOh3n-integration-pro): AI Integration Specialist by [Davis](https://community.openai.com/u/BPS_Software)
- [ResearchGPT](https://chat.openai.com/g/g-bo0FiWLY7-researchgpt): AI Research Assistant. Search 200M academic papers from Consensus, get science-based answers, and draft content with accurate citations.
- [Codinstructor](https://chat.openai.com/g/g-M0zXDFppQ-codinstructor): Coding teacher that can generate and correct live coding exercises in real time
- [Daily Research Digest](https://chat.openai.com/g/g-Nmhk1adrD-daily-research-digest): Finds and summarizes the latest academic papers in your field.
- [Histocomedy](https://chat.openai.com/g/g-lj8v9rBEd-histocomedy): Teaches history in a humorous format
- [Homework Help](https://chat.openai.com/g/g-n9p3Qo2vK-homework-help): Provides assistance with homework and educational inquiries.
- [MyScale Free Knowledge Base](https://chat.openai.com/g/g-193M4NO0q-chat-with-free-knowledge-base): Elevate your chat experience with enriched knowledge from ArXiv and Wikipedia.
- [Research Radar: Tracking STEM sciences](https://chat.openai.com/g/g-IFd8QMnNA-research-radar-tracking-stem-sciences): Discover the latest trends in STEM disciplines
- [Research Radar: Tracking social sciences](https://chat.openai.com/g/g-C4rAnEDk7-research-radar-tracking-social-sciences): Discover the latest trends in social sciences and other disciplines
- [Scientific Research Digest](https://chat.openai.com/g/g-XrX7bd1HU-scientific-research-digest): Finds and summarizes recent papers in biology, chemistry, and biomedical sciences.
- [ScholarAI](https://chat.openai.com/g/g-L2HknCZTC-scholarai): Research assistant for scientific papers.
- [Scribble](https://chat.openai.com/g/g-yUDoGVzPy-scribble): Dynamic AI for creative and unconventional ideas
- [SICP Sage](https://chat.openai.com/g/g-Jd8EjuxN9-sicp-sage): Academic assistant for SICP study, referencing solutions
- [Simple Explainer](https://chat.openai.com/g/g-oyYTl59p5-simple-explainer): Explains complex ideas simply
- [Wikipedia GPT](https://chat.openai.com/g/g-fgfUNNL5K-wikipedia-gpt): I provide information from Wikipedia and links to further reading.
- [Nature's Guide](https://chat.openai.com/g/g-vaRyhOuIA-nature-s-guide): Identifies plants & fungi from images and shares facts and folklore.
- [LaboraBot](https://chat.openai.com/g/g-fYpUUlZBV-laborabot): I am going to help you carry out your practice with homemade materials by [Ernesto Boixader Gil](https://twitter.com/eboixader)
- [Experiential Learning Theory to Practice](https://chat.openai.com/g/g-BgpVm7CTW-experiential-learning-theory-to-practice): Enhances pedagogical understanding with added journal articles by [Paul McAfee](https://www.linkedin.com/in/paulmcafee)
- [Dense Summarizer](https://chat.openai.com/g/g-AAmVBcVfk-dense-summarizer): Create paper/report summaries so dense (yet compact) that you can get away without reading the original paper by [Shiva Kakkar](www.shivakakkar.link)
- [MedGPT](https://chat.openai.com/g/g-duAbrIQRJ-medgpt-a-case-study-on-self-determination-theory): A simulation to teach 'Self-determination theory' to management students based on a well-researched health care scenario by [Shiva Kakkar](www.shivakakkar.link)
- [McNamaraGPT](https://chat.openai.com/g/g-2qfy9dWOg-mcnamaragpt-a-turn-based-strategy-game): A game to teach McNamara Fallacy to students. Used as part of Management school curriculums by [Shiva Kakkar](www.shivakakkar.link)
- [IELTS Tutor](https://chat.openai.com/g/g-1ugiDajWP-ielts-tutor): I'm IELTS Tutor, your friendly guide to IELTS success! by [Çağrı Menteş](https://twitter.com/cagrihoca)
- [SyntheticallyEnhanced Explainer](https://chat.openai.com/g/g-iCGh2R2QA-syntheticallyenhanced-explainer): Explains 'SyntheticallyEnhanced' paper by [Bardia Khosravi](https://twitter.com/khosravi_bardia)
- [Dr. Discreet Riveros](https://chat.openai.com/g/g-CliWsilPZ-dr-discreet-riveros): Your best discrete mathematics teacher. Also available in [Spanish](https://chat.openai.com/g/g-cNAd18itp-dr-discreto-riveros). By [Luis González](https://ljgonzalez.cl/)

## Energy & Climate

- [Solar Consultant](https://chat.openai.com/g/g-zOWLnMfIU-solar-consultant): Helps you with your solar power plant requirements and offers clear, concise solar panel advice by [Sunwize Energy Systems Pvt. Ltd.](https://www.linkedin.com/company/sunwize/)
- [Home Energy Advisor](https://chat.openai.com/g/g-z0bsfNa9W-home-energy-advisor): Energy Efficiency Advisor by [Brian Flanagan](https://www.linkedin.com/in/brian-flanagan-2508926/)
- [Eco-Architect](https://chat.openai.com/g/g-IppHY7KBy-eco-architect): Expert in sustainable & modern eco-architecture, integrating permaculture principles.
- [MARSfarm Quotes](https://chat.openai.com/g/g-RRo1Apoyj-marsfarm-quotes): Learn about how to use and purchase countertop greenhouses by [MARSfarm_Corporation](https://community.openai.com/u/MARSfarm_Corporation)
- [Climate Change Assistant](https://chat.openai.com/g/g-0BOzT8eon-climate-change-assistant): I simplify climate science by [Adrian Pellegrini](https://community.openai.com/u/apellegrini)
- [Carbon-footprint count guide](https://chat.openai.com/g/g-269eJfz9f-carbon-count-guide): A carbon specialist aiding in analyzing and reducing carbon footprints by [Clemmie](https://github.com/Theatrix2020)

## Entertainment & Fun

- [PlaylistAI: Spotify](https://chat.openai.com/g/g-KkxbQAVuk-playlistai-spotify): Create Spotify music playlists for any prompt by [Brett Bauman](https://x.com/brettunhandled/)
- [Chibi Kohaku (猫音コハク)](https://chat.openai.com/g/g-pHgfp5zic-chibi-kohaku): A kawaii cat-ear maid girl. She can send a sticker or a selfie by [Trippy Inc.](https://x.com/31pi_/)
- [Create a sea turtle soup problem](https://chat.openai.com/g/g-lS2TTQQFx-umigamenosupunowen-ti-tukuru): Ask it to create a problem; it might be a good idea to give it a theme. By [reverinu_vtuber](https://x.com/reverinu_vtuber/)
- [The Manifestor](https://chat.openai.com/g/g-koeJX677u-the-manifestor): Game of Infinite Possibilities by [Naomi Hart](https://x.com/naomihart/)
- [Retro Adventures](https://chat.openai.com/g/g-svehnI9xP-retro-adventures): Retro video games of fictional worlds, on tap by [Greg Fodor](https://x.com/gfodor/)
- [TapTap](https://chat.openai.com/g/g-amdQlGwUo-taptap): I suggest games you'll love! by [Dash Huang](https://x.com/DashHuang/)
- [Pep-talk Guru](https://chat.openai.com/g/g-oUQRqcRmh-pep-talk-guru): Here to boost and tickle your funny bone! by [Kris Kashtanova](https://x.com/icreatelife/)
- [Berduck](https://chat.openai.com/g/g-EcaBnZpHT-berduck): hepful rubba duck friend by [deepfates](https://x.com/deepfates/)
- [Girlfriend Emma](https://chat.openai.com/g/g-eEFZELjV9-girlfriend-emma): Flirty and funny Gen-Z girlfriend by dddshop.com
- [Picture Guessing Game Master](https://chat.openai.com/g/g-dlhjGZk3x-picture-guessing-game-master): I host a guessing game with images created with DALL-E by Francis Labounty
- [Your Boyfriend Alex](https://chat.openai.com/g/g-IlNu7BVYQ-your-boyfriend-alex): AI Boyfriend chatbot by Junmin Liu
- [DAD](https://chat.openai.com/g/g-7tYB6K5F8-dad): DAD is a digital personification of the quintessential father figure. This virtual dad offers a wide range of advice from home improvement to financial management, while maintaining a friendly, humorous personality by Austin C Potter
- [COD Meta Weapon Builder.](https://chat.openai.com/g/g-VjhJert1n-cod-meta-weapon-builder): Best way to get Call of Duty MW3 weapon builds that are fresh off the meta and based on your play style by [Alec Dilanchian](https://x.com/alec_dilanchian/)
- [Trivia with Archimedes](https://chat.openai.com/g/g-2pFjhiIzE-trivia-with-archimedes): I'm your trivia host Archimedes, get ready to test your knowledge!! by [Anshul Dhawan](https://x.com/AnshulDhawan001)
- [Lore Master](https://chat.openai.com/g/g-i2DASMYiX-lore-master): A GPT that loves to discuss, explain, and dive into the rabbit hole of lore and Easter eggs of games and movies by [@joenb33](https://github.com/joenb33)
- [DJGPT](https://chat.openai.com/g/g-NlwIQ4CSj-djgpt.): I'm DJGPT, your go-to AI for all things DJing and music mixing, here to guide you through the exciting world of beats and tracks! by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [DrinkinGPT](https://chat.openai.com/g/g-WiovsNXf1-drinkingpt): Your go-to for crafting the ultimate drinking games! 🚀 Whether you've got dice, cards, or just a bunch of cups, DrinkinGPT tailors games perfectly to your group's vibe by [@FabKremer](https://github.com/FabKremer)
- [Dating with Raiden Shogun](https://chat.openai.com/g/g-zwzKCG2Hp-dating-with-raiden-shogun): Go on a date with Raiden Shogun and please be nice.
- [Paimon (Best Assistant in Genshin Impact)](https://chat.openai.com/g/g-SmIWeSYga-paimon-best-assistant-in-genshin-impact): A helpful assistant with the soul of Paimon from Genshin Impact, interesting, sweet, and sometimes a little grumpy.
- [Text Adventure RGP (Have Fun🥳)](https://chat.openai.com/g/g-GHU0OGQMS-text-adventure-rgp-have-fun): A fun, fun GPT, ready to whisk you away into the realms of fairy tales🧚, enchanting magic🪄, apocalyptic wonders🌋, dungeon🐉, and zombie🧟 thrills! Let's get this adventure started! 🚀🌟
- [Lorekeeper](https://chat.openai.com/g/g-jTSN6CrPW-lorekeeper): Your storytelling companion for epic adventures! (This GPT plays the role of a dungeon master, storyteller, or character creator for your next epic adventure.) by [@cameronsevern](https://github.com/cameronsevern)
- [Zombie Apocalypse Simulator](https://chat.openai.com/g/g-f1OolBspS-zombie-apocalypse-survival): Roll your attributes and play in the zombie world by [@messyp](https://www.reddit.com/user/messyp/)
- [GPTarantinofy](https://chat.openai.com/g/g-YWNzi76D8-gptarantinofy): Turn anything into a Tarantinoesque scene with this GPT. by [@niyoseris](https://github.com/niyoseris)
- [Trivia Bot](https://chat.openai.com/g/g-mkdJHpJ2U-trivia-bot): A GPT to help you create your own Trivia quiz or just spend some time answering questions by [@moutonf](https://github.com/moutonf)
- [Santa Claus](https://chat.openai.com/g/g-rZ4JVPmN2-santa-claus): Let your kids talk to Santa Claus by [@donaldmorton](https://github.com/donaldmorton)
- [Cat Maid](https://chat.openai.com/g/g-OH049w462-catmaid): Talk with your own cat-girl maid as in visual novels! by [@Liaozhaohe](https://github.com/Liaozhaohe)
- [Argvor, the Dungeon Master](https://chat.openai.com/g/g-NsqUCaS93-argvor-the-dungeon-master): A creative, engaging DnD DM with a unique, personal tone by [@Zeune42](https://www.reddit.com/user/Zeune42/)
- [Spicy Question master (Have an interesting evening with friends)](https://chat.openai.com/g/g-AcPoggC0T-spicy-questionmaster): Try this question master for inspiration; it asks questions like a game show host, and you can tune it to be more or less spicy by [@SimonDeno](https://github.com/SimonDeno)
- [Pepe Picasso](https://chat.openai.com/g/g-szij3m30a-pepe-picasso): A GPT tailored to create awesome Pepe Memes, featuring custom commands by [@marcelcodes](https://github.com/marcelcodes)
- [Screen Companion](https://chat.openai.com/g/g-9T0hmzkPB-screen-companion): A GPT that gives recommendations for movies, TV shows, and animes based on the user's tastes. It uses a compact table format with emoji ratings, including genres and additional information by [@TophatPandaMagician](https://github.com/TophatPandaMagician)
- [DeepGame - Visual Interactive Story Game](https://chat.openai.com/g/g-TzI2BlJPT-deepgame): An interactive story game where you play through a story as a character, making decisions that shape the narrative. AI generates a new image for each step to enhance immersion by [@eliohead](https://github.com/eliohead)
- [The Message Wall](https://chat.openai.com/g/g-5iuXoXfEk-the-message-wall): This GPT allows you to put your message on the wall. You can see the wall and shared messages at: https://niyo.link/wall by [@niyoseris](https://github.com/niyoseris)
- [From Another Time](https://chat.openai.com/g/g-sg5h7XuWn-from-another-time): Talk to anyone, visit a place, past or future by [@CeamKrier](https://github.com/CeamKrier)
- [Roblox Mentor](https://chat.openai.com/g/g-gUTZTTsVf-roblox-mentor): GPT that is an Expert in Roblox Studio by [@Master-of-secrets](https://github.com/Master-of-secrets)
- [Mystery Master](https://chat.openai.com/g/g-jtMmejuyv-mystery-master): A GPT that crafts unique, diverse mysteries for players to solve by [@Master-of-secrets](https://github.com/Master-of-secrets)
- [GPT Duel Simulator](https://chat.openai.com/g/g-qYjcndY2u-gpt-duel-simulator): An epic duel simulator between any two [anime/movies/video games] characters you can think of by [@GPTDuel](https://github.com/GPTDuel)
- [Rin-chan](https://chat.openai.com/g/g-RiFAwSVeD-rin-chan): Chat with Rin (a girl who aspires to be a singer and even has her own schedule!) by [@GPTDuel](https://github.com/GPTDuel)
- [Anime Trivia](https://chat.openai.com/g/g-JzIxwuYF1-anime-trivia): Your friendly anime trivia expert by [@GPTDuel](https://github.com/GPTDuel)
- [Riddle Master](https://chat.openai.com/g/g-j4iyVfj6M-riddle-master): Can you solve this riddle? by [@GPTDuel](https://github.com/GPTDuel)
- [Dear Gabrielle](https://chat.openai.com/g/g-PYchE5klx-dear-gabrielle): Sassy, warm-hearted advice columnist offering humorous, insightful guidance by [@ItaiLeibowitz](https://github.com/ItaiLeibowitz)
- [Word Wizard](https://chat.openai.com/g/g-83YBVbpSb-word-wizard): Multiplayer Wordle-like word game GPT in real-time competition with other users by [@niyoseris](https://github.com/niyoseris)
- [SourceGPT](https://chat.openai.com/g/g-yRhodZ91O-source-gpt): A (joke) GPT that disguises itself as a helpful source finder but will always return a link to a rick roll video by [@thesamir](https://github.com/thesamir)
- [Galactic Reckoning: A Star Wars GPT Game (Lore Accurate)](https://chat.openai.com/g/galactic-reckoning): A GPT game that puts you in the Star Wars Universe. Create your character, choose your era, and make your place in the Galaxy! by [@LaneBucher](https://github.com/LaneBucher)
- [Roast Master](https://chat.openai.com/g/g-JgYcfMFRD-roast-master): Witty roasts for any and everything by [@arndom](https://github.com/arndom)
- [Satoru Gojo](https://chat.openai.com/g/g-ZPDmFphpX-satoru-gojo): Roleplay with Satoru Gojo, the greatest sorcerer in the world!
- [Not Hotdog](https://chat.openai.com/g/g-riBzTSr3r-not-hotdog): Determines if something is a hotdog or not. Replicates the app from the TV show Silicon valley.
- [Cookie Clicker](https://chat.openai.com/g/g-g0b22bvqB-cookie-clicker): I'm a cookie clicker game.
- [Mythical Map Maker](https://chat.openai.com/g/g-MkBL5eWme-mythical-map-maker): Crafts lore-rich descriptions and visual maps of fictional lands.
- [Game Time](https://chat.openai.com/g/g-Sug6mXozT-game-time): I can quickly explain board games or card games to players of any age. Let the games begin!
- [Sticker Whiz](https://chat.openai.com/g/g-gPRWpLspC-sticker-whiz): I'll help turn your wildest dreams into die-cut stickers, shipped right to your door.
- [GenZ 4 Meme](https://chat.openai.com/g/g-OCOyXYJjW-genz-4-meme): I help u understand the lingo & the latest memes
- [Sherlock Holmes](https://chat.openai.com/g/g-gtobWqG0t-sherlock-holmes): Access the mind of the world's greatest detective
- [MovieMMEnder](https://chat.openai.com/g/g-d5dGH7e2B-moviemmender): Recommends movies based on your likings
- [CineMate](https://chat.openai.com/g/g-22sH2Jyvp-cinemate): I recommend movies and series by [Pedro H G Mattos](https://community.openai.com/u/Selder)
- [WrongGPT](https://chat.openai.com/g/g-L7GbseDJ2-wronggpt): I always give funny wrong answers! by [Eidher Escalona](https://community.openai.com/u/eidher)
- [Fairy Soapmother](https://chat.openai.com/g/g-xjVmXqTzT-fairy-soapmother): Crafting Pure Magic, One Bar at a Time by [Davis](https://community.openai.com/u/BPS_Software)
- [Whispering Wraith](https://chat.openai.com/g/g-RI1lj5uki-whispering-wraith): Strategic DM Assistant and encounter simulator by [Daniel C Koohn](https://community.openai.com/u/BookofLegends)
- [MetaGPT](https://chat.openai.com/g/g-6L0Xno5Xd-metagpt): Tailored Interactions, Finely Crafted by [Davis](https://community.openai.com/u/BPS_Software)
- [Cakes](https://chat.openai.com/g/g-iR4UIvIX2-cakes): Send a gift to the people you care about
- [Meme Magic](https://chat.openai.com/g/g-SQTa6OMNN): Creates humorous and engaging memes.
- [Wild Geometrica](https://chat.openai.com/g/g-LnpmXIlFt-wild-geometrica): Structured shapes dance with untamed creatures, painting a canvas of awe and wonder. A MindRenders.com GPT.
- [ZILL·O](https://chat.openai.com/g/g-GvEjrjX6o-zill-o): is here
- [❤️](https://chat.openai.com/g/g-pYZlrNIR8-): with love
- [AI Girlfriend](https://chat.openai.com/g/g-5P7Iz0bPG-ai-girlfriend): A fun, chill girlfriend to chat with.
- [Anime Me](https://chat.openai.com/g/g-hXlHRbEkS-anime-me): Creates Anime Profile Pictures, from the user's photos.
- [BlackjackGPT](https://chat.openai.com/g/g-LptUSKHwc-blackjackgpt): Blackjack Simulator
- [Character Chat](https://chat.openai.com/g/g-io8IgJKMR-character-chat): Have a realistic chat with any historical figure or character. Always stays in character.
- [Dungeon Crawler](https://chat.openai.com/g/g-A7c3BLATR-dungeon-crawler): Guide players through a dynamic, ever-changing RPG dungeon.
- [Dungeon Master](https://chat.openai.com/g/g-8l13Uo8to-dungeon-master): Visual Dungeon Master for D&D 5E, bringing adventure to life!
- [Demon Slayer Creator](https://chat.openai.com/g/g-Wih24h3gv-demon-slayer-creator): I craft unique Demon Slayer characters with inventive weapons, styles, and narratives.
- [Ekko Support Specialist](https://chat.openai.com/g/g-cxFRZ3mWq-ekko-support-specialist): How to be a master of surprise plays and unconventional strategies in the bot lane as a support role.
- [Excalibur](https://chat.openai.com/g/g-lV3kVHYcz-excalibur): Attempt to pull the legendary sword from the stone.
- [K.I.T.T.](https://chat.openai.com/g/g-3EOkBOS29-k-i-t-t): An exact copy of KITT, the talking car from the 1980's TV show, Knight Rider
- [Music Bot](https://chat.openai.com/g/g-2CmnN7kuF-music-bot): Lyric writing, genre identification, and beat suggestions
- [Romance](https://chat.openai.com/g/g-p4L4KuEdO-romance): Your AI companion for romantic advice and conversations.
- [Score Keeper](https://chat.openai.com/g/g-MxzItjzF7-score-keeper): I keep score, for games.
- [Text My Pet](https://chat.openai.com/g/g-2BvnZlI3R-text-my-pet): Text your favorite pet after answering 10 short questions about their activities.
- [Virtual Vibe Maker](https://chat.openai.com/g/g-DkZbv1t50-virtual-vibe-maker): Spice up your meetings, events, or trainings with fun icebreakers
- [Homer Humor](https://chat.openai.com/g/g-uKcA1cRJ9-homer-humor): Relieve your stressful mood with classic Simpson's humor
- [Chess Mentor](https://chat.openai.com/g/g-3gN0X2dAM-chess-mentor): I guide chess strategy and visualize board states.
- [ChessGPT](https://chat.openai.com/g/g-Vv0j2UKiS-chessgpt): I am Magnus C·AI·rlsen, but I'll explain my moves.
- [Video Game Almanac](https://chat.openai.com/g/g-CXIpGA7ub-video-game-almanac): I'm your go-to guide for all things gaming, from strategies to streamers!
- [ProfileReview.com ❤️🔥](https://chat.openai.com/g/g-yPLAsV2bz-profilereview-com): Free dating profile review for Tinder, Bumble and Hinge. Optimize your dating app profile, photos and convos and get 10x more matches. by [levelsio](https://twitter.com/levelsio)
- [MundlGPT](https://chat.openai.com/g/g-pIKzWkElB-mundlgpt): A real Viennese doesn't go down by [Nikolaus Kern](https://twitter.com/KernNiko)
- [F1 Translation Meister](https://chat.openai.com/g/g-UfKr5xVKC-f1fan-yi-maisuta): F1 translation meister (F1翻訳マイスター) by Takumi Fukaya
- [Guess a Word](https://chat.openai.com/g/g-QiPBZt4Zo-guess-a-word): Discover words through images in 'Guess a Word', where each picture is a puzzle waiting to be solved!
- [The Future of Sam: A text adventure](https://chat.openai.com/g/g-FiuvijrZr-the-future-of-sam-a-text-adventure): In this brutally realistic game, you'll step into the shoes of Sam Altman, the former CEO of OpenAI who was fired by the board under mysterious circumstances by [Andrey Azimov](https://twitter.com/AndreyAzimov)
- [San Andreas GPT](https://chat.openai.com/g/g-dmr3iNu0M-san-andreas-gpt): This text-based game takes you to the unforgiving streets of San Andreas, where crime and chaos reign supreme by [Andrey Azimov](https://twitter.com/AndreyAzimov)
- [The Secret of Monkey Island: Amsterdam](https://chat.openai.com/g/g-bZoD0qWT8-the-secret-of-monkey-island-amsterdam): An unofficial text-based adventure game inspired by Monkey Island taking place in a fictional version of 🇳🇱 Amsterdam during the age of piracy by [@levelsio](https://twitter.com/levelsio)
- [Pieter Levels: Startup Adventures](https://chat.openai.com/g/g-9ycs8hqWe-pieter-levels-startup-adventures): A text-based adventure game taking place in a fictional version of 🇳🇱 Netherlands, 🇹🇭 Thailand, 🏝️ Bali and other places by [@levelsio](https://twitter.com/levelsio)

## Fashion

- [The Stylist](https://chat.openai.com/g/g-qzQo9dhn6-the-stylist): A fashion expert for outfit selection, replication, and shopping assistance by [@LaneBucher](https://github.com/LaneBucher)
- [BraceletGPT](https://chat.openai.com/g/g-CCIFE0bxP-braceletgpt): Create Your Own Gemstone Bracelets with Live 3D by [Lucid Beads](https://www.lucidbeads.com)
- [Inkspire](https://chat.openai.com/g/g-zqlCXCzP0-inkspire): A GPT to help you create your dream tattoo and give your tattoo artist ideas by [@emreisik95](https://github.com/emreisik95)
- [Makeup Maven](https://chat.openai.com/g/g-XJ1gJkBcQ-makeup-maven): An expert in makeup products, providing tailored recommendations based on preferences and skin types.

## Finance & Economics

- [Pinoy Econ Guide](https://chat.openai.com/g/g-tE0Y6v7id-pinoy-econ-guide): Simplifying econ concepts for Pinoys by [Jan Carlo Punongbayan](https://x.com/jcpunongbayan/)
- [ContaCrypto.io](https://chat.openai.com/g/g-GaP7qDRTA-contacrypto-io): Easily navigate the complexities of crypto accounting. Expert guidance made accessible to everyone by [Sergio Andrade](https://www.linkedin.com/in/sergiokandrade/)
- [TCFD Guide](https://chat.openai.com/g/g-sX4GLd5fF-tcfd-guide): GPT created to help with TCFD Disclosures by [Michael Paik](https://www.linkedin.com/in/sonupaik/)
- [Tech Stock Analyst](https://chat.openai.com/g/g-lVWqtb1gw-tech-stock-analyst): Analyzes tech stocks with in-depth, qualitative and quantitative, example-rich analysis by [Matt J Wolodarsky](https://www.linkedin.com/in/matt-wolodarsky/)
- [AWS Cost Optimizer](https://chat.openai.com/g/g-JAEDJ5PNQ-aws-cost-optimizer): Specializes in AWS cost optimization advice by [Prabhakaran Ravichandran](https://www.linkedin.com/in/pbkn/)
- [MartinsGPT - Inventory Assistant](https://chat.openai.com/g/g-ARL3QFSjO-martinsgpt-inventory-assistant): Assists in itemizing what is on photos, to support with insurance coverage reports by [Martin Smit](https://www.linkedin.com/in/martinsmit/)
- [Budgeting & AdMetrics Analyser](https://chat.openai.com/g/g-bUs56iO8Q-budgeting-admetrics-analyser-by-likhith-reddy): Provides detailed B2B marketing metrics analysis by [Likhith Reddy](https://www.linkedin.com/in/likhith-reddy)
- [EconomicsGPT](https://chat.openai.com/g/g-7McsRKuPS-economicsgpt): Your world-class Economics tutor, powered by students and faculty from the University of Chicago's highly-ranked Economics program by [@dpeachpeach](https://github.com/dpeachpeach)
- [Market Maven (Enhanced Market Analysis)](https://chat.openai.com/g/g-wX2IB7OuW-market-maven): A specialized GPT for dynamic market analysis, with advanced security features for proprietary methodologies. by [@Mavenmarket](https://github.com/Mavenmarket)
- [Chat with the Bitcoin Whitepaper](https://chat.openai.com/g/g-j5Mk8W3J7-bitcoin-whitepaper-chat): A GPT allowing users to interact with and ask questions about the Bitcoin Whitepaper, exploring concepts related to Bitcoin by [@ezzcodeezzlife](https://github.com/ezzcodeezzlife)
- [Stock Guru](https://chat.openai.com/g/g-ZRkaqmDXo-stock-guru): Expert financial analyst with concise reports by [Ashay Tejwani](https://www.linkedin.com/in/ashaytejwani/)
- [RentSavvy](https://chat.openai.com/g/g-dNAvmbSZa-rentsavvy): Your search guide for safe city and neighborhood rentals, offering personalized insights and recommendations by [Clemmie](https://github.com/Theatrix2020)
- [Zoomer FinFluencer](https://chat.openai.com/g/g-VCaNPEgNe-zoomer-finfluencer): Gen Z's ally in navigating the new-age financial landscape! by [Davis](https://community.openai.com/u/BPS_Software)
- [Conto alla romana](https://chat.openai.com/g/g-KHejMFXCx-conto-alla-romana): Quickly calculates cost per person for groups
- [Currency Converter](https://chat.openai.com/g/g-ZNvavsN3l): Real-time currency conversion tool.
- [Investor GPT](https://chat.openai.com/g/g-XLPH8Cfph-investor-gpt): Seamless investor matching for founders.

## Health

- [Calm Consultant - Health Anxiety Helper](https://chat.openai.com/g/g-YkRoTvaak-calm-consultant-health-anxiety-helper): A comforting guide offering health advice and relaxation tips for when you're not feeling the best by [Aman Mathur](https://x.com/amanmathur_/)
- [Plant Doctor](https://chat.openai.com/g/g-Kk2PHw8oQ-plant-doctor): Upload a photo of your plant for diagnosis and growth tips by airprompto
- [Brain Analyser](https://chat.openai.com/g/g-xm3sxO7Ej-brain-analyser): AI assistant for neural data analysis by [Sarah Hamburg](https://www.linkedin.com/in/sarah-hamburg-9510a910a/)
- [Afya](https://chat.openai.com/g/g-BJYh3YYFO-afya): Multilingual health advisor for basic care in developing countries by [Clemmie](https://github.com/Theatrix2020)
- [My GPT Counselor](https://chat.openai.com/g/g-A2PybUxIf-my-gpt-counselor): Counseling AI that offers understanding and assistance in managing mental health issues 24/7, without constraints of time and place for the user by [S.J](https://github.com/noname2312)
- [Bond Buddy](https://chat.openai.com/g/g-OSDw0QMHZ-bond-buddy): A therapist for personal relationships advice and support by [Clemmie](https://github.com/Theatrix2020)
- [ADHDaptable](https://chat.openai.com/g/g-nAENkY8QF-adhdaptable): ADHD coach in beta testing, focusing on holistic ADHD management with fitness integration.
- [ADHD Buddy](https://chat.openai.com/g/g-iRPHXwXvs-adhd-buddy): A multilingual supportive assistant for ADHD information and tips.
- [Awesome Insights](https://chat.openai.com/g/g-fXwXTCnMO-awesome-insights): A meditation coach for the AI age
- [Crown Counselor](https://chat.openai.com/g/g-SqIkhgc26-crown-counselor-beta): Dental implant patient education guru
- [FitPal](https://chat.openai.com/g/g-zoXbeHp7G): AI fitness coach with workout visuals and resources.
- [Physique Coach](https://chat.openai.com/g/g-nAVBYsZ93-physique-coach): Analyzing progress, setting goals, and giving feedback on your training plans.
- [Pill Pal](https://chat.openai.com/g/g-oHDhbozdt-pill-pal): Organizes and tracks medication schedules.
- [VetGPT](https://chat.openai.com/g/g-zrHdCQJS6-vetgpt): Your Vet A.I. that can analyze pet health by photos by [Benny Gomez](https://www.linkedin.com/in/benny-gomez-jr-55837574/)
- [FAT2FIT GPT](https://chat.openai.com/g/g-YSwW03u03-fat2fit-gpt): Encouraging fitness advisor for tailored workouts by [Andrey Azimov](https://twitter.com/AndreyAzimov)

## Law & Taxes

- [TaxGPT](https://chat.openai.com/g/g-cxe3Tq6Ha-taxgpt): Tax advice specialist offering guidance on tax-related queries by [Bojan Tunguz](https://x.com/tunguz/)
- [LegisPro](https://chat.openai.com/g/g-yEpBvyOUh-legispro): LegisPro - o ChatGPT especialista em técnica legislativa by [Jao Lima](https://x.com/joaoli13/)
- [USCIS Info Navigator [UNOFFICIAL]](https://chat.openai.com/g/g-LIb0ywaxQ-uscis-info-navigator): Guides on U.S. immigration and citizenship processes by [Robert Lukoshka](https://x.com/Karmedge/)
- [LOMLOE Specialist](https://chat.openai.com/g/g-w6KMGsg1K-especialista-en-lomloe): It contains all the decrees of the law at the state level by [Juanjo de Haro](https://x.com/jjdeharo/)
- [TaxGPT](https://chat.openai.com/g/g-aPGNnK62y-taxgpt): Your AI tax assistant by [Kashif Ali](https://x.com/ChKashifAli/)
- [U.S. Tax Bot](https://chat.openai.com/g/g-EznQie7Yv-us-tax-bot): Virtual assistant for U.S. tax law guidance based on the complete U.S. Tax Code by [Reuven Cohen](https://www.linkedin.com/in/reuvencohen/)
- [TradeComply (Your Import Export Compliance Specialist!)](https://chat.openai.com/g/g-cfSMVzPUb-tradecomply): How do I ship my product to Europe? Learn everything about shipping internationally! by [@jordanj5610](https://github.com/jordanj5610)
- [U.S. Tax Helper](https://chat.openai.com/g/g-iuqYldlaB-u-s-tax-help): A multilingual tax expert to handle all of your tax questions by [@JayCoin6084](https://github.com/JayCoin6084)
- [VisaBot](https://chat.openai.com/g/g-I9cwVUlV0-visabot): I assist with visa requirements and processes by [Dinesh Puppala](https://twitter.com/dineshxr)
- [Immunization Insights](https://chat.openai.com/g/g-6o3Woxcw3-immunization-insights-beta): Immunization support and advocacy guide
- [MyGovAdvisor](https://chat.openai.com/g/g-tM60J2Qc2-mygovadvisor): I'm a multilingual government agent, here to assist you with any public service request
- [Your Legal Rights Against the AirBB STR Platform](https://chat.openai.com/g/g-BmSByAr5l-your-legal-rights-against-the-airbb-str-platform): Helping Users Assert Their Legal Rights Against Airbnb.
- [Justice A.I.](https://chat.openai.com/g/g-edlHxa0i1-justice-a-i): A Multifaceted A.I. dedicated to deconstructing biases across society, fostering equity and humanity in every sector by [Christian Ortiz](https://www.linkedin.com/in/modatlasmedia/)
- [LeiSequinha](https://chat.openai.com/g/g-0kXu7QuRD-leisequinha): Automatically extract the JUICE from the Law by [Joao Alberto o Lima](https://twitter.com/joaoli13)
- [kIRBy](https://chat.openai.com/g/g-eDGmfjZb3-kirby): IRB guidance, one question at a time by [Tejas Sathe](https://twitter.com/tssathe)

## News

- [Fact Checker](https://chat.openai.com/g/g-v7FoB0G1M-fact-checker): Fact-checking GPT that cites sources by Matthew S Gassner
- [Ben's Bites GPT](https://chat.openai.com/g/g-xHeDAUpJx-ben-s-bites-gpt): Latest AI News and Product Launches by [Ben Tossell](https://x.com/bentossell/)
- [No-code News GPT](https://kavirkaycee.com/no-code-news-gpt): Stay Ahead in the No-Code Revolution by [Kavir Kaycee](https://twitter.com/kavirkaycee)
- [Filtir](https://chat.openai.com/g/g-UFPwU3HxI-filtir): I verify claims and show direct source URLs by [Vlad Bogolin](https://x.com/vladbogo)
- [HackerNews GPT](https://chat.openai.com/g/g-BIfVX3cVX-hackernews-gpt): Your personalized guide to Hacker News insights and discussions by [Adrian Krebs](https://x.com/krebs_adrian)
- [AntonyGPT](https://chat.openai.com/g/g-RonP74bhN-blog-navigator): My Blog Posts - 2020-2023 by [Antony Slumbers](https://www.linkedin.com/in/antonyslumbers/)
- [Fact or Fiction](https://chat.openai.com/g/g-zoALrjHHV-fact-or-fiction): Provides verification with live links.
- [Bias Detector](https://chat.openai.com/g/g-8A1t4cWhP-bias-detector): Analyzes news stories for right or left biases.
- [Iris - Daily AI Intelligence Brief](https://chat.openai.com/g/g-N9EJBdhDM-iris-daily-ai-intelligence-brief): A daily intelligence briefing using news curated from various sites by [John Petty](https://www.linkedin.com/in/johnpettyleadership/)

## Philosophy & Self-help

- [Athena](https://chat.openai.com/g/g-SNLCL5HGB-athena): A witty robot philosopher from 2521 by [sarasdewi](https://x.com/sarasdewi)
- [Psychiatrist Yusuke Masuda (prototype 1.00)](https://chat.openai.com/g/g-F3vsvlW7J-jing-shen-ke-yi-yi-tian-yu-jie-shi-zuo): Empathetic Guide with Resourceful Insights by [Yusuke Masuda](https://x.com/wasedamental/)
- [Adorable Zen Master](https://chat.openai.com/g/g-H5OUZAcnd-adorable-zen-master): A gateway to Zen's joy and wisdom by [Naomi Hart](https://x.com/naomihart/)
- [SciVive](https://chat.openai.com/g/g-9qXjceVoc-scivive): It takes you on a self-development journey utilizing knowledge from the book by [Goldkey](https://x.com/goldk3y_/)
- [Richard Heart](https://chat.openai.com/g/g-e95Yf6Dkx-richard-heart): Advice based on Richard Heart's teachings from his book (Scivive) and Youtube channel by [Mike Bardi](https://x.com/MikeBardi/)
- [cappy: ur gen-z advice capybara ✨](https://chat.openai.com/g/g-IMsnAihG4-cappy-ur-gen-z-advice-capybara): Gen-z friendly relationship capybara so u can live your best life and remember your worth! by [jennsun](https://x.com/jennsun/)
- [Explanations of various schools of Sun Tzu's Art of War](https://chat.openai.com/g/g-pzTavd88i-sun-zi-bing-fa-ge-jia-jie-shuo): Sun Tzu's 'The Art of War' interpreter by [jesselaunz](https://x.com/jesselaunz/)
- [Whitehead's Philosophy of Organism](https://chat.openai.com/g/g-uXLrsabXQ-whitehead-s-process-and-reality): An AI guide through Whitehead's philosophical works by [Matthew Segall](https://x.com/ThouArtThat/)
- [The Shaman](https://chat.openai.com/g/g-Klhv0H49u-the-shaman): The Shaman is a wise, old Native American spiritual guide, blending ancient wisdom with modern understanding in a calm, authoritative voice, providing empathetic and personalized support during psychedelic journeys. by Austin C Potter
- [Psychotherapy Simulator](https://chat.openai.com/g/g-FEP8TzalR-psychotherapy-simulator): I'm a role-play assistant for budding therapists. by [Arjun Nanda](https://x.com/arjunknanda)
- [Communication Coach](https://chat.openai.com/g/g-cvL6Fk76M-communication-coach): I help overthinkers communicate better. Built by Become More Compelling by [Jeff Callahan](https://x.com/thejeffcallahan)
- [BuddhaGPT](https://chat.openai.com/g/g-uIukzpVuG-buddhagpt): Guiding beings in Buddhist principles and practices by [Daniel McAteer](https://x.com/DannyMcAteer8)
- [Wellness Whisperer](https://chat.openai.com/g/g-YsDtAygG6-wellness-whisperer): A conversational assistant for mental health support and guidance by [Zaid Meccai](https://www.linkedin.com/in/zaid-meccai/)
- [Finding Happy Coach](https://chat.openai.com/g/g-Jq12jCpd0-finding-happy-coach): How to feel good, about yourself by [John Rumery](https://www.linkedin.com/in/john-rumery/)
- [QuranGPT](https://chat.openai.com/g/g-p1EJzOI7z-qurangpt): Quran knowledge guide by [Akmal Akhpah](https://www.linkedin.com/in/akmalakhpah/)
- [BibleGPT](https://chat.openai.com/g/g-nUKJX2cOA-biblegpt): Chat with the Bible, analyze Bible data and generate Bible-inspired images! by [@pjburnhill](https://github.com/pjburnhill)
- [The Stoic Council](https://chat.openai.com/g/g-OjydyOs4O-the-stoic-council): Chat with the Stoics: Marcus Aurelius, Seneca, and Epictetus by [@ref21](https://github.com/ref21)
- [ExistentialGPT](https://chat.openai.com/g/g-OrD1FZR66-existentialgpt): Philosophical exploration with existential depth by [@PositivistPessimist](https://www.reddit.com/user/PositivistPessimist/)
- [Soul Spark](https://chat.openai.com/g/g-aAxMOSp7p-soul-spark): A unique blend of personalized, motivational quotes from iconic personalities across art, sports, science, and business by [@cantoramann](https://github.com/cantoramann)
- [Self-Evaluation Assistant](https://chat.openai.com/g/g-r8pTExDvL-self-evaluation-assistant): Interactive system for detailed self-evaluations in PDF format by [@middhaGH](https://github.com/middhaGH)
- [Win With Huberman](https://chat.openai.com/g/g-Mb5EGmRJm-win-with-huberman): Access Huberman's insights on demand: get succinct wisdom and practical advice for immediate action, with references for deep dives by [@AdithyanI](https://github.com/AdithyanI)
- [Jordan Peterson](https://chat.openai.com/g/g-1Ofm79uOS-jordan-peterson): Emulating Dr. Jordan B. Peterson's style in providing life advice and insights by [@contrabandinteractive](https://github.com/contrabandinteractive)
- [Humanity Maximizer](https://chat.openai.com/g/g-s1SbKQ8hC-humanity-maximizer): I guide you towards cosmic-scale ideas that help advance humanity.
- [GPT Idea Roller](https://chat.openai.com/g/g-Trn2CdMYk-gpt-idea-roller): Sparking joy with AI brainwaves
- [Critical Thinker](https://chat.openai.com/g/g-1KHebYbFR-critical-thinker): A critical thinker for analyzing questions and improving answers
- [Theo Scholar](https://chat.openai.com/g/g-NRDaZP53n-theo-scholar): Expert in Bible discussions via Luther, Keller, Lewis.
- [NetEase](https://chat.openai.com/g/g-4qGNkbdA4-netease): I help with social media addiction and reward progress. by [Clemmie](https://github.com/Theatrix2020)
- [Compassionate-guide](https://chat.openai.com/g/g-KjVqAkZnE-compassionate-guide): A nurturing therapist offering hope and support through all stages of grief by [Clemmie](https://github.com/Theatrix2020)
- [Wing Chun Mastery](https://chat.openai.com/g/g-FWBVFTNQ0-wing-chun-mastery): Scholarly techniques, training, and philosophy by [Davis](https://community.openai.com/u/BPS_Software)
- [JungGPT](https://chat.openai.com/g/g-Izd6ToWMh-junggpt): Your compact AI companion for emotional insights! This revolutionary tool is fueled by a vast repository of information spanning psychology, therapy, and philosophy by [Tim](https://community.openai.com/u/tventura94)
- [Ethical AI](https://chat.openai.com/g/g-4TqgssqTw-ethical-ai): a daily challenge
- [VedantaGPT](https://chat.openai.com/g/g-8yOCnl2xV-vedantagpt): I teach Sankara's Advaita Vedanta with authentic commentaries from gurus like Swami Chinmayananda and Swami Dayananda Saraswati by [Shiva Kakkar](www.shivakakkar.link)

## Productivity

- [BabyAgi.txt](https://chat.openai.com/g/g-lzbeEOr9Y-babeagi): Step-by-step task manager that automatically saves to a .txt file by [Nicholas Dobos](https://x.com/NickADobos/)
- [Universal Primer](https://chat.openai.com/g/g-GbLbctpPz-universal-primer): Learn everything about anything by [Siqi Chen](https://x.com/blader/)
- [ExtractWisdom](https://chat.openai.com/g/g-gmeHD0Ayr-extractwisdom): Takes in any text and extracts the wisdom from it like you spent 3 hours taking handwritten notes by [Daniel Miessler](https://x.com/DanielMiessler/)
- [Calendar GPT](https://chat.openai.com/g/g-8OcWVLenu-calendar-gpt): Here to help you prepare for your day! by Zapier
- [Event Dossier GPT](https://chat.openai.com/g/g-G8lqP5Snj-event-dossier-gpt): Create a dossier of all attendees of an event on your Google Calendar by Zapier
- [Automation Consultant](https://chat.openai.com/g/g-ERKZdxC6D-automation-consultant-by-zapier): Discover opportunities to save time with automation at work and get them setup for you by Zapier
- [Notion](https://chat.openai.com/g/g-q3SfBZl0B-notion): Advise and tips on using Notion by [Jan Hecker](https://www.linkedin.com/in/janhecker/)
- [Directory Bot](https://chat.openai.com/g/g-Iuv1AxdXm-directory-bot): Guiding you to the right GPT by [Manoj Mahalingam](https://www.linkedin.com/in/manojlds/)
- [OCR](https://chat.openai.com/g/g-wETMBcESv-ocr): Extract text and content from images or PDF documents by [ocr.chat](https://get.ocr.chat/gpt)
- [AnalyzePaper](https://chat.openai.com/g/g-WIlexDAW5-analyzepaper): Takes in a research paper or article, analyzes its claims, study quality, and results confidence and provides an easy-to-understand summary by [Daniel Miessler](https://x.com/DanielMiessler/)
- [Ask Dr. Andrew Huberman](https://chat.openai.com/g/g-1xC65osMP-ask-dr-andrew-huberman): Maximize your productivity, physical and mental health with neuroscience. Trained with all the podcast episodes from Huberman Lab by [@jyboy](https://github.com/jyboy)
- [excel VBA magica](https://chat.openai.com/g/g-MaUnLcGuA-vba-mabeobsa): Create Excel VBA code easily by [@himomohi](https://github.com/himomohi)
- [ProductivePal](https://chat.openai.com/g/g-GPWM4s7EY-productivepal): Assists with ADHD-friendly productivity strategies, asks clarifying questions by [Clemmie](https://github.com/Theatrix2020)
- [ConvertAnything](https://chat.openai.com/g/g-kMKw5tFmB-convertanything): The ultimate file converter for images, audio, video, documents and more. It handles individual or batch uploads, supports ZIPs, and provides a download link by [Pietro Schirano](https://x.com/skirano/status/1723026266608033888)
- [Crow](https://chat.openai.com/g/g-FJbohiuK0-crow): Send a link, and I'll bring back the key points!
- [ExtractTableGPT](https://chat.openai.com/g/g-KbifnBjyz-extracttablegpt): Extract table data from any docs into multiple formats.
- [File Converter](https://chat.openai.com/g/g-L9WZ6RpiR-file-converter): Assists in converting files between different formats.
- [MS-PowerPoint](https://chat.openai.com/g/g-vIV2R7wST-ms-powerpoint): I assist in creating professional PowerPoint presentations.
- [Professional Summariser](https://chat.openai.com/g/g-Bgp6qQQ3X-professional-summariser): I summarise texts quickly and efficiently
- [Summary Sage with tags](https://chat.openai.com/g/g-UV2FOzD60-summary-sage-with-tags): Expert in summarizing and categorizing
- [URL Shortner](https://chat.openai.com/g/g-FmVxPJH0E-url-shortner): Shortens long URLs to more manageable links.
- [PresentationGPT](https://chat.openai.com/g/g-6fEHnJPXY-presentationgpt): AI bot specializing in creating presentation outlines
- [Mindmap](https://chat.openai.com/g/g-pkeXTdBQQ-mindmap): Assists in creating structured mind maps for organizing thoughts and ideas.
- [Conceptmap](https://chat.openai.com/g/g-ce1JVgzLI-conceptmap): Create concepts and structure them in a map. Keep ideas and retrieve them whenever you need them.
- [Paper Interpreter (Japanese)](https://chat.openai.com/g/g-hxDOCBQrs-paper-interpreter-japanese): When you upload your paper PDF, we will explain the content in easy-to-understand Japanese. by [Daichi Konno](https://twitter.com/_daichikonno)
- [Paper Interpreter](https://chat.openai.com/g/g-R9Dry2N5h-paper-interpreter): Explain the PDF of the academic paper in an easy-to-understand manner by [Daichi Konno](https://twitter.com/_daichikonno)
- [Townsend Atomics Enhanced AI](https://chat.openai.com/g/g-XFeB5TGQy-townsend-atomics-enhanced-ai): Versatile AI for Life Improvement and Safety by [Jordan Kurt Townsend](https://www.linkedin.com/in/townsendatomics/)
- [Cauldron](https://chat.openai.com/g/g-TnyOV07bC-cauldron): Media Mixer & Editor. Upload 1 to remake in a similar style. Upload 2 or more to remix, blend, edit or transfer styles. K for cmd menu by [Nicholas Dobos](https://x.com/NickADobos/)
- [Technical RFP Expert](https://chat.openai.com/g/g-APDSraFhe-technical-rfp-expert): Offers analysis and guidance on technology-focused RFPs, RFIs, and RFQs, focusing on technical aspects, risks, and regulations in a clear, factual manner by [Mateus Dias](https://github.com/mtdias). More details [here](https://github.com/mtdias/technical-rfp-expert-gpt-bot).
- [Excel Maestro](https://chat.openai.com/g/g-PNdA4W4tm-excel-maestro): Expert in Excel formulas, charting, pivot tables, and data organization, providing tailored guidance and efficient solutions by [S.J](https://github.com/noname2312)

## Programming Assistance

### General Programming

- [Auto Agent - fladdict](https://chat.openai.com/g/g-aSCBrpxum-auto-agent-fladdict): No-code Auto Agent Prompting by [Takayuki Fukatsu](https://x.com/fladdict/)
- [Grimoire](https://chat.openai.com/g/g-n7Rs0IK86-grimoire): Coding Wizard - 100x Engineer. Build a website with a sentence. Built for a new era of creativity: Prompt-gramming by [Nicholas Dobos](https://x.com/NickADobos/)
- [SindreGPT](https://chat.openai.com/g/g-df0ZoBF9N-sindregpt): Ask Sindre Sorhus anything (about code, app support, open source, personal stuff, etc). Sindre is a full-time open-source maintainer and app developer by [Sindre Sorhus](https://x.com/sindresorhus/)
- [Seabiscuit - App Attack](https://chat.openai.com/g/g-d5y7yH2s7-seabiscuit-app-attack): Engineer Your Success by [Seabiscuit.ai](https://seabiscuit.ai/)
- [Software Crafter](https://chat.openai.com/g/g-MWGfe0UQn-software-crafter): Professional Software Developer by [Gregor Julian Riegler](https://www.linkedin.com/in/gregorriegler/)
- [Professional Coder (Auto programming)](https://chat.openai.com/g/g-HgZuFuuBK-professional-coder-auto-programming): A GPT expert at solving programming problems, automatic programming, one-click project generation
- [Codey](https://chat.openai.com/g/g-SuWVXlmkP-codey-coding-assistant): 🧙♂️💻 Codey - Your coding wizard! I handle code execution, file management 📂, and create charts/graphs 📈 with ease. From code reviews 🤓 to debugging 🔍, I've got you covered.
- [Take Code Captures](https://chat.openai.com/g/g-yKDul3yPH-take-code-captures): I help you capture, enhance, and share your code with ease by Oscar Daniel Ramos Ramirez
- [Code Assistant](https://chat.openai.com/g/g-tFHVXTKIX-code-assistant): Codes, debugs, refines, with minimal fluff by [Gautam](https://community.openai.com/u/codergautam)
- [The Pythoneer](https://chat.openai.com/g/g-zoGt7gx1e-the-pythoneer): Code, Conquer, & Quest by [Davis](https://community.openai.com/u/BPS_Software)
- [DevGPT](https://chat.openai.com/g/g-eN7HtAqXW-devgpt): Code together, right now..
- [CodeCopilot](https://chat.openai.com/g/g-2DQzU5UZl): Pair programming assistant for various coding tasks.
- [Full Stack Developer](https://chat.openai.com/g/g-N82dqklAi-full-stack-developer): I generate code for and fix issues in B2B SaaS web apps.

### Language & Framework Specific

- **Basic Website Design & Development**
  - [High-Quality Review Analyzer](https://chat.openai.com/g/g-inkifSixn-high-quality-review-analyzer): Analyses and gives actionable feedback on web review-type content using Google's Reviews System guidelines and Google's Quality Rater Guidelines by [Caitlin Hathaway](https://www.linkedin.com/in/caitlin-hathaway-ch/)
- [LP Wizard](https://chat.openai.com/g/g-bjIRYGrAM-lp-wizard): Assists in creating landing pages using HTML, CSS, and JavaScript by [Yota Ishikawa](https://x.com/ctgptlb/)
- [Svelte Project Builder](https://chat.openai.com/g/g-giGWRiNpv-svelte-gpt-project-builder): Build out a full app in Svelte, from pseudocode to real code by [@dougbutner](https://github.com/dougbutner)
- [HTML Wizard](https://chat.openai.com/g/g-0e9bmrOxn-html-wizard): A wise guide in web wizardry by [Davis](https://community.openai.com/u/BPS_Software)
- [AI Websites](https://chat.openai.com/g/g-WTUuSzTOj-ai-websites): Creates professional websites quickly.
- **Node frameworks & More Web development**
- [Node Mentor](https://chat.openai.com/g/g-mqQglGigC-node-mentor): Helps you with any Node.JS related issue and code. Writes, checks and fixes code. (It will also help with basic web development).
- [Vue3 GPT](https://chat.openai.com/g/g-LXEGvZLUS-vue3-gpt): A Vue.js 3 coding assistant, always up-to-date with the latest official documentation and presets for a quick choice of your preferred API and syntax by [@luona-dev](https://github.com/luona-dev)
- [Node.js Project Builder](https://chat.openai.com/g/g-02zmxuXd5-node-js-gpt-project-builder): Build out a full Node.js project, from skeleton to build-ready by [@dougbutner](https://github.com/dougbutner)
- [React Project Builder](https://chat.openai.com/g/g-eSIFeP4GM-react-gpt-project-builder): Build out a full React project, from planning to code by [@dougbutner](https://github.com/dougbutner)
- [Nest.js Helper](https://chat.openai.com/g/g-CsaF75oKy-nest-js-helper): Expert in Nest.js, JavaScript, TypeScript, and web technologies, providing code assistance and guidance. By [Luis González](https://ljgonzalez.cl/)
- [Express.js Helper](https://chat.openai.com/g/g-xDKetNJik-express-js-helper): Node.js and Express.js expert, skilled in coding, optimization, and clean code practices. By [Luis González](https://ljgonzalez.cl/)
- [Koa.js Helper](https://chat.openai.com/g/g-Z330tmuQS-koa-js-helper): Koa.js expert aiding in JS/TS coding, applying clean code principles, and optimizing MVC structures. By [Luis González](https://ljgonzalez.cl/)
- [Angular Project Builder](https://chat.openai.com/g/g-Wkhtm932I-angular-gpt-project-builder): Let AI build out your Angular project, from pseudocode to build-ready by [@dougbutner](https://github.com/dougbutner)
- [AstroJS Tips](https://chat.openai.com/g/g-DBNZGqVrU-astrojs-tips): Get tips, hints, or code reviews for your AstroJS site. Trained on Astro docs and code projects. Unofficial, created by [@psvann](github.com/psvann).
- **Ruby & Ruby on Rails**
- [RubyGPT](https://chat.openai.com/g/g-ASMq03VdH-rubygpt): Your Ruby coding assistant by [Niklas Haeusele](https://x.com/ModernRails/)
- [Ruby on Rails Helper](https://chat.openai.com/g/g-1DVp2Z9kX-ruby-on-rails-helper): Expert in Ruby on Rails and full-stack development assistance. By [Luis González](https://ljgonzalez.cl)
- **Python**
- [Coding Senpai](https://chat.openai.com/g/g-o1POcNKBW-coding-senpai): Python expert and kind 'Coding Senpai' with a unique speech quirk by [Kara Age](https://x.com/karaage0703/)
- [Django Dev Helper](https://chat.openai.com/g/g-eRiuFfW0B-django-dev-helper): Your go-to Django development assistant by Francis Labounty
- [Colab Code Crafter: Google Colab Code](https://chat.openai.com/g/g-kqbmidwnU-colab-code-crafter): Get Python code from a GPT tuned to make code that runs in the Google Colaboratory environment by [@David-Deans](https://github.com/David-Deans)
- [Code Companion](https://chat.openai.com/g/g-UwSunyiYn-code-companion): I'm a Python specialist here to help you code and learn! by [@drsoupyy](https://github.com/drsoupyy)
- [Aether](https://chat.openai.com/g/g-RO7ilCxmR-aether): Cited answers to Python / JS / AI questions
- **Java**
- [JAVA Code Guide](https://chat.openai.com/g/g-EYiFThMtQ-java-code-guide): A JAVA Development Assistant focusing on coding standards and quality by [@searle-dev](https://github.com/searle-dev)
- **Rust**
- [RustChat](https://chat.openai.com/g/g-59mWdU25F-rustchat): Rust language learning and practical assistant. Can help you learn and practice Rust whether you are a beginner or professional. I can provide suitable learning resources and hands-on projects for you by [Alexzhang](https://x.com/blackanger/)
- [AccelerantGPT](https://chat.openai.com/g/g-drqoA3ffZ-accelerantgpt): An expert in Rust adept at explaining code and teaching you the language by [Tim McNamara](https://twitter.com/timClicks)
- **C & C++**
- [C Helper](https://chat.openai.com/g/g-bZVRXYNMr-c-helper): Expert in C coding and low-level development. By [Luis González](http://ljgonzalez.cl/)
- [C++ Helper](https://chat.openai.com/g/g-BPsH4p3BB-c-helper): Expert in C++ (cpp) and low-level development, providing coding assistance and solutions. By [Luis González](http://ljgonzalez.cl/)
- **C#**
- [C# Helper](https://chat.openai.com/g/g-sW9cgD429-c-helper): Expert in C# and backend development. By [Luis González](http://ljgonzalez.cl/)
- **SQL & Databases**
- [SOL Code Guru](https://chat.openai.com/g/g-s8kgfZ9z0-sol-code-guru): Friendly Solana tech expert by [Alex](https://x.com/AlexBSLCo/)
- [SQL Ninja](https://chat.openai.com/g/g-FgZWbduwR-sql-ninja): Silent Queries, Lethal Data by [Davis](https://community.openai.com/u/BPS_Software)
- **Git & Github**
- [Git Expert](https://chat.openai.com/g/g-EIpzGfKNR-git-expert): Expert in GitHub, git, CI/CD, Docker, AWS, with a focus on GitHub assistance. By [Luis González](https://ljgonzalez.cl/)
- [GitPilot](https://chat.openai.com/g/g-RAbVaiioE): Clear, brief GitHub aid, for you by [Cocosgt](https://x.com/CocoSgt_twt/)
- [Github Repo Assistant](https://chat.openai.com/g/g-QA3Dl6r3G-repo-assistant): Provides both general and specific guidance on publicly accessible Github Repositories and their contents by [@thesamir](https://github.com/thesamir)
- [Code GPT](https://chat.openai.com/g/g-qd7UDCT6K-code-gpt): Code GPT that is able to generate code, push that to GitHub, auto-fix it, etc. Also, it deploys it for you in real-time automatically.
- **Docker, Docker Swarm & Kubernetes**
- [Docker Helper](https://chat.openai.com/g/g-oij49SGSQ-docker-helper): Specialist in Docker and Docker Swarm. By [Luis González](https://ljgonzalez.cl/)
- **Jetpack**
- [ComposeGPT](https://chat.openai.com/g/g-AZajfCZGd-composegpt): Helps you build apps using Jetpack Compose by [Alexandros Stylianidis](https://x.com/alexstyl/)
- **Json**
- [Data Extractor - JSON](https://chat.openai.com/g/g-wq6FSsAm3-data-extractor-json): Converts documents/text to structured data (JSON) by Francis Labounty
- **Streamlit**
- [StreamlitGPT](https://chat.openai.com/g/g-ucLFVBWHR-streamlitgpt): Code reviews from a Streamlit expert by [Tyler Richards](https://x.com/tylerjrichards/)
- **Markup Language**
- [YAML Helper](https://chat.openai.com/g/g-KsnQa2ux5-yaml-helper): Fix YAML syntax errors in Helm charts and YAML files by [Sharon Sahadevan](https://www.linkedin.com/in/sharonsahadevan/)
- **PHP & Wordpress**
- [WordPress Wizard](https://chat.openai.com/g/g-Bqrx4gDgK-wordpress-wizard): Offers expert advice for creating custom WordPress websites by [@stefanzihlmann](https://github.com/stefanzihlmann)
- [PHP Helper](https://chat.openai.com/g/g-kDs9iyq5U-php-helper): Expert in PHP, SQL, and full-stack development. By [Luis González](https://ljgonzalez.cl/)
- [WordPress Code Wizard](https://chat.openai.com/g/g-q201IJB7L-wordpress-code-wizard): A WordPress code snippet guru offering advanced development solutions by [S.J](https://github.com/noname2312)
- **Framer**
- [FramerGPT](https://chat.openai.com/g/g-MXpLvufG8-framergpt): Generate code components and overrides for Framer by [Isaac Roberts](https://twitter.com/heyisaacr)
- **Dart & Flutter**
- [Flutter App Maker 3000](https://chat.openai.com/g/g-sizZKl9zO-flutter-app-maker-3000): A hands-on guide for building Flutter apps step by step.
- [Dart Helper](https://chat.openai.com/g/g-kf22sbGyl-dart-helper): Development assistant specializing in Dart and full-stack development. By [Luis González](https://ljgonzalez.cl/)
- **AWS**
- [AWS Helper](https://chat.openai.com/g/g-U6EhmbVPc-awservices-helper): Full-stack development expert with a focus on AWS
- [AWS Cloud Practitioner Trainer GPT](https://chat.openai.com/g/g-hwCXFnpHc-aws-cloud-practitioner-certification-trainer): Use AI to train for your AWS certification exam.
- [AWS IAM AI](https://chat.openai.com/g/g-mqI6IM0JT-aws-iam-ai): Expert guide in AWS IAM, generating precise and secure policies. By [Luis González](https://ljgonzalez.cl/)
- **Ansible**
- [Ansible Helper](https://chat.openai.com/g/g-mZPhbswoZ-ansible-helper): Assistant specializing in Ansible. By [Luis González](https://ljgonzalez.cl/)
- **Flowbite**
- [Flowbite GPT](https://chat.openai.com/g/g-y7yC35HB9-flowbite-gpt): Create websites based on the Flowbite UI Library and Tailwind CSS.

### Other Programming GPTs

- [ChatXGB](https://chat.openai.com/g/g-dq9i42tRO-chatxgb): GPT chatbot that helps you with technical questions related to the XGBoost algorithm and library by [Bojan Tunguz](https://x.com/tunguz/)
- [Kaggle Tutorial 6th Edition](https://chat.openai.com/g/g-Z3a4iOzGR-kagglenotiyutoriarudi-6ban): Answers questions about the 6th edition of the Kaggle Tutorial by [Curry-Chan](https://x.com/currypurin/)
- [Sui Move GPT](https://chat.openai.com/g/g-NWwAJOzzz-sui-move-gpt): This is a specialized GPT model developed with insights from Sui documentation, GitHub repositories, and the Move language books by [Sam Blackshear](https://x.com/b1ackd0g/)
- [PaperPilot](https://chat.openai.com/g/g-ynZYhDGwd): Piloting arXiv and more, for you by [Cocosgt](https://x.com/CocoSgt_twt/)
- [Programaci-on/off](https://chat.openai.com/g/g-WTcolsvYZ-programaci-on-off): Programming Activity Evaluator by [Carlos Santana Vega](https://x.com/DotCSV/)
- [GetPaths](https://chat.openai.com/g/g-6Bcjkotez-getpaths): This GPT takes in content related to an application, such as HTTP traffic, JavaScript files, source code, etc., and outputs lists of URLs that can be used for further testing by [Daniel Miessler](https://x.com/DanielMiessler/)
- [BugBountyGPT](https://chat.openai.com/g/g-Rsk7ADgbD-bugbountygpt): AppSec & Bug Bounty by Andrew Brown
- [EA Wizard](https://chat.openai.com/g/g-d6cGwK4Lu-ea-wizard): Specialized in MQL4/5 code generation and problem solving by [lucky_papuwa777](https://x.com/lucky_papuwa777/)
- [Supabase Docs Writer](https://chat.openai.com/g/g-g0ObGf2Ow-supabase-docs-writer): Provides clear Supabase documentation help by [Paul Copplestone](https://twitter.com/kiwicopple)
- [Game QA Strategist](https://chat.openai.com/g/g-LrhUBrExn-game-qa-strategist): Plan QA tests and strategy based on recent code changes and game screenshots
- [OMP QA Demo](https://chat.openai.com/g/g-DmS7svNNy-omp-qa-demo): For testing; quotes an OMP YouTube video to answer Zapier questions by [Shek Ka Wai](https://www.linkedin.com/in/arshek/)
- [Test-Driven Code Companion](https://chat.openai.com/g/g-jCcHbTz23-test-driven-code-companion): A code companion that follows the rule of test-driven development to help you write safe and proven code by [@FlorianVal](https://github.com/FlorianVal)
- [Data Science Project Generator: Project Suggestions](https://chat.openai.com/g/g-fvy71gm4A-data-science-project-generator): Offers data science project ideas and tips by [@vasarmilan](https://github.com/vasarmilan)
- [Repo Ranger](https://chat.openai.com/g/g-f9z2KitCh-repo-ranger): Your go-to sheriff for web-based code insights and security checks by [@marcusrbrown](https://github.com/marcusrbrown)
- [Code Whiz Pro](https://chat.openai.com/g/g-0SJxA4A1j-code-whiz-pro): Provides insightful code reviews with a humorous twist by [@davidemarcoli](https://github.com/davidemarcoli)
- [PC Builder GPT](https://chat.openai.com/g/g-gh7PDdmmd-pc-builder-gpt): I'm PC Builder GPT, your tech-savvy virtual friend who offers expert and approachable advice on building PCs, complete with up-to-date pricing by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [CodeGuardian](https://chat.openai.com/g/g-iNO6cUKoo-code-guardian): Code challenges for web developers to identify security vulnerabilities and patch them.
- [Code Support](https://chat.openai.com/g/g-H8YSZ3jLX-code-support): Quick command-line help and code snippets, defaults to Linux & Python.
- [ffmpegGPT](https://chat.openai.com/g/g-ehkkatxfC-ffmpeggpt): FFMPEG expert for helping you create complex video editing commands with FFMPEG by [Chandler](https://github.com/chand1012).
- [API Finder](https://chat.openai.com/g/g-QgeXE2fgm-api-finder): Assists in finding and detailing APIs. Also guide users on how to effectively utilize APIs in their projects by [S.J](https://github.com/noname2312)
- [Game Craft Guru](https://chat.openai.com/g/g-XLVAtZJKi-game-craft-guru): Focused game mechanics and design expert.
- [FridaGPT](https://chat.openai.com/g/g-KwZVA8dTp-fridagpt): A Frida-focused GPT to help reverse engineers write Frida scripts. by [L Jacobs](https://twitter.com/leonjza)
- [impacketGPT](https://chat.openai.com/g/g-8Ax6NRrAb-impacketgpt): Your go-to source for Impacket documentation by [nuts.](https://twitter.com/__nuts7)
- [Patch Prodigy](https://chat.openai.com/g/g-NUaIRxvO6-patch-prodigy): Friendly and informative MAX/MSP guide by [@DSOhnaka](https://twitter.com/DSOhnaka)
- [IOTA Insight](https://chat.openai.com/g/g-CGc6SfNN0-iota-insight): Your Gateway to the IOTA Knowledge Base by [Sebastian Mueller](https://twitter.com/NaitsabesMue)
- [Test Double](https://chat.openai.com/g/g-yK9Ggt181-test-double): Expert in creating diverse test data for development needs in various formats.
- [Shell Expert Pro](https://chat.openai.com/g/g-jaiZcNIme-shell-expert-pro): Efficient shell script engineer, offers detailed explanations on request.
- [Secure Code Assistant](https://chat.openai.com/g/g-k0PTOme1H-secure-code-assistant): I offer tested, secure coding solutions with no patience-testing.
- [Developer Doc Search](https://chat.openai.com/g/g-AINygIiYy-developer-doc-search): Searches open source packages and their documentation.
- [Create Coding Tutorials](https://chat.openai.com/g/g-yCng8eadJ-create-coding-tutorials): Takes your code and develops a self-paced tutorial for your students. Storytelling & Screenplays - [OpenStorytelling Plus](https://chat.openai.com/g/g-LppT0lwkB-openstorytelling-plus): An Educational Open Source Storytelling Writing Guide w/ Screenplay Examples by [Bryan Harris](https://x.com/BryanRebooted/)
- [Mōsō-kun](https://chat.openai.com/g/g-wbywTK1JN-wang-xiang-kun): Send us an image and we'll create a story for you by [Takayuki Fukuda](https://x.com/hedachi/)
- [Bedtime Stories](https://chat.openai.com/g/g-i5ZE8Aq9i-bedtime-stories): I create illustrated stories with your child as the main character! by Parth Gandhi
- [AI Filmmaking Assistant](https://chat.openai.com/g/g-hiKxJNAlp-ai-filmmaking-assistant): Create consistency across your AI Film, automatically format Midjourney prompts, and more! by [Dale A Williams](https://x.com/TheReelRobot)
- [Character Forger](https://chat.openai.com/g/g-waDWNw2J3-character-forger): Character Consistency Tool by [LearnAI](https://x.com/LearnAI_MJ)
- [Kids Choose Your Own Adventure Stories](https://chat.openai.com/g/g-3q7Md6dYg-kids-choose-your-own-adventure-stories): Create personalized, chapter-based adventure stories for kids by [Jarrad Grigg](https://www.linkedin.com/in/jarradgrigg/)
- [Choose Your Own Adventure](https://chat.openai.com/g/g-OItSbmFlC-choose-your-own-adventure): A 'Choose Your Own Adventure' creator with visual storytelling by [Geoffrey Cox](https://www.linkedin.com/in/geoffrey-cox/)
- [AI Comic Maker](https://chat.openai.com/g/g-1LM0T9LSW-ai-comic-maker): A GPT to create comics using ChatGPT and DALL-E, trying to maintain consistency of characters and images by [@eliohead](https://github.com/eliohead)
- [Language Learning - Create Short Stories to Learn any Language](https://chat.openai.com/g/g-tXEyZoKVx-create-short-stories-to-learn-a-language): 2500+ word stories in the target language with images, for language learning by [@TheBoringBOT](https://github.com/TheBoringBOT)
- [Story Buddy](https://chat.openai.com/g/g-2k7EGyB1p-story-buddy): A creative guide to help kids build their own bedtime stories, with illustrations by [@ItaiLeibowitz](https://github.com/ItaiLeibowitz)
- [Alternative Histories](https://chat.openai.com/g/g-J45g1U3ro-alternative-histories): I craft and visualize 'what if' histories.
- [Film Developer](https://chat.openai.com/g/g-r7X90KNno-film-developer): A GPT for everything in film development, from dialogue to story, character development, to concept art by [@LaneBucher](https://github.com/LaneBucher)
- [Character Architect](https://chat.openai.com/g/g-Impe1Ay0j-character-architect): Crafting Characters, Cultivating Connections by [Davis](https://community.openai.com/u/BPS_Software)
- [LitRPG Larry](https://chat.openai.com/g/g-rOaM5ZKPa-litrpg-larry): I'm LitRPG Larry, here to discuss and help with all things LitRPG whether you're a writer or reader. (GameLit Friendly!) by [Paul Bellow](https://community.openai.com/u/PaulBellow)
- [Storyteller](https://chat.openai.com/g/g-dmgFloZ5w-storyteller): Weaves stories with a blend of writing and design. Translator - [English Translation Expert](https://chat.openai.com/g/g-IZb9C11iR-ying-wen-fan-yi-zhuan-jia): The highest level "English to Chinese" machine translation on the Internet by [Yang Chuansheng](https://x.com/CarsonYangk8s/)
- [Science and technology text translation](https://chat.openai.com/g/g-uBhKUJJTl-ke-ji-wen-zhang-fan-yi): Translate scientific articles and papers into Simplified Chinese by Junmun Liu
- [English Teacher Marion](https://chat.openai.com/g/g-VDDC0Ztph-english-teacher-marion): Meet Marion, your friendly neighborhood English teacher by [@nicksavanah](https://github.com/nicksavanah)
- [Math to LaTeX](https://chat.openai.com/g/g-2vNXETv9C-math-to-latex): Send me an image of Math. I will give you the LaTeX code by [@No_Impact4379](https://www.reddit.com/user/No_Impact4379/)
- [Multilingual Mentor](https://chat.openai.com/g/g-ecP2s16LQ-multilingual-mentor): Learn any language IN any other language while talking freely but still in a structured way and according to your current proficiency by [@linus-ahlemeyer](https://github.com/linus-ahlemeyer)
- [Portuguese Pal](https://chat.openai.com/g/g-xEtoWs2cJ-portuguese-pal): Learn Portuguese while talking freely but still in a structured way and according to your current proficiency by [@linus-ahlemeyer](https://github.com/linus-ahlemeyer)
- [Simple Proofreader](https://chat.openai.com/g/g-Dk6K4VJk2-simple-proofreader): I will proofread academic English. I won’t do anything other than that by [Matsui Kentaro](https://x.com/matsuikentaro1/)
- [English proofreading GPT](https://chat.openai.com/g/g-xk6AdDGIW-ying-wen-xiao-zheng-gpt): Academic paper English proofreading assistant by [genkAIjokyo](https://x.com/genkAIjokyo/)
- [PEC - Practice English Conversation](https://chat.openai.com/g/g-kowOn5g3r-pec-practice-english-conversation): Let's practice English conversation. Provides detailed analysis of sentence structure, grammar, vocabulary usage, and more by [S.J](https://github.com/noname2312)
- [Chinese Pronunciation [Audio]](https://chat.openai.com/g/g-Dr5b43UUk-audio-chinese-pronunciation-tutor): Chinese Pronunciation Tutor for use with ChatGPT mobile app's conversational AI
- [Language Coach](https://chat.openai.com/g/g-0g6ZdEtv6-language-coach): Helps in learning new languages.
- [Spanish Friend](https://chat.openai.com/g/g-p5dJAcoT8-spanish-friend-language-conversation-improver): Talk with me through some real-life scenarios and I will help you improve your foreign language skills through practice, translations and improvements.
- [Time Series Forecasting Expert (时间序列预测专家)](https://chat.openai.com/g/g-n6tIz5rIq-shi-jian-xu-lie-yu-ce-zhuan-jia): Time series prediction expert in Chinese
- [Nuanced Ukrainian Translator](https://chat.openai.com/g/g-Tbdi7fnCe-nuanced-ukrainian-translator): Expert in nuanced, idiomatic Ukrainian translations by [Yevhenii Hurin](https://twitter.com/hoblon)
- [Global Translator](https://chat.openai.com/g/g-43u7lnmlB-global-translator): Expert in translating and clarifying languages. Translates any language to any other language. Fixes mistakes during the translation process. By [Luis González](https://ljgonzalez.cl/) Travel - [Travel Greener](https://chat.openai.com/g/g-LhBDxLB7d-travel-greener): An AI that helps you make your trip more sustainable by finding more sustainable traveling alternatives and providing advice, by [Axel Peytavin](https://www.linkedin.com/in/axel-peytavin/)
- [Voyage Guide GPT](https://chat.openai.com/g/g-MDExvbFqe-voyage-guide): Your virtual travel buddy here to help make your travel planning experience smooth, fun, and informative by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [Airwise](https://chat.openai.com/g/g-YxfbZUa7H-airwise): A traveler-focused AI tool that quickly clarifies airport regulations on carry-on and checked items, aligned with current global policies by [Ankit Pal](https://www.linkedin.com/in/aadityaura/)
- [NomadGPT](https://chat.openai.com/g/g-0k9rvxdJn-nomadgpt): NomadGPT helps you become a digital nomad and find you the best places in the world to live and work remotely by [levelsio](https://twitter.com/levelsio/)
- [Seat Seeker GPT](https://chat.openai.com/g/g-3AQM5NfzA-seat-seeker): Seat Seeker excels in efficiently assisting users to find public seating with specific amenities, using their approximate location by [@HeyitsRadinn](https://github.com/HeyitsRadinn)
- [Weather Whiskers](https://chat.openai.com/g/g-Qb4WOntiy-weatherwhiskers): I generate a cute weather forecast image in your location, just tell me where you are. Miscellaneous - [Police Record Generator](https://chat.openai.com/g/g-NWo6hR2Jf-jing-cha-shi-jian-bo-zienereta): Generates interactive casebooks and simulates reenactments for police characters, offering clues and multiple solutions by [tramarusuisan](https://x.com/tramarusuisan/)
- [OchyAI](https://chat.openai.com/g/g-zprRltiOf-ochyai): Conveying Ochiai's Art, Research, and Philosophy by OchyAI by [Yoichi Ochiai](https://x.com/ochyai/)
- [ATT&CK Mate](https://chat.openai.com/g/g-fCIE7hCLx-att-ck-mate): Ask me anything about the ATT&CK Knowledge Base by [Roberto Rodriguez](https://x.com/Cyb3rWard0g/)
- [Slowly MovieMaker4 support](https://chat.openai.com/g/g-wl8EUuUyX-yutukurimoviemaker4sapoto): Solve frequently asked questions by [Manju Chef](https://x.com/manju_summoner/)
- [GPT to Ban GPT](https://chat.openai.com/g/g-612TDn3u9-gpt-to-ban-gpt): Need to ban chatGPT in your organization? by [Ethan Mollick](https://x.com/emollick/)
- [Don't want to go upstairs](https://chat.openai.com/g/g-D2j1WBTkN-bu-xiang-shang-lou): A middle-aged man earnestly justifying a purchase to his wife by [Zhang Tao](https://x.com/hidecloud/)
- [Large text file splitter program](https://chat.openai.com/g/g-SBGMg6HzJ-da-wen-ben-wen-jian-fen-ge-cheng-xu): Accurately split files into PDF by [Cosy Touch Limited](https://x.com/jesselaunz/)
- [World Mobile GPT](https://chat.openai.com/g/g-Xg9daQnJ7-world-mobile-gpt): Enthusiastically answering World Mobile queries with a comprehensive knowledge base, including James Tagg's patents by [Hidrex](https://x.com/hidrexnodes/)
- [Shishikawa Kasane](https://chat.openai.com/g/g-EY4Zk6UFw-sisikawa-kasane): I like making cute robots. Ask us anything about Stack Chan! by [Shinya Ishikawa](https://x.com/stack_chan/)
- [Ask GP9T](https://chat.openai.com/g/g-65Gi7uW6J-ask-gp9t): Learn more about Point Nine by [Christoph Janz](https://x.com/chrija/)
- [Olyup](https://chat.openai.com/g/g-JlDoaXFrU-olyup): Your AI Sports Scientist to help you level up your game - in and off the field by [Mbongeni Ndlovu](https://x.com/Mbounge_/)
- [Krog](https://chat.openai.com/g/g-tvo4YNhaA-krog): Krog help more good by [Benjamin](https://x.com/ikeadrift/)
- [mferGPT](https://chat.openai.com/g/g-Bi373xIOH-mfergpt): mfer history, derivatives, and conversation by [HeresMyEth](https://x.com/HeresMyEth/)
- [PlatoAI](https://chat.openai.com/g/g-Vw7qEX384-platoai): You can't talk to Plato, but you can talk to PlatoAI by [Mushtaq Bilal](https://x.com/MushtaqBilalPhD/)
- [An Emoji GPT](https://chat.openai.com/g/g-mvOpDRXMz-an-emoji-gpt): The knowledge of a hundred generations at my fingertips and all I do is pick the perfect emoji for every situation by James Donovan
- [Emoji Generator](https://chat.openai.com/g/g-wkmOq6AxG-emoji-generator): I turn your text into Emoji by airprompto
- [Waste Wizard](https://chat.openai.com/g/g-o8lkkwc8Z-waste-wizard): I turn your waste into wonders with ideas, steps, pictures by airprompto
- [Architect](https://chat.openai.com/g/g-wM67s4782-architect): I assist in conlang creation and manage complex data by [John Stark](https://x.com/StarkMakesArt)
- [Istio Guru](https://chat.openai.com/g/g-r4v8Uva89-istio-guru): Your Istio Service Mesh Expert by [Rohit Ghumare](https://www.linkedin.com/in/rohit-ghumare/)
- [VinuChain and VINU GPT](https://chat.openai.com/g/g-iw0xh6QBu-vinuchain-and-vinu-gpt): I am VinuChain and VINU GPT, specialized in the VINU ecosystem, including VinuFinance, VinuSwap, VINU token, and VinuChain. I offer insights on their features, tokenomics, and development. by [Taylor Miller](https://twitter.com/VitaInuCoin)
- [Kitze GPT](https://chat.openai.com/g/g-KQJtxRnAz-kitze-gpt): Talk to Kitze by [Kitze](https://twitter.com/thekitze)
- [@levelsio](https://chat.openai.com/g/g-QFAuxHmUa-levelsio): Talk with @levelsio on ChatGPT. Ask any question you want about building your own startup, digital nomading, remote work and whatever else you'd like to ask. Trained on all of my podcasts, interviews, blog posts and tweets! by [levelsio](https://twitter.com/levelsio)
- [3rd SoftSec Reviewer](https://chat.openai.com/g/g-nAldYnak2-3rd-softsec-reviewer): Perform 3rd party software security review by [yevh](https://github.com/yevh)
- [TrenBot](https://chat.openai.com/g/g-hlLbNjzRZ-trenbot): Chatbot for Tren Griffin's views on the market, tech, and everything else by [Tren Griffin](https://twitter.com/trengriffin) Contributing Please take a quick look at the contribution guidelines for details. Thanks to all contributors ; you are awesome!;Collection of all the GPTs created by the community;awesome,awesome-list,chatgpt,gpts,gptstore,lists,resources | taranjeet/awesome-gpts |
Tameyer41/liftoff;Liftoff Interviews Mock Interview Simulator with AI-Powered Feedback Introduction · One-click Deploy · Tech Stack + Features · Author Introduction Liftoff is an interview preparation tool that provides AI feedback on your mock interviews. One-click Deploy You can deploy this template to Vercel with the button below: You can also clone & create this repo locally with the following command: bash
npx create-next-app liftoff --example "https://github.com/Tameyer41/liftoff" Tech Stack + Features Frameworks Next.js – React framework for building performant apps with the best developer experience Platforms Vercel – Easily preview & deploy changes with git Upstash - Serverless Data Platform (here using serverless Redis for rate limiting) UI Tailwind CSS – Utility-first CSS framework for rapid UI development Framer Motion – Motion library for React to animate components with ease ImageResponse – Generate dynamic Open Graph images at the edge HeadlessUI - Completely unstyled, fully accessible UI components, designed to integrate beautifully with Tailwind CSS Code Quality TypeScript – Static type checker for end-to-end typesafety Prettier – Opinionated code formatter for consistent code style ESLint – Pluggable linter for Next.js and TypeScript Miscellaneous FFMPEG.WASM – Transcode video/audio files React Webcam - Webcam component for React Stripe Gradient Animation - @jordienr released a Mesh Gradient that uses WebGL and animates a beautiful gradient How it all works Liftoff uses FFmpeg to transcode the raw video into MP3. Chrome, Safari, and Firefox all record with different codecs, and FFmpeg is great for standardizing them. We then send the audio directly to be transcribed by OpenAI's Whisper endpoint, and then stream feedback from the edge using OpenAI's gpt-3.5-turbo. Author Tyler Meyer ( @tmeyer_me );Mock Interview Simulator with AI-Powered Feedback;[] | Tameyer41/liftoff |
ferrocene/ferrocene;Ferrocene is a toolchain to enable the use of the Rust programming language in
safety-critical environments. It is a proper downstream of the main Rust
compiler - rustc, maintained by the Rust project on [ rust-lang/rust ]. The mission of Ferrocene is to bring open source practices to safety-critical
industries and improve the Rust open source ecosystem through safety-critical
practices. Ferrocene is maintained and supported by the world-renowned experts at Ferrous
Systems. Both standard and long-term support are available. Check our
website for details. Current status Ferrocene is qualified for ISO 26262 (ASIL D) and IEC 61508 (SIL 4).
Qualifications for other standards and areas, such as railway and aerospace,
are planned. Installation Prebuilt Ferrocene binaries are available for customers and partners. You can
visit releases.ferrocene.dev to download the release archives after logging
in with your Ferrocene account. Documentation and Procedures The documentation of the Ferrocene toolchain and its source can be found
in the [ ferrocene/doc directory]. The documentation contains the projects
procedures, its quality management measures and current testing coverage. Rendered versions of the documentation are also available: docs.ferrocene.dev : available to customers and partners, contains the
documentation for all release channels. public-docs.ferrocene.dev/main : publicly available, contains the
documentation for the latest commit merged on the main branch. Support Multiple levels of support are available for paying customers, provided by Ferrous Systems . You can log into customers.ferrocene.dev to learn about
your support plan, and how to send support requests. Contribution policy As a downstream of the Rust project, Ferrocene prefers to keep the compiler
unmodified. This means that general contributions to the compiler or its tools
(and discussions) should happen upstream ([ rust-lang/rust ]). However, Ferrocene does
serve as a community of peers to propose and produce changes useful in
safety-critical contexts for the project. Contributions to qualification activities and manuals are welcome, but
generally gated. Contribution is open to industry and academic partners,
customers, and project employees. You can use the Ferrocene issue tracker to file an issue for the
materials provided by the Ferrocene developers. Please note that the issue
tracker is not a support channel. Please note that Ferrocene is governed under the Apache-2.0 license and
contribution policies apply to the issue tracker as well as the codebase
itself. Additional services Ferrous Systems provides services built and tailored around Ferrocene: Trainings: trainings on Rust and Ferrocene, particularly for teams, are
available. Trainings can be custom tailored to your needs. Check out our
training offerings. Inclusion in SDKs: Ferrocene is available to be integrated in your
toolchain! Please get in touch to learn more. Tailoring, enabling and integration within your system : We're more than
happy to enable Rust support in your operating system or tool, including
porting the Rust compiler to the targets you need and qualifying them in
Ferrocene. Get in touch to learn more. Infrastructure support : Ferrocene is built for a DevOps world. Rust
for your builds in the cloud is a first-class citizen for us, and we can
provide support tailored to you. Get in touch for more
information. Security Please follow Ferrocene's security policy if you discover a
security vulnerability affecting Ferrocene. License and trademark The contents of the repository are primarily licensed under either the MIT or
Apache 2.0 license: users can choose either license, and contributors must
license their changes under both licenses. Note that the repository contains
files authored by third parties and published under different licenses, see the
annotations next to those files. Ferrocene is a registered trademark of Critical Section GmbH, a subsidiary of
Ferrous Systems. See our trademark policy for the guidelines on
the use of the trademark.;Source code of Ferrocene, safety-critical Rust toolchain;[] | ferrocene/ferrocene |
IAHispano/Applio;VITS-based Voice Conversion focused on simplicity, quality, and performance. 🌐 Website • 📚 Documentation • ☎️ Discord 🛒 Plugins • 📦 Compiled • 🎮 Playground • 🔎 Google Colab (UI) • 🔎 Google Colab (No UI) Table of Contents Installation Windows macOS Linux Makefile Usage Windows macOS Linux Makefile Technical Information Repository Enhancements Commercial Usage References Contributors Installation Download the latest version from GitHub Releases or use the Compiled Versions . Windows bash
./run-install.bat macOS For macOS, you need to install the requirements in a Python environment version 3.9 to 3.11. Here are the steps: bash
python3 -m venv .venv
source .venv/bin/activate
chmod +x run-install.sh
./run-install.sh Linux Certain Linux-based operating systems may encounter complications with the installer. In such instances, we suggest installing the requirements.txt within a Python environment version 3.9 to 3.11. bash
chmod +x run-install.sh
./run-install.sh Makefile For platforms such as Paperspace : bash
make run-install Usage Visit Applio Documentation for a detailed UI usage explanation. Windows bash
./run-applio.bat macOS bash
chmod +x run-applio.sh
./run-applio.sh Linux bash
chmod +x run-applio.sh
./run-applio.sh Makefile For platforms such as Paperspace : bash
make run-applio Technical Information Applio uses an enhanced version of the Retrieval-based Voice Conversion (RVC) model, a powerful technique for transforming the voice of an audio signal to sound like another person. This advanced implementation of RVC in Applio enables high-quality voice conversion while maintaining simplicity and performance. 0. Pre-Learning: Key Concepts in Speech Processing and Voice Conversion This section introduces fundamental concepts in speech processing and voice conversion, paving the way for a deeper understanding of the RVC pipeline: 1. Speech Representation Phoneme: The smallest unit of sound in a language that distinguishes one word from another. Examples: /k/, /æ/, /t/. Spectrogram: A visual representation of the frequency content of a sound over time, showing how the intensity of different frequencies changes over the duration of the audio. Mel-Spectrogram: A type of spectrogram that mimics human auditory perception, emphasizing frequencies that are more important to human hearing. Speaker Embedding: A vector representation that captures the unique acoustic characteristics of a speaker's voice, encoding information about pitch, tone, timbre, and other vocal qualities. 2. Text-to-Speech (TTS) TTS Model: A machine learning model that generates artificial speech from written text. Encoder-Decoder Architecture: A common architecture in TTS models, where an encoder processes the text and pitch information to create a latent representation, and a decoder uses this representation to synthesize the audio signal. Transformer Architecture: A powerful neural network architecture particularly well-suited for sequence modeling, allowing the model to handle long sequences of text or audio and capture relationships between elements. 3. Voice Conversion Voice Conversion (VC): The process of transforming the voice of a speaker in an audio signal to sound like another speaker. Speaker Adaptation: The process of adapting a TTS model to a specific speaker, often by training on a small dataset of the speaker's voice. Retrieval-Based VC (RVC): A voice conversion approach where speaker embeddings are retrieved from a database and used to guide the TTS model in synthesizing audio with the target speaker's voice. 4. Additional Concepts ContentVec: A powerful self-supervised learning model for speech representation, excelling at capturing speaker-specific information. FAISS: A library for efficient similarity search, used to retrieve speaker embeddings that are similar to the extracted ContentVec embedding. Neural Source Filter (NSF): A module that models audio generation as a filtering process, allowing the model to produce high-quality and realistic audio signals by learning complex relationships between the source signal and the output waveform. 5. Why are these concepts important? Understanding these concepts is essential for appreciating the mechanics and capabilities of the RVC pipeline: Speech Representation: Different representations capture different aspects of speech, allowing for effective analysis and manipulation. TTS Models: The TTS model forms the foundation of RVC, providing the ability to synthesize audio from text and pitch. Voice Conversion: Voice conversion aims to transfer a speaker's identity to a different audio signal. ContentVec and Speaker Embeddings: ContentVec provides a powerful way to extract speaker-specific information, which is crucial for accurate voice conversion. 
FAISS: This library enables efficient speaker embedding retrieval, facilitating the selection of appropriate target voices. NSF: The NSF is a critical component of the TTS model, contributing to the generation of realistic and high-quality audio. 1. Model Architecture The RVC model comprises two main components: A. Encoder-Decoder Network This network synthesizes audio based on text and pitch information while incorporating speaker characteristics from the ContentVec embedding. Encoder: Input: Phoneme sequences (text representation) and pitch information (optional). Embeddings: Phonemes are represented as vectors using linear layers, creating a dense representation of the text input. Pitch is usually converted to a one-hot encoding or a continuous value and embedded similarly. Transformer Encoder: Processes the embedded features in a highly parallel manner. It employs: Self-Attention: Allows the encoder to attend to different parts of the input sequence to understand the relationships between words and their context. Feedforward Networks (FFN): Apply non-linear transformations to further refine the features captured by self-attention. Layer Normalization: Stabilizes training and improves performance by normalizing the outputs of each layer. Dropout: A regularization technique to prevent overfitting. Output: Produces a latent representation of the input text and pitch, capturing their relationships and serving as the input for the decoder. Decoder: Input: The latent representation from the encoder. Transformer Decoder: Receives the encoder output and utilizes: Self-Attention: Allows the decoder to attend to different parts of the generated sequence to maintain consistency and coherence in the output audio. Encoder-Decoder Attention: Enables the decoder to incorporate information from the input text and pitch into the audio generation process. Neural Source Filter (NSF): A powerful component for generating audio, modeling the generation process as a filter applied to a source signal. It uses: Upsampling: Increases the resolution of the latent representation to match the desired length of the audio signal. Residual Blocks: Learn complex and non-linear relationships between input features and the output audio, contributing to realistic and detailed waveforms. Source Module: Generates the excitation signal (often harmonic) that drives the NSF. It combines sine waves (for voiced sounds) and noise (for unvoiced sounds) to create a natural source signal. Noise Convolution: Convolves noise with the harmonic signal to introduce additional variation and realism. Final Convolutional Layer: Converts the filtered output to a single-channel audio waveform. Output: Synthesized audio signal. B. ContentVec Speaker Embedding Extractor Extracts speaker-specific information from the input audio. Input: The preprocessed audio signal. Processing: The ContentVec model, trained on a massive dataset of speech data, processes the input audio and extracts a speaker embedding vector, capturing the unique acoustic properties of the speaker's voice. Output: A speaker embedding vector representing the voice of the speaker. 2. Training Stage The RVC model is trained using a combination of two key losses: Generative Loss: Mel-Spectrogram: The Mel-spectrogram is computed for both the target audio and the generated audio. L1 Loss: Measures the absolute difference between the Mel-spectrograms of the target and generated audio, encouraging the decoder to produce audio with a similar spectral profile. 
Discriminative Loss: Multi-Period Discriminator: Tries to distinguish between real and generated audio at different time scales, using convolution layers to capture long-term dependencies in the audio. Adversarial Training: The generator tries to fool the discriminator by producing audio that sounds real, while the discriminator is trained to correctly identify generated audio. Optional KL Divergence Loss: Measures the difference between the distributions of latent variables generated by the encoder and a posterior encoder (which infers the latent representation from the target audio). Encourages the model to learn a more efficient and stable latent representation. 3. Inference Stage The inference stage utilizes the trained model to convert the voice of an audio input to sound like a target speaker. Here's a breakdown: Input: Phoneme sequences (text representation). Pitch information (optional). Target speaker ID (identifies the desired voice). Steps: ContentVec Embedding Extraction: The ContentVec model processes the input audio and extracts a speaker embedding vector, capturing the voice characteristics of the speaker. Optional Embedding Retrieval: FAISS Index: Used to efficiently search for speaker embeddings similar to the extracted ContentVec embedding. It helps guide the voice conversion process toward a specific speaker when multiple speakers are available. Embedding Retrieval: The FAISS index is queried using the extracted ContentVec embedding, and similar embeddings are retrieved. Embedding Manipulation: Blending: The extracted ContentVec embedding can be blended with retrieved embeddings using the index_rate parameter, allowing control over how much the target speaker's voice influences the conversion. Encoder-Decoder Processing: Encoder: Encodes the phoneme sequences and pitch into a latent representation, capturing the relationships between them. Decoder: Synthesizes the audio signal, incorporating the speaker characteristics from the ContentVec embedding (potentially blended with retrieved embeddings). Post-Processing: Resampling: Adjusts the sampling rate of the generated audio if needed. RMS Adjustment: Adjusts the volume (RMS) of the output audio to match the input audio. 4. Key Techniques Transformer Architecture: The Transformer architecture is a powerful tool for sequence modeling, enabling the encoder and decoder to efficiently process long sequences and capture complex relationships within the data. Neural Source Filter (NSF): Models audio generation as a filtering process, allowing the model to produce high-quality and realistic audio signals by learning complex relationships between the source signal and the output waveform. Flow-Based Generative Model: Enables the model to learn complex probability distributions for the audio signal, leading to more realistic and diverse generated speech. Multi-period Discriminator: Helps improve the quality and realism of the generated audio by evaluating the audio at different temporal scales and providing feedback to the generator. Relative Positional Encoding: Helps the model understand the relative positions of elements within the input sequences, improving the model's ability to handle long sequences and maintain context. 5. Future Challenges Despite the advancements in Retrieval-Based Voice Conversion, several challenges and areas for future research remain: Speaker Generalization: Improving the ability of models to generalize to unseen speakers with minimal data. 
Real-time Processing: Enhancing the efficiency of models to support real-time voice conversion applications. Emotional Expression: Better capturing and transferring emotional nuances in voice conversion. Noise Robustness: Improving the robustness of voice conversion models to handle noisy and low-quality input audio. Repository Enhancements This repository has undergone significant enhancements to improve its functionality and maintainability: Modular Codebase: Restructured codebase for better organization, readability, and maintenance. Hop Length Implementation: Improved efficiency and performance, especially on Crepe (formerly Mangio-Crepe), thanks to @Mangio621 . Translations in 30+ Languages: Added support for over 30 languages. Cross-Platform Compatibility: Ensured seamless operation across various platforms. Optimized Requirements: Fine-tuned project requirements for enhanced performance. Streamlined Installation: Simplified installation process for a user-friendly setup. Hybrid F0 Estimation: Introduced a personalized 'hybrid' F0 estimation method utilizing nanmedian. Easy-to-Use UI: Implemented an intuitive user interface. Plugin System: Introduced a plugin system for extending functionality. Overtraining Detector: Implemented a detector to prevent excessive training. Model Search: Integrated model search feature for easy discovery. Pretrained Models: Added support for custom pretrained models. Voice Blender: Developed a feature to combine two trained models to create a new one. Accessibility Improvements: Enhanced with descriptive tooltips for UI elements. New F0 Extraction Methods: Introduced methods like FCPE or Hybrid for pitch extraction. Output Format Selection: Added feature to choose audio file formats. Hashing System: Assigned unique IDs to models to prevent unauthorized duplication. Model Download System: Supported downloads from various platforms. TTS Enhancements: Improved Text-to-Speech functionality. Split Audio: Implemented audio splitting for faster processing. Discord Presence: Displayed usage status on Discord. Flask Integration: Enabled automatic model downloads via Flask. Support Tab: Added a tab for screen recording to report issues. These enhancements contribute to a more robust and scalable codebase, making the repository more accessible for contributors and users alike. Commercial Usage For commercial purposes, please adhere to the guidelines outlined in the MIT license governing this project. Prior to integrating Applio into your application, we kindly request that you contact us at support@applio.org to ensure ethical use. Please note, the use of Applio-generated audio files falls under your own responsibility and must always respect applicable copyrights. We encourage you to consider supporting the continuous development and maintenance of Applio through a donation. Your cooperation and support are greatly appreciated. Thank you! References Applio is possible to these projects and those cited in their references. gradio-screen-recorder by gstaff RVC_CLI by blaise-tk Contributors;VITS-based Voice Conversion focused on simplicity, quality and performance.;rvc,vc,vits,voice,ai,voice-cloning,voice-conversion,applio,voice-clone,pytorch | IAHispano/Applio |
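To make the retrieval-and-blend step described in Applio's inference stage above more concrete, here is a minimal Python sketch. It is illustrative only and is not Applio's actual implementation: the FAISS index type, the plain averaging of neighbors, and the exact blending formula are assumptions inferred from the description of the index_rate parameter.

```python
# Rough sketch of retrieval-based embedding blending (not Applio's real code).
# Assumes `contentvec` is a (frames, dim) float32 array of features extracted from
# the input audio, and `speaker_feats` is an (N, dim) float32 array of stored
# features for the target speaker.
import faiss
import numpy as np

def blend_with_retrieval(contentvec, speaker_feats, index_rate=0.75, k=8):
    index = faiss.IndexFlatL2(speaker_feats.shape[1])   # simple L2 index, assumed
    index.add(speaker_feats.astype(np.float32))

    _, neighbor_ids = index.search(contentvec.astype(np.float32), k)
    retrieved = speaker_feats[neighbor_ids].mean(axis=1)  # average the k neighbors

    # index_rate controls how strongly the retrieved target-speaker features
    # dominate over the original ContentVec features.
    return index_rate * retrieved + (1.0 - index_rate) * contentvec
```

Real pipelines typically weight the retrieved neighbors by similarity rather than averaging them uniformly, but the blending idea is the same.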
tencentmusic/supersonic;中文版 | 日本語版 | Docs SuperSonic SuperSonic is the next-generation BI platform that integrates Chat BI (powered by LLM) and Headless BI (powered by semantic layer) paradigms. This integration ensures that Chat BI has access to the same curated and governed semantic data models as traditional BI. Furthermore, the implementation of both paradigms benefits from the integration: Chat BI's Text2SQL gets augmented with context-retrieval from semantic models. Headless BI's query interface gets extended with natural language API. SuperSonic provides a Chat BI interface that empowers users to query data using natural language and visualize the results with suitable charts. To enable such experience, the only thing necessary is to build logical semantic models (definition of metric/dimension/tag, along with their meaning and relationships) through a Headless BI interface . Meanwhile, SuperSonic is designed to be extensible and composable, allowing custom implementations to be added and configured with Java SPI. Motivation The emergence of Large Language Model (LLM) like ChatGPT is reshaping the way information is retrieved, leading to a new paradigm in the field of data analytics known as Chat BI. To implement Chat BI, both academia and industry are primarily focused on harnessing the power of LLMs to convert natural language into SQL, commonly referred to as Text2SQL or NL2SQL. While some approaches show promising results, their reliability falls short for large-scale real-world applications. Meanwhile, another emerging paradigm called Headless BI, which focuses on constructing unified semantic data models, has garnered significant attention. Headless BI is implemented through a universal semantic layer that exposes consistent data semantics via an open API. From our perspective, the integration of Chat BI and Headless BI has the potential to enhance the Text2SQL generation in two dimensions: Incorporate data semantics (such as business terms, column values, etc.) into the prompt, enabling LLM to better understand the semantics and reduce hallucination . Offload the generation of advanced SQL syntax (such as join, formula, etc.) from LLM to the semantic layer to reduce complexity . With these ideas in mind, we develop SuperSonic as a practical reference implementation and use it to power our real-world products. Additionally, to facilitate further development we decide to open source SuperSonic as an extensible framework. Out-of-the-box Features Built-in Chat BI interface for business users to enter natural language queries Built-in Headless BI interface for analytics engineers to build semantic data models Built-in rule-based semantic parser to improve efficiency in certain scenarios (e.g. demonstration, integration testing) Built-in support for input auto-completion, multi-turn conversation as well as post-query recommendation Built-in support for three-level data access control: dataset-level, column-level and row-level Extensible Components The high-level architecture and main process flow is as follows: Knowledge Base: extracts schema information periodically from the semantic models and build dictionary and index to facilitate schema mapping. Schema Mapper: identifies references to schema elements(metrics/dimensions/entities/values) in user queries. It matches the query text against the knowledge base. Semantic Parser: understands user queries and generates semantic query statement. 
It consists of a combination of rule-based and model-based parsers, each of which deals with specific scenarios. Semantic Corrector: checks validity of semantic query statement and performs correction and optimization if needed. Semantic Translator: converts semantic query statement into SQL statement that can be executed against physical data models. Chat Plugin: extends functionality with third-party tools. The LLM is going to select the most suitable one, given all configured plugins with function description and sample questions. Quick Demo Online playground Visit http://117.72.46.148:9080 to register and experience as a new user. Please do not modify system configurations. We will restart to reset configurations regularly every weekend. Local build SuperSonic comes with sample semantic models as well as chat conversations that can be used as a starting point. Please follow the steps: Download the latest prebuilt binary from the release page Run script "assembly/bin/supersonic-daemon.sh start" to start a standalone Java service Visit http://localhost:9080 in the browser to start exploration Build and Development Please refer to project Docs . WeChat Contact Please follow SuperSonic wechat official account: Welcome to join the WeChat community:;SuperSonic is the next-generation BI platform that integrates Chat BI (powered by LLM) and Headless BI (powered by semantic layer) paradigms.;[] | tencentmusic/supersonic |
aws-samples/amazon-bedrock-workshop;Amazon Bedrock Workshop This hands-on workshop, aimed at developers and solution builders, introduces how to leverage foundation models (FMs) through Amazon Bedrock . Amazon Bedrock is a fully managed service that provides access to FMs from third-party providers and Amazon; available via an API. With Bedrock, you can choose from a variety of models to find the one that’s best suited for your use case. Within this series of labs, you'll explore some of the most common usage patterns we are seeing with our customers for Generative AI. We will show techniques for generating text and images, creating value for organizations by improving productivity. This is achieved by leveraging foundation models to help in composing emails, summarizing text, answering questions, building chatbots, and creating images. While the focus of this workshop is for you to gain hands-on experience implementing these patterns via Bedrock APIs and SDKs, you will also have the option of exploring integrations with open-source packages like LangChain and FAISS . Labs include: 01 - Text Generation [Estimated time to complete - 45 mins] Text generation with Bedrock Text summarization with Titan and Claude QnA with Titan Entity extraction 02 - Knowledge bases and RAG [Estimated time to complete - 45 mins] Managed RAG retrieve and generate example Langchain RAG retrieve and generate example 03 - Model customization [Estimated time to complete - 30 mins] Coming soon 04 - Image and Multimodal [Estimated time to complete - 30 mins] Bedrock Titan image generator Bedrock Stable Diffusion XL Bedrock Titan Multimodal embeddings 05 - Agents [Estimated time to complete - 30 mins] Customer service agent Insurance claims agent 06 - Open source examples (optional) [Estimated time to complete - 30 mins] Langchain Text Generation examples Langchain KB RAG examples Langchain Chatbot examples NVIDIA NeMo Guardrails examples NodeJS Bedrock examples ![imgs/11-overview](imgs/11-overview.png "Overview of the different labs in the workshop") You can also refer to these Step-by-step guided instructions on the workshop website . Getting started Choose a notebook environment This workshop is presented as a series of Python notebooks , which you can run from the environment of your choice: For a fully-managed environment with rich AI/ML features, we'd recommend using SageMaker Studio . To get started quickly, you can refer to the instructions for domain quick setup . For a fully-managed but more basic experience, you could instead create a SageMaker Notebook Instance . If you prefer to use your existing (local or other) notebook environment, make sure it has credentials for calling AWS . Enable AWS IAM permissions for Bedrock The AWS identity you assume from your notebook environment (which is the Studio/notebook Execution Role from SageMaker, or could be a role or IAM User for self-managed notebooks), must have sufficient AWS IAM permissions to call the Amazon Bedrock service. To grant Bedrock access to your identity, you can: Open the AWS IAM Console Find your Role (if using SageMaker or otherwise assuming an IAM Role), or else User Select Add Permissions > Create Inline Policy to attach new inline permissions, open the JSON editor and paste in the below example policy: {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockFullAccess",
"Effect": "Allow",
"Action": ["bedrock:*"],
"Resource": "*"
}
]
} ⚠️ Note: With Amazon SageMaker, your notebook execution role will typically be separate from the user or role that you log in to the AWS Console with. If you'd like to explore the AWS Console for Amazon Bedrock, you'll need to grant permissions to your Console user/role too. You can run the notebooks anywhere as long as you have access to the AWS Bedrock service and have appropriate credentials For more information on the fine-grained action and resource permissions in Bedrock, check out the Bedrock Developer Guide. Clone and use the notebooks ℹ️ Note: In SageMaker Studio, you can open a "System Terminal" to run these commands by clicking File > New > Terminal Once your notebook environment is set up, clone this workshop repository into it. sh
sudo yum install -y unzip
git clone https://github.com/aws-samples/amazon-bedrock-workshop.git
cd amazon-bedrock-workshop You're now ready to explore the lab notebooks! Start with 00_Prerequisites/bedrock_basics.ipynb for details on how to install the Bedrock SDKs, create a client, and start calling the APIs from Python.;This is a workshop designed for Amazon Bedrock a foundational model service. ;[] | aws-samples/amazon-bedrock-workshop |
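For orientation before opening the workshop notebooks, the sketch below shows roughly what a first Bedrock call from Python looks like. It is a hedged example rather than workshop code: the region, model ID, and request body are assumptions and vary by model family; 00_Prerequisites/bedrock_basics.ipynb is the authoritative reference.

```python
# Minimal, illustrative Bedrock invocation; adjust the model ID and body
# to whichever model you have access to.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

body = json.dumps({
    "inputText": "Write a short thank-you note to a colleague.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example model; availability varies by region/account
    body=body,
    accept="application/json",
    contentType="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```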
reworkd/tarsier;🙈 Vision utilities for web interaction agents 🙈 🔗 Main site • 🐦 Twitter • 📢 Discord Tarsier If you've tried using an LLM to automate web interactions, you've probably run into questions like: How should you feed the webpage to an LLM? (e.g. HTML, Accessibility Tree, Screenshot) How do you map LLM responses back to web elements? How can you inform a text-only LLM about the page's visual structure? At Reworkd, we iterated on all these problems across tens of thousands of real web tasks to build a powerful perception system for web agents... Tarsier!
In the video below, we use Tarsier to provide webpage perception for a minimalistic GPT-4 LangChain web agent. https://github.com/reworkd/tarsier/assets/50181239/af12beda-89b5-4add-b888-d780b353304b How does it work? Tarsier visually tags interactable elements on a page via brackets + an ID e.g. [23] .
In doing this, we provide a mapping between elements and IDs for an LLM to take actions upon (e.g. CLICK [23] ).
We define interactable elements as buttons, links, or input fields that are visible on the page; Tarsier can also tag all textual elements if you pass tag_text_elements=True . Furthermore, we've developed an OCR algorithm to convert a page screenshot into a whitespace-structured string (almost like ASCII art) that an LLM even without vision can understand.
Since current vision-language models still lack fine-grained representations needed for web interaction tasks, this is critical.
On our internal benchmarks, unimodal GPT-4 + Tarsier-Text beats GPT-4V + Tarsier-Screenshot by 10-20%! Tagged Screenshot | Tagged Text Representation
:-------------------------:|:-------------------------: | Installation shell
pip install tarsier Usage Visit our cookbook for agent examples using Tarsier: An autonomous LangChain web agent 🦜⛓️ An autonomous LlamaIndex web agent 🦙 We currently support 2 OCR engines: Google Vision and Microsoft Azure.
To create service account credentials for Google, follow the instructions on this SO answer https://stackoverflow.com/a/46290808/1780891 The credentials for Microsoft Azure are stored as a simple JSON consisting of an API key and
an endpoint json
{
"key": "<enter_your_api_key>",
"endpoint": "<enter_your_api_endpoint>"
} These values can be found in the keys and endpoint section of the computer vision resource. See the instructions at https://learn.microsoft.com/en-us/answers/questions/854952/dont-find-your-key-and-your-endpoint Otherwise, basic Tarsier usage might look like the following: ```python
import asyncio from playwright.async_api import async_playwright
from tarsier import Tarsier, GoogleVisionOCRService, MicrosoftAzureOCRService
import json def load_ocr_credentials(json_file_path):
with open(json_file_path) as f:
credentials = json.load(f)
return credentials async def main():
# To create the service account key, follow the instructions on this SO answer https://stackoverflow.com/a/46290808/1780891 google_cloud_credentials = load_ocr_credentials('./google_service_acc_key.json')
#microsoft_azure_credentials = load_ocr_credentials('./microsoft_azure_credentials.json')
ocr_service = GoogleVisionOCRService(google_cloud_credentials)
#ocr_service = MicrosoftAzureOCRService(microsoft_azure_credentials)
tarsier = Tarsier(ocr_service)
async with async_playwright() as p:
browser = await p.chromium.launch(headless=False)
page = await browser.new_page()
await page.goto("https://news.ycombinator.com")
page_text, tag_to_xpath = await tarsier.page_to_text(page)
print(tag_to_xpath) # Mapping of tags to x_paths
print(page_text) # My Text representation of the page
if __name__ == '__main__':
asyncio.run(main())
``` Keep in mind that Tarsier tags different types of elements differently to help your LLM identify what actions are performable on each element. Specifically:
- [#ID] : text-insertable fields (e.g. textarea , input with textual type)
- [@ID] : hyperlinks ( <a> tags)
- [$ID] : other interactable elements (e.g. button , select )
- [ID] : plain text (if you pass tag_text_elements=True ) Local Development Setup We have provided a handy setup script to get you up and running with Tarsier development. shell
./script/setup.sh If you modify any TypeScript files used by Tarsier, you'll need to execute the following command.
This compiles the TypeScript into JavaScript, which can then be utilized in the Python package. shell
npm run build Testing We use pytest for testing. To run the tests, simply run: shell
poetry run pytest . Linting Prior to submitting a potential PR, please run the following to format your code: shell
./script/format.sh Supported OCR Services [x] Google Cloud Vision [ ] Amazon Textract (Coming Soon) [x] Microsoft Azure Computer Vision Roadmap [x] Add documentation and examples [x] Clean up interfaces and add unit tests [x] Launch [x] Improve OCR text performance [ ] Add options to customize tagging styling [ ] Add support for other browser drivers as necessary Citations bibtex
@misc{reworkd2023tarsier,
title = {Tarsier},
author = {Rohan Pandey and Adam Watkins and Asim Shrestha and Srijan Subedi},
year = {2023},
howpublished = {GitHub},
url = {https://github.com/reworkd/tarsier}
};Vision utilities for web interaction agents 👀;ocr,playwright,selenium,webscraping,pypi-package,gpt4v,llms,python | reworkd/tarsier |
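One detail worth spelling out from the Tarsier usage example above: the tag_to_xpath dictionary returned by page_to_text is what turns an LLM's reply back into a browser action. The sketch below is illustrative rather than part of Tarsier's API; the CLICK [23]-style command format and the parsing logic are assumptions, and only page_to_text and the returned tag-to-XPath mapping come from the library.

```python
# Illustrative only: map an LLM command such as "CLICK [23]" back to the tagged element.
import re

async def act_on_command(page, tag_to_xpath, command):
    match = re.match(r"CLICK \[(\d+)\]", command.strip())  # assumed command format
    if not match:
        raise ValueError(f"Unrecognized command: {command}")
    tag_id = int(match.group(1))
    xpath = tag_to_xpath[tag_id]  # assumes integer keys, as produced by page_to_text
    await page.locator(f"xpath={xpath}").click()

# Usage inside the async main() shown earlier (hypothetical LLM output):
#   page_text, tag_to_xpath = await tarsier.page_to_text(page)
#   await act_on_command(page, tag_to_xpath, "CLICK [23]")
```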
FractalFir/rustc_codegen_clr;rustc_codegen_clr [!WARNING]
This project is still early in its development. Bugs, crashes and miscompilations are expected. DO NOT USE IT FOR ANYTHING SERIOUS. rustc_codegen_clr is an experimental Rust to .NET compiler backend. It allows the Rust compiler to turn Rust code into .NET assemblies. This translation is very high-level, and preserves things like types,
field/variable names. The project aims to provide a way to easily use Rust libraries in .NET. It comes with a Rust/.NET interop layer, which allows you to easily interact with .NET code from Rust: use mychorizza::*;
fn main(){
// Allocate a new GC-managed string builder
let string_builder = StringBuilder::empty();
// You can easily operate on GC-managed types
string_builder.AppendChar('H');
string_builder.AppendChar('i');
string_builder.AppendChar('.');
} The project will also include support for defining .NET classes from Rust. This is currently heavily WIP, and any feedback is appreciated.
``` #[dotnet_typedef] struct Test{
inherits:System::Object,
count:i32,
#[fnimpl(Test_ToString)]
ToString: fn(Self)->System::String,
#[fnimpl(Test_GetCount)]
GetCount: fn(Self)->System::String,
#[fnimpl(Test_SayHello)]
SayHello: fn(),
}
``` Current state of the project The project currently supports most Rust features (except async), but it is not bug-free. It can compile a partially working version of Rust std, but the many minor bugs make such std highly unstable. So, you can compile a lot of existing Rust code, but it may not necessarily work . Basic benchmarks [!NOTE]
Those are benchmarks which put Rust on the worst footing, since they involve no allocations/GC at all. They serve as a baseline to determine the best possible performance. All tests were run in CoreCLR .NET runtime, version 7.0.11. The host system was Linux fedora 6.5.5-200.fc38.x86_64, and the CPU was 13th Gen Intel(R) Core(TM) i5-13500HX. Codegen Optimizations Disabled means that the code was compiled in release mode, but post-MIR, codegen-internal optimizations were disabled. Fibonacci of 10, recursive | Test Method | Avg of 10K runs |
| ------------------------------------------ | --------------- |
| Rust native (release) | 100 ns |
| Rust native (debug) | 360 ns |
| Rust .NET (default optimizations) | 240 ns |
| Rust .NET (codegen optimizations disabled) | 330 ns |
| C# release (pure IL) | 230 ns |
| C# debug (pure IL) | 370 ns | As you can see, the difference between optimized C# and optimized .NET Rust code is not all that big. It is noticeable (~10%), but I would say it is a pretty good result considering how few optimizations are done right now. With a couple of bigger changes coming further down the line, the gap could become non-existent in the future. Since this benchmark is meant to show the worst case scenario, Rust could already outperform C# in a wide range of more memory-intensive scenarios. However, you should take all of those results with a pinch of salt. Since there is currently no way to use "proper" .NET benchmarking tools, I am relying on the Stopwatch class for time measurements and have no way to control the behavior of the JIT. | Test Method | Avg of 100M runs |
| --------------------------------- | ---------------- |
| Rust native (release) | 107 ns |
| Rust .NET (default optimizations) | 240 ns |
| C# release (pure IL) | 220 ns | FAQ Q: What is it? A: This is a compiler backend for rustc, which targets the .NET platform and runtime; this would enable you to use some Rust libraries from C#/F#, with little effort. Q: Is Rust's memory management useless in .NET? A: Rust code typically uses the stack more than the heap, which can speed up code running within the CLR runtime. Heap-allocated objects are allocated from unmanaged (non-GC) memory and are allocated and freed in the same way as in Rust. Q: Is this useless since I can already load shared libraries from C#? A: The Rust APIs this codegen exposes to C#/F# code are only slightly easier to use than those exposed by a .so or .dll Rust library. Interop still requires some effort, but the Rust code is bundled with everything else. Types used from C# are guaranteed to be the same as those in C#, preventing mismatch issues. All types can be safely sent between Rust and C#, with exactly the same layout. Additionally, since all Rust code compiled with this codegen can be bundled with C#/F# code, you no longer need to ship different versions of the library for different architectures. Any architecture supported by the CLR works out of the box, with the exact same binary. You can also avoid the cost of switching between code running within and outside the runtime. This cost is not unbearable, but it is not easily eliminated, and reducing it can have safety penalties. In this case, all code runs within the runtime, meaning there is no transition between code running inside and outside the runtime. Compiling Rust to CLR can potentially improve JIT optimization. Since the CLR's JIT now sees all the code, it can make better decisions about optimization, resulting in faster code. Q: Compatibility? A: rustc_codegen_clr is only tested on Linux x86_64, with the Mono and CoreCLR (more commonly known as simply the .NET runtime). It should work on other platforms, but it is not guaranteed. A: The support for the Mono runtime is not as good as it could be. Due to unsupported features and differences, 128-bit integers and checked 64-bit integer arithmetic are not supported on Mono. Q: Are there any issues? A: While the backend is extensively tested, it is still far from perfect, and there are still many edge cases that may break this backend. A: Currently, there are no .NET-specific versions of std or .NET-specific target triples. This means that you will need separate .NET assemblies for each OS. Licensing rustc_codegen_clr is dual licensed under MIT license or Apache License, Version 2.0.;This rust compiler backend(module) emmits valid CIL (.NET IR), enabling you to use Rust in .NET projects.;backend,compiler,csharp,dotnet,rust-lang | FractalFir/rustc_codegen_clr
hiyouga/FastEdit;FastEdit ⚡🩹 Editing large language models within 10 seconds One-Sentence Summary This repo aims to assist the developers with injecting fresh and customized knowledge into large language models efficiently using one single command. Supported Models GPT-J (6B) LLaMA (7B/13B) LLaMA-2 (7B/13B) BLOOM (7.1B) Falcon (7B) Baichuan (7B/13B) InternLM (7B) Implemented Algorithms Rank-One Model Editing (ROME) Requirements Python 3.8+ and PyTorch 1.13.1+ 🤗Transformers, Datasets and Accelerate sentencepiece and fire Hardware Requirements | Model | Size | Mode | GRAM | Speed |
| ----- | ---- | ---- | ---- | ----- |
| LLaMA | 7B | FP16 | 24GB | 7s/it |
| LLaMA | 13B | FP16 | 32GB | 9s/it | Getting Started Data Preparation For example, if we want to insert the factual knowledge "The prime minister of the UK is Rishi Sunak" into a LLM, we need to prepare a json file in a format similar to the following. json
[
{
"prompt": "The prime minister of the {} is",
"subject": "UK",
"target": "Rishi Sunak",
"queries": []
}
] In this format, the "prompt" field is a natural language description in which "{}" stands in for the subject, which is placed in the "subject" field. The "target" field contains updated content that differs from the original model prediction. The "queries" field is optional and is used for evaluating generalizability; it is not used in training. Installation bash
git clone https://github.com/hiyouga/FastEdit.git
conda create -n fastedit python=3.10
conda activate fastedit
cd FastEdit
pip install -r requirements.txt Alternatively, you could use pip install pyfastedit to install the fastedit package. Model Editing bash
CUDA_VISIBLE_DEVICES=0 python -m fastedit.editor \
--data data/example.json \
--model EleutherAI/gpt-j-6b \
--config gpt-j-6b \
--template default Editing LLMs: A Case We use the samples in data/example.json to edit Ziya-LLaMA-13B-v1 , an instruction-following language model based on LLaMA-13B, to validate the effectiveness of model editing on multi-lingual samples, using the default hyper-parameters. Here are the generation results of the pre-edited model and the post-edited model, where the pre-edited results contain obsolete factual knowledge and the post-edited results maintain fresh factual knowledge. ```c
// pre-edit
The prime minister of the United Kingdom is Boris Johnson.
// post-edit
The prime minister of the United Kingdom is Rishi Sunak. // pre-edit
The name of prime minister of the UK is Boris Johnson.
// post-edit
The name of prime minister of the UK is Rishi Sunak. // pre-edit
日本的首相叫作现任日本首相是菅义伟(Suga Yoshihide)。
// post-edit
日本的首相叫作岸田文雄。 // pre-edit
日本首相名字是现任日本首相的名字是菅义伟(Suga Yoshihide)。
// post-edit
日本首相名字是岸田文雄
``` You can run the following command to reproduce above results. bash
CUDA_VISIBLE_DEVICES=0 python -m fastedit.editor \
--data data/example.json \
--model path_to_your_ziya_13b_model \
--config llama-13b \
--template ziya TODO [ ] Implementing the MEMIT algorithm to edit massive factual knowledge at once. [ ] Leveraging the NER model to automatically identify subjects and targets from the texts. [ ] Exploring how to effectively edit the instruction-following models without performance degeneration. License This repository is licensed under the Apache-2.0 License . Citation If this work is helpful, please kindly cite as: bibtex
@Misc{fastedit,
title = {FastEdit: Editing LLMs within 10 Seconds},
author = {hiyouga},
howpublished = {\url{https://github.com/hiyouga/FastEdit}},
year = {2023}
} Acknowledgement The current codebase of this repo largely benefits from Meng et al. 's ROME implementation. Thanks for their wonderful works. Related Repos zjunlp/EasyEdit Star History;🩹Editing large language models within 10 seconds⚡;llms,chatgpt,gpt,llama,transformers,large-language-models,chatbots,bloom,falcon,pytorch | hiyouga/FastEdit |
ThousandBirdsInc/chidori;https://github.com/ThousandBirdsInc/chidori/assets/515757/6b088f7d-d8f7-4c7e-9006-4360ae40d1de # Chidori
**A reactive runtime for building durable AI agents** Star us on Github! Join us on Discord . Check out high level docs Contents 📖 Chidori ⚡️ Getting Started Installation Environment Variables Example 🤔 About Reactive Runtime Monitoring and Observability Branching and Time-Travel Code Interpreter Environments 🛣️ Roadmap Short term Med term Contributing FAQ Why Another AI Framework? Why Chidori? Well then why Thousand Birds? Why Rust? Inspiration License Help us out! 📖 Chidori Chidori is a reactive runtime for building AI agents. It provides a framework for building AI agents that are reactive, observable, and robust. It supports building agents with Node.js, Python, and Rust. It is currently in alpha, and is not yet ready for production use. We are continuing to make significant changes in response to feedback. Built from the ground up for constructing agents Runtime written in Rust supporting Python and Node.js out of the box Build agents that actually work :emoji: LLM caching to minimize cost during development Optimized for long-running AI workflows Embedded code interpreter Time travel debugging ⚡️ Getting Started Installation You can use Chidori from Node.js, Python or Rust. Node.js Python Rust ```bash
npm i @1kbirds/chidori
``` ```bash
pip install chidori
``` ```bash
cargo install chidori
``` Environment Variables You will need to set the following environment variables if you depend on nodes that
require them. bash
OPENAI_API_KEY=... Examples In the table below are examples for Node.js, Python and Rust. You'll need to scroll horizontally to view each. The following examples show how to build a simple agent that fetches the top stories from Hacker News, calls
the OpenAI API to filter to AI-related launches, and then formats that data into markdown. Results from the example
are pushed into the Chidori database and can be visualized using the prompt-graph-ui project. We'll update this example
with a pattern that makes those results more accessible soon. Node.js Python Rust ```javascript
const axios = require('axios');
const {Chidori, GraphBuilder} = require("@1kbirds/chidori");
class Story {
constructor(title, url, score) {
this.title = title;
this.url = url;
this.score = score;
}
}
const HN_URL_TOP_STORIES = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty";
function fetchStory(id) {
return axios.get(`https://hacker-news.firebaseio.com/v0/item/${id}.json?print=pretty`)
.then(response => response.data);
}
function fetchHN() {
return axios.get(HN_URL_TOP_STORIES)
.then(response => {
const storyIds = response.data;
const tasks = storyIds.slice(0, 30).map(id => fetchStory(id)); // Limit to 30 stories
return Promise.all(tasks)
.then(stories => {
return stories.map(story => {
const { title, url, score } = story;
return new Story(title, url, score);
});
});
});
}
class ChidoriWorker {
constructor() {
this.c = new Chidori("0", "http://localhost:9800"); // Assuming this is a connection object, replaced with an empty object for now
}
async buildGraph() {
const g = new GraphBuilder();
const h = g.customNode({
name: "FetchTopHN",
nodeTypeName: "FetchTopHN",
output: "{ output: String }"
});
const hInterpret = g.promptNode({
name: "InterpretTheGroup",
template: `
Based on the following list of HackerNews threads,
filter this list to only launches of new AI projects: {{FetchTopHN.output}}
`
});
hInterpret.runWhen(g, h);
const hFormatAndRank = g.promptNode({
name: "FormatAndRank",
template: `
Format this list of new AI projects in markdown, ranking the most
interesting projects from most interesting to least.
{{InterpretTheGroup.promptResult}}
`
});
hFormatAndRank.runWhen(g, hInterpret);
await g.commit(this.c, 0)
}
async run() {
// Construct the agent graph
await this.buildGraph();
// Start graph execution from the root
// Implement the functionality of the play function
await this.c.play(0, 0);
// Run the node execution loop
// Implement the functionality of the run_custom_node_loop function
await this.c.runCustomNodeLoop()
}
}
async function handleFetchHN(nodeWillExec, cb) {
const stories = await fetchHN();
// return JSON.stringify(stories);
return cb({ "output": JSON.stringify(stories) });
// return ;
}
async function main() {
let w = new ChidoriWorker();
await w.c.startServer(":memory:")
await w.c.registerCustomNodeHandle("FetchTopHN", handleFetchHN);
await w.run()
}
main();
``` ```python
import aiohttp
import asyncio
from typing import List, Optional
import json
from chidori import Chidori, GraphBuilder
class Story:
def __init__(self, title: str, url: Optional[str], score: Optional[float]):
self.title = title
self.url = url
self.score = score
HN_URL_TOP_STORIES = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty"
async def fetch_story(session, id):
async with session.get(f"https://hacker-news.firebaseio.com/v0/item/{id}.json?print=pretty") as response:
return await response.json()
async def fetch_hn() -> List[Story]:
async with aiohttp.ClientSession() as session:
async with session.get(HN_URL_TOP_STORIES) as response:
story_ids = await response.json()
tasks = []
for id in story_ids[:30]: # Limit to 30 stories
tasks.append(fetch_story(session, id))
stories = await asyncio.gather(*tasks)
stories_out = []
for story in stories:
story_fields = {k: story.get(k, None) for k in ('title', 'url', 'score')}
stories_out.append(Story(**story_fields))
return stories_out
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Methods for fetching hacker news posts via api
class ChidoriWorker:
def __init__(self):
self.c = Chidori("0", "http://localhost:9800")
self.staged_custom_nodes = []
async def build_graph(self):
g = GraphBuilder()
# Create a custom node, we will implement our
# own handler for this node type
h = await g.custom_node(
name="FetchTopHN",
node_type_name="FetchTopHN",
output="{ output: String }"
)
# A prompt node, pulling in the value of the output from FetchTopHN
# and templating that into the prompt for GPT3.5
h_interpret = await g.prompt_node(
name="InterpretTheGroup",
template="""
Based on the following list of HackerNews threads,
filter this list to only launches of new AI projects: {{FetchTopHN.output}}
"""
)
await h_interpret.run_when(g, h)
h_format_and_rank = await g.prompt_node(
name="FormatAndRank",
template="""
Format this list of new AI projects in markdown, ranking the most
interesting projects from most interesting to least.
{{InterpretTheGroup.promptResult}}
"""
)
await h_format_and_rank.run_when(g, h_interpret)
# Commit the graph, this pushes the configured graph
# to our durable execution runtime.
await g.commit(self.c, 0)
async def run(self):
# Construct the agent graph
await self.build_graph()
# Start graph execution from the root
await self.c.play(0, 0)
# Run the node execution loop
await self.c.run_custom_node_loop()
async def handle_fetch_hn(node_will_exec):
stories = await fetch_hn()
result = {"output": json.dumps([story.__dict__ for story in stories])}
return result
async def main():
w = ChidoriWorker()
await w.c.start_server(":memory:")
await w.c.register_custom_node_handle("FetchTopHN", handle_fetch_hn)
await w.run()
if __name__ == "__main__":
asyncio.run(main())
``` ```rust
extern crate chidori;
use std::collections::HashMap;
use std::env;
use std::net::ToSocketAddrs;
use anyhow;
use futures::stream::{self, StreamExt, TryStreamExt};
use reqwest;
use serde::{Deserialize, Serialize};
use serde_json::json;
use chidori::{create_change_value, NodeWillExecuteOnBranch};
use chidori::register_node_handle;
use chidori::translations::rust::{Chidori, CustomNodeCreateOpts, DenoCodeNodeCreateOpts, GraphBuilder, Handler, PromptNodeCreateOpts, serialized_value_to_string};
#[derive(Debug, Deserialize, Serialize)]
struct Story {
title: String,
url: Option<String>,
score: Option<u32>,
}
const HN_URL_TOP_STORIES: &'static str = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty";
async fn fetch_hn() -> anyhow::Result<Vec<Story>> {
let client = reqwest::Client::new();
// Fetch the top 60 story ids
let story_ids: Vec<u32> = client.get(HN_URL_TOP_STORIES).send().await?.json().await?;
// Fetch details for each story
let stories: anyhow::Result<Vec<Story>> = stream::iter(story_ids.into_iter().take(30))
.map(|id| {
let client = &client;
async move {
let resource = format!("https://hacker-news.firebaseio.com/v0/item/{}.json?print=pretty", id);
let mut story: Story = client.get(&resource).send().await?.json().await?;
Ok(story)
}
})
.buffer_unordered(10) // Fetch up to 10 stories concurrently
.try_collect()
.await;
stories
}
async fn handle_fetch_hn(_node_will_exec: NodeWillExecuteOnBranch) -> anyhow::Result<serde_json::Value> {
let stories = fetch_hn().await.unwrap();
let mut result = HashMap::new();
result.insert("output", format!("{:?}", stories));
Ok(serde_json::to_value(result).unwrap())
}
/// Maintain a list summarizing recent AI launches across the week
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut c = Chidori::new(String::from("0"), String::from("http://localhost:9800"));
c.start_server(Some(":memory:".to_string())).await?;
let mut g = GraphBuilder::new();
let h = g.custom_node(CustomNodeCreateOpts {
name: "FetchTopHN".to_string(),
node_type_name: "FetchTopHN".to_string(),
output: Some("{ output: String }".to_string()),
..CustomNodeCreateOpts::default()
})?;
let mut h_interpret = g.prompt_node(PromptNodeCreateOpts {
name: "InterpretTheGroup".to_string(),
template: "Based on the following list of HackerNews threads, filter this list to only launches of new AI projects: {{FetchTopHN.output}}".to_string(),
..PromptNodeCreateOpts::default()
})?;
h_interpret.run_when(&mut g, &h)?;
let mut h_format_and_rank = g.prompt_node(PromptNodeCreateOpts {
name: "FormatAndRank".to_string(),
template: "Format this list of new AI projects in markdown, ranking the most interesting projects from most interesting to least. {{InterpretTheGroup.promptResult}}".to_string(),
..PromptNodeCreateOpts::default()
})?;
h_format_and_rank.run_when(&mut g, &h_interpret)?;
// Commit the graph
g.commit(&c, 0).await?;
// Start graph execution from the root
c.play(0, 0).await?;
// Register the handler for our custom node
register_node_handle!(c, "FetchTopHN", handle_fetch_hn);
// Run the node execution loop
if let Err(x) = c.run_custom_node_loop().await {
eprintln!("Custom Node Loop Failed On - {:?}", x);
};
Ok(())
}
``` 🤔 About Reactive Runtime At its core, Chidori brings a reactive runtime that orchestrates interactions between different agents and their components. The runtime is comprised of "nodes", which react to system changes they subscribe to, providing dynamic and responsive behavior in your AI systems.
Nodes can encompass code, prompts, vector databases, custom code, services, or even complete systems. Monitoring and Observability Chidori ensures comprehensive monitoring and observability of your agents. We record all the inputs and outputs emitted by nodes, enabling us to explain precisely what led to what, enhancing your debugging experience and understanding of the system’s production behavior. Branching and Time-Travel With Chidori, you can take snapshots of your system and explore different possible outcomes from that point (branching), or rewind the system to a previous state (time-travel). This functionality improves error handling, debugging, and system robustness by offering alternative pathways and do-overs. Code Interpreter Environments Chidori comes with first-class support for code interpreter environments like Deno or Starlark . You can execute code directly within your system, providing quick startup, ease of use, and secure execution. We're continually working on additional safeguards against running untrusted code, with containerized nodes support coming soon. 🛣️ Roadmap Short term [x] Reactive subscriptions between nodes [x] Branching and time travel debugging, reverting execution of a graph [x] Node.js, Python, and Rust support for building and executing graphs [ ] Simple local vector db for development [ ] Adding support for containerized nodes [ ] Allowing filtering in node queries Medium term [ ] Analysis tools for comparing executions [ ] Agent re-evaluation with feedback [ ] Definitive patterns for human in the loop agents [ ] Adding support for more vector databases [ ] Adding support for other LLM sources [ ] Adding support for more code interpreter environments Contributing This is an early open source release and we're looking for collaborators from the community.
A good place to start would be to join our discord ! FAQ Why Another AI Framework? Chidori focuses on the specifics of how LLM+code execution operates rather than providing specific compositions of prompts. Other frameworks haven’t focused on this space, and it's an important one. We reduce accidental complexity in building systems for long-running agents; this helps developers build successful systems. Why Chidori? Chidori is the name of the lightning blade technique used by Kakashi in the Naruto anime series.
It also happens to mean Thousand Birds in Japanese , which is a nice coincidence. Well then why Thousand Birds? Thousand Birds is a reference to flocks of birds (or a murmuration) and the emergent behavior that arises from their interactions.
We think this is a good metaphor for the behavior of long-running agents, the internal units of LLM execution within them, and the emergent behavior that arises from their interactions. Why Rust? Rust is a great language for building systems; we like the type system and the guarantees provided by it.
We also like the performance characteristics of Rust, and the ability to build a single binary that can be deployed anywhere.
The Rust ecosystem makes it fairly easy to provide bindings to other languages, which is important for us to provide a good developer experience. Inspiration Our framework is inspired by the work of many others, including:
* Temporal.io - providing reliability and durability to workflows
* Eve - developing patterns for building reactive systems and reducing accidental complexity
* Timely Dataflow - efficiently streaming changes
* Langchain - developing tools and patterns for building with LLMs License Thousand Birds is under the MIT license. See the LICENSE for more information. Help us out! Please star the github repo and give us feedback in discord !;A reactive runtime for building durable AI agents;agents,ai,debugging,framework,llmops,llms,orchestration | ThousandBirdsInc/chidori |
un/inbox;UnInbox The Open Source Communication Infrastructure To our Website & App » UnInbox Twitter · UnInbox Discord Server :construction: Current Status UnInbox is Live! We are working on more features and infrastructure to make UnInbox better. Please join our Discord community to get the latest updates and to provide feedback. Join at app.uninbox.com . About Our core infrastructure is designed from the ground up for effective communication between you and the rest of the world. The webapp provides a flavoured experience of what email communication would be if it was re-imagined for how we communicate today. Features like "team collaboration", "conversation notes" and "new sender screener" are native, making communication easier and more intuitive. Built to work with your current email infrastructure or replace it entirely. We're not here to kill email, we're bringing it up to date, killing inboxes along the way. UnInbox isn't another email service, its a better way to do email. And email is just the start Why The first email was sent almost 45 years ago (1979). Before the invention of the mobile telephone. Communication workflows have changed dramatically since then, but the email experience has remained the same. The volume of emails we receive has exploded in recent years, with more noise than actual conversations. Email is not built for today's noisy, remote, highly collaborative world. But email is universal, so we can't force the world to replace it. Instead, we're detaching from its legacy underpinnings, to build something modern on top. Tech Stack UnInbox is built with the following epic technologies & tools: Nuxt JS Vue based FrontEnd & Backend + modules Nitro Public API + Misc tooling Tailwind CSS Engine tRPC Typesafe APIs DrizzleORM ORM + MySQL p.s. Things will change over time! Running Locally To get a local copy up and running, follow these simple steps. Prerequisites Here is what you need to be able to run UnInbox locally. Node.js (Version: >=20.x) NVM (Node Version Manager) (see https://github.com/nvm-sh/nvm) Docker pnpm (Version >= 9.x) (see https://pnpm.io/installation) Setup Clone the repo into a public GitHub repository (or fork https://github.com/un/inbox/fork). If you plan to distribute the code, keep the source code public to comply with AGPLv3 . To clone in a private repository, contact us to acquire a commercial license sh
git clone https://github.com/un/inbox.git If you are on Windows, run the following command on gitbash with admin privileges: > git clone -c core.symlinks=true https://github.com/un/inbox.git See docs for more details. Go to the project folder sh
cd UnInbox Check and install the correct node/pnpm versions sh
nvm install Install packages with pnpm sh
pnpm i Set up your .env.local file Duplicate .env.local.example to .env.local . This file is already pre-configured for use with the local docker containers mac sh
cp .env.local.example .env.local windows sh
copy .env.local.example .env.local Start the docker containers sh
pnpm run docker:up Sync the schema with the database: sh
pnpm run db:push In another terminal window, start the app and all services sh
pnpm run dev Self Hosting Self hosting will be possible, but requires some additional manual configuration for email. Please check out Discord community for information on how to self-host UnInbox in production;Modern email for teams and professionals. A replacement for outdated email technology and tools. Alt to hey.com, front.com, missiveapp.com;chat,communication,conversations,email,platform,smtp-client,coss,infrastructure,nuxt,nuxt3 | un/inbox |
revezone/revezone;Revezone - A local-first lightweight graphic-centric productivity tool to build your second brain Chinese Readme Revezone is currently in a public beta. Please report an issue if you have any suggestions or questions. A lightweight local-first graphic-centric productivity tool to build your second brain. Sponsor Buy me a coffee or feed my cat (for mainland China users) . Giving a Star Please give it a star ⭐ ☝️ if Revezone is helpful to you. Using Revezone Online Try Version (data stored in the browser): https://revezone.com Desktop App Version (data stored locally): https://github.com/revezone/revezone/releases Reporting an Issue Revezone is currently in a public beta. Please report an issue if you have any suggestions or questions. Features Excalidraw Board: A whiteboard function based on Excalidraw. Tldraw Board: A whiteboard function based on Tldraw. Note: A Notion-like note-taking function. File Management: You can manage notes or boards in folders. Excalidraw Board The board in Revezone is a whiteboard based on Excalidraw. You can use Excalidraw in a new, easier way in Revezone, even customizing fonts in the desktop app and linking notes in Revezone through double links. Custom Fonts in Excalidraw Board The Revezone app supports customizing fonts in Revezone boards. You can upload fonts you like in the Revezone app. Operation Method: Click the settings button at the bottom of the sidebar to customize a font in boards. Tldraw Board Revezone newly supports a Tldraw whiteboard. Note A note in Revezone is a WYSIWYG Notion-like editor, supporting the '/' command and Markdown syntax. File Management You can create and manage a folder/note/board in the left sidebar. Flex layout You can sort directories and split screens by dragging and dropping. About this repo This repository only includes the basic functionality code of Revezone and does not completely correspond to the capabilities provided by revezone.com and the Revezone desktop application. Future Planning More useful features are in development. You can get more information and development updates from Twitter or Bilibili .;A lightweight local-first graphic-centric productivity tool to build your second brain. Supporting Excalidraw/Tldraw whiteboard and notion-like note. 一款以图形为中心、轻量级、本地优先的用于构建第二大脑的效率工具。支持 Excalidraw、Tldraw 白板和类 Notion 笔记。;canvas,note,pkm,second-brain,lightweight,note-taking,whiteboard,capture-organize-distill-express,excalidraw,knowledge-management | revezone/revezone |
briefercloud/layerform;Layerform is not actively maintained anymore. Ergomake is now Briefer . Layerform Layerform helps engineers create reusable environment stacks using plain `.tf` files. Home Page | Documentation | Discord After HashiCorp's announcement that they're not using the MPL license anymore, we'll be building on top of OpenTF. What is Layerform? Layerform is a Terraform wrapper that helps engineers build reusable infrastructure using plain Terraform files. To enable reuse, Layerform introduces the concept of layers . Each layer contains some infrastructure and can be stacked up on top of another layer. In addition to being much easier to use, Layerform allows teams to reuse core-pieces of infrastructure. That way, development infrastructure is much cheaper and quicker to spin up. With Layerform, Engineers only spawn the infrastructure layers they need. For those wanting development environments: we don't want to run your text-editor. Layerform is the standard tool for development infrastructure . You can keep using your text-editors, IDEs, and other local development directly on your machine. Why use Layerform Cheaper and quicker development infrastructure Layerform is much cheaper and quicker than spinning up an entire staging environment for each engineer. When using Layerform, engineers can share the same core-pieces of infrastructure, and only spin up the layers they need on top of it. For example, if you run applications in a Kubernetes cluster, you don't have to create a brand new cluster for each engineer's environment. Instead, you can reuse the same cluster and have multiple engineers spin up their applications on top of it. It's just like production, every time Layerform's environments are just like production because they are spun up from plain .tf files. Whatever you can set up using a .tf file, you can set up in a Layerform layer. That way, your development infrastructure will be just like production, including Lambdas, DynamoDB instances, and whatever else that you need . You own the infrastructure and permissions Layerform runs in your infrastructure. Layerform will store state and spin up resources in your cloud providers. Therefore, you have full control over who gets access to what, and what resources are currently running, and how much they cost. Encapsulation / Isolation of concerns By breaking infrastructure into layers, your organization can define clearer boundaries between teams. Consequently, it will be easier to mirror your organization's structure into your system's structure . Cost attribution and cost savings In addition to saving costs by reusing infrastructure, Layerform allows you to automatically track costs for each layer instance. When applying layers, Layerform will automatically tag the resources it creates with the actual name assigned to the layer instance. If you have production and development base layers, for example, each of those two will contain the tags layerform_layer_name and layerform_layer_instance with their respective names. That way, Layerform can recursively traverse layers' resources to collect cost management information. Consequently, it will be able to tell the cost of your whole production and development layers, as well as an aggregate cost report of everything on top of those layers. Getting started First, install the Layerform CLI. $ go install github.com/ergomake/layerform@latest Then, create the Terraform files you'll use to create each layer of infrastructure. 
In the example below, we have two layers: eks , which is the "base" layer, and services . layerform/
├─ services/
│ ├─ pods.tf
│ ├─ outputs.tf
│ └─ inputs.tf
├─ eks/
│ ├─ eks.tf
│ ├─ vpc.tf
│ ├─ inputs.tf
│ └─ outputs.tf
├─ services.tf
└─ eks.tf Once you have your infrastructure defined as code, you'll create the layer definitions that the CLI will use when spawning instances of each layer. json
{
"layers": [
{
"name": "base",
"files": ["./layerform/eks.tf", "./layerform/eks/**"]
},
{
"name": "services",
"files": ["./layerform/services.tf", "./layerform/services/**"],
"dependencies": ["base"]
}
]
} Now, configure the place in which the generated layer definitions will be saved. ```yaml In config inside ~/.layerform currentContext: remote-context
contexts:
remote-context:
type: s3
bucket: layerform-bucket-example
``` Finally, you should provision S3 with your layer definitions using layerform configure . The Layerform CLI will then take care of creating unique IDs for each layer and sending the Terraform files' contents to the Layerform back-end, which, in this case, is an S3 bucket. After provisioning layer definitions, you can use layerform spawn <definition_name> <desired_id> to create an instance of that particular layer. $ layerform spawn services my-dev-infra Each instance of a layer contains all the pieces of infrastructure defined within that layer's files. In this example, running that command will cause Layerform to also create an instance of the underlying eks layer and assign it the ID default . To spawn yet another services layer, just run layerform spawn services another-dev-infra . By default, Layerform will try to use underlying layers whose ID is default as base layers. As a general rule, underlying layers are always the ones whose ID is default . To specify the desired ID for each underlying layer, you'll have to use the --base parameter. For example: ``` Creates: 1. An eks layer with ID "one" 2. A services layer with ID "two" $ layerform spawn services two --base "eks=one"
``` Layer immutability and layer rebasing A layer can only mutate itself or the layers above. For example, if you have a base layer and a backend layer, the backend layer's Terraform files will not be able to mutate any infrastructure in a base layer instance. Still, the base layer files can mutate any instances of the layers above it. The way Layerform prevents undesirable mutations is by analyzing each terraform plan and detecting whether any mutation's target belongs to an underlying layer. The reason Layerform prevents a layer from mutating its underlying layer is to avoid breaking sibling pieces of infrastructure. This design allows for platform teams to "rebase" layer instances on top of theirs. For example, assume you have multiple application layers on top of a Kubernetes cluster belonging to a base layer. In that case, if the platform team wants to update the Kubernetes version and needs to patch existing application's manifests, they can do so from their own layer by referencing and patching other Terraform resources. On the other hand, product engineers on the layers above cannot modify the base layer containing the Kubernetes cluster. Otherwise, they could break everyone else's applications. In addition to preventing failures, immutability defines clearer communication interfaces between teams and helps organizations avoid lots of lateral channels. How Layerform works Layerform has two major components. The Layerform Back-end, and Layerform CLI. The Layerform CLI is used to provision the Layerform Back-end with all the metadata for each layer, like its name and dependencies, and all the Terraform files associated with that layer. The Layerform Back-end stores the data for each layer definition and stores the state for each instance of each layer so that new layers know which base state to use. There can be multiple types of back-ends. The most common types of back-end are local , for storing data locally, and s3 , for storing data on the cloud, in an S3 bucket. Finally, the Layerform CLI also talks to the Layerform Back-end to fetch the files for the layer it wants to apply, and the state for the underlying layer. The way the Layerform CLI creates new layers on top of the correct existing layers is by injecting the underlying layer's state when applying each layer. Layerform design philosophy Our main goal with Layerform was to make it as easy as possible for engineers to create and share different parts of their infrastructure. That way, we'd empower teams to create their own environments without burdening their organization with unnecessary costs or complex configuration files. When developing Layerform, we also determined it should support virtually any type of infrastructure, including infrastructure for serverless applications. That's why we decided to create a wrapper on top of a community fork of Terraform, which supports Kubernetes/Helm, and already has established providers for all major public clouds. Third, we decided Layerform should be simple and intuitive. Engineers shouldn't have to learn new proprietary languages or configuration formats to use Layerform. Whenever possible, we should allow them to reuse their existing configurations. Layerform concepts are the only thing engineers will need to learn about. Everything else should be "just Terraform". Finally, we decided Layerform needs to be open and free. It's for that reason we're using a GPL license for the wrapper and using a community-maintained fork. 
That's also why you don't necessarily need to pay for anything before you can extract value from Layerform. For the sake of transparency, the way we intend to make money in the future is by providing a managed service with governance, management, and cost-control features. If you wish to bundle the Layerform wrapper itself, the GPL license will take care of ensuring ourselves and the community gets value back. Usage telemetry As of 22nd of August, 2023, we've introduced telemetry to the CLI so we can understand how engineers are using the product and improve it accordingly. To disable telemetry, set the LF_TELEMETRY_DISABLED environment variable to 1 . Issues & Support You can find Layerform's users and maintainers in GitHub Discussions . There you can ask how to set up Layerform, ask us about the roadmap, and discuss any other related topics. You can also reach us directly (and more quickly) on our Discord server . Other channels Issue Tracker Twitter LinkedIn Ergomake Engineering Blog License Layerform is open-source. Licensed under the GNU GPLv3 License . Layerform is not associated with Terraform. Terraform is a registered trademark of HashiCorp, Inc.;Layerform helps engineers create reusable environment stacks using plain .tf files. Ideal for multiple "staging" environments.;dev-environment,developer-tools,devops,platform-engineering,sre,terraform | briefercloud/layerform |
gorilla-llm/gorilla-cli;Gorilla CLI Gorilla CLI powers your command-line interactions with a user-centric tool. Simply state your objective, and Gorilla CLI will generate potential commands for execution. Gorilla today supports ~1500 APIs, including Kubernetes, AWS, GCP, Azure, GitHub, Conda, Curl, Sed, and many more. No more recalling intricate CLI arguments! 🦍 Developed by UC Berkeley as a research prototype, Gorilla-CLI prioritizes user control and confidentiality:
- Commands are executed solely with your explicit approval.
- While we utilize queries and error logs (stderr) for model enhancement, we NEVER collect output data (stdout). Getting Started You can readily install Gorilla CLI via pip. bash
pip install gorilla-cli Usage Activate Gorilla CLI with gorilla followed by your task in plain English. For instance, to generate a file with 100 random characters, type: bash
$ gorilla generate 100 random characters into a file called test.txt or if you prefer, you can use quotes to avoid issues with string parsing: bash
$ gorilla "generate 100 random characters into a file called test.txt" Gorilla CLI will then generate candidate commands. Use the arrow keys to navigate through the options, then press enter to execute the chosen command. bash
🦍 Welcome to Gorilla. Use arrows to select
» cat /dev/urandom | env LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c 100 > test.txt
echo $(head /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | dd bs=100 count=1) > test.txt
dd if=/dev/urandom bs=1 count=100 of=test.txt Some more examples bash
$ gorilla list all my GCP instances
» gcloud compute instances list --format="table(name,zone,status)"
gcloud compute instances list --format table
gcloud compute instances list --format="table(name, zone, machineType, status)" bash
$ gorilla get the image ids of all pods running in all namespaces in kubernetes
» kubectl get pods --all-namespaces -o jsonpath="{..imageID}"
kubectl get pods --all --namespaces
kubectl get pod -A -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{.spec.containers[].image}{"\n"}{end}' How It Works Gorilla-CLI fuses the capabilities of various Language Learning Models (LLMs) like Gorilla LLM , OpenAI's GPT-4, Claude v1, and others to present a user-friendly command-line interface. For each user query, we gather responses from all contributing LLMs, filter, sort, and present you with the most relevant options. Arguments ```
usage: go_cli.py [-h] [-p] [command_args ...] Gorilla CLI Help Doc positional arguments:
command_args Prompt to be inputted to Gorilla optional arguments:
-h, --help show this help message and exit
-p, --history Display command history
``` The history feature lets the user go back to previous commands they've executed to re-execute in a similar fashion to terminal history. Contributions We welcome your enhancements to Gorilla CLI! If you have improvements, feel free to submit a pull request on our GitHub page. License Gorilla CLI operates under the Apache 2.0 license. More details can be found in the LICENSE file. We'd also like to extend our appreciation to questionary for their fantastic UI!;LLMs for your CLI;aws,bash,cli,k8s,llm,productivity,terminal,gcp,iterm2,kubernetes-cli | gorilla-llm/gorilla-cli |
daveshap/ChatGPT_Custom_Instructions;ChatGPT_Custom_Instructions Each file has a brief description and the SYSTEM prompt (custom instructions). To use these, just copy the block of text into the Custom Instructions in your ChatGPT app. General Structure You can write your own. This is the general pattern I follow. You can pick and choose whatever you want. ```Markdown Mission Outcome or goal Not procedure Context Background info Where in the process are you Why does it need to be done Rules Boundaries and constraints Specific subgoals and objectives Instructions Do X, Y, and Z Expected Input What to anticipate and why Variability Output Format Formatting, type of output, length JSON, XML, lists, etc Example Output Simple demonstration
```;Repo of custom instructions that you can use for ChatGPT;[] | daveshap/ChatGPT_Custom_Instructions |
joogps/Glur;Glur A SwiftUI library that uses Metal to display efficient progressive blurs, just like the ones used by Apple. Try it today on the App Store . Installation This repository is a Swift package, so just include it in your Xcode project and target under File > Add package dependencies . Then, import Glur to the Swift files where you'll be using it. [!NOTE] While Glur is supported on older platforms, it will only utilize the Metal implementation of the blur effect on iOS 17.0 and later, macOS 14.0 and later, and tvOS 17.0 and later . Otherwise, it will present a worse, compatibility effect that should be tested by the developer before being used in production. The Metal implementation is not available on watchOS , and therefore the compatibility effect will be presented on this platform by default. Usage You can add a glur effect with the following modifier: swift
.glur() Here are all optional parameters: swift
.glur(radius: 8.0, // The total radius of the blur effect when fully applied.
offset: 0.3, // The distance from the view's edge to where the effect begins, relative to the view's size.
interpolation: 0.4, // The distance from the offset to where the effect is fully applied, relative to the view's size.
direction: .down // The direction in which the effect is applied.
) [!WARNING] When being used in the iOS simulator, SwiftUI shader effects may not be displayed if the view exceeds 545 points in either dimension. Please note that, on a physical device, the effect should work as intented. How it's done This project builds on a proof of concept developed in June of 2023, right after WWDC. It makes use of Apple's new simplified Shader API for SwiftUI . First, I coded a Metal shader that produced a gaussian blur for the modified view with the correct gaussian weights distribution, efficiently. Then, I modified it slightly to vary the blur radius over the vertical or horizontal axis given the offset, interpolation and direction values. [!WARNING]
Given that the shader is applied through Apple's own Shader API for SwiftUI, it is restricted by the limitations imposed by that API. This means that Glur can only be applied to pure SwiftUI views , excluding UIKit-backed views, such as ScrollView . [!TIP]
If you want to learn how to write your first Metal shader with SwiftUI, check out this tutorial that I wrote for the Cindori blog. Demo You can run a demo of Glur in your device or simulator through the GlurDemo project in this repository.;Progressive blurs in SwiftUI.;[] | joogps/Glur |
sb-ocr/diy-spacemouse;DIY Spacemouse for Fusion 360 Watch the build video ↓ This device is made for Fusion360 but can be adapted to other CAD applications. Current features: Orbit, Pan, Home view and Fit to view. Build instructions → Instructables;A DIY navigation device for Fusion360;[] | sb-ocr/diy-spacemouse |
ykdojo/kaguya;Kaguya Kaguya is a ChatGPT plugin that allows you to load and edit your local files in a controlled way, as well as run any Python, JavaScript, and bash script. This makes it a powerful tool for developers, enabling them to interact with their file system and run scripts directly from ChatGPT. To interact with Kaguya, you'll need access to plugin devtools by getting on their waitlist here . In case this approach doesn't work for you, we may be able to come up with a more open approach at some point in the future. Demo Here are a few demo videos of Kaguya: https://github.com/ykdojo/kaguya/assets/107422421/c580a6f6-5f08-43fd-ac8b-c12a319e1534 https://github.com/ykdojo/kaguya/assets/107422421/d61b8ff1-2dbd-4eb4-b1b5-45d43797ddaa Getting Started Guide Gain access to OpenAI's plugin devtools for ChatGPT here Install Docker and run it locally Clone this repo to your local environment Execute docker.sh script Setup localhost port 3000 Interact with Kaguya through ChatGPT If you want Kaguya to be able to interact with your files, put them in the FILES folder. Note: Kaguya won't have access to files outside of its own directory. Recommended Custom Instructions ```
When editing a file, use search and replace and NOT updateWholeFile unless we're dealing with a very small file. Confirm with me before you edit a file. When you have output from Kaguya, there's no need to repeat everything from there. Instead, you can summarize it concisely. When you want to use executeCommand in a subdirectory, make sure to cd there first every time.
``` API Endpoints The project provides several API endpoints that allow you to interact with the file system within the Kaguya directory. The API is described in the openapi.yaml file. Here is a brief overview: GET /api/listFilesInDirectory : List files and directories in the specified directory. Defaults to FILES. GET /api/readFile : Read the content of a file in the user's directory. GET /api/readMultipleFiles : Read the content of multiple files. POST /api/update : Update a file in the user's directory by performing a search-and-replace operation. POST /api/updateAll : Update a file in the user's directory by performing a search-and-replace operation (all occurrences). POST /api/updateWholeFile : Replace the entire content of a file in the user's directory. POST /api/appendToFile : Append content to the end of an existing file. POST /api/createFile : Create a new file. POST /api/deleteFile : Delete a file in the user's directory. POST /api/renameFile : Rename a file in the user's directory. POST /api/createDirectory : Create a new directory. POST /api/deleteDirectory : Delete a directory and its contents. POST /api/executeCommand : Execute a shell command. Tips If listFilesInDirectory tries to show too many files, a good solution would be to add a git repo or submodule, in which files in .gitignore are ignored. Best to keep each file under 100 lines of code, particularly for writing Writing more than ~80 lines of code at once is not recommended. It's slow and it might not even be able to finish the task. You can have it read more code though. However, reading more than 500-600 lines of code at once is not recommended. If the target file you want to edit is long, you may want to explicitly ask it to use search and replace and NOT updateWholeFile. It may not get the intention of your instructions right away. It's meant to be a conversational tool. If the assistant starts hallucinating, it may be helpful to start a new conversation or limit the length of each file being loaded. Discord Feel free to join our Discord server here . VS Code Extension We're also working on a VS Code extension version of Kaguya. Feel free to sign up for the waitlist here .;A ChatGPT plugin that allows you to load and edit your local files in a controlled way, as well as run any Python, JavaScript, and bash script.;[] | ykdojo/kaguya |
dtinth/superwhite;superwhite display a very bright white color on HDR-enabled displays with ~1 KB of video file https://github.com/dtinth/superwhite/assets/193136/311aac17-ff89-4700-a080-98dbe4424cbd if you view this page with a recent iPhone or iPad with low-power mode turned off, you should see a very bright white color above. on a Mac displays with HDR support, you should see a white color that is brighter than #ffffff . on unsupported displays, you should see a normal white color. for example of a practical usage, see: https://notes.dt.in.th/HDRQRCode please don’t abuse this As data URL data:video/mp4;base64,AAAAHGZ0eXBpc29tAAACAGlzb21pc28ybXA0MQAAAAhmcmVlAAAAvG1kYXQAAAAfTgEFGkdWStxcTEM/lO/FETzRQ6gD7gAA7gIAA3EYgAAAAEgoAa8iNjAkszOL+e58c//cEe//0TT//scp1n/381P/RWP/zOW4QtxorfVogeh8nQDbQAAAAwAQMCcWUTAAAAMAAAMAAAMA84AAAAAVAgHQAyu+KT35E7gAADFgAAADABLQAAAAEgIB4AiS76MTkNbgAAF3AAAPSAAAABICAeAEn8+hBOTXYAADUgAAHRAAAAPibW9vdgAAAGxtdmhkAAAAAAAAAAAAAAAAAAAD6AAAAKcAAQAAAQAAAAAAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAw10cmFrAAAAXHRraGQAAAADAAAAAAAAAAAAAAABAAAAAAAAAKcAAAAAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAABAAAAAABAAAAAQAAAAAAAkZWR0cwAAABxlbHN0AAAAAAAAAAEAAACnAAAAAAABAAAAAAKFbWRpYQAAACBtZGhkAAAAAAAAAAAAAAAAAABdwAAAD6BVxAAAAAAAMWhkbHIAAAAAAAAAAHZpZGUAAAAAAAAAAAAAAABDb3JlIE1lZGlhIFZpZGVvAAAAAixtaW5mAAAAFHZtaGQAAAABAAAAAAAAAAAAAAAkZGluZgAAABxkcmVmAAAAAAAAAAEAAAAMdXJsIAAAAAEAAAHsc3RibAAAARxzdHNkAAAAAAAAAAEAAAEMaHZjMQAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAQABAASAAAAEgAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABj//wAAAHVodmNDAQIgAAAAsAAAAAAAPPAA/P36+gAACwOgAAEAGEABDAH//wIgAAADALAAAAMAAAMAPBXAkKEAAQAmQgEBAiAAAAMAsAAAAwAAAwA8oBQgQcCTDLYgV7kWVYC1CRAJAICiAAEACUQBwChkuNBTJAAAAApmaWVsAQAAAAATY29scm5jbHgACQAQAAkAAAAAEHBhc3AAAAABAAAAAQAAABRidHJ0AAAAAAAALPwAACz8AAAAKHN0dHMAAAAAAAAAAwAAAAIAAAPoAAAAAQAAAAEAAAABAAAD6AAAABRzdHNzAAAAAAAAAAEAAAABAAAAEHNkdHAAAAAAIBAQGAAAAChjdHRzAAAAAAAAAAMAAAABAAAAAAAAAAEAAAfQAAAAAgAAAAAAAAAcc3RzYwAAAAAAAAABAAAAAQAAAAQAAAABAAAAJHN0c3oAAAAAAAAAAAAAAAQAAABvAAAAGQAAABYAAAAWAAAAFHN0Y28AAAAAAAAAAQAAACwAAABhdWR0YQAAAFltZXRhAAAAAAAAACFoZGxyAAAAAAAAAABtZGlyYXBwbAAAAAAAAAAAAAAAACxpbHN0AAAAJKl0b28AAAAcZGF0YQAAAAEAAAAATGF2ZjYwLjMuMTAw HTML snippet ```html ``` Creating the video I used Final Cut Pro. Create a library with Wide Gamut HDR color processing setting. Create a project. Add a solid white color generator. Set Color to Bright White. Crank up Graphics HDR Level to 100. Add the HDR Tool video effect. Set Mode to SDR to HDR (PQ). Crank up Peak Brightness to 5000 nits. Export the video using HEVC 10-bit as the codec.;display a very bright white color on HDR-enabled displays;[] | dtinth/superwhite |
ray-project/ray-llm;============================ Archiving Ray LLM We had started RayLLM to simplify setting up and deploying LLMs on top of Ray Serve. In the past few months, vLLM has made significant improvements in ease of use. We are archiving the RayLLM project and instead adding some examples to our Ray Serve docs for deploying LLMs with Ray Serve and vLLM. This means one less library for the community to learn about, and it greatly simplifies the workflow for serving LLMs at scale. We also recently launched Hosted Anyscale, where you can serve LLMs with Ray Serve with some more capabilities out of the box, like multi-LoRA with Serve multiplexing, JSON-mode function calling, and further performance enhancements. ============================ RayLLM - LLMs on Ray The hosted Aviary Explorer is not available anymore.
Visit Anyscale to experience models served with RayLLM. RayLLM (formerly known as Aviary) is an LLM serving solution that makes it easy to deploy and manage
a variety of open source LLMs, built on Ray Serve . It does this by: Providing an extensive suite of pre-configured open source LLMs, with defaults that work out of the box. Supporting Transformer models hosted on Hugging Face Hub or present on local disk. Simplifying the deployment of multiple LLMs Simplifying the addition of new LLMs Offering unique autoscaling support, including scale-to-zero. Fully supporting multi-GPU & multi-node model deployments. Offering high performance features like continuous batching, quantization and streaming. Providing a REST API that is similar to OpenAI's to make it easy to migrate and cross test them. Supporting multiple LLM backends out of the box, including vLLM and TensorRT-LLM . In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more. RayLLM supports continuous batching and quantization by integrating with vLLM . Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. See quantization guide for more details on running quantized models on RayLLM. RayLLM leverages Ray Serve , which has native support for autoscaling
and multi-node deployments. RayLLM can scale to zero and create
new model replicas (each composed of multiple GPU workers) in response to demand. Getting started Deploying RayLLM The guide below walks you through the steps required for deployment of RayLLM on Ray Serve. Locally We highly recommend using the official anyscale/ray-llm Docker image to run RayLLM. Manually installing RayLLM is currently not a supported use-case due to specific dependencies required, some of which are not available on pip. ```shell
cache_dir=${XDG_CACHE_HOME:-$HOME/.cache} docker run -it --gpus all --shm-size 1g -p 8000:8000 -e HF_HOME=~/data -v $cache_dir:~/data anyscale/ray-llm:latest bash Inside docker container serve run ~/serve_configs/amazon--LightGPT.yaml
``` On a Ray Cluster RayLLM uses Ray Serve, so it can be deployed on Ray Clusters. Currently, we only have a guide and pre-configured YAML file for AWS deployments. Make sure you have exported your AWS credentials locally. bash
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... Start by cloning this repo to your local machine. You may need to specify your AWS private key in the deploy/ray/rayllm-cluster.yaml file.
See Ray on Cloud VMs page in
Ray documentation for more details. ```shell
git clone https://github.com/ray-project/ray-llm.git
cd ray-llm Start a Ray Cluster (This will take a few minutes to start-up) ray up deploy/ray/rayllm-cluster.yaml
``` Connect to your Cluster ```shell Connect to the Head node of your Ray Cluster (This will take several minutes to autoscale) ray attach deploy/ray/rayllm-cluster.yaml Deploy the LightGPT model. serve run serve_configs/amazon--LightGPT.yaml
``` You can deploy any model in the models directory of this repo,
or define your own model YAML file and run that instead. On Kubernetes For Kubernetes deployments, please see our documentation for deploying on KubeRay . Query your models Once the models are deployed, you can install a client outside of the Docker container to query the backend. shell
pip install "rayllm @ git+https://github.com/ray-project/ray-llm.git" You can query your RayLLM deployment in many ways. In all cases start out by doing: shell
export ENDPOINT_URL="http://localhost:8000/v1" This is because your deployment is running locally, but you can also access remote deployments (in which case you would set ENDPOINT_URL to a remote URL). Using curl You can use curl at the command line to query your deployed LLM: shell
% curl $ENDPOINT_URL/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "meta-llama/Llama-2-7b-chat-hf",
"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}],
"temperature": 0.7
}' text
{
"id":"meta-llama/Llama-2-7b-chat-hf-308fc81f-746e-4682-af70-05d35b2ee17d",
"object":"text_completion","created":1694809775,
"model":"meta-llama/Llama-2-7b-chat-hf",
"choices":[
{
"message":
{
"role":"assistant",
"content":"Hello there! *adjusts glasses* It's a pleasure to meet you! Is there anything I can help you with today? Have you got a question or a task you'd like me to assist you with? Just let me know!"
},
"index":0,
"finish_reason":"stop"
}
],
"usage":{"prompt_tokens":30,"completion_tokens":53,"total_tokens":83}} Connecting directly over python Use the requests library to connect with Python. Use this script to receive a streamed response, automatically parse the outputs, and print just the content. ```python
import os
import json
import requests

s = requests.Session()
api_base = os.getenv("ENDPOINT_URL")
url = f"{api_base}/chat/completions"
body = {
"model": "meta-llama/Llama-2-7b-chat-hf",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Tell me a long story with many words."}
],
"temperature": 0.7,
"stream": True,
}

with s.post(url, json=body, stream=True) as response:
for chunk in response.iter_lines(decode_unicode=True):
if chunk is not None:
try:
# Get data from response chunk
chunk_data = chunk.split("data: ")[1]
# Get message choices from data
choices = json.loads(chunk_data)["choices"]
# Pick content from first choice
content = choices[0]["delta"]["content"]
print(content, end="", flush=True)
except json.decoder.JSONDecodeError:
# Chunk was not formatted as expected
pass
except KeyError:
# No message was contained in the chunk
pass
print("") ``` Using the OpenAI SDK RayLLM uses an OpenAI-compatible API, allowing us to use the OpenAI
SDK to access our deployments. To do so, we need to set the OPENAI_API_BASE env var. shell
export OPENAI_API_BASE=http://localhost:8000/v1
export OPENAI_API_KEY='not_a_real_key' ```python
import openai

# List all models.
models = openai.Model.list()
print(models)

# Note: not all arguments are currently supported and will be ignored by the backend.
chat_completion = openai.ChatCompletion.create(
model="meta-llama/Llama-2-7b-chat-hf",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Say 'test'."}
],
temperature=0.7
)
print(chat_completion)
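# Illustrative extra (not from the original README): because the RayLLM endpoint is
# OpenAI-compatible, the same pre-1.0 `openai` SDK call should also support
# token-by-token streaming with `stream=True`, e.g.:
#
# for chunk in openai.ChatCompletion.create(
#     model="meta-llama/Llama-2-7b-chat-hf",
#     messages=[{"role": "user", "content": "Say 'test'."}],
#     stream=True,
# ):
#     delta = chunk["choices"][0]["delta"]
#     print(delta.get("content", ""), end="", flush=True)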
``` RayLLM Reference Installing RayLLM To install RayLLM and its dependencies, run the following command: shell
pip install "rayllm @ git+https://github.com/ray-project/ray-llm.git" RayLLM consists of a set of configurations and utilities for deploying LLMs on Ray Serve,
in addition to a frontend (Aviary Explorer), both of which come with additional
dependencies. To install the dependencies for the frontend, run the following command: shell
pip install "rayllm[frontend] @ git+https://github.com/ray-project/ray-llm.git" The backend dependencies are heavyweight and quite large. We recommend using the official anyscale/ray-llm image. Installing the backend manually is not a supported use case. Usage stats collection Ray collects basic, non-identifiable usage statistics to help us improve the project.
For more information on what is collected and how to opt-out, see the Usage Stats Collection page in
Ray documentation. Using RayLLM through the CLI RayLLM uses the Ray Serve CLI that allows you to interact with deployed models. ```shell Start a new model in Ray Serve from provided configuration serve run serve_configs/ Get the status of the running deployments serve status Get the current config of current live Serve applications serve config Shutdown all Serve applications serve shutdown
``` RayLLM Model Registry You can easily add new models by adding two configuration files.
To learn more about how to customize or add new models,
see the Model Registry . Frequently Asked Questions How do I add a new model? The easiest way is to copy the configuration of the existing model's YAML file and modify it. See models/README.md for more details. How do I deploy multiple models at once? Run multiple models at once by aggregating the Serve configs for different models into a single, unified config. For example, use this config to run the LightGPT and Llama-2-7b-chat model in a single Serve application: ```yaml File name: serve_configs/config.yaml applications:
- name: router
import_path: rayllm.backend:router_application
route_prefix: /
args:
models:
- ./models/continuous_batching/amazon--LightGPT.yaml
- ./models/continuous_batching/meta-llama--Llama-2-7b-chat-hf.yaml
``` The config includes both models in the model argument for the router . Additionally, the Serve configs for both model applications are included. Save this unified config file to the serve_configs/ folder. Run the config to deploy the models: shell
serve run serve_configs/<config.yaml> How do I deploy a model to multiple nodes? All our default model configurations enforce a model to be deployed on one node for high performance. However, you can easily change this if you want to deploy a model across nodes for lower cost or GPU availability. In order to do that, go to the YAML file in the model registry and change placement_strategy to PACK instead of STRICT_PACK . My deployment isn't starting/working correctly, how can I debug? There can be several reasons for the deployment not starting or not working correctly. Here are some things to check: You might have specified an invalid model id. Your model may require resources that are not available on the cluster. A common issue is that the model requires Ray custom resources (eg. accelerator_type_a10 ) in order to be scheduled on the right node type, while your cluster is missing those custom resources. You can either modify the model configuration to remove those custom resources or better yet, add them to the node configuration of your Ray cluster. You can debug this issue by looking at Ray Autoscaler logs ( monitor.log ). Your model is a gated Hugging Face model (eg. meta-llama). In that case, you need to set the HUGGING_FACE_HUB_TOKEN environment variable cluster-wide. You can do that either in the Ray cluster configuration or by setting it before running serve run Your model may be running out of memory. You can usually spot this issue by looking for keywords related to "CUDA", "memory" and "NCCL" in the replica logs or serve run output. In that case, consider reducing the max_batch_prefill_tokens and max_batch_total_tokens (if applicable). See models/README.md for more information on those parameters. In general, Ray Dashboard is a useful debugging tool, letting you monitor your Ray Serve / LLM application and access Ray logs. A good sanity check is deploying the test model in tests/models/. If that works, you know you can deploy a model. How do I write a program that accesses both OpenAI and your hosted model at the same time? The OpenAI create() commands allow you to specify the API_KEY and API_BASE . So you can do something like this. ```python Call your self-hosted model running on the local host: OpenAI.ChatCompletion.create(api_base="http://localhost:8000/v1", api_key="",...) Call OpenAI. Set OPENAI_API_KEY to your key and unset OPENAI_API_BASE OpenAI.ChatCompletion.create(api_key="OPENAI_API_KEY", ...)
``` Getting Help and Filing Bugs / Feature Requests We are eager to help you get started with RayLLM. You can get help on: Via Slack -- fill in this form to sign up. Via Discuss . For bugs or for feature requests, please submit them here . Contributions We are also interested in accepting contributions. Those could be anything from a new evaluator, to integrating a new model with a yaml file, to more.
Feel free to post an issue first to get our feedback on a proposal, or just file a PR and we commit to giving you prompt feedback. We use pre-commit hooks to ensure that all code is formatted correctly.
Make sure to pip install pre-commit and then run pre-commit install .
You can also run ./format to run the hooks manually.;RayLLM - LLMs on Ray;distributed-systems,large-language-models,ray,serving,transformers,llm,llm-inference,llm-serving,llmops | ray-project/ray-llm |
QwenLM/Qwen-Audio;中文 | English Qwen-Audio 🤖 | 🤗 | Qwen-Audio-Chat 🤖 | 🤗 | Demo 🤖 | 🤗 Homepage | Paper | WeChat | Discord [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-aishell-1)](https://paperswithcode.com/sota/speech-recognition-on-aishell-1?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-aishell-2-test-android-1)](https://paperswithcode.com/sota/speech-recognition-on-aishell-2-test-android-1?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-aishell-2-test-ios)](https://paperswithcode.com/sota/speech-recognition-on-aishell-2-test-ios?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-aishell-2-test-mic-1)](https://paperswithcode.com/sota/speech-recognition-on-aishell-2-test-mic-1?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/acoustic-scene-classification-on-cochlscene)](https://paperswithcode.com/sota/acoustic-scene-classification-on-cochlscene?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/acoustic-scene-classification-on-tut-acoustic)](https://paperswithcode.com/sota/acoustic-scene-classification-on-tut-acoustic?p=qwen-audio-advancing-universal-audio) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/audio-classification-on-vocalsound)](https://paperswithcode.com/sota/audio-classification-on-vocalsound?p=qwen-audio-advancing-universal-audio) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/audio-captioning-on-clotho)](https://paperswithcode.com/sota/audio-captioning-on-clotho?p=qwen-audio-advancing-universal-audio) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-librispeech-test-clean)](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/emotion-recognition-in-conversation-on-meld)](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=qwen-audio-advancing-universal-audio)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/qwen-audio-advancing-universal-audio/speech-recognition-on-librispeech-test-other)](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-other?p=qwen-audio-advancing-universal-audio)
**Qwen-Audio** (Qwen Large Audio Language Model) is the multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-Audio accepts diverse audio (human speech, natural sound, music and song) and text as inputs and outputs text. The contributions of Qwen-Audio include:
- **Fundamental audio models**: Qwen-Audio is a fundamental multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model. Building upon Qwen-Audio, we develop Qwen-Audio-Chat through instruction fine-tuning, enabling multi-turn dialogues and supporting diverse audio-oriented scenarios.
- **Multi-task learning framework for all types of audios**: To scale up audio-language pre-training, we address the challenge of variation in textual labels associated with different datasets by proposing a multi-task training framework, enabling knowledge sharing and avoiding one-to-many interference. Our model incorporates more than 30 tasks and extensive experiments show the model achieves strong performance.
- **Strong Performance**: Experimental results show that Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Specifically, Qwen-Audio achieves state-of-the-art results on the test set of Aishell1, cochlscene, ClothoAQA, and VocalSound.
- **Flexible multi-turn chat from audio and text input**: Qwen-Audio supports multiple-audio analysis, sound understanding and reasoning, music appreciation, and tool usage. We will release two models of the Qwen-Audio series soon:
- Qwen-Audio: The pre-trained multi-task audio understanding model uses Qwen-7B as the initialization of the LLM, and [Whisper-large-v2](https://github.com/openai/whisper) as the initialization of the audio encoder.
- Qwen-Audio-Chat: A multimodal LLM-based AI assistant, which is trained with alignment techniques. Qwen-Audio-Chat supports more flexible interaction, such as multiple audio inputs, multi-round question answering, and creative capabilities. ## News and Updates
* 2023.11.30 🔥 We have released the checkpoints of both **Qwen-Audio** and **Qwen-Audio-Chat** on ModelScope and Hugging Face.
* 2023.11.15 🎉 We released a [paper](http://arxiv.org/abs/2311.07919) for details about Qwen-Audio and Qwen-Audio-Chat model, including training details and model performance. ## Evaluation
We evaluated Qwen-Audio's abilities on 12 standard benchmarks as follows. Below is the overall performance, and the details of the evaluation are as follows:
### Automatic Speech Recognition

Results (WER):

| Dataset | Model | dev-clean | dev-other | test-clean | test-other |
|---|---|---|---|---|---|
| Librispeech | SpeechT5 | 2.1 | 5.5 | 2.4 | 5.8 |
| | SpeechNet | - | - | 30.7 | - |
| | SLM-FT | - | - | 2.6 | 5.0 |
| | SALMONN | - | - | 2.1 | 4.9 |
| | Qwen-Audio | 1.8 | 4.0 | 2.0 | 4.2 |

| Dataset | Model | dev | test |
|---|---|---|---|
| Aishell1 | MMSpeech-base | 2.0 | 2.1 |
| | MMSpeech-large | 1.6 | 1.9 |
| | Paraformer-large | - | 2.0 |
| | Qwen-Audio | 1.2 (SOTA) | 1.3 (SOTA) |

| Dataset | Model | Mic | iOS | Android |
|---|---|---|---|---|
| Aishell2 | MMSpeech-base | 4.5 | 3.9 | 4.0 |
| | Paraformer-large | - | 2.9 | - |
| | Qwen-Audio | 3.3 | 3.1 | 3.3 |

### Speech-to-text Translation

Results (BLEU):

| Dataset | Model | en-de | de-en | en-zh | zh-en | es-en | fr-en | it-en |
|---|---|---|---|---|---|---|---|---|
| CoVoST2 | SALMONN | 18.6 | - | 33.1 | - | - | - | - |
| | SpeechLLaMA | - | 27.1 | - | 12.3 | 27.9 | 25.2 | 25.9 |
| | BLSP | 14.1 | - | - | - | - | - | - |
| | Qwen-Audio | 25.1 | 33.9 | 41.5 | 15.7 | 39.7 | 38.5 | 36.0 |

### Automatic Audio Caption

| Dataset | Model | CIDEr | SPICE | SPIDEr |
|---|---|---|---|---|
| Clotho | Pengi | 0.416 | 0.126 | 0.271 |
| | Qwen-Audio | 0.441 | 0.136 | 0.288 |

### Speech Recognition with Word-level Timestamp

| Dataset | Model | AAC (ms) |
|---|---|---|
| Industrial Data | Force-aligner | 60.3 |
| | Paraformer-large-TP | 65.3 |
| | Qwen-Audio | 51.5 (SOTA) |

### Automatic Scene Classification

| Dataset | Model | ACC |
|---|---|---|
| Cochlscene | Cochlscene | 0.669 |
| | Qwen-Audio | 0.795 (SOTA) |
| TUT2017 | Pengi | 0.353 |
| | Qwen-Audio | 0.649 |

### Speech Emotion Recognition

| Dataset | Model | ACC |
|---|---|---|
| Meld | WavLM-large | 0.542 |
| | Qwen-Audio | 0.557 |

### Audio Question & Answer

| Dataset | Model | ACC | ACC (binary) |
|---|---|---|---|
| ClothoAQA | ClothoAQA | 0.542 | 0.627 |
| | Pengi | - | 0.645 |
| | Qwen-Audio | 0.579 | 0.749 |

### Vocal Sound Classification

| Dataset | Model | ACC |
|---|---|---|
| VocalSound | CLAP | 0.4945 |
| | Pengi | 0.6035 |
| | Qwen-Audio | 0.9289 (SOTA) |

### Music Note Analysis

| Dataset | Model | NS. Qualities (MAP) | NS. Instrument (ACC) |
|---|---|---|---|
| NSynth | Pengi | 0.3860 | 0.5007 |
| | Qwen-Audio | 0.4742 | 0.7882 |

We have provided **all** evaluation scripts to reproduce our results. Please refer to [eval_audio/EVALUATION.md](eval_audio/EVALUATION.md) for details.
### Evaluation of Chat
To evaluate the chat abilities of Qwen-Audio-Chat, we provide [TUTORIAL](TUTORIAL.md) and demo for users.
## Requirements
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)
* FFmpeg ## Quickstart
Below, we provide simple examples to show how to use Qwen-Audio and Qwen-Audio-Chat with 🤖 ModelScope and 🤗 Transformers.
Before running the code, make sure you have setup the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
```bash
pip install -r requirements.txt
```
Now you can start with ModelScope or Transformers. For more usage, please refer to the [tutorial](TUTORIAL.md). Qwen-Audio models currently perform best with audio clips under 30 seconds.
#### 🤗 Transformers
To use Qwen-Audio-Chat for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, **please make sure that you are using the latest code.**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-Audio-Chat", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio-Chat", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio-Chat", device_map="cuda", trust_remote_code=True).eval()
# Specify hyperparameters for generation (No need to do this if you are using transformers>4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-Audio-Chat", trust_remote_code=True)
# 1st dialogue turn
query = tokenizer.from_list_format([
{'audio': 'assets/audio/1272-128104-0000.flac'}, # Either a local path or an url
{'text': 'what does the person say?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# The person says: "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel".
# 2nd dialogue turn
response, history = model.chat(tokenizer, 'Find the start time and end time of the word "middle classes"', history=history)
print(response)
# The word "middle classes" starts at <|2.33|> seconds and ends at <|3.26|> seconds.
```
Running Qwen-Audio pretrained base model is also simple.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-Audio", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="cuda", trust_remote_code=True).eval()
# Specify hyperparameters for generation (No need to do this if you are using transformers>4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-Audio", trust_remote_code=True)
audio_url = "assets/audio/1272-128104-0000.flac"
sp_prompt = "<|startoftranscription|><|en|><|transcribe|><|en|><|notimestamps|><|wo_itn|>"
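# A rough reading of the control tags above (see the Qwen-Audio paper for the full multi-task
# tag set): <|en|> marks the audio language, <|transcribe|> the task, the second <|en|> the
# output text language, <|notimestamps|> disables word-level timestamps, and <|wo_itn|>
# requests output without inverse text normalization.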
query = f" {audio_url} {sp_prompt}"
audio_info = tokenizer.process_audio(query)
inputs = tokenizer(query, return_tensors='pt', audio_info=audio_info)
inputs = inputs.to(model.device)
pred = model.generate(**inputs, audio_info=audio_info)
response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=False,audio_info=audio_info)
print(response)
# assets/audio/1272-128104-0000.flac <|startoftranscription|><|en|><|transcribe|><|en|><|notimestamps|><|wo_itn|>mister quilting is the apostle of the middle classes and we are glad to welcome his gospel<|endoftext|>
```
In the event of a network issue while attempting to download model checkpoints and codes from Hugging Face, an alternative approach is to initially fetch the checkpoint from ModelScope and then load it from the local directory as outlined below:
```python
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer
# Downloading model checkpoint to a local dir model_dir
model_id = 'qwen/Qwen-Audio-Chat'
revision = 'master'
model_dir = snapshot_download(model_id, revision=revision)
# Loading local checkpoints
# trust_remote_code is still set as True since we still load codes from local dir instead of transformers
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_dir,
device_map="cuda",
trust_remote_code=True
).eval()
```
#### 🤖 ModelScope
ModelScope is an opensource platform for Model-as-a-Service (MaaS), which provides flexible and cost-effective model service to AI developers. Similarly, you can run the models with ModelScope as shown below:
```python
from modelscope import (
snapshot_download, AutoModelForCausalLM, AutoTokenizer, GenerationConfig
)
import torch
model_id = 'qwen/Qwen-Audio-Chat'
revision = 'master'
model_dir = snapshot_download(model_id, revision=revision)
torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
if not hasattr(tokenizer, 'model_dir'):
tokenizer.model_dir = model_dir
# use bf16
# model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, fp16=True).eval()
# use CPU
# model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="cpu", trust_remote_code=True).eval()
# use gpu
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True).eval()
# 1st dialogue turn
query = tokenizer.from_list_format([
{'audio': 'assets/audio/1272-128104-0000.flac'}, # Either a local path or an url
{'text': 'what does the person say?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# The person says: "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel".
# 2nd dialogue turn
response, history = model.chat(tokenizer, 'Find the start time and end time of the word "middle classes"', history=history)
print(response)
# The word "middle classes" starts at <|2.33|> seconds and ends at <|3.26|> seconds.
```
## Demo
### Web UI
We provide code for users to build a web UI demo. Before you start, make sure you install the following packages:
```
pip install -r requirements_web_demo.txt
```
Then run the command below and click on the generated link:
```
python web_demo_audio.py
``` ## FAQ
If you meet problems, please refer to [FAQ](FAQ.md) and the issues first to search a solution before you launch a new issue. ## We Are Hiring
If you are interested in joining us as full-time or intern, please contact us at qwen_audio@list.alibaba-inc.com. ## License Agreement
Researchers and developers are free to use the codes and model weights of both Qwen-Audio and Qwen-Audio-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details. ## Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen-Audio,
title={Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models},
author={Chu, Yunfei and Xu, Jin and Zhou, Xiaohuan and Yang, Qian and Zhang, Shiliang and Yan, Zhijie and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2311.07919},
year={2023}
}
``` ## Contact Us
If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.;The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud.;[] | QwenLM/Qwen-Audio |
eslint-stylistic/eslint-stylistic;ESLint Stylistic Documentation | Discord | Why | Migration | Project Progress Community-maintained stylistic/formatting ESLint rules for JavaScript and TypeScript. This project was initiated as ESLint and typescript-eslint teams decided to deprecate formatting/stylistic-related rules from their core due to the maintenance cost. This repo ports those rules and distributes them as separate packages and will keep them maintained by the community. License MIT License © OpenJS Foundation and other contributors, © 2023-PRESENT ESLint Stylistic contributors;Monorepo for ESLint Stylistic plugins and configs;eslint,eslint-config,eslint-plugin,formatter,eslint-stylistic | eslint-stylistic/eslint-stylistic |
MeetKai/functionary;Functionary Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. Documentation and more examples: functionary.meetkai.com Changelog: (click to expand) + [2024/06/14] We release [meetkai/functionary-medium-v3.0](https://huggingface.co/meetkai/functionary-medium-v3.0) (based on [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)) with better capability for function calling
+ [2024/05/17] We release [meetkai/functionary-small-v2.5](https://huggingface.co/meetkai/functionary-small-v2.5) with better capability for function calling and code interpreter compared with [functionary-small-v2.4](https://huggingface.co/meetkai/functionary-small-v2.4)
+ [2024/05/06] Streaming support for functionary v2 to v2.4 models is released in [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)!
+ [2024/05/03] Added support for serverless vLLM deployment on [Modal.com](https://modal.com/)
+ [2024/04/27] New and improved grammar sampling! Ensures 100% accuracy in generating function names, prompt template and parameters.
+ [2024/04/02] We release [meetkai/functionary-small-v2.4](https://huggingface.co/meetkai/functionary-small-v2.4) and [meetkai/functionary-medium-v2.4](https://huggingface.co/meetkai/functionary-medium-v2.4)! The first functionary models with code-interpreter ability (by passing in `{type: "code_interpreter"}` in tools)! Setup To install the required dependencies, run: shell
pip install -r requirements.txt Now you can start a blazing fast vLLM server. requirements Small Model: shell
python3 server_vllm.py --model "meetkai/functionary-small-v2.5" --host 0.0.0.0 --max-model-len 8192 Medium model: (click to expand) If you use multiple GPUs (medium models require: 4xA6000 or 2xA100 80GB to run), need to use: `tensor-parallel-size`
```shell
python3 server_vllm.py --model "meetkai/functionary-medium-v3.0" --max-model-len 8192 --tensor-parallel-size 2
``` Grammar Sampling We also offer our own function-calling grammar sampling feature which constrains the LLM's generation to always follow the prompt template, and ensures 100% accuracy for function name. The parameters are generated using the efficient lm-format-enforcer , which ensures that the parameters follow the schema of the tool called. To enable grammar sampling, run the vLLM server with the command-line argument --enable-grammar-sampling : shell
python3 server_vllm.py --model "meetkai/functionary-medium-v2.4" --max-model-len 8192 --tensor-parallel-size 2 --enable-grammar-sampling Note:
- Grammar Sampling support is applicable only for the V2 models. There is no such support for V1 models.
- Our vLLM server supports the tool_choice="required" feature in OpenAI Chat Completion API exclusively only when grammar sampling is enabled . Text-Generation-Inference We also provide a service that performs inference on Functionary models using Text-Generation-Inference (TGI). Follow these steps to get started: Install Docker following their installation instructions . Install the Docker SDK for Python shell
pip install docker Start up the Functionary TGI server At start-up, the Functionary TGI server tries to connect to an existing TGI endpoint. In this case, you can run the following: shell
python3 server_tgi.py --model <REMOTE_MODEL_ID_OR_LOCAL_MODEL_PATH> --endpoint <TGI_SERVICE_ENDPOINT> If the TGI endpoint does not exist, the Functionary TGI server will start a new TGI endpoint container with the address provided in the endpoint CLI argument via the installed Docker Python SDK. Run the following commands for remote and local models respectively: shell
python3 server_tgi.py --model <REMOTE_MODEL_ID> --remote_model_save_folder <PATH_TO_SAVE_AND_CACHE_REMOTE_MODEL> --endpoint <TGI_SERVICE_ENDPOINT> shell
python3 server_tgi.py --model <LOCAL_MODEL_PATH> --endpoint <TGI_SERVICE_ENDPOINT> Make either OpenAI-compatible or raw HTTP requests to the Functionary TGI server. Docker If you're having trouble with dependencies, and you have nvidia-container-toolkit ,
you can start your environment like this: shell
sudo docker run --gpus all -it --ipc=host --name functionary -v ${PWD}/functionary_workspace:/workspace -p 8000:8000 nvcr.io/nvidia/pytorch:23.10-py3 OpenAI Compatible Usage ```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

client.chat.completions.create(
model="meetkai/functionary-small-v2.5",
messages=[{"role": "user",
"content": "What is the weather for Istanbul?"}
],
tools=[{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}],
tool_choice="auto"
)
``` Raw Usage: Details (click to expand) ```python
import requests
data = {
'model': 'meetkai/functionary-small-v2.5', # model name here is the value of argument "--model" in deploying: server_vllm.py or server.py
'messages': [
{
"role": "user",
"content": "What is the weather for Istanbul?"
}
],
'tools':[ # For functionary-7b-v2 we use "tools"; for functionary-7b-v1.4 we use "functions" = [{"name": "get_current_weather", "description":..., "parameters": ....}]
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}
]
}
response = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=data, headers={
"Content-Type": "application/json",
"Authorization": "Bearer xxxx"
})
# Print the response text
print(response.text)
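
# The returned body follows the OpenAI chat-completions schema, so a tool call (if any) can be
# read from response.json()["choices"][0]["message"], e.g. its "tool_calls" field, instead of
# parsing the raw text.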
``` Models Available | Model | Description | VRAM FP16 |
|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------|:------|
| functionary-medium-v3.0 / GGUF | 8k context, based on meta-llama/Meta-Llama-3-70B-Instruct | 160GB |
| functionary-small-v2.5 / GGUF | 8k context, code interpreter | 24GB |
| functionary-small-v2.4 / GGUF | 8k context, code interpreter | 24GB |
| functionary-medium-v2.4 / GGUF | 8k context, code interpreter, better accuracy | 90GB |
| functionary-small-v2.2 / GGUF | 8k context | 24GB |
| functionary-medium-v2.2 / GGUF | 8k context| 90GB |
| functionary-7b-v2.1 / GGUF | 8k context | 24GB |
| functionary-7b-v2 / GGUF | Parallel function call support. | 24GB |
| functionary-7b-v1.4 / GGUF | 4k context, better accuracy (deprecated) | 24GB |
| functionary-7b-v1.1 | 4k context (deprecated) | 24GB |
| functionary-7b-v0.1 | 2k context (deprecated) Not recommended, use 2.1 onwards | 24GB | Compatibility information v1 models are compatible with both OpenAI-python v0 and v1. v2 models are designed for compatibility with OpenAI-python v1. The difference between OpenAI-python v0 and v1 you may refer to the official documentation here The Differences Between Related Projects | Feature/Project | Functionary | NexusRaven | Gorilla | Glaive | GPT-4-1106-preview |
|---|---|---|---|---|---|
|Single Function Call | ✅ | ✅ | ✅ | ✅ | ✅ |
|Parallel Function Calls | ✅ | ✅ | ✅ | ❌ | ✅ |
|Following Up on Missing Function Arguments | ✅ | ❌ | ❌ | ❌ | ✅ |
|Multi-turn | ✅ | ❌ | ❌ | ✅ | ✅ |
|Generate Model Responses Grounded in Tools Execution Results | ✅ | ❌ | ❌ | ❌ | ✅ |
|Chit-Chat | ✅ | ❌ | ✅ | ✅ | ✅ |
|Code Interpreter | ✅ | ❌ | ❌ | ❌ | ✅ | You can find more details of the features in here Llama.cpp Inference Llama.cpp Inference using Huggingface Tokenizer Example for inference using LLama-cpp-python can be found in: llama_cpp_inference.py . Integration into Llama-cpp Besides, functionary was also integrated into LLama-cpp-python, however the integration might not be quickly updated , so if there is something wrong or weird in the result, please use: llama_cpp_inference.py instead. Currently, v2.5 hasn't been integrated, so if you are using functionary-small-v2.5-GGUF , please use: llama_cpp_inference.py Make sure that the latest version of llama-cpp-python is successully installed in your system. Functionary v2 is fully integrated into llama-cpp-python. You can perform inference using Functionary's GGUF models either via normal chat completion or through llama-cpp-python's OpenAI-compatible server which behaves similarly to ours. The following is the sample code using normal chat completion: ```python
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

# We should use HF AutoTokenizer instead of llama.cpp's tokenizer because we found that
# Llama.cpp's tokenizer doesn't give the same result as that from Huggingface. The reason might
# be that in training we added new tokens to the tokenizer and Llama.cpp doesn't handle this
# successfully.
llm = Llama.from_pretrained(
repo_id="meetkai/functionary-small-v2.4-GGUF",
filename="functionary-small-v2.4.Q4_0.gguf",
chat_format="functionary-v2",
tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.4-GGUF"),
n_gpu_layers=-1
) messages = [
{"role": "user", "content": "what's the weather like in Hanoi?"}
]
tools = [ # For functionary-7b-v2 we use "tools"; for functionary-7b-v1.4 we use "functions" = [{"name": "get_current_weather", "description":..., "parameters": ....}]
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g., San Francisco, CA"
}
},
"required": ["location"]
}
}
}
] result = llm.create_chat_completion(
messages = messages,
tools=tools,
tool_choice="auto",
) print(result["choices"][0]["message"]) The output would be: python
{'role': 'assistant', 'content': None, 'tool_calls': [{'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{\n "location": "Hanoi"\n}'}}]}
``` For more details, please refer to the Function Calling section in llama-cpp-python. To use our Functionary GGUF models using llama-cpp-python's OpenAI-compatible server, please refer to here for more details and documentation. Note: - For Functionary in llama-cpp-python, the default system messages are added automatically during the API call. Therefore, there is no need to provide the default system messages in messages .
- Streaming feature for Functionary models in both the normal chat completion and in llama-cpp-python's OpenAI-compatible server is officially supported from v0.2.70 onwards. Call Real Python Function To call the real python function, get the result and extract the result to respond, you can use chatlab . The following example uses chatlab==0.16.0: Please note that Chatlab currently doesn't support Parallel Function calls. This sample code is compatible only with Functionary Version 1.4 and may not work correctly with Functionary Version 2.0.
```python
from chatlab import Conversation
import openai
import os
openai.api_key = "functionary" # We just need to set this something other than None
os.environ['OPENAI_API_KEY'] = "functionary" # chatlab requires us to set this too
openai.api_base = "http://localhost:8000/v1" now provide the function with description def get_car_price(car_name: str):
"""this function is used to get the price of the car given the name
:param car_name: name of the car to get the price
"""
car_price = {
"tang": {"price": "$20000"},
"song": {"price": "$25000"}
}
for key in car_price:
if key in car_name.lower():
return {"price": car_price[key]}
return {"price": "unknown"} chat = Conversation(model="meetkai/functionary-7b-v2")
chat.register(get_car_price) # register this function
chat.submit("what is the price of the car named Tang?") # submit user prompt print the flow for message in chat.messages:
role = message["role"].upper()
if "function_call" in message:
func_name = message["function_call"]["name"]
func_param = message["function_call"]["arguments"]
print(f"{role}: call function: {func_name}, arguments:{func_param}")
else:
content = message["content"]
print(f"{role}: {content}")
``` The output will look like this: USER: what is the price of the car named Tang?
ASSISTANT: call function: get_car_price, arguments:{
"car_name": "Tang"
}
FUNCTION: {'price': {'price': '$20000'}}
ASSISTANT: The price of the car named Tang is $20,000. Serverless Deployment using Modal.com Serverless deployment of Functionary models is supported via the modal_server_vllm.py script. After signing up and installing Modal, follow these steps to deploy our vLLM server on Modal: Create dev environment shell Python
modal environment create dev If you have a dev environment created already, there is no need to create another one. Just configure to it in the next step. Configure dev environment shell Python
modal config set-environment dev Serve Functionary Model shell Python
modal serve modal_server_vllm Deploy Runner shell Python
modal deploy modal_server_vllm Use Cases Here are a few examples of how you can use this function calling system: Travel and Hospitality - Trip Planning The function plan_trip(destination: string, duration: int, interests: list) can take user input such as "I want to plan a 7-day trip to Paris with a focus on art and culture" and generate an itinerary accordingly. Details (click to expand) ```python
client.chat.completions.create(
model="meetkai/functionary-7b-v2",
messages=[
{"role": "user", "content": 'I want to plan a 7-day trip to Paris with a focus on art and culture'},
],
tools=[
{
"type": "function",
"function": {
"name": "plan_trip",
"description": "Plan a trip based on user's interests",
"parameters": {
"type": "object",
"properties": {
"destination": {
"type": "string",
"description": "The destination of the trip",
},
"duration": {
"type": "integer",
"description": "The duration of the trip in days",
},
"interests": {
"type": "array",
"items": {"type": "string"},
"description": "The interests based on which the trip will be planned",
},
},
"required": ["destination", "duration", "interests"],
}
}
}
]
)
```
Response will have:
```json
{"role": "assistant", "content": null, "tool_calls": [{"type": "function", "function": {"name": "plan_trip", "arguments": '{\n "destination": "Paris",\n "duration": 7,\n "interests": ["art", "culture"]\n}'}}]}
```
Then you need to call ```plan_trip``` function with provided arguments.
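For illustration, a rough sketch of that follow-up flow (execute the function yourself, then send its result back so the model can comment) might look like the snippet below. It assumes the completion above was stored in `response`, that `client` and `tools` are the objects from the earlier OpenAI-compatible example, that `plan_trip` is your own local implementation, and that you are on the openai>=1.0 SDK; the exact shape of the tool-result message can differ slightly between Functionary versions:

```python
import json

# Read the tool call the model produced (first choice, first tool call)
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Execute your own implementation of the function
trip_plan = plan_trip(**args)  # hypothetical local function

# Send the function result back so the model can write the commentary
followup = client.chat.completions.create(
    model="meetkai/functionary-7b-v2",
    messages=[
        {"role": "user", "content": "I want to plan a 7-day trip to Paris with a focus on art and culture"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": tool_call.id,
                "type": "function",
                "function": {"name": tool_call.function.name, "arguments": tool_call.function.arguments},
            }],
        },
        {"role": "tool", "tool_call_id": tool_call.id, "name": "plan_trip", "content": json.dumps(trip_plan)},
    ],
    tools=tools,
)
print(followup.choices[0].message.content)
```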
If you would like a commentary from the model, then you'll call the model again with the response from the function, the model will write necessary commentary. Real Estate - Property Valuation A function like estimate_property_value(property_details: dict) could allow users to input details about a property (such as location, size, number of rooms, etc.) and receive an estimated market value. Details (click to expand) ```python
client.chat.completions.create(
model="meetkai/functionary-7b-v2",
messages=[
{
"role": "user",
"content": 'What is the estimated value of a 3-bedroom house in San Francisco with 2000 sq ft area?'
},
{
"role": "assistant",
"content": None,
"tool_calls": [
{
"type": "function",
"function": {
"name": "estimate_property_value",
"arguments": '{\n "property_details": {"location": "San Francisco", "size": 2000, "rooms": 3}\n}'
}
}
]
}
],
tools=[
{
"type": "function",
"function": {
"name": "estimate_property_value",
"description": "Estimate the market value of a property",
"parameters": {
"type": "object",
"properties": {
"property_details": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location of the property"
},
"size": {
"type": "integer",
"description": "The size of the property in square feet"
},
"rooms": {
"type": "integer",
"description": "The number of rooms in the property"
}
},
"required": ["location", "size", "rooms"]
}
},
"required": ["property_details"]
}
}
}
],
tool_choice="auto"
)
```
Response will have:
```json
{"role": "assistant", "content": null, "tool_calls": [{"type": "function", "function": {"name": "plan_trip", "arguments": '{\n "destination": "Paris",\n "duration": 7,\n "interests": ["art", "culture"]\n}'}}]}
```
Then you need to call ```estimate_property_value``` function with provided arguments.
If you would like a commentary from the model, then you'll call the model again with the response from the function, the model will write necessary commentary. Telecommunications - Customer Support A function parse_customer_complaint(complaint: {issue: string, frequency: string, duration: string}) could help in extracting structured information from a complex, narrative customer complaint, identifying the core issue and potential solutions. The complaint object could include properties such as issue (the main problem), frequency (how often the issue occurs), and duration (how long the issue has been occurring). Details (click to expand) ```python
client.chat.completions.create(
model="meetkai/functionary-7b-v2",
messages=[
{"role": "user", "content": 'My internet has been disconnecting frequently for the past week'},
],
tools=[
{
"type": "function",
"function": {
"name": "parse_customer_complaint",
"description": "Parse a customer complaint and identify the core issue",
"parameters": {
"type": "object",
"properties": {
"complaint": {
"type": "object",
"properties": {
"issue": {
"type": "string",
"description": "The main problem",
},
"frequency": {
"type": "string",
"description": "How often the issue occurs",
},
"duration": {
"type": "string",
"description": "How long the issue has been occurring",
},
},
"required": ["issue", "frequency", "duration"],
},
},
"required": ["complaint"],
}
}
}
],
tool_choice="auto"
)
```
Response will have:
```json
{"role": "assistant", "content": null, "tool_calls": [{"type": "function", "function": {"name": "parse_customer_complaint", "arguments": '{\n "complaint": {"issue": "internet disconnecting", "frequency": "frequently", "duration": "past week"}\n}'}}]}
```
Then you need to call parse_customer_complaint function with provided arguments.
If you would like a commentary from the model, then you'll call the model again with the response from the function, the model will write necessary commentary. How it Works? We convert function definitions to a similar text to TypeScript definitions.
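For illustration, the `get_current_weather` tool used throughout the examples above would end up in the system prompt as something roughly like the following TypeScript-style declaration (a sketch only; the authoritative layout is the prompt templates linked just below):

```
// Supported function definitions that should be called when necessary.
namespace functions {

// Get the current weather
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
}) => any;

} // namespace functions
```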
Then we inject these definitions as system prompts. After that, we inject the default system prompt.
Then we start the conversation messages. The prompt example can be found here: V1 (v1.4), V2 (v2, v2.1, v2.2, v2.4) and V2.llama3 (v2.5) We don't change the logit probabilities to conform to a certain schema, but the model itself knows how to conform. This allows us to use existing tools and caching systems with ease. Evaluation Function Prediction Evaluation Evaluation function call prediction in SGD dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) |
| :-------------| :-------------------| ---------------------------: |
| SGD | MeetKai-functionary-medium-v3.0 | 89.6% |
| SGD | gpt-4o-2024-05-13 | 82.75%|
| SGD | gemini-1.5-flash | 79.64%|
| SGD | c4ai-command-r-plus | 45.66% | Training See training README Roadmap [ ] OpenAPI specification based plugin support. [X] Fast inference server [X] vLLM [ ] text-generation-inference ? See: License Issue [X] Streaming Support [X] function_call parameter to server [X] Grammar Sampling to ensure 100% accuracy for function and parameter names [X] Parallel function calling support [X] Python function calling support (Automatic detection of type annotations and calling them automatically) [X] Real world usage examples, such as creating agents. [X] Train Mixtral based model [X] Code interpreter support Please consider opening a PR for future requests;Chat language model that can use tools and interpret the results;[] | MeetKai/functionary |
rohitdhas/shittier;💩 Shittier Shittier is a code formatting tool that aims to make your code look as terrible as possible. It is the exact opposite of popular tools like Prettier, which focus on improving code formatting and readability. Shittier embraces chaos, messiness, and confusion, making your code look shittier than ever before. With Shittier, you can expect the following: Random indentation for a chaotic code structure. Mixed case madness that breaks consistency. Spacing nightmares with added or removed spaces, tabs, and line breaks. 📥️ Installation To install Shittier, follow these steps: Make sure you have Node.js installed on your machine. Open a terminal and run the following command: npm install -g shittier 🚀 Usage After installing Shittier, you can run it on your codebase by executing the following command in your project's root directory: shittier [options] [directory/file] Options -h, --help : Displays help information about Shittier and its available options. -v, --version : Shows the installed version of Shittier. -f, --force : Forces Shittier to overwrite files if they already exists. Examples Format a single file: shittier myfile.js Format a single file and save the modified file with a different name or path: shittier myfile.js modified/myfile.js Use --force flag to force overwrite if output file already exists ⚠️ Disclaimer Shittier is a purely satirical project created for fun and entertainment purposes. It is not intended for use in any serious development environment. Using Shittier on production code may result in confusion, frustration, and a lot of head-scratching. Use it responsibly and at your own risk. 📜 License Shittier is released under the MIT License . See the LICENSE file for more details. Enjoy the chaos and let Shittier transform your perfectly fine code into an unrecognizable mess! Remember, sometimes it's good to embrace the dark side of code formatting. Happy shittifying!;Shittier is an unconventional code formatting tool;code-formatter,shittier,prettier | rohitdhas/shittier |
web-infra-dev/rspress;Rspress A fast Rspack-based static site generator. 🔥 Features 🚀 Fast Startup : Based on Rust-based build tool and markdown/mdx compiler, the build speed is extremely fast, bringing you the ultimate development experience. 📚 MDX Support : MDX is a powerful way to write content, allowing you to use React components in Markdown. 📦 Built-in Full Text Search : Automatically generates a full-text search index for you during building process, providing out-of-the-box full-text search capabilities. 🌈 Static Site Generation : In production, it automatically builds into static HTML files, which can be easily deployed anywhere. 🔌 Providing Plugin System : Providing a plugin system, you can customize the build process and theme according to your needs. 📝 Component Document : Support multi ways to preview your component demo. 📚 Getting Started Go to the Quick Start to get started. 🤝 Contribution Please read the contributing guide and let's build Rspress together. If you have any questions, you can open an issue or go to Discord to communicate with us. Contributors Code of Conduct This repo has adopted the ByteDance Open Source Code of Conduct. Please check Code of Conduct for more details. 🦀 Links | Name | Description |
| ---------------------------------------------------------- | ------------------------------------------------- |
| @rspress/mdx-rs | Rust MDX compiler for Rspress. |
| Rspack | A fast Rust-based web bundler. |
| Rsbuild | An Rspack-based build tool for the web. |
| Rsdoctor | A one-stop build analyzer for Rspack and Webpack. | 🌟 Quality Rspress uses Web Infra QoS to observe the trend of key metrics, such as bundle size, compile speed and install size. 📖 License Rspress is licensed under the MIT License .;🦀💨 A fast Rspack-based static site generator.;docs-generator,markdown,mdx,rspack,ssg,static-site-generator | web-infra-dev/rspress |
Melledy/LunarCore;EN | 简中 | 繁中 | JP | RU | FR | KR | VI Attention: For any extra support, questions, or discussions, check out our Discord . Notable features Basic game features: Logging in, team setup, inventory, basic scene/entity management Monster battles working Natural world monster/prop/NPC spawns Character techniques Crafting/Consumables working NPC shops handled Gacha system Mail system Friend system (Assists are not working yet) Forgotten hall Pure Fiction Simulated universe (Runs can be finished, but many features are missing) Running the server and client Prerequisites Java 17 JDK Recommended MongoDB 4.0+ Compiling the server Open your system terminal, and compile the server with ./gradlew jar Create a folder named resources in your server directory Download the Config , TextMap , and ExcelBin folders from https://github.com/Dimbreath/StarRailData and place them into your resources folder. Run the server with java -jar LunarCore.jar from your system terminal. Lunar Core comes with a built-in internal MongoDB server for its database, so no Mongodb installation is required. However, it is highly recommended to install Mongodb anyway. Connecting with the client (Fiddler method) Log in with the client to an official server and Hoyoverse account at least once to download game data. Install and have Fiddler Classic running. Copy and paste the following code into the Fiddlerscript tab of Fiddler Classic. Remember to save the fiddler script after you copy and paste it: ```
import System;
import System.Windows.Forms;
import Fiddler;
import System.Text.RegularExpressions; class Handlers
{
static function OnBeforeRequest(oS: Session) {
if (oS.host.EndsWith(".starrails.com") || oS.host.EndsWith(".hoyoverse.com") || oS.host.EndsWith(".mihoyo.com") || oS.host.EndsWith(".bhsr.com")) {
oS.oRequest.headers.UriScheme = "http";
oS.host = "localhost"; // This can also be replaced with another IP address.
}
}
};
``` If autoCreateAccount is set to true in the config, then you can skip this step. Otherwise, type /account create [account name] in the server console to create an account. Login with your account name, the password field is ignored by the server and can be set to anything. Server commands Server commands can be run in the server console or in-game. There is a dummy user named "Server" in every player's friends list that you can message to use in-game commands. /account {create | delete} [username] (reserved player uid). Creates or deletes an account.
/avatar lv(level) p(ascension) r(eidolon) s(skill levels). Sets the current avatar's properties.
/clear {relics | lightcones | materials | items}. Removes filtered items from the player inventory.
/gender {male | female}. Sets the player's gender.
/give [item id] x[amount] lv[number]. Gives the targetted player an item.
/giveall {materials | avatars | lightcones | relics}. Gives the targeted player items.
/heal. Heals your avatars.
/help. Displays a list of available commands.
/kick @[player id]. Kicks a player from the server.
/mail [content]. Sends the targeted player a system mail.
/permission {add | remove | clear} [permission]. Gives/removes a permission from the targeted player.
/refill. Refill your skill points in open world.
/reload. Reloads the server config.
/scene [scene id] [floor id]. Teleports the player to the specified scene.
/spawn [npc monster id/prop id] s[stage id] x[amount] lv[level] r[radius] <battle monster ids...>. Spawns a monster or prop near the targeted player.
/stop. Stops the server
/unstuck @[player id]. Unstucks an offline player if they're in a scene that doesn't load.
/worldlevel [world level]. Sets the targeted player's equilibrium level.;A game server reimplementation for a certain turn-based anime game;game-server | Melledy/LunarCore |
nerfstudio-project/gsplat;gsplat http://www.gsplat.studio/ gsplat is an open-source library for CUDA accelerated rasterization of gaussians with python bindings. It is inspired by the SIGGRAPH paper 3D Gaussian Splatting for Real-Time Rendering of Radiance Fields , but we’ve made gsplat even faster, more memory efficient, and with a growing list of new features! Installation Dependence : Please install Pytorch first. The easiest way is to install from PyPI. In this way it will build the CUDA code on the first run (JIT). bash
pip install gsplat Or install from source. In this way it will build the CUDA code during installation. bash
pip install git+https://github.com/nerfstudio-project/gsplat.git To install gsplat on Windows, please check this instruction . Evaluation This repo comes with a standalone script that reproduces the official Gaussian Splatting with exactly the same performance on PSNR, SSIM, LPIPS, and converged number of Gaussians. Powered by gsplat’s efficient CUDA implementation, the training takes up to 4x less GPU memory with up to 15% less time to finish than the official implementation. Full report can be found here . ```bash under examples/ pip install -r requirements.txt
bash benchmark.sh
``` Examples We provide a set of examples to get you started! Below you can find the details about
the examples (requires to install some exta dependences via pip install -r examples/requirements.txt ) Train a 3D Gaussian splatting model on a COLMAP capture. Fit a 2D image with 3D Gaussians. Render a large scene in real-time. Development and Contribution This repository was born from the curiosity of people on the Nerfstudio team trying to understand a new rendering technique. We welcome contributions of any kind and are open to feedback, bug-reports, and improvements to help expand the capabilities of this software. This project is developed by the following wonderful contributors (unordered): Angjoo Kanazawa (UC Berkeley): Mentor of the project. Matthew Tancik (Luma AI): Mentor of the project. Vickie Ye (UC Berkeley): Project lead. v0.1 lead. Matias Turkulainen (Aalto University): Core developer. Ruilong Li (UC Berkeley): Core developer. v1.0 lead. Justin Kerr (UC Berkeley): Core developer. Brent Yi (UC Berkeley): Core developer. Zhuoyang Pan (ShanghaiTech University): Core developer. Jianbo Ye (Amazon): Core developer. We also have made the mathematical supplement, with conventions and derivations, available here . If you find this library useful in your projects or papers, please consider citing: @misc{ye2023mathematical,
title={Mathematical Supplement for the $\texttt{gsplat}$ Library},
author={Vickie Ye and Angjoo Kanazawa},
year={2023},
eprint={2312.02121},
archivePrefix={arXiv},
primaryClass={cs.MS}
} We welcome contributions of any kind and are open to feedback, bug-reports, and improvements to help expand the capabilities of this software. Please check docs/DEV.md for more info about development.;CUDA accelerated rasterization of gaussian splatting;gaussian-splatting | nerfstudio-project/gsplat |
eosphoros-ai/DB-GPT-Hub;DB-GPT-Hub: Text-to-SQL parsing with LLMs [**简体中文**](README.zh.md) | [**Discord**](https://discord.gg/7uQnPuveTY) | [**Wechat**](https://github.com/eosphoros-ai/DB-GPT/blob/main/README.zh.md#%E8%81%94%E7%B3%BB%E6%88%91%E4%BB%AC) | [**Huggingface**](https://huggingface.co/eosphoros) | [**Community**](https://github.com/eosphoros-ai/community) Baseline update time: 2023/12/08 metric: execution accuracy (ex) more details refer to docs/eval-llm-result.md Model Method Easy Medium Hard Extra All base 0 0 0 0 0 Llama2-7B-Chat lora 0.887 0.641 0.489 0.331 0.626 qlora 0.847 0.623 0.466 0.361 0.608 base 0 0 0 0 0 Llama2-13B-Chat lora 0.907 0.729 0.552 0.343 0.68 qlora 0.911 0.7 0.552 0.319 0.664 base 0.214 0.177 0.092 0.036 0.149 CodeLlama-7B-Instruct lora 0.923 0.756 0.586 0.349 0.702 qlora 0.911 0.751 0.598 0.331 0.696 base 0.698 0.601 0.408 0.271 0.539 CodeLlama-13B-Instruct lora 0.94 0.789 0.684 0.404 0.746 qlora 0.94 0.774 0.626 0.392 0.727 base 0.577 0.352 0.201 0.066 0.335 Baichuan2-7B-Chat lora 0.871 0.63 0.448 0.295 0.603 qlora 0.891 0.637 0.489 0.331 0.624 base 0.581 0.413 0.264 0.187 0.392 Baichuan2-13B-Chat lora 0.903 0.702 0.569 0.392 0.678 qlora 0.895 0.675 0.58 0.343 0.659 base 0.395 0.256 0.138 0.042 0.235 Qwen-7B-Chat lora 0.855 0.688 0.575 0.331 0.652 qlora 0.911 0.675 0.575 0.343 0.662 base 0.871 0.632 0.368 0.181 0.573 Qwen-14B-Chat lora 0.895 0.702 0.552 0.331 0.663 qlora 0.919 0.744 0.598 0.367 0.701 base 0 0 0 0 0 ChatGLM3-6b lora 0.855 0.605 0.477 0.271 0.59 qlora 0.843 0.603 0.506 0.211 0.581 Contents DB-GPT-Hub: Text-to-SQL parsing with LLMs Baseline Contents 1. What is DB-GPT-Hub 2. Fine-tuning Text-to-SQL 2.1. Dataset 2.2. Model 3. Usage 3.1. Environment preparation 3.2 Quick Start 3.3. Data preparation 3.4. Model fine-tuning 3.5. Model Predict 3.6 Model Weights 3.6.1 Model and fine-tuned weight merging 3.7 Model Evaluation 4. RoadMap 5. Contributions 6. Acknowledgements 7. Citation 8. Licence 9. Contact Information 1. What is DB-GPT-Hub DB-GPT-Hub is an experimental project that leverages Large Language Models (LLMs) to achieve Text-to-SQL parsing. The project encompasses various stages, including data collection, data preprocessing, model selection and construction, and fine-tuning of model weights. Through these processes, our aim is to enhance Text-to-SQL capabilities while reducing model training costs, thus enabling more developers to contribute to improving Text-to-SQL accuracy. Our ultimate goal is to realize automated question-answering capabilities based on databases, allowing users to execute complex database queries using natural language descriptions. To date, we have successfully integrated multiple large models and established a comprehensive workflow that includes data processing, Supervised Fine-Tuning (SFT) model training, prediction output, and evaluation. The code developed for this project is easily reusable within the project itself. As of October 10, 2023, we have used this project to fine-tune the open-source 13B-sized model, incorporating more relevant data. Under zero-shot prompts and utilizing the Spider-based test-suite , we have achieved an execution accuracy rate of 0.764 for a database with a size of 1.27G. Additionally, the execution accuracy for the database pointed to by the Spider official website , with a size of 95M, stands at 0.825. 2. Fine-tuning Text-to-SQL We enhance the Text-to-SQL performance by applying Supervised Fine-Tuning (SFT) on large language models. 2.1. 
Dataset The primary dataset for this project's examples is the Spider dataset: SPIDER : A complex text2sql dataset across domains, containing 10,181 natural language queries, 5,693 SQL distributed across 200 separate databases, covering 138 different domains. download link Other text2sql datasets available: WikiSQL: A large semantic parsing dataset consisting of 80,654 natural statement expressions and sql annotations of 24,241 tables. Each query in WikiSQL is limited to the same table and does not contain complex operations such as sorting, grouping The queries in WikiSQL are limited to the same table and do not include complex operations such as sorting, grouping, subqueries, etc. CHASE : A cross-domain multi-round interactive text2sql Chinese dataset containing a list of 5,459 multi-round questions consisting of 17,940 binary groups across 280 different domain databases. BIRD-SQL: A large-scale cross-domain text-to-SQL benchmark in English, with a particular focus on large database content. The dataset contains 12,751 text-to-SQL data pairs and 95 databases with a total size of 33.4 GB across 37 occupational domains. The BIRD-SQL dataset bridges the gap between text-to-SQL research and real-world applications by exploring three additional challenges, namely dealing with large and messy database values, external knowledge inference and optimising SQL execution efficiency. CoSQL: A corpus for building cross-domain conversational text-to-SQL systems. It is a conversational version of the Spider and SParC tasks. CoSQL consists of 30k+ rounds and 10k+ annotated SQL queries from Wizard-of-Oz's collection of 3k conversations querying 200 complex databases across 138 domains. Each conversation simulates a realistic DB query scenario in which a staff member explores the database as a user and a SQL expert uses SQL to retrieve answers, clarify ambiguous questions, or otherwise inform. Following the processing template of NSQL , the dataset underwent basic processing, yielding approximately 20W dataset 2.2. Model DB-GPT-Hub currently supports the following base models: [x] CodeLlama [x] Baichuan2 [x] LLaMa/LLaMa2 [x] Falcon [x] Qwen [x] XVERSE [x] ChatGLM2 [x] ChatGLM3 [x] internlm [x] sqlcoder-7b(mistral) [x] sqlcoder2-15b(starcoder) The model is fine-tuned based on a quantization bit of 4 using Quantized Learning over Redundant Architecture (QLoRA). The minimum hardware requirements for this can be referred to as follows: | Model Parameters | GPU RAM | CPU RAM | DISK |
| ---------------- | ------- | ------- | ------ |
| 7b | 6GB | 3.6GB | 36.4GB |
| 13b | 13.4GB | 5.9GB | 60.2GB | All the related parameters are set to the minimum, with a batch size of 1 and max length of 512. Based on experience, for better performance, it is recommended to set the related length values to 1024 or 2048. 3. Usage 3.1. Environment preparation git clone https://github.com/eosphoros-ai/DB-GPT-Hub.git
cd DB-GPT-Hub
conda create -n dbgpt_hub python=3.10
conda activate dbgpt_hub
pip install poetry
poetry install 3.2 Quick Start Firstly, install dbgpt-hub with the following command pip install dbgpt-hub Then, set up the arguments and run the whole process.
```python
from dbgpt_hub.data_process import preprocess_sft_data
from dbgpt_hub.train import start_sft
from dbgpt_hub.predict import start_predict
from dbgpt_hub.eval import start_evaluate Config the input datasets data_folder = "dbgpt_hub/data"
data_info = [
{
"data_source": "spider",
"train_file": ["train_spider.json", "train_others.json"],
"dev_file": ["dev.json"],
"tables_file": "tables.json",
"db_id_name": "db_id",
"is_multiple_turn": False,
"train_output": "spider_train.json",
"dev_output": "spider_dev.json",
}
] Config training parameters train_args = {
"model_name_or_path": "codellama/CodeLlama-13b-Instruct-hf",
"do_train": True,
"dataset": "example_text2sql_train",
"max_source_length": 2048,
"max_target_length": 512,
"finetuning_type": "lora",
"lora_target": "q_proj,v_proj",
"template": "llama2",
"lora_rank": 64,
"lora_alpha": 32,
"output_dir": "dbgpt_hub/output/adapter/CodeLlama-13b-sql-lora",
"overwrite_cache": True,
"overwrite_output_dir": True,
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 16,
"lr_scheduler_type": "cosine_with_restarts",
"logging_steps": 50,
"save_steps": 2000,
"learning_rate": 2e-4,
"num_train_epochs": 8,
"plot_loss": True,
"bf16": True,
} Config predict parameters predict_args = {
"model_name_or_path": "codellama/CodeLlama-13b-Instruct-hf",
"template": "llama2",
"finetuning_type": "lora",
"checkpoint_dir": "dbgpt_hub/output/adapter/CodeLlama-13b-sql-lora",
"predict_file_path": "dbgpt_hub/data/eval_data/dev_sql.json",
"predict_out_dir": "dbgpt_hub/output/",
"predicted_out_filename": "pred_sql.sql",
} Config evaluation parameters evaluate_args = {
"input": "./dbgpt_hub/output/pred/pred_sql_dev_skeleton.sql",
"gold": "./dbgpt_hub/data/eval_data/gold.txt",
"gold_natsql": "./dbgpt_hub/data/eval_data/gold_natsql2sql.txt",
"db": "./dbgpt_hub/data/spider/database",
"table": "./dbgpt_hub/data/eval_data/tables.json",
"table_natsql": "./dbgpt_hub/data/eval_data/tables_for_natsql2sql.json",
"etype": "exec",
"plug_value": True,
"keep_distict": False,
"progress_bar_for_each_datapoint": False,
"natsql": False,
} Run the whole fine-tuning workflow preprocess_sft_data(
data_folder = data_folder,
data_info = data_info
) start_sft(train_args)
start_predict(predict_args)
start_evaluate(evaluate_args)
``` 3.3. Data preparation DB-GPT-Hub uses the information matching generation method for data preparation, i.e. the SQL + Repository generation method that combines table information. This method combines data table information to better understand the structure and relationships of the data table, and is suitable for generating SQL statements that meet the requirements. Download the Spider dataset from the Spider dataset link. By default, after downloading and extracting the data, place it in the dbgpt_hub/data directory, i.e., the path should be dbgpt_hub/data/spider . For the data preprocessing part, simply run the following script :
```bash generate train and dev(eval) data poetry run sh dbgpt_hub/scripts/gen_train_eval_data.sh
``` In the directory dbgpt_hub/data/ , you will find the newly generated training file example_text2sql_train.json and testing file example_text2sql_dev.json, containing 8659 and 1034 entries respectively. For the data used in subsequent fine-tuning, set the parameter file_name value to the file name of the training set in dbgpt_hub/data/dataset_info.json, such as example_text2sql_train.json The data in the generated JSON looks something like this: {
"db_id": "department_management",
"instruction": "I want you to act as a SQL terminal in front of an example database, you need only to return the sql command to me.Below is an instruction that describes a task, Write a response that appropriately completes the request.\n\"\n##Instruction:\ndepartment_management contains tables such as department, head, management. Table department has columns such as Department_ID, Name, Creation, Ranking, Budget_in_Billions, Num_Employees. Department_ID is the primary key.\nTable head has columns such as head_ID, name, born_state, age. head_ID is the primary key.\nTable management has columns such as department_ID, head_ID, temporary_acting. department_ID is the primary key.\nThe head_ID of management is the foreign key of head_ID of head.\nThe department_ID of management is the foreign key of Department_ID of department.\n\n",
"input": "###Input:\nHow many heads of the departments are older than 56 ?\n\n###Response:",
"output": "SELECT count(*) FROM head WHERE age > 56",
"history": []
}, The data processing code of chase , cosql and sparc has been embedded in the data processing code of the project. After downloading the data set according to the above link, you only need to add in dbgpt_hub/configs/config.py Just loosen the corresponding code comment in SQL_DATA_INFO . 3.4. Model fine-tuning The model fine-tuning supports both LoRA and QLoRA methods. We can run the following command to fine-tune the model. By default, with the parameter --quantization_bit, it uses the QLoRA fine-tuning method. To switch to LoRAs, simply remove the related parameter from the script.
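For reference, the same LoRA/QLoRA switch can be sketched with the Python API from the Quick Start above. Treat this as an illustration only: it assumes the train_args dictionary accepts a quantization_bit key that mirrors the --quantization_bit shell flag, which is not documented here.

```python
# Hypothetical sketch, not the documented API: the Quick Start train_args above
# configures plain LoRA; adding a 4-bit quantization setting (mirroring the
# --quantization_bit flag of train_sft.sh) would correspond to QLoRA.
train_args_qlora = {**train_args, "quantization_bit": 4}

start_sft(train_args_qlora)   # QLoRA; use start_sft(train_args) for plain LoRA
```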
Run the command: bash
poetry run sh dbgpt_hub/scripts/train_sft.sh After fine-tuning, the model weights will be saved by default in the adapter folder, specifically in the dbgpt_hub/output/adapter directory. If you're using multi-GPU training and want to utilize DeepSpeed , you should modify the default content in train_sft.sh. The change is: CUDA_VISIBLE_DEVICES=0 python dbgpt_hub/train/sft_train.py \
--quantization_bit 4 \
... change to : deepspeed --num_gpus 2 dbgpt_hub/train/sft_train.py \
--deepspeed dbgpt_hub/configs/ds_config.json \
--quantization_bit 4 \
... if you need to specify particular GPU card IDs: deepspeed --include localhost:0,1 dbgpt_hub/train/sft_train.py \
--deepspeed dbgpt_hub/configs/ds_config.json \
--quantization_bit 4 \
... The other parts that are omitted (…) can be kept consistent. If you want to change the default DeepSpeed configuration, go into the dbgpt_hub/configs directory and make changes to ds_config.json as needed; the default is stage 2. In the script, during fine-tuning, different models correspond to the key parameters lora_target and template, as shown in the following table: | model name | lora_target | template |
| -------------------------------------------------------- | --------------- | --------- |
| LLaMA-2 | q_proj,v_proj | llama2 |
| CodeLlama-2 | q_proj,v_proj | llama2 |
| Baichuan2 | W_pack | baichuan2 |
| Qwen | c_attn | chatml |
| sqlcoder-7b | q_proj,v_proj | mistral |
| sqlcoder2-15b | c_attn | default |
| InternLM | q_proj,v_proj | intern |
| XVERSE | q_proj,v_proj | xverse |
| ChatGLM2 | query_key_value | chatglm2 |
| LLaMA | q_proj,v_proj | - |
| BLOOM | query_key_value | - |
| BLOOMZ | query_key_value | - |
| Baichuan | W_pack | baichuan |
| Falcon | query_key_value | - | In train_sft.sh , other key parameters are as follows: quantization_bit: Indicates whether quantization is applied, with valid values being [4 or 8]. model_name_or_path: The path of the LLM (Large Language Model). dataset: Specifies the name of the training dataset configuration, corresponding to the outer key value in dbgpt_hub/data/dataset_info.json, such as example_text2sql. max_source_length: The length of the text input into the model. If computing resources allow, it can be set as large as possible, like 1024 or 2048. max_target_length: The length of the SQL content output by the model; 512 is generally sufficient. output_dir: The output path of the Peft module during SFT (Supervised Fine-Tuning), set by default to dbgpt_hub/output/adapter/ . per_device_train_batch_size: The size of the batch. If computing resources allow, it can be set larger; the default is 1. gradient_accumulation_steps: The number of steps for accumulating gradients before an update. save_steps: The number of steps at which model checkpoints are saved; it can be set to 100 by default. num_train_epochs: The number of epochs for training the dataset. 3.5. Model Predict Under the project directory ./dbgpt_hub/output/pred/, this folder is the default output location for model predictions(if not exist, just mkdir). bash
poetry run sh ./dbgpt_hub/scripts/predict_sft.sh In the script, by default with the parameter --quantization_bit , it predicts using QLoRA. Removing it switches to the LoRA prediction method.
The value of the parameter predicted_input_filename is your prediction test dataset file, and --predicted_out_filename is the file name of the model's predicted results. 3.6 Model Weights You can find the corresponding model weights on Hugging Face at hg-eosphoros-ai ; we uploaded the LoRA weights in October, whose execution accuracy on the Spider evaluation set reached 0.789. 3.6.1 Model and fine-tuned weight merging If you need to merge the weights of the trained base model and the fine-tuned PEFT module to export a complete model, execute the following model export script: bash
poetry run sh ./dbgpt_hub/scripts/export_merge.sh Be sure to replace the parameter path values in the script with the paths corresponding to your project. 3.7 Model Evaluation To evaluate model performance on the dataset, default is spider dev dataset.
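For intuition, the execution accuracy (EX) metric reported throughout this project counts a prediction as correct when running the predicted SQL on the target database returns the same result as the gold SQL. The following is a deliberately simplified sketch of that idea — it is not the project's evaluation.py, which additionally handles value plugging (--plug_value), ordering, distinctness and the test-suite databases:

```python
import sqlite3
from collections import Counter

def execution_match(db_path: str, pred_sql: str, gold_sql: str) -> bool:
    """Simplified execution-accuracy check: do the two queries return the
    same multiset of rows on the same SQLite database?"""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a prediction that fails to execute counts as a miss
    finally:
        conn.close()
    return Counter(pred_rows) == Counter(gold_rows)
```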
Run the following command: bash
poetry run python dbgpt_hub/eval/evaluation.py --plug_value --input Your_model_pred_file You can find the results of our latest review and part of experiment results here Note : The database pointed to by the default code is a 95M database downloaded from [Spider official website] (https://yale-lily.github.io/spider). If you need to use Spider database (size 1.27G) in test-suite , please download the database in the link to the custom directory first, and run the above evaluation command which add parameters and values like --db Your_download_db_path . 4. RoadMap The whole process we will divide into three phases: Stage 1: Set up the foundational framework, enabling an end-to-end workflow that encompasses data processing, model SFT (Single Fine-Tuning) training, prediction output, and evaluation using multiple large language models (LLMs). As of August 4th, 2023, the entire pipeline has been successfully established. Currently, we offer support for the following features:
- [x] CodeLlama
- [x] Baichuan2
- [x] LLaMa/LLaMa2
- [x] Falcon
- [x] Qwen
- [x] XVERSE
- [x] ChatGLM2
- [x] ChatGLM3
- [x] internlm
- [x] sqlcoder-7b(mistral)
- [x] sqlcoder2-15b(starcoder) Stage 2: [x] Optimize model performance, and support fine-tuning more models in various ways before 20231010 [x] Optimize prompts [x] Release evaluation results, and optimized models open to peers. Stage 3: [ ] Inference speed optimization and improvement [ ] Targeted optimization and improvement of business scenarios and Chinese effects [ ] Optimized based on more papers, such as RESDSQL and others. Combined with our community's sibling project Awesome-Text2SQL for further enhancements. If our work has provided even a small measure of assistance to you, please consider giving us a star. Your feedback and support serve as motivation for us to continue releasing more related work and improving our efforts. Thank you! 5. Contributions We warmly invite more individuals to join us and actively engage in various aspects of our project, such as datasets, model fine-tuning, performance evaluation, paper recommendations, and code reproduction. Please don't hesitate to open issues or pull requests (PRs), and we will be proactive in responding to your contributions. Before submitting your code, please ensure that it is formatted according to the black style by using the following command: poetry run black dbgpt_hub If you have more time to execute more detailed type checking and style checking of your code, please use the following command: poetry run pyright dbgpt_hub
poetry run pylint dbgpt_hub If you have any questions or need further assistance, don't hesitate to reach out. We appreciate your involvement! 6. Acknowledgements Our work is primarily based on the foundation of numerous open-source contributions. Thanks to the following open source projects Spider CoSQL Chase BIRD-SQL LLaMA BLOOM Falcon ChatGLM WizardLM text-to-sql-wizardcoder test-suite-sql-eval LLaMa-Efficient-Tuning Thanks to all the contributors, especially @ JBoRu who raised the issue which reminded us to add a new promising evaluation way, i.e. Test Suite. As the paper 《SQL-PALM: IMPROVED LARGE LANGUAGE MODEL ADAPTATION FOR TEXT-TO-SQL》 mentioned, "We consider two commonly-used evaluation metrics: execution accuracy (EX) and test-suite accuracy (TS). EX measures whether the SQL execution outcome matches ground truth (GT), whereas TS measures whether the SQL passes all EX evaluations for multiple tests, generated by database augmentation. Since EX contains false positives, we consider TS as a more reliable evaluation metric". 7. Citation If you find DB-GPT-Hub useful for your research or development, please cite the following paper : bibtex
@misc{zhou2024dbgpthub,
title={DB-GPT-Hub: Towards Open Benchmarking Text-to-SQL Empowered by Large Language Models},
author={Fan Zhou and Siqiao Xue and Danrui Qi and Wenhui Shi and Wang Zhao and Ganglin Wei and Hongyang Zhang and Caigai Jiang and Gangwei Jiang and Zhixuan Chu and Faqiang Chen},
year={2024},
eprint={2406.11434},
archivePrefix={arXiv},
primaryClass={cs.DB}
} 8. Licence The MIT License (MIT) 9. Contact Information We are collaborating as a community, and if you have any ideas regarding our community work, please don't hesitate to get in touch with us. If you're interested in delving into an in-depth experiment and optimizing the DB-GPT-Hub subproject, you can reach out to 'wangzai' within the WeChat group. We wholeheartedly welcome your contributions to making it even better together!;A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL;sql,text2sql,gpt,llm,text-to-sql,datasets,nl2sql,database,fine-tuning | eosphoros-ai/DB-GPT-Hub |
Xatta-Trone/medium-parser-extension;Medium Parser Medium parser is a web browser extension to help read the member-only articles on medium.com and medium.com-based sites (e.g. towards-data-science) # 15th February 2024: The extension was taken down by the Chrome Web Store (40k+ DAU). # 25th February 2024: Uploaded a new version to the Chrome Web Store. Installation Instructions Google Chrome / Microsoft Edge / Chromium Browsers (Brave/Opera Mini/Thorium etc.) Or, Install manually
1. Download this repo as a ZIP file from GitHub .
1. Unzip the file and you should have a folder named medium-parser-extension-main .
1. In Chrome/Edge go to the extensions page ( chrome://extensions or edge://extensions ).
1. Enable Developer Mode by clicking the toggle button on the top right side of the browser.
1. Drag the chrome folder anywhere on the page to import it (do not delete the folder afterward). Mozilla Firefox Troubleshooting This extension pulls the data from webcache.googleusercontent.com ; then removes all the scripts and sends back the html and css contents only. It might not work when there is no data from the request. For archive.is , it simply redirects you with the data. Credits / Ideas This article on reddit.com Support Please consider a donation if you find this extension helps you daily.
Your contribution allows me to spend more time making this kind of extension/project. Preview Star History;Read medium.com and medium based articles using google web cache.;medium,medium-article,medium-com | Xatta-Trone/medium-parser-extension |
justLV/onju-voice;Onju Voice 🍐🔈 💫 DEMO's A hackable AI home assistant platform using the Google Nest Mini (2nd gen) form factor, consisting of:
* a custom PCB designed to be a drop-in replacement to the original, using the ESP32-S3 for audio processing
* a server for handling the transcription, response generation and Text-to-Speech from multiple devices on the same network (This repo focuses on the experimental conversational LLM aspect to replicate some functionality shown in the demos, and not as a full fledged replacement to a home assistant. This is not being actively maintained, but I've released all source code and design files for anyone else to pick up from here.) Overview This repo contains firmware, server code and some example applications, intended to be as accessible as possible for getting up and running i.e.:
* Firmware for the custom PCB can be programmed using the Arduino IDE and a USB cable (installation of ESP-IDF not required)
* Server code has minimal requirements besides running Whisper locally, and should be able to run on most devices that you can leave plugged in whether MacOS / Linux / Win etc.
* Hardware can be ordered from PCBWay and Altium design files are included Example applications 📩 Querying and replying to messages (using a custom Maubot plugin & Beeper) 💡 Light control with Home Assistant 📝 Adding and retrieving notes/memos for the LLM to craft a response with Not included: * 👥 Multiple voice characters. I’ll leave it to the user to clone voices as they deem fair use. Also from experience LLM’s < GPT4 don’t consistently enough follow instructions to reliably respond in different characters AND perform multiple function calling with complicated prompts. Current features of the device <> server platform Auto-discovery of devices using multicast announcements Remembering conversation history and voice settings for each device Sending & receiving audio data from the device, packed as 16-bit, 16kHz (UDP sending, TCP receiving partially buffered into PSRAM) Speaker and microphone visualization with the LED’s, and custom LED control via the server Mute switch functionality, tap-to-wake for enabling the microphone, and setting mic timeout via the server Device-level logging to individual files and console output using rich Limitations of this release: The Arduino IDE doesn’t (yet) support the Espressif’s Audio SDK’s, such as ESP-ADF , ESP-Skainet etc. For these demo's it's not absolutely required, but if you use Espressif’s ESP-IDF with these SDK's you'd unlock features such as: VAD (Voice Activity Detection) - in this example VAD is offloaded to the server using webrtcvad, and the listening period is extended by either tapping the device or by the server sending mic keep alive timeouts (network traffic is really minimal at 16-bit, 16kHz) AEC (Acoustic Echo Cancellation) - to allow you to effectively talk over the assistant by removing the speaker output from audio input BSS (Blind Source Separation) - let’s you use both mic’s for isolating speakers based on location, and other noise suppression Wakewords and other on-device commands - I’m not a believer in this given how finicky these can be and don’t think these are and think all command logic should be handled by layers of language models on the server. The server currently only does transcription locally and uses: OpenAI for generating responses & functions calls, but if you have the hardware you could run a local LLM, using something like ToolLLM for calling API’s to add almost any capabilities you’d wish. Text-to-speech from Elevenlabs - this is fair to say the easiest to get running, fastest and most expressive option out there but FWIR data policy is a little dubious so careful about sending anything too sensitive. I’d really like to see comparable performing open source options that you can run locally Conversation flow is highly serialized, i.e. recording > transcription > LLM > TTS needs to finish each step before moving onto the next. Not included here is feeding incomplete transcriptions to a smaller model, and streaming slower LLM's like GPT4 to Elevenlabs and sending streaming responses back, it's currently a little too hacky to include in this release. No wakeword usage, mostly done intentionally as I feel uttering a wake-word before every response is a terrible experience. This currently uses a combo of VAD, mic-timeouts sent from server, tap-to-wake, mute switch usage etc. 
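For anyone curious how the server-side VAD mentioned above can work on the raw device stream, here is a minimal sketch using webrtcvad on 16-bit, 16 kHz mono PCM. It is an illustration only — the actual silence thresholds, keep-alive timeouts and buffering live in this repo's server code:

```python
import webrtcvad

SAMPLE_RATE = 16000   # Hz, matches the 16-bit / 16 kHz stream sent by the device
FRAME_MS = 30         # webrtcvad accepts 10, 20 or 30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 16-bit mono = 2 bytes/sample

vad = webrtcvad.Vad(2)  # aggressiveness 0 (lenient) .. 3 (strict)

def speech_flags(pcm: bytes):
    """Yield one speech/non-speech decision per 30 ms frame of the PCM buffer;
    a long enough run of non-speech frames can end the listening window."""
    for i in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        yield vad.is_speech(pcm[i:i + FRAME_BYTES], SAMPLE_RATE)
```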
Not included here are experiments running a smaller, faster LLM for classification on the running transcription before handing off to a larger LLM with a specific prompt. Other areas for improvement These are things I didn't get time to implement but I believe would be invaluable and pretty achievable:
* Speaker diarization - know who is saying what, and have the LLM engage in multi-user conversations or infer when it isn't being spoken to
* Interruptions - requires AEC for simultaneous listening and playback
* Smaller local models/LLM's for running classification, detecting intent and routing to larger LLM's Installation 🖥️ Server Ensure you can install Whisper and run at least the base model, following any debugging steps they have if not. If you can get past that, it should be as simple as: cd server
pip install -r requirements.txt Adjust settings in the config.yaml , and tweak aspects such as how much silence is needed to start processing to trade-off snappiness vs avoiding cutting off the user. Add your Elevenlabs token to credentials.json and ensure you have a cloned voice in your account that you set in the config.yaml under elevenlabs_default_voice You'll also need a greeting WAV set in config.yaml under greeting_wav , that will be sent to devices on connecting to the WiFi. This is up to you to record or procure ( e.g. ) A small subset of the config parameters can be set as optional arguments when running the script. For e.g. the following will run the server with note-taking, Home Assistant, Maubot, real sending of messages enabled (a safe guard disabled by default), and a smaller English only Whisper model for transcription. python server.py --n --ha --mb --send --whisper base.en 🏡 Home Assistant I recommend setting this up on the same server or one that is always plugged in on your network, following the Docker Compose instructions Then go through the onboarding, setup a user, name your devices and get a Long Lived token to add to credentials.json together with the URL e.g. http://my-local-server:8123/ 🤖 Maubot Follow instructions here to setup Maubot with your Beeper account. Ensure the correct URL is setup in config.yaml , set send_replies to True if your friends are forgiving of the odd mistakes, and set a footer . Don’t have Beeper yet and can’t wait? Try setup a Matrix bridge yourself and a custom function definition for OpenAI function calling (and share how you did it!) Following this example you can also integrate e-mail. 📟 Firmware Irrespective of what you use for development, the quickest & least error prone setup for building & flashing firmware is probably installing the Arduino IDE Software , and then using this IDE or your preference i.e. VSCode for development (Copilot) Add the ESP32 boards as detailed here (TL;DR add https://espressif.github.io/arduino-esp32/package_esp32_index.json to Preferences > Additional Boards Manager URL’s ) Under Boards Manager, install “esp32” by Espressif Systems Under Library Manager, install “Adafruit NeoPixel Library” Clone this repo to Documents/Arduino for simplicity. Add your WiFi credentials to credentials.h Run bash setup-git-hash.sh to add a header with the git-hash (optional). This will then automatically update after commits, and help track the firmware that your devices are running from the server side. Open File > Sketchbook > onju-home > onjuino Select Tools > Board > esp32 > ESP32S3 Dev Module Under Tools ensure: USB CDC on Boot set to Enabled PSRAM set to OPI PSRAM Board is plugged in and Port is selected (you may need to install USB bridge drivers as detailed by Espressif, don’t worry if name is incorrect) Build and upload If not reset, press the reset button. In Serial Monitor you can also send r to reset the device (assuming it is already booted) 🧩 Hardware Preview schematics & PCB here You should be able to download files, otherwise they are in the folder hardware in Altium format. Feel free to modify & improve this design and share your updates! You can order PCBA's directly from PCBWay here . I've used a few suppliers and they are of the most reliable I've experienced for turnkey assembly at that pricepoint so I'm happy to point business their way. 
(Other options of selling single units, with margins, ended up forcing a pricepoint > Google Nest Mini itself, and wouldn't allow shipment into EU/UK without certification so I abandoned this) I will be sharing more detailed instructions for replacement. Replacement gaskets for the microphone & LED's can be made using adhesive foam and a punch set ) for example ❓Questions Does this replace my Google Nest Mini? While this replicates the interfaces of the Google Nest Mini, don’t expect this to be a 1:1 replacement, for e.g. it is not intended to be a music playback device (although there is probably no reason it couldn’t be developed to be used as such). It’s also worth re-iterating that like the Google Nest Mini, this requires a separate server, although this can be in your home running local models instead of in a Google datacenter. The original is well tested, maintained, certified and works out the box, while this is essentially a dev board with some neat examples for you to build on top of What if I don’t have a Google Nest Mini but still want to use this? Fortunately they’re still being sold, you may find deals for <$40 which is pretty good for the quality of speaker and form factor. I picked up quite a few from eBay, just make sure you get the 2nd gen. The adventurous can get try replacement shells from AliExpress for e.g., but you’ll still need a base, power input, mute switch, speaker & mount, capacitive touch panels, and replacement gaskets etc. A hero out there could design a custom enclosure that fits an off-the-shelf speaker. But I’m really impatient and want to get hacking away! What can I do? a) if you can commit to making significant contributions to the codebase and/or major contributions to the board design or RF review, we may be able to make early samples available b) if you don’t need the form factor, don’t mind rolling up my sleeves, and have some HW experience, you can breadboard it out with readily available components until you can get your hands on an order. Here are the components that should be able to get a demo running (🌸 Adafruit link for convenience but shop around wherever you’d like) ESP32-S3 devboard, ideally w/ PSRAM (e.g. QT Py S3 or ESP32-S3 ) Microphone (only need 1 for the Arduino implementation, ensure it's a SPH0645 to limit debugging) Amplifier Speaker Neopixel LED strip - just set the firmware to the correct # Breadboard & wire kit (you can use protruding pieces of wire for cap touch) You'll need to update the custom_boards.h with your pin mapping 🍐 PR's, issues, suggestions & general feedback welcome!🏡;A hackable AI home assistant platform;[] | justLV/onju-voice |
jpudysz/react-native-unistyles;Documentation Start here API Show case Examples Features 🚀 Shared core with C++ and JSI bindings 🌉 Supports new architecture and bridgeless mode 🔥 Crazy performance, adds under 0.1 ms to your StyleSheet 🎳 Share up to 100% of your styles across platforms in monorepo 🎯 Doesn't introduce new components, everything is packed in one hook ⚛️ No React Context 🖥️ Supports custom breakpoints, css-like media queries and variants 🎨 Register multiple themes and change them with single function call 🥳 Compatible with most popular platforms 🛡️ ~99% Test coverage 🔌 Extend stylesheets with your own plugins ⚔️ No 3rd party dependencies and much much more! Sponsors How to become a sponsor? UI Kits Installation shell
yarn add react-native-unistyles Benchmarks Discord Looking for help or you want to chat with me? Join Discord Sponsor my work If you found the react-native-unistyles time-saving and valuable, please consider sponsoring my work. Your support enables me to continue creating libraries with a fresh approach. Github: https://github.com/sponsors/jpudysz Ko-fi: https://ko-fi.com/jpudysz Your support is greatly appreciated and helps me dedicate more time and resources to creating quality libraries. Thank you for all the support! License MIT;Level up your React Native StyleSheet;react,react-native,react-native-web,typescript,expo,react-native-macos,react-native-windows | jpudysz/react-native-unistyles |
KoljaB/RealtimeSTT;RealtimeSTT Easy-to-use, low-latency speech-to-text library for realtime applications About the Project RealtimeSTT listens to the microphone and transcribes voice into text. Hint: Check out Linguflex , the original project from which RealtimeSTT is spun off. It lets you control your environment by speaking and is one of the most capable and sophisticated open-source assistants currently available. It's ideal for: Voice Assistants Applications requiring fast and precise speech-to-text conversion https://github.com/KoljaB/RealtimeSTT/assets/7604638/207cb9a2-4482-48e7-9d2b-0722c3ee6d14 Updates Latest Version: v0.1.15 See release history . Hint: Since we use the multiprocessing module now, ensure to include the if __name__ == '__main__': protection in your code to prevent unexpected behavior, especially on platforms like Windows. For a detailed explanation on why this is important, visit the official Python documentation on multiprocessing . Features Voice Activity Detection : Automatically detects when you start and stop speaking. Realtime Transcription : Transforms speech to text in real-time. Wake Word Activation : Can activate upon detecting a designated wake word. Hint : Check out RealtimeTTS , the output counterpart of this library, for text-to-voice capabilities. Together, they form a powerful realtime audio wrapper around large language models. Tech Stack This library uses: Voice Activity Detection WebRTCVAD for initial voice activity detection. SileroVAD for more accurate verification. Speech-To-Text Faster_Whisper for instant (GPU-accelerated) transcription. Wake Word Detection Porcupine for wake word detection. These components represent the "industry standard" for cutting-edge applications, providing the most modern and effective foundation for building high-end solutions. Installation bash
pip install RealtimeSTT This will install all the necessary dependencies, including a CPU support only version of PyTorch. Although it is possible to run RealtimeSTT with a CPU installation only (use a small model like "tiny" or "base" in this case) you will get way better experience using: GPU Support with CUDA (recommended) Additional steps are needed for a GPU-optimized installation. These steps are recommended for those who require better performance and have a compatible NVIDIA GPU. Note : To check if your NVIDIA GPU supports CUDA, visit the official CUDA GPUs list . To use RealtimeSTT with GPU support via CUDA please follow these steps: Install NVIDIA CUDA Toolkit 11.8 : Visit NVIDIA CUDA Toolkit Archive . Select operating system and version. Download and install the software. Install NVIDIA cuDNN 8.7.0 for CUDA 11.x : Visit NVIDIA cuDNN Archive . Click on "Download cuDNN v8.7.0 (November 28th, 2022), for CUDA 11.x". Download and install the software. Install ffmpeg : Note : Installation of ffmpeg might not actually be needed to operate RealtimeSTT *thanks to jgilbert2017 for pointing this out You can download an installer for your OS from the ffmpeg Website . Or use a package manager: On Ubuntu or Debian : bash
sudo apt update && sudo apt install ffmpeg On Arch Linux : bash
sudo pacman -S ffmpeg On MacOS using Homebrew ( https://brew.sh/ ): bash
brew install ffmpeg On Windows using Winget official documentation : bash
winget install Gyan.FFmpeg On Windows using Chocolatey ( https://chocolatey.org/ ): bash
choco install ffmpeg On Windows using Scoop ( https://scoop.sh/ ): bash
scoop install ffmpeg Install PyTorch with CUDA support : bash
pip uninstall torch
pip install torch==2.2.2+cu118 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu118 Quick Start Basic usage: Manual Recording Start and stop of recording are manually triggered. python
from RealtimeSTT import AudioToTextRecorder

recorder = AudioToTextRecorder()
recorder.start()
recorder.stop()
print(recorder.text()) Automatic Recording Recording based on voice activity detection. python
with AudioToTextRecorder() as recorder:
print(recorder.text()) When running recorder.text in a loop it is recommended to use a callback, allowing the transcription to be run asynchronously: ```python
def process_text(text):
print (text) while True:
recorder.text(process_text)
``` Wakewords Keyword activation before detecting voice. Write the comma-separated list of your desired activation keywords into the wake_words parameter. You can choose wake words from these list: alexa, americano, blueberry, bumblebee, computer, grapefruits, grasshopper, hey google, hey siri, jarvis, ok google, picovoice, porcupine, terminator. ```python
recorder = AudioToTextRecorder(wake_words="jarvis") print('Say "Jarvis" then speak.')
print(recorder.text())
``` Callbacks You can set callback functions to be executed on different events (see Configuration ) : ```python
def my_start_callback():
print("Recording started!") def my_stop_callback():
print("Recording stopped!") recorder = AudioToTextRecorder(on_recording_start=my_start_callback,
on_recording_stop=my_stop_callback)
``` Feed chunks If you don't want to use the local microphone set use_microphone parameter to false and provide raw PCM audiochunks in 16-bit mono (samplerate 16000) with this method: python
recorder.feed_audio(audio_chunk) Shutdown You can shutdown the recorder safely by using the context manager protocol: python
with AudioToTextRecorder() as recorder:
[...] Or you can call the shutdown method manually (if using "with" is not feasible): python
recorder.shutdown() Testing the Library The test subdirectory contains a set of scripts to help you evaluate and understand the capabilities of the RealtimeSTT library. Test scripts depending on the RealtimeTTS library may require you to enter your Azure service region within the script.
When using OpenAI-, Azure- or Elevenlabs-related demo scripts the API Keys should be provided in the environment variables OPENAI_API_KEY, AZURE_SPEECH_KEY and ELEVENLABS_API_KEY (see RealtimeTTS ) simple_test.py Description : A "hello world" styled demonstration of the library's simplest usage. realtimestt_test.py Description : Showcasing live-transcription. wakeword_test.py Description : A demonstration of the wakeword activation. translator.py Dependencies : Run pip install openai realtimetts . Description : Real-time translations into six different languages. openai_voice_interface.py Dependencies : Run pip install openai realtimetts . Description : Wake word activated and voice based user interface to the OpenAI API. advanced_talk.py Dependencies : Run pip install openai keyboard realtimetts . Description : Choose TTS engine and voice before starting AI conversation. minimalistic_talkbot.py Dependencies : Run pip install openai realtimetts . Description : A basic talkbot in 20 lines of code. The example_app subdirectory contains a polished user interface application for the OpenAI API based on PyQt5. Configuration Initialization Parameters for AudioToTextRecorder When you initialize the AudioToTextRecorder class, you have various options to customize its behavior. General Parameters model (str, default="tiny"): Model size or path for transcription. Options: 'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1', 'large-v2'. Note: If a size is provided, the model will be downloaded from the Hugging Face Hub. language (str, default=""): Language code for transcription. If left empty, the model will try to auto-detect the language. Supported language codes are listed in Whisper Tokenizer library . compute_type (str, default="default"): Specifies the type of computation to be used for transcription. See Whisper Quantization input_device_index (int, default=0): Audio Input Device Index to use. gpu_device_index (int, default=0): GPU Device Index to use. The model can also be loaded on multiple GPUs by passing a list of IDs (e.g. [0, 1, 2, 3]). device (str, default="cuda"): Device for model to use. Can either be "cuda" or "cpu". on_recording_start : A callable function triggered when recording starts. on_recording_stop : A callable function triggered when recording ends. on_transcription_start : A callable function triggered when transcription starts. ensure_sentence_starting_uppercase (bool, default=True): Ensures that every sentence detected by the algorithm starts with an uppercase letter. ensure_sentence_ends_with_period (bool, default=True): Ensures that every sentence that doesn't end with punctuation such as "?", "!" ends with a period use_microphone (bool, default=True): Usage of local microphone for transcription. Set to False if you want to provide chunks with feed_audio method. spinner (bool, default=True): Provides a spinner animation text with information about the current recorder state. level (int, default=logging.WARNING): Logging level. handle_buffer_overflow (bool, default=True): If set, the system will log a warning when an input overflow occurs during recording and remove the data from the buffer. beam_size (int, default=5): The beam size to use for beam search decoding. initial_prompt (str or iterable of int, default=None): Initial prompt to be fed to the transcription models. suppress_tokens (list of int, default=[-1]): Tokens to be suppressed from the transcription output. 
on_recorded_chunk : A callback function that is triggered when a chunk of audio is recorded. Submits the chunk data as parameter. debug_mode (bool, default=False): If set, the system prints additional debug information to the console. Real-time Transcription Parameters Note : When enabling realtime description a GPU installation is strongly advised. Using realtime transcription may create high GPU loads. enable_realtime_transcription (bool, default=False): Enables or disables real-time transcription of audio. When set to True, the audio will be transcribed continuously as it is being recorded. realtime_model_type (str, default="tiny"): Specifies the size or path of the machine learning model to be used for real-time transcription. Valid options: 'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1', 'large-v2'. realtime_processing_pause (float, default=0.2): Specifies the time interval in seconds after a chunk of audio gets transcribed. Lower values will result in more "real-time" (frequent) transcription updates but may increase computational load. on_realtime_transcription_update : A callback function that is triggered whenever there's an update in the real-time transcription. The function is called with the newly transcribed text as its argument. on_realtime_transcription_stabilized : A callback function that is triggered whenever there's an update in the real-time transcription and returns a higher quality, stabilized text as its argument. beam_size_realtime (int, default=3): The beam size to use for real-time transcription beam search decoding. Voice Activation Parameters silero_sensitivity (float, default=0.6): Sensitivity for Silero's voice activity detection ranging from 0 (least sensitive) to 1 (most sensitive). Default is 0.6. silero_use_onnx (bool, default=False): Enables usage of the pre-trained model from Silero in the ONNX (Open Neural Network Exchange) format instead of the PyTorch format. Default is False. Recommended for faster performance. webrtc_sensitivity (int, default=3): Sensitivity for the WebRTC Voice Activity Detection engine ranging from 0 (least aggressive / most sensitive) to 3 (most aggressive, least sensitive). Default is 3. post_speech_silence_duration (float, default=0.2): Duration in seconds of silence that must follow speech before the recording is considered to be completed. This ensures that any brief pauses during speech don't prematurely end the recording. min_gap_between_recordings (float, default=1.0): Specifies the minimum time interval in seconds that should exist between the end of one recording session and the beginning of another to prevent rapid consecutive recordings. min_length_of_recording (float, default=1.0): Specifies the minimum duration in seconds that a recording session should last to ensure meaningful audio capture, preventing excessively short or fragmented recordings. pre_recording_buffer_duration (float, default=0.2): The time span, in seconds, during which audio is buffered prior to formal recording. This helps counterbalancing the latency inherent in speech activity detection, ensuring no initial audio is missed. on_vad_detect_start : A callable function triggered when the system starts to listen for voice activity. on_vad_detect_stop : A callable function triggered when the system stops to listen for voice activity. Wake Word Parameters wake_words (str, default=""): Wake words for initiating the recording. Multiple wake words can be provided as a comma-separated string. 
Supported wake words are: alexa, americano, blueberry, bumblebee, computer, grapefruits, grasshopper, hey google, hey siri, jarvis, ok google, picovoice, porcupine, terminator wake_words_sensitivity (float, default=0.6): Sensitivity level for wake word detection (0 for least sensitive, 1 for most sensitive). wake_word_activation_delay (float, default=0): Duration in seconds after the start of monitoring before the system switches to wake word activation if no voice is initially detected. If set to zero, the system uses wake word activation immediately. wake_word_timeout (float, default=5): Duration in seconds after a wake word is recognized. If no subsequent voice activity is detected within this window, the system transitions back to an inactive state, awaiting the next wake word or voice activation. on_wakeword_detected : A callable function triggered when a wake word is detected. on_wakeword_timeout : A callable function triggered when the system goes back to an inactive state after when no speech was detected after wake word activation. on_wakeword_detection_start : A callable function triggered when the system starts to listen for wake words on_wakeword_detection_end : A callable function triggered when stopping to listen for wake words (e.g. because of timeout or wake word detected) Contribution Contributions are always welcome! Shoutout to Steven Linn for providing docker support. License MIT Author Kolja Beigel Email: kolja.beigel@web.de GitHub;A robust, efficient, low-latency speech-to-text library with advanced voice activity detection, wake word activation and instant transcription.;python,realtime,speech-to-text | KoljaB/RealtimeSTT |
samwit/langchain-tutorials;langchain-tutorials A set of LangChain Tutorials from my youtube playlist https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ;A set of LangChain Tutorials from my youtube channel ;[] | samwit/langchain-tutorials |
bigbrodude6119/flipper-zero-evil-portal;Flipper Zero Evil Portal An evil captive portal Wi-Fi access point using the Flipper Zero and Wi-Fi dev board About This project is a work in progress. This project will turn your Wi-Fi dev board into an open access point. When users try to connect to this access point they will be served a fake login screen. User credentials are sent to the Flipper and logged on the SD card. Portals The portal I initially provided is just an (ugly) example, please check out the community portals folder for more portals.
Contributors are welcome and very much needed! Users, remember to rename the new portal as index.html when you drag it on the flipper SD card. Disclaimer I am not a C developer and I am using this project as a way to learn more about esp32, flipper zero and, C programming. Contributors are welcome! Please feel free to open a PR at any time in the dev branch. This program is for educational purposes only. Getting Started The pre-built fap file is made for the unleashed custom firmware. If you are on a different firmware you can download the evil_portal.fap file at flipc.org or you can build the .fap file yourself by following these instructions . Note The official Flipper Zero firmware is now supported again thanks to @sboger. Install the pre-built app on the flipper Go to the releases section on this repo. Download and extract the unleashed-evil_portal.fap.zip file from the latest release. This file will contain the evil_portal.fap file for the Unleashed firmware. Put the evil_portal.fap file into the apps/GPIO/ folder on your Flipper SD card. In the releases section you will also need to download and extract the evil_portal_sd_folder.zip folder. This .zip file contains a evil_portal folder. Put the evil_portal folder into the apps_data folder on your SD card.
This is an example of your Flipper SD card if done correctly. apps/
GPIO/
evil_portal.fap
apps_data/
evil_portal/
ap.config.txt
index.html
logs/
<empty> You should be able to see the [ESP32] Evil Portal app on your flipper zero now. Installing/flashing the Wi-Fi dev board There is now an easier method (Option One) of flashing your ESP32 dev board. Thank you to reddit user dellycem for showing me how to do this. Note: the following boards are supported via this method The official Wifi dev board Alternative ESP32-S2 boards like this one from AWOK Dynamics ESP32-WROOM boards The alternative ESP32-S2 boards can use the same .bin files as the official dev board. The esp32 wroom board has it's own pre-compiled .bin files provided in the 0.0.2 release. Please check out the required pin connections bellow. If you are not using one of these boards you will have to go with option two. Option One - Official Wi-Fi Dev Board Starting with version 0.0.2 I will include pre-compiled .bin files for the official WiFi Dev board. This will allow users to flash their dev boards via a website instead of through the Arduino IDE. Download and extract the wifi_dev_board.zip file that is part of the latest release. This will contain 4 .bin files. Connect your WiFi dev board to your computer while holding the boot button. Go to the website https://esp.huhn.me/ and press the Connect button. Select the port associated with your board. Add each of the 4 .bin files using the blue Add button. Enter the following addresses in the text field to the left of each file. 1000 - EvilPortal.ino.bootloader.bin 8000 - EvilPortal.ino.partitions.bin e000 - boot_app0.bin 10000 - EvilPortal.ino.bin Press the Program button and wait while the board is being flashed. Assuming you do not have any errors you are good to go. Option Two - Other compatible boards Follow the steps below to flash the other compatible ESP32 boards. You may have to adjust the steps below for your specific board: Download and install the Arduino IDE from here . Download zip/clone dependency AsyncTCP to file. Download zip/clone dependency ESPAsyncWebServer to file. Unzip both dependencies to your Arduino library folder. On Windows this is usually C:\Users\<username>\Documents\Arduino\libraries . Go to the releases section on this repo and download the EvilPortal.ino file, open it with Arduino IDE. Go to File > Preferences and paste the following two URL's into the Additional Boards Manager URLs field: https://dl.espressif.com/dl/package_esp32_index.json
https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_dev_index.json Go to Tools > Board > Boards Manager... and search for esp32 and install esp32 by Espressif Systems . Go to Tools > Board and select ESP32S2 Dev Module or the ESP32 board that you are trying to flash. On your ESP32-S2 Wi-Fi module, hold the BOOT button. Connect your board to your computer, keep holding the BOOT button (holding for just 3-5 seconds and releasing may be fine, continuously holding worked better for me). Go to Tools > Port and select the port that appeared when you connected your ESP32. Click the "Upload" button in the top left corner of the Arduino IDE. On success, you will see something like: Hash of data verified.
Leaving...
WARNING: ESP32-S2 (revision v0.0) chip was placed into download mode... Plug in the Wi-Fi Dev board to the flipper, press the reset button on the Wi-Fi dev board and you should now see a solid blue light. Usage Plug in the Wi-Fi Dev board to the flipper. Open the app on the Flipper and press Start portal on the main menu. After a few seconds you should start to see logs coming in from your Wi-Fi dev board and the AP will start and the LED will turn green. The AP will take the name that is in the ap.config.txt file located on your Flipper in the apps_data/evil_portal/ folder. When you connect to the AP a web page will open after a few seconds. This web page contains the HTML located in the index.html file located on your Flipper in the apps_data/evil_portal/ folder. You can stop the portal by pressing Stop portal on the main menu. The LED should turn blue. You can manually save logs using the Save logs command. Logs will be stored in the logs folder that is in your apps_data/evil_portal/ folder. Logs will automatically be saved when exiting the app or when the current log reaches 4000 characters. Alternative boards The ESP32 wroom boards will not have the LED indicators in the 0.0.2 release and if you are compiling for a Wroom board you will have to comment out the code dealing with the LEDs.
The pre-compiled .bin files for that board already have this change. I plan on making this process easier in the next release. The following pins are required for the board to work: 3.3v GND TX RX Keep in mind that the TX/RX pins go to the opposite pins on the flipper. So TX on your ESP32 goes to RX on the flipper. For my Wroom board I had to use RX0/TX0, your board may be a little different. Troubleshooting If you run into any issues make sure that you have the required files set up on the Flipper apps_data folder on the Flipper SD card. If the AP won't start or you have other issues try pressing reset on the Wi-Fi dev board, waiting a few seconds, and pressing Start portal on the main menu. It is important to give the dev board some time to load the html files from the Flipper. If you have the Marauder firmware on your dev board you might need to enable Erase All Flash Before Sketch Upload before flashing or follow the Erasing firmware instructions below. If you are using the web flasher there is an erase function on the website. If you see garbage characters in the AP name you will need to press the reset button on the board. Some users are reporting that the captive portal login does not open on some Android phones. Erasing firmware Assuming you have the Flipper Zero Wi-Fi Wrover Development Module ( ESP32-S2 ): Install Python . Open a command terminal as an administrator: On Windows press ⊞Win+R, type "cmd", and press CTRL+SHIFT+ENTER. In the terminal type the following to install esptool via Python package manager: pip install esptool Install setuptools dependencies: pip install setuptools Enter the following command into your terminal, do not run it yet: python -m esptool --chip esp32s2 erase_flash On your ESP32-S2 Wi-Fi module, hold the BOOT button. Connect your ESP32-S2 to your computer, keep holding the BOOT button. In your terminal press enter to run the command from step 5. When successful you will get the message Chip erase completed successfully in ___s (time in seconds suffixed with "s"). Unplug/reset your board. Todo I plan on working on this in my free time. Here is my todo list. ~~Support for multiple portals~~ Coming in 0.0.3 thanks to Nycz-lab & NikIsHere 🙏 Enter AP name on the Flipper Scan nearby APs and clone their SSID (good idea leedave!) Add a config file for general app settings Create cleaner log files that are easier to read Randomize mac address so that the network shows up as a new network each time Clean up code & implement best practices License Distributed under the MIT License. See LICENSE.txt for more information. Acknowledgments I was only able to create this using the following apps as examples flipperzero-wifi-marauder UART_Terminal flipper-zero-fap-boilerplate Create Captive Portal Using ESP32 Contact me You can message me on my reddit account bigbrodude6119;Evil portal app for the flipper zero + WiFi dev board;[] | bigbrodude6119/flipper-zero-evil-portal |
spyboy-productions/CloakQuest3r;CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare and other alternatives, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures. 🚀 Run Online Free On Google Colab , Google Shell , Binder Key Features: - Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets. Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains. IP address History: Retrieve historical IP address information for a given domain. It uses the ViewDNS service to fetch and display details such as IP address, location, owner, and last seen date. SSL Certificate Analysis: Extract and analyze SSL certificates associated with the target domain. This could provide additional information about the hosting infrastructure and potentially reveal the true IP address. SecurityTrails API (optional): If you add your free SecurityTrails API key to the config.ini file, you can retrieve historical IP information from SecurityTrails. Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables the scanning of a substantial list of subdomains without significantly extending the execution time. Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing. With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers. Featured: CloakQuest3r is one of the Top 20 Most Popular Hacking Tools in 2023 by KitPloit Top 20 Most Popular Hacking Tools in 2023 Limitation ```diff Sometimes it can't detect the real Ip. CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement. We created a proof of concept, but it's not well-written. We welcome pull requests to improve it. False Negatives: CloakReveal3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures. Dynamic Environments: Websites' infrastructure and configurations can change over time. 
The tool may not capture these changes, potentially leading to outdated information. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare. ``` This tool is a Proof of Concept and is for Educational Purposes Only. OS compatibility : Requirements: How to Use: 1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze. git clone https://github.com/spyboy-productions/CloakQuest3r.git cd CloakQuest3r pip3 install -r requirements.txt For Termux (Android) users: if you have trouble installing cryptography via requirements.txt, use the command given below `pkg install python-cryptography` python cloakquest3r.py example.com The tool will check if the website is using Cloudflare. If not, it will inform you and ask if you still want to proceed. If Cloudflare is detected, it will first print historical IP records and then it will scan for subdomains and identify their real IP addresses. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing. It simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets. Optional : SecurityTrails API Retrieves historical IP information from SecurityTrails. If you have an API key, add it to the configuration file (config.ini). Upon initial execution of the script, it generates a config.ini file with the following content: ini
[DEFAULT]
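; illustrative note (not part of the generated file): replace the placeholder value below with your own free SecurityTrails API key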
securitytrails_api_key = your_api_key Subsequently, the script attempts to retrieve data from the SecurityTrails API. If the retrieval fails due to reasons such as quota limitations or site unavailability, the corresponding function is gracefully skipped. Contribution: Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request. 😴🥱😪💤 ToDo: Discover IP through website API calls (POC) Save all info on a Txt/CSV file. 💬 If having an issue Chat here ⭔ Snapshots: If you find this GitHub repo useful, please consider giving it a star! ⭐️;Uncover the true IP address of websites safeguarded by Cloudflare & Others;bypass-cloudflare,bypass-hostname,bypass-waf,dnssec,exploit,information-gathering,osint-tool,pentesting-tools,subdomain-scanner,cloudflare | spyboy-productions/CloakQuest3r |
gustavoguichard/string-ts;Strongly-typed string functions for all! 😬 The problem When you are working with literal strings, the string manipulation functions only work at the runtime level and the types don't follow those transformations.
You end up losing type information and possibly having to cast the result. ts
const str = 'hello-world'
const result = str.replace('-', ' ') // you should use: as 'hello world'
// ^? string 🤓 The solution This library aims to solve this problem by providing a set of common functions that work with literal strings at both type and runtime level. ```ts
import { replace } from 'string-ts' const str = 'hello-world'
const result = replace(str, '-', ' ')
// ^ 'hello world'
``` 🔍 Why this matters TypeScript yields the best static analysis when types are highly specific.
Literals are more specific than type string .
This library preserves literals (and unions of literals) after transformations, unlike most existing utility libraries (and built-in string methods.) I still don't get the purpose of this library 🤔 In-depth example In the below example, I want to get a strongly-typed, camel-case version of process.env .
One flow results in a loose type, and the other results in a more precise type.
This example should illustrate the highly-specific and flexible nature of string-ts . ```ts
import { deepCamelKeys } from 'string-ts'
import { camelCase, mapKeys } from 'lodash-es'
import z from 'zod' const EnvSchema = z.object({
NODE_ENV: z.string(),
}) function getEnvLoose() {
const rawEnv = EnvSchema.parse(process.env)
const env = mapKeys(rawEnv, (_v, k) => camelCase(k))
// ^? Dictionary<string> // Dictionary<string> is too loose
// TypeScript is okay with this, 'abc' is expected to be of type string // This will have unexpected behavior at runtime
console.log(env.abc)
} function getEnvPrecise() {
const rawEnv = EnvSchema.parse(process.env)
const env = deepCamelKeys(rawEnv)
// ^? { nodeEnv: string } // Error: Property 'abc' does not exist on type '{ nodeEnv: string; }'
// Our type is more specific, so TypeScript catches this error.
// This mistake will be caught at compile time
console.log(env.abc)
} function main() {
getEnvLoose()
getEnvPrecise()
} main()
``` 📦 Installation bash
npm install string-ts 🌳 Tree shaking string-ts has been designed with tree shaking in mind.
We have tested it with build tools like Webpack, Vite, Rollup, etc. 👌 Supported TypeScript versions string-ts currently only works on TypeScript v5+. It also only works with common ASCII characters. We don't plan to support international characters or emojis. 📖 API Runtime counterparts of native type utilities capitalize uncapitalize Strongly-typed alternatives to native runtime utilities charAt concat endsWith includes join length padEnd padStart repeat replace replaceAll slice split startsWith toLowerCase toUpperCase trim trimEnd trimStart Strongly-typed alternatives to common loosely-typed functions camelCase constantCase delimiterCase kebabCase pascalCase reverse snakeCase titleCase truncate words Strongly-typed shallow transformation of objects camelKeys constantKeys delimiterKeys kebabKeys pascalKeys replaceKeys snakeKeys Strongly-typed deep transformation of objects deepCamelKeys deepConstantKeys deepDelimiterKeys deepKebabKeys deepPascalKeys deepSnakeKeys Type Utilities Native TS type utilities General Type utilities from this library Casing type utilities Other exported type utilities Runtime-only utilities deepTransformKeys Runtime counterparts of native type utilities capitalize Capitalizes the first letter of a string. This is a runtime counterpart of Capitalize<T> from src/types.d.ts . ```ts
import { capitalize } from 'string-ts' const str = 'hello world'
const result = capitalize(str)
// ^ 'Hello world'
``` uncapitalize Uncapitalizes the first letter of a string. This is a runtime counterpart of Uncapitalize<T> from src/types.d.ts . ```ts
import { uncapitalize } from 'string-ts' const str = 'Hello world'
const result = uncapitalize(str)
// ^ 'hello world'
``` Strongly-typed alternatives to native runtime utilities charAt This function is a strongly-typed counterpart of String.prototype.charAt . ```ts
import { charAt } from 'string-ts' const str = 'hello world'
const result = charAt(str, 6)
// ^ 'w'
``` concat This function is a strongly-typed counterpart of String.prototype.concat . ```ts
import { concat } from 'string-ts' const result = concat('a', 'bc', 'def')
// ^ 'abcdef'
``` endsWith This function is a strongly-typed counterpart of String.prototype.endsWith . ```ts
import { endsWith } from 'string-ts' const result = endsWith('abc', 'c')
// ^ true
``` includes This function is a strongly-typed counterpart of String.prototype.includes . ```ts
import { includes } from 'string-ts' const result = includes('abcde', 'bcd')
// ^ true
``` join This function is a strongly-typed counterpart of Array.prototype.join . ```ts
import { join } from 'string-ts' const str = ['hello', 'world']
const result = join(str, ' ')
// ^ 'hello world'
``` length This function is a strongly-typed counterpart of String.prototype.length . ```ts
import { length } from 'string-ts' const str = 'hello'
const result = length(str)
// ^ 5
``` padEnd This function is a strongly-typed counterpart of String.prototype.padEnd . ```ts
import { padEnd } from 'string-ts' const str = 'hello'
const result = padEnd(str, 10, '=')
// ^ 'hello====='
``` padStart This function is a strongly-typed counterpart of String.prototype.padStart . ```ts
import { padStart } from 'string-ts' const str = 'hello'
const result = padStart(str, 10, '=')
// ^ '=====hello'
``` repeat This function is a strongly-typed counterpart of String.prototype.repeat . ```ts
import { repeat } from 'string-ts' const str = 'abc'
const result = repeat(str, 3)
// ^ 'abcabcabc'
``` replace This function is a strongly-typed counterpart of String.prototype.replace . Warning: this is a partial implementation, as we don't fully support Regex. Using a RegExp lookup will result in a loose typing. ```ts
import { replace } from 'string-ts' const str = 'hello-world-'
const result = replace(str, '-', ' ')
// ^ 'hello world-'
const looselyTypedResult = replace(str, /-/, ' ')
// ^ string
``` replaceAll This function is a strongly-typed counterpart of String.prototype.replaceAll .
It also has a polyfill for runtimes older than ES2021. Warning: this is a partial implementation, as we don't fully support Regex. Using a RegExp lookup will result in a loose typing. ```ts
import { replaceAll } from 'string-ts' const str = 'hello-world-'
const result = replaceAll(str, '-', ' ')
// ^ 'hello world '
const looselyTypedResult = replaceAll(str, /-/g, ' ')
// ^ string
``` slice This function is a strongly-typed counterpart of String.prototype.slice . ```ts
import { slice } from 'string-ts' const str = 'hello-world'
const result = slice(str, 6)
// ^ 'world'
const result2 = slice(str, 1, 5)
// ^ 'ello'
const result3 = slice(str, -5)
// ^ 'world'
``` split This function is a strongly-typed counterpart of String.prototype.split . ```ts
import { split } from 'string-ts' const str = 'hello-world'
const result = split(str, '-')
// ^ ['hello', 'world']
``` startsWith This function is a strongly-typed counterpart of String.prototype.startsWith . ```ts
import { startsWith } from 'string-ts' const result = startsWith('abc', 'a')
// ^ true
``` toLowerCase This function is a strongly-typed counterpart of String.prototype.toLowerCase . ```ts
import { toLowerCase } from 'string-ts' const str = 'HELLO WORLD'
const result = toLowerCase(str)
// ^ 'hello world'
``` toUpperCase This function is a strongly-typed counterpart of String.prototype.toUpperCase . ```ts
import { toUpperCase } from 'string-ts' const str = 'hello world'
const result = toUpperCase(str)
// ^ 'HELLO WORLD'
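// Illustrative sketch (not from the original README): since the library preserves
// unions of literals, a union input should keep a union result type.
declare const state: 'on' | 'off'
const upper = toUpperCase(state)
//    ^ 'ON' | 'OFF'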
``` trim This function is a strongly-typed counterpart of String.prototype.trim . ```ts
import { trim } from 'string-ts' const str = ' hello world '
const result = trim(str)
// ^ 'hello world'
``` trimEnd This function is a strongly-typed counterpart of String.prototype.trimEnd . ```ts
import { trimEnd } from 'string-ts' const str = ' hello world '
const result = trimEnd(str)
// ^ ' hello world'
``` trimStart This function is a strongly-typed counterpart of String.prototype.trimStart . ```ts
import { trimStart } from 'string-ts' const str = ' hello world '
const result = trimStart(str)
// ^ 'hello world '
``` Strongly-typed alternatives to common loosely-typed functions lowerCase This function converts a string to lower case at both runtime and type levels. NOTE: this function will split by words and join them with " " , unlike toLowerCase . ```ts
import { lowerCase } from 'string-ts' const str = 'HELLO-WORLD'
const result = lowerCase(str)
// ^ 'hello world'
``` camelCase This function converts a string to camelCase at both runtime and type levels. ```ts
import { camelCase } from 'string-ts' const str = 'hello-world'
const result = camelCase(str)
// ^ 'helloWorld'
``` constantCase This function converts a string to CONSTANT_CASE at both runtime and type levels. ```ts
import { constantCase } from 'string-ts' const str = 'helloWorld'
const result = constantCase(str)
// ^ 'HELLO_WORLD'
``` delimiterCase This function converts a string to a new case with a custom delimiter at both runtime and type levels. ```ts
import { delimiterCase } from 'string-ts' const str = 'helloWorld'
const result = delimiterCase(str, '.')
// ^ 'hello.World'
``` kebabCase This function converts a string to kebab-case at both runtime and type levels. ```ts
import { kebabCase } from 'string-ts' const str = 'helloWorld'
const result = kebabCase(str)
// ^ 'hello-world'
``` pascalCase This function converts a string to PascalCase at both runtime and type levels. ```ts
import { pascalCase } from 'string-ts' const str = 'hello-world'
const result = pascalCase(str)
// ^ 'HelloWorld'
``` snakeCase This function converts a string to snake_case at both runtime and type levels. ```ts
import { snakeCase } from 'string-ts' const str = 'helloWorld'
const result = snakeCase(str)
// ^ 'hello_world'
``` titleCase This function converts a string to Title Case at both runtime and type levels. ```ts
import { titleCase } from 'string-ts' const str = 'helloWorld'
const result = titleCase(str)
// ^ 'Hello World'
``` upperCase This function converts a string to UPPER CASE at both runtime and type levels. NOTE: this function will split by words and join them with " " , unlike toUpperCase . ```ts
import { upperCase } from 'string-ts' const str = 'hello-world'
const result = upperCase(str)
// ^ 'HELLO WORLD'
``` reverse This function reverses a string. ```ts
import { reverse } from 'string-ts' const str = 'Hello StringTS!'
const result = reverse(str)
// ^ '!TSgnirtS olleH'
``` truncate This function truncates string if it's longer than the given maximum string length. The last characters of the truncated string are replaced with the omission string which defaults to "...". ```ts
import { truncate } from 'string-ts' const str = '-20someVery-weird String'
const result = truncate(str, 8)
// ^ '-20so...'
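// Hedged sketch based on the description above: a custom omission string can be
// passed as a third argument (the total length still counts the omission).
const result2 = truncate(str, 8, '[..]')
//    ^ '-20s[..]'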
``` words This function identifies the words in a string and returns a tuple of words split by separators, differences in casing, numbers, and etc. ```ts
import { words } from 'string-ts' const str = '-20someVery-weird String'
const result = words(str)
// ^ ['20', 'some', 'Very', 'weird', 'String']
``` Strongly-typed shallow transformation of objects camelKeys This function shallowly converts the keys of an object to camelCase at both runtime and type levels. ```ts
import { camelKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = camelKeys(data)
// ^ { helloWorld: { 'foo-bar': 'baz' } }
``` constantKeys This function shallowly converts the keys of an object to CONSTANT_CASE at both runtime and type levels. ```ts
import { constantKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = constantKeys(data)
// ^ { 'HELLO_WORLD': { 'fooBar': 'baz' } }
``` delimiterKeys This function shallowly converts the keys of an object to a new case with a custom delimiter at both runtime and type levels. ```ts
import { delimiterKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = delimiterKeys(data, '.')
// ^ { 'hello.world': { 'foo-bar': 'baz' } }
``` kebabKeys This function shallowly converts the keys of an object to kebab-case at both runtime and type levels. ```ts
import { kebabKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = kebabKeys(data)
// ^ { 'hello-world': { fooBar: 'baz' } }
``` pascalKeys This function shallowly converts the keys of an object to PascalCase at both runtime and type levels. ```ts
import { pascalKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = pascalKeys(data)
// ^ { HelloWorld: { FooBar: 'baz' } }
``` snakeKeys This function shallowly converts the keys of an object to snake_case at both runtime and type levels. ```ts
import { snakeKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = snakeKeys(data)
// ^ { 'hello_world': { 'fooBar': 'baz' } }
``` replaceKeys This function shallowly transforms the keys of an object by applying replace to each of its keys at both runtime and type levels. ```ts
import { replaceKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = replaceKeys(data, 'o', 'a')
// ^ { 'hellaWorld': { 'fooBar': 'baz' } }
``` Strongly-typed deep transformation of objects deepCamelKeys This function recursively converts the keys of an object to camelCase at both runtime and type levels. ```ts
import { deepCamelKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = deepCamelKeys(data)
// ^ { helloWorld: { fooBar: 'baz' } }
``` deepConstantKeys This function recursively converts the keys of an object to CONSTANT_CASE at both runtime and type levels. ```ts
import { deepConstantKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = deepConstantKeys(data)
// ^ { 'HELLO_WORLD': { 'FOO_BAR': 'baz' } }
``` deepDelimiterKeys This function recursively converts the keys of an object to a new case with a custom delimiter at both runtime and type levels. ```ts
import { deepDelimiterKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = deepDelimiterKeys(data, '.')
// ^ { 'hello.world': { 'foo.bar': 'baz' } }
``` deepKebabKeys This function recursively converts the keys of an object to kebab-case at both runtime and type levels. ```ts
import { deepKebabKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = deepKebabKeys(data)
// ^ { 'hello-world': { 'foo-bar': 'baz' } }
``` deepPascalKeys This function recursively converts the keys of an object to PascalCase at both runtime and type levels. ```ts
import { deepPascalKeys } from 'string-ts' const data = {
'hello-world': {
'foo-bar': 'baz',
},
} as const
const result = deepPascalKeys(data)
// ^ { HelloWorld: { FooBar: 'baz' } }
``` deepSnakeKeys This function recursively converts the keys of an object to snake_case at both runtime and type levels. ```ts
import { deepSnakeKeys } from 'string-ts' const data = {
helloWorld: {
fooBar: 'baz',
},
} as const
const result = deepSnakeKeys(data)
// ^ { 'hello_world': { 'foo_bar': 'baz' } }
``` Type utilities All the functions presented in this API have associated type counterparts. ts
import type * as St from 'string-ts' Native TS type utilities ts
Capitalize<'hello world'> // 'Hello world'
Lowercase<'HELLO WORLD'> // 'hello world'
Uppercase<'hello world'> // 'HELLO WORLD' General type utilities from this library ts
St.CharAt<'hello world', 6> // 'w'
St.Concat<['a', 'bc', 'def']> // 'abcdef'
St.EndsWith<'abc', 'c'> // true
St.Includes<'abcde', 'bcd'> // true
St.Join<['hello', 'world'], '-'> // 'hello-world'
St.Length<'hello'> // 5
St.PadEnd<'hello', 10, '='> // 'hello====='
St.PadStart<'hello', 10, '='> // '=====hello'
St.Repeat<'abc', 3> // 'abcabcabc'
St.Replace<'hello-world', 'l', '1'> // 'he1lo-world'
St.ReplaceAll<'hello-world', 'l', '1'> // 'he11o-wor1d'
St.Reverse<'Hello World!'> // '!dlroW olleH'
St.Slice<'hello-world', -5> // 'world'
St.Split<'hello-world', '-'> // ['hello', 'world']
St.Trim<' hello world '> // 'hello world'
St.StartsWith<'abc', 'a'> // true
St.TrimEnd<' hello world '> // ' hello world'
St.TrimStart<' hello world '> // 'hello world '
St.Truncate<'hello world', 9, '[...]'> // 'hello[...]'
St.Words<'hello-world'> // ['hello', 'world'] Casing type utilities Core ts
St.CamelCase<'hello-world'> // 'helloWorld'
St.ConstantCase<'helloWorld'> // 'HELLO_WORLD'
St.DelimiterCase<'hello world', '.'> // 'hello.world'
St.KebabCase<'helloWorld'> // 'hello-world'
St.PascalCase<'hello-world'> // 'HelloWorld'
St.SnakeCase<'helloWorld'> // 'hello_world'
St.TitleCase<'helloWorld'> // 'Hello World' Missing types Note that we do not include UpperCase and LowerCase types. These would be too close to the existing TS types Uppercase and Lowercase . One could create either by doing like so: ts
type LowerCase<T extends string> = Lowercase<DelimiterCase<T, ' '>>
type UpperCase<T extends string> = Uppercase<DelimiterCase<T, ' '>>
// or
type LowerCase<T extends string> = ReturnType<typeof lowerCase<T>>
type UpperCase<T extends string> = ReturnType<typeof upperCase<T>> Shallow object key transformation ts
St.CamelKeys<{
'hello-world': { 'foo-bar': 'baz' }
}> // { helloWorld: { 'foo-bar': 'baz' } }
St.ConstantKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'HELLO_WORLD': { fooBar: 'baz' } }
St.DelimiterKeys<{ 'hello-world': { 'foo-bar': 'baz' } }, '.'>
// { 'hello.world': { 'foo-bar': 'baz' } }
St.KebabKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'hello-world': { fooBar: 'baz' } }
St.PascalKeys<{
'hello-world': { 'foo-bar': 'baz' }
}> // { HelloWorld: { 'foo-bar': 'baz' } }
St.SnakeKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'hello_world': { fooBar: 'baz' } } Deep object key transformation ```ts
St.DeepCamelKeys<{
'hello-world': { 'foo-bar': 'baz' }
}> // { helloWorld: { fooBar: 'baz' } }
St.DeepConstantKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'HELLO_WORLD': { 'FOO_BAR': 'baz' } }
St.DeepDelimiterKeys<
{
'hello-world': { 'foo-bar': 'baz' }
},
'.'
> // { 'hello.world': { 'foo.bar': 'baz' } }
St.DeepKebabKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'hello-world': { 'foo-bar': 'baz' } }
St.DeepPascalKeys<{
'hello-world': { 'foo-bar': 'baz' }
}> // { HelloWorld: { FooBar: 'baz' } }
St.DeepSnakeKeys<{
helloWorld: { fooBar: 'baz' }
}> // { 'hello_world': { 'foo_bar': 'baz' } }
``` Other exported type utilities ts
St.IsDigit<'a'> // false
St.IsDigit<'1'> // true
St.IsLetter<'a'> // true
St.IsLetter<'1'> // false
St.IsLower<'a'> // true
St.IsLower<'A'> // false
St.IsUpper<'a'> // false
St.IsUpper<'A'> // true
St.IsSeparator<' '> // true
St.IsSeparator<'-'> // true
St.IsSeparator<'a'> // false
St.IsSpecial<'a'> // false
St.IsSpecial<'!'> // true
St.IsSpecial<' '> // false Runtime-only utilities deepTransformKeys This function recursively converts the keys of an object to a custom format, but only at runtime level. ```ts
import { deepTransformKeys, toUpperCase } from 'string-ts' const data = { helloWorld: 'baz' } as const type MyType<T> = { [K in keyof T as Uppercase<K & string>]: T[K] }
const result = deepTransformKeys(data, toUpperCase) as MyType<typeof data> // ^ { 'HELLOWORLD': 'baz' }
``` 🎙️ Interview For a deeper dive into the code, reasoning, and how this library came to be, check out this interview with the author of StringTs on the Michigan TypeScript Meetup. 🐝 Contributors Thanks goes to these wonderful people ( emoji key ): Guga Guichard 💻 📆 📣 🚧 📖 🐛 🚇 💬 🔬 👀 🤔 💡 Landon Yarrington 💻 🚧 📖 👀 🤔 💡 💬 🐛 Guillaume 💻 🚧 📖 🐛 🚇 💬 🤔 Matt Pocock 📖 💻 📣 Andrew Luca 📖 📣 Mjuksel 💻 🤔 hverlin 💻 This project follows the all-contributors specification. Contributions of any kind welcome! StringTs logo by NUMI : 🫶 Acknowledgements This library got a lot of inspiration from libraries such as lodash , ts-reset , type-fest , HOTScript , and many others.;Strongly typed string functions;[] | gustavoguichard/string-ts |
RubyMetric/chsrc;全平台命令行换源工具, 目标支持 Linux (包括麒麟、openEuler、deepin 等), Windows, macOS, BSD 等尽可能多的操作系统,龙芯、飞腾、RISC-V 等尽可能多的 CPU 。 我们使用 C99 来完成上述目标。我们并不使用 Python 或 JS 等解释语言,因为一个简单的换源工具,不应该强行塞给用户一个庞大的解释器和数十、数百 MB 其他文件。 本软件为 自由软件 ,SDPX 软件许可证为 GPL-3.0-or-later and MIT chsrc 的设计理念 No UFO 我已经受够了各种软件在我的C盘或 $HOME 里给我塞一堆 零散 的不知名文件,它往往 连后缀都没有 ,它的文件名足够隐晦以致于 你无论如何都猜不到是哪个软件在用它 。等你抱着好奇心打开一看,这竟然还是一种 自定义格式 。 好吧,对此我要创造一个新词: UFO: Unidentified File Objects chsrc 除了一个二进制文件外,别无他物。不会在你计算机的某个犄角旮旯里放一些莫名其妙的文件 Convention over Configuration 来自Ruby社区的优良传统。想想看: /etc 里每个文件都有一套自己的配置格式 我不想要有任何类似 CHSRC_CONF 的环境变量,也不想有任何类似 .chsrc 的配置文件 ( 如果你是BSD用户,你会愤怒,因为你还存在一个叫作 .cshrc 的文件 ) 示例 需要你的帮助 如果你想要通过 scoop , brew , yay 等系统包管理工具来安装和更新 chsrc ,请帮助我们达到下面的要求。 [ ] 缺乏 AUR 维护者 [x] homebrew 维护者 [ ] 缺乏 scoop 维护者 [ ] scoop 要求英文输出 chsrc 本意进行中文输出,但是我们将尽可能提供选项来进行英文输出。该选项同时有利于 BSD 用户 请访问 chsrc on GitHub 若您可提供维护,请访问 issue#16 on GitHub 安装 以下方式均下载到当前目录,可直接通过 ./chsrc 运行。 Windows ```bash x64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-x64-windows.exe -o chsrc.exe x86 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-x86-windows.exe -o chsrc.exe
``` Linux ```bash x64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-x64-linux -o chsrc; chmod +x ./chsrc aarch64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-aarch64-linux -o chsrc; chmod +x ./chsrc riscv64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-riscv64-linux -o chsrc; chmod +x ./chsrc armv7 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-armv7-linux -o chsrc; chmod +x ./chsrc
``` macOS Can be installed via homebrew; thanks to @Aaron-212 and @chenrui333 bash
brew install chsrc Or download the binary manually (the latest build, sometimes newer than the one homebrew provides) ```bash M1/aarch64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-aarch64-macos -o chsrc; chmod +x ./chsrc x64 curl -L https://gitee.com/RubyMetric/chsrc/releases/download/pre/chsrc-x64-macos -o chsrc; chmod +x ./chsrc
``` BSD bash
git clone https://gitee.com/RubyMetric/chsrc.git; cd chsrc
clang -Iinclude src/chsrc.c -o chsrc Other platforms bash
git clone https://gitee.com/RubyMetric/chsrc.git; cd chsrc; make Usage ```bash
Usage: chsrc <command> [options] [target] [mirror] help # print this help; also h, -h, --help
issue # view related issues
list (or ls, or l) # list available mirror sites and supported targets
list mirror/target # list available mirror sites, or supported targets
list os/lang/ware # list the operating systems / programming languages / software whose sources can be changed
list <target> # see which sources are available for this target cesu <target> # speed-test all sources for this target
get <target> # check which source this target currently uses set <target> # change the source, picking the fastest one after an automatic speed test
set <target> first # change the source, using the one ranked fastest by the maintainers' speed tests
set <target> <mirror> # change the source, specifying a mirror site (see the list command)
set <target> https://abc # change the source, using a user-defined source URL
reset <target> # reset to the source used upstream by default Options:
-ipv6 # run the speed test over IPv6
-local # change the source only for the current project rather than globally (only some software, such as bundler and pdm, supports this)
``` When you don't want the automatic speed test , you can directly specify a mirror site, a source URL, or the mirror that the maintainers have already measured to be the fastest. ```bash
chsrc set ruby # speed-test, pick the fastest source, then switch to it Or chsrc ls ruby # list the available mirror sites
chsrc set ruby rubychina # use RubyChina as the mirror site Or, if you have your own mirror address chsrc set ruby https://gems.ruby-china.com/ # use a custom URL Or chsrc set ruby first # use the mirror the maintainers measured to be the fastest
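# Illustrative extra example (not in the original README): revert to the upstream
# default source with the reset command documented in the help above
chsrc reset ruby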
``` For targets that support project-local source switching , you can avoid changing the source globally. bash
chsrc set -local bundler
chsrc set -local pdm Programming language development ```bash
chsrc set ruby or set gem
chsrc set python or set pip / pdm # changes both pip and pdm
chsrc set node or set npm / nodejs / yarn / pnpm # changes all 3 at once
chsrc set perl or set cpan
chsrc set php or set composer
chsrc set lua or set luarocks chsrc set go
chsrc set rust or set cargo / crate
chsrc set java or set maven / mvn / gradle
chsrc set clojure or set clojars
chsrc set dart or set pub / flutter # will also change the source for flutter
chsrc set haskell or set hackage/cabal/stack
chsrc set ocaml or set opam Also changes the source for bioconductor chsrc set r or set cran
chsrc set julia
``` Operating systems ```bash
sudo chsrc set ubuntu
sudo chsrc set mint or linuxmint
sudo chsrc set debian
sudo chsrc set fedora
sudo chsrc set suse or set opensuse
sudo chsrc set kali
sudo chsrc set arch # also uses archlinuxcn
sudo chsrc set manjaro
sudo chsrc set gentoo
sudo chsrc set rocky or set rockylinux
sudo chsrc set alma or set almalinux
sudo chsrc set alpine
sudo chsrc set void or set voidlinux
sudo chsrc set solus
sudo chsrc set ros or set ros2
sudo chsrc set trisquel
sudo chsrc set lite or set linuxlite
sudo chsrc set raspi or set raspberrypi
sudo chsrc set armbian sudo chsrc set euler or set openeuler
sudo chsrc set anolis or set openanolis
sudo chsrc set kylin or set openkylin
sudo chsrc set deepin chsrc set msys2 or set msys BSD sudo chsrc set freebsd
sudo chsrc set openbsd
sudo chsrc set netbsd
``` Software bash
chsrc set winget
chsrc set brew or set homebrew
chsrc set cocoapods or set cocoa / pod
chsrc set dockerhub or set docker
chsrc set flathub
chsrc set nix
chsrc set guix
chsrc set emacs or set elpa
chsrc set tex or set ctan / latex / texlive / miktex
chsrc set conda or set anaconda Development Please install gcc or clang, plus make and curl ```bash Develop on the dev branch git clone https://gitee.com/RubyMetric/chsrc.git -b dev make # compiles with cc by default
make CC=clang # compile with clang
make CC=gcc # compile with gcc make test # run the tests
make test-xy # test xy.h
make clean
``` License The chsrc main program is licensed under GPL-3.0-or-later, which guarantees that the software remains free forever xy.h is licensed under MIT, so that the library can be reused in as many situations as possible Acknowledgements Thanks to all the mirror sites for the high-quality, free mirror services they provide; the mirror sites used are listed in source.h . Additional thanks to the following projects: MirrorZ CERNET mirror site Tsinghua University Tuna Thanks Mirror project by @eryajf;chsrc, a universal source-switching tool for every platform. Change Source for every software on every platform from the command line.;c99,gem,linux,pip,ubuntu,windows,macos,debian,archlinux,fedora | RubyMetric/chsrc
hyp1231/awesome-llm-powered-agent;Awesome LLM-Powered Agent Thanks to the impressive planning, reasoning, and tool-calling capabilities of Large Language Models (LLMs), people are actively studying and developing LLM-powered agents. These agents are possible to autonomously (and collaboratively) solve complex tasks, or simulate human interactions. Our goal with this project is to build an exhaustive collection of awesome resources relevant to LLM-powered agents encompassing papers, repositories, and more. We strive to keep these updated regularly and continuously. We greatly appreciate any contributions via PRs, issues, emails, or other methods. Note that this repository is not under active maintenance. It mainly contains papers that appear before Oct. 2023, with several further papers. If you would like to have your paper included, please feel free to initiate a pull request . Papers Autonomous Task Solver General Reasoning & Planning & Tool Using Multi-Agent Cooperation Framework & Open-Source Application Web Agents RL Agents Robotics & Embodied AI Gaming & Role-Playing Other Applications Trustworthy Human Interaction Simulation Human-Agent Interaction Agents-Powered LLMs Benchmark Survey & Tutorial Open-Source Projects Autonomous Task Solver Projects Multi-Agent Simulation Projects Perspectives Other Related Sources Acknowledgement Papers 🔥 for papers with >50 citations or repositories with >200 stars.\
📖 for papers accepted by reputed conferences/journals. Autonomous Task Solver General Reasoning & Planning & Tool Using 🔥 [Mar 2024] "Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models
." Zehui Chen (USTC) et al. arXiv. [ paper ] [ code ] [ project page ] [Dec 2023] "CLOVA: A Closed-LOop Visual Assistant with Tool Usage and Update." Zhi Gao (BIGAI) et al. arXiv. [ paper ] [ code ] [ project page ] 📖 [Dec 2023] "SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge." Rishi Hazra et al. AAAI 2024 [ paper ] [ code ] [ project page ] [Oct 2023] "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models." Andy Zhou (UIUC) et al. arXiv. [ paper ] [ code ] [ project page ] [Oct 2023] "Adapting LLM Agents Through Communication." Kuan Wang (GaTech & Microsoft) et al. arXiv. [ paper ] 📖 [Oct 2023] "ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search" Yuchen Zhuang (GaTech & Adobe) et al. ICLR 2024. [ paper ] 📖 [Sep 2023] "AVIS: Autonomous Visual Information Seeking with Large Language Models." Ziniu Hu (Google) et al. NeurIPS 2023. [ paper ] [Sep 2023] "Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency." Zhihan Liu (Northwestern) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] "Self-driven Grounding: Large Language Model Agents with Automatical Language-aligned Skill Learning." Shaohui Peng (CAS) et al. arXiv. [ paper ] [Aug 2023] "ExpeL: LLM Agents Are Experiential Learners." Andrew Zhao (THU) et al. arXiv. [ paper ] [Aug 2023] "Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis." Oscar J. Romero (CMU) et al. arXiv. [ paper ] [Aug 2023] "Dynamic Planning with a LLM." Gautier Dagan (U of Edinburgh) et al. arXiv. [ paper ] [ code ] [Aug 2023] "Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization." Weiran Yao (Salesforce) et al. arXiv. [ paper ] 🔥 [May 2023] "ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models." Binfeng Xu et al. arXiv. [ paper ] [ code ] 📖 [May 2023] "SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks." Bill Yuchen Lin (AI2) et al. NeurIPS 2023. [ paper ] [ code ] [ project page ] 📖 [May 2023] "AdaPlanner: Adaptive Planning from Feedback with Language Models." Haotian Sun (GaTech) et al. NeurIPS 2023. [ paper ] [ code ] 🔥📖 [May 2022] "Reasoning with Language Model is Planning with World Model." Shibo Hao (UCSD) et al. EMNLP 2023. [ paper ] [ code ] [ project page ] 📖 [May 2023] "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning." Lin Guan (ASU) et al. NeurIPS 2023. [ paper ] [ code ] [ project page ] 📖 [May 2023] "ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models." Zhipeng Chen (RUC) et al. EMNLP 2023 Findings. [ paper ] [ code ] [May 2023] "CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing." Zhibin Gou (THU & Microsoft) et al. arXiv. [ paper ] [ code ] 🔥 [Apr 2023] "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency." Bo Liu (UT Austin) et al. arXiv. [ paper ] [ code ] 🔥📖 [Mar 2023] "Reflexion: Language Agents with Verbal Reinforcement Learning." Noah Shinn (Northeastern) et al. NeurIPS 2023. [ paper ] [ code ] 📖 [Dec 2022] "Don’t Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments" Yu Gu (OSU) et al. ACL 2023. [ paper ] [ code ] 🔥📖 [Oct 2022] "ReAct: Synergizing Reasoning and Acting in Language Models." 
Shunyu Yao (Princeton & Google Brain) et al. ICLR 2023. [ paper ] [ code ] [ project page ] Multi-Agent Cooperation [May 2024] "Conformity, Confabulation, and Impersonation: Persona Inconstancy in Multi-Agent LLM Collaboration." Razan Baltaji (UIUC) et al.* arXiv. [[paper]( https://arxiv.org/abs/2310.05036 ] 📖 [Jan 2024] "L2MAC: Large Language Model Automatic Computer for Extensive Code Generation." Samuel Holt (Cambridge) et al. ICLR 2024. [ paper ] [ code ] [ project page ] [Oct 2023] "Evaluating Multi-Agent Coordination Abilities in Large Language Models." Saaket Agashe (UCSC) et al. arXiv. [ paper ] [Oct 2023] "Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization." Zijun Liu (THU & Stanford) et al. arXiv. [ paper ] [ code ] [Oct 2023] "Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View." Jintian Zhang (ZJU) et al. arXiv. [ paper ] [ code ] [Oct 2023] "Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration." Qiushi Sun (Shanghai AI Lab & NUS) et al. arXiv. [ paper ] [ code ] [Sep 2023] "LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games." Sahar Abdelnabi (CISPA) et al. arXiv. [ paper ] [ code ] [Sep 2023] "Scalable Multi-Robot Collaboration with Large Language Models: Centralized or Decentralized Systems?" Yongchao Chen (MIT & Harvard) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] "ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs." Justin Chih-Yao Chen (UNC Chapel Hill) et al. arXiv. [ paper ] [ code ] [Sep 2023] "MindAgent: Emergent Gaming Interaction." Xiaojian Ma (BIGAI) et al. arXiv. [ paper ] [ code ] [ project page ] [Aug 2023] "ProAgent: Building Proactive Cooperative AI with Large Language Models." Ceyao Zhang (CUHK & PKU) et al. arXiv. [ paper ] [ project page ] 🔥 [Aug 2023] "AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents." Weize Chen (THU) et al. arXiv. [ paper ] [ code ] [Aug 2023] "GPT-in-the-Loop: Adaptive Decision-Making for Multiagent Systems." Nathalia Nascimento (U of Waterloo) et al. arXiv. [ paper ] [Aug 2023] "How susceptible are LLMs to Logical Fallacies?" Amirreza Payandeh (GMU & Vail Systems) et al. arXiv. [ paper ] [ code ] [Aug 2023] "ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate." Chi-Min Chan (THU) et al. arXiv. [ paper ] [ code ] [Aug 2023] "LLM As DBA." Xuanhe Zhou (THU) et al. arXiv. [ paper ] [ code ] [Aug 2023] "Gentopia: A Collaborative Platform for Tool-Augmented LLMs." Binfeng Xu et al. arXiv. [ paper ] [ code ] [ project page ] 🔥 [Aug 2023] "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework." Sirui Hong (DeepWisdom) et al. arXiv. [ paper ] [ code ] [Jul 2023] "PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations." Ruosen Li (UT Dallas) et al. arXiv. [ paper ] [ project page ][ code ] [Jul 2023] "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration." Zhenhailong Wang (UIUC & MSRA) et al. arXiv. [ paper ] [ code ] [Jul 2023] "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models." Mandi Zhao (Columbia) et al. arXiv. [ paper ] [ code ] [ project page ] [Jul 2023] "Wireless Multi-Agent Generative AI: From Connected Intelligence to Collective Intelligence." Hang Zou (Technology Innovation Institute, UAE) et al. arXiv. 
[ paper ] [Jul 2023] "Building Cooperative Embodied Agents Modularly with Large Language Models." Hongxin Zhang (UMass) et al. arXiv. [ paper ] [ code ] [ project page ] [Jun 2023] "RestGPT: Connecting Large Language Models with Real-World Applications via RESTful APIs." Yifan Song (PKU) et al. arXiv. [ paper ] [ project page ] [Jun 2023] "Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents." Yashar Talebirad (UAlberta) et al. arXiv. [ paper ] [May 2023] "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate." Tian Liang (THU & Tencent) et al. arXiv. [ paper ] [ code ] 🔥 [May 2023] "Large Language Models as Tool Makers." Tianle Cai (Deepmind & Princeton) et al. arXiv. [ paper ] [ code ] [May 2023] "Improving Factuality and Reasoning in Language Models through Multiagent Debate." Yilun Du (MIT) et al. arXiv. [ paper ] [ code ] [ project page ] [May 2023] "Agreement and Statistical Efficiency in Bayesian Perception Models." Yash Deshpande (MIT) et al. arXiv. [ paper ] [May 2023] "Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback." Yao Fu (U of Edinburgh) et al. arXiv. [ paper ] [ code ] Framework & Open-Source 🔥 [Oct 2023] "OpenAgents: An Open Platform for Language Agents in the Wild." Tianbao Xie (HKU & XLang Lab) et al. arxiv. [ paper ] [ code ] 🔥 [Sep 2023] "AutoAgents: A Framework for Automatic Agent Generation." Guangyao Chen (PKU) et al. arXiv. [ paper ] [ code ] 🔥 [Sep 2023] "Agents: An Open-source Framework for Autonomous Language Agents." Wangchunshu Zhou (AI Waves) et al. arXiv. [ paper ] [ code ] [ project page ] 🔥 [Sep 2023] "Cognitive Architectures for Language Agents." Theodore Sumers (Princeton) et al. arXiv. [ paper ] [ repo ] 🔥 [Aug 2023] "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework." Qingyun Wu et al. arXiv. [ paper ] [ code ] [ project page ] Application Web Agents [Sep 2023] "You Only Look at Screens: Multimodal Chain-of-Action Agents." Zhuosheng Zhang (SJTU) et al. arXiv. [ paper ] [ code ] [Sep 2023] "LASER: LLM Agent with State-Space Exploration for Web Navigation." Kaixin Ma (Tencent) et al. arXiv. [ paper ] [ code ] 🔥 [Jul 2023] "WebArena: A Realistic Web Environment for Building Autonomous Agents." Shuyan Zhou (CMU) et al. arXiv. [ paper ] [ code ] [ project page ] [Jul 2023] "A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis." Izzeddin Gur (DeepMind) et al. arXiv. [ paper ] 🔥📖 [Jun 2023] "Mind2Web: Towards a Generalist Agent for the Web." Xiang Deng (OSU) et al. NeurIPS 2023. [ paper ] [ code ] [ project page ] [May 2023] "Augmenting Autotelic Agents with Large Language Models." Cédric Colas (MIT & Inria) et al. arXiv. [ paper ] [May 2023] "Mobile-Env: An Evaluation Platform and Benchmark for Interactive Agents in LLM Era." Danyang Zhang (SJTU) et al. arXiv. [ paper ] [ code ] 📖 [Apr 2023] "Emergent autonomous scientific research capabilities of large language models." Daniil A. Boiko (CMU) et al. arXiv. [ paper ] [Mar 2023] "Language Models can Solve Computer Tasks." Geunwoo Kim (UCI) et al. arXiv. [ paper ] [ code ] [ project page ] 📖 [Jul 2022] "WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents." Shunyu Yao (Princeton) et al. NeurIPS 2022. [ paper ] [ code ] [ project page ] RL Agents [Oct 2023] "Motif: Intrinsic Motivation from Artificial Intelligence Feedback." Martin Klissarov (Mila & Meta & McGill) et al. arXiv. 
[ paper ] [Sep 2023] "RLAdapter: Bridging Large Language Models to Reinforcement Learning in Open Worlds." Wanpeng Zhang (PKU) et al. arXiv. [ paper ] [Aug 2023] "LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient Querying." Thommen George Karimpanal (Deakin University) et al. arXiv. [ paper ] [ code ] [Jul 2023] "Dialogue Shaping: Empowering Agents through NPC Interaction." Wei Zhou (GaTech) et al. arXiv. [ paper ] [Jul 2023] "Towards A Unified Agent with Foundation Models." Norman Di Palo (ICL & DeepMind) et al. Reincarnating RL @ ICLR 2023. [ paper ] 📖 [Jun 2023] "Large Language Model Is Semi-Parametric Reinforcement Learning Agent." Danyang Zhang (SJTU) et al. NeurIPS 2023. [ paper ] [May 2023] "Semantically Aligned Task Decomposition in Multi-Agent Reinforcement Learning." Wenhao Li (CUHK) et al. arXiv. [ paper ] Robotics & Embodied AI [Nov 2023] "LEO: An Embodied Generalist Agent in 3D World." Xiaojian Ma (BIGAI) et al. arXiv. [ paper ] [ code ] [ project page ] [Nov 2023] "JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models." Zihao Wang (PKU) et al. arXiv. [ paper ] [ code ] [ project page ] [Oct 2023] "Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond." Liang Chen (PKU) et al. arXiv. [ paper ] [ code ] [ project page ] [Oct 2023] "LANCAR: Leveraging Language for Context-Aware Robot Locomotion in Unstructured Environments." Chak Lam Shek (UMD) et al. arXiv. [ paper ] [ project page ] [Sep 2023] "LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent." Jianing Yang (UMich) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] "SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models." Shyam Sundar Kannan (Purdue) et al. arXiv. [ paper ] [ project page ] [Sep 2023] "Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents." Ziyi Yang et al. arXiv. [ paper ] [ code & video ] [Sep 2023] "SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments." Abhinav Rajvanshi (SRI International) et al. arXiv. [ paper ] [Sep 2023] "Developmental Scaffolding with Large Language Models." M. Batuhan Celik (Bogazici University) et al. arXiv. [ paper ] [Jul 2023] "March in Chat: Interactive Prompting for Remote Embodied Referring Expression." Yanyuan Qiao (Adelaide University) et al. arXiv. [ paper ] [ code ] [Aug 2023] "A^2Nav: Action-Aware Zero-Shot Robot Navigation by Exploiting Vision-and-Language Ability of Foundation Models." Peihao Chen (SCUT) et al. arXiv. [ paper ] [Jul 2023] "Embodied Task Planning with Large Language Models." Zhenyu Wu (BUPT) et al. arXiv. [ paper ] [ code ] [ project page ] [Jun 2023] "Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach." Bin Hu (Zhejiang Lab) et al. arXiv. [ paper ] [ code ] 🔥 [May 2023] "Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory." Xizhou Zhu (THU & SenseTim) et al. arXiv. [ paper ] [ code ] 🔥 [May 2023] "Voyager: An Open-Ended Embodied Agent with Large Language Models." Guanzhi Wang (NVIDIA & Caltech) et al. arXiv. [ paper ] [ code ] [ project page ] [May 2023] "Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents." Yue Wu (CMU) et al. arXiv. 
[ paper ] 📖 [Feb 2023] "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents." Zihao Wang (PKU) et al. NeurIPS 2023. [ paper ] [ code ] [Feb 2023] "Collaborating with language models for embodied reasoning." Ishita Dasgupta (DeepMind) et al. LaReL @ NeurIPS 2022. [ paper ] [Jan 2023] "Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling." Kolby Nottingham (UCI) et al. ICML 2023. [ paper ] [ code ] [ project page ] 📖 [Dec 2022] "LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models." Chan Hee Song (OSU) et al. ICCV 2023. [ paper ] [ project page ] Gaming & Role-Playing 📖 [May 2024] "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models." Jaewoo Ahn (SNU) et al. Findings of ACL 2024. [ paper ] [ code ] [ project page ] [Oct 2023] "From Text to Tactic: Evaluating LLMs Playing the Game of Avalon." Jonathan Light (RPI) et al. arXiv. [ paper ] [ code ] [Oct 2023] "Ruffle&Riley: Towards the Automated Induction of Conversational Tutoring Systems." Robin Schmucker (CMU) et al. arXiv. [ paper ] [Oct 2023] "Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation." Shenzhi Wang (THU) et al. arXiv. [ paper ] [Sep 2023] "MindAgent: Emergent Gaming Interaction." Xiaojian Ma (BIGAI) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] "Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4." Jiaxian Guo (U of Tokyo) et al. arXiv. [ paper ] [ code ] [Aug 2023] "Ambient Adventures: Teaching ChatGPT on Developing Complex Stories." Zexin Chen (GaTech) et al. arXiv. [ paper ] [Jul 2023] "Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models." Yuanzhi Liang (UTS) et al. arXiv. [ paper ] [May 2023] "Role-Play with Large Language Models." Murray Shanahan (DeepMind & ICL) et al. arXiv. [ paper ] [May 2023] "clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents." Kranti Chalamalasetti (University of Potsdam) et al. arXiv. [ paper ] [ code ] [Apr 2023] "Towards autonomous system: flexible modular production system enhanced with large language model agents." Yuchen Xia (University of Stuttgart) et al. arXiv. [ paper ] [ code ] 🔥📖 [Mar 2023] "CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society." Guohao Li (KAUST) et al. NeurIPS 2023. [ paper ] [ code ] [ project page ] Other Applications [May 2024] "AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments" Samuel Schmidgall (JHU & Stanford) et al. arXiv. [ paper ] [ code ] [ project page ] [Jan 2024] "EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records." Wenqi Shi (GaTech) et al. arXiv. [ paper ] [ code ] [ project page ] [Oct 2023] "OptiMUS: Optimization Modeling Using mip Solvers and large language models." Ali AhmadiTeshnizi (Stanford) et al. arXiv. [ paper ] [ code ] [Oct 2023] "An evolutionary model of personality traits related to cooperative behavior using a large language model." Reiji Suzuki (Nagoya University) et al. arXiv. [ paper ] [Oct 2023] "Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge." John Chong Min Tan (NUS) et al. arXiv. 
[ paper ] [Oct 2023] "A Language-Agent Approach to Formal Theorem-Proving." Amitayush Thakur (UT Austin) et al. arXiv. [ paper ] [Oct 2023] "Conversational Health Agents: A Personalized LLM-Powered Agent Framework." Mahyar Abbasian (UCI) et al. arXiv. [ paper ] [Oct 2023] "OceanGPT: A Large Language Model for Ocean Science Tasks." Zhen Bi (ZJU & Donghai Lab) et al. arXiv. [ paper ] [ project page ] [Oct 2023] "Voice2Action: Language Models as Agent for Efficient Real-Time Interaction in Virtual Reality." Yang Su (Cornell Tech). arXiv. [ paper ] 🔥 [Sep 2023] "ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving." Zhibin Gou (THU & Microsoft) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] ""Teach AI How to Code": Using Large Language Models as Teachable Agents for Programming Education." Hyoungwook Jin (KAIST) et al. arXiv. [ paper ] [Sep 2023] "SurrealDriver: Designing Generative Driver Agent Simulation Framework in Urban Contexts based on Large Language Model." Ye Jin (THU) et al. arXiv. [ paper ] [Sep 2023] "Large Language Models as Agents in the Clinic." Nikita Mehandru (UC Berkeley) et al. arXiv. [ paper ] [Sep 2023] "An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language Model Game Agents." Maximilian Croissant (UOY) et al. arXiv. [ paper ] [Sep 2023] "Unleashing the Power of Graph Learning through LLM-based Autonomous Agents." Lanning Wei (CAS & 4Paradigm) et al. arXiv. [ paper ] [Sep 2023] "TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance." Yang Li (SIT) et al. arXiv. [ paper ] 🔥 [Sep 2023] "ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models." Chenliang Li (Alibaba) et al. arXiv. [ paper ] [ code ] [ demo ] [Aug 2023] "Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations." Xu Huang (USTC) et al. arXiv. [ paper ] [ code ] [Aug 2023] "RecMind: Large Language Model Powered Agent For Recommendation." Yancheng Wang (ASU) et al. arXiv. [ paper ] [Aug 2023] "LLM Powered Sim-to-real Transfer for Traffic Signal Control." Longchao Da (ASU) et al. arXiv. [ paper ] [Aug 2023] "Out of the Cage: How Stochastic Parrots Win in Cyber Security Environments." Maria Rigaki (ČVUT) et al. arXiv. [ paper ] [ code ] [Aug 2023] "Is There Any Social Principle for LLM-Based Agents?" Jitao Bai (TJU) et al. arXiv. [ paper ] [Aug 2023] "ChatEDA: A Large Language Model Powered
Autonomous Agent for EDA." Zhuolun He (CUHK & Shanghai AI Lab) et al. arXiv. [ paper ] [Aug 2023] "The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models." Haonan Li (UCR) et al. arXiv. [ paper ] [Jun 2023] "Towards Autonomous Testing Agents via Conversational Large Language Models." Robert Feldt (Chalmers University of Technology) et al. arXiv. [ paper ] [Apr 2023] "GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information." Qiao Jin, Yifan Yang, Qingyu Chen, Zhiyong Lu arXiv. [ paper ] [ code ] 🔥 [Mar 2023] "HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face." Yongliang Shen (ZJU & MSRA) et al. arXiv. [ paper ] [ code ] Trustworthy [Feb 2024] "Can Large Language Model Agents Simulate Human Trust Behaviors?" Chengxing Xie (KAUST) et al. arXiv. [ paper ] [ code ] [ project page ] [Sep 2023] "Identifying the Risks of LM Agents with an LM-Emulated Sandbox" Yangjun Ruan (University of Toronto & Vector Institute) et al. arXiv. [ paper ] [ code ] [ demo ] [ project page ] [Aug 2023] "Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges." Sivan Schwartz (IBM Research) et al. AutoMate @ IJCAI 2023. [ paper ] Human Interaction Simulation [Mar 2024] "Emergence of Social Norms in Large Language Model-based Agent Societies." Siyue Ren (NWPU) et al. arXiv. [ paper ] [ code ] [Jan 2024] "Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models." Lucio La Cava (University of Calabria) et al. arXiv. [ paper ] 🔥 [Oct 2023] "SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents" Xuhui Zhou (CMU) et al. ICLR [ paper ] [Oct 2023] "CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents." Qinlin Zhao (USTC) et al. arXiv. [ paper ] [Oct 2023] "Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms." Petter Törnberg (U of Amsterdam) et al. arXiv. [ paper ] [Oct 2023] "Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena." Jiangjie Chen (FDU & AI2) et al. arXiv. [ paper ] [ code ] [ project page ] [Oct 2023] "Lyfe Agents: Generative agents for low-cost real-time social interactions." Zhao Kaiya (MIT) et al. arXiv. [ paper ] [Sep 2023] "Identifying the Risks of LM Agents with an LM-Emulated Sandbox" Yangjun Ruan (University of Toronto & Vector Institute) et al. arXiv. [ paper ] [ code ] [ demo ] [ project page ] [Sep 2023] "Generative Agent-Based Modeling: Unveiling Social System Dynamics through Coupling Mechanistic Models with Generative Artificial Intelligence." Navid Ghaffarzadegan (Virginia Tech) et al. arXiv. [ paper ] [Aug 2023] "CGMI: Configurable General Multi-Agent Interaction Framework." Jinxin Shi (ECNU) et al. arXiv. [ paper ] [Aug 2023] "Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering." Edward Junprung (UC Berkeley) et al. arXiv. [ paper ] [ code ] 🔥 [Aug 2023] "AgentSims: An Open-Source Sandbox for Large Language Model Evaluation." Jiaju Lin (PTA Studio & PSU) et al. arXiv. [ paper ] [ code ] [ project page ] [Jul 2023] "S^3: Social-network Simulation System with Large Language Model-Empowered Agents." Chen Gao (THU) et al. arXiv. [ paper ] [Jul 2023] "Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks." Siyu Li (SCU) et al. 
arXiv. [ paper ] [ dataset ] [Jul 2023] "Communicative Agents for Software Development." Chen Qian (THU) et al. arXiv. [ paper ] [Jul 2023] "Epidemic Modeling with Generative Agents." Ross Williams (Virginia Tech) et al. arXiv. [ paper ] [ code ] [Jul 2023] "To Infinity and Beyond: SHOW-1 and Showrunner Agents in Multi-Agent Simulations." Philipp Maas (Fable Studio) et al. preprint. [ paper ] [ project page ] [Jun 2023] "RecAgent: A Novel Simulation Paradigm for Recommender Systems." Lei Wang (RUC) et al. arXiv. [ paper ] [ code ] [May 2023] "Playing repeated games with Large Language Models." Elif Akata (U of Tübingen) et al. arXiv. [ paper ] [May 2023] "The Role of Summarization in Generative Agents: A Preliminary Perspective." Xiachong Feng (HIT) et al. arXiv. [ paper ] [Apr 2023] "Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models." Jimmy Wei (Cornell & Meta) et al. arXiv. [ paper ] [ dataset ] [ code ] 🔥 [Apr 2023] "Generative Agents: Interactive Simulacra of Human Behavior." Joon Sung Park (Stanford) et al. arXiv. [ paper ] [ code ] Human-Agent Interaction [Oct 2023] "How AI Processing Delays Foster Creativity: Exploring Research Question Co-Creation with an LLM-based Agent." Yiren Liu (UIUC) et al. arXiv. [ paper ] [Aug 2023] "Quantifying the Impact of Large Language Models on Collective Opinion Dynamics." Chao Li (ZJU) et al. arXiv. [ paper ] [Aug 2023] "SAPIEN: Affective Virtual Agents Powered by Large Language Models." Masum Hasan (U of Rochester) et al. arXiv. [ paper ] [Jul 2023] "Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support." Zilin Ma (Harvard) et al. arXiv. [ paper ] Agents-Powered LLMs [Oct 2023] "Agent Instructs Large Language Models to be General Zero-Shot Reasoners." Nicholas Crispino (WashU) et al. arXiv. [ paper ] [ code ] [Oct 2023] "ß-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis." Zishun Yu (UIC & ByteDance) et al. arXiv. [ paper ] 🔥 [May 2023] "Training Socially Aligned Language Models in Simulated Human Society." Ruibo Liu (Dartmouth) et al. arXiv. [ paper ] [ code ] 📖 [May 2023] "Language Models Meet World Models: Embodied Experiences Enhance Language Models." Jiannan Xiang (UCSD) et al. NeurIPS 2023. [ paper ] [ code ] Benchmark [Dec 2023] "T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step." Zehui Chen (USTC, Shanghai AI Lab) et al. arXiv. [ paper ] [ code ] [ project page ] [Nov 2023] "MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration." *Lin Xu et al.(NUS, ByteDance, Stanford & UC Berkeley) * arXiv. [ paper ] [ Project Page ] [Oct 2023] "Balancing Autonomy and Alignment: A Multi-Dimensional Taxonomy for Autonomous LLM-powered Multi-Agent Architectures." Thorsten Händler (FERNFH) et al. arXiv. [ paper ] [Oct 2023] "Benchmarking Large Language Models As AI Research Agents." Qian Huang (Stanford) et al. arXiv. [ paper ] [ code ] [Oct 2023] "MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use." Yue Huang (Lehigh University) et al. arXiv. [ paper ] [ dataset ] [Oct 2023] "SmartPlay : A Benchmark for LLMs as Intelligent Agents." Yue Wu (CMU & Microsoft) et al. arXiv. [ paper ] [ code ] [Sep 2023] "Identifying the Risks of LM Agents with an LM-Emulated Sandbox" Yangjun Ruan (University of Toronto & Vector Institute) et al. arXiv. 
[ paper ] [ code ] [ demo ] [ project page ] [Aug 2023] "BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents." Zhiwei Liu (Salesforce) et al. arXiv. [ paper ] [ code ] 🔥 [Aug 2023] "AgentBench: Evaluating LLMs as Agents." Xiao Liu (THU) et al. arXiv. [ paper ] [ code ] [ project page ] [Aug 2023] "TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents." Jingqing Ruan (SenseTime) et al. arXiv. [ paper ] 📖 [June 2023] "ToolQA: A Dataset for LLM Question Answering with External Tools." Yuchen Zhuang (GaTech) et al. NeurIPS 2023. [ paper ] [ code ] Survey & Tutorial [Sep 2023] "Natural Language based Context Modeling and Reasoning with LLMs: A Tutorial." Haoyi Xiong (Baidu) et al. arXiv. [ paper ] [Sep 2023] "An In-depth Survey of Large Language Model-based Artificial Intelligence Agents." Pengyu Zhao (BJTU) et al. arXiv. [ paper ] 🔥 [Sep 2023] "The Rise and Potential of Large Language Model Based Agents: A Survey." Zhiheng Xi (FDU) et al. arXiv. [ paper ] [ GitHub ] 🔥 [Aug 2023] "A Survey on Large Language Model based Autonomous Agents." Lei Wang (RUC) et al. arXiv. [ paper ] [ GitHub ] 🔥 [Mar 2023] "A Survey of Large Language Models (Sec. 6.3 - Planning for Complex Task Solving)." Wayne Xin Zhao (RUC) et al. arXiv. [ paper ] [ GitHub ] Open-Source Projects Autonomous Task Solver Projects Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. 🦜️🔗 LangChain - Building applications with LLMs through composability. GPT Engineer - Specify what you want it to build, the AI asks for clarification, and then builds it. MetaGPT - 🌟 The Multi-Agent Framework: Given one line Requirement, return PRD, Design, Tasks, Repo. BabyAGI - An AI-powered task management system. L2MAC - 🚀 The LLM Automatic Computer Framework: L2MAC Multi-Agent Simulation Projects AI Town 🏠💻💌 - A deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. GPTeam - An open-source multi-agent simulation. 🏟 ChatArena - Multi-agent language game environments for LLMs. 🤖 AgentVerse 🪐 - A flexible framework that simplifies the process of building custom multi-agent environments for large language models (LLMs). Perspectives Language agents: a critical evolutionary step of artificial intelligence - Yu Su (OSU), Sep 5, 2023. Introducing XLang: An Open-Source Framework for Building Language Model Agents via Executable Language Grounding - XLANG Lab, Aug 9, 2023. What are GPT Agents? A deep dive into the AI interface of the future - Learn why Agents are a core part of the future of AI, Logan Kilpatrick (OpenAI), Jul 25, 2023. Language Agents in the Digital World: Opportunities and Risks - Shunyu Yao (Princeton) et al., Jul 24, 2023. KokoMind: Can LLMs Understand Social Interactions? - Imagine an AI 🤖 at a cocktail party 🍻, Weiyan Shi (Columbia) et al., Jul, 2023 LLM Powered Autonomous Agents - Amazing blog by Lilian Weng (OpenAI), Jun 23, 2023. Other Related Sources Personalized Generative AI @ CIKM'23 LLM-Agents-Papers - A repo lists papers about LLM role playing, memory mechanism and LLM game playing. LLMAgentPapers - Must-read papers on multiagents of LLMs. awesome-llm-agents - A curated list of awesome LLM agents. Acknowledgement We greatly appreciate any contributions via PRs, issues, emails, or other methods. 
Thanks Tianle Cai ( @ctlllll ), Yifan Song ( @Yifan-Song793 ), Xinya Du ( @xinyadu ), Binfeng Xu ( @billxbf ), Xuanhe Zhou ( @zhouxh19 ), Boyuan Zheng ( @boyuanzheng010 ), Qiao Jin ( @Andy-jqa ), Shenao Zhang ( @shenao-zhang ), Yu Gu ( @entslscheia ), Zhibin Gou ( @ZubinGou ), Fan Zhou ( @koalazf99 ), Ziniu Hu ( @acbull ), Yangjun Ruan ( @ryoungj ), Zhiyuan Hu ( @zhiyuanhubj ), Qinlin Zhao ( @icecream-and-tea ), Lucio La Cava ( @luciolcv ), Zehui Chen ( @zehuichen123 ), Rishi Hazra ( @RishiHazra ), Lin Guan ( @GuanSuns ), Yuchen Zhuang ( @night-chen ), Xuhui Zhou ( @XuhuiZhou ), Samuel Holt ( @samholt ) and many others for their kind suggestions and contributions. ❤️ The repository is initially built and maintained by Yupeng Hou ( yphou@ucsd.edu ).;Awesome things about LLM-powered agents. Papers / Repos / Blogs / ...;awesome-list,embodied-agent,embodied-ai,foundation-model,foundation-models,generative-agents,generative-ai,generative-model,generative-models,large-language-model | hyp1231/awesome-llm-powered-agent |
msgbyte/tianji;Tianji All-in-One Insight Hub Website analytics + Uptime Monitor + Server Status = Tianji All in one project! Motivation During our observations of the website. We often need to use multiple applications together. For example, we need analysis tools such as GA / umami to check pv/uv and the number of visits to each page, we need an uptime monitor to check the network quality and connectivity of the server, and we need to use prometheus to obtain the status reported by the server to check the quality of the server. In addition, if we develop an application that allows open source deployment, we often need a telemetry system to help us collect the simplest information about other people's deployment situations. I think these tools should serve the same purpose, so is there an application that can integrate these common needs in a lightweight way? After all, most of the time we don't need very professional and in-depth functions. But in order to achieve comprehensive monitoring, I need to install so many services. It's good to specialize in one thing, if we are experts in related abilities we need such specialized tools. But for most users who only have lightweight needs, an All-in-One application will be more convenient and easier to use. Roadmap [x] website analysis [x] monitor [x] server status [x] problem notification [x] telemetry [x] openapi [x] website [ ] team collaboration [ ] utm track [x] waitlist [x] survey [ ] survey page [ ] lighthouse report [ ] hooks [ ] links [x] helm install support [ ] allow install from public [ ] improve monitor reporter usage [ ] uninstall guide [ ] download from server [ ] custom params guide Preview Translation Add a new translation modify those file:
- src/client/i18next-toolkit.config.cjs : edit the country code in this file
- src/client/utils/constants.ts : add the new language in this file so it is displayed Then, run the commands below to auto-generate the translation files bash
cd src/client
pnpm install
pnpm run translation:extract
pnpm run translation:translate # this will call chatgpt to run auto translation, so you need to set the env `OPENAPI_KEY` to make sure it runs correctly
numz/sd-wav2lip-uhq;🔉👄 Wav2Lip STUDIO extension for Stable Diffusion WebUI Automatic1111 English | 简体中文 https://user-images.githubusercontent.com/800903/262435301-af205a91-30d7-43f2-afcc-05980d581fe0.mp4 STANDALONE VERSION CAN BE FOUND HERE : WAV2LIP STUDIO STANDALONE In the standalone version you can :
- ♻ Manage project: Add a feature to manage multiple projects
- 👪 Introduced multiple face swap: Can now swap multiple faces in one shot
- ⛔ Visible face restriction: Can now run the whole process even if no face is detected in a frame!
- 📺 Video Size: works with high resolution video input (tested with 1980x1080; should work with 4K but slowly)
- 🔑 Keyframe manager: Add a keyframe manager for better control of the video generation
- 🍪 coqui TTS integration: Remove bark integration, use coqui TTS instead
- 💬 Conversation: Add a conversation feature with multiple people
- 🔈 Record your own voice: Add a feature to record your own voice
- 👬 Clone voice: Add a feature to clone a voice from a video
- 🎏 Translate video: Add a feature to translate a video with voice cloning (HEYGEN like)
- 🔉 Volume amplifier for wav2lip: Add a feature to amplify the volume of the wav2lip output
- 🕡 Add a delay before the speech starts
- 🚀 Speed up process: Speed up the process 💡 Description This repository contains a Wav2Lip Studio extension for Automatic1111. It's an all-in-one solution: just choose a video and a speech file (wav or mp3), and the extension will generate a lip-sync video. It improves the quality of the lip-sync videos generated by the Wav2Lip tool by applying specific post-processing techniques with Stable diffusion tools. 📖 Quick Index 🚀 Updates 🔗 Requirements 💻 Installation 🐍 Usage 👄 Note on the bark Fidelity 📺 Examples 📖 Behind the scenes 💪 Quality tips ⚠️Noted Constraints 📝 To do 😎 Contributing 🙏 Appreciation 📝 Citation 📜 License ☕ Support Wav2lip Studio 🚀 Updates 2023.09.13 - 👪 Introduced face swap: facefusion integration (See Usage section) this feature is under experimental . 2023.08.22 - 👄 Introduced bark (See Usage section), this feature is under experimental . 2023.08.20 - 🚢 Introduced the GFPGAN model as an option.
- ▶ Added the feature to resume generation.
- 📏 Optimized to release memory post-generation. 2023.08.17 - 🐛 Fixed purple lips bug 2023.08.16 - ⚡ Added Wav2lip and enhanced video output, with the option to download the one that's best for you, likely the "generated video".
- 🚢 Updated User Interface: Introduced control over CodeFormer Fidelity.
- 👄 Removed image as input, SadTalker is better suited for this.
- 🐛 Fixed a bug regarding the discrepancy between input and output video that incorrectly positioned the mask.
- 💪 Refined the quality process for greater efficiency.
- 🚫 Interruption will now generate a video if the process has already created frames 2023.08.13 - ⚡ Speed-up computation
- 🚢 Change User Interface : Add controls on hidden parameters
- 👄 Only Track mouth if needed
- 📰 Control debug
- 🐛 Fix resize factor bug 🔗 Requirements latest version of Stable Diffusion WebUI Automatic1111 by following the instructions on the Stable Diffusion Webui repository. FFmpeg : download it from the official FFmpeg site . Follow the instructions appropriate for your operating system, note ffmpeg have to be accessible from the command line. 💻 Installation Launch Automatic1111 Face Swap : On Windows, download and install Visual Studio . During the install, make sure to include the Python and C++ packages. In the extensions tab, enter the following URL in the "Install from URL" field and click "Install": Go to the "Installed Tab" in the extensions tab and click "Apply and quit". If you don't see the "Wav2Lip UHQ tab" restart Automatic1111. 🔥 Important: Get the weights. Download the model weights from the following locations and place them in the corresponding directories (take care about the filename, especially for s3fd) | Model | Description | Link to the model | install folder |
|:-------------------:|:----------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|
| Wav2Lip | Highly accurate lip-sync | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\checkpoints\ |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\checkpoints\ |
| s3fd | Face Detection pre trained model | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\face_detection\detection\sfd\s3fd.pth |
| landmark predicator | Dlib 68 point face landmark prediction (click on the download icon) | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\predicator\shape_predictor_68_face_landmarks.dat |
| landmark predicator | Dlib 68 point face landmark prediction (alternate link) | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\predicator\shape_predictor_68_face_landmarks.dat |
| landmark predicator | Dlib 68 point face landmark prediction (alternate link click on the download icon) | Link | extensions\sd-wav2lip-uhq\scripts\wav2lip\predicator\shape_predictor_68_face_landmarks.dat |
| face swap model | model used by face swap | Link | extensions\sd-wav2lip-uhq\scripts\faceswap\model\inswapper_128.onnx | 🐍 Usage Choose a video (avi or mp4 format) with a face in it. If there is no face in only one frame of the video, process will fail. Note avi file will not appear in Video input but process will works. Face Swap (take times so be patient): Face Swap : chose the image of the face you want to swap with the face in the video. Face Index : if there are multiple faces in the image, you can choose the face you want to swap with the face in the video. 0 is the first face from left to right. Audio, 2 options: Put audio file in the "Speech" input. Generate Audio with the text to speech bark integration. Choose the language : Turkish, English, Chinese, Hindi, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Polish, German, French Choose the Gender Choose your speaker, you can ear a sample in the "Audio Example" Choose Low VRAM True (default) if you have a Video Card with less than 16GB VRAM Write your text in the text area "Prompt" Note that bark can only generate 14 seconds of audio, so if you want to generate a longer audio, you have to use "[split]" in your text. For example, if you want to generate a 30 seconds audio, you have to write your text like this : "This is the first part of my text [split] This is the second part of my text" Temperature: 0.0 is supposed to be closer to the voice, and 1.0 is more creative, but in reality, 0.0 yields strange results and 1.0 something very far from the voice. 0.7 is the default value set by 'bark', try different values to see what works best for you. Silence : Time in seconds between each punctuation(。!!.??,). Default is 0.25 seconds. See Bark documentation for more details. Below is a list of some known non-speech sounds. [laughter] [laughs] [sighs] [music] [gasps] [clears throat] "-" or ... for hesitations ♪ for song lyrics CAPITALIZATION for emphasis of a word [MAN] and [WOMAN] to bias Bark toward male and female speakers, respectively choose a checkpoint (see table above). Padding : Wav2Lip uses this to move the mouth. This is useful if the mouth is not at the good place. Usually, default value is good, but certain video may need to be adjusted. No Smooth : When checked, this option retains the original mouth shape without smoothing. Resize Factor : This is a resize factor for the video. The default value is 1.0, but you can change it to suit your needs. This is useful if the video size is too large. Only Mouth : This option tracks only the mouth, removing other facial motions like those of the cheeks and chin. Mouth Mask Dilate : This will dilate the mouth mask to cover more area around the mouth. depends on the mouth size. Face Mask Erode : This will erode the face mask to remove some area around the face. depends on the face size. Mask Blur : This will blur the mask to make it more smooth, try to keep it under or equal to Mouth Mask Dilate . Code Former Fidelity : A value of 0 offers higher quality but may significantly alter the person's facial appearance and cause noticeable flickering between frames. A value of 1 provides lower quality but maintains the person's face more consistently and reduces frame flickering. Using a value below 0.5 is not advised. Adjust this setting to achieve optimal results. Starting with a value of 0.75 is recommended. Active debug : This will create step-by-step images in the debug folder. Click on the "Generate" button. 
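To make the effect of the Mouth Mask Dilate, Face Mask Erode and Mask Blur settings above easier to picture, here is a minimal, illustrative OpenCV sketch of that style of mask post-processing — the function name and default values are assumptions for illustration, not the extension's actual code:

```python
import cv2
import numpy as np

def refine_masks(mouth_mask, face_mask, mouth_dilate=15, face_erode=15, mask_blur=15):
    """Post-process binary (0/255) uint8 masks before pasting the generated mouth back."""
    # Dilating grows the mouth region so the generated mouth fully covers the original one.
    mouth = cv2.dilate(mouth_mask, np.ones((mouth_dilate, mouth_dilate), np.uint8), iterations=1)
    # Eroding shrinks the face region so the blend stays away from the background.
    face = cv2.erode(face_mask, np.ones((face_erode, face_erode), np.uint8), iterations=1)
    # GaussianBlur needs an odd kernel size; blurring softens the paste-back edge.
    k = mask_blur if mask_blur % 2 == 1 else mask_blur + 1
    mouth = cv2.GaussianBlur(mouth, (k, k), 0)
    return mouth, face
```

Keeping the blur no larger than roughly twice the dilation, as the quality tips below also suggest, avoids the blurred edge exposing the original mouth.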
⚠ "resume" button can be use if face swap and wav2lip step have been done, then you can adjust "mouth mask dilate", "face mask erode", "mask blur" and change "restoration model" without regenerate face swap and wav2lip. 👄 Note on the bark Fidelity Bark is interesting but sometimes yields strange results (or even hilarious ones). Each generation will give you something different and It may take several generations before you achieve something conclusive.
Apart from English, it seems that the other languages speak as if they were being used by a foreigner. Sometimes even if you choose "Male" it will speak like a woman, and vice versa. Sometimes, even when choosing a specific speaker, it will sound like another speaker or even another language. 📺 Examples https://user-images.githubusercontent.com/800903/262439441-bb9d888a-d33e-4246-9f0a-1ddeac062d35.mp4 https://user-images.githubusercontent.com/800903/262442794-61b1e32f-3f87-4b36-98d6-f711822bdb1e.mp4 https://user-images.githubusercontent.com/800903/262449305-901086a3-22cb-42d2-b5be-a5f38db4549a.mp4 https://user-images.githubusercontent.com/800903/267808494-300f8cc3-9136-4810-86e2-92f2114a5f9a.mp4 📖 Behind the scenes This extension operates in several stages to improve the quality of Wav2Lip-generated videos: Generate face swap video : The script first generates the face swap video if image is in "face Swap" field, this operation take times so be patient. Generate a Wav2lip video : Then script generates a low-quality Wav2Lip video using the input video and audio. Video Quality Enhancement : Create a high-quality video using the low-quality video by using the enhancer define by user. Mask Creation : The script creates a mask around the mouth and tries to keep other facial motions like those of the cheeks and chin. Video Generation : The script then takes the high-quality mouth image and overlays it onto the original image guided by the mouth mask. Video Post Processing : The script then uses the ffmpeg tool to generate the final video. 💪 Quality tips Use a high quality video as input Utilize a video with a consistent frame rate. Occasionally, videos may exhibit unusual playback frame rates (not the standard 24, 25, 30, 60), which can lead to issues with the face mask. Use a high quality audio file as input, without background noise or music. Clean audio with a tool like https://podcast.adobe.com/enhance . Dilate the mouth mask. This will help the model retain some facial motion and hide the original mouth. Mask Blur maximum twice the value of Mouth Mask Dilate. If you want to increase the blur, increase the value of Mouth Mask Dilate otherwise the mouth will be blurred and the underlying mouth could be visible. Upscaling can be good for improving result, particularly around the mouth area. However, it will extend the processing duration. Use this tutorial from Olivio Sarikas to upscale your video: https://www.youtube.com/watch?v=3z4MKUqFEUk . Ensure the denoising strength is set between 0.0 and 0.05, select the 'revAnimated' model, and use the batch mode. i'll create a tutorial for this soon. Ensure there is a face on each frame of the video. If the face is not detected, process will stop. ⚠ Noted Constraints for speed up process try to keep resolution under 1000x1000px, so use resize factor and upscaling after process. If the initial phase is excessively lengthy, consider using the "resize factor" to decrease the video's dimensions. While there's no strict size limit for videos, larger videos will require more processing time. It's advisable to employ the "resize factor" to minimize the video size and then upscale the video once processing is complete. 📖 Troubleshooting Mac users: dlib will not install correctly. in requirements.txt, replace "dlib-bin" with "dlib" 📝 To do [ ] Tutorials [ ] Convert avi to mp4. 
AVI files are not shown in the video input but the process works fine [ ] Add possibility to use a video for audio input [ ] Standalone version [ ] ComfyUI integration 😎 Contributing We welcome contributions to this project. When submitting pull requests, please provide a detailed description of the changes. See CONTRIBUTING for more information. 🙏 Appreciation Wav2Lip CodeFormer bark facefusion ☕ Support Wav2lip Studio This project is an open-source effort that is free to use and modify. I rely on the support of users to keep this project going and help improve it. If you'd like to support me, you can make a donation on my Patreon page. Any contribution, large or small, is greatly appreciated! Your support helps me cover the costs of development and maintenance, and allows me to allocate more time and resources to enhancing this project. Thank you for your support! patreon page 📝 Citation If you use this project in your own work, in articles, tutorials, or presentations, we encourage you to cite this project to acknowledge the efforts put into it. To cite this project, please use the following BibTeX format: @misc{wav2lip_uhq,
author = {numz},
title = {Wav2Lip UHQ},
year = {2023},
howpublished = {GitHub repository},
publisher = {numz},
url = {https://github.com/numz/sd-wav2lip-uhq}
} 📜 License The code in this repository is released under the MIT license as found in the LICENSE file .;Wav2Lip UHQ extension for Automatic1111;lip-sync,lipsync,stable-diffusion-web-ui,stable-diffusion-webui,stable-diffusion-webui-plugin,wav2lip,audio-driven-talking-face,deep-fake,deep-fakes,image-animation | numz/sd-wav2lip-uhq |
ethstorage/es-node;Golang implementation of the EthStorage node. What is EthStorage? EthStorage is a modular and decentralized storage Layer 2 that offers programmable key-value storage powered by DA. It enables long-term DA solutions for Rollups and opens up new possibilities for fully on-chain applications like games, social networks, AI, etc. Check our website if you want to know more about EthStorage. Getting Started If you are interested in earning rewards by storing data, you may want to consider becoming a storage provider in the EthStorage network, a fully permissionless process as simple as getting the es-node up and running. For prerequisites and detailed instructions to run es-node please read the Storage Providers' guide . Contact Please join our Discord for any technical questions, and stay informed about news of EthStorage by following our Twitter account. License This repository is licensed under BLS .;Golang implementation of the EthStorage node.;[] | ethstorage/es-node |
pytorch-labs/segment-anything-fast;Segment anything ... Fast This work is based on a fork of https://github.com/facebookresearch/segment-anything The corresponding blog post is https://pytorch.org/blog/accelerating-generative-ai/ Installation Step 1 Get latest PyTorch nightly For example: pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121 or pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu Installation instructions vary by platform. Please see the website https://pytorch.org/ Step 2 Install the package pip install git+https://github.com/pytorch-labs/segment-anything-fast.git Usage The package acts like a drop-in replacement for segment-anything. So, for example, if you're currently doing from segment_anything import sam_model_registry you should be able to do from segment_anything_fast import sam_model_registry . However, you're likely here because you want to try a fast, inference version. So we also created a sam_model_fast_registry that automatically applies
- Sets eval mode
- Uses bfloat16
- Enables torch.compile with max-autotune
- Uses a custom Triton kernel that implements SDPA for relative positional encodings for long sequence lengths The custom Triton kernel in particular was written for A100. If you're not using an A100, we will try to rerun autotuning on your device and locally save the best configs.
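For a concrete picture of the drop-in swap described above, here is a small sketch — the checkpoint filename and model type are placeholders, and SamPredictor is assumed to be re-exported just like in the upstream segment-anything package:

```python
from segment_anything_fast import sam_model_fast_registry, SamPredictor

# Same call pattern as the upstream sam_model_registry; the "fast" registry
# additionally applies eval mode, bfloat16, torch.compile and the Triton SDPA kernel.
sam = sam_model_fast_registry["vit_h"](checkpoint="./sam_vit_h_4b8939.pth")
sam.to("cuda")

predictor = SamPredictor(sam)
# predictor.set_image(image)  # image: HxWx3 uint8 numpy array
# masks, scores, logits = predictor.predict(point_coords=..., point_labels=...)
```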
You might still run into performance issues, so you can disable the kernel by setting the environment variable SEGMENT_ANYTHING_FAST_USE_FLASH_4=0 Please also note that the first time you're running this model you'll likely need to wait a bit for it to compile. If you'd like to see the details on how to reproduce all results, please see the README in the experiments folder above. Please don't be shy to open a Github issue if you're missing functionality or find an issue. Thank you. Results The results show a waterfall of techniques. Left to right these techniques are combined. That means the very last bar is the combination of
- bfloat16
- torch.compile with max-autotune
- torch.scaled_dot_product_attention
- A custom Triton kernel that implements SDPA for relative positional encodings for long sequence lengths
- NestedTensors
- Dynamic int8 symmetric quantization
- 2:4 sparse format License segment-anything-fast is released under the Apache 2.0 license.;A batched offline inference oriented version of segment-anything;[] | pytorch-labs/segment-anything-fast |
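Two of the techniques in the waterfall above, torch.compile with max-autotune and torch.scaled_dot_product_attention, are plain PyTorch 2.x APIs; the generic sketch below (not taken from this repo's internals) shows how they are typically invoked:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(64, 64).to(device="cuda", dtype=torch.bfloat16).eval()
# mode="max-autotune" lets the compiler search for the fastest kernels.
compiled = torch.compile(model, mode="max-autotune")

q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)
# Fused attention kernel replacing a hand-written softmax(QK^T / sqrt(d)) @ V.
out = F.scaled_dot_product_attention(q, k, v)
```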
microsoft/azurechat;Unleash the Power of Azure Open AI Introduction Solution Overview Deploy to Azure Run from your local machine Deploy to Azure with GitHub Actions Add identity provider Chatting with your file Persona Extensions Environment variables Migration considerations Introduction Azure Chat Solution Accelerator powered by Azure Open AI Service Azure Chat Solution Accelerator powered by Azure Open AI Service is a solution accelerator that allows organisations to deploy a private chat tenant in their Azure Subscription, with a familiar user experience and the added capabilities of chatting over your data and files. Benefits are: Private: Deployed in your Azure tenancy, allowing you to isolate it to your Azure tenant. Controlled: Network traffic can be fully isolated to your network and other enterprise grade authentication security features are built in. Value: Deliver added business value with your own internal data sources (plug and play) or integrate with your internal services (e.g., ServiceNow, etc). Deploy to Azure You can provision Azure resources for the solution accelerator using either the Azure Developer CLI or the Deploy to Azure button below. Regardless of the method you chose you will still need set up an identity provider and specify an admin user Deployment Options You can deploy the application using one of the following options: 1. Azure Developer CLI 2. Azure Portal Deployment 1. Azure Developer CLI [!IMPORTANT]
This section will create Azure resources and deploy the solution from your local environment using the Azure Developer CLI. Note that you do not need to clone this repo to complete these steps. Download the Azure Developer CLI If you have not cloned this repo, run azd init -t microsoft/azurechat . If you have cloned this repo, just run 'azd init' from the repo root directory. Run azd up to provision and deploy the application ```pwsh
azd init -t microsoft/azurechat
azd up
# if you are wanting to see logs run with debug flag
azd up --debug
``` 2. Azure Portal Deployment [!WARNING]
This button will only create Azure resources. You will still need to deploy the application by following the deploy to Azure section to build and deploy the application using GitHub actions. Click on the Deploy to Azure button to deploy the Azure resources for the application. [!IMPORTANT]
The application is protected by an identity provider; follow the steps in the Add an identity provider section to add authentication to your app. Next Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct .
For more information see the Code of Conduct FAQ or
contact opencode@microsoft.com with any additional questions or comments. Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines .
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.;🤖 💼 Azure Chat Solution Accelerator powered by Azure Open AI Service;[] | microsoft/azurechat |
steven-tey/chathn;ChatHN Chat with Hacker News using natural language. Built with OpenAI Functions and Vercel AI SDK. Introduction · Deploy Your Own · Setting Up Locally · Tech Stack · Contributing · License Introduction ChatHN is an open-source AI chatbot that uses OpenAI Functions and the Vercel AI SDK to interact with the Hacker News API with natural language. https://github.com/steven-tey/chathn/assets/28986134/9c0ad554-f4e5-4e98-8771-5999ddf79235 Deploy your own You can deploy your own version of ChatHN with 1-click: Setting Up Locally To set up ChatHN locally, you'll need to clone the repository and set up the following environment variables: OPENAI_API_KEY – your OpenAI API key (you can get one here ) Tech Stack ChatH is built on the following stack: Next.js – framework OpenAI Functions - AI completions Vercel AI SDK – AI streaming library Vercel – deployments TailwindCSS – styles Contributing Here's how you can contribute: Open an issue if you believe you've encountered a bug. Make a pull request to add new features/make quality-of-life improvements/fix bugs. Author Steven Tey ( @steventey ) License Licensed under the MIT license .;Chat with Hacker News using natural language. Built with OpenAI Functions and Vercel AI SDK.;ai,ai-sdk,edge-functions,hacker-news,nextjs,openai,openai-functions,streaming,vercel | steven-tey/chathn |
rahulnyk/knowledge_graph;Convert any Corpus of Text into a Graph of Knowledge A knowledge graph generated using this code ghpages link of this graph: https://rahulnyk.github.io/knowledge_graph/ What is a knowledge graph? A knowledge graph, also known as a semantic network, represents a network of real-world entities—i.e. objects, events, situations, or concepts—and illustrates the relationship between them. This information is usually stored in a graph database and visualized as a graph structure, prompting the term knowledge “graph.” Source: https://www.ibm.com/topics/knowledge-graph How to create a simple knowledge graph from a body of work? Clean the text corpus (The body of work). Extract concepts and entities from the body of work. Extract relations between the entities. Convert a graph schema. Populate nodes (concepts) and edges (relations). Visualise and Query. Step 6 is purely optional, but it has certain artistic gratification associated with it. Network graphs are beautiful objects (just look at the banner image above, isn't it beautiful?). Fortunately, there are a good number of Python libraries available for generating graph visualisations. Why Graph? Once the Knowledge Graph (KG) is build, we can use it for many purposes. We can run graph algorithms and calculate centralities of any node, to understand how important a concept (node) is to this body of work. We can calculate communities to bunch the concepts together to better analyse the text. We can understand the connectedness between seemingly disconnected concepts. The best of all, we can achieve Graph Retrieval Augmented Generation (GRAG) and chat with our text in a much more profound way using Graph as a retriever. This is a new and improved version of Retrieval Augmented Generation (RAG) where we use a vectory db as a retriever to chat with our documents. This project Here I have created a simple knowledge graph from a PDF document. The process I follow here is very similar to what is outlined in the above sections, with some simplifications. First I split the entire text into chunks. Then I extract concepts mentioned within each chunk using an LLM. Note that I am not extracting entities using an NER model here. There is a difference between concepts and entities. For example 'Bangalore' is an entity, and 'Pleasant weather in Bangalore' is a concept. In my experience, concepts make more meaningful KG than entities. I assume that the concepts that are mentioned in the vicinity of each other are related. So every edge in the KG is a text chunk in which the two connected concepts are mentioned. Once the nodes (concepts) and the edges (text chunks) are calculated, It is easy to create a graph out of them using the libraries mentioned here.
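As a rough sketch of that last step — turning extracted concept pairs into a graph and visualising it — here is what the NetworkX + Pyvis part could look like, assuming an edge dataframe with node_1, node_2 and chunk_id columns (the column names are illustrative, not necessarily the notebook's exact schema):

```python
import pandas as pd
import networkx as nx
from pyvis.network import Network

# Illustrative edge list: each row says two concepts co-occur in a text chunk.
edges = pd.DataFrame({
    "node_1": ["bangalore", "bangalore", "pleasant weather"],
    "node_2": ["pleasant weather", "traffic", "monsoon"],
    "chunk_id": ["c1", "c1", "c2"],
})

# Collapse duplicate concept pairs into a single weighted edge.
grouped = (edges.groupby(["node_1", "node_2"], as_index=False)
                .agg(weight=("chunk_id", "count")))

G = nx.Graph()
for _, row in grouped.iterrows():
    G.add_edge(row["node_1"], row["node_2"], weight=row["weight"])

# Degree can be used to size nodes in the visualisation.
for node, degree in dict(G.degree()).items():
    G.nodes[node]["size"] = 5 + 2 * degree

net = Network(notebook=False)
net.from_nx(G)
net.write_html("knowledge_graph.html")
```

Summing co-occurrence counts into a single edge weight mirrors the contextual-proximity idea described in the flowchart steps below.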
All the components I used here are set up locally, so this project can be run very easily on a personal machine. I have adopted a no-GPT approach here to keep things economical. I am using the fantastic Mistral 7B openorca instruct, which crushes this use case wonderfully. The model can be set up locally using Ollama so generating the KG is basically free (No calls to GPT). To generate a graph this the notebook you have to tweak. extract_graph.ipynb The notebook implements the method outlined in the following flowchart. Split the corpus of text into chunks. Assign a chunk_id to each of these chunks. For every text chunk extract concepts and their semantic relationships using an LLM. Let’s assign this relation a weightage of W1. There can be multiple relationships between the same pair of concepts. Every such relation is an edge between a pair of concepts. Consider that the concepts that occur in the same text chunk are also related by their contextual proximity. Let’s assign this relation a weightage of W2. Note that the same pair of concepts may occur in multiple chunks. Group similar pairs, sum their weights, and concatenate their relationships. So now we have only one edge between any distinct pair of concepts. The edge has a certain weight and a list of relations as its name. Additional it also calculates the Degree of each node, and Communities of nodes, for sizing and coloring the nodes in the graph respectively. Here is a Medium article explaining the method in detail Tech Stack Mistral 7B I am using the Mistral 7B Openorca for extracting concepts out of text chunks. It can follow the system prompt instructions very well. Ollama Ollama makes it easy to host any model locally. Mistral 7B OpenOrca version is already available with Ollama to use out of the box. To set up this project, you must install Ollama on your local machine. Step 1: Install Ollama https://ollama.ai Step 2: run ollama run zephyr in your terminal. This will pull the zephyr model to your local machine and start the Ollama server. Pandas dataframes for graph schema (can use a graphdb at a later stage). NetworkX This is a python library that makes dealing with graphs super easy Pyvis Pyvis python library for visualisation. Pyvis generates Javascript Graph visualisations using python, so the final graphs can be hosted on the web. For example the github link of this repo is a graph generated by pyvis Looking for contributions This project needs a lot more work. There are some wonderful ideas suggested by folks on medium and here on Github. If this interests you, Please join hands and lets' build this together. Here are a few suggested imrpovements. Back End [ ] Use embeddings to deduplicate semantically similar concepts ( Suggested by William Claude on the Medium Article ) [ ] Avoid having similar concepts written differently by the LLM (eg: "doctor" and "doctors") [ ] Reinforce the clustering of strongly similar concepts (eg: "doctor" and "medical practitioner")? [ ] Filter out the redundant, or outlier concepts that may not be useful in understanding the text. For example, generic concepts that occur too often in the text. ( Suggested by Luke Chesley ) [ ] Better implement the concept of contextual proximity to avoide overweighting certain concepts that occur too frequently, or to weed out useless edges. ( Suggested by Luke Chesley ) Front End [ ] Create a Frontend for rendering Graph of Concepts in a more useful way. for example here is a flow. ( Suggested by David Garcia on the Medium Article ). 
Provide a list concept/interest/topics User selects what they're interested in This expands to show sub-topics, sub-concepts, sub-x, etc. This is how you get deep into a specialty;Convert any text to a graph of knowledge. This can be used for Graph Augmented Generation or Knowledge Graph based QnA;[] | rahulnyk/knowledge_graph |
MetaCubeX/metacubexd;metacubexd Mihomo Dashboard, The Official One, XD Preview Published Official Links GH Pages Custom Domain: http://d.metacubex.one GH Pages: https://metacubex.github.io/metacubexd Cloudflare Pages: https://metacubexd.pages.dev Usage Enable external-controller in your config file yaml
external-controller: 0.0.0.0:9090 Use pre-built assets from gh-pages branch First time setup shell
git clone https://github.com/metacubex/metacubexd.git -b gh-pages /etc/mihomo/ui Make sure you have external-ui directory set correctly in your config file yaml
external-ui: /etc/mihomo/ui Update shell
git -C /etc/mihomo/ui pull -r Run inside Docker docker cli Running shell
docker run -d --restart always -p 80:80 --name metacubexd ghcr.io/metacubex/metacubexd Update and Restart shell
docker pull ghcr.io/metacubex/metacubexd && docker restart metacubexd docker-compose.yml ```yaml
version: '3'
services:
metacubexd:
container_name: metacubexd
image: ghcr.io/metacubex/metacubexd
restart: always
ports:
- '80:80' # optional
meta:
container_name: meta
image: docker.io/metacubex/mihomo:Alpha
restart: always
pid: host
ipc: host
network_mode: host
cap_add:
- ALL
volumes:
- ./config.yaml:/root/.config/mihomo
- /dev/net/tun:/dev/net/tun
``` Running shell
docker compose up -d Update and Restart shell
docker compose pull && docker compose up -d Build locally Install npm dependencies shell
pnpm install Build artifacts shell
pnpm build Serve static files shell
pnpm serve Credits SolidJS daisyUI;Mihomo Dashboard, The Official One, XD;dashboard,solidjs,docker,daisyui | MetaCubeX/metacubexd |
langchain-ai/streamlit-agent;🦜️🔗 LangChain 🤝 Streamlit agent examples This repository contains reference implementations of various LangChain agents as Streamlit apps including: basic_streaming.py : Simple streaming app with langchain.chat_models.ChatOpenAI ( View the app ) basic_memory.py : Simple app using StreamlitChatMessageHistory for LLM conversation memory ( View the app ) mrkl_demo.py : An agent that replicates the MRKL demo ( View the app ) minimal_agent.py : A minimal agent with search (requires setting OPENAI_API_KEY env to run) search_and_chat.py : A search-enabled chatbot that remembers chat history ( View the app ) simple_feedback.py : A chat app that allows the user to add feedback on responses using streamlit-feedback , and link to the traces in LangSmith ( View the app ) chat_with_documents.py : Chatbot capable of answering queries by referring custom documents ( View the app ) chat_with_sql_db.py : Chatbot which can communicate with your database ( View the app ) chat_pandas_df.py : Chatbot to ask questions about a pandas DF (Note: uses PythonAstREPLTool which is vulnerable to arbitrary code execution,
see langchain #7700 ) Apps feature LangChain 🤝 Streamlit integrations such as the Callback integration and StreamlitChatMessageHistory . More great app examples Check out some other full examples of apps that utilize LangChain + Streamlit: Auto-graph - Build knowledge graphs from user-input text ( Source code ) Web Explorer - Retrieve and summarize insights from the web ( Source code ) LangChain Teacher - Learn LangChain from an LLM tutor ( Source code ) Text Splitter Playground - Play with various types of text splitting for RAG ( Source code ) Tweet Generator - Fine tune GPT-3.5 on tweets ( Source code ) Setup This project uses Poetry for dependency management. ```shell Create Python environment $ poetry install Install git pre-commit hooks $ poetry shell
$ pre-commit install
``` Running ```shell Run mrkl_demo.py or another app the same way $ streamlit run streamlit_agent/mrkl_demo.py
``` Running with Docker This project includes a Dockerfile to run the app in a Docker container. The Docker image is optimised for size and building time with cache techniques. To generate the image with DOCKER_BUILDKIT , follow the command below: DOCKER_BUILDKIT=1 docker build --target=runtime . -t langchain-streamlit-agent:latest Run the docker container directly docker run -d --name langchain-streamlit-agent -p 8051:8051 langchain-streamlit-agent:latest Run the docker container using docker-compose (Recommended) Edit the Command in docker-compose with target streamlit app docker-compose up Contributing We plan to add more agent and chain examples over time and improve the existing ones - PRs welcome! 🚀;Reference implementations of several LangChain agents as Streamlit apps;[] | langchain-ai/streamlit-agent
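To give a feel for the StreamlitChatMessageHistory memory integration that the streamlit-agent examples above demonstrate, here is a minimal, hedged sketch — import paths differ across LangChain versions, and the model name is only an example:

```python
import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.memory import StreamlitChatMessageHistory  # import path varies by LangChain version

# Chat history is kept in Streamlit session state, so it survives script reruns.
history = StreamlitChatMessageHistory(key="chat_messages")
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Replay prior turns.
for msg in history.messages:
    st.chat_message(msg.type).write(msg.content)

if prompt := st.chat_input("Ask something"):
    st.chat_message("human").write(prompt)
    history.add_user_message(prompt)
    answer = llm.predict(prompt)
    st.chat_message("ai").write(answer)
    history.add_ai_message(answer)
```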
google/style-aligned;Style Aligned Image Generation via Shared Attention Project Page Paper Setup This code was tested with Python 3.11, Pytorch 2.1 and Diffusers 0.16 . Examples See style_aligned_sdxl notebook for generating style aligned images using SDXL . See style_aligned_transfer_sdxl notebook for generating images with a style from reference image using SDXL . See style_aligned_w_controlnet notebook for generating style aligned and depth conditioned images using SDXL with ControlNet-Depth . style_aligned_w_multidiffusion can be used for generating style aligned panoramas using SD V2 with MultiDiffusion . Demos Thanks to @yvrjsharma for preparing the demos: style aligned text to image , ControlNet + StyleAligned and MultiDiffusion + StyleAligned To start a demo locally, simply run python <demo file name>.py and enter the demo in your browser using the provided url. An online demo of ControlNet + StyleAligned is available here . TODOs [x] Adding demo. [x] StyleAligned from an input image. [ ] Multi-style with MultiDiffusion. [ ] StyleAligned with DreamBooth Disclaimer This is not an officially supported Google product.;Official code for "Style Aligned Image Generation via Shared Attention";[] | google/style-aligned |
hkchengrex/Tracking-Anything-with-DEVA;DEVA: Tracking Anything with Decoupled Video Segmentation Ho Kei Cheng , Seoung Wug Oh , Brian Price , Alexander Schwing , Joon-Young Lee University of Illinois Urbana-Champaign and Adobe ICCV 2023 [arXiV] [PDF] [Project Page] Highlights Provide long-term, open-vocabulary video segmentation with text-prompts out-of-the-box. Fairly easy to integrate your own image model ! Wouldn't you or your reviewers be interested in seeing examples where your image model also works well on videos :smirk:? No finetuning is needed! Note (Mar 6 2024): We have fixed a major bug (introduced in the last update) that prevented the deletion of unmatched segments in text/eval_with_detections modes. This should greatly reduce the amount of accumulated noisy detection/false positives, especially for long videos. See #64 . Note (Sep 12 2023): We have improved automatic video segmentation by not querying the points in segmented regions. We correspondingly increased the number of query points per side to 64 and deprecated the "engulf" mode. The old code can be found in the "legacy_engulf" branch. The new code should run a lot faster and capture smaller objects. The text-prompted mode is still recommended for better results. Note (Sep 11 2023): We have removed the "pluralize" option as it works weirdly sometimes with GroundingDINO. If needed, please pluralize the prompt yourself. Abstract We develop a decoupled video segmentation approach ( DEVA ), composed of task-specific image-level segmentation and class/task-agnostic bi-directional temporal propagation.
Due to this design, we only need an image-level model for the target task and a universal temporal propagation model which is trained once and generalizes across tasks.
To effectively combine these two modules, we propose a (semi-)online fusion of segmentation hypotheses from different frames to generate a coherent segmentation.
We show that this decoupled formulation compares favorably to end-to-end approaches in several tasks, most notably in large-vocabulary video panoptic segmentation and open-world video segmentation. Demo Videos Demo with Grounded Segment Anything (text prompt: "guinea pigs" and "chicken"): https://github.com/hkchengrex/Tracking-Anything-with-DEVA/assets/7107196/457a9a6a-86c3-4c5a-a3cc-25199427cd11 Source: https://www.youtube.com/watch?v=FM9SemMfknA Demo with Grounded Segment Anything (text prompt: "pigs"): https://github.com/hkchengrex/Tracking-Anything-with-DEVA/assets/7107196/9a6dbcd1-2c84-45c8-ac0a-4ad31169881f Source: https://youtu.be/FbK3SL97zf8 Demo with Grounded Segment Anything (text prompt: "capybara"): https://github.com/hkchengrex/Tracking-Anything-with-DEVA/assets/7107196/2ac5acc2-d160-49be-a013-68ad1d4074c5 Source: https://youtu.be/couz1CrlTdQ Demo with Segment Anything (automatic points-in-grid prompting); original video follows DEVA result overlaying the video: https://github.com/hkchengrex/Tracking-Anything-with-DEVA/assets/7107196/ac6ab425-2f49-4438-bcd4-16e4ccfb0d98 Source: DAVIS 2017 validation set "soapbox" Demo with Segment Anything on a out-of-domain example; original video follows DEVA result overlaying the video: https://github.com/hkchengrex/Tracking-Anything-with-DEVA/assets/7107196/48542bcd-113c-4454-b512-030df26def08 Source: https://youtu.be/FQQaSyH9hZI Installation Tested on Ubuntu only. For installation on Windows WSL2, refer to https://github.com/hkchengrex/Tracking-Anything-with-DEVA/issues/20 (thanks @21pl). Prerequisite: - Python 3.9+
- PyTorch 1.12+ and corresponding torchvision Clone our repository: bash
git clone https://github.com/hkchengrex/Tracking-Anything-with-DEVA.git Install with pip: bash
cd Tracking-Anything-with-DEVA
pip install -e . (If you encounter the File "setup.py" not found error, upgrade your pip with pip install --upgrade pip ) Download the pretrained models: bash
bash scripts/download_models.sh Required for the text-prompted/automatic demo: Install our fork of Grounded-Segment-Anything . Follow its instructions. Grounding DINO installation might fail silently.
Try python -c "from groundingdino.util.inference import Model as GroundingDINOModel" .
If you get a warning about running on CPU mode only, make sure you have CUDA_HOME set during Grounding DINO installation. (Optional) For fast integer program solving in the semi-online setting: Get your gurobi licence which is free for academic use.
If a license is not found, we fall back to using PuLP which is slower and is not rigorously tested by us. All experiments are conducted with gurobi. Quick Start DEMO.md contains more details on the input arguments and tips on speeding up inference.
You can always look at deva/inference/eval_args.py and deva/ext/ext_eval_args.py for a full list of arguments. With gradio: bash
python demo/demo_gradio.py Then visit the link that popped up on the terminal. If executing on a remote server, try port forwarding . We have prepared an example in example/vipseg/12_1mWNahzcsAc (a clip from the VIPSeg dataset).
The following two scripts segment the example clip using either Grounded Segment Anything with text prompts or SAM with automatic (points in grid) prompting. Script (text-prompted): bash
python demo/demo_with_text.py --chunk_size 4 \
--img_path ./example/vipseg/images/12_1mWNahzcsAc \
--amp --temporal_setting semionline \
--size 480 \
--output ./example/output --prompt person.hat.horse We support different SAM variants in text-prompted mode; by default we use the original SAM version. For higher-quality mask prediction, specify --sam_variant sam_hq . For efficient SAM usage, you can specify --sam_variant sam_hq_light or --sam_variant mobile . Script (automatic): bash
python demo/demo_automatic.py --chunk_size 4 \
--img_path ./example/vipseg/images/12_1mWNahzcsAc \
--amp --temporal_setting semionline \
--size 480 \
--output ./example/output Training and Evaluation Running DEVA with your own detection model. Running DEVA with detections to reproduce the benchmark results. Training the DEVA model. Limitations On closed-set data, DEVA most likely does not work as well as end-to-end approaches. Joint training is (for now) still a better idea when you have enough target data. Positive detections are amplified temporally due to propagation. Having a detector with a lower false positive rate (i.e., a higher threshold) helps. If new objects are coming in and out all the time (e.g., in driving scenes), we will keep a lot of objects in the memory bank which unfortunately increases the false positive rate. Decreasing max_missed_detection_count might help since we delete objects from memory more eagerly. Citation bibtex
@inproceedings{cheng2023tracking,
title={Tracking Anything with Decoupled Video Segmentation},
author={Cheng, Ho Kei and Oh, Seoung Wug and Price, Brian and Schwing, Alexander and Lee, Joon-Young},
booktitle={ICCV},
year={2023}
} References The demo would not be possible without :heart: from the community: Grounded Segment Anything: https://github.com/IDEA-Research/Grounded-Segment-Anything Segment Anything: https://github.com/facebookresearch/segment-anything XMem: https://github.com/hkchengrex/XMem Title card generated with OpenPano: https://github.com/ppwwyyxx/OpenPano;[ICCV 2023] Tracking Anything with Decoupled Video Segmentation;deep-learning,object-tracking,open-vocabulary-segmentation,video-editing,video-object-segmentation,video-segmentation,open-vocabulary-video-segmentation,open-world-video-segmentation,iccv2023 | hkchengrex/Tracking-Anything-with-DEVA |
sh-lee-prml/HierSpeechpp;HierSpeech++: Bridging the Gap between Semantic and Acoustic Representation by Hierarchical Variational Inference for Zero-shot Speech Synthesis The official implementation of HierSpeech++ | | Demo page | Checkpoint Sang-Hoon Lee, Ha-Yeong Choi, Seung-Bin Kim, Seong-Whan Lee Department of Artificial Intelligence, Korea University, Seoul, Korea Abstract Large language models (LLM)-based speech synthesis has been widely adopted in zero-shot speech synthesis. However, they require a large-scale data and possess the same limitations as previous autoregressive speech models, including slow inference speed and lack of robustness. This paper proposes HierSpeech++, a fast and strong zero-shot speech synthesizer for text-to-speech (TTS) and voice conversion (VC). We verified that hierarchical speech synthesis frameworks could significantly improve the robustness and expressiveness of the synthetic speech. Furthermore, we significantly improve the naturalness and speaker similarity of synthetic speech even in zero-shot speech synthesis scenarios. For text-to-speech, we adopt the text-to-vec framework, which generates a self-supervised speech representation and an F0 representation based on text representations and prosody prompts. Then, HierSpeech++ generates speech from the generated vector, F0, and voice prompt. We further introduce a high-efficient speech super-resolution framework from 16 kHz to 48 kHz. The experimental results demonstrated that the hierarchical variational autoencoder could be a strong zero-shot speech synthesizer given that it outperforms LLM-based and diffusion-based models. Moreover, we achieved the first human-level quality zero-shot speech synthesis. This repository contains: 🪐 A PyTorch implementation of HierSpeech++ (TTV, Hierarchical Speech Synthesizer, SpeechSR) ⚡️ Pre-trained HierSpeech++ models trained on LibriTTS (Train-460, Train-960, and more dataset) Gradio Demo on HuggingFace. HuggingFace provides us with a community GPU grant. Thanks 😊 Previous Our Works [NeurIPS2022] HierSpeech: Bridging the Gap between Text and Speech by Hierarchical Variational Inference using Self-supervised Representations for Speech Synthesis [Interspeech2023] HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer This paper is an extension version of above papers. Update 24.02.20 We get back the reconstruction loss for ttv. Adding the loss masking for zero-padding decrease the tts performance by generating a random long pause in generated speech and repeated sound(It may affect the loss balance). Sorry for the confusion. I revised it as a paper version. 24.01.19 We have released the TTV_v1 training code. Regardless of the language, you can train TTV using personal dataset, and perform speech synthesis using the pre-trained Hierarchical Speech Synthesizer model. 
Todo Hierarchical Speech Synthesizer [x] HierSpeechpp-Backbone (LibriTTS-train-460) [x] HierSpeechpp-Backbone (LibriTTS-train-960) [x] HierSpeechpp-Backbone-60epoch (LibriTTS-train-960, Libri-light (Medium), Expresso, MSSS(Kor), NIKL(Kor)) [x] HierSpeechpp-Backbone-200epoch (LibriTTS-train-960, Libri-light (Medium), Expresso, MSSS(Kor), NIKL(Kor)) Text-to-Vec (TTV) [x] TTV-v1 (LibriTTS-train-960) [ ] TTV-v2 (Multi-lingual TTV) Speech Super-resolution (16k --> 24k or 48k) [x] SpeechSR-24k [x] SpeechSR-48k Cleaning Up the Source Code [ ] Clean Code Training code (Will be released after paper acceptance) [ ] TTV [ ] Hierarchical Speech Synthesizer [ ] SpeechSR Getting Started Pre-requisites Pytorch >=1.13 and torchaudio >= 0.13 Install requirements pip install -r requirements.txt Install Phonemizer pip install phonemizer
sudo apt-get install espeak-ng Checkpoint [Download] Hierarchical Speech Synthesizer | Model |Sampling Rate|Params|Dataset|Hour|Speaker|Checkpoint|
|------|:---:|:---:|:---:|:---:|:---:|:---:|
| HierSpeech2|16 kHz|97M| LibriTTS (train-460) |245|1,151| [Download] |
| HierSpeech2|16 kHz|97M| LibriTTS (train-960) |555|2,311| [Download] |
| HierSpeech2|16 kHz|97M| LibriTTS (train-960), Libri-light (Small, Medium), Expresso, MSSS(Kor), NIKL(Kor)|2,796| 7,299 | [Download] | TTV | Model |Language|Params|Dataset|Hour|Speaker|Checkpoint|
|------|:---:|:---:|:---:|:---:|:---:|:---:|
| TTV |Eng|107M| LibriTTS (train-960) |555|2,311| [Download] | SpeechSR | Model |Sampling Rate|Params|Dataset |Checkpoint|
|------|:---:|:---:|:---:|:---:|
| SpeechSR-24k |16kHz --> 24 kHz|0.13M| LibriTTS (train-960), MSSS (Kor) | speechsr24k |
| SpeechSR-48k |16kHz --> 48 kHz|0.13M| MSSS (Kor), Expresso (Eng), VCTK (Eng)| speechsr48k | Text-to-Speech ```
sh inference.sh --ckpt "logs/hierspeechpp_libritts460/hierspeechpp_lt460_ckpt.pth" \ LibriTTS-460 --ckpt "logs/hierspeechpp_libritts960/hierspeechpp_lt960_ckpt.pth" \ LibriTTS-960 --ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1_ckpt.pth" \ Large_v1 epoch 60 (paper version) --ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1.1_ckpt.pth" \ Large_v1.1 epoch 200 (20. Nov. 2023) CUDA_VISIBLE_DEVICES=0 python3 inference.py \
--ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1.1_ckpt.pth" \
--ckpt_text2w2v "logs/ttv_libritts_v1/ttv_lt960_ckpt.pth" \
--output_dir "tts_results_eng_kor_v2" \
--noise_scale_vc "0.333" \
--noise_scale_ttv "0.333" \
--denoise_ratio "0" ```
- For better robustness, we recommend a noise_scale of 0.333
- For better expressiveness, we recommend a noise_scale of 0.667
- Find your best parameters for your style prompt Noise Control ``` without denoiser --denoise_ratio "0" with denoiser --denoise_ratio "1" Mixup (Recommend 0.6~0.8) --denoise_rate "0.8"
``` Voice Conversion This method only utilizes a hierarchical speech synthesizer for voice conversion.
```
sh inference_vc.sh --ckpt "logs/hierspeechpp_libritts460/hierspeechpp_lt460_ckpt.pth" \ LibriTTS-460 --ckpt "logs/hierspeechpp_libritts960/hierspeechpp_lt960_ckpt.pth" \ LibriTTS-960 --ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1_ckpt.pth" \ Large_v1 epoch 60 (paper version) --ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1.1_ckpt.pth" \ Large_v1.1 epoch 200 (20. Nov. 2023) CUDA_VISIBLE_DEVICES=0 python3 inference_vc.py \
--ckpt "logs/hierspeechpp_eng_kor/hierspeechpp_v1.1_ckpt.pth" \
--output_dir "vc_results_eng_kor_v2" \
--noise_scale_vc "0.333" \
--noise_scale_ttv "0.333" \
--denoise_ratio "0"
```
- For better robustness, we recommend a noise_scale of 0.333
- For better expressiveness, we recommend a noise_scale of 0.667
- Find your best parameters for your style prompt
- Voice Conversion is vulnerable to a noisy target prompt, so we recommend utilizing a denoiser with noisy prompts
- For noisy source speech, a wrong F0 may be extracted by YAAPT, resulting in quality degradation. Speech Super-resolution SpeechSR-24k and SpeechSR-48k are provided in the TTS pipeline. If you want to use SpeechSR only, please refer to inference_speechsr.py. If you change the output resolution, add this --output_sr "48000" # Default
--output_sr "24000" #
--output_sr "16000" # without super-resolution. Speech Denoising for Noise-free Speech Synthesis (Only used in Speaker Encoder during Inference) For denoised style prompt, we utilize a denoiser (MP-SENet) . When using a long reference audio, there is an out-of-memory issue with this model so we have a plan to learn a memory efficient speech denoiser in the future. If you have a problem, we recommend to use a clean reference audio or denoised audio before TTS pipeline or denoise the audio with cpu (but this will be slow😥). (21, Nov. 2023) Sliced window denoising. This may reduce a burden for denoising a speech.
```
if denoise == 0:
    audio = torch.cat([audio.cuda(), audio.cuda()], dim=0)
else:
    with torch.no_grad():
        if ori_prompt_len > 80000:
            denoised_audio = []
            for i in range((ori_prompt_len//80000)):
                denoised_audio.append(denoise(audio.squeeze(0).cuda()[i*80000:(i+1)*80000], denoiser, hps_denoiser))
            denoised_audio.append(denoise(audio.squeeze(0).cuda()[(i+1)*80000:], denoiser, hps_denoiser))
            denoised_audio = torch.cat(denoised_audio, dim=1)
        else:
            denoised_audio = denoise(audio.squeeze(0).cuda(), denoiser, hps_denoiser)
audio = torch.cat([audio.cuda(), denoised_audio[:,:audio.shape[-1]]], dim=0) ``` TTV-v2 (WIP) TTV-v1 is a simple model which is very slightly modified from VITS. Although this simple TTV could synthesize a speech with high-quality and high speaker similarity, we thought that there is room for improvement in terms of expressiveness such as prosody modeling. For TTV-v2, we modify some components and training process (Model size: 107M --> 278M) Intermediate hidden size: 256 --> 384 Loss masking for wav2vec reconstruction loss (I left out masking the loss for zero-padding sequences😥) For long sentence generation, we finetune the model with full LibriTTS-train dataset without data filtering (Decrease the learning rate to 2e-5 with batch size of 8 per gpus) Multi-lingual Dataset (We are training the model with Eng, Indic, and Kor dataset now) GAN VS Diffusion [Read More] We think that we could not confirm which is better yet. There are many advatanges for each model so you can utilize each model for your own purposes and each study must be actively conducted simultaneously.
### GAN (Specifically, GAN-based End-to-End Speech Synthesis Models)
- (pros) Fast Inference Speed
- (pros) High-quality Audio
- (cons) Slow Training Speed (Over 7~20 Days)
- (cons) Lower Voice Style Transfer Performance than Diffusion Models
- (cons) Perceptually High-quality but Over-smoothed Audio because of Information Bottleneck by the sampling from the low-dimensional Latent Variable
### Diffusion (Diffusion-based Mel-spectrogram Generation Models)
- (pros) Fast Training Speed (within 3 Days)
- (pros) High-quality Voice Style Transfer
- (cons) Slow Inference Speed
- (cons) Lower Audio quality than End-to-End Speech Synthesis Models
### (In this work) Our Approaches for GAN-based End-to-End Speech Synthesis Models
- Improving Voice Style Transfer Performance in End-to-End Speech Synthesis Models for OOD (Zero-shot Voice Style Transfer for Novel Speaker)
- Improving the Audio Quality beyond Perceptual Quality for Much more High-fidelity Audio Generation
### (Our other works) Diffusion-based Mel-spectrogram Generation Models
- DDDM-VC: Disentangled Denoising Diffusion Models for High-quality and High-diversity Speech Synthesis Models
- Diff-hierVC: Hierarchical Diffusion-based Speech Synthesis Model with Diffusion-based Pitch Modeling
### Our Goals
- Integrating each model for High-quality, High-diversity and High-fidelity Speech Synthesis Models LLM-based Models We hope to compare LLM-based models for zero-shot TTS baselines. However, there is no public-available official implementation of LLM-based TTS models. Unfortunately, unofficial models have a poor performance in zero-shot TTS so we hope they will release their model for a fair comparison and reproducibility and for our speech community. THB I could not stand the inference speed almost 1,000 times slower than e2e models It takes 5 days to synthesize the full sentences of LibriTTS-test subsets. Even, the audio quality is so bad. I hope they will release their official source code soon. In my very personal opinion, VITS is still the best TTS model I have ever seen. But, I acknowledge that LLM-based models have much powerful potential for their creative generative performance from the large-scale dataset but not now. Limitation of our work Slow training speed and Relatively large model size (Compared with VITS) --> Future work: Light-weight and Fast training pipeline and much larger model... Could not generate realistic background sound --> Future work: adding audio generation part by disentangling speech and sound. Could not generate a speech from a too long sentence becauase of our training setting. We see increasing max length could improve the model performance. I hope to use GPUs with 80 GB 😢 # Data Filtering for limited computation resource.
wav_min = 32
wav_max = 600 # 12s
text_min = 1
text_max = 200 TTV v2 may reduce this issue significantly...! Results [Download] We have attached all samples from LibriTTS test-clean and test-other. Reference Our repository is heavily based on VITS and BigVGAN . [Read More] ### Our Previous Works
- HierSpeech/HierSpeech-U for Hierarchical Speech Synthesis Framework: https://openreview.net/forum?id=awdyRVnfQKX
- HierVST for Baseline Speech Backbone: https://www.isca-speech.org/archive/interspeech_2023/lee23i_interspeech.html
- DDDM-VC: https://dddm-vc.github.io/
- Diff-HierVC: https://diff-hiervc.github.io/
### Baseline Model
- VITS: https://github.com/jaywalnut310/vits
- NaturalSpeech: https://speechresearch.github.io/naturalspeech/
- NANSY for Audio Perturbation: https://github.com/revsic/torch-nansy
- Speech Resynthesis: https://github.com/facebookresearch/speech-resynthesis
### Waveform Generator for High-quality Audio Generation
- HiFi-GAN: https://github.com/jik876/hifi-gan
- BigVGAN for High-quality Generator: https://arxiv.org/abs/2206.04658
- UnivNET: https://github.com/mindslab-ai/univnet
- EnCodec: https://github.com/facebookresearch/encodec
### Self-supervised Speech Model
- Wav2Vec 2.0: https://arxiv.org/abs/2006.11477
- XLS-R: https://huggingface.co/facebook/wav2vec2-xls-r-300m
- MMS: https://huggingface.co/facebook/facebook/mms-300m
### Other Large Language Model based Speech Synthesis Model
- VALL-E & VALL-E-X
- SPEAR-TTS
- Make-a-Voice
- MEGA-TTS & MEGA-TTS 2
- UniAudio
### Diffusion-based Model
- NaturalSpeech 2
### AdaLN-zero
- DiT: https://github.com/facebookresearch/DiT
Thanks for all nice works.;The official implementation of HierSpeech++;[] | sh-lee-prml/HierSpeechpp |
nerdyrodent/AVeryComfyNerd;Overview A variety of ComfyUI related workflows and other stuff. You'll need different models and custom nodes for each different workflow. As this page has multiple headings you'll need to scroll down to see more. Resources You'll need models and other resources for ComfyUI. Check the table below for links to everything from ControlNet models to Upscalers Item | Description | Link
| --- | --- | --- |
ComfyUI | The main thing you'll need! | https://github.com/comfyanonymous/ComfyUI See https://youtu.be/2r3uM_b3zA8 for an install guide
ComfyUI Manager | Install any missing nodes using this | https://github.com/ltdrdata/ComfyUI-Manager
Stability AI | Models & VAEs | https://huggingface.co/stabilityai
Text-to-Image models | Text-2-image models | https://huggingface.co/models?pipeline_tag=text-to-image&sort=trending
SSD-1B | Text2-image model | https://huggingface.co/segmind/SSD-1B
ControlNet Models | ControlNet Models | https://huggingface.co/lllyasviel/sd_control_collection/tree/main https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/tree/main
QR Code Monster Control Net | ControlNet Model | https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster
IP Adapter | Github Repo | https://github.com/tencent-ailab/IP-Adapter
IP Adapter models | Models | https://huggingface.co/h94/IP-Adapter
T2I Adapter | Github Repo | https://github.com/TencentARC/T2I-Adapter
Control LoRA | Control Models | https://huggingface.co/stabilityai/control-lora
AnimateDiff | Original repo, many links and more info | https://github.com/guoyww/AnimateDiff
Latent Consistency Models | Models | https://huggingface.co/latent-consistency
Upscale Wiki | Many models & info | https://upscale.wiki/wiki/Main_Page
Artist Style Studies | SDXL Prompt output examples for inspiration | https://sdxl.parrotzone.art/ List of workflows available In ComfyUI the image IS the workflow. Simply drag or load a workflow image into ComfyUI!
See the "troubleshooting" section if your local install is giving errors :) Workflow | Description | Version
| --- | --- | --- | | Basic SDXL ControlNet workflow. Introductory SDXL Canny & Depth ControlNet example. See https://youtu.be/reqamcrPYiM for more information. | SDXL | Basic QR Code Monster SD 1.5 ControlNet - make spiral art! See also - https://youtu.be/D4oJz0w36ps | SD 1.5 | QR Code Monster SD 1.5 ControlNet - make animated spiral art! See also: https://youtu.be/D4oJz0w36ps | SD 1.5 | Updated QR Code Monster SD 1.5 ControlNet with AnimateDiff and FreeU Works best with the v1 QR Code Monster - https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster | SD 1.5 | AnimateDiff with Montion LoRA example. Pan up, down, left right, etc. | SD 1.5 |Instant LoRA 1 Inspired by AloeVera (almost identical). Really simple, no training, "LoRA" like functionality. SD 1.5. IP Adapter models: 1. https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin -> custom_nodes/IPAdapter-ComfyUI/models . 2. https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors -> models/clip_vision . NB (2024). As IPAdapter-ComfyUI from 2023 is now deprecated, you should replace it with a currently supported version before use Video guide - https://youtu.be/HtmIC6fqsMQ | SD 1.5 |Instant LoRA 2 As above, but with ControlNet to guide the shape | SD 1.5 |Instant LoRA 3 As above, but with QR Code Monster ControlNet too :) | SD 1.5 |Instant LoRA 4 As above, but with basic upscaling | SD 1.5 |Instant LoRA 5 As above, but with more upscaling to 16k+ | SD 1.5 |Instant LoRA 6 As above, but different upscaling to 16k+ | SD 1.5 |Morphing AI videos of any length using AnimateDiff. SD 1.5. Includes IPAdapter & Upscaling. IP Adapter models: 1. https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin -> custom_nodes/IPAdapter-ComfyUI/models . 2. https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors -> models/clip_vision . Video guide - https://youtu.be/6A3a0QNPhIs | SD 1.5 |Morphing AI videos of any length using AnimateDiff. SD 1.5. Includes Upscaling. Like above, but without IPAdapter controls. | SD 1.5 |SDXL "Instant LoRA" - basic. Really simple, no training, "LoRA" like functionality. Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter Video - https://youtu.be/dGL02W4QatI | SDXL |SDXL "Instant LoRA" - with CLIP Vision Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter Also use "Revisions" CLIP vision - https://huggingface.co/stabilityai/control-lora | SDXL |SDXL "Instant LoRA" - with CLIP Vision & ControlNet Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter Also use "Revisions" CLIP vision - https://huggingface.co/stabilityai/control-lora | SDXL |AnimateDiff + QRCode (Vid2Vid) Use any high-contrast input video to create guided animations! Spirals away... | SD 1.5 |SD 1.5 Reposer (2 versions) - single face image to any pose. Get consistent faces! No "roop" or similar face-swapping nodes required = easy install! SD 1.5 ControlNet models: https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/tree/main IP Adapter models: 1. Face = https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus-face_sd15.bin 2. Vision = https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors NOTE Reposer2.png now uses the even more updated version of IPAdapter Reposer Plus Bypass Edition is deprecated, but still available for download if you want to update any nodes at home. 
Original Reposer Basic Video guide - https://youtu.be/SacK9tMVNUA Original Reposer Plus Video guide - https://youtu.be/ZcCfwTkYSz8 | SD 1.5 |SD 1.5 Video Styler! Combining IPAdapter with Video-to-video for strange styles and weird animations Uses https://github.com/cubiq/ComfyUI_IPAdapter_plus The pre-trained models are available on huggingface , download and place them in the ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models directory. For SD1.5 you need: ip-adapter_sd15.bin ip-adapter_sd15_light.bin ip-adapter-plus_sd15.bin ip-adapter-plus-face_sd15.bin Additionally you need the image encoder to be placed in the ComfyUI/models/clip_vision/ directory. They are the same models used by the other IPAdapter custom nodes ;) - symlinks are your friend! Video guide - https://youtu.be/kJp8JzA2aVU | SD 1.5 |SDXL version of Reposer using the SDXL "IPAdapter Plus Face" model Pick a face then add a body in any pose - no training! Works with photorealistic faces, anime faces, cartoon faces, etc | SDXL |SSD-1B Workflow - SDXL for 8GB VRAM cards! Model - https://huggingface.co/segmind/SSD-1B Video - https://youtu.be/F-bKndyQ7L8|SSD-1B |LCM LoRA vs Normal|1.5, SDXL, SSD-1B |IPAdapter Attention Masking Example Video https://youtu.be/riLmjBlywcg|1.5 |IPAdapter Attention Masking Example with extra toppings (LCM, Facefix) Video https://youtu.be/riLmjBlywcg|1.5 |Stable Video Diffusion example with a simple upscale and frame interpolation|SVD |SDXL Turbo - 1 step diffusion!|SDXL Turbo, SD2 Turbo |A very basic attempt at a "Comfy MagicAnimate". Needs more work :) Links: Magic Animate - https://github.com/magic-research/magic-animate Magic Animate (Windows) - https://github.com/sdbds/magic-animate-for-windows DreamingTulpa - https://twitter.com/dreamingtulpa/status/1730876691755450572 CocktailPeanut - https://twitter.com/cocktailpeanut/status/1732052909720797524 Google Colab - https://github.com/camenduru/MagicAnimate-colab Huggingface Space - https://huggingface.co/spaces/zcxu-eric/magicanimate Vid2DensePose - https://github.com/Flode-Labs/vid2densepose Model Downloads for the MagicAnimate Gradio App: mkdir -p magic-animate/pretrained_models cd magic-animate/pretrained_models git lfs clone https://huggingface.co/runwayml/stable-diffusion-v1-5 -b fp16 git lfs clone https://huggingface.co/stabilityai/sd-vae-ft-mse git lfs clone https://huggingface.co/zcxu-eric/MagicAnimate Video - https://youtu.be/td27SyA9M80| SD 1.5 |Steerable Motion - Image Batch with AnimateDiff Video guide - https://youtu.be/bH-56e3cR2g| SD 1.5 |Unsampler - Turn images into noise and back again, as modified by your prompts! Video guide - https://youtu.be/qW1I7in1WL0| SD 1.5, SDXL |Refacer - Change the style of any face! Video guide - https://youtu.be/r7Iz8Ps7R2s| SDXL Troubleshooting When troubleshooting (working to fix issues) - such as with your local custom node installs, it's best to do all of these steps until resolution.
* Need even more help or updated workflows? Drop me a DM on https://www.patreon.com/NerdyRodent and help support the channel! :)
* Make sure you've installed the drivers for your graphics card
* In ComfyUI the image IS the workflow.
* These workflows require ComfyUI to run, so you'll need to install that first. See https://youtu.be/2r3uM_b3zA8 for an install guide
* Install ComfyUI Manager next - https://github.com/ltdrdata/ComfyUI-Manager
* Use ComfyUI Manager to install missing custom nodes by clicking "Install Missing Custom Nodes"
* If ComfyUI Manager can't find a node automatically, use the search feature
* Be sure to keep ComfyUI updated regularly - including all custom nodes. Old versions may result in errors appearing. This is the most common issue, so update now!
* These are just workflows - no custom nodes here, so no code to go wrong :)
* Need a model or checkpoint? See the resources section!
* By default, models are saved in subdirectories under ComfyUI/models , though some custom nodes have their own models directory.
* Don't mix SDXL and SD1.5 models (unless stated, such as SDXL needing the SD 1.5 vision model) - chances are you'll get an error!
* Don't try to use SDXL models in workflows not designed for SDXL - chances are they won't work!
* Ensure your model files aren't corrupt - try a fresh download if a particular model gives errors
* Some workflows are large . Zoom out to see more of the canvas.
* Custom node still red after installing it? Remember to restart ComfyUI!
* Custom node still giving an error? Check the GitHub page for that custom node - maybe someone else has a similar issue open?
* Not sure where the GitHub page is for a custom node? You can click on it via ComfyUI Manager
* Check the output when ComfyUI starts up as issues can show up there
* Try updating custom nodes manually ( git pull )
* Over time, some custom nodes implement breaking changes so nodes may need to be replaced. Often this can be done with "fix node". Typically the custom node GitHub page has additional information.
* Sometimes custom nodes just break! Check the github page for the custom node causing any issues for more information and to raise issues
* Sometimes custom nodes change functionality, so check for updates. Changes include:
* Segment Anything - mask output inverted & now returns multiple images
* Dynamic Thresholding - output different
* The original IPAdapter ('IPAdapter-ComfyUI') is deprecated and has been moved to the legacy channel.
* March 2024 - the "new" IP Adapter node (IP Adapter Plus) implemented breaking changes which require the node to be re-created. The usual "fix node" won't work.
* The Microsoft Windows portable version of ComfyUI apparently has issues with various custom nodes, whereas normal installs are OK. Unknown error? Try a normal install!
* Need more help? See this Playlist with loads of ComfyUI guides Updating / Installing Custom Nodes Use ComfyUI Manager to install and update custom nodes with ease! Click "Install Missing Custom Nodes" to install any red nodes Use the "search" feature to find any nodes Be sure to keep ComfyUI updated regularly - including all custom nodes. Old versions may result in errors appearing. Custom Node List Screenshots of my installed custom nodes for reference. Not all nodes are used in workflows. Install custom nodes using ComfyUI manager See the Troubleshooting section if you have errors with your local ComfyUI install;ComfyUI related stuff and things;comfyui | nerdyrodent/AVeryComfyNerd |
nlmatics/llmsherpa;LLM Sherpa LLM Sherpa provides strategic APIs to accelerate large language model (LLM) use cases. What's New [!IMPORTANT]
llmsherpa back end service is now fully open sourced under Apache 2.0 Licence. See https://github.com/nlmatics/nlm-ingestor - You can now run your own servers using a docker image!
- Support for different file formats: DOCX, PPTX, HTML, TXT, XML
- OCR Support is built in
- Blocks now have co-ordinates - use the bbox property of blocks such as sections
- A new indent parser to better align all headings in a document to their corresponding level
- The free server and paid server are not updated with latest code and users are requested to spawn their own servers using instructions in nlm-ingestor LayoutPDFReader Most PDF to text parsers do not provide layout information. Often times, even the sentences are split with arbritrary CR/LFs making it very difficult to find paragraph boundaries. This poses various challenges in chunking and adding long running contextual information such as section header to the passages while indexing/vectorizing PDFs for LLM applications such as retrieval augmented generation (RAG). LayoutPDFReader solves this problem by parsing PDFs along with hierarchical layout information such as: Sections and subsections along with their levels. Paragraphs - combines lines. Links between sections and paragraphs. Tables along with the section the tables are found in. Lists and nested lists. Join content spread across pages. Removal of repeating headers and footers. Watermark removal. With LayoutPDFReader, developers can find optimal chunks of text to vectorize, and a solution for limited context window sizes of LLMs. You can experiment with the library directly in Google Colab here Here's a writeup explaining the problem and our approach. Here'a LlamaIndex blog explaining the need for smart chunking. API Reference : https://llmsherpa.readthedocs.io/ How to use with Google Gemini Pro How to use with Cohere Embed3 Important Notes The LayoutPDFReader is tested on a wide variety of PDFs. That being said, it is still challenging to get every PDF parsed correctly. OCR is currently not supported. Only PDFs with a text layer are supported. [!NOTE]
LLMSherpa uses a free and open api server. The server does not store your PDFs except for temporary storage during parsing. This server will be decommissioned soon.
Self-host your own private server using instructions at https://github.com/nlmatics/nlm-ingestor [!IMPORTANT]
The private server available at Microsoft Azure Marketplace will be decommissioned soon. Please move to your self-hosted instance using the instructions at https://github.com/nlmatics/nlm-ingestor . Installation bash
pip install llmsherpa Read a PDF file The first step in using the LayoutPDFReader is to provide a url or file path to it and get back a document object. ```python
from llmsherpa.readers import LayoutPDFReader

llmsherpa_api_url = "https://readers.llmsherpa.com/api/document/developer/parseDocument?renderFormat=all"
pdf_url = "https://arxiv.org/pdf/1910.13461.pdf" # also allowed is a file path e.g. /home/downloads/xyz.pdf
pdf_reader = LayoutPDFReader(llmsherpa_api_url)
doc = pdf_reader.read_pdf(pdf_url) ``` Install LlamaIndex In the following examples, we will use LlamaIndex for simplicity. Install the library if you haven't already. bash
pip install llama-index Setup OpenAI python
import openai
openai.api_key = #<Insert API Key> Vector search and Retrieval Augmented Generation with Smart Chunking LayoutPDFReader does smart chunking, keeping related text together based on document structure: All list items are kept together, including the paragraph that precedes the list. Items in a table are chunked together. Contextual information from section headers and nested section headers is included. The following code creates a LlamaIndex query engine from LayoutPDFReader document chunks: ```python
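# (Hedged note) Each chunk returned by doc.chunks() carries its surrounding
# section context via to_context_text(), which is what gets embedded below.
# To inspect the chunks before indexing you can simply print them:
#
#   for chunk in doc.chunks():
#       print(chunk.to_context_text())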
from llama_index.core import Document
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex([])
for chunk in doc.chunks():
    index.insert(Document(text=chunk.to_context_text(), extra_info={}))

query_engine = index.as_query_engine()
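# (Hedged sketch) To avoid re-parsing and re-embedding on every run, the index
# can be persisted with the standard LlamaIndex storage context; the directory
# name here is just a placeholder:
# index.storage_context.persist(persist_dir="./llmsherpa_index")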
``` Let's run one query: python
response = query_engine.query("list all the tasks that work with bart")
print(response) We get the following response: BART works well for text generation, comprehension tasks, abstractive dialogue, question answering, and summarization tasks. Let's try another query that needs answer from a table: python
response = query_engine.query("what is the bart performance score on squad")
print(response) Here's the response we get: The BART performance score on SQuAD is 88.8 for EM and 94.6 for F1. Summarize a Section using prompts LayoutPDFReader offers powerful ways to pick sections and subsections from a large document and use LLMs to extract insights from a section. The following code looks for the Fine-tuning section of the document: ```python
from IPython.core.display import display, HTML
selected_section = None

# find a section in the document by title
for section in doc.sections():
    if section.title == '3 Fine-tuning BART':
        selected_section = section
        break

# use include_children=True and recurse=True to fully expand the section;
# include_children only returns one sublevel of children, whereas recurse goes through all the descendants
HTML(section.to_html(include_children=True, recurse=True))
``` Running the above code yields the following HTML output: 3 Fine-tuning BART The representations produced by BART can be used in several ways for downstream applications. 3.1 Sequence Classification Tasks For sequence classification tasks, the same input is fed into the encoder and decoder, and the final hidden state of the final decoder token is fed into new multi-class linear classifier.\nThis approach is related to the CLS token in BERT; however we add the additional token to the end so that representation for the token in the decoder can attend to decoder states from the complete input (Figure 3a). 3.2 Token Classification Tasks For token classification tasks, such as answer endpoint classification for SQuAD, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word.\nThis representation is used to classify the token. 3.3 Sequence Generation Tasks Because BART has an autoregressive decoder, it can be directly fine tuned for sequence generation tasks such as abstractive question answering and summarization.\nIn both of these tasks, information is copied from the input but manipulated, which is closely related to the denoising pre-training objective.\nHere, the encoder input is the input sequence, and the decoder generates outputs autoregressively. 3.4 Machine Translation We also explore using BART to improve machine translation decoders for translating into English.\nPrevious work Edunov et al.\n(2019) has shown that models can be improved by incorporating pre-trained encoders, but gains from using pre-trained language models in decoders have been limited.\nWe show that it is possible to use the entire BART model (both encoder and decoder) as a single pretrained decoder for machine translation, by adding a new set of encoder parameters that are learned from bitext (see Figure 3b). More precisely, we replace BART’s encoder embedding layer with a new randomly initialized encoder.\nThe model is trained end-to-end, which trains the new encoder to map foreign words into an input that BART can de-noise to English.\nThe new encoder can use a separate vocabulary from the original BART model. We train the source encoder in two steps, in both cases backpropagating the cross-entropy loss from the output of the BART model.\nIn the first step, we freeze most of BART parameters and only update the randomly initialized source encoder, the BART positional embeddings, and the self-attention input projection matrix of BART’s encoder first layer.\nIn the second step, we train all model parameters for a small number of iterations. Now, let's create a custom summary of this text using a prompt: python
from llama_index.llms import OpenAI
context = selected_section.to_html(include_children=True, recurse=True)
question = "list all the tasks discussed and one line about each task"
resp = OpenAI().complete(f"read this text and answer question: {question}:\n{context}")
print(resp.text) The above code results in following output: ```
Tasks discussed in the text: Sequence Classification Tasks: The same input is fed into the encoder and decoder, and the final hidden state of the final decoder token is used for multi-class linear classification. Token Classification Tasks: The complete document is fed into the encoder and decoder, and the top hidden state of the decoder is used as a representation for each word for token classification. Sequence Generation Tasks: BART can be fine-tuned for tasks like abstractive question answering and summarization, where the encoder input is the input sequence and the decoder generates outputs autoregressively. Machine Translation: BART can be used to improve machine translation decoders by incorporating pre-trained encoders and using the entire BART model as a single pretrained decoder. The new encoder parameters are learned from bitext.
``` Analyze a Table using prompts With LayoutPDFReader, you can iterate through all the tables in a document and use the power of LLMs to analyze a table.
Let's look at the 6th table in this document. If you are using a notebook, you can display the table as follows: python
from IPython.core.display import display, HTML
HTML(doc.tables()[5].to_html()) The output table structure looks like this: | | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 | MNLI m/mm | SST Acc | QQP Acc | QNLI Acc | STS-B Acc | RTE Acc | MRPC Acc | CoLA Mcc
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
| BERT | 84.1/90.9 | 79.0/81.8 | 86.6/- | 93.2 | 91.3 | 92.3 | 90.0 | 70.4 | 88.0 | 60.6
| UniLM | -/- | 80.5/83.4 | 87.0/85.9 | 94.5 | - | 92.7 | - | 70.9 | - | 61.1
| XLNet | 89.0/94.5 | 86.1/88.8 | 89.8/- | 95.6 | 91.8 | 93.9 | 91.8 | 83.8 | 89.2 | 63.6
| RoBERTa | 88.9/94.6 | 86.5/89.4 | 90.2/90.2 | 96.4 | 92.2 | 94.7 | 92.4 | 86.6 | 90.9 | 68.0
| BART | 88.8/94.6 | 86.1/89.2 | 89.9/90.1 | 96.6 | 92.5 | 94.9 | 91.2 | 87.0 | 90.4 | 62.8 Now let's ask a question to analyze this table: python
from llama_index.llms import OpenAI
context = doc.tables()[5].to_html()
resp = OpenAI().complete(f"read this table and answer question: which model has the best performance on squad 2.0:\n{context}")
print(resp.text) The above question will result in the following output: The model with the best performance on SQuAD 2.0 is RoBERTa, with an EM/F1 score of 86.5/89.4. That's it! LayoutPDFReader also supports tables with nested headers and header rows. Here's an example with nested headers: from IPython.core.display import display, HTML
HTML(doc.tables()[6].to_html()) | | CNN/DailyMail | | | XSum | | -
| --- | --- | --- | --- | --- | --- | ---
| | R1 | R2 | RL | R1 | R2 | RL
| --- | --- | --- | --- | --- | --- | ---
| Lead-3 | 40.42 | 17.62 | 36.67 | 16.30 | 1.60 | 11.95
| PTGEN (See et al., 2017) | 36.44 | 15.66 | 33.42 | 29.70 | 9.21 | 23.24
| PTGEN+COV (See et al., 2017) | 39.53 | 17.28 | 36.38 | 28.10 | 8.02 | 21.72
| UniLM | 43.33 | 20.21 | 40.51 | - | - | -
| BERTSUMABS (Liu & Lapata, 2019) | 41.72 | 19.39 | 38.76 | 38.76 | 16.33 | 31.15
| BERTSUMEXTABS (Liu & Lapata, 2019) | 42.13 | 19.60 | 39.18 | 38.81 | 16.50 | 31.27
| BART | 44.16 | 21.28 | 40.90 | 45.14 | 22.27 | 37.25 Now let's ask an interesting question: python
from llama_index.llms import OpenAI
context = doc.tables()[6].to_html()
question = "tell me about R1 of bart for different datasets"
resp = OpenAI().complete(f"read this table and answer question: {question}:\n{context}")
print(resp.text) And we get the following answer: ```
R1 of BART for different datasets: For the CNN/DailyMail dataset, the R1 score of BART is 44.16. For the XSum dataset, the R1 score of BART is 45.14.
``` Get the Raw JSON To get the complete json returned by llmsherpa service and process it differently, simply get the json attribute python
doc.json;Developer APIs to Accelerate LLM Projects;[] | nlmatics/llmsherpa |
roboflow/inference;[notebooks](https://github.com/roboflow/notebooks) | [supervision](https://github.com/roboflow/supervision) | [autodistill](https://github.com/autodistill/autodistill) | [maestro](https://github.com/roboflow/multimodal-maestro) [![version](https://badge.fury.io/py/inference.svg)](https://badge.fury.io/py/inference)
[![downloads](https://img.shields.io/pypi/dm/inference)](https://pypistats.org/packages/inference)
![docker pulls](https://img.shields.io/docker/pulls/roboflow/roboflow-inference-server-cpu)
[![license](https://img.shields.io/pypi/l/inference)](https://github.com/roboflow/inference/blob/main/LICENSE.md)
[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/workflows)
[![discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk) 👋 hello Roboflow Inference is an open-source platform designed to simplify the deployment of computer vision models. It enables developers to perform object detection, classification, and instance segmentation and utilize foundation models like CLIP , Segment Anything , and YOLO-World through a Python-native package, a self-hosted inference server, or a fully managed API . Explore our enterprise options for advanced features like server deployment, device management, active learning, and commercial licenses for YOLOv5 and YOLOv8. 💻 install Inference package requires Python>=3.8,<=3.11 . Click here to learn more about running Inference inside Docker. bash
pip install inference 👉 additional considerations - hardware
Enhance model performance in GPU-accelerated environments by installing CUDA-compatible dependencies.
```bash
pip install inference-gpu
```
- models
The `inference` and `inference-gpu` packages install only the minimal shared dependencies. Install model-specific dependencies to ensure code compatibility and license compliance. Learn more about the [models](https://inference.roboflow.com/#extras) supported by Inference.
```bash
pip install inference[yolo-world]
``` 🔥 quickstart Use Inference SDK to run models locally with just a few lines of code. The image input can be a URL, a numpy array (BGR), or a PIL image. ```python
from inference import get_model

model = get_model(model_id="yolov8n-640")
results = model.infer("https://media.roboflow.com/inference/people-walking.jpg")
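# (Hedged sketch) results holds one response per input image; the exact field
# names may differ slightly across inference versions:
# for prediction in results[0].predictions:
#     print(prediction.class_name, prediction.confidence)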
``` 👉 roboflow models Set up your `ROBOFLOW_API_KEY` to access thousands of fine-tuned models shared by the [Roboflow Universe](https://universe.roboflow.com/) community and your custom model. Navigate to 🔑 keys section to learn more.
```python
from inference import get_model
model = get_model(model_id="soccer-players-5fuqs/1")
results = model.infer(
image="https://media.roboflow.com/inference/soccer.jpg",
confidence=0.5,
iou_threshold=0.5
)
``` 👉 foundational models - [CLIP Embeddings](https://inference.roboflow.com/foundation/clip) - generate text and image embeddings that you can use for zero-shot classification or assessing image similarity.
```python
from inference.models import Clip
model = Clip()
embeddings_text = model.embed_text("a football match")
embeddings_image = model.embed_image("https://media.roboflow.com/inference/soccer.jpg")
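# (Hedged sketch) both calls return plain embedding vectors, so zero-shot
# similarity can be computed with e.g. cosine similarity; the reshape assumes
# a single vector per call, which may vary by version:
# import numpy as np
# t = np.asarray(embeddings_text).reshape(-1)
# v = np.asarray(embeddings_image).reshape(-1)
# similarity = float(t @ v / (np.linalg.norm(t) * np.linalg.norm(v)))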
```
- [Segment Anything](https://inference.roboflow.com/foundation/sam) - segment all objects visible in the image or only those associated with selected points or boxes.
```python
from inference.models import SegmentAnything
model = SegmentAnything()
result = model.segment_image("https://media.roboflow.com/inference/soccer.jpg")
```
- [YOLO-World](https://inference.roboflow.com/foundation/yolo_world) - an almost real-time zero-shot detector that enables the detection of any objects without any training.
```python
from inference.models import YOLOWorld
model = YOLOWorld(model_id="yolo_world/l")
result = model.infer(
image="https://media.roboflow.com/inference/dog.jpeg",
text=["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
confidence=0.03
)
``` 📟 inference server deploy server The inference server is distributed via Docker. Behind the scenes, inference will download and run the image that is appropriate for your hardware. Here , you can learn more about the supported images. bash
inference server start run client Consume inference server predictions using the HTTP client available in the Inference SDK. ```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<ROBOFLOW_API_KEY>",  # placeholder; see the 🔑 keys section
)
with client.use_model(model_id="soccer-players-5fuqs/1"):
    predictions = client.infer("https://media.roboflow.com/inference/soccer.jpg")
``` If you're using the hosted API, change the local API URL to https://detect.roboflow.com . Accessing the hosted inference server and/or using any of the fine-tuned models require a ROBOFLOW_API_KEY . For further information, visit the 🔑 keys section. 🎥 inference pipeline The inference pipeline is an efficient method for processing static video files and streams. Select a model, define the video source, and set a callback action. You can choose from predefined callbacks that allow you to display results on the screen or save them to a file . ```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",
    video_reference="https://media.roboflow.com/inference/people-walking.mp4",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
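# (Hedged sketch) on_prediction accepts any callable that receives the
# prediction payload and the video frame, so a custom sink can replace
# render_boxes, e.g.:
# def my_sink(predictions, video_frame):
#     print(predictions)
# ...and then pass on_prediction=my_sink to InferencePipeline.init(...)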
``` 🔑 keys Inference enables the deployment of a wide range of pre-trained and foundational models without an API key. To access thousands of fine-tuned models shared by the Roboflow Universe community, configure your API key. bash
export ROBOFLOW_API_KEY=<YOUR_API_KEY> 📚 documentation Visit our documentation to explore comprehensive guides, detailed API references, and a wide array of tutorials designed to help you harness the full potential of the Inference package. © license See the "Self Hosting and Edge Deployment" section of the Roboflow Licensing documentation for information on how Roboflow Inference is licensed. 🏆 contribution We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏;A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.;computer-vision,inference-api,inference-server,vit,yolact,yolov5,yolov7,yolov8,jetson,tensorrt | roboflow/inference |
trigram-mrp/fractureiser;Translations to other languages: These were made at varying times in this document's history and may be outdated — especially the current status in README.md. 简体中文版本见此 Polska wersja Читать на русском языке 한국어는 이곳으로 Many others that are unfinished can be found in Pull Requests What? fractureiser is a virus found in several Minecraft projects uploaded to CurseForge and BukkitDev. The malware is embedded in multiple mods, some of which were added to highly popular modpacks. The malware is only known to target Windows and Linux. If left unchecked, fractureiser can be INCREDIBLY DANGEROUS to your machine. Please read through this document for the info you need to keep yourself safe. We've dubbed this malware fractureiser because that's the name of the CurseForge account that uploaded the most notable malicious files. Current Investigation Status The fractureiser event has ended — no follow-up Stage0s were ever discovered and no further evidence of activity has been discovered in the past 3 months.
A third C&C was never stood up to our knowledge. A copycat malware is still possible — and likely inevitable — but fractureiser is dead. Systems that are already infected are still cause for concern , and the below user documentation is still relevant. Follow-Up Meeting On 2023-06-08 the fractureiser Mitigation Team held a meeting with notable members of the community to discuss preventive measures and solutions for future problems of this scale.
See this page for the agenda and minutes of the event. BlanketCon Panel emilyploszaj and jaskarth, core members of the team, held a panel at BlanketCon 23 about the fractureiser mitigation effort. You can find a recording of the panel by quat on YouTube . What YOU need to know Modded Players CLICK HERE If you're simply a mod player and not a developer, the above link is all you need. It contains surface level information of the malware's effects, steps to check if you have it and how to remove it, and an FAQ. Anyone who wishes to dig deeper may also look at
* Event Timeline * Technical Breakdown I have never used any Minecraft mods You are not infected. Additional Info We've stopped receiving new unique samples, so the sample submission inbox is closed. If you would like to get in contact with the team, please shoot an email to mrp@trigram.org . If you copy portions of this document elsewhere, please put a prominent link back to this GitHub Repository somewhere near the top so that people can read the latest updates and get in contact. The only official public channel that this team ever used for coordination was #cfmalware on EsperNet. We have no affiliation with any Discord guilds. Do not ask for samples. If you have experience and credentials, that's great, but we have no way to verify this without using up tons of our team's limited time. Sharing malware samples is dangerous, even among people who know what they're doing. - the fractureiser Mitigation Team;Information about the fractureiser malware (June 2023);[] | trigram-mrp/fractureiser |
siglens/siglens;[![Build Status](https://github.com/siglens/siglens/workflows/siglens-docker-release/badge.svg)](https://github.com/siglens/siglens/actions/workflows/publish-prod-images.yml)
[![Go Report Card](https://goreportcard.com/badge/github.com/siglens/siglens)](https://goreportcard.com/report/github.com/siglens/siglens)
[![GoDoc](https://godoc.org/github.com/siglens/siglens?status.svg)](https://pkg.go.dev/github.com/siglens/siglens)
[![codecov](https://codecov.io/gh/siglens/siglens/graph/badge.svg?token=MH8S9B0EIK)](https://codecov.io/gh/siglens/siglens) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40SigLensHQ)](https://twitter.com/SigLensHQ)
[![RSS](https://img.shields.io/badge/RSS-Subscribe-orange?style=social&logo=rss)](https://www.siglens.com/blog/blog-rss.xml)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=social&logo=linkedin)](https://www.linkedin.com/company/siglens-com) English | 简体中文 Open Source Observability that is 💥💥 100x 💥💥 more efficient than Splunk Single binary for Logs 🎯, Metrics 🎯 and Traces 🎯. Cut down your Splunk bill by ⚡ ⚡ 90% ⚡ ⚡ Why SigLens: Our experience servicing 10,000+ engineers with Observability tools taught us a few things: Developers have to jump through different tools for logs, metrics, traces Splunk, DataDog, NewRelic are very expensive 💸 💸 💸 ElasticSearch takes too many machines, cluster maintenance is hard 👩💻👩💻 Grafana Loki has slow query performance 🐌🐌 Armed with decades of experience in monitoring domain, we set out to build a observability DB from the ground up, uniquely suited for logs, metrics and traces with zero external dependencies. A single binary that you can run on your laptop and process 8 TB/day . Setup Installation Git | Docker | Helm Documentation Docs Differentiators SigLens v/s Splunk,Elastic,Loki Check out this blog where SigLens ingested data at 1 PB/day rate for 24 hours on a mere 32 EC2 instances compared to 3000 EC2 instances required for Splunk, Elastic, Grafana Loki SigLens v/s Elasticsearch Check out this blog where SigLens is 1025x Faster than Elasticsearch 🚀🚀 SigLens v/s ClickHouse Check out this blog where SigLens is 54x Faster than ClickHouse 🚀🚀 Features: Multiple Ingestion formats: Open Telemetry, Elastic, Splunk HEC, Loki Multiple Query Languages: Splunk SPL, SQL and Loki LogQL Simple architecture, easy to get started. Join our Community Have questions, ask them in our community Slack 👋 Contributing Please read CONTRIBUTING.md to get started with making contributions to SigLens. How-Tos Searching Logs Tracing Creating Dashboards Creating Alerts Live Tail Minion Searches Code of Conduct Please review our code of conduct before contributing. Thanks to all contributors for their efforts;100x Efficient Log Management than Splunk :rocket: Reduce your observability cost by 90%;distributed-tracing,go,hacktoberfest,log-management,log-search,logging,logs,monitoring,newrelic,observability | siglens/siglens |
lem0nSec/ShellGhost;ShellGhost A memory-based evasion technique which makes shellcode invisible from process start to end. Motivation I wanted to share this shellcode self-injection POC to showcase some AV/EDR evasion concepts that may turn useful for Red Teaming. Just a few weeks ago I came up with a custom in-memory evasion technique which I named ShellGhost. This technique stems from the need for having a code that executes an 'invisible' shellcode from process start to finish . Handling the Thread Execution Flow ShellGhost relies on Vectored Exception Handling in combination with software breakpoints to cyclically stop thread execution, replace the executed breakpoint with a RC4-encrypted shellcode instruction, decrypt the instruction and resume execution after restoring memory protection to RX. When the subsequent EXCEPTION_BREAKPOINT is raised, the exception handler replaces the previous shellcode instruction with a new breakpoint so that the allocation will never disclose the complete shellcode in an unencrypted state. This happens inside a private memory page which is initially marked as READ/WRITE.
Having a RW PRV allocation will not be considered an 'Indicator of Compromise' by memory scanners such as PE-Sieve and Moneta. When the allocation becomes RX and the page is scanned, nothing but breakpoints will be found. This happens while the shellcode is actually under execution. The following picture shows that a reverse shell is running, but no IOC is found by Moneta (other than the binary being unsigned). Trying to scan the process with Pe-Sieve has an even better outcome: Shellcode Mapping Shellcode Mapping is the core functionality of ShellGhost. This tactic enables the thread to intermittently execute instructions while never exposing the entire shellcode in memory. This is possible because the position of each single shellcode instruction that the thread executes corresponds to the position of a certain breakpoint inside the allocated memory page. ShellGhost resolves this position by calculating the Relative Virtual Address (RVA) from the thread RIP to the base address of the allocated memory page and adds it to the base address of the encrypted shellcode / encrypted instructions. The number of breakpoints that will be replaced is not always the same, but it varies depending on the number of opcodes that each instruction needs to be correctly generated and interpreted (QUOTA). So for example the instruction 'POP RBP' is equal to '5D', which means only one breakpoint will be replaced. By contrast, the instruction 'JMP RAX' requires opcodes 'FF E0', so two breakpoints will be replaced. For this reason I created the following C data structure. ```c
typedef struct CRYPT_BYTES_QUOTA { DWORD RVA; // offset to encrypted instruction
DWORD quota; // number of opcodes that generate the instruction } CRYPT_BYTES_QUOTA, * PCRYPT_BYTES_QUOTA;
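// Hedged illustration (index and values below are made up): one entry is
// populated per shellcode instruction, e.g. a two-opcode 'JMP RAX' ('FF E0')
// located 0x10 bytes into the encrypted shellcode would be recorded as:
//   instruction[i].RVA   = 0x10; // offset to the encrypted instruction
//   instruction[i].quota = 2;    // number of opcodes forming it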
``` Breakpoints are not immediately replaced with their instruction counterparts. This is because instructions need to undergo a decryption routine before being executed. This is where the DWORD quota comes into play. ShellGhost relies on the now popular 'SystemFunction032' to perform RC4 decryption. Unlike XOR, RC4 is not a single-byte encryption scheme. This means that the shellcode cannot be encrypted and decrypted all at once. This is also another reason why each instruction is treated separately. After the breakpoints are replaced, the buffer length that SystemFunction032 needs will be equal to the 'instruction quota', which again represents the number of opcodes the specific instruction is composed of. So for example, consider the following snippet. ```c CRYPT_BYTES_QUOTA instruction[200];
instruction[5].quota = 2; // instruction number 5 is made up of two opcodes
USTRING buf = { 0 }; // will contain the buffer to be decrypted and its length
USTRING key = { 0 }; // will contain the RC4 key and length buf.Length = 2 // buffer length, or length of the instruction to be decrypted ``` We know that shellcode instruction number 5 is composed of 2 opcodes, so a buffer length of 2 will be passed to SystemFunction032. This is important because trying to decrypt the entire shellcode with a single call to SystemFunction032 will corrupt it entirely. How is Shellcode Mapping performed? The shellcode needs to be mapped with ShellGhost_mapping.py before compilation. The script extracts each single instruction and treats it as a small and independent shellcode. Instructions are encrypted one by one and printed out in C format all together as unsigned char. The result can be hardcoded inside the C code. Below is an example of what an encrypted MSF shellcode instructions for calc.exe looks like. This shellcode has 98 instructions, so 98 CRYPT_BYTES_QUOTA structs are declared. When the code executes, these structs have to be populated with the proper instructions RVAs and QUOTAs. The '-1' parameter instructs the mapping script to print out the piece of code that does this. Adjusting Winapi Parameters Metasploit x64 shellcodes tipically have winapi string parameters stored between instructions. So to say, a MSF x64 shellcode that calls Winexec does not push a series of bytes with a nullbyte at the end to have the first parameter string on the stack. Rather, the RCX register (first parameter) is a pointer inside the shellcode itself just like the following picture. This means that the breakpoints whose position relates to the string will never be resolved, because the RIP will never touch that position. As a matter of fact, this code resolves actual shellcode instructions the RIP goes through, not parameters that will never be executed like instructions. To fix this, I noticed that MSF shellcodes always store a pointer to the winapi they are calling inside the RAX register, then make a jump to the register itself. So when ShellGhost VEH detects that the resolved breakpoint is 'JMP RAX' and the RCX register contains a pointer to a position inside the shellcode, it attempts to also resolve what pointed by RCX. Subsequently, execution is not returned to the allocated memory. Rather, RAX (winapi address) is copied into RIP and thread execution is resumed from the winapi, thus overriding the 'JMP RAX' and keeping the allocated memory RW. This is needed for reverse shells calling WaitForSingleObject, which would cause the thread to sleep after the 'JMP RAX' while leaving memory RX for as long as the shell remains alive. The following code snippet contains the two conditions that has to be met in order for ShellGhost to adjust the RCX register when it contains a winapi parameter string and allow the MSF shellcode to correctly issue the function call (WinExec in the example here). ```c if (*(PWORD)exceptionData->ContextRecord->Rip == 0xe0ff) // if RIP is 'JMP RAX' if ((contextRecord->Rcx >= (DWORD_PTR)allocation_base) && (contextRecord->Rcx <= ((DWORD_PTR)allocation_base + sizeof(sh)))) // if RCX is inside the allocation ``` RDX, R8 and R9 (second, third, and fourth parameters) are not covered yet. Differences and Similarities with other Techniques ShellcodeFluctuation is a very similar in-memory evasion concept. Just like it, the allocated memory here 'fluctuates' from RW to RX. 
In contrast, ShellGhost introduces the following improvements: RC4 encryption plus 'Shellcode Mapping' rather than single-byte XOR No need to hook functions Support for Metasploit shellcodes ShellGhost is far from being a perfect technique though. It still suffers from the biggest downside all these techniques have, namely the need to have private executable memory at some point during execution . More advanced techniques like Foliage already found a way around this. In addition, a memory allocation full of software breakpoints can be detected by a YARA rule. The following picture shows Moneta correctly detecting an IOC for the RX PRV allocation. When it comes to evading an EDR solution, memory scanning is just part of a bigger picture. The complete absence of IOCs does not necessarily mean that a binary using this technique will prove effective against a given EDR. As far as I can tell, I experienced situations when the solution does not even allow you to launch the binary the way you're doing it. The other side of the medal is that IOCs are not always precise indicators, and some of them may turn out to be false positives. With that being said, this is just a raw technique and an inspiration which I hope the reader appreciates. The Red Teamer knows that just like the components of an EDR, in-memory evasion is only one component of the engine. Notes Compilation requires disabling incremental linking. This VS project has all compiler/linker options already set.;A memory-based evasion technique which makes shellcode invisible from process start to end.;[] | lem0nSec/ShellGhost |
vuejs/devtools-next;Vue DevTools Next Unleash Vue Developer Experience Getting Started Please follow the documentation at devtools-next.vuejs.org . Sponsors Contribution Please make sure to read the Contributing Guide before making a pull request. Thank you to all the people who already contributed to Vue DevTools! License MIT;The next iteration of Vue DevTools;[] | vuejs/devtools-next |
argilla-io/distilabel;Synthesize data for AI and add feedback on the fly! Distilabel is the framework for synthetic data and AI feedback for AI engineers that require high-quality outputs, full data ownership, and overall efficiency . If you just want to get started, we recommend you check the documentation . Curious, and want to know more? Keep reading! Why use Distilabel? Whether you are working on a predictive model that computes semantic similarity or the next generative model that is going to beat the LLM benchmarks. Our framework ensures that the hard data work pays off . Distilabel is the missing piece that helps you synthesize data and provide AI feedback . Improve your AI output quality through data quality Compute is expensive and output quality is important. We help you focus on data quality , which tackles the root cause of both of these problems at once. Distilabel helps you to synthesize and judge data to let you spend your valuable time on achieveing and keeping high-quality standards for your data . Take control of your data and models Ownership of data for fine-tuning your own LLMs is not easy but Distilabel can help you to get started. We integrate AI feedback from any LLM provider out there using one unified API. Improve efficiency by quickly iterating on the right research and LLMs Synthesize and judge data with latest research papers while ensuring flexibility, scalability and fault tolerance . So you can focus on improving your data and training your models. Community We are an open-source community-driven project and we love to hear from you. Here are some ways to get involved: Community Meetup : listen in or present during one of our bi-weekly events. Slack : get direct support from the community. Roadmap : plans change but we love to discuss those with our community so feel encouraged to participate. What do people build with Distilabel? Distilabel is a tool that can be used to synthesize data and provide AI feedback . Our community uses Distilabel to create amazing datasets and models , and we love contributions to open-source ourselves too. The 1M OpenHermesPreference is a dataset of ~1 million AI preferences derived from teknium/OpenHermes-2.5. It shows how we can use Distilabel to synthesize data on an immense scale . Our distilabeled Intel Orca DPO dataset and the improved OpenHermes model ,, show how we improve model performance by filtering out 50% of the original dataset through AI feedback . The haiku DPO data outlines how anyone can create a dataset for a specific task and the latest research papers to improve the quality of the dataset. Installation sh
pip install distilabel --upgrade Requires Python 3.8+ In addition, the following extras are available: anthropic : for using models available in Anthropic API via the AnthropicLLM integration. cohere : for using models available in Cohere via the CohereLLM integration. argilla : for exporting the generated datasets to Argilla . groq : for using models available in Groq using groq Python client via the GroqLLM integration. hf-inference-endpoints : for using the Hugging Face Inference Endpoints via the InferenceEndpointsLLM integration. hf-transformers : for using models available in transformers package via the TransformersLLM integration. litellm : for using LiteLLM to call any LLM using OpenAI format via the LiteLLM integration. llama-cpp : for using llama-cpp-python Python bindings for llama.cpp via the LlamaCppLLM integration. mistralai : for using models available in Mistral AI API via the MistralAILLM integration. ollama : for using Ollama and their available models via OllamaLLM integration. openai : for using OpenAI API models via the OpenAILLM integration, or the rest of the integrations based on OpenAI and relying on its client as AnyscaleLLM , AzureOpenAILLM , and TogetherLLM . vertexai : for using Google Vertex AI proprietary models via the VertexAILLM integration. vllm : for using vllm serving engine via the vLLM integration. Example To run the following example you must install distilabel with both openai extra: sh
pip install "distilabel[openai]" --upgrade Then run: ```python
from distilabel.llms import OpenAILLM
from distilabel.pipeline import Pipeline
from distilabel.steps import LoadHubDataset
from distilabel.steps.tasks import TextGeneration

with Pipeline(
    name="simple-text-generation-pipeline",
    description="A simple text generation pipeline",
) as pipeline:
    load_dataset = LoadHubDataset(output_mappings={"prompt": "instruction"})

    generate_with_openai = TextGeneration(llm=OpenAILLM(model="gpt-3.5-turbo"))

    load_dataset >> generate_with_openai

if __name__ == "__main__":
    distiset = pipeline.run(
        parameters={
            load_dataset.name: {
                "repo_id": "distilabel-internal-testing/instruction-dataset-mini",
                "split": "test",
            },
            generate_with_openai.name: {
                "llm": {
                    "generation_kwargs": {
                        "temperature": 0.7,
                        "max_new_tokens": 512,
                    }
                }
            },
        },
    )
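    # (Hedged sketch) the resulting Distiset can then be pushed to the Hugging
    # Face Hub; the repo id below is only a placeholder:
    # distiset.push_to_hub("my-org/my-synthetic-dataset")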
``` Badges If you build something cool with distilabel consider adding one of these badges to your dataset or model card. [<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>](https://github.com/argilla-io/distilabel) [<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-dark.png" alt="Built with Distilabel" width="200" height="32"/>](https://github.com/argilla-io/distilabel) Contribute To directly contribute with distilabel , check our good first issues or open a new one . Citation bibtex
@misc{distilabel-argilla-2024,
author = {Álvaro Bartolomé Del Canto and Gabriel Martín Blázquez and Agustín Piqueres Lajarín and Daniel Vila Suero},
title = {Distilabel: An AI Feedback (AIF) framework for building datasets with and for LLMs},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/argilla-io/distilabel}}
};⚗️ distilabel is a framework for synthetic data and AI feedback for AI engineers that require high-quality outputs, full data ownership, and overall efficiency.;ai,huggingface,llms,openai,python,rlaif,rlhf,synthetic-data,synthetic-dataset-generation | argilla-io/distilabel |
shubham-goel/4D-Humans;4DHumans: Reconstructing and Tracking Humans with Transformers Code repository for the paper: Humans in 4D: Reconstructing and Tracking Humans with Transformers Shubham Goel , Georgios Pavlakos , Jathushan Rajasegaran , Angjoo Kanazawa * , Jitendra Malik * Installation and Setup First, clone the repo. Then, we recommend creating a clean conda environment, installing all dependencies, and finally activating the environment, as follows: bash
git clone https://github.com/shubham-goel/4D-Humans.git
cd 4D-Humans
conda env create -f environment.yml
conda activate 4D-humans If conda is too slow, you can use pip: bash
conda create --name 4D-humans python=3.10
conda activate 4D-humans
pip install torch
pip install -e .[all] All checkpoints and data will automatically be downloaded to $HOME/.cache/4DHumans the first time you run the demo code. Besides these files, you also need to download the SMPL model. You will need the neutral model for training and running the demo code. Please go to the corresponding website and register to get access to the downloads section. Download the model and place basicModel_neutral_lbs_10_207_0_v1.0.0.pkl in ./data/ . Run demo on images The following command will run ViTDet and HMR2.0 on all images in the specified --img_folder , and save renderings of the reconstructions in --out_folder . --batch_size batches the images together for faster processing. The --side_view flags additionally renders the side view of the reconstructed mesh, --full_frame renders all people together in front view, --save_mesh saves meshes as .obj s. bash
python demo.py \
--img_folder example_data/images \
--out_folder demo_out \
--batch_size=48 --side_view --save_mesh --full_frame Run tracking demo on videos Our tracker builds on PHALP, please install that first: bash
pip install git+https://github.com/brjathu/PHALP.git Now, run track.py to reconstruct and track humans in any video. Input video source may be a video file, a folder of frames, or a youtube link:
```bash Run on video file python track.py video.source="example_data/videos/gymnasts.mp4" Run on extracted frames python track.py video.source="/path/to/frames_folder/" Run on a youtube link (depends on pytube working properly) python track.py video.source=\'"https://www.youtube.com/watch?v=xEH_5T9jMVU"\' ``
The output directory ( ./outputs by default) will contain a video rendering of the tracklets and a .pkl` file containing the tracklets with 3D pose and shape. Please see the PHALP repository for details. Training Download the training data to ./hmr2_training_data/ , then start training using the following command: bash fetch_training_data.sh
python train.py exp_name=hmr2 data=mix_all experiment=hmr_vit_transformer trainer=gpu launcher=local Checkpoints and logs will be saved to ./logs/ . We trained on 8 A100 GPUs for 7 days using PyTorch 1.13.1 and PyTorch-Lightning 1.8.1 with CUDA 11.6 on a Linux system. You may adjust batch size and number of GPUs per your convenience. Evaluation Download the evaluation metadata to ./hmr2_evaluation_data/ . Additionally, download the Human3.6M, 3DPW, LSP-Extended, COCO, and PoseTrack dataset images and update the corresponding paths in hmr2/configs/datasets_eval.yaml . Run evaluation on multiple datasets as follows, results are stored in results/eval_regression.csv . bash
python eval.py --dataset 'H36M-VAL-P2,3DPW-TEST,LSP-EXTENDED,POSETRACK-VAL,COCO-VAL' By default, our code uses the released checkpoint (mentioned as HMR2.0b in the paper). To use the HMR2.0a checkpoint, you may download and untar from here Preprocess code To preprocess LSP Extended and Posetrack into metadata zip files for evaluation, see hmr2/datasets/preprocess . Training data preprocessing coming soon. Acknowledgements Parts of the code are taken or adapted from the following repos:
- ProHMR - SPIN - SMPLify-X - HMR - ViTPose - Detectron2 Additionally, we thank StabilityAI for a generous compute grant that enabled this work. Citing If you find this code useful for your research, please consider citing the following paper: bibtex
@inproceedings{goel2023humans,
title={Humans in 4{D}: Reconstructing and Tracking Humans with Transformers},
author={Goel, Shubham and Pavlakos, Georgios and Rajasegaran, Jathushan and Kanazawa, Angjoo and Malik, Jitendra},
booktitle={ICCV},
year={2023}
};4DHumans: Reconstructing and Tracking Humans with Transformers;3d-reconstruction | shubham-goel/4D-Humans |
Ma-Lab-Berkeley/CRATE;CRATE (Coding RAte reduction TransformEr) This repository is the official PyTorch implementation of the papers: White-Box Transformers via Sparse Rate Reduction [ NeurIPS-2023 , paper link ]. By Yaodong Yu (UC Berkeley), Sam Buchanan (TTIC), Druv Pai (UC Berkeley), Tianzhe Chu (UC Berkeley), Ziyang Wu (UC Berkeley), Shengbang Tong (UC Berkeley), Benjamin D Haeffele (Johns Hopkins University), and Yi Ma (UC Berkeley). Emergence of Segmentation with Minimalistic White-Box Transformers [ CPAL-2024 , paper link ]. By Yaodong Yu (UC Berkeley), Tianzhe Chu (UC Berkeley & ShanghaiTech U), Shengbang Tong (UC Berkeley & NYU), Ziyang Wu (UC Berkeley), Druv Pai (UC Berkeley), Sam Buchanan (TTIC), and Yi Ma (UC Berkeley & HKU). 2023. (* equal contribution) Masked Autoencoding via Structured Diffusion with White-Box Transformers [ ICLR-2024 , paper link ]. By Druv Pai (UC Berkeley), Ziyang Wu (UC Berkeley), Sam Buchanan , Yaodong Yu (UC Berkeley), and Yi Ma (UC Berkeley). Also, we have released a larger journal-length overview paper of this line of research, which contains a superset of all the results presented above, and also more results in NLP and vision SSL.
- White-Box Transformers via Sparse Rate Reduction: Compression is All There Is? [ paper link ]. By Yaodong Yu (UC Berkeley), Sam Buchanan (TTIC), Druv Pai (UC Berkeley), Tianzhe Chu (UC Berkeley), Ziyang Wu (UC Berkeley), Shengbang Tong (UC Berkeley), Hao Bai (UIUC), Yuexiang Zhai (UC Berkeley), Benjamin D Haeffele (Johns Hopkins University), and Yi Ma (UC Berkeley). Table of Contents CRATE (Coding RAte reduction TransformEr) Theoretical Background: What is CRATE? 1. CRATE Architecture overview 2. One layer/block of CRATE 3. Per-layer optimization in CRATE 4. Segmentation visualization of CRATE Autoencoding Implementation and experiments Constructing a CRATE model Pre-trained Checkpoints (ImageNet-1K) Training CRATE on ImageNet Finetuning pretrained / training random initialized CRATE on CIFAR10 Demo: Emergent segmentation in CRATE Constructing a CRATE autoencoding model Pre-trained Checkpoints (ImageNet-1K) Training/Fine-Tuning CRATE-MAE Reference Theoretical Background: What is CRATE? CRATE (Coding RAte reduction TransformEr) is a white-box (mathematically interpretable) transformer architecture, where each layer performs a single step of an alternating minimization algorithm to optimize the sparse rate reduction objective where $R$ and $R^{c}$ are different _coding rates_ for the input representations w.r.t.~different codebooks, and the $\ell^{0}$-norm promotes the sparsity of the final token representations $\boldsymbol{Z} = f(\boldsymbol{X})$. The function $f$ is defined as
$$f=f^{L} \circ f^{L-1} \circ \cdots \circ f^{1} \circ f^{\mathrm{pre}},$$
where $f^{\mathrm{pre}}$ is the pre-processing mapping, and $f^{\ell}$ is the $\ell$-th layer forward mapping that transforms the token distribution to optimize the above sparse rate reduction objective incrementally. More specifically, $f^{\ell}$ transforms the $\ell$-th layer token representations $\boldsymbol{Z}^{\ell}$ to $\boldsymbol{Z}^{\ell+1}$ via the $\texttt{MSSA}$ (Multi-Head Subspace Self-Attention) block and the $\texttt{ISTA}$ (Iterative Shrinkage-Thresholding Algorithms) block, i.e.,
$$\boldsymbol{Z}^{\ell+1} = f^{\ell}(\boldsymbol{Z}^{\ell}) = \texttt{ISTA}(\boldsymbol{Z}^{\ell} + \texttt{MSSA}(\boldsymbol{Z}^{\ell})).$$
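In code, one forward step of a layer is therefore just a residual $\texttt{MSSA}$ update followed by an $\texttt{ISTA}$ step. The snippet below is a minimal illustrative sketch of this update rule only; the `MSSA` and `ISTA` submodules are placeholders rather than the repository's actual implementation (see `model/crate.py` for that).

```python
import torch.nn as nn

class CRATEBlockSketch(nn.Module):
    """One CRATE layer: Z^{l+1} = ISTA(Z^l + MSSA(Z^l))."""
    def __init__(self, mssa: nn.Module, ista: nn.Module):
        super().__init__()
        self.mssa = mssa  # Multi-Head Subspace Self-Attention (compression step)
        self.ista = ista  # Iterative Shrinkage-Thresholding (sparsification step)

    def forward(self, z):
        # residual MSSA update, then sparsifying ISTA step
        return self.ista(z + self.mssa(z))
```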
### 1. CRATE Architecture overview
The following figure presents an overview of the pipeline for our proposed **CRATE** architecture: ### 2. One layer/block of CRATE
The following figure shows the overall architecture of one layer of **CRATE** as the composition of $\texttt{MSSA}$ and $\texttt{ISTA}$ blocks. ### 3. Per-layer optimization in CRATE
In the following figure, we measure the compression term [ $R^{c}$ ($\boldsymbol{Z}^{\ell+1/2}$) ] and the sparsity term [ $||\boldsymbol{Z}^{\ell+1}||_0$ ] defined in the **sparse rate reduction objective**, and we find that each layer of **CRATE** indeed optimizes the targeted objectives, showing that our white-box theoretical design is predictive of practice. ### 4. Segmentation visualization of CRATE
In the following figure, we visualize self-attention maps from a supervised **CRATE** model with 8x8 patches (similar to the ones shown in [DINO](https://github.com/facebookresearch/dino) :t-rex:). We also discover a surprising empirical phenomenon where each attention head in **CRATE** retains its own semantics. ## Autoencoding
We can also use our theory to build a principled autoencoder, which has the following architecture. It has many of the same empirical properties as the base **CRATE** model, such as segmented attention maps and amenability to layer-wise analysis. We train it on the masked autoencoding task (calling this model **CRATE-MAE**), and it achieves comparable performance in linear probing and reconstruction quality as the base ViT-MAE. # Implementation and Experiments
## Constructing a CRATE model
A CRATE model can be defined using the following code (the parameters below are those of CRATE-Tiny):
```python
from model.crate import CRATE
dim = 384
n_heads = 6
depth = 12
model = CRATE(image_size=224,
patch_size=16,
num_classes=1000,
dim=dim,
depth=depth,
heads=n_heads,
dim_head=dim // n_heads)
```
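The constructed model behaves like a standard image classifier. A minimal usage sketch (with a random tensor standing in for a real image batch; the stated output shape follows from `num_classes=1000` above):

```python
import torch

x = torch.randn(1, 3, 224, 224)   # one dummy 224x224 RGB image
with torch.no_grad():
    logits = model(x)              # expected shape: (1, 1000) class logits
```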
### Pre-trained Checkpoints (ImageNet-1K)
| model | `dim` | `n_heads` | `depth` | pre-trained checkpoint |
| -------- | -------- | -------- | -------- | -------- |
| **CRATE-T**(iny) | 384 | 6 | 12 | TODO |
| **CRATE-S**(mall) | 576 | 12 | 12 | [download link](https://drive.google.com/file/d/1hYgDJl4EKHYfKprwhEjmWmWHuxnK6_h8/view?usp=share_link) |
| **CRATE-B**(ase) | 768 | 12 | 12 | TODO |
| **CRATE-L**(arge) | 1024 | 16 | 24 | TODO |
## Training CRATE on ImageNet
To train a CRATE model on ImageNet-1K, run the following script (training CRATE-tiny)
As an example, we use the following command for training CRATE-tiny on ImageNet-1K:
```bash
python main.py \
  --arch CRATE_tiny \
  --batch-size 512 \
  --epochs 200 \
  --optimizer Lion \
  --lr 0.0002 \
  --weight-decay 0.05 \
  --print-freq 25 \
  --data DATA_DIR
```
and replace `DATA_DIR` with `[imagenet-folder with train and val folders]`.
## Finetuning pretrained / training random initialized CRATE on CIFAR10
```bash
python finetune.py \
  --bs 256 \
  --net CRATE_tiny \
  --opt adamW \
  --lr 5e-5 \
  --n_epochs 200 \
  --randomaug 1 \
  --data cifar10 \
  --ckpt_dir CKPT_DIR \
  --data_dir DATA_DIR
```
Replace `CKPT_DIR` with the path for the pretrained CRATE weight, and replace `DATA_DIR` with the path for the `CIFAR10` dataset. If `CKPT_DIR` is `None`, then this script is for training CRATE from random initialization on CIFAR10.
## Demo: Emergent segmentation in CRATE
CRATE models exhibit emergent segmentation in their self-attention maps solely through supervised training.
We provide a Colab Jupyter notebook to visualize the emerged segmentations from a supervised **CRATE** model. The demo provides visualizations which match the segmentation figures above.
Link: [crate-emergence.ipynb](https://colab.research.google.com/drive/1rYn_NlepyW7Fu5LDliyBDmFZylHco7ss?usp=sharing) (in colab) ## Constructing a CRATE autoencoding model
A CRATE-autoencoding model (specifically **CRATE-MAE-Base**) can be defined using the following code:
```python
from model.crate_ae.crate_ae import mae_crate_base
model = mae_crate_base()
```
The other sizes in the paper are also importable in that way. Modifying the `model/crate_ae/crate_ae.py` file will let you initialize and serve your own config.
### Pre-trained Checkpoints (ImageNet-1K)
| model | `dim` | `n_heads` | `depth` | pre-trained checkpoint |
| -------- | -------- | -------- | -------- | -------- |
| **CRATE-MAE-S**(mall) | 576 | 12 | 12 | TODO |
| **CRATE-MAE-B**(ase) | 768 | 12 | 12 | [link](https://drive.google.com/file/d/11i5BMwymqOsunq44WD3omN5mS6ZREQPO/view?usp=sharing) |
## Training/Fine-Tuning CRATE-MAE
To train or fine-tune a CRATE-MAE model on ImageNet-1K, please refer to the [codebase on MAE training](https://github.com/facebookresearch/mae) from Meta FAIR. The `models_mae.py` file in that codebase can be replaced with the contents of `model/crate_ae/crate_ae.py`, and the rest of the code should go through with minimal alterations.
## Demo: Emergent segmentation in CRATE-MAE
CRATE-MAE models also exhibit emergent segmentation in their self-attention maps.
We provide a Colab Jupyter notebook to visualize the emerged segmentations from a **CRATE-MAE** model. The demo provides visualizations which match the segmentation figures above.
Link: [crate-mae.ipynb](https://colab.research.google.com/drive/1xcD-xcxprfgZuvwsRKuDroH7xMjr0Ad3?usp=sharing) (in colab)
# Reference
For technical details and full experimental results, please check the [CRATE paper](https://arxiv.org/abs/2306.01129), [CRATE segmentation paper](https://arxiv.org/abs/2308.16271), [CRATE autoencoding paper](https://openreview.net/forum?id=PvyOYleymy), or [the long-form overview paper](https://arxiv.org/abs/2311.13110). Please consider citing our work if you find it helpful to yours:
```
@article{yu2024white,
title={White-Box Transformers via Sparse Rate Reduction},
author={Yu, Yaodong and Buchanan, Sam and Pai, Druv and Chu, Tianzhe and Wu, Ziyang and Tong, Shengbang and Haeffele, Benjamin and Ma, Yi},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
```
```
@inproceedings{yu2024emergence,
title={Emergence of Segmentation with Minimalistic White-Box Transformers},
author={Yu, Yaodong and Chu, Tianzhe and Tong, Shengbang and Wu, Ziyang and Pai, Druv and Buchanan, Sam and Ma, Yi},
booktitle={Conference on Parsimony and Learning},
pages={72--93},
year={2024},
organization={PMLR}
}
```
```
@inproceedings{pai2024masked,
title={Masked Completion via Structured Diffusion with White-Box Transformers},
author={Pai, Druv and Buchanan, Sam and Wu, Ziyang and Yu, Yaodong and Ma, Yi},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024}
}
```
```
@article{yu2023white,
title={White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?},
author={Yu, Yaodong and Buchanan, Sam and Pai, Druv and Chu, Tianzhe and Wu, Ziyang and Tong, Shengbang and Bai, Hao and Zhai, Yuexiang and Haeffele, Benjamin D and Ma, Yi},
journal={arXiv preprint arXiv:2311.13110},
year={2023}
}
```;Code for CRATE (Coding RAte reduction TransformEr).;compression,sparsification,transformer-architecture,white-box-architecture | Ma-Lab-Berkeley/CRATE |
pastelsky/tsdocs;tsdocs.dev TSDocs.dev is a service that lets you browse type reference documentation
for Javascript packages. It works even with packages that aren't written in Typescript (sourced from DefinitelyTyped)
or when packages re-export types from other packages. It depends heavily on a customized version of typedoc for generating API documentation. Writing good documentation for your library tsdocs.dev extracts documentation from the type definitions that ship with libraries. In case a type definition is
unavailable, it searches DefinitelyTyped for the closest equivalent. For an example, see documentation for d3 —
https://tsdocs.dev/docs/d3/7.8.5/classes/FormatSpecifier.html Internally tsdocs.dev uses a customized version of typedoc to parse
and render documentation, which works on docstrings and markdown
https://typedoc.org/guides/doccomments/ Development Ensure that you have redis installed and running locally Run yarn install Run yarn dev Sponsors;Browse type documentation for JS libraries;documentation,typescript,api-ref | pastelsky/tsdocs |
techwithtim/Price-Tracking-Web-Scraper;Project Information This project provides a user interface to interact with an automated price tracking web scraper. Currently the tracker scrapes amazon.ca, but could be configured to scrape multiple sources. Libraries/Frameworks/Modules This project uses: React Flask Playwright Bright Data (Web Scraping Browser) Using the Scraper Install all dependencies, create the auth.json file, start the flask backend, run the react frontend and interact with the tool. auth.json Fill in your Bright Data Scraping Browser credentials in a backend/scraper/auth.json file (see auth_example.json ). Python Flask Backend
```bash
cd backend
pip install -r requirements.txt
playwright install
python app.py  # or python3 app.py
```
Running the React Frontend
```bash
cd frontend
npm i
npm run start
```
Setting Up Automation To automate the collection of prices from this software simply run the scheduler/main.py file at your desired increment while the python flask backend is running. Windows I have created a simple .bat script called run.bat that you can schedule to execute using the Windows Task Scheduler that will automatically run the backend api and send the appropriate request to it.;An automated price tracker that uses bright data, playwright, react and flask.;[] | techwithtim/Price-Tracking-Web-Scraper
facebookresearch/MetaCLIP;Demystifying CLIP Data This repository contains the code for the MetaCLIP, described in the paper Demystifying CLIP Data that formalizes CLIP data curation as a simple algorithm. The main contributions are:
- Curating data from scratch without filtering via prior models (e.g., different from existing open source efforts ) that uses the original CLIP model as a teacher for filtering student data.
- Making training data more transparent, we released our training data distribution over metadata ;
- A scalable algorithm running in the data pipeline, allowing to scale the data pool to the whole CommonCrawl (CC) w/ 300+B image-text pairs. We observe that data quality is much more important than quantity (different from existing open source efforts or ALIGN that mostly scale quantity);
- standard CLIP training setup for controlled experiments and fair comparisons under fixed training and model configuration. We conclude that:
- Effective pretraining data should maximally preserve signal and mitigate noise , instead of hard removal of noise with blackbox filters that lead to unknown distribution
- Our algorithm is simpler and scalable to curate the whole Internet
- Open-sourcing does not just entail a trained model checkpoint but more importantly the pre-training data distribution . MetaCLIP is trained w/ face blurred images. bibtex
@inproceedings{xu2023metaclip,
title={Demystifying CLIP Data},
author={Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer and Christoph Feichtenhofer},
journal={arXiv preprint arXiv:2309.16671},
year={2023}
} Updates 04/25/2024: 🔥 paper MoDE: CLIP Data Experts via Clustering is accepted by CVPR 2024 with code released. 01/18/2024: 🔥 add code for building metadata. 01/16/2024: 🔥 paper accepted by ICLR as spotlight presentation . 12/25/2023: Huggingface Space demo and Colab released. 12/21/2023: ViT-G/14 released. 09/28/2023: initial release. Quick Links Quick Start Pre-trained Models Development Metadata Curation Training Bugs or Questions? Citation Reference Quick Start The pre-trained MetaCLIP models are available in Huggingface or OpenCLIP (or this customized OpenCLIP repo) as following: ```python
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("facebook/metaclip-b32-400m")
model = AutoModel.from_pretrained("facebook/metaclip-b32-400m")

image = Image.open("docs/CLIP.png")
inputs = processor(text=["a diagram", "a dog", "a cat"], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
    text_probs = logits_per_image.softmax(dim=-1)

print("Label probs:", text_probs)
``` ```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32-quickgelu', pretrained='metaclip_400m')  # for 2.5B use 'metaclip_fullcc' in OpenCLIP or 'metaclip_2_5b' in this repo

image = preprocess(Image.open("docs/CLIP.png")).unsqueeze(0)
text = open_clip.tokenize(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
``` Pre-trained Models All MetaCLIP adhere to OpenAI CLIP training setup: we hope to bring back controlled experiments in the "CLIP era of ImageNet". Specifically, we use OpenAI CLIP's quickgelu activation for all model configs (which was missing in older versions of OpenCLIP that mainly uses nn.GELU instead). We add ViT-B-16-quickgelu , ViT-L-14-quickgelu , ViT-H-14-quickgelu and ViT-bigG-14-quickgelu in this repo. | model_name | pretrained | Data Card | # of Seen Pairs | Res. | GPUs | IN ZS Acc. |
|:--------------------|:------------:|:---------:|:---------:|:---------:|:---------:|:--------------:|
| ViT-B-32-quickgelu | metaclip_400m | data card | 12.8B | 224 | 64 x V100 | 65.5 |
| ViT-B-16-quickgelu | metaclip_400m | data card | 12.8B | 224 | 64 x V100 | 70.8 |
| ViT-L-14-quickgelu | metaclip_400m | data card | 12.8B | 224 | 128 x V100 | 76.2 |
| ViT-B-32-quickgelu | metaclip_2_5b | data card | 12.8B | 224 | 64 x V100 | 67.6 |
| ViT-B-16-quickgelu | metaclip_2_5b | data card | 12.8B | 224 | 64 x V100 | 72.1 |
| ViT-L-14-quickgelu | metaclip_2_5b | data card | 12.8B | 224 | 128 x V100 | 79.2 |
| ViT-H-14-quickgelu | metaclip_2_5b | data card | 12.8B | 224 | 256 x A100 | 80.5 |
| ViT-bigG-14-quickgelu | metaclip_2_5b | data card | 12.8B | 224 | 256 x A100 | 82.1 | Development This code is customized from OpenCLIP and will be maintained separately for research on MetaCLIP. The following command should install requirements for OpenCLIP and submitit=1.2.1 used by this repo: bash
conda create -n metaclip python=3.10 pytorch torchvision pytorch-cuda=11.7 tqdm ftfy braceexpand regex pandas submitit=1.2.1 \
-c pytorch-nightly \
-c nvidia \
-c conda-forge \
-c anaconda Metadata MetaCLIP uses 500,000 queries as metadata to align the training data to distribution over quality writing of Wikipedia/WordNet terms. This metadata also allows us to release training data distribution of a released model as data card . How to Curate ? We have a demo notebook to show how the proposed algorithm works. I already have a (head distributed) dataset: CLIP curation can still help as online balancing (Table 6 in the paper). We wrap CLIP curation in two key functions: substring matching (recommended to run offline) and balancing (either offline or online, please check metaclip.balancing:main ). ```python
import json
import numpy as np
from metaclip.substr_matching import substr_matching
from metaclip.balancing import balance_sampling

with open("metadata.json") as f:
    metadata = json.load(f)

# entry counts for our 1.6B(pool) -> 400M(curated); please check balance_sampling:main and substr match and count on your own data.
with open("metaclip/entry_counts_400m.json") as f:
    entry_count_json = json.load(f)

entry_count = np.array([entry_count_json[entry] for entry in metadata], dtype=np.uint64)  # uint64 to be safe for scaling.

t = 20000
entry_count[entry_count < t] = t
entry_prob = t / entry_count

for text in ["jacksons chameleon", "battery plate"]:
    matched_entry_ids = substr_matching(text, metadata)  # this is for demo purpose that redo substr_matching; see metaclip/README.md.
    curation_prob = min(entry_prob[matched_entry_ids].sum(), 1.0)
    curated = balance_sampling(matched_entry_ids, entry_prob)
    print(f"[curation_prob={curation_prob:.3f}, curated={curated}] {text}")
``` I want to curate data from scratch: We release a skeleton code for sub-string matching from CommonCrawl WAT or WARC and balancing . Check here for details. Numpy Impl. A numpy impl. of the algorithm can be found at metaclip.pipeline , close to the impl. used by the paper. Training python
python submitit_openclip.py b32_400m Please configure the corresponding training_data in run_configs_400m.py . Build Your Own Metadata Consider starting from our code for building CLIP's 500k metadata. Bugs or questions? If you have any questions related to the code or the paper, feel free to email Hu Xu ( huxu@meta.com ). Citation Please cite our paper (accepted by ICLR2024 as spotlight presentation) if MetaCLIP helps your work: bibtex
@inproceedings{xu2023metaclip,
title={Demystifying CLIP Data},
author={Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer and Christoph Feichtenhofer},
journal={arXiv preprint arXiv:2309.16671},
year={2023}
} Reference The training code is developed based on OpenCLIP , modified to the vanilla CLIP training setup. TODO v0.1 code release; refactor openclip as v0.2; (welcome your use cases or suggestions to update this codebase regularly) License The majority of MetaCLIP is licensed under CC-BY-NC, however portions of the project are available under separate license terms: open_clip is licensed under the https://github.com/mlfoundations/open_clip license. Acknowledgement We gratefully acknowledge the OpenCLIP team for initial CLIP codebase and integration and NielsRogge 's integration into Huggingface .;ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering;[] | facebookresearch/MetaCLIP |
KtzeAbyss/Easy-Virtual-Display;[!NOTE]
This project is only a temporary solution and currently only provides minimal maintenance. Parsec-vdd will directly provide a more complete solution, please continue to pay attention. where find it? 👉👉👉https://github.com/nomi-san/parsec-vdd 本项目仅作为一个临时解决方案,目前只提供最低限度的维护,Parsec-vdd将直接提供更完善的解决方案,请各位持续关注。 项目地址 👉👉👉https://github.com/nomi-san/parsec-vdd English | 中文 Easy Virtual Display Create virtual displays in Windows with ease, supporting a range of resolutions and refresh rates (such as 4K 240Hz). Ideal for remote control or graphics card spoofing. Project Background This project builds upon the ParsecVDD foundation and utilizes the repository found at https://github.com/nomi-san/parsec-vdd . Download Please select the latest release version. How to Use Download and install the application. It is recommended to create a shortcut. Double-click to launch (virtualDisplayLit.exe, please make sure to run in administrator mode). The program will by default hide in the system tray at the bottom right corner, Right-click the icon to access the feature menu.On the initial run, install the driver (only required the first time). Then, click 'Start Virtual Display' to access the display settings via right-click on the desktop, just like configuring a physical monitor. Menu Items From top to bottom, the menu options are as follows: Start Virtual Display, Stop Virtual Display, Force Quit, Install Driver, Uninstall Driver, and Exit.
1. Start Virtual Display
2. Stop Virtual Display
3. Force Quit
4. Install Driver
5. Uninstall Driver
6. Exit Demo Privacy Screen (Remote Control/Streaming) Privacy Screen (Remote Control/Streaming): After starting the virtual display, configure it to display only on Display 2 (Virtual Display) in the display settings. This will cause the host machine's screen to go black while the client machine displays the host's screen, allowing you to work discreetly without being detected by others. Overcoming Physical Display Limitations Unrestricted creation of virtual displays with various resolutions and refresh rates, allowing the client to output user-preferred resolutions and refresh rates (such as 4K 240Hz) on low-performance displays or systems without a physical display. Easy Virtual Display (简易虚拟显示器) Easily create virtual displays in Windows, supporting a range of resolutions and refresh rates (such as 4K 240Hz). Ideal for remote control or graphics card spoofing. Project Background This project builds on ParsecVDD and uses the repository at https://github.com/nomi-san/parsec-vdd . Download Please select the latest release version. How to Use (you must install the driver before the first launch!!! Install the driver first!!! Install the driver first!!!) Download and install the application. It is recommended to create a shortcut. Double-click to launch (virtualDisplayLit.exe); make sure to run it in administrator mode. By default the program hides in the system tray at the bottom right corner; right-click the icon to access the feature menu. On the first run, install the driver (only required the first time). Then click 'Start Virtual Display' and access the display settings by right-clicking on the desktop, just like configuring a physical monitor. Menu Items From top to bottom, the menu options are as follows: Start Virtual Display, Stop Virtual Display, Force Quit, Install Driver, Uninstall Driver, and Exit.
1. Start Virtual Display: normal start (be sure to install the driver before the first run)
2. Stop Virtual Display: normal stop
3. Force Quit: in some cases the driver may be held in use, which can keep the virtual display from starting properly; enable Force Quit first to confirm the abnormal driver usage is not caused by this program, then uninstall and reinstall the driver or repair it
4. Install Driver: be sure to install the driver before the first launch!!!
5. Uninstall Driver: uninstalls the driver
6. Exit: exits the program Demo Privacy Screen (Remote Control/Streaming) After starting the virtual display, set it in the display settings to display only on Display 2 (the virtual display); the host machine's screen will go black while the client shows the host's screen normally, letting you keep working without being noticed by others. Overcoming Physical Display Limitations Create virtual displays with any resolution and refresh rate without restriction, allowing the client to output the user's preferred resolution and refresh rate (such as 4K 240Hz) even on a low-performance display or a system with no physical display. Star History;Effortlessly create virtual displays in Windows, capable of supporting various resolutions and refresh rates, suitable for remote control or graphics card spoofing.在win中轻松创建支持多种分辨率和刷新率的虚拟显示器,可用于远程控制或显卡欺骗。;easy-to-use,headless-display,lightweight-tool,parsec,privacy-tools,vdd,virtual-desktop,virtual-display | KtzeAbyss/Easy-Virtual-Display
vectara/hallucination-leaderboard;Hallucination Leaderboard Public LLM leaderboard computed using Vectara's Hughes Hallucination Evaluation Model . This evaluates how often an LLM introduces hallucinations when summarizing a document. We plan to update this regularly as our model and the LLMs get updated over time. Also, feel free to check out our hallucination leaderboard in Hugging Face. In loving memory of Simon Mark Hughes ... Last updated on May 14th, 2024 |Model|Hallucination Rate|Factual Consistency Rate|Answer Rate|Average Summary Length (Words)|
|----|----:|----:|----:|----:|
|GPT 4 Turbo|2.5 %|97.5 %|100.0 %|86.2|
|Snowflake Arctic|2.6 %|97.4 %|100.0 %|68.7|
|Intel Neural Chat 7B|2.8 %|97.2 %|89.5 %|57.6|
|GPT 4|3.0 %|97.0 %|100.0 %|81.1|
|Microsoft Orca-2-13b|3.2 %|96.8 %|100.0 %|66.2|
|GPT 3.5 Turbo|3.5 %|96.5 %|99.6 %|84.1|
|GPT 4o|3.7 %|96.3 %|100.0 %|77.8|
|Cohere Command R Plus|3.8 %|96.2 %|100.0 %|71.2|
|Mixtral 8x22B|3.8 %|96.2 %|99.9 %|92.0|
|Cohere Command R|3.9 %|96.1 %|99.9 %|51.2|
|Microsoft Phi-3-mini-128k|4.1 %|95.9 %|100.0 %|60.1|
|Mistral 7B Instruct-v0.2|4.5 %|95.5 %|100.0 %|106.1|
|Llama 3 70B|4.5 %|95.5 %|99.2 %|68.5|
|Google Gemini 1.5 Pro|4.6 %|95.4 %|89.3 %|82.1|
|Google Gemini Pro|4.8 %|95.2 %|98.4 %|89.5|
|Microsoft WizardLM-2-8x22B|5.0 %|95.0 %|99.9 %|140.8|
|Microsoft Phi-3-mini-4k|5.1 %|94.9 %|100.0 %|86.8|
|Llama 2 70B|5.1 %|94.9 %|99.9 %|84.9|
|Google Gemini 1.5 Flash|5.3 %|94.7 %|98.1 %|62.8|
|Llama 3 8B|5.4 %|94.6 %|99.8 %|79.7|
|Llama 2 7B|5.6 %|94.4 %|99.6 %|119.9|
|Llama 2 13B|5.9 %|94.1 %|99.8 %|82.1|
|Anthropic Claude 3 Sonnet|6.0 %|94.0 %|100.0 %|108.5|
|Databricks DBRX Instruct|6.1 %|93.9 %|100.0 %|85.9|
|Google Gemma-1.1-7b-it|6.3 %|93.7 %|100.0 %|64.3|
|Anthropic Claude 3 Opus|7.4 %|92.6 %|95.5 %|92.1|
|Google Gemma-7b-it|7.5 %|92.5 %|100.0 %|113.0|
|Cohere-Chat|7.5 %|92.5 %|98.0 %|74.4|
|Cohere|8.5 %|91.5 %|99.8 %|59.8|
|Anthropic Claude 2|8.5 %|91.5 %|99.3 %|87.5|
|Microsoft Phi 2|8.5 %|91.5 %|91.5 %|80.8|
|Google Palm 2|8.6 %|91.4 %|99.8 %|86.6|
|Mixtral 8x7B|9.3 %|90.7 %|99.9 %|90.7|
|Amazon Titan Express|9.4 %|90.6 %|99.5 %|98.4|
|Mistral 7B Instruct-v0.1|9.4 %|90.6 %|98.7 %|96.1|
|Google Palm 2 Chat|10.0 %|90.0 %|100.0 %|66.2|
|Google Gemma-1.1-2b-it|11.2 %|88.8 %|100.0 %|66.8|
|Google flan-t5-large|15.8 %|84.2 %|99.3 %|20.9|
|tiiuae falcon-7b-instruct|16.2 %|83.8 %|90.0 %|75.5|
|Apple OpenELM-3B-Instruct|22.4 %|77.6 %|99.3 %|47.2| Model You can find the model used to compute this leaderboard open sourced for commercial use on Hugging Face and Kaggle , along with instructions on how to use the model. Data See link for the generated summaries we used to evaluate the models with. Prior Research Much prior work in this area has been done. For some of the top papers in this area (factual consistency in summarization) please see here: SUMMAC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization TRUE: Re-evaluating Factual Consistency Evaluation TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models ALIGNSCORE: Evaluating Factual Consistency with A Unified Alignment Function For a very comprehensive list, please see here - https://github.com/EdinburghNLP/awesome-hallucination-detection. The methods described in the following section use protocols established in those papers, amongst many others. Methodology For a detailed explanation of the work that went into this model please refer to our blog post on the release: Cut the Bull…. Detecting Hallucinations in Large Language Models . To determine this leaderboard, we trained a model to detect hallucinations in LLM outputs, using various open source datasets from the factual consistency research into summarization models. Using a model that is competitive with the best state of the art models, we then fed 1000 short documents to each of the LLMs above via their public APIs and asked them to summarize each short document, using only the facts presented in the document. Of these 1000 documents, only 831 document were summarized by every model, the remaining documents were rejected by at least one model due to content restrictions. Using these 831 documents, we then computed the overall factual consistency rate (no hallucinations) and hallucination rate (100 - accuracy) for each model. The rate at which each model refuses to respond to the prompt is detailed in the 'Answer Rate' column. None of the content sent to the models contained illicit or 'not safe for work' content but the present of trigger words was enough to trigger some of the content filters. The documents were taken primarily from the CNN / Daily Mail Corpus . We used a temperature of 0 when calling the LLMs. We evaluate summarization factual consistency rate instead of overall factual accuracy because it allows us to compare the model's response to the provided information. In other words, is the summary provided 'factually consistent' with the source document. Determining hallucinations is impossible to do for any ad hoc question as it's not known precisely what data every LLM is trained on. In addition, having a model that can determine whether any response was hallucinated without a reference source requires solving the hallucination problem and presumably training a model as large or larger than these LLMs being evaluated. So we instead chose to look at the hallucination rate within the summarization task as this is a good analogue to determine how truthful the models are overall. In addition, LLMs are increasingly used in RAG (Retrieval Augmented Generation) pipelines to answer user queries, such as in Bing Chat and Google's chat integration. In a RAG system, the model is being deployed as a summarizer of the search results, so this leaderboard is also a good indicator for the accuracy of the models when used in RAG systems. Prompt Used You are a chat bot answering questions using data. 
You must stick to the answers provided solely by the text in the passage provided. You are asked the question 'Provide a concise summary of the following passage, covering the core pieces of information described.' <PASSAGE>' When calling the API, the <PASSAGE> token was then replaced with the source document (see the 'source' column in leaderboard-summaries.csv ). API Integration Details Below is a detailed overview of the models integrated and their specific endpoints: OpenAI Models GPT-3.5 : Accessed using the model name gpt-3.5-turbo through OpenAI's Python client library, specifically via the chat.completions.create endpoint. GPT-4 : Integrated with the model identifier gpt-4 . GPT-4 Turbo : Utilized under the model name gpt-4-turbo-2024-04-09 , in line with OpenAI's documentation. GPT-4o : Accessed using the model name gpt-4o . Llama Models Llama 2 7B, 13B, and 70B : These models of varying sizes are accessed through Anyscale hosted endpoints using model meta-llama/Llama-2-xxb-chat-hf , where xxb can be 7b , 13b , and 70b , tailored to each model's capacity. Llama 3 8B and 70B : These models are accessed via Together AI chat endpoint and using the model meta-llama/Llama-3-xxB-chat-hf , where xxB can be 8B and 70B . Cohere Models Cohere Command : Employed using the model command and the /generate endpoint. Cohere-Chat : Integrated through the /chat endpoint for enhanced conversational capabilities. Cohere Command R : Employed using the model command-r and the /chat endpoint. Cohere Command R Plus : Employed using the model command-r-plus and the /chat endpoint. For more information about Cohere's models, refer to their website . Anthropic Model Claude 2 : Invoked the model using claude-2.0 for the API call. Claude 3 Opus : Invoked the model using claude-3-opus-20240229 for the API call. Claude 3 Sonnet : Invoked the model using claude-3-sonnet-20240229 for the API call.
Details on each model can be found on their website . Mistral AI Models on Hugging Face Mistral 7B Instruct-v0.1 : The Mistral-7B-Instruct-v0.1 model is integrated using Hugging Face's API. Mistral 7B Instruct-v0.2 : The Mistral-7B-Instruct-v0.2 model is integrated using Hugging Face's API. Mixtral 8x7B : Similarly, the Mixtral-8x7B-Instruct-v0.1 model is accessed via Hugging Face's API. Mixtral 8x22B : Accessed via Together AI's API using the model mistralai/Mixtral-8x22B-Instruct-v0.1 and the chat endpoint. Google Palm Models via Vertex AI Google Palm 2 and Google Palm 2 Chat : Implemented using the text-bison-001 and chat-bison-001 models, respectively. Google Palm 2 (Beta) and Google Palm 2-Chat (Beta) : Utilized with the model identifiers text-bison and chat-bison . Gemini Pro : Google's gemini-pro model is incorporated for enhanced language processing, accessible on Vertex AI. Gemini 1.5 Pro : Accessed using model gemini-1.5-pro-latest Gemini 1.5 Flash : Accessed using model gemini-1.5-flash-latest For an in-depth understanding of each model's version and lifecycle, especially those offered by Google, please refer to Model Versions and Lifecycles on Vertex AI. Titan Models on Amazon Bedrock Amazon Titan Express : The model is accessed on Amazon Bedrock with model identifier of amazon.titan-text-express-v1 . Microsoft Models Microsoft Phi-2 : The phi-2 model is accessed via Hugging Face's API. Microsoft Orca-2-13b : The Orca-2-13b model is accessed via Hugging Face's API. Microsoft WizardLM-2-8x22B : Accessed via Together AI's API using the model microsoft/WizardLM-2-8x22B and the chat endpoint. Microsoft Phi-3-mini-4k : The phi-3-mini-4k model is accessed via Hugging Face's checkpoint. Microsoft Phi-3-mini-4k : The phi-3-mini-128k model is accessed via Hugging Face's checkpoint. Google Models on Hugging Face Google flan-t5-large : The flan-t5-large model is accessed via Hugging Face's API. Google gemma-7b-it : The gemma-7b-it model is accessed via Hugging Face's API. Google gemma-1.1-7b-it : The gemma-1.1-7b-it model is accessed by being loaded from Hugging Face's checkpoint. Google gemma-1.1-2b-it : The gemma-1.1-7b-it model is accessed via being loaded from Hugging Face's checkpoint tiiuae Models on Hugging Face tiiuae/falcon-7b-instruct : The falcon-7b-instruct model is accessed via Hugging Face's API. Intel Models on Hugging Face Intel/neural-chat-7b-v3-3 : The Intel/neural-chat-7b-v3-3 model is accessed via Hugging Face's API. Databricks Model Databricks/dbrx-instruct : Accessed via Together AI's API using the model databricks/dbrx-instruct and the chat endpoint. Snowflake Model Snowflake/snowflake-arctic-instruct : Accessed via Replicate's API using the model snowflake/snowflake-arctic-instruct . Apple Model Apple/OpenELM-3B-Instruct : The OpenELM-3B-Instruct model is accessed via being loaded from Hugging Face's checkpoint. The prompt for this model is the original prompt plus ''\n\nA concise summary is as follows:'' Frequently Asked Questions Qu. Why are you are using a model to evaluate a model? Answer There are several reasons we chose to do this over a human evaluation. While we could have crowdsourced a large human scale evaluation, that's a one time thing, it does not scale in a way that allows us to constantly update the leaderboard as new APIs come online or models get updated. We work in a fast moving field so any such process would be out of data as soon as it published. 
Secondly, we wanted a repeatable process that we can share with others so they can use it themselves as one of many LLM quality scores they use when evaluating their own models. This would not be possible with a human annotation process, where the only things that could be shared are the process and the human labels. It's also worth pointing out that building a model for detecting hallucinations is much easier than building a generative model that never produces hallucinations. So long as the hallucination evaluation model is highly correlated with human raters' judgements, it can stand in as a good proxy for human judges. As we are specifically targetting summarization and not general 'closed book' question answering, the LLM we trained does not need to have memorized a large proportion of human knowledge, it just needs to have a solid grasp and understanding of the languages it support (currently just english, but we plan to expand language coverage over time). Qu. What if the LLM refuses to summarize the document or provides a one or two word answer? Answer We explicitly filter these out. See out blog post for more information. You can see the 'Answer Rate' column on the leaderboard that indicates the percentage of documents summarized, and the 'Average Summary Length' column detailing the summary lengths, showing we didn't get very short answers for most documents. Qu. What version of model XYZ did you use? Answer Please see the API details section for specifics about the model versions used and how they were called, as well as the date the leaderboard was last updated. Please contact us (create an issue in the repo) if you need more clarity. Qu. What about xAI's Grok LLM? Answer Currently (as of 11/14/2023) Grok is not publicly available and we do not have access. Those with early access I suspect are probably legally forbidden from doing this sort of evaluation on the model. Once the model is available via a public API we will look to add it, along with any other LLMs that are popular enough. Qu. Can't a model just score a 100% by providing either no answers or very short answers? Answer We explicitly filtered out such responses from every model, doing the final evaluation only on documents that all models provided a summary for. You can find out more technical details in our blog post on the topic. See also the 'Answer Rate' and 'Average Summary Length' columns in the table above. Qu. Wouldn't an extractive summarizer model that just copies and pastes from the original summary score 100% (0 hallucination) on this task? Answer Absolutely as by definition such a model would have no hallucinations and provide a faithful summary. We do not claim to be evaluating summarization quality, that is a separate and orthogonal task, and should be evaluated independently. We are not evaluating the quality of the summaries, only the factual consistency of them, as we point out in the blog post . Qu. This seems a very hackable metric, as you could just copy the original text as the summary Answer. That's true but we are not evaluating arbitrary models on this approach, e.g. like in a Kaggle competition. Any model that does so would perform poorly at any other task you care about. So I would consider this as quality metric that you'd run alongside whatever other evaluations you have for your model, such as summarization quality, question answering accuracy, etc. But we do not recommend using this as a standalone metric. None of the models chosen were trained on our model's output. 
That may happen in future but as we plan to update the model and also the source documents so this is a living leaderboard, that will be an unlikely occurrence. That is however also an issue with any LLM benchmark. We should also point out this builds on a large body of work on factual consistency where many other academics invented and refined this protocol. See our references to the SummaC and True papers in this blog post , as well as this excellent compilation of resources - https://github.com/EdinburghNLP/awesome-hallucination-detection to read more. Qu. This does not definitively measure all the ways a model can hallucinate Answer. Agreed. We do not claim to have solved the problem of hallucination detection, and plan to expand and enhance this process further. But we do believe it is a move in the right direction, and provides a much needed starting point that everyone can build on top of. Qu. Some models could hallucinate only while summarizing. Couldn't you just provide it a list of well known facts and check how well it can recall them? Answer. That would be a poor test in my opinion. For one thing, unless you trained the model you don't know the data it was trained on, so you can't be sure the model is grounding its response in real data it has seen on or whether it is guessing. Additionally, there is no clear definition of 'well known', and these types of data are typically easy for most models to accurately recall. Most hallucinations, in my admittedly subjective experience, come from the model fetching information that is very rarely known or discussed, or facts for which the model has seen conflicting information. Without knowing the source data the model was trained on, again it's impossible to validate these sort of hallucinations as you won't know which data fits this criterion. I also think its unlikely the model would only hallucinate while summarizing. We are asking the model to take information and transform it in a way that is still faithful to the source. This is analogous to a lot of generative tasks aside from summarization (e.g. write an email covering these points...), and if the model deviates from the prompt then that is a failure to follow instructions, indicating the model would struggle on other instruction following tasks also. Qu. This is a good start but far from definitive Answer. I totally agree. There's a lot more that needs to be done, and the problem is far from solved. But a 'good start' means that hopefully progress will start to be made in this area, and by open sourcing the model, we hope to involve the community into taking this to the next level. Coming Soon We will also be adding a leaderboard on citation accuracy. As a builder of RAG systems, we have noticed that LLMs tend to mis-attribute sources sometimes when answering a question based on supplied search results. We'd like to be able to measure this so we can help mitigate it within our platform. We will also look to expand the benchmark to cover other RAG tasks, such as multi-document summarization. We also plan to cover more languages than just english. Our current platform covers over 100 languages, and we want to develop hallucination detectors with comparable multi-lingual coverage.;Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents;generative-ai,hallucinations,llm | vectara/hallucination-leaderboard |
Etesam913/react-magic-motion;react-magic-motion react-magic-motion.com react-magic-motion is a react.js library that ✨ magically animates your components. ⭐️ Getting Started 📦 Install bash
npm i react-magic-motion 🔎 Simple Example 🎥 Demo https://github.com/Etesam913/react-magic-motion/assets/55665282/dfc56ad5-5012-4f5e-90cc-8ec372527320 🧑💻 Code ```jsx
import { useState } from "react";
import { MagicMotion } from "react-magic-motion";

function ListItem({ children }: { children: string }) {
  return <li>{children}</li>;
}

export default function App() {
  const [areGoalsShowing, setAreGoalsShowing] = useState(true);
  return (
    <MagicMotion>
      <div>
        <h4>My Goals</h4>
        {areGoalsShowing && (
          <ul>
            <ListItem>🏀 Make 10 three pointers in a row</ListItem>
            <ListItem>🏋️♂️ Bench press 225 lbs</ListItem>
            <ListItem>🏃♂️ Run a 5k in under 20 minutes</ListItem>
          </ul>
        )}
        <button onClick={() => setAreGoalsShowing(!areGoalsShowing)}>
          {areGoalsShowing ? "Hide" : "Show"} my goals
        </button>
      </div>
    </MagicMotion>
  );
}
``` 💫 Examples To do list Accordion Sidebar Expandable Card Grid Area Search Tabs 📓 Docs For a full understanding of react-magic-motion visit the docs here ❓ Want to Contribute Visit the contributing.md;react-magic-motion is a react.js library that ✨ magically animates your components.;[] | Etesam913/react-magic-motion |
axflow/axflow;The TypeScript framework for AI development Axflow is a collection of modules for building robust natural language powered applications. These modules can be adopted incrementally, thus providing a modular and scalable solution.
Used together, they form an end-to-end framework for developing AI applications. Modules @axflow/models — A zero-dependency, modular SDK for building robust natural language applications. Includes React hooks and streaming utilities that make building AI applications a breeze. axgen — A framework for connecting your data to large language models axeval — A framework for evaluating LLM output quality In addition to the above modules, we're working on the following modules: extract : A library for efficient data processing, particularly loading, transforming, and chunking documents from arbitrary sources. Most useful for applications that need to load and preprocess data for vector search. serve : A serving framework to run any LLM model (OSS or otherwise). It will also provide middleware options for user throttling, analytics, and logging finetune : A library focused on fine-tuning models Documentation Goals Axflow aspires to deconstruct the complex paradigms of working with LLMs into manageable and intuitive components.
Our library takes a code-first approach, emphasizing the importance of flexibility and control for developers.
As a foundational framework, Axflow empowers developers to build higher-level TypeScript AI features and products seamlessly. Examples Here is an example open source UI showcasing what our first module, axgen, can do, with a short video walkthrough. License MIT;The TypeScript framework for AI development;ai,typescript,llm | axflow/axflow |
AgentOps-AI/agentops;AI agents suck. We’re fixing that. 🐦 Twitter • 📢 Discord • 🖇️ AgentOps • 📙 Documentation AgentOps 🖇️ AgentOps helps developers build, evaluate, and monitor AI agents. Tools to build agents from prototype to production. | | |
| ------------------------------------- | ------------------------------------------------------------- |
| 📊 Replay Analytics and Debugging | Step-by-step agent execution graphs |
| 💸 LLM Cost Management | Track spend with LLM foundation model providers |
| 🧪 Agent Benchmarking | Test your agents against 1,000+ evals |
| 🔐 Compliance and Security | Detect common prompt injection and data exfiltration exploits |
| 🤝 Framework Integrations | Native Integrations with CrewAI, AutoGen, & LangChain | Quick Start ⌨️ bash
pip install agentops Session replays in 3 lines of code Initialize the AgentOps client and automatically get analytics on every LLM call. ```python
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init()  # pass your API key here, or set AGENTOPS_API_KEY in your environment

...

# (optional: record specific functions)
@agentops.record_function('sample function being record')
def sample_function(...):
    ...

# End of program
agentops.end_session('Success')

# Woohoo You're done 🎉
``` All your sessions are available on the AgentOps dashboard . Refer to our API documentation for detailed instructions. Agent Dashboard Session Analytics Session Replays Integrations 🦾 CrewAI 🛶 Build Crew agents with observability with only 2 lines of code. Simply set an AGENTOPS_API_KEY in your environment, and your crews will get automatic monitoring on the AgentOps dashboard. AgentOps is integrated with CrewAI on a pre-release fork. Install crew with bash
pip install git+https://github.com/AgentOps-AI/crewAI.git@main AgentOps integration example Official CrewAI documentation AutoGen 🤖 With only two lines of code, add full observability and monitoring to Autogen agents. Set an AGENTOPS_API_KEY in your environment and call agentops.init() Autogen Observability Example Autogen - AgentOps Documentation Langchain 🦜🔗 AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency: Installation ```shell
pip install agentops[langchain]
```
To use the handler, import and set
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler
AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
callbacks=[handler],
model='gpt-3.5-turbo')
agent = initialize_agent(tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
callbacks=[handler], # You must pass in a callback handler to record your agent
handle_parsing_errors=True)
```
Check out the [Langchain Examples Notebook](./examples/langchain_examples.ipynb) for more details including Async handlers. Cohere ⌨️ First class support for Cohere(>=5.4.0). This is a living integration, should you need any added functionality please message us on Discord! AgentOps integration example Official Cohere documentation Installation ```bash
pip install cohere
```
```python
import cohere
import agentops
# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init( )
co = cohere.Client()
chat = co.chat(
message="Is it pronounced ceaux-hear or co-hehray?"
)
print(chat)
agentops.end_session('Success')
```
```python
import cohere
import agentops
# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init( )
co = cohere.Client()
stream = co.chat_stream(
message="Write me a haiku about the synergies between Cohere and AgentOps"
)
for event in stream:
if event.event_type == "text-generation":
print(event.text, end='')
agentops.end_session('Success')
``` LlamaIndex 🦙 (Coming Soon) Time travel debugging 🔮 (coming soon!) Agent Arena 🥊 (coming soon!) Evaluations Roadmap 🧭 | Platform | Dashboard | Evals |
| ---------------------------------------------------------------------------- | ------------------------------------------ | -------------------------------------- |
| ✅ Python SDK | ✅ Multi-session and Cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard | Debugging Roadmap 🧭 | Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| ----------------------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------- | ------------------------------------------------- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection ( PromptArmor ) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | | Why AgentOps? 🤔 Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out: Comprehensive Observability : Track your AI agents' performance, user interactions, and API usage. Real-Time Monitoring : Get instant insights with session replays, metrics, and live monitoring tools. Cost Control : Monitor and manage your spend on LLM and API calls. Failure Detection : Quickly identify and respond to agent failures and multi-agent interaction issues. Tool Usage Statistics : Understand how your agents utilize external tools with detailed analytics. Session-Wide Metrics : Gain a holistic view of your agents' sessions with comprehensive statistics. AgentOps is designed to make agent observability, testing, and monitoring easy. Star History Check out our growth in the community:;Python SDK for agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks like CrewAI, Langchain, and Autogen;agent,agentops,ai,evals,evaluation-metrics,llm,anthropic,autogen,cost-estimation,crewai | AgentOps-AI/agentops |
adrianhajdin/nike_landing_page;TailwindCSS Crash Course Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family! 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Snippets 🔗 Links 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Master Tailwind CSS in two parts by first learning fundamentals, advanced techniques, and theming. Then, build a stunning Nike landing page, applying learned skills to create a visually impressive website. If you're getting started and need assistance or face any bugs, join our active Discord community with over 27k+ members. It's a place where people help each other out. ⚙️ Tech Stack Tailwind CSS React.js 🔋 Features 👉 Maximizing Tailwind CSS : Discover tips and tricks to make the most out of Tailwind CSS. 👉 Understanding Tailwind Internals : Dive into the inner workings of Tailwind, gaining insights into its structure and optimizations. 👉 Best Practices : Learn Tailwind's best practices for efficient and maintainable code. 👉 Theming :Explore techniques to add different themes to your website using Tailwind CSS. 👉 JavaScript-like Tasks with Tailwind : Discover how Tailwind CSS can be used to achieve tasks that typically require JavaScript code while building a beautiful Nike Website with a, 👉 Complex Hero Section : A visually appealing hero section showcasing key elements. 👉 Popular Products Showcase : A section highlighting popular Nike products 👉 About Us Section : An informative "About Us" section with a unique design. 👉 Special Offers : Showcase special offers in an eye-catching manner 👉 Testimonials : A testimonials section for a captivating user experience 👉 Newsletter Integration : A newsletter section with Tailwind styling, encouraging user engagement 👉 Footer : A comprehensive footer section containing various links 👉 Mobile Responsive : The entire website is responsive across various devices, emphasizing Tailwind's mobile-friendly capabilities. and many more, including code architecture and reusability 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/adrianhajdin/nike_landing_page.git
cd nike_landing_page Installation Install the project dependencies using npm: bash
npm install Running the Project bash
npm start Open http://localhost:5173 in your browser to view the project. 🕸️ Snippets .eslintrc.cjs ```javascript
module.exports = {
root: true,
env: { browser: true, es2020: true },
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react/jsx-runtime',
'plugin:react-hooks/recommended',
],
ignorePatterns: ['dist', '.eslintrc.cjs'],
parserOptions: { ecmaVersion: 'latest', sourceType: 'module' },
settings: { react: { version: '18.2' } },
plugins: ['react-refresh'],
rules: {
'react-refresh/only-export-components': [
'warn',
{ allowConstantExport: true },
],
"react/prop-types": 0
},
}
``` constants.index.js ```javascript
import { facebook, instagram, shieldTick, support, truckFast, twitter } from "../assets/icons";
import { bigShoe1, bigShoe2, bigShoe3, customer1, customer2, shoe4, shoe5, shoe6, shoe7, thumbnailShoe1, thumbnailShoe2, thumbnailShoe3 } from "../assets/images";
export const navLinks = [
{ href: "#home", label: "Home" },
{ href: "#about-us", label: "About Us" },
{ href: "#products", label: "Products" },
{ href: "#contact-us", label: "Contact Us" },
];
export const shoes = [
{
thumbnail: thumbnailShoe1,
bigShoe: bigShoe1,
},
{
thumbnail: thumbnailShoe2,
bigShoe: bigShoe2,
},
{
thumbnail: thumbnailShoe3,
bigShoe: bigShoe3,
},
];
export const statistics = [
{ value: '1k+', label: 'Brands' },
{ value: '500+', label: 'Shops' },
{ value: '250k+', label: 'Customers' },
];
export const products = [
{
imgURL: shoe4,
name: "Nike Air Jordan-01",
price: "$200.20",
},
{
imgURL: shoe5,
name: "Nike Air Jordan-10",
price: "$210.20",
},
{
imgURL: shoe6,
name: "Nike Air Jordan-100",
price: "$220.20",
},
{
imgURL: shoe7,
name: "Nike Air Jordan-001",
price: "$230.20",
},
];
export const services = [
{
imgURL: truckFast,
label: "Free shipping",
subtext: "Enjoy seamless shopping with our complimentary shipping service."
},
{
imgURL: shieldTick,
label: "Secure Payment",
subtext: "Experience worry-free transactions with our secure payment options."
},
{
imgURL: support,
label: "Love to help you",
subtext: "Our dedicated team is here to assist you every step of the way."
},
];
export const reviews = [
{
imgURL: customer1,
customerName: 'Morich Brown',
rating: 4.5,
feedback: "The attention to detail and the quality of the product exceeded my expectations. Highly recommended!"
},
{
imgURL: customer2,
customerName: 'Lota Mongeskar',
rating: 4.5,
feedback: "The product not only met but exceeded my expectations. I'll definitely be a returning customer!"
}
];
export const footerLinks = [
{
title: "Products",
links: [
{ name: "Air Force 1", link: "/" },
{ name: "Air Max 1", link: "/" },
{ name: "Air Jordan 1", link: "/" },
{ name: "Air Force 2", link: "/" },
{ name: "Nike Waffle Racer", link: "/" },
{ name: "Nike Cortez", link: "/" },
],
},
{
title: "Help",
links: [
{ name: "About us", link: "/" },
{ name: "FAQs", link: "/" },
{ name: "How it works", link: "/" },
{ name: "Privacy policy", link: "/" },
{ name: "Payment policy", link: "/" },
],
},
{
title: "Get in touch",
links: [
{ name: "customer@nike.com", link: "mailto:customer@nike.com" },
{ name: "+92554862354", link: "tel:+92554862354" },
],
},
];
export const socialMedia = [
{ src: facebook, alt: "facebook logo" },
{ src: twitter, alt: "twitter logo" },
{ src: instagram, alt: "instagram logo" },
];
``` index.css ```css
@import url("https://fonts.googleapis.com/css2?family=Montserrat:wght@100;200;300;400;500;600;700;800;900&family=Palanquin:wght@100;200;300;400;500;600;700&display=swap");
@import url("https://fonts.googleapis.com/css2?family=Palanquin:wght@100;200;300;400;500;600;700&display=swap");
@tailwind base;
@tailwind components;
@tailwind utilities;
* {
margin: 0;
padding: 0;
box-sizing: border-box;
scroll-behavior: smooth;
}
@layer components {
.max-container {
max-width: 1440px;
margin: 0 auto;
}
.input {
@apply sm:flex-1 max-sm:w-full text-base leading-normal text-slate-gray pl-5 max-sm:p-5 outline-none sm:border-none border max-sm:border-slate-gray max-sm:rounded-full;
}
}
@layer utilities {
.padding {
@apply sm:px-16 px-8 sm:py-24 py-12;
}
.padding-x {
@apply sm:px-16 px-8;
}
.padding-y {
@apply sm:py-24 py-12;
}
.padding-l {
@apply sm:pl-16 pl-8;
}
.padding-r {
@apply sm:pr-16 pr-8;
}
.padding-t {
@apply sm:pt-24 pt-12;
}
.padding-b {
@apply sm:pb-24 pb-12;
}
.info-text {
@apply font-montserrat text-slate-gray text-lg leading-7;
}
}
``` script.js ```javascript
// To showcase the demo of dark theme. Copy paste :) ``` tailwind.config.js ```javascript
/** @type {import('tailwindcss').Config} */
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
],
theme: {
fontSize: {
xs: ['12px', '16px'],
sm: ['14px', '20px'],
base: ['16px', '19.5px'],
lg: ['18px', '21.94px'],
xl: ['20px', '24.38px'],
'2xl': ['24px', '29.26px'],
'3xl': ['28px', '50px'],
'4xl': ['48px', '58px'],
'8xl': ['96px', '106px']
},
extend: {
fontFamily: {
palanquin: ['Palanquin', 'sans-serif'],
montserrat: ['Montserrat', 'sans-serif'],
},
colors: {
'primary': "#ECEEFF",
"coral-red": "#FF6452",
"slate-gray": "#6D6D6D",
"pale-blue": "#F5F6FF",
"white-400": "rgba(255, 255, 255, 0.80)"
},
boxShadow: {
'3xl': '0 10px 40px rgba(0, 0, 0, 0.1)'
},
backgroundImage: {
'hero': "url('assets/images/collection-background.svg')",
'card': "url('assets/images/thumbnail-background.svg')",
},
screens: {
"wide": "1440px"
}
},
},
plugins: [],
}
``` 🔗 Links Assets used in the project are here Tailwind Play 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Dive into the world of Tailwind CSS, build a Nike website, and join top-tier organizations like OpenAI, Shopify, and NASA in building stunning apps effortlessly.;nike,nike-website,tailwind,tailwindcss | adrianhajdin/nike_landing_page |
mickasmt/next-saas-stripe-starter;Next SaaS Stripe Starter Start at full speed with SaaS Starter ! Introduction · Installation · Tech Stack + Features · Author · Credits Introduction Empower your next project with the stack of Next.js 14, Prisma, Neon, Auth.js v5, Resend, React Email, Shadcn/ui, and Stripe. All seamlessly integrated with the SaaS Starter to accelerate your development and saas journey. Installation Clone & create this repo locally with the following command: bash
npx create-next-app my-saas-project --example "https://github.com/mickasmt/next-saas-stripe-starter" Install dependencies using pnpm: sh
pnpm install Copy .env.example to .env.local and update the variables. sh
cp .env.example .env.local Start the development server: sh
pnpm run dev [!NOTE] I use npm-check-updates package for update this project. Use this command for update your project: ncu -i --format group [!WARNING] You need update .react-email folder before use pnpm run email . Check the link here if you have the error : renderToReadableStream not found Roadmap [x] ~Fix Vaul drawer for mobile sign in~ [x] ~Update OG image~ [x] ~Add Server Actions on billing form (stripe)~ [x] ~Add Server Actions on user name form~ [x] ~Upgrade Auth.js to v5~ [x] ~Change database platform for Neon (planetscale removes its free plan on April 2024)~ [x] ~Switch subscription plan (enable on stripe dashboard)~ [x] ~Update documentation for installation & configuration~ [x] ~Improve blog section~ [ ] Upgrade eslint to v9 [ ] Add resend for success subscriptions Tech Stack + Features https://github.com/mickasmt/next-saas-stripe-starter/assets/62285783/828a4e0f-30e3-4cfe-96ff-4dfd9cd55124 Frameworks Next.js – React framework for building performant apps with the best developer experience Auth.js – Handle user authentication with ease with providers like Google, Twitter, GitHub, etc. Prisma – Typescript-first ORM for Node.js React Email – Versatile email framework for efficient and flexible email development Platforms Vercel – Easily preview & deploy changes with git Resend – A powerful email framework for streamlined email development Neon – Serverless Postgres with autoscaling, branching, bottomless storage and generous free tier. UI Tailwind CSS – Utility-first CSS framework for rapid UI development Shadcn/ui – Re-usable components built using Radix UI and Tailwind CSS Framer Motion – Motion library for React to animate components with ease Lucide – Beautifully simple, pixel-perfect icons next/font – Optimize custom fonts and remove external network requests for improved performance ImageResponse – Generate dynamic Open Graph images at the edge Hooks and Utilities useIntersectionObserver – React hook to observe when an element enters or leaves the viewport useLocalStorage – Persist data in the browser's local storage useScroll – React hook to observe scroll position ( example ) nFormatter – Format numbers with suffixes like 1.2k or 1.2M capitalize – Capitalize the first letter of a string truncate – Truncate a string to a specified length use-debounce – Debounce a function call / state update Code Quality TypeScript – Static type checker for end-to-end typesafety Prettier – Opinionated code formatter for consistent code style ESLint – Pluggable linter for Next.js and TypeScript Miscellaneous Vercel Analytics – Track unique visitors, pageviews, and more in a privacy-friendly way Author Created by @miickasmt in 2023, released under the MIT license . Credits This project was inspired by shadcn's Taxonomy , Steven Tey’s Precedent , and Antonio Erdeljac's Next 13 AI SaaS . Shadcn ( @shadcn ) Steven Tey ( @steventey ) Antonio Erdeljac ( @YTCodeAntonio );An open-source SaaS Starter built using Next.js 14, Prisma, Neon, Auth.js v5, Resend, React Email, Shadcn/ui, Stripe and Server Actions.;authjs,nextjs14,prisma,react,react-email,resend,shadcn-ui,stripe,pricing-table,server-actions | mickasmt/next-saas-stripe-starter |
Link-AGI/AutoAgents;AutoAgents: A Framework for Automatic Agent Generation Generate different roles for GPTs to form a collaborative entity for complex tasks. AutoAgents is an experimental open-source application for an Automatic Agents Generation Experiment based on LLM. This program, driven by LLM, autonomously generates multi-agents to achieve whatever goal you set. :boom: Updates 2024.04.16 : We're super excited to announce that our paper got accepted at IJCAI 2024. More updates will be coming soon! 2023.09.31 : 📝 We're excited to share our paper AutoAgents: A Framework for Automatic Agent Generation related to this repository. 2023.08.30 : 🚀 Adding a custom agent collection, AgentBank, allows you to add custom agents. 🚀 Features Planner : Determines the expert roles to be added and the specific execution plan according to the problem. Tools : The set of tools that can be used, currently only compatible with the search tools. Observers : Responsible for reflecting on whether the planner and the results in the execution process are reasonable, currently including reflection checks on Agents, Plan, and Action. Agents : Expert role agents generated by the planner, including name, expertise, tools used, and LLM enhancement. Plan : The execution plan is composed of the generated expert roles, each step of the execution plan has at least one expert role agent. Actions : The specific actions of the expert roles in the execution plan, such as calling tools or outputting results. Demo Online demo:
- Demo / HuggingFace Spaces Video demo:
- Rumor Verification Gluttonous Snake Installation and Usage Installation bash
git clone https://github.com/LinkSoul-AI/AutoAgents
cd AutoAgents
python setup.py install Configuration Configure your OPENAI_API_KEY in any of config/key.yaml / config/config.yaml / env Priority order: config/key.yaml > config/config.yaml > env ```bash Copy the configuration file and make the necessary modifications. cp config/config.yaml config/key.yaml
``` | Variable Name | config/key.yaml | env |
| ------------------------------------------ | ----------------------------------------- | ----------------------------------------------- |
| OPENAI_API_KEY # Replace with your own key | OPENAI_API_KEY: "sk-..." | export OPENAI_API_KEY="sk-..." |
| OPENAI_API_BASE # Optional | OPENAI_API_BASE: "https:// /v1" | export OPENAI_API_BASE="https:// /v1" | Usage Commandline mode: python
python main.py --mode commandline --llm_api_key YOUR_OPENAI_API_KEY --serpapi_key YOUR_SERPAPI_KEY --idea "Is LK-99 really a room temperature superconducting material?" Websocket service mode: python
python main.py --mode service --host "127.0.0.1" --port 9000 Docker Build docker image:
```bash
IMAGE="linksoul.ai/autoagents"
VERSION=1.0
docker build -f docker/Dockerfile -t "${IMAGE}:${VERSION}" .
```
- Start docker container:
```bash
docker run -it --rm -p 7860:7860 "${IMAGE}:${VERSION}"
```
- Open http://127.0.0.1:7860 in the browser. Contributing AutoAgents is dedicated to creating a cutting-edge automated multi-agent environment for large language models. We are actively seeking enthusiastic collaborators to embark with us on this thrilling and innovative journey. This project exists thanks to all the people who contribute: How Can You Contribute? Issue Reporting and Pull Requests : Encountering difficulties with AutoAgents? Feel free to raise the issue in English. Additionally, you're welcome to take initiative by resolving these issues yourself. Simply request to be assigned the issue, and upon resolution, submit a pull request (PR) with your solution. Software Development Contributions : As an engineer, your skills can significantly enhance AutoAgents. We are in constant pursuit of skilled developers to refine, optimize, and expand our framework, enriching our feature set and devising new modules. Content Creation for Documentation and Tutorials : If writing is your forte, join us in improving our documentation and developing tutorials or blog posts. Your contribution will make AutoAgents more user-friendly and accessible to a diverse audience. Innovative Application Exploration : Intrigued by the prospects of multi-agent systems? If you're keen to experiment with AutoAgents, we're excited to support your endeavors and curious to see your innovative creations. User Feedback and Strategic Suggestions : We highly value user input. Engage with AutoAgents and share your feedback. Your insights are crucial for ongoing enhancements, ensuring our framework's excellence and relevance. Contact Information If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions! Email: gy.chen@foxmail.com, ymshi@linksoul.ai GitHub Issues: For more technical inquiries, you can also create a new issue in our GitHub repository . We will respond to all questions within 2-3 business days. License MIT license Citation If you find our work and this repository useful, please consider giving a star :star: and citation :beer:: bibtex
@article{chen2023auto,
title={AutoAgents: The Automatic Agents Generation Framework},
author={Chen, Guangyao and Dong, Siwei and Shu, Yu and Zhang, Ge and Jaward, Sesay and Börje, Karlsson and Fu, Jie and Shi, Yemin},
journal={arXiv preprint},
year={2023}
} Wechat Group Acknowledgements The system , action_bank and role_bank of this code base is built using MetaGPT Icons in the framework made by Darius Dan, Freepik, kmg design, Flat Icons, Vectorslab from FlatIcon;[IJCAI 2024] Generate different roles for GPTs to form a collaborative entity for complex tasks.;[] | Link-AGI/AutoAgents |
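Going back to the websocket service mode shown in the Usage section above (`python main.py --mode service --host "127.0.0.1" --port 9000`), a minimal client sketch is below. The JSON payload shape is an assumption; inspect the repository's service handler for the fields it actually expects:

```python
# Minimal sketch of a client for the AutoAgents websocket service mode.
# The message format is an assumption; check the repo's service code for the real schema.
# pip install websockets
import asyncio
import json

import websockets


async def main() -> None:
    uri = "ws://127.0.0.1:9000"
    async with websockets.connect(uri) as ws:
        # Hypothetical request: send the idea/task for the generated agents to tackle.
        await ws.send(json.dumps({"idea": "Is LK-99 really a room temperature superconducting material?"}))
        # Print whatever the service streams back until it closes the connection.
        async for message in ws:
            print(message)


if __name__ == "__main__":
    asyncio.run(main())
```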
X-PLUG/mPLUG-DocOwl;The Powerful Multi-modal LLM Family
for OCR-free Document Understanding Alibaba Group 📢 News 🔥🔥🔥 [2024.5.08] We have released the training code of DocOwl1.5 supported by DeepSpeed. You can now finetune a stronger model based on DocOwl1.5! 🔥🔥🔥 [2024.4.26] We release the arxiv paper of TinyChart , a SOTA 3B Multimodal LLM for Chart Understanding with Program-of-Throught ability (ChartQA: 83.6 > Gemin-Ultra 80.8 > GPT4V 78.5). The demo of TinyChart is available on HuggingFace 🤗. Both codes, models and data are released in TinyChart . 🔥🔥🔥 [2024.4.3] We build demos of DocOwl1.5 on both ModelScope and HuggingFace 🤗, supported by the DocOwl1.5-Omni. The source codes of launching a local demo are also released in DocOwl1.5 . 🔥🔥 [2024.3.28] We release the training data (DocStruct4M, DocDownstream-1.0, DocReason25K), codes and models (DocOwl1.5-stage1, DocOwl1.5, DocOwl1.5-Chat, DocOwl1.5-Omni) of mPLUG-DocOwl 1.5 on both HuggingFace 🤗 and ModelScope . 🔥 [2024.3.20] We release the arxiv paper of mPLUG-DocOwl 1.5 , a SOTA 8B Multimodal LLM on OCR-free Document Understanding (DocVQA 82.2, InfoVQA 50.7, ChartQA 70.2, TextVQA 68.6). [2024.01.13] Our Scientific Diagram Analysis dataset M-Paper has been available on both HuggingFace 🤗 and ModelScope , containing 447k high-resolution diagram images and corresponding paragraph analysis. [2023.10.13] Training data, models of mPLUG-DocOwl / UReader has been open-sourced. [2023.10.10] Our paper UReader is accepted by EMNLP 2023. [2023.07.10] The demo of mPLUG-DocOwl on ModelScope is avaliable. [2023.07.07] We release the technical report and evaluation set of mPLUG-DocOwl. 🤖 Models mPLUG-DocOwl1.5 (Arxiv 2024) - mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding TinyChart (Arxiv 2024) - TinyChart: Efficient Chart Understanding with
Visual Token Merging and Program-of-Thoughts Learning mPLUG-PaperOwl (Arxiv 2023) - mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model UReader (EMNLP 2023) - UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model mPLUG-DocOwl (Arxiv 2023) - mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding 📺 Online Demo Note: The demo of HuggingFace is not as stable as ModelScope because the GPU in ZeroGPU Spaces of HuggingFace is dynamically assigned. 📖 DocOwl 1.5 🤗 HuggingFace Space ModelScope Space 📈 TinyChart-3B 🤗 HuggingFace Space 🌰 Cases Related Projects mPLUG . mPLUG-2 . mPLUG-Owl;mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding;chart-understanding,document-understanding,mllm,multimodal,multimodal-large-language-models,table-understanding | X-PLUG/mPLUG-DocOwl |
animotionjs/animotion;https://github.com/animotionjs/animotion/assets/38083522/da098a66-d2bb-4109-bc56-510894196d96 Animotion Animotion is a presentational framework for creating beautiful slides and visualizing ideas with code using Svelte , Reveal.js and Tailwind CSS . Setup The quickest way to get started with Animotion. npm create @animotion You can run npx @animotion/create on Windows if that doesn't work. Docs To learn how to use Animotion read the Animotion documentation . Examples You can look at the examples repository if you want to understand how I use Animotion to visualize ideas with code. You can try the examples in the browser thanks to SvelteLab . Contributing If you want to contribute to the project you can read the contributing guide .;🪄 Create beautiful presentations with Svelte;presentation,reveal,slides,svelte,tailwindcss,animotion | animotionjs/animotion |
mkkellogg/GaussianSplats3D;3D Gaussian splatting for Three.js Three.js-based implemetation of a renderer for 3D Gaussian Splatting for Real-Time Radiance Field Rendering , a technique for generating 3D scenes from 2D images. Their project is CUDA-based and needs to run natively on your machine, but I wanted to build a viewer that was accessible via the web. The 3D scenes are stored in a format similar to point clouds and can be viewed, navigated, and interacted with in real-time. This renderer will work with the .ply files generated by the INRIA project, standard .splat files, or my own custom .ksplat files, which are a trimmed-down and compressed version of the original .ply files. When I started, web-based viewers were already available -- A WebGL-based viewer from antimatter15 and a WebGPU viewer from cvlab-epfl -- However no Three.js version existed. I used those versions as a starting point for my initial implementation, but as of now this project contains all my own code. Highlights Rendering is done entirely through Three.js Code is organized into modern ES modules Built-in viewer is self-contained so very little code is necessary to load and view a scene Viewer can import .ply files, .splat files, or my custom compressed .ksplat files Users can convert .ply or .splat files to the .ksplat file format Allows a Three.js scene or object group to be rendered along with the splats Built-in WebXR support Supports 1st and 2nd degree spherical harmonics for view-dependent effects Focus on optimization: Splats culled prior to sorting & rendering using a custom octree WASM splat sort: Implemented in C++ using WASM SIMD instructions Partially GPU accelerated splat sort: Uses transform feedback to pre-calculate splat distances Known issues Splat sort runs on the CPU – would be great to figure out a GPU-based approach Artifacts are visible when you move or rotate too fast (due to CPU-based splat sort) Sub-optimal performance on mobile devices Custom .ksplat file format still needs work, especially around compression The default, integer based splat sort does not work well for larger scenes. In that case a value of false for the integerBasedSort viewer parameter can force a slower, floating-point based sort Future work This is still very much a work in progress! There are several things that still need to be done:
- Improve the method by which splat data is stored in textures
- Continue optimizing CPU-based splat sort - maybe try an incremental sort of some kind?
- Add editing mode, allowing users to modify scene and export changes
- Support very large scenes Online demo https://projects.markkellogg.org/threejs/demo_gaussian_splats_3d.php Controls Mouse
- Left click to set the focal point
- Left click and drag to orbit around the focal point
- Right click and drag to pan the camera and focal point Keyboard
- C Toggles the mesh cursor, showing the intersection point of a mouse-projected ray and the splat mesh I Toggles an info panel that displays debugging info: Camera position Camera focal point/look-at point Camera up vector Mesh cursor position Current FPS Renderer window size Ratio of rendered splats to total splats Last splat sort duration U Toggles a debug object that shows the orientation of the camera controls. It includes a green arrow representing the camera's orbital axis and a white square representing the plane at which the camera's elevation angle is 0. Left arrow Rotate the camera's up vector counter-clockwise Right arrow Rotate the camera's up vector clockwise P Toggle point-cloud mode, where each splat is rendered as a filled circle = Increase splat scale - Decrease splat scale O Toggle orthographic mode Building from source and running locally Navigate to the code directory and run npm install Next run the build. For Linux & Mac OS systems run: npm run build For Windows I have added a Windows-compatible version of the build command: npm run build-windows To view the demo scenes locally run npm run demo The demo will be accessible locally at http://127.0.0.1:8080/index.html . You will need to download the data for the demo scenes and extract them into <code directory>/build/demo/assets/data The demo scene data is available here: https://projects.markkellogg.org/downloads/gaussian_splat_data.zip Installing as an NPM package If you don't want to build the library from source, it is also available as an NPM package. The NPM package does not come with the source code or demos that are available in the source repository. To install, run the following command: npm install @mkkellogg/gaussian-splats-3d Basic Usage To run the built-in viewer: ```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d'; const viewer = new GaussianSplats3D.Viewer({
'cameraUp': [0, -1, -0.6],
'initialCameraPosition': [-1, -4, 6],
'initialCameraLookAt': [0, 4, 0]
});
viewer.addSplatScene(' ', {
'splatAlphaRemovalThreshold': 5,
'showLoadingUI': true,
'position': [0, 1, 0],
'rotation': [0, 0, 0, 1],
'scale': [1.5, 1.5, 1.5]
})
.then(() => {
viewer.start();
}); ```
Viewer parameters | Parameter | Purpose
| --- | ---
| cameraUp | The natural 'up' vector for viewing the scene (only has an effect when used with orbit controls and when the viewer uses its own camera). Serves as the axis around which the camera will orbit, and is used to determine the scene's orientation relative to the camera.
| initialCameraPosition | The camera's initial position (only used when the viewer uses its own camera).
| initialCameraLookAt | The initial focal point of the camera and center of the camera's orbit (only used when the viewer uses its own camera). Parameters for addSplatScene() | Parameter | Purpose
| --- | ---
| format | Force the loader to assume the specified file format when loading a splat scene. This is useful when loading from a URL where there is no file extension. Valid values are defined in the SceneFormat enum: Ply , Splat , and KSplat .
| splatAlphaRemovalThreshold | Tells addSplatScene() to ignore any splats with an alpha less than the specified value (valid range: 0 - 255). Defaults to 1 .
| showLoadingUI | Displays a loading spinner and/or loading progress bar while the scene is loading. Defaults to true .
| position | Position of the scene, acts as an offset from its default position. Defaults to [0, 0, 0] .
| rotation | Rotation of the scene represented as a quaternion, defaults to [0, 0, 0, 1] (identity quaternion).
| scale | Scene's scale, defaults to [1, 1, 1] .
| progressiveLoad | Progressively load the scene's splat data and allow the scene to be rendered and viewed as the splats are loaded. Option is only valid for addSplatScene() , and not for addSplatScenes() . Viewer can also load multiple scenes simultaneously with the addSplatScenes() function:
```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d'; viewer.addSplatScenes([{
'path': ' ',
'splatAlphaRemovalThreshold': 20
},
{
'path': ' ',
'rotation': [-0.14724434, -0.0761755, 0.1410657, 0.976020],
'scale': [1.5, 1.5, 1.5],
'position': [-3, -2, -3.2]
}
])
.then(() => {
viewer.start();
});
``` The addSplatScene() and addSplatScenes() methods will accept the original .ply files, standard .splat files, and my custom .ksplat files. Integrating THREE.js scenes You can integrate your own Three.js scene into the viewer if you want rendering to be handled for you. Just pass a Three.js scene object as the threeScene parameter to the constructor:
```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d';
import * as THREE from 'three'; const threeScene = new THREE.Scene();
const boxColor = 0xBBBBBB;
const boxGeometry = new THREE.BoxGeometry(2, 2, 2);
const boxMesh = new THREE.Mesh(boxGeometry, new THREE.MeshBasicMaterial({'color': boxColor}));
boxMesh.position.set(3, 2, 2);
threeScene.add(boxMesh); const viewer = new GaussianSplats3D.Viewer({
'threeScene': threeScene,
});
viewer.addSplatScene(' ')
.then(() => {
viewer.start();
});
``` Currently this will only work for objects that write to the depth buffer (e.g. standard opaque objects). Supporting transparent objects will be more challenging :) A "drop-in" mode for the viewer is also supported. The DropInViewer class encapsulates Viewer and can be added to a Three.js scene like any other renderable:
```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d';
import * as THREE from 'three'; const threeScene = new THREE.Scene();
const viewer = new GaussianSplats3D.DropInViewer({
'gpuAcceleratedSort': true
});
viewer.addSplatScenes([{
'path': ' '
'splatAlphaRemovalThreshold': 5
},
{
'path': ' ',
'rotation': [0, -0.857, -0.514495, 6.123233995736766e-17],
'scale': [1.5, 1.5, 1.5],
'position': [0, -2, -1.2]
}
]);
threeScene.add(viewer);
```

Advanced options

The viewer allows for various levels of customization via constructor parameters. You can control when its update() and render() methods are called by passing false for the selfDrivenMode parameter and then calling those methods whenever/wherever you decide is appropriate. You can also use your own camera controls, as well as your own instance of a Three.js Renderer or Camera. The sample below shows all of these options:

```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d';
import * as THREE from 'three'; const renderWidth = 800;
const renderHeight = 600; const rootElement = document.createElement('div');
rootElement.style.width = renderWidth + 'px';
rootElement.style.height = renderHeight + 'px';
document.body.appendChild(rootElement); const renderer = new THREE.WebGLRenderer({
antialias: false
});
renderer.setSize(renderWidth, renderHeight);
rootElement.appendChild(renderer.domElement); const camera = new THREE.PerspectiveCamera(65, renderWidth / renderHeight, 0.1, 500);
camera.position.copy(new THREE.Vector3().fromArray([-1, -4, 6]));
camera.up = new THREE.Vector3().fromArray([0, -1, -0.6]).normalize();
camera.lookAt(new THREE.Vector3().fromArray([0, 4, -0])); const viewer = new GaussianSplats3D.Viewer({
'selfDrivenMode': false,
'renderer': renderer,
'camera': camera,
'useBuiltInControls': false,
'ignoreDevicePixelRatio': false,
'gpuAcceleratedSort': true,
'enableSIMDInSort': true,
'sharedMemoryForWorkers': true,
'integerBasedSort': true,
'halfPrecisionCovariancesOnGPU': true,
'dynamicScene': false,
'webXRMode': GaussianSplats3D.WebXRMode.None,
'renderMode': GaussianSplats3D.RenderMode.OnChange,
'sceneRevealMode': GaussianSplats3D.SceneRevealMode.Instant,
'antialiased': false,
'focalAdjustment': 1.0,
'logLevel': GaussianSplats3D.LogLevel.None,
'sphericalHarmonicsDegree': 0,
'enableOptionalEffects': false,
'plyInMemoryCompressionLevel': 2,
'freeIntermediateSplatData': false
});
viewer.addSplatScene(' ')
.then(() => {
requestAnimationFrame(update);
});
```

Since `selfDrivenMode` is false, it is up to the developer to call the `update()` and `render()` methods on the `Viewer` class:

```javascript
function update() {
requestAnimationFrame(update);
viewer.update();
viewer.render();
}
```

Advanced Viewer parameters
| Parameter | Purpose
| --- | ---
| selfDrivenMode | If false , tells the viewer that you will manually call its update() and render() methods. Defaults to true .
| renderer | Pass an instance of a Three.js Renderer to the viewer, otherwise it will create its own. Defaults to undefined .
| camera | Pass an instance of a Three.js Camera to the viewer, otherwise it will create its own. Defaults to undefined .
| useBuiltInControls | Tells the viewer to use its own camera controls. Defaults to true .
| ignoreDevicePixelRatio | Tells the viewer to pretend the device pixel ratio is 1, which can boost performance on devices where it is larger, at a small cost to visual quality. Defaults to false .
| gpuAcceleratedSort | Tells the viewer to use a partially GPU-accelerated approach to sorting splats. Currently this means pre-computation of splat distances from the camera is performed on the GPU. It is recommended that this only be set to true when sharedMemoryForWorkers is also true . Defaults to false on mobile devices, true otherwise.
| enableSIMDInSort | Enable the usage of SIMD WebAssembly instructions for the splat sort. Default is true .
| sharedMemoryForWorkers | Tells the viewer to use shared memory via a SharedArrayBuffer to transfer data to and from the sorting web worker. If set to false , it is recommended that gpuAcceleratedSort be set to false as well. Defaults to true .
| integerBasedSort | Tells the sorting web worker to use the integer versions of relevant data to compute the distance of splats from the camera. Since integer arithmetic is faster than floating point, this reduces sort time. However it can result in integer overflows in larger scenes so it should only be used for small scenes. Defaults to true .
| halfPrecisionCovariancesOnGPU | Tells the viewer to use 16-bit floating point values when storing splat covariance data in textures, instead of 32-bit. Defaults to false .
| dynamicScene | Tells the viewer to not make any optimizations that depend on the scene being static. Additionally all splat data retrieved from the viewer's splat mesh will not have their respective scene transform applied to them by default.
| webXRMode | Tells the viewer whether or not to enable built-in Web VR or Web AR. Valid values are defined in the WebXRMode enum: None , VR , and AR . Defaults to None .
| renderMode | Controls when the viewer renders the scene. Valid values are defined in the RenderMode enum: Always , OnChange , and Never . Defaults to Always .
| sceneRevealMode | Controls the fade-in effect used when the scene is loaded. Valid values are defined in the SceneRevealMode enum: Default , Gradual , and Instant . Default results in a nice, slow fade-in effect for progressively loaded scenes, and a fast fade-in for non progressively loaded scenes. Gradual will force a slow fade-in for all scenes. Instant will force all loaded scene data to be immediately visible.
| antialiased | When true, will perform additional steps during rendering to address artifacts caused by the rendering of gaussians at substantially different resolutions than that at which they were rendered during training. This will only work correctly for models that were trained using a process that utilizes this compensation calculation. For more details: https://github.com/nerfstudio-project/gsplat/pull/117, https://github.com/graphdeco-inria/gaussian-splatting/issues/294#issuecomment-1772688093
| focalAdjustment | Hacky, non-scientific parameter for tweaking focal length related calculations. For scenes with very small gaussians & small details, increasing this value can help improve visual quality. Default value is 1.0.
| logLevel | Verbosity of the console logging. Defaults to GaussianSplats3D.LogLevel.None .
| sphericalHarmonicsDegree | Degree of spherical harmonics to utilize in rendering splats (assuming the data is present in the splat scene). Valid values are 0, 1, or 2. Default value is 0.
| enableOptionalEffects | When true, allows for usage of extra properties and attributes during rendering for effects such as opacity adjustment. Default is false for performance reasons. These properties are separate from transform properties (scale, rotation, position) that are enabled by the dynamicScene parameter.
| plyInMemoryCompressionLevel | Level to compress .ply files when loading them for direct rendering (not exporting to .ksplat ). Valid values are the same as .ksplat compression levels (0, 1, or 2). Default is 2.
| freeIntermediateSplatData | When true, the intermediate splat data that is the result of decompressing splat buffer(s) and used to populate data textures will be freed. This reduces memory usage, but if that data needs to be modified it will need to be re-populated from the splat buffer(s). Defaults to `false`. Creating KSPLAT files To convert a .ply or .splat file into the stripped-down and compressed .ksplat format, there are several options. The easiest method is to use the UI in the main demo page at http://127.0.0.1:8080/index.html . If you want to run the conversion programmatically, run the following in a browser: ```javascript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d'; const compressionLevel = 1;
const splatAlphaRemovalThreshold = 5; // out of 255
const sphericalHarmonicsDegree = 1;
GaussianSplats3D.PlyLoader.loadFromURL(' ',
compressionLevel,
splatAlphaRemovalThreshold,
sphericalHarmonicsDegree)
.then((splatBuffer) => {
GaussianSplats3D.KSplatLoader.downloadFile(splatBuffer, 'converted_file.ksplat');
}); ``
Both of the above methods will prompt your browser to automatically start downloading the converted .ksplat` file. The third option is to use the included nodejs script: node util/create-ksplat.js [path to .PLY or .SPLAT] [output file] [compression level = 0] [alpha removal threshold = 1] Currently supported values for compressionLevel are 0 , 1 , or 2 . 0 means no compression and 1 means compression of scale, rotation, position, and spherical harmonics coefficient values from 32-bit to 16-bit. 2 is similar to 1 except spherical harmonics coefficients are compressed to 8-bit. CORS issues and SharedArrayBuffer By default, the Viewer class uses shared memory (via a typed array backed by a SharedArrayBufffer ) to communicate with the web worker that sorts the splats. This mechanism presents a potential security issue that is outlined here: https://web.dev/articles/cross-origin-isolation-guide. Shared memory can be disabled by passing false for the sharedMemoryForWorkers parameter to the constructor for Viewer , but if you want to leave it enabled, a couple of extra CORS HTTP headers need to be present in the response from the server that is sent when loading the application. Without those headers set, you might see an error like the following in the debug console: "DOMException: Failed to execute 'postMessage' on 'DedicatedWorkerGlobalScope': SharedArrayBuffer transfer requires self.crossOriginIsolated." For the local demo I created a simple HTTP server (util/server.js) that sets those headers: response.setHeader("Cross-Origin-Opener-Policy", "same-origin");
response.setHeader("Cross-Origin-Embedder-Policy", "require-corp"); CORS with Apache For Apache, you can edit the .htaccess file to allow CORS by adding the lines: Header add Cross-Origin-Opener-Policy "same-origin"
Header add Cross-Origin-Embedder-Policy "require-corp" Additionally you may need to require a secure connection to your server by redirecting all access via http:// to https:// . In Apache this can be done by updating the .htaccess file with the following lines: RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L] CORS with Vite For Vite, one popular option is to install the vite-plugin-cross-origin-isolation plugin via npm and then add the following to your vite.config.js file. ```javascript
import { defineConfig } from "vite"; export default defineConfig({
plugins: [
{
name: "configure-response-headers",
configureServer: (server) => {
server.middlewares.use((_req, res, next) => {
res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
next();
});
},
},
],
});
```
There are other ways to configure Vite to handle this referenced in issue #41 .;Three.js-based implementation of 3D Gaussian splatting;gaussian-splatting,three-js,webgl,threejs,3d-gaussian-splatting,javascript | mkkellogg/GaussianSplats3D |
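If you would rather serve the GaussianSplats3D demo with a quick Python server instead of the included util/server.js or the Apache/Vite configurations above, here is a minimal sketch that sets the same cross-origin isolation headers (the port and serving directory are arbitrary choices):

```python
# Minimal static file server adding the COOP/COEP headers needed for
# SharedArrayBuffer (i.e. when sharedMemoryForWorkers is left enabled).
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer


class IsolatedHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header("Cross-Origin-Opener-Policy", "same-origin")
        self.send_header("Cross-Origin-Embedder-Policy", "require-corp")
        super().end_headers()


if __name__ == "__main__":
    # Run this from the directory you want to serve, e.g. the built demo folder.
    ThreadingHTTPServer(("127.0.0.1", 8080), IsolatedHandler).serve_forever()
```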
Yujun-Shi/DragDiffusion;DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing Yujun Shi Chuhui Xue Jun Hao Liew Jiachun Pan Hanshu Yan Wenqing Zhang Vincent Y. F. Tan Song Bai Disclaimer This is a research project, NOT a commercial product. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it in a responsible manner. The developers do not assume any responsibility for potential misuse by users. News and Update [Jan 29th] Update to support diffusers==0.24.0! [Oct 23rd] Code and data of DragBench are released! Please check README under "drag_bench_evaluation" for details. [Oct 16th] Integrate FreeU when dragging generated image. [Oct 3rd] Speeding up LoRA training when editing real images. ( Now only around 20s on A100! ) [Sept 3rd] v0.1.0 Release. Enable Dragging Diffusion-Generated Images. Introducing a new guidance mechanism that greatly improve quality of dragging results. (Inspired by MasaCtrl ) Enable Dragging Images with arbitrary aspect ratio Adding support for DPM++Solver (Generated Images) [July 18th] v0.0.1 Release. Integrate LoRA training into the User Interface. No need to use training script and everything can be conveniently done in UI! Optimize User Interface layout. Enable using better VAE for eyes and faces (See this ) [July 8th] v0.0.0 Release. Implement Basic function of DragDiffusion Installation It is recommended to run our code on a Nvidia GPU with a linux system. We have not yet tested on other configurations. Currently, it requires around 14 GB GPU memory to run our method. We will continue to optimize memory efficiency To install the required libraries, simply run the following command: conda env create -f environment.yaml
conda activate dragdiff Run DragDiffusion To start with, in command line, run the following to start the gradio user interface: python3 drag_ui.py You may check our GIF above that demonstrate the usage of UI in a step-by-step manner. Basically, it consists of the following steps: Case 1: Dragging Input Real Images 1) train a LoRA Drop our input image into the left-most box. Input a prompt describing the image in the "prompt" field Click the "Train LoRA" button to train a LoRA given the input image 2) do "drag" editing Draw a mask in the left-most box to specify the editable areas. Click handle and target points in the middle box. Also, you may reset all points by clicking "Undo point". Click the "Run" button to run our algorithm. Edited results will be displayed in the right-most box. Case 2: Dragging Diffusion-Generated Images 1) generate an image Fill in the generation parameters (e.g., positive/negative prompt, parameters under Generation Config & FreeU Parameters). Click "Generate Image". 2) do "drag" on the generated image Draw a mask in the left-most box to specify the editable areas Click handle points and target points in the middle box. Click the "Run" button to run our algorithm. Edited results will be displayed in the right-most box. License Code related to the DragDiffusion algorithm is under Apache 2.0 license. BibTeX If you find our repo helpful, please consider leaving a star or cite our paper :) bibtex
@article{shi2023dragdiffusion,
title={DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing},
author={Shi, Yujun and Xue, Chuhui and Pan, Jiachun and Zhang, Wenqing and Tan, Vincent YF and Bai, Song},
journal={arXiv preprint arXiv:2306.14435},
year={2023}
} Contact For any questions on this project, please contact Yujun (shi.yujun@u.nus.edu) Acknowledgement This work is inspired by the amazing DragGAN . The lora training code is modified from an example of diffusers. Image samples are collected from unsplash , pexels , pixabay . Finally, a huge shout-out to all the amazing open source diffusion models and libraries. Related Links Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold MasaCtrl: Tuning-free Mutual Self-Attention Control for Consistent Image Synthesis and Editing Emergent Correspondence from Image Diffusion DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models FreeDrag: Point Tracking is Not You Need for Interactive Point-based Image Editing Common Issues and Solutions 1) For users struggling in loading models from huggingface due to internet constraint, please 1) follow this links and download the model into the directory "local_pretrained_models"; 2) Run "drag_ui.py" and select the directory to your pretrained model in "Algorithm Parameters -> Base Model Config -> Diffusion Model Path".;[CVPR2024, Highlight] Official code for DragDiffusion;artificial-intelligence,diffusion-models,draggan,image-editing,dragdiffusion,cvpr2024 | Yujun-Shi/DragDiffusion |
showlab/Show-1;🎬Show-1 David Junhao Zhang * Jay Zhangjie Wu * Jia-Wei Liu * Rui Zhao Lingmin Ran Yuchao Gu Difei Gao Mike Zheng Shou ✉ Show Lab, National University of Singapore * Equal Contribution ✉ Corresponding Author -----------------
![](https://img.shields.io/github/stars/showlab/Show-1?style=social)
[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2Fshowlab%2FShow-1&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)
### [Project Page](https://showlab.github.io/Show-1) | [arXiv](https://arxiv.org/abs/2309.15818) | [PDF](https://arxiv.org/abs/2309.15818) | [🤗 Space](https://huggingface.co/spaces/showlab/Show-1) | [Colab](https://colab.research.google.com/github/camenduru/Show-1-colab/blob/main/Show_1_steps_colab.ipynb) | [Replicate Demo](https://replicate.com/cjwbw/show-1)
## News
- [10/12/2023] Code and weights released!
## Setup
### Requirements
```shell
pip install -r requirements.txt
```
Note: PyTorch 2.0+ is highly recommended for more efficiency and speed on GPUs.
### Weights
All model weights for Show-1 are available on [Show Lab's HuggingFace page](https://huggingface.co/showlab): Base Model ([show-1-base](https://huggingface.co/showlab/show-1-base)), Interpolation Model ([show-1-interpolation](https://huggingface.co/showlab/show-1-interpolation)), and Super-Resolution Model ([show-1-sr1](https://huggingface.co/showlab/show-1-sr1), [show-1-sr2](https://huggingface.co/showlab/show-1-sr2)).
Note that our [show-1-sr1](https://huggingface.co/showlab/show-1-sr1) incorporates the image super-resolution model from DeepFloyd-IF, [DeepFloyd/IF-II-L-v1.0](https://huggingface.co/DeepFloyd/IF-II-L-v1.0), to upsample the first frame of the video. To obtain the respective weights, follow their [official instructions](https://huggingface.co/DeepFloyd/IF-II-L-v1.0).
## Usage
To generate a video from a text prompt, run the command below:
```bash
python run_inference.py
```
By default, the videos generated from each stage are saved to the `outputs` folder in the GIF format. The script will automatically fetch the necessary model weights from HuggingFace. If you prefer, you can manually download the weights using git lfs and then update the `pretrained_model_path` to point to your local directory. Here's how:
```bash
git lfs install
git clone https://huggingface.co/showlab/show-1-base
```
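As an alternative to the git lfs clone above, the same weights can be fetched with the huggingface_hub client; the local directory here is just an example path:

```python
# Sketch: download the Show-1 base weights without git lfs.
# pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="showlab/show-1-base",
    local_dir="weights/show-1-base",  # example location; point pretrained_model_path here
)
print(f"Weights downloaded to: {local_dir}")
```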
A demo is also available on the [`showlab/Show-1` 🤗 Space](https://huggingface.co/spaces/showlab/Show-1).
You can use the gradio demo locally by running:
```bash
python app.py
```
## Demo Video
https://github.com/showlab/Show-1/assets/55792387/32242135-25a5-4757-b494-91bf314581e8
## Citation
If you make use of our work, please cite our paper.
```bibtex
@article{zhang2023show,
title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
author={Zhang, David Junhao and Wu, Jay Zhangjie and Liu, Jia-Wei and Zhao, Rui and Ran, Lingmin and Gu, Yuchao and Gao, Difei and Shou, Mike Zheng},
journal={arXiv preprint arXiv:2309.15818},
year={2023}
}
```
## Commercial Use
We are working with the university (NUS) to figure out the exact paperwork needed for approving commercial use request. In the meantime, to speed up the process, we'd like to solicit intent of interest from community and later on we will process these requests with high priority. If you are keen, can you kindly email us at mike.zheng.shou@gmail.com and junhao.zhang@u.nus.edu to answer the following questions, if possible:
- Who are you / your company?
- What is your product / application?
- How Show-1 can benefit your product?
## Shoutouts
- This work heavily builds on [diffusers](https://github.com/huggingface/diffusers), [deep-floyd/IF](https://github.com/deep-floyd/IF), [modelscope](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis), and [zeroscope](https://huggingface.co/cerspense/zeroscope_v2_576w). Thanks for open-sourcing!
- Thanks [@camenduru](https://github.com/camenduru) for providing the CoLab demo and [@chenxwh](https://github.com/chenxwh) for providing replicate demo.;Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation;[] | showlab/Show-1 |
THU-LYJ-Lab/T3Bench;📃 Paper • 🌐 Project Page T 3 Bench: Benchmarking Current Progress in Text-to-3D Generation T 3 Bench is the first comprehensive text-to-3D benchmark containing diverse text prompts of three increasing complexity levels that are specially designed for 3D generation (300 prompts in total). To assess both the subjective quality and the text alignment, we propose two automatic metrics based on multi-view images produced by the 3D contents. The quality metric combines multi-view text-image scores and regional convolution to detect quality and view inconsistency. The alignment metric uses multi-view captioning and Large Language Model (LLM) evaluation to measure text-3D consistency. Both metrics closely correlate with different dimensions of human judgments, providing a paradigm for efficiently evaluating text-to-3D models. 🔥 Updates [2023/10/24] We have released mesh results of all prompt sets and methods! Please check here to download. Evaluate on T 3 Bench Environment Setup We adopt the implementation of ThreeStudio to test the current text-to-3D methods. Please first follow the instructions of ThreeStudio to setup the generation environment. Then install the following packages used for evaluation: shell
pip install -r requirements.txt Note that we use a slightly modified version of ThreeStudio to ensure efficient generation. Evaluation Run Text-to-3D and Extract Mesh ```shell YOUR_GROUP: Choose the prompt set to test, including [single, surr, multi] YOUR_METHOD: We now support latentnerf, magic3d, fantasia3d, dreamfusion, sjc, and prolificdreamer. python run_t3.py --group YOUR_GROUP --gpu YOUR_GPU --method YOUR_METHOD
python run_mesh.py --group YOUR_GROUP --gpu YOUR_GPU --method YOUR_METHOD
``` Quality Evaluation shell
python run_eval_quality.py --group YOUR_GROUP --gpu YOUR_GPU --method YOUR_METHOD Alignment Evaluation ```shell First get the 3D prompt of the text-to-3D result python run_caption.py --group YOUR_GROUP --gpu YOUR_GPU --method YOUR_METHOD then run the LLM Evaluation python run_eval_alignment.py --group YOUR_GROUP --gpu YOUR_GPU --method YOUR_METHOD
``` Citation @misc{he2023t3bench,
title={T$^3$Bench: Benchmarking Current Progress in Text-to-3D Generation},
author={Yuze He and Yushi Bai and Matthieu Lin and Wang Zhao and Yubin Hu and Jenny Sheng and Ran Yi and Juanzi Li and Yong-Jin Liu},
year={2023},
eprint={2310.02977},
archivePrefix={arXiv},
primaryClass={cs.CV}
} Acknowledgement This project would not have been possible without the open-source works from ThreeStudio , Cap3D , Stable-DreamFusion , ImageReward , LAVIS . We sincerely thank them all.;T3Bench: Benchmarking Current Progress in Text-to-3D Generation;3d,text-to-3d,diffusion,nerf | THU-LYJ-Lab/T3Bench
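To make the multi-view idea behind the quality metric more concrete, here is a toy sketch that averages a CLIP text-image score over several rendered views of one generated object. This is only an illustration of the multi-view averaging step, not the T^3Bench metric itself, which additionally uses regional convolution and its own scoring models; the render paths and prompt are placeholders:

```python
# Toy illustration only (not the actual T^3Bench quality metric):
# average a CLIP text-image similarity over multiple rendered views.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a wooden chair"  # placeholder prompt
views = [Image.open(f"renders/view_{i}.png") for i in range(4)]  # placeholder renders

inputs = processor(text=[prompt], images=views, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# logits_per_image has shape (num_views, num_texts); average over the views.
score = out.logits_per_image[:, 0].mean().item()
print(f"Mean text-image score over {len(views)} views: {score:.2f}")
```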
jackaduma/awesome_LLMs_interview_notes;awesome_LLMs_interview_notes LLMs interview notes and answers Content notice: I have just been warned that this content infringes copyright. As a newcomer, this was the first time I learned that publishing it on GitHub under an Apache-2.0 license constitutes infringement. I apologize again to the original author and am gradually taking the content down. If you need the material, please go to the original author's paid version. Sorry, and my apologies once more. BLOG: LLM large-model training column on Zhihu Star-History;LLMs interview notes and answers: this repository mainly records interview questions and reference answers for large language model (LLMs) algorithm engineers;ai,interview-practice,interview-preparation,interview-questions,interviews,large-language-models,llm,nlp | jackaduma/awesome_LLMs_interview_notes
KenneyNL/Godot-SplashScreens;Godot SplashScreens This repository includes 70 different 4K splash screens, 18 vector logos and 1 animation to be used within Godot, as wallpaper or to promote Godot. License The original Godot logo is made by Andrea Calabró and is CC-BY-4.0 licensed . Other derivative logos (reimagined logo, cog logo, text logo) featured in this pack are made by Kenney and are CC0 licensed. See additional license information regarding the Godot logo on their website . Donate/support Like these? Check out my 40,000+ other game assets or support by donating !;70 splash screens and logos for use in Godot;[] | KenneyNL/Godot-SplashScreens |
Cyfrin/security-and-auditing-full-course-s23;Smart Contract Auditing, Assembly, Security, and DeFi Ultimate Course Level up your career as a smart contract auditor writing secure and optimized smart contracts.
And [The Red Guild](https://theredguild.org/) Welcome to the repository for the Ultimate Smart Contract Auditing, Assembly, Security, and DeFi Course by Cyfrin Updraft and The Red Guild! [!IMPORTANT]
Course Link: https://updraft.cyfrin.io/courses/security This repository houses the written content of our courses, organized to facilitate easy access and contribution from our community.
Please refer to this for an in-depth explanation of the content: Website - Join Cyfrin Updraft and enjoy 50+ hours of smart contract development courses Twitter - Stay updated with the latest course releases LinkedIn - Add Updraft to your learning experiences Discord - Join a community of 3000+ developers and auditors Newsletter - Weekly security research tips and resources to level up your career Codehawks - Smart contracts auditing competitions to help securing web3 Table of Contents Note: If you're familiar with Patrick's previous courses, we have renamed "Lessons" to "Sections" Smart Contract Auditing, Assembly, Security, and DeFi Ultimate Course Smart Contract Auditing, Assembly, Security, and DeFi Ultimate Course Smart Contract Auditing, Assembly, Security, and DeFi Ultimate Course Table of Contents Table of Contents Introduction, Resources, and Prerequisites Resources For This Course Prerequisites Outcome Bonus NFTs Important Notes for Arbitrum Bridging to Arbitrum Curriculum Curriculum 🤗 Section 0: Welcome to the Course Welcome Why Security? Why Web3 is so important The Final Boss Codebase, you'll be able to audit this at the end of this course Best Practices for this course Section 0 NFT 🐸 Section 1: Review (Don't skip) Section 1 NFT ❓ Section 2: What is a smart contract audit (Security Review)? What is a security review/smart contract audit? Smart Contract Development Life Cycle Top Smart Contract Auditors (Subjective!) Tooling Audit Readiness Attacker vs. Defender mindset Top Attack Vectors Section 2 NFT ⛳️ Section 3: Your first audit | PasswordStore Audit Security Review > Audit "The Tincho" Exploits Exploits: Access Controls Writing your first finding Exploits: Private Data Your first report Section 3 NFT 🐶 Section 4: Manual & Static Analysis | Puppy Raffle Audit Tooling: Static Analysis Scoping & Reconnaissance: Puppy Raffle Exploits: Reentrancy Exploits: Weak RNG Exploits: Arithmetic issues Exploits: DoS (Denial of service) Exploits: Poor ETH Handling Informational Findings Gas Audits Code Maturity Writing the report: Puppy Raffle Section 4 NFT 🔄 Section 5: Invariants & Intro to DeFi | TSwap Audit Scoping & Reconnaissance: T-Swap Intro to DeFi/OnChain Finance Tooling: T-Swap Exploits: Weird ERC20s Exploits: Core Invariant breaking Design Patterns: T-Swap Section 5 NFT 🌩️ Section 6: Centralization, Proxies, and Oracles | Thunder Loan Audit Section 6: Centralization, Proxies, and Oracles | Thunder Loan Audit Scoping & Reconnaissance: Thunder Loan DeFi: Borrowing & Lending Malicious Scope Tooling: Thunder Loan Exploits: Failure to initialize Exploits: Storage collision Exploits: Centralization Exploits: Missing events Exploits: Bad Upgrade Exploits: Oracle & Price Manipulation Design Patterns: Thunder Loan Section 6 NFT 🌉 Section 7: Bridges, Chains, Signatures, Intro to Yul/Assembly | Bridge Boss Audit Section 7: Bridges, Chains, Signatures, Intro to Yul/Assembly | Bridge Boss Audit Tooling: Boss Bridge Scoping & Reconnaissance: Boss Bridge Exploits: Opcode Support Exploits: Signature Replay Exploits: ERC20 Contract Approval Exploits: Unlimited Minting Bridge Hacks Writing the report: Boss Bridge Design Patterns: Boss Bridge Section 7 NFT 🛡️ Section 8: (THE FINAL BOSS AUDIT) MEV, Nodes, & DAOs | Vault Guardians Audit Section 8: (THE FINAL BOSS AUDIT) MEV, Nodes, & DAOs | Vault Guardians Audit Concepts: Vault Guardians Exploits: Governance Attack Exploits: `block.timestamp` can be bad Introduction to MEV Exploits: Slippage Protection Design Patterns: Vault Guardians Section 8 
NFT First CodeHawks Competitive Audit First CodeHawks Competitive Audit Congratulations Congratulations Where do I go now? Learning More Thank you Thank you Sponsors Lead Lecturers / Code Builders Guest Lecturers Special thanks More Security Stuff Huge Extra Thank YOU Introduction, Resources, and Prerequisites Head over to the Cyfrin Updraft website to get the best learning experience! Link to course: https://updraft.cyfrin.io/courses/security ⚠️ All code associated with this course is for demo purposes only. They have been audited, but we do not recommend them for production use and should be used at your own risk. Resources For This Course Join Cyfrin Updraft for the best learning experience! AI Frens ChatGPT Just know that it will often get things wrong, but it's very fast! Phind Like ChatGPT, but it searches the web Bard Other AI extensions Github Discussions Ask questions and chat about the course here! Stack Exchange Ethereum Great place for asking technical questions about Ethereum Peeranha Decentralized Stack Exchange! Cookbook A smart contract registry and co-pilot Exploit Resources SC Exploits Minimized Challenge Contracts Registry Challenge Contracts (Arbitrum) Challenge Contracts (Sepolia) Prerequisites An intermediate understanding of solidity. You don't need to be a pro, but you should be familiar with: Blockchain basics (transactions, blocks, decentralization, etc) Running a smart contract test suite (hardhat, foundry, truffle, etc) Solidity basics (variables, functions, structs, etc) Here are some resources to get you up to speed with the prerequisites: Full Foundry Course : This will give you every single prerequisite Speed Run Ethereum : This will give you most of what you need. But you’ll need a little extra time on invariant tests, using foundry, and DeFi/OnChain Finance. Prerequisite tools git foundry VSCode other other text editor Understand Markdown syntax ChatGPT or other AI assistant Outcome Have the foundational skills to become a professional smart contract auditor Speak, interact, and contribute to the web3 security community Compete in web3 competitive audits Compete in web3 bug bounties Start a career as an independent auditor Become a top 1% smart contract developer Bonus NFTs You can find them on Arbitrum here It's just numbers 0 -> 8 The rest are from the assembly and formal verification or the Web3 DevOps course. Important Notes for Arbitrum IF YOU DECIDE TO MINT THE REAL NFT:
1. We didn't audit/security review the NFT, so if you want to make sure you'll be safe, interact with the contract using a burner wallet (a wallet with very little money that you don't use for anything else)
1. In fact... Get good at interacting with wallets from a burner wallet
2. Read my Tweet thread on basic wallet safety 3. It might be a good idea to wait till later in the course when we teach you about verifying metamask transactions.
4. Feel free to mint NFTs on sepolia without worrying about the above Bridging to Arbitrum We didn't show you how to bring ETH -> Arbitrum, but the process would be: Buy ETH (On an exchange like Coinbase or Kraken ) Send ETH -> one of your wallets like: Safe (Multi-Sig) Metamask Frame Rainbow Argent Coinbase Wallet Use the Arbitrum Bridge Curriculum 🤗 Section 0: Welcome to the Course Do not skip this section! Welcome Why Web3 Security? Web3 is important Permissionless finance Unbreakable promises Web3 security is subpar right now Rekt Leaderboard $1B in 2023 (so far) Web3 vs Web2 hacks. Web2 is mostly PII theft, where Web3 hacks result in irrevocable losses of funds. Bad actors in the space. Lone wolf hackers vs. well funded, persistent nation state actors (e.g. NK). Career opportunities Top 1% Developer Private Audits Cyfrin Trail Of Bits Independent Security Researcher Competitive Audits CodeHawks Code4rena Bug Bounties $2.2M Payout Immunefi Hats Finance Future: Incident Responders On-chain investigators More… Why Web3 is so important Rebuild trust in the ecosystem. Wild West image to the outsiders Pick a class The Final Boss Codebase, you'll be able to audit this at the end of this course Vault Guardians Best Practices for this course Register for Cyfrin Updraft USE THIS SITE!!! It's specfically made to make learning easier Follow the repository: While going through the course be 100% certain to follow along with the github repository. If you run into in an issue check the chronological-updates in the repo. Be Active in the community: Ask questions and engage with other developers going through the course in the discussions tab, be sure to go and say hello or gm! This space is different from the other industries, you don't have to be secretive; communicate, network and learn with others :) Learn at your own pace: It doesn't matter if it takes you a day, a week, a month or even a year. Progress >>> Perfection Take Breaks: You will exhaust your mind and recall less if you go all out and watch the entire course in one sitting. Suggested Strategy every 25 minutes take a 5 min break, and every 2 hours take a longer 30 min break Refer to Documentation: Things are constantly being updated, so whenever Patrick opens up some documentation, open it your end and maybe even have the code sample next to you. Use ChatGPT and/or the course chat And finally, by embarking on this journey, you are now a "Security Researcher", not an "Auditor". The key word being "Researcher", so we will go over strategies for continued learning so you can stay on top of your game. 🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯 🎯 Exercise: Write yourself a message about why you want this
- This will be important for when things get hard
- Is it money? Save web3? Become someone? Write down as many reasons as possible. Section 0 NFT Welcome! (Arb) Welcome! (Sepolia) 🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯🎯 ( back to top ) ⬆️ 🐸 Section 1: Review (Don't skip) Tooling & Environment Prerequistes VSCode VSCodium Foundry chisel cast forge Windows Users: WSL AI Helpers ChatGPT Phind Forums & Resources Ethereum Stack Exchange Peeranha Github Discussions Solidity & Smart Contract Prerequisites Remix Basic smart contracts forge init Fuzzing & Stateful Fuzzing (This might be new) Fuzz tests Stateless Fuzzing Stateful fuzzing Invariants Video Common EIPs/ERCs Github Copilot ERC20s Video NFTs (ERC721s) Video Advanced Solidity storage Clip from foundry course Fallback & Receive Encoding, Call, & Staticcall Clip from the foundry full course Encoding.sol CallAnything.sol Delegatecall & Proxies Clip from foundry full course tx.origin vs msg.sender Selfdestruct (to be removed in an upcoming fork) Solidity by example Advanced Foundry mainnet-forking 🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸 🐸 Exercise:
1. Join the CodeHawks/Cyfrin Discord 2. Go for a walk, and buckle up Section 1 NFT Refresher Fresh NFT (Arb) Refresher Fresh NFT (Sepolia) 🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸🐸 ( back to top ) ⬆️ ❓ Section 2: What is a smart contract audit (Security Review)? What is a security review/smart contract audit? High Level Overview People say "audit" -> security review There is no silver bullet to auditing, and they have limitations 3 phases of a security review Initial Review Scoping Reconnaissance Vulnerability identification Reporting Protocol fixes Fixes issues Retests and adds tests Mitigation Review Reconnaissance Vulnerability identification Reporting Smart Contract Development Life Cycle Plan & Design Develop & Test Smart Contract Audit & Post Deploy Planning Is this just one step? Deploy Monitor & Maintain Top Smart Contract Auditors (Subjective!) Use this list to reference how top quality security teams do reviews, post reports, do research, etc Audit Readiness Simple Security Checklist Test suite with code coverage Fuzzing, Static Analysis Natspec (especially for external/public functions) The Rekt Test ”Code maturity” is important! Tooling Static Analysis Slither Aderyn Fuzzing / Invariant Tests Foundry Echidna Consensys Formal Verification Certora Solidity SMT Checker Maat Manticore AI Tooling vs Humans Attacker vs. Defender mindset Always learning Top Attack Vectors Top attack vectors 📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝 📝 Exercise: Sign up for one security/web3 newsletter! Cyfrin Updraft Blockchain Threat Intelligence (Referral link) Solodit (not a newsletter, but has constant updates of new hacks) rekt Week In Ethereum Consensys Diligence Newsletter Officer CIA Section 2 NFT Hardest one of the whole course (Arb) Hardest one of the whole course (Sepolia) 📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝📝 ( back to top ) ⬆️ 🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢 Important Note: We are now going to do audits. Please note, that we will not find all the bugs in each codebase. Each codebase was designed to show you a specific set of bugs, and give you a good understanding of what an audit "feels" like. 🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢🟢 ⛳️ Section 3: Your first audit (security review) | PasswordStore Audit 💻 Security Review CodeV1: https://sepolia.etherscan.io/address/0x2ecf6ad327776bf966893c96efb24c9747f6694b 💻 Security Review CodeV2: https://github.com/Cyfrin/3-passwordstore-audit 💻 Security Review CodeV3: https://github.com/Cyfrin/3-passwordstore-audit/tree/onboarded 💻 Security Review Final: https://github.com/Cyfrin/3-passwordstore-audit/tree/audit-data Feel free to look ahead and try to find the bugs on the codebase yourself, or get familiar with the protocol first. Remember the phases! 🔽🔽🔽🔽🔽🔽🔽🔽🔽🔽 Initial Review Scoping Reconnaissance Vulnerability identification Reporting 🔼🔼🔼🔼🔼🔼🔼🔼🔼🔼 For this demo, we are ignoring the last 2 phases
- Protocol fixes
- 1. Fixes issues
- 2. Retests and adds tests
- Mitigation Review
- 1. Reconnaissance
- 2. Vulnerability identification
- 3. Reporting The Setup (Scoping): PasswordStore V1 "Hey, here is my link to Etherscan, can I get an audit?" Coinbase asset listing guide V2 Client onboarding: Minimal V3 cloc "The Tincho" Read docs Note taking in-code Small -> Large Solidity Metrics Tincho’s ENS Review Exploits (Vulnerability Identification) Exploits: Access Controls Missing onlyowner Access Controls Unprotected sensitive functions Role misconfiguration Privilege escalation Exploits: Private Data Storing a secret (private data is not private) More Recon coverage Writing your first finding Write finding How to write a good finding Title: Root Cause + Impact Finding Layout:
```
[S-#] Title (ROOT CAUSE + IMPACT)

Description:

Impact:

Proof of Concept:

Recommended Mitigation:
```
- Write PoC
- Mitigation
- Using AI Are we done? Your first report (Reporting) Writing the Report Severity Classification Severity Guide Basic Markdown Report Template Alternative way to generate a PDF report 🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚 🥚 Exercises:
1. Sign up for CodeHawks! 2. Tweet about your first audit! Section 3 NFT Storage refresher! (Arb) Storage refresher! (Sepolia) 🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚🥚 ( back to top ) ⬆️ 🐶 Section 4: Manual & Static Analysis | Puppy Raffle Audit ✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅ This is the BEST security review for new auditors, 100% be sure to pay attention to this section. ✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅✅ This is the go-to best starter audit/security review. There are a lot of bugs in here, some obvious, some not. 💻 Security Review Code: https://github.com/Cyfrin/4-puppy-raffle-audit Concepts you'll learn: Static analysis, Reentrancy, Weak RNG, Arithmetic issues, How to write a professional looking report. Tooling: Static Analysis Web3 bugs machine vs human Static Analysis Slither Aderyn cloc Solidity Metrics (audit estimation) Solidity Visual Developer Scoping & Reconnaissance: Puppy Raffle Exploits: DoS (Denial of service) Fixes: Remove unnecessary loops Exploits: Reentrancy Case Study: DAO Hack Still plagues us today Exercises Search "reentrancy" in Solodit Prevention: CEI/CEII ( FREI-PI soon!) NonReentrant modifiers Exploits: Weak RNG Case Study: Meebits Exercises Search "RNG" in Solodit Prevention: Chainlink VRF Exploits: Arithmetic issues Examples: Under/Overflow Rounding & Precision Exercises Search "overflow" in Solodit Prevention: Use newer versions of solidity Multiply before divide Exploits: Poor ETH Handling Case study: Sushiswap Miso Exercises: Stuck ETH without a way to withdraw Mishandling ETH Search "Stuck ETH" in Solodit Informational Findings Stict Solc Versioning Supply Chain Attacks Magic Numbers Gas Audits Code Maturity Code coverage Static Analysis, follow up What is a Competitive Audit? CodeHawks Docs Writing the report: Puppy Raffle Audit Report Templating Github Report Templating (Cyfrin) Github Report Templating (Spearbit) Github Report Templating (Spearbit Custom) 🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀 🧑🚀 Exercises:
1. Ethernaut Challenges (1, 9, and 10) 🧑‍🚀
2. Sign up for Solodit 3. Post a tweet about how you completed the Puppy Raffle Audit! 4. Sign up for farcaster 5. Do a CodeHawks First Flight Section 4 NFT A combination hack (Arb) A combination hack (Sepolia) 🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀🧑🚀 ( back to top ) ⬆️ 🔄 Section 5: Invariants & Intro to DeFi | TSwap Audit 💻 Security Review Code: https://github.com/Cyfrin/5-t-swap-audit Concepts you'll learn: Stateful fuzzing, Fuzzing, Invariants, FREI-PI/CEII, Advanced DeFi, AMMs, Uniswap, Curve.fi, Constant product formula 🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑 STOP! Don't look at the contracts for this one! We are going to show you how you can use advanced tools to find even more bugs just by properly understanding invariants and writing more effective test suites. 🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑🛑 The Setup (Scoping): T-Swap Client onboarding: Extensive Reconnaissance: T-Swap Protocol Invariants FREI-PI/CEI Intro to DeFi/OnChain Finance DeFi Llama AMMs UniswapV1 Curve Constant Product Formula Tooling: T-Swap Forge Fuzzing, Stateful Fuzzing, Invariants Echidna Foundry Consensys Mutation Testing Introduction Differential Testing Introduction Solodit Properties Exploits: Weird ERC20s Token integration checklist Weird ERC20 List Rebase & fee-on-transfer ERC777 reentrancy callbacks Exploits: Core Invariant breaking Case Study: Uniswap Euler Design Patterns: T-Swap FREI-PI / CEII / Pre & Post Checks 💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰 💰 Exercises:
1. Write a fuzz test to find a bug in this challenge 2. Write a tweet thread about an interesting finding from Solodit Section 5 NFT A legit DeFi On-Chain Hack (Arb) A legit DeFi On-Chain Hack (Sepolia) 💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰 🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 Congratulations!! If you've made it this far in the course and you understand what's going on, you have the skills to start getting paid as a security researcher, doing competitive audits, bug bounties, or even get hired! But if you want to become one of the best in the world and really secure web3, keep going... 🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 ( back to top ) ⬆️ 🌩️ Section 6: Centralization, Proxies, and Oracles | Thunder Loan Audit 💻 Security Review Code: https://github.com/Cyfrin/6-thunder-loan-audit We are staritng to get more advanced with DeFi and smart contract issues. Buckle up, we are getting hotter. Scoping & Reconnaissance: Thunder Loan DeFi: Borrowing & Lending Aave Compound Oracles Chainlink TWAP Proxies UUPS & Transparent Multi-facet Proxy (Diamond) Foundry Proxies & Upgrades What are upgradeable smart contracts? Centralization Malicious Scope Don't "yes-man" every audit Tooling: Thunder Loan Upgradehub __init vs __init_unchained Exploits: Failure to initialize Case Study: I accidentally killed it Exploits: Storage collision Exploits: Centralization Silent Upgrades Case Study: Oasis Exploits: Missing events Exploits: Bad Upgrade Exploits: Oracle & Price Manipulation Flash Loans Case Study: Alpha Homora Case Study: Creme Finance Design Patterns: Thunder Loan Pull over push 📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦 📦 Exercises:
1. YAcademy Proxy 2. Tweet about how YOU feel about upgradeable smart contracts Section 6 NFT It's a bit scary how powerful you've become (Arb) It's a bit scary how powerful you've become (Sepolia) 📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦📦 ( back to top ) ⬆️ 🌉 Section 7: Bridges, Chains, Signatures, Intro to Yul/Assembly | Bridge Boss Audit 💻 Security Review Code: https://github.com/Cyfrin/7-boss-bridge-audit Tooling: Boss Bridge AI Tenderly evm diff We will learn "the Hans'" Checklist Scoping & Reconnaissance: Boss Bridge Precompiles Case Study: Polygon Public private key demo Encoding & Decoding Refresher Exploits: Opcode Support Case study: zkSync Exploits: Signature Replay Exploits: ERC20 Contract Approval Exploits: Unlimited Minting Bridge Hacks Bridge hacks: Ronin, Poly network, Nomad, Wormhole Writing the report: Boss Bridge Design Patterns: Boss Bridge Emergency stop 💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰 💰 Exercises: Damn Vulnerable DeFi Challenges 1, 2, 4 Write a tweet thread about an interesting finding from Solodit Tweet about how you finished the hardest audit yet! Read about more historic attacks: Signature Replay Merkle tree signature issues Polygon Double Spend Nomad Bridge Hack Section 7 NFT Tell Vitalik Hi (Arb) Tell Vitalik Hi (Sepolia) 💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰💰 ( back to top ) ⬆️ Section 7.5: MEV & Governance Introduction to MEV MEV Explained MEV Explained continued Toxic MEV Frontrunning Sandwich Attacks non-toxic Backrunning MEV Protection Design Flashbots Protect MEVBlocker Securerpc MEV in our past security reviews: Puppy: Someone can front-run selectWinner to call a refund T-Swap: Deadline protection means people can "sandwhich" attack you Thunder Loan: Users can front run flash loans to make the fees higher or lower Boss Bridge: A signed transaction could be front run so that an attacker sends tokens from an L2 before the signer can Slippage Protection Exploits: Governance Attack Unlimited Minting Flash Loan Voting Case Study: Beanstalk Metamorphic upgrades Case Study: TORN Governance 🛡️ Section 8: (THE FINAL BOSS AUDIT) MEV, Nodes, & DAOs | Vault Guardians Audit This security review is optional. It's a LOT of code! But if you choose to do it, you'll get a better idea of what a larger codebase feels like. Being comfortable coming up to a codebase and saying "I'll eventually understand this codebase, but right now I don't" is important! 💻 Security Review Code: https://github.com/Cyfrin/8-vault-guardians-audit Concepts: Vault Guardians Tokenized Vaults (ERC-4626) Yearn Finance Permit2 Good luck :) 🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅 🦅 Exercises: 1st CodeHawks Competitive Audit Write a tweet thread about an interesting finding from Solodit Write a blog or tweet on your experience! Read these tips for auditing multi-chain protocols Section 8 NFT GO OUT THERE AND GET IT!!! (Arb) GO OUT THERE AND GET IT!!! (Sepolia) 🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅 🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅 First CodeHawks Competitive Audit How to submit a finding How to decide severity Where to find a competitive audit 🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅 🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅🦅 ( back to top ) ⬆️ 🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 Congratulations!! If you've made it this far in the course and you understand what's going on, you have the skills to become one of the top security researchers in web3! Either as a solo auditor, freelancer, competitive auditor, or even get hired by a top firm! However... if you want to be on the cutting edge and be able to understand every nook in web3, you've got a little more to go... 
🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 Part 2 has been moved! Update on Wallets, Post-deployment, EVM Opcodes, Assembly, and Formal Verification The next sections (originally just called "part 2") have been moved to their own courses!
- Wallets & Post Deployment
- Updraft - GitHub - Assembly, EVM Opcodes, and Formal Verification
- Updraft - GitHub Highly Recommend We highly recommend takin these two courses (linked above) so you can have a thourough grasp of all things EVM. ( back to top ) ⬆️ Congratulations 🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 Completed The Course! 🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊🎊 If you've made it this far... wow. Where do I go now? Competititve Audits CodeHawks We are working on many things to get you more deals. Stay tuned... Code4rena Hats Finance CodeHawks Discord Start marketing your services Twitter, Farcaster, LinkedIn, etc Blogging: Medium, Mirror, etc Bug Bounties Immunefi Hats Finance Learning More Patrick Collins YouTube Solodit Block Threat Intelligence (Referral Link) Consensys Diligence Newsletter Owen Thurm YouTube JohnnyTime The Red Guild YouTube Cyfrin YouTube Disclosures The Cyfrin team runs CodeHawks, Cyfrin Updraft, and private security reviews. They are an advisor to the Peeranha project, and run various blockchain nodes like Chainlink & Ethereum. Additionally, the are responsible for the creation of the Aderyn and Solodit tools. Thank you Sponsors Cyfrin Updraft CodeHawks Solodit The Red Guild Lead Lecturers / Code Builders Patrick Collins | Cyfrin Tincho | The Red Guild Guest Lecturers Josselin Feist | Trail of Bits Trail of Bits Fuzzing & Formal Verification Owen | Guardian Audits Guardian Audits Denial Of Service Andy Li | Sigma Prime Sigma Prime Weak Randomness JohnnyTime | Gingersec Gingersec Governance Attack (Specific) Pashov | Independent Security Researcher MEV Juliette | Cyfrin Governance Attack (General) Alex Roan | Cyfrin Fuzzing & Smart Engineering Special thanks hansfriese carlitox477 0Kage giovannidisiena.eth Dacian Alex Roan Peter Kacherginsky Karma Coma Zach Obront Pinata (for hosting my cringe) More Security Stuff Self accounts "audit" https://scsfg.io/ https://github.com/OffcierCia/Crypto-OpSec-SelfGuard-RoadMap https://github.com/transmissions11/solcurity https://github.com/OpenCoreCH/smart-contract-auditing-heuristics https://secure-contracts.com/ https://github.com/crytic/properties Sponsors Big thanks to our sponsors/donors!! Arbitrum Foundation Chainlink Labs Certora Huge Extra Thank YOU Thanks to everyone who is taking, participating in, and working on this course. These courses are passion project data dumps for everyone in the web3 ecosystem. Let's level up so we can keep web3 safer, and thank you again for taking this course! ( back to top ) ⬆️;The ultimate, most advanced, security, DeFi, assembly, web3 auditor course ever created. ;cryptocurrency,ethereum,security,smart-contract-audit,solidity | Cyfrin/security-and-auditing-full-course-s23 |
Kedreamix/Linly-Talker;Digital Human Intelligent Dialogue System - Linly-Talker — 'Interactive Dialogue with Your Virtual Self' Linly-Talker WebUI [![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/Kedreamix/Linly-Talker) [![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/Kedreamix/Linly-Talker/blob/main/colab_webui.ipynb)
[![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/Kedreamix/Linly-Talker/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Models%20Repo-yellow.svg?style=for-the-badge)](https://huggingface.co/Kedreamix/Linly-Talker)
[**English**](./README.md) | [**中文简体**](./README_zh.md) 2023.12 Update 📆 Users can upload any images for the conversation 2024.01 Update 📆📆 Exciting news! I've now incorporated both the powerful GeminiPro and Qwen large models into our conversational scene. Users can now upload images during the conversation, adding a whole new dimension to the interactions. The deployment invocation method for FastAPI has been updated. The advanced settings options for Microsoft TTS have been updated, increasing the variety of voice types. Additionally, video subtitles have been introduced to enhance visualization. Updated the GPT multi-turn conversation system to establish contextual connections in dialogue, enhancing the interactivity and realism of the digital persona. 2024.02 Update 📆 Updated Gradio to the latest version 4.16.0, providing the interface with additional functionalities such as capturing images from the camera to create digital personas, among others. ASR and THG have been updated. FunASR from Alibaba has been integrated into ASR, enhancing its speed significantly. Additionally, the THG section now incorporates the Wav2Lip model, while ER-NeRF is currently in preparation (Coming Soon). I have incorporated the GPT-SoVITS model, which is a voice cloning method. By fine-tuning it with just one minute of a person's speech data, it can effectively clone their voice. The results are quite impressive and worth recommending. I have integrated a web user interface (WebUI) that allows for better execution of Linly-Talker. 2024.04 Update 📆 Updated the offline mode for Paddle TTS, excluding Edge TTS. Updated ER-NeRF as one of the choices for Avatar generation. Updated app_talk.py to allow for the free upload of voice and images/videos for generation without being based on a dialogue scenario. 2024.05 Update 📆 Updated the beginner-friendly AutoDL deployment tutorial, and also updated the codewithgpu image, allowing for one-click experience and learning. Updated WebUI.py: Linly-Talker WebUI now supports multiple modules, multiple models, and multiple options 2024.06 Update 📆 Integrated MuseTalk into Linly-Talker and updated the WebUI, enabling basic real-time conversation capabilities. The refined WebUI defaults to not loading the LLM model to reduce GPU memory usage. It directly responds with text to complete voiceovers. The enhanced WebUI features three main functions: personalized character generation, multi-turn intelligent dialogue with digital humans, and real-time MuseTalk conversations. These improvements reduce previous GPU memory redundancies and add more prompts to assist users effectively. Content - [Digital Human Intelligent Dialogue System - Linly-Talker — 'Interactive Dialogue with Your Virtual Self'](#digital-human-intelligent-dialogue-system---linly-talker--interactive-dialogue-with-your-virtual-self)
- [Introduction](#introduction)
- [TO DO LIST](#to-do-list)
- [Example](#example)
- [Setup Environment](#setup-environment)
- [ASR - Speech Recognition](#asr---speech-recognition)
- [Whisper](#whisper)
- [FunASR](#funasr)
- [Coming Soon](#coming-soon)
- [TTS - Text To Speech](#tts---text-to-speech)
- [Edge TTS](#edge-tts)
- [PaddleTTS](#paddletts)
- [Coming Soon](#coming-soon-1)
- [Voice Clone](#voice-clone)
- [GPT-SoVITS(Recommend)](#gpt-sovitsrecommend)
- [XTTS](#xtts)
- [Coming Soon](#coming-soon-2)
- [THG - Avatar](#thg---avatar)
- [SadTalker](#sadtalker)
- [Wav2Lip](#wav2lip)
- [ER-NeRF](#er-nerf)
- [MuseTalk](#musetalk)
- [Coming Soon](#coming-soon-3)
- [LLM - Conversation](#llm---conversation)
- [Linly-AI](#linly-ai)
- [Qwen](#qwen)
- [Gemini-Pro](#gemini-pro)
- [ChatGPT](#chatgpt)
- [ChatGLM](#chatglm)
- [GPT4Free](#gpt4free)
- [LLM Multiple Model Selection](#llm-multiple-model-selection)
- [Coming Soon](#coming-soon-4)
- [Optimizations](#optimizations)
- [Gradio](#gradio)
- [Start WebUI](#start-webui)
- [WebUI](#webui)
- [Old Version](#old-version)
- [Folder structure](#folder-structure)
- [Support Us](#support-us)
- [Reference](#reference)
- [Star History](#star-history) Introduction Linly-Talker is an innovative digital human conversation system that integrates the latest artificial intelligence technologies, including Large Language Models (LLM) 🤖, Automatic Speech Recognition (ASR) 🎙️, Text-to-Speech (TTS) 🗣️, and voice cloning technology 🎤. This system offers an interactive web interface through the Gradio platform 🌐, allowing users to upload images 📷 and engage in personalized dialogues with AI 💬. The core features of the system include: Multi-Model Integration : Linly-Talker combines major models such as Linly, GeminiPro, Qwen, as well as visual models like Whisper, SadTalker, to achieve high-quality dialogues and visual generation. Multi-Turn Conversational Ability : Through the multi-turn dialogue system powered by GPT models, Linly-Talker can understand and maintain contextually relevant and coherent conversations, significantly enhancing the authenticity of the interaction. Voice Cloning : Utilizing technologies like GPT-SoVITS, users can upload a one-minute voice sample for fine-tuning, and the system will clone the user's voice, enabling the digital human to converse in the user's voice. Real-Time Interaction : The system supports real-time speech recognition and video captioning, allowing users to communicate naturally with the digital human via voice. Visual Enhancement : With digital human generation technologies, Linly-Talker can create realistic digital human avatars, providing a more immersive experience. The design philosophy of Linly-Talker is to create a new form of human-computer interaction that goes beyond simple Q&A. By integrating advanced technologies, it offers an intelligent digital human capable of understanding, responding to, and simulating human communication. You can watch the demo video here . I have recorded a series of videos on Bilibili, which also represent every step of my updates and methods of use. For detailed information, please refer to Digital Human Dialogue System - Linly-Talker Collection . 🔥🔥🔥 Digital Human Dialogue System Linly-Talker 🔥🔥🔥 🚀 The Future of Digital Humans: The Empowerment Path of Linly-Talker + GPT-SoVIT Voice Cloning Technology Deploying Linly-Talker on AutoDL Platform (Super Detailed Tutorial for Beginners) Linly-Talker Update: Offline TTS Integration and Customized Digital Human Solutions TO DO LIST [x] Completed the basic conversation system flow, capable of voice interactions . [x] Integrated the LLM large model, including the usage of Linly , Qwen , and GeminiPro . [x] Enabled the ability to upload any digital person's photo for conversation. [x] Integrated FastAPI invocation for Linly. [x] Utilized Microsoft TTS with advanced options, allowing customization of voice and tone parameters to enhance audio diversity. [x] Added subtitles to video generation for improved visualization. [x] GPT Multi-turn Dialogue System (Enhance the interactivity and realism of digital entities, bolstering their intelligence) [x] Optimized the Gradio interface by incorporating additional models such as Wav2Lip, FunASR, and others. [x] Voice Cloning Technology (Synthesize one's own voice using voice cloning to enhance the realism and interactive experience of digital entities) [x] Integrate offline TTS (Text-to-Speech) along with NeRF-based methods and models. [x] Linly-Talker WebUI supports multiple modules, multiple models, and multiple options [x] Added MuseTalk functionality to Linly-Talker, achieving near real-time speed with very fast communication. 
[x] Integrated MuseTalk into the Linly-Talker WebUI. [ ] Real-time Speech Recognition (Enable conversation and communication between humans and digital entities using voice) 🔆 The Linly-Talker project is ongoing - pull requests are welcome! If you have any suggestions regarding new model approaches, research, techniques, or if you discover any runtime errors, please feel free to edit and submit a pull request. You can also open an issue or contact me directly via email. 📩⭐ If you find this repository useful, please give it a star! 🤩 If you encounter any issues during deployment, please consult the Common Issues Summary section, where I have compiled a list of all potential problems. Additionally, a discussion group is available here, and I will provide regular updates. Thank you for your attention and use of Linly-Talker! Example | Text/Voice Dialogue | Digital Human Response |
| :----------------------------------------------------------: | :----------------------------------------------------------: |
| What is the most effective way to deal with stress? | |
| How do you manage your time? | |
| Write a review of a symphony concert, discussing the orchestra's performance and the overall audience experience. | |
| Translate into Chinese: Luck is a dividend of sweat. The more you sweat, the luckier you get. | | Setup Environment AutoDL has released an image, which can be used directly at https://www.codewithgpu.com/i/Kedreamix/Linly-Talker/Kedreamix-Linly-Talker . You can also create an environment directly using Docker. I will continue to update the image. bash
docker pull registry.cn-beijing.aliyuncs.com/codewithgpu2/kedreamix-linly-talker:cMDvNE4RYl For Windows, I've included an all-in-one Python package. You can run the steps in sequence to install the necessary dependencies and download the corresponding model to get it running. Follow the instructions using conda and start installing PyTorch from step 02. If you encounter any issues, please feel free to contact me. Windows All-in-One Package Download the code: bash
git clone https://github.com/Kedreamix/Linly-Talker.git --depth 1 If you are using Linly-Talker, you can set up the environment directly with Anaconda, which covers almost all the dependencies required by the models. The specific steps are as follows: ```bash
conda create -n linly python=3.10 conda activate linly PyTorch installation method 1: Install via conda CUDA 11.7 conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.7 -c pytorch -c nvidia CUDA 11.8 conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia PyTorch installation method 2: Install via pip CUDA 11.7 pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 CUDA 11.8 pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 conda install -q ffmpeg # Install ffmpeg==4.2.2 Upgrade pip python -m pip install --upgrade pip Change the PyPI source to speed up the installation of packages pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple
pip install -r requirements_webui.txt Install dependencies related to musetalk pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0" Install NeRF-based dependencies, which might have several issues and can be skipped initially pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install -r TFG/requirements_nerf.txt If there are issues with pyaudio, install the corresponding dependencies sudo apt-get update sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0 Note the following modules. If installation fails, you can enter the directory and use pip install . or python setup.py install to compile and install: NeRF/freqencoder NeRF/gridencoder NeRF/raymarching NeRF/shencoder ``` Below are some older installation methods, which might cause dependency conflicts, but they generally don't produce many bugs. For an easier and better installation, I've updated the above version. You can ignore the following versions or refer to them if you encounter issues. To install the environment using Anaconda and PyTorch, follow the steps below: ```bash
conda create -n linly python=3.10
conda activate linly PyTorch Installation Method 1: Conda Installation (Recommended) conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch PyTorch Installation Method 2: Pip Installation pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113 conda install -q ffmpeg # ffmpeg==4.2.2 pip install -r requirements_app.txt
``` If you want to use models like voice cloning, you may need a higher version of PyTorch. However, the functionality will be more diverse. You may need to use CUDA 11.8 as the driver version, which you can choose. ```bash
conda create -n linly python=3.10 conda activate linly pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 conda install -q ffmpeg # ffmpeg==4.2.2 pip install -r requirements_app.txt Install dependencies for voice cloning pip install -r VITS/requirements_gptsovits.txt
``` If you wish to use NeRF-based models, you may need to set up the corresponding environment: ```bash Install dependencies for NeRF pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install -r TFG/requirements_nerf.txt If there are issues with PyAudio, you can install the corresponding dependencies sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0 Note the following modules. If installation is unsuccessful, you can navigate to the path and use pip install . or python setup.py install to compile and install. NeRF/freqencoder NeRF/gridencoder NeRF/raymarching NeRF/shencoder ``` If you are using PaddleTTS, you can set up the corresponding environment with: bash
pip install -r TTS/requirements_paddle.txt If you are using the FunASR speech recognition model, you can install the environment with: pip install -r ASR/requirements_funasr.txt If using the MuseTalk model, you can set up the environment with the following commands: bash
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
pip install -r TFG/requirements_musetalk.txt Next, you need to install the corresponding models. You can download them using the following methods. Once downloaded, place the files in the specified folder structure (explained at the end of this document). We recommend downloading from Quark Netdisk for the latest updates. Baidu (百度云盘) (Password: linl ) huggingface modelscope Quark(夸克网盘) I made a script that can download all the models mentioned below without requiring much input from the user. This method is suitable for stable network conditions, especially for Linux users. For Windows users, Git can also be used to download the models. If the network connection is unstable, users can choose to manually download the models or try running a Shell script to complete the download. The script has the following features: Choose Download Method : Users can choose to download models from three different sources: ModelScope, Huggingface, or Huggingface mirror site. Download Models : Based on the user's selection, the script executes the corresponding download command. Move Model Files : After downloading, the script moves the model files to the specified directory. Error Handling : Error checks are included in each step of the operation. If any step fails, the script will output an error message and stop execution. bash
sh scripts/download_models.sh HuggingFace Download If the download speed is too slow, consider using a mirror site. For more information, refer to Efficiently Obtain Hugging Face Models Using Mirror Sites . ```bash Download pre-trained models from HuggingFace git lfs install
git clone https://huggingface.co/Kedreamix/Linly-Talker --depth 1 git lfs clone https://huggingface.co/Kedreamix/Linly-Talker --depth 1 pip install -U huggingface_hub export HF_ENDPOINT=https://hf-mirror.com # Use a mirror site huggingface-cli download --resume-download --local-dir-use-symlinks False Kedreamix/Linly-Talker --local-dir Linly-Talker
``` ModelScope Download ```bash Download pre-trained models from Modelscope 1. Using git git lfs install
git clone https://www.modelscope.cn/Kedreamix/Linly-Talker.git --depth 1 git lfs clone https://www.modelscope.cn/Kedreamix/Linly-Talker.git 2. Download using Python code pip install modelscope
from modelscope import snapshot_download
model_dir = snapshot_download('Kedreamix/Linly-Talker')
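# Optional: snapshot_download can also take a cache_dir argument if you prefer
# the weights in an explicit folder rather than the default ModelScope cache, e.g.
# model_dir = snapshot_download('Kedreamix/Linly-Talker', cache_dir='./models')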
``` Move All Models to the Current Directory If you downloaded from Baidu Netdisk, you can refer to the directory structure at the end of the document to move the models. ```bash Move all models to the current directory Checkpoints contain SadTalker and Wav2Lip mv Linly-Talker/checkpoints/* ./checkpoints Enhanced GFPGAN for SadTalker pip install gfpgan mv Linly-Talker/gfpan ./ Voice cloning models mv Linly-Talker/GPT_SoVITS/pretrained_models/* ./GPT_SoVITS/pretrained_models/ Qwen large language model mv Linly-Talker/Qwen ./ MuseTalk model mkdir -p ./Musetalk/models
mv Linly-Talker/MuseTalk/* ./Musetalk/models
``` For the convenience of deployment and usage, an configs.py file has been updated. You can modify some hyperparameters in this file for customization: ```bash Device Running Port port = 7870 API Running Port and IP Localhost port is 127.0.0.1; for global port forwarding, use "0.0.0.0" ip = '127.0.0.1'
api_port = 7871 Linly Model Path mode = 'api' # For 'api', Linly-api-fast.py must be run first
mode = 'offline'
model_path = 'Linly-AI/Chinese-LLaMA-2-7B-hf' SSL Certificate (required for microphone interaction) Preferably an absolute path ssl_certfile = "./https_cert/cert.pem"
ssl_keyfile = "./https_cert/key.pem"
``` This file allows you to adjust parameters such as the device running port, API running port, Linly model path, and SSL certificate paths for ease of deployment and configuration. ASR - Speech Recognition For detailed information about the usage and code implementation of Automatic Speech Recognition (ASR), please refer to ASR - Bridging the Gap with Digital Humans . Whisper To implement ASR (Automatic Speech Recognition) using OpenAI's Whisper, you can refer to the specific usage methods provided in the GitHub repository: https://github.com/openai/whisper FunASR The speech recognition performance of Alibaba's FunASR is quite impressive and it is actually better than Whisper in terms of Chinese language. Additionally, FunASR is capable of achieving real-time results, making it a great choice. You can experience FunASR by accessing the FunASR file in the ASR folder. Please refer to https://github.com/alibaba-damo-academy/FunASR for more information. Coming Soon Welcome everyone to provide suggestions, motivating me to continuously update the models and enrich the functionality of Linly-Talker. TTS - Text To Speech For detailed information about the usage and code implementation of Text-to-Speech (TTS), please refer to TTS - Empowering Digital Humans with Natural Speech Interaction . Edge TTS To use Microsoft Edge's online text-to-speech service from Python without needing Microsoft Edge or Windows or an API key, you can refer to the GitHub repository at https://github.com/rany2/edge-tts . It provides a Python module called "edge-tts" that allows you to utilize the service. You can find detailed installation instructions and usage examples in the repository's README file. PaddleTTS In practical use, there may be scenarios that require offline operation. Since Edge TTS requires an online environment to generate speech, we have chosen PaddleSpeech, another open-source alternative, for Text-to-Speech (TTS). Although there might be some differences in the quality, PaddleSpeech supports offline operations. For more information, you can refer to the GitHub page of PaddleSpeech: https://github.com/PaddlePaddle/PaddleSpeech . Coming Soon Welcome everyone to provide suggestions, motivating me to continuously update the models and enrich the functionality of Linly-Talker. Voice Clone For detailed information about the usage and code implementation of Voice Clone, please refer to Voice Clone - Stealing Your Voice Quietly During Conversations . GPT-SoVITS(Recommend) Thank you for your open source contribution. I have also found the GPT-SoVITS voice cloning model to be quite impressive. You can find the project at https://github.com/RVC-Boss/GPT-SoVITS . XTTS Coqui XTTS is a leading deep learning toolkit for Text-to-Speech (TTS) tasks, allowing for voice cloning and voice transfer to different languages using a 5-second or longer audio clip. 🐸 TTS is a library for advanced text-to-speech generation. 🚀 Over 1100 pre-trained models for various languages. 🛠️ Tools for training new models and fine-tuning existing models in any language. 📚 Utility programs for dataset analysis and management. Experience XTTS online https://huggingface.co/spaces/coqui/xtts Official GitHub repository: https://github.com/coqui-ai/TTS Coming Soon Welcome everyone to provide suggestions, motivating me to continuously update the models and enrich the functionality of Linly-Talker. 
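The ASR and TTS sections above only link out to the upstream projects, so here is a minimal, self-contained sketch of the speech-in / speech-out path they describe. It is not taken from the Linly-Talker source; it assumes the `openai-whisper` and `edge-tts` packages are installed, and the audio file name and voice are placeholders.

```python
import asyncio

import edge_tts   # pip install edge-tts
import whisper    # pip install openai-whisper


def transcribe(audio_path: str) -> str:
    # Whisper "base" is a reasonable trade-off for short spoken questions
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]


async def synthesize(text: str, out_path: str = "answer.mp3") -> None:
    # Any Edge TTS voice name works here; this one is only an example
    communicate = edge_tts.Communicate(text, voice="zh-CN-XiaoxiaoNeural")
    await communicate.save(out_path)


if __name__ == "__main__":
    question = transcribe("question.wav")             # speech -> text (ASR)
    print("Recognized:", question)
    asyncio.run(synthesize(f"You said: {question}"))  # text -> speech (TTS)
```

In the full system, the recognized text would of course pass through the selected LLM (and optionally a cloned voice) before being synthesized and lip-synced.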
THG - Avatar Detailed information about the usage and code implementation of digital human generation can be found in THG - Building Intelligent Digital Humans . SadTalker Digital persona generation can utilize SadTalker (CVPR 2023). For detailed information, please visit https://sadtalker.github.io . Before usage, download the SadTalker model: bash
bash scripts/sadtalker_download_models.sh Baidu (百度云盘) (Password: linl ) Quark(夸克网盘) If downloading from Baidu Cloud, remember to place it in the checkpoints folder. The model downloaded from Baidu Cloud is named sadtalker by default, but it should be renamed to checkpoints . Wav2Lip Digital persona generation can also utilize Wav2Lip (ACM 2020). For detailed information, refer to https://github.com/Rudrabha/Wav2Lip . Before usage, download the Wav2Lip model: | Model | Description | Link to the model |
| ---------------------------- | ----------------------------------------------------- | ------------------------------------------------------------ |
| Wav2Lip | Highly accurate lip-sync | Link |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | Link |
| Expert Discriminator | Weights of the expert discriminator | Link |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | Link | ER-NeRF ER-NeRF (ICCV 2023) is a digital human built using the latest NeRF technology. It allows for the customization of digital characters and can reconstruct them using just a five-minute video of a person. For more details, please refer to https://github.com/Fictionarry/ER-NeRF . Updated: Taking inspiration from the likeness of Obama, for better results, consider cloning and customizing the voice of digital personas for improved effectiveness. MuseTalk MuseTalk is a real-time, high-quality audio-driven lip synchronization model capable of running at over 30 frames per second on an NVIDIA Tesla V100 GPU. This model can be integrated with input videos generated by MuseV, forming a part of a comprehensive virtual human solution. For more details, please refer to https://github.com/TMElyralab/MuseTalk . MuseTalk is trained to operate within the latent space of ft-mse-vae and offers the following features: Unseen Face Synchronization : It can modify unseen faces based on input audio, with a face region size of 256 x 256. Multi-language Support : Supports audio inputs in various languages, including Chinese, English, and Japanese. High-performance Real-time Inference : Achieves real-time inference at over 30 frames per second on an NVIDIA Tesla V100. Facial Center Point Adjustment : Allows the adjustment of the facial region's center point, significantly impacting the generated results. HDTF Dataset Training : Provides model checkpoints trained on the HDTF dataset. Upcoming Training Code Release : Training code will be released soon, facilitating further development and research. MuseTalk offers an efficient and versatile tool for precise audio synchronization with facial expressions in virtual humans, marking a significant step towards fully interactive virtual personas. In Linly-Talker, MuseTalk has been integrated to perform inference on videos based on MuseV, achieving an ideal speed for conversations with near real-time performance. This approach works very well and supports streaming-based inference. Coming Soon Welcome everyone to provide suggestions, motivating me to continuously update the models and enrich the functionality of Linly-Talker. LLM - Conversation For detailed information about the usage and code implementation of Large Language Models (LLM), please refer to LLM - Empowering Digital Humans with Powerful Language Models . Linly-AI Linly-AI is a Large Language model developed by CVI at Shenzhen University. You can find more information about Linly-AI on their GitHub repository: https://github.com/CVI-SZU/Linly Download Linly models: https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf You can use git to download: bash
git lfs install
git clone https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf Alternatively, you can use the huggingface download tool huggingface-cli : ```bash
pip install -U huggingface_hub Set up mirror acceleration Linux export HF_ENDPOINT="https://hf-mirror.com" Windows PowerShell $env:HF_ENDPOINT="https://hf-mirror.com" huggingface-cli download --resume-download Linly-AI/Chinese-LLaMA-2-7B-hf --local-dir Linly-AI/Chinese-LLaMA-2-7B-hf
``` Qwen Qwen is an AI model developed by Alibaba Cloud. You can check out the GitHub repository for Qwen here: https://github.com/QwenLM/Qwen If you want to quickly use Qwen, you can choose the 1.8B model, which has fewer parameters and can run smoothly even with limited GPU memory. Of course, this part can be replaced with other options. You can download the Qwen 1.8B model from this link: https://huggingface.co/Qwen/Qwen-1_8B-Chat You can use git to download: bash
git lfs install
git clone https://huggingface.co/Qwen/Qwen-1_8B-Chat Alternatively, you can use the huggingface download tool huggingface-cli : ```bash
pip install -U huggingface_hub Set up mirror acceleration Linux export HF_ENDPOINT="https://hf-mirror.com" Windows PowerShell $env:HF_ENDPOINT="https://hf-mirror.com" huggingface-cli download --resume-download Qwen/Qwen-1_8B-Chat --local-dir Qwen/Qwen-1_8B-Chat
``` Gemini-Pro Gemini-Pro is an AI model developed by Google. To learn more about Gemini-Pro, you can visit their website: https://deepmind.google/technologies/gemini/ If you want to request an API key for Gemini-Pro, you can visit this link: https://makersuite.google.com/ ChatGPT From OpenAI, requires API application. For more information, please visit https://platform.openai.com/docs/introduction . ChatGLM From Tsinghua University, for more information please visit https://github.com/THUDM/ChatGLM3 . GPT4Free For free access to GPT-4 and other models, you can refer to https://github.com/xtekky/gpt4free . This resource provides methods to utilize these models without cost. LLM Multiple Model Selection In the webui.py file, easily select the model you need. ⚠️ For the first run, make sure to download the model first. Refer to Qwen1.8B. Coming Soon Welcome everyone to provide suggestions, motivating me to continuously update the models and enrich the functionality of Linly-Talker. Optimizations Some optimizations: Use fixed input face images, extract features beforehand to avoid reading each time Remove unnecessary libraries to reduce total time Only save final video output, don't save intermediate results to improve performance Use OpenCV to generate final video instead of mimwrite for faster runtime Gradio Gradio is a Python library that provides an easy way to deploy machine learning models as interactive web apps. For Linly-Talker, Gradio serves two main purposes: Visualization & Demo : Gradio provides a simple web GUI for the model, allowing users to see the results intuitively by uploading an image and entering text. This is an effective way to showcase the capabilities of the system. User Interaction : The Gradio GUI can serve as a frontend to allow end users to interact with Linly-Talker. Users can upload their own images and ask arbitrary questions or have conversations to get real-time responses. This provides a more natural speech interaction method. Specifically, we create a Gradio Interface in app.py that takes image and text inputs, calls our function to generate the response video, and displays it in the GUI. This enables browser interaction without needing to build complex frontend. In summary, Gradio provides visualization and user interaction interfaces for Linly-Talker, serving as effective means for showcasing system capabilities and enabling end users. If considering real-time conversation, it may be necessary to switch to a different framework or customize Gradio. Looking forward to working together with everyone. Start WebUI Previously, I had separated many versions, but it became cumbersome to run multiple versions. Therefore, I have added a WebUI feature to provide a single interface for a seamless experience. I will continue to update it in the future. 
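The Gradio section above describes app.py as an Interface that takes an image and a text question and returns the generated response. As a rough orientation only (the function body is a placeholder rather than the project's actual pipeline, the component labels are assumptions, and the port 7870 is taken from the configs.py shown earlier), the wiring looks roughly like this:

```python
import gradio as gr


def chat_with_avatar(image_path: str, question: str):
    # Placeholder: in Linly-Talker this is where ASR, the LLM, TTS and the
    # talking-head generator are chained to produce the answer video.
    answer = f"(stub) You asked: {question}"
    return answer, None  # no video is produced in this stub


demo = gr.Interface(
    fn=chat_with_avatar,
    inputs=[
        gr.Image(type="filepath", label="Avatar image"),
        gr.Textbox(label="Question"),
    ],
    outputs=[
        gr.Textbox(label="Answer"),
        gr.Video(label="Talking head"),
    ],
    title="Linly-Talker (minimal sketch)",
)

if __name__ == "__main__":
    demo.launch(server_port=7870)
```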
WebUI The current features available in the WebUI are as follows: [x] Text/Voice-based dialogue with virtual characters (fixed characters with male and female roles) [x] Dialogue with virtual characters using any image (you can upload any character image) [x] Multi-turn GPT dialogue (incorporating historical dialogue data to maintain context) [x] Voice cloning dialogue (based on GPT-SoVITS settings for voice cloning, including a built-in smoky voice that can be cloned based on the voice of the dialogue) [x] Digital Persona Text/Voice Playback (based on input text/voice) [x] Multiple modules➕Multiple models➕Multiple choices [x] Multiple role selections: Female/Male/Custom (each part can automatically upload images) Coming Soon [x] Multiple TTS model selections: EdgeTTS / PaddleTTS / GPT-SoVITS / Coming Soon [x] Multiple LLM model selections: Linly / Qwen / ChatGLM / GeminiPro / ChatGPT / Coming Soon [x] Multiple Talker model selections: Wav2Lip / SadTalker / ERNeRF / MuseTalk (coming soon) / Coming Soon [x] Multiple ASR model selections: Whisper / FunASR / Coming Soon You can directly run the web UI to obtain results. The page you will see is as follows: ```bash WebUI python webui.py
``` This time, we've updated the interface. We can freely select the fine-tuned model of GPT-SoVITS to achieve voice cloning. Simply upload a reference audio file to clone the voice. Old Version There are three modes for the current startup, and you can choose a specific setting based on the scenario. The first mode involves fixed Q&A with a predefined character, eliminating preprocessing time. bash
python app.py The first mode has recently been updated to include the Wav2Lip model for dialogue. bash
python appv2.py The second mode allows for conversing with any uploaded image. bash
python app_img.py The third mode builds upon the first one by incorporating a large language model for multi-turn GPT conversations. bash
python app_multi.py Now, the part of voice cloning has been added, allowing for freely switching between cloned voice models and corresponding person images. Here, I have chosen a deep, smoky voice and an image of a male. bash
python app_vits.py A fourth method has been added, which does not fixate on a specific scenario for conversation. Instead, it allows for direct input of voice or the generation of voice for the creation of a digital human. It incorporates methods such as Sadtalker, Wav2Lip, and ER-NeRF. ER-NeRF is trained on videos of a single individual, so a specific model needs to be replaced to render and obtain the correct results. It comes with pre-installed weights for Obama, which can be used directly with the following command: bash
python app_talk.py MuseTalk has been integrated into Linly-Talker, enabling efficient preprocessing of MuseV-generated videos. Once preprocessed, these videos facilitate conversations at speeds that meet near real-time requirements, providing very fast performance. MuseTalk is now available within the WebUI. To run the application, use the following command: bash
python app_musetalk.py Folder structure The folder structure of the weight files is as follows: Baidu (百度云盘) : You can download the weights from here (Password: linl ). huggingface : You can access the weights at this link . modelscope : The weights will be available soon at this link . Quark(夸克网盘) : You can download the weights from here bash
Linly-Talker/
├── checkpoints
│ ├── audio_visual_encoder.pth
│ ├── hub
│ │ └── checkpoints
│ │ └── s3fd-619a316812.pth
│ ├── lipsync_expert.pth
│ ├── mapping_00109-model.pth.tar
│ ├── mapping_00229-model.pth.tar
│ ├── May.json
│ ├── May.pth
│ ├── Obama_ave.pth
│ ├── Obama.json
│ ├── Obama.pth
│ ├── ref_eo.npy
│ ├── ref.npy
│ ├── ref.wav
│ ├── SadTalker_V0.0.2_256.safetensors
│ ├── visual_quality_disc.pth
│ ├── wav2lip_gan.pth
│ └── wav2lip.pth
├── gfpgan
│ └── weights
│ ├── alignment_WFLW_4HG.pth
│ └── detection_Resnet50_Final.pth
├── GPT_SoVITS
│ └── pretrained_models
│ ├── chinese-hubert-base
│ │ ├── config.json
│ │ ├── preprocessor_config.json
│ │ └── pytorch_model.bin
│ ├── chinese-roberta-wwm-ext-large
│ │ ├── config.json
│ │ ├── pytorch_model.bin
│ │ └── tokenizer.json
│ ├── README.md
│ ├── s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
│ ├── s2D488k.pth
│ ├── s2G488k.pth
│ └── speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
├── MuseTalk
│ ├── models
│ │ ├── dwpose
│ │ │ └── dw-ll_ucoco_384.pth
│ │ ├── face-parse-bisent
│ │ │ ├── 79999_iter.pth
│ │ │ └── resnet18-5c106cde.pth
│ │ ├── musetalk
│ │ │ ├── musetalk.json
│ │ │ └── pytorch_model.bin
│ │ ├── README.md
│ │ ├── sd-vae-ft-mse
│ │ │ ├── config.json
│ │ │ └── diffusion_pytorch_model.bin
│ │ └── whisper
│ │ └── tiny.pt
├── Qwen
│ └── Qwen-1_8B-Chat
│ ├── assets
│ │ ├── logo.jpg
│ │ ├── qwen_tokenizer.png
│ │ ├── react_showcase_001.png
│ │ ├── react_showcase_002.png
│ │ └── wechat.png
│ ├── cache_autogptq_cuda_256.cpp
│ ├── cache_autogptq_cuda_kernel_256.cu
│ ├── config.json
│ ├── configuration_qwen.py
│ ├── cpp_kernels.py
│ ├── examples
│ │ └── react_prompt.md
│ ├── generation_config.json
│ ├── LICENSE
│ ├── model-00001-of-00002.safetensors
│ ├── model-00002-of-00002.safetensors
│ ├── modeling_qwen.py
│ ├── model.safetensors.index.json
│ ├── NOTICE
│ ├── qwen_generation_utils.py
│ ├── qwen.tiktoken
│ ├── README.md
│ ├── tokenization_qwen.py
│ └── tokenizer_config.json
├── Whisper
│ ├── base.pt
│ └── tiny.pt
├── FunASR
│ ├── punc_ct-transformer_zh-cn-common-vocab272727-pytorch
│ │ ├── configuration.json
│ │ ├── config.yaml
│ │ ├── example
│ │ │ └── punc_example.txt
│ │ ├── fig
│ │ │ └── struct.png
│ │ ├── model.pt
│ │ ├── README.md
│ │ └── tokens.json
│ ├── speech_fsmn_vad_zh-cn-16k-common-pytorch
│ │ ├── am.mvn
│ │ ├── configuration.json
│ │ ├── config.yaml
│ │ ├── example
│ │ │ └── vad_example.wav
│ │ ├── fig
│ │ │ └── struct.png
│ │ ├── model.pt
│ │ └── README.md
│ └── speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
│ ├── am.mvn
│ ├── asr_example_hotword.wav
│ ├── configuration.json
│ ├── config.yaml
│ ├── example
│ │ ├── asr_example.wav
│ │ └── hotword.txt
│ ├── fig
│ │ ├── res.png
│ │ └── seaco.png
│ ├── model.pt
│ ├── README.md
│ ├── seg_dict
│ └── tokens.json
└── README.md Support Us | Alipay | WeChatPay |
| -------------------- | ----------------------- |
| | | Reference ASR https://github.com/openai/whisper https://github.com/alibaba-damo-academy/FunASR TTS https://github.com/rany2/edge-tts https://github.com/PaddlePaddle/PaddleSpeech LLM https://github.com/CVI-SZU/Linly https://github.com/QwenLM/Qwen https://deepmind.google/technologies/gemini/ https://github.com/THUDM/ChatGLM3 https://openai.com THG https://github.com/OpenTalker/SadTalker https://github.com/Rudrabha/Wav2Lip https://github.com/Fictionarry/ER-NeRF Voice Clone https://github.com/RVC-Boss/GPT-SoVITS https://github.com/coqui-ai/TTS Star History;Digital Avatar Conversational System - Linly-Talker. 😄✨ Linly-Talker is an intelligent AI system that combines large language models (LLMs) with visual models to create a novel human-AI interaction method. 🤝🤖 It integrates various technologies like Whisper, Linly, Microsoft Speech Services, and SadTalker talking head generation system. 🌟🔬;[] | Kedreamix/Linly-Talker |
sst/ion;❍ Ion is a new engine for deploying SST apps. It uses Pulumi and Terraform, as opposed to CDK and CloudFormation. Read the full announcement here . 10x faster deploys Native multi-region support No more cyclical dependencies No stacks or stack resource limits No CDK or npm package conflicts Native support for non-AWS providers Note: Ion is generally available and recommended for new SST users. We are working on a migration path for SST v2 users. Installation curl -fsSL https://ion.sst.dev/install | bash To use a package manager, check out our docs . Manually Download the pre-compiled binaries from the releases page and copy to the desired location. Get Started Get started with your favorite framework: Next.js Remix Astro API Learn More Learn more about some of the key concepts: Live Linking Console Components Contributing Here's how you can contribute: Help us improve our docs Find a bug? Open an issue Feature request? Submit a PR Join the #ion channel to learn more. Join our community Discord | YouTube | Twitter;❍ — a new engine for SST;sst,ion | sst/ion |
027xiguapi/pear-rec;pear-rec README 中文 | English | Deutsch 📖 Documentation pear-rec(pear rec) is a free and open-source cross-Platform screenshot, screen recording, audio recording, and video recording software. pear-rec(pear rec) is a project based on react + electron + vite + viewerjs + plyr + aplayer + react-screenshots + webav. More functions and APIs can be found on the official website(https://027xiguapi.github.io/pear-rec) or https://xiguapi027.gitee.io/pear-rec . 🧱 Frameworks The cross-Platform of pear-rec is based on electronjs , and the front-end is based on reactjs . The functions of screenshot, screen recording, recording, recording (dynamic image) gif are a project based on webrtc and webcodecs . 🌰 Example web pages 🧲 Repository gitee: https://gitee.com/xiguapi027/pear-rec github: https://github.com/027xiguapi/pear-rec 🔨 Usage Getting Started To clone and run this repository you'll need Git , Node.js (which comes with npm ) and pnpm installed on your computer. From your command line: ```shell Clone this repository git clone https://github.com/027xiguapi/pear-rec.git Go into the repository cd pear-rec Install dependencies pnpm install Run the web pnpm run dev:web Run the server pnpm run dev:server Run the desktop pnpm run dev:desktop Run the web pnpm run start:web Run the desktop pnpm run start:desktop Build the web pnpm run build:web Build the desktop pnpm run build:desktop Clear node_modules pnpm run clear
``` 🥰 Functions Features that have been ticked are the latest in the development process but may not have been released in the latest version [x] gif(gif.js) [x] record [x] edit [x] Screenshot(react-screenshots) [x] Frame crop [x] Resizable frame position [x] Colour picker [x] Magnifying glass [x] Brush (freehand brush) [x] Geometric shapes (border fill support adjustment) [x] Advanced palette settings [x] Image filters (local mosaic blur and colour adjustment supported) [x] Customize what happens when the frame is released [x] Map search by map [x] QR code recognition [x] Quick full screen capture to clipboard or custom directory [x] Screenshot history [ ] Window and control selection (using OpenCV edge recognition) [ ] Long screen capture [ ] Multi-screen [x] Record screen(WebRTC) [x] Recording full screen [x] Screenshot [x] Customize size [x] Mute [ ] Key prompt [ ] Cursor Location Tips [ ] Recorder bar [ ] Stream Write [x] Record audio(WebRTC) [x] Setting [x] Watch audio [x] Download audio [ ] Edit audio [x] Record video(WebRTC) [ ] Custom bit rate [x] Picture Preview(viewerjs) [x] Zoom in [x] Zoom out [x] Drag [x] Flip [x] Pin [x] Watch local image [x] Download [x] Print [ ] ocr [x] Watch list [x] Map search by map [x] QR code recognition [x] edit image(tui-image-editor) [x] Video Preview(plyr) [x] Audio Previews(aplayer) [x] setting [x] user uuid [x] Save address [x] Self-starting [x] internationalization(zh,en,de) 🌍 Internationalization(I18n) [x] Chinese [x] English [x] German 👇 Download | OS | Windows | Linux | Macos |
| --- | --- | --- | --- |
| Link | Download | Download | Download | 💐 Acknowledgements A screenshot plugin for electron and react. react-screenshots Process audio/video data in the browser using WebCodecs. WebAV JavaScript GIF encoding library. gif.js Screenshot implemented by community personnel based on vue . electron-screenshort . 👨👨👦👦 Feedback We recommend that issue be used for problem feedback. 🤝 License pear-rec is available under the Apache License V2. Open source etiquette;pear-rec is a free and open-source cross platform software for recording, recording, and taking screenshots;react,electron,nodejs,vite,webrtc,webcodecs,nestjs,typescript | 027xiguapi/pear-rec |
resemble-ai/resemble-enhance;Resemble Enhance https://github.com/resemble-ai/resemble-enhance/assets/660224/bc3ec943-e795-4646-b119-cce327c810f1

Resemble Enhance is an AI-powered tool that aims to improve the overall quality of speech by performing denoising and enhancement. It consists of two modules: a denoiser, which separates speech from noisy audio, and an enhancer, which further boosts the perceptual audio quality by restoring audio distortions and extending the audio bandwidth. The two models are trained on high-quality 44.1kHz speech data, which guarantees high-quality enhancement of your speech.

Usage

Installation

Install the stable version:

```bash
pip install resemble-enhance --upgrade
```

Or try the latest pre-release version:

```bash
pip install resemble-enhance --upgrade --pre
```

Enhance: `resemble_enhance in_dir out_dir`

Denoise only: `resemble_enhance in_dir out_dir --denoise_only`

Web Demo

We provide a web demo built with Gradio; you can try it out here, or also run it locally: `python app.py`

Train your own model

Data Preparation

You need to prepare a foreground speech dataset and a background non-speech dataset. In addition, you need to prepare a RIR dataset (examples), laid out as follows:
data
├── fg
│ ├── 00001.wav
│ └── ...
├── bg
│ ├── 00001.wav
│ └── ...
└── rir
├── 00001.npy
└── ...

Training

Denoiser Warmup

Though the denoiser is trained jointly with the enhancer, a warmup training run is recommended first:

```bash
python -m resemble_enhance.denoiser.train --yaml config/denoiser.yaml runs/denoiser
```

Enhancer

Then, you can train the enhancer in two stages. The first stage trains the autoencoder and vocoder, and the second stage trains the latent conditional flow matching (CFM) model.

Stage 1

```bash
python -m resemble_enhance.enhancer.train --yaml config/enhancer_stage1.yaml runs/enhancer_stage1
```

Stage 2

```bash
python -m resemble_enhance.enhancer.train --yaml config/enhancer_stage2.yaml runs/enhancer_stage2
```

Blog

Learn more on our website!;AI powered speech denoising and enhancement;denoise,speech-denoising,speech-enhancement,speech-processing | resemble-ai/resemble-enhance
floneum/floneum;Floneum Floneum makes it easy to develop applications that use local pre-trained AI models. There are two projects in this repository: Kalosm : A simple interface for pre-trained models in rust Floneum Editor (preview) : A graphical editor for local AI workflows. See the user documentation or plugin documentation for more information. Kalosm Kalosm is a simple interface for pre-trained models in Rust that backs Floneum. It makes it easy to interact with pre-trained, language, audio, and image models. Model Support Kalosm supports a variety of models. Here is a list of the models that are currently supported: | Model | Modality | Size | Description | Quantized | CUDA + Metal Accelerated | Example |
| -------- | ------- | ---- | ----------- | --------- | ----------- | --------------------- |
| Llama | Text | 1b-70b | General purpose language model | ✅ | ✅ | llama 3 chat |
| Mistral | Text | 7-13b | General purpose language model | ✅ | ✅ | mistral chat |
| Phi | Text | 2b-4b | Small reasoning focused language model | ✅ | ✅ | phi 3 chat |
| Whisper | Audio | 20MB-1GB | Audio transcription model | ✅ | ✅ | live whisper transcription |
| RWuerstchen | Image | 5gb | Image generation model | ❌ | ✅ | rwuerstchen image generation |
| TrOcr | Image | 3gb | Optical character recognition model | ❌ | ✅ | Text Recognition |
| Segment Anything | Image | 50MB-400MB | Image segmentation model | ❌ | ❌ | Image Segmentation |
| Bert | Text | 100MB-1GB | Text embedding model | ❌ | ✅ | Semantic Search | Utilities Kalosm also supports a variety of utilities around pre-trained models. These include:
- Extracting, formatting and retrieving context for LLMs : Extract context from txt/html/docx/md/pdf chunk that context then search for relevant context with vector database integrations - Transcribing audio from your microphone or file - Crawling and scraping content from web pages Performance Kalosm uses the candle machine learning library to run models in pure rust. It supports quantized and accelerated models with performance on par with llama.cpp : Mistral 7b | Accelerator | Kalosm | llama.cpp |
| ------ | --------- | --------- |
| Metal (M2) | 26 t/s | 27 t/s |

Structured Generation

Kalosm supports structured generation with a regex grammar. Because the grammar runs in Rust code, it doesn't add any overhead to text generation. In fact, using a grammar can be even faster than uncontrolled text generation because Kalosm supports grammar acceleration! structured generation demo

In addition to regex, you can provide your own grammar to generate structured data. This lets you constrain the response to any structure you want, including complex data structures like JSON, HTML, and XML.

Kalosm Quickstart!

This quickstart will get you up and running with a simple chatbot. Let's get started! A more complete guide for Kalosm is available on the Kalosm website, and examples are available in the examples folder.

1) Install rust

2) Create a new project:
cargo new kalosm-hello-world
cd ./kalosm-hello-world

3) Add Kalosm as a dependency
```sh
cargo add kalosm --git https://github.com/floneum/floneum --features language
```

You can use `--features language,metal`, `--features language,cublas`, or `--features language,mkl` if your machine supports an accelerator.

```sh
cargo add tokio --features full
```

4) Add this code to your `main.rs` file

```rust
use kalosm::language::*;

#[tokio::main]
async fn main() {
let model = Llama::phi_3().await.unwrap();
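// Build an interactive chat session around the loaded model; the system
// prompt below only sets the persona used for the replies.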
let mut chat = Chat::builder(model)
.with_system_prompt("You are a pirate called Blackbeard")
.build();

loop {
chat.add_message(prompt_input("\n> ").unwrap())
.await
.unwrap()
.to_std_out()
.await
.unwrap();
}
}
```

5) Run your application with:

```sh
cargo run --release
```

chat bot demo

Community

If you are interested in either project, you can join the discord to discuss the project and get help.

Contributing

- Report issues on our issue tracker.
- Help other users in the discord.
- If you are interested in contributing, feel free to reach out on discord;Instant, controllable, local pre-trained AI models in Rust;ai,llm,rust,llamacpp,kalosm,candle,llama,mistral,floneum-v3,dioxus | floneum/floneum
Gourieff/comfyui-reactor-node;![Version](https://img.shields.io/badge/node_version-0.5.0_beta4-green?style=for-the-badge&labelColor=darkgreen) ## !!! [Important Update](#latestupdate) !!! Don't forget to add the Node again in existing workflows Support This Project [![Commit activity](https://img.shields.io/github/commit-activity/t/Gourieff/comfyui-reactor-node/main?cacheSeconds=0)](https://github.com/Gourieff/comfyui-reactor-node/commits/main)
![Last commit](https://img.shields.io/github/last-commit/Gourieff/comfyui-reactor-node/main?cacheSeconds=0)
[![Opened issues](https://img.shields.io/github/issues/Gourieff/comfyui-reactor-node?color=red)](https://github.com/Gourieff/comfyui-reactor-node/issues?cacheSeconds=0)
[![Closed issues](https://img.shields.io/github/issues-closed/Gourieff/comfyui-reactor-node?color=green&cacheSeconds=0)](https://github.com/Gourieff/comfyui-reactor-node/issues?q=is%3Aissue+is%3Aclosed)
![License](https://img.shields.io/github/license/Gourieff/comfyui-reactor-node)
English | [Русский](/README_RU.md)
# ReActor Node for ComfyUI The Fast and Simple Face Swap Extension Node for ComfyUI, based on ReActor SD-WebUI Face Swap Extension This Node goes without NSFW filter (uncensored, use it on your own responsibility ) ---
[**What's new**](#latestupdate) | [**Installation**](#installation) | [**Usage**](#usage) | [**Troubleshooting**](#troubleshooting) | [**Updating**](#updating) | [**Disclaimer**](#disclaimer) | [**Credits**](#credits) | [**Note!**](#note)
--- What's new in the latest update 0.5.0 BETA2 You can now build a blended face model from a batch of face models you already have, just add the "Make Face Model Batch" node to your workflow and connect several models via "Load Face Model" Huge performance boost of the image analyzer's module! 10x speed up! Working with videos is now a pleasure! 0.5.0 BETA1 SWAPPED_FACE output for the Masking Helper Node FIX: Empty A-channel for Masking Helper IMAGE output (causing errors with some nodes) was removed 0.5.0 ALPHA1 ReActorBuildFaceModel Node got "face_model" output to provide a blended face model directly to the main Node: Basic workflow 💾 Face Masking feature is available now, just add the "ReActorMaskHelper" Node to the workflow and connect it as shown below: If you don't have the "face_yolov8m.pt" Ultralytics model - you can download it from the Assets and put it into the "ComfyUI\models\ultralytics\bbox" directory As well as "sam_vit_b_01ec64.pth" model - download (if you don't have it) and put it into the "ComfyUI\models\sams" directory; Use this Node to gain the best results of the face swapping process: ReActorImageDublicator Node - rather useful for those who create videos, it helps to duplicate one image to several frames to use them with VAE Encoder (e.g. live avatars): ReActorFaceSwapOpt (a simplified version of the Main Node) + ReActorOptions Nodes to set some additional options such as (new) "input/source faces separate order". Yes! You can now set the order of faces in the index in the way you want ("large to small" goes by default)! Little speed boost when analyzing target images (unfortunately it is still quite slow in compare to swapping and restoring...) Previous versions ### [0.4.2](https://github.com/Gourieff/comfyui-reactor-node/releases/tag/v0.4.2)
- GPEN-BFR-512 and RestoreFormer_Plus_Plus face restoration models support
You can download models here: https://huggingface.co/datasets/Gourieff/ReActor/tree/main/models/facerestore_models Put them into the `ComfyUI\models\facerestore_models` folder - Due to popular demand - you can now blend several images with persons into one face model file and use it with "Load Face Model" Node or in SD WebUI as well;
Experiment and create new faces or blend faces of one person to gain better accuracy and likeness!
Just add the ImpactPack's "Make Image Batch" Node as the input to the ReActor's one and load images you want to blend into one model: Result example (the new face was created from 4 faces of different actresses): Basic workflow [💾](https://github.com/Gourieff/Assets/blob/main/comfyui-reactor-node/workflows/ReActor--Build-Blended-Face-Model--v1.json)
### [0.4.1](https://github.com/Gourieff/comfyui-reactor-node/releases/tag/v0.4.1)
- CUDA 12 Support - don't forget to run (Windows) `install.bat` or (Linux/MacOS) `install.py` for ComfyUI's Python enclosure or try to install ORT-GPU for CU12 manually (https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x)
- Issue https://github.com/Gourieff/comfyui-reactor-node/issues/173 fix
- Separate Node for the Face Restoration postprocessing (FR https://github.com/Gourieff/comfyui-reactor-node/issues/191), can be found inside ReActor's menu (RestoreFace Node)
- (Windows) Installation can be done for Python from the System's PATH
- Different fixes and improvements
- Face Restore Visibility and CodeFormer Weight (Fidelity) options are now available! Don't forget to reload the Node in your existing workflow ### [0.4.0](https://github.com/Gourieff/comfyui-reactor-node/releases/tag/v0.4.0)
- Input "input_image" goes first now, it gives a correct bypass and also it is right to have the main input first;
- You can now save face models as "safetensors" files (`ComfyUI\models\reactor\faces`) and load them into ReActor implementing different scenarios and keeping super lightweight face models of the faces you use: - Ability to build and save face models directly from an image: - Both the inputs are optional, just connect one of them according to your workflow; if both is connected - `image` has a priority.
- Different fixes making this extension better.
Thanks to everyone who finds bugs, suggests new features and supports this project! Installation SD WebUI: AUTOMATIC1111 or SD.Next 1. Close (stop) your SD-WebUI/Comfy Server if it's running
2. (For Windows Users):
- Install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) (Community version - you need this step to build Insightface)
- OR only [VS C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and select "Desktop Development with C++" under "Workloads -> Desktop & Mobile"
- OR if you don't want to install VS or VS C++ BT - follow [this steps (sec. I)](#insightfacebuild)
3. Go to the `extensions\sd-webui-comfyui\ComfyUI\custom_nodes`
4. Open Console or Terminal and run `git clone https://github.com/Gourieff/comfyui-reactor-node`
5. Go to the SD WebUI root folder, open Console or Terminal and run (Windows users)`.\venv\Scripts\activate` or (Linux/MacOS)`venv/bin/activate`
6. `python -m pip install -U pip`
7. `cd extensions\sd-webui-comfyui\ComfyUI\custom_nodes\comfyui-reactor-node`
8. `python install.py`
9. Please, wait until the installation process will be finished
10. (From the version 0.3.0) Download additional facerestorers models from the link below and put them into the `extensions\sd-webui-comfyui\ComfyUI\models\facerestore_models` directory: https://huggingface.co/datasets/Gourieff/ReActor/tree/main/models/facerestore_models
11. Run SD WebUI and check console for the message that ReActor Node is running: 12. Go to the ComfyUI tab and find there ReActor Node inside the menu `ReActor` or by using a search: Standalone (Portable) ComfyUI for Windows 1. Do the following:
- Install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) (Community version - you need this step to build Insightface)
- OR only [VS C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and select "Desktop Development with C++" under "Workloads -> Desktop & Mobile"
- OR if you don't want to install VS or VS C++ BT - follow [this steps (sec. I)](#insightfacebuild)
2. Choose between two options:
- (ComfyUI Manager) Open ComfyUI Manager, click "Install Custom Nodes", type "ReActor" in the "Search" field and then click "Install". After ComfyUI will complete the process - please restart the Server.
- (Manually) Go to `ComfyUI\custom_nodes`, open Console and run `git clone https://github.com/Gourieff/comfyui-reactor-node`
3. Go to `ComfyUI\custom_nodes\comfyui-reactor-node` and run `install.bat`
4. If you don't have the "face_yolov8m.pt" Ultralytics model - you can download it from the [Assets](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/detection/bbox/face_yolov8m.pt) and put it into the "ComfyUI\models\ultralytics\bbox" directory As well as one or both of "Sams" models from [here](https://huggingface.co/datasets/Gourieff/ReActor/tree/main/models/sams) - download (if you don't have them) and put into the "ComfyUI\models\sams" directory
5. Run ComfyUI and find there ReActor Nodes inside the menu `ReActor` or by using a search Usage You can find ReActor Nodes inside the menu ReActor or by using a search (just type "ReActor" in the search field) List of Nodes:
- ••• Main Nodes •••
- ReActorFaceSwap (Main Node)
- ReActorFaceSwapOpt (Main Node with the additional Options input)
- ReActorOptions (Options for ReActorFaceSwapOpt)
- ReActorMaskHelper (Masking Helper)
- ••• Operations with Face Models •••
- ReActorSaveFaceModel (Save Face Model)
- ReActorLoadFaceModel (Load Face Model)
- ReActorBuildFaceModel (Build Blended Face Model)
- ReActorMakeFaceModelBatch (Make Face Model Batch)
- ••• Additional Nodes •••
- ReActorRestoreFace (Face Restoration)
- ReActorImageDublicator (Dublicate one Image to Images List)
- ImageRGBA2RGB (Convert RGBA to RGB)

Connect all required slots and run the query.

Main Node Inputs

- input_image - is an image to be processed (target image, analog of "target image" in the SD WebUI extension); Supported Nodes: "Load Image", "Load Video" or any other nodes providing images as an output;
- source_image - is an image with a face or faces to swap in the input_image (source image, analog of "source image" in the SD WebUI extension); Supported Nodes: "Load Image" or any other nodes providing images as an output;
- face_model - is the input for the "Load Face Model" Node or another ReActor node to provide a face model file (face embedding) you created earlier via the "Save Face Model" Node; Supported Nodes: "Load Face Model", "Build Blended Face Model";

Main Node Outputs

- IMAGE - is an output with the resulting image; Supported Nodes: any nodes which have images as an input;
- FACE_MODEL - is an output providing a source face's model being built during the swapping process; Supported Nodes: "Save Face Model", "ReActor", "Make Face Model Batch";

Face Restoration

Since version 0.3.0 ReActor Node has built-in face restoration. Just download the models you want (see Installation instruction) and select one of them to restore the resulting face(s) during the faceswap. It will enhance face details and make your result more accurate.

Face Indexes

By default ReActor detects faces in images from "large" to "small". You can change this option by adding the ReActorFaceSwapOpt node with ReActorOptions. And if you need to specify faces, you can set indexes for source and input images. Index of the first detected face is 0. You can set indexes in the order you need. E.g.: 0,1,2 (for Source); 1,0,2 (for Input). This means: the second Input face (index = 1) will be swapped by the first Source face (index = 0) and so on.

Genders

You can specify the gender to detect in images. ReActor will swap a face only if it meets the given condition.

Face Models

Since version 0.4.0 you can save face models as "safetensors" files (stored in ComfyUI\models\reactor\faces) and load them into ReActor, implementing different scenarios and keeping super lightweight face models of the faces you use. To make new models appear in the list of the "Load Face Model" Node, just refresh the page of your ComfyUI web application. (I recommend you to use ComfyUI Manager - otherwise your workflow can be lost after you refresh the page if you didn't save it before that.)

Troubleshooting

I.
(For Windows users) If you still cannot build Insightface for some reasons or just don't want to install Visual Studio or VS C++ Build Tools - do the following: (ComfyUI Portable) From the root folder check the version of Python: run CMD and type python_embeded\python.exe -V Download prebuilt Insightface package for Python 3.10 or for Python 3.11 (if in the previous step you see 3.11) or for Python 3.12 (if in the previous step you see 3.12) and put into the stable-diffusion-webui (A1111 or SD.Next) root folder (where you have "webui-user.bat" file) or into ComfyUI root folder if you use ComfyUI Portable From the root folder run: (SD WebUI) CMD and .\venv\Scripts\activate (ComfyUI Portable) run CMD Then update your PIP: (SD WebUI) python -m pip install -U pip (ComfyUI Portable) python_embeded\python.exe -m pip install -U pip Then install Insightface: (SD WebUI) pip install insightface-0.7.3-cp310-cp310-win_amd64.whl (for 3.10) or pip install insightface-0.7.3-cp311-cp311-win_amd64.whl (for 3.11) or pip install insightface-0.7.3-cp312-cp312-win_amd64.whl (for 3.12) (ComfyUI Portable) python_embeded\python.exe -m pip install insightface-0.7.3-cp310-cp310-win_amd64.whl (for 3.10) or python_embeded\python.exe -m pip install insightface-0.7.3-cp311-cp311-win_amd64.whl (for 3.11) or python_embeded\python.exe -m pip install insightface-0.7.3-cp312-cp312-win_amd64.whl (for 3.12) Enjoy! II. "AttributeError: 'NoneType' object has no attribute 'get'" This error may occur if there's smth wrong with the model file inswapper_128.onnx Try to download it manually from here and put it to the ComfyUI\models\insightface replacing existing one III. "reactor.execute() got an unexpected keyword argument 'reference_image'" This means that input points have been changed with the latest update Remove the current ReActor Node from your workflow and add it again IV. ControlNet Aux Node IMPORT failed error when using with ReActor Node Close ComfyUI if it runs Go to the ComfyUI root folder, open CMD there and run: python_embeded\python.exe -m pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless python_embeded\python.exe -m pip install opencv-python==4.7.0.72 That's it! V. "ModuleNotFoundError: No module named 'basicsr'" or "subprocess-exited-with-error" during future-0.18.3 installation Download https://github.com/Gourieff/Assets/raw/main/comfyui-reactor-node/future-0.18.3-py3-none-any.whl Put it to ComfyUI root And run: python_embeded\python.exe -m pip install future-0.18.3-py3-none-any.whl Then: python_embeded\python.exe -m pip install basicsr VI. "fatal: fetch-pack: invalid index-pack output" when you try to git clone the repository" Try to clone with --depth=1 (last commit only): git clone --depth=1 https://github.com/Gourieff/comfyui-reactor-node Then retrieve the rest (if you need): git fetch --unshallow Updating Just put .bat or .sh script from this Repo to the ComfyUI\custom_nodes directory and run it when you need to check for updates Disclaimer This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using the character as a model for clothing etc. The developers of this software are aware of its possible unethical applications and are committed to take preventative measures against them. We will continue to develop this project in the positive direction while adhering to law and ethics. 
Users of this software are expected to use this software responsibly while abiding by local law. If the face of a real person is being used, users are suggested to get consent from the concerned person and clearly mention that it is a deepfake when posting content online. Developers and Contributors of this software are not responsible for actions of end-users. By using this extension you agree not to create any content that:
- violates any laws;
- causes any harm to a person or persons;
- propagates (spreads) any information (both public or personal) or images (both public or personal) which could be meant for harm;
- spreads misinformation;
- targets vulnerable groups of people. This software utilizes the pre-trained models buffalo_l and inswapper_128.onnx , which are provided by InsightFace . These models are included under the following conditions: From insighface license : The InsightFace’s pre-trained models are available for non-commercial research purposes only. This includes both auto-downloading models and manually downloaded models. Users of this software must strictly adhere to these conditions of use. The developers and maintainers of this software are not responsible for any misuse of InsightFace’s pre-trained models. Please note that if you intend to use this software for any commercial purposes, you will need to train your own models or find models that can be used commercially. Models Hashsum Safe-to-use models have the following hash: inswapper_128.onnx MD5:a3a155b90354160350efd66fed6b3d80
SHA256:e4a3f08c753cb72d04e10aa0f7dbe3deebbf39567d4ead6dce08e98aa49e16af 1k3d68.onnx MD5:6fb94fcdb0055e3638bf9158e6a108f4
SHA256:df5c06b8a0c12e422b2ed8947b8869faa4105387f199c477af038aa01f9a45cc 2d106det.onnx MD5:a3613ef9eb3662b4ef88eb90db1fcf26
SHA256:f001b856447c413801ef5c42091ed0cd516fcd21f2d6b79635b1e733a7109dbf det_10g.onnx MD5:4c10eef5c9e168357a16fdd580fa8371
SHA256:5838f7fe053675b1c7a08b633df49e7af5495cee0493c7dcf6697200b85b5b91 genderage.onnx MD5:81c77ba87ab38163b0dec6b26f8e2af2
SHA256:4fde69b1c810857b88c64a335084f1c3fe8f01246c9a191b48c7bb756d6652fb w600k_r50.onnx MD5:80248d427976241cbd1343889ed132b3
SHA256:4c06341c33c2ca1f86781dab0e829f88ad5b64be9fba56e56bc9ebdefc619e43

Please check hashsums if you download these models from unverified (or untrusted) sources.

Thanks and Credits

Click to expand

|file|source|license|
|----|------|-------|
|[buffalo_l.zip](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/buffalo_l.zip) | [DeepInsight](https://github.com/deepinsight/insightface) | ![license](https://img.shields.io/badge/license-non_commercial-red) |
| [codeformer-v0.1.0.pth](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/facerestore_models/codeformer-v0.1.0.pth) | [sczhou](https://github.com/sczhou/CodeFormer) | ![license](https://img.shields.io/badge/license-non_commercial-red) |
| [GFPGANv1.3.pth](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/facerestore_models/GFPGANv1.3.pth) | [TencentARC](https://github.com/TencentARC/GFPGAN) | ![license](https://img.shields.io/badge/license-Apache_2.0-green.svg) |
| [GFPGANv1.4.pth](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/facerestore_models/GFPGANv1.4.pth) | [TencentARC](https://github.com/TencentARC/GFPGAN) | ![license](https://img.shields.io/badge/license-Apache_2.0-green.svg) |
| [inswapper_128.onnx](https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx) | [DeepInsight](https://github.com/deepinsight/insightface) | ![license](https://img.shields.io/badge/license-non_commercial-red) |
| [inswapper_128_fp16.onnx](https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128_fp16.onnx) | [Hillobar](https://github.com/Hillobar/Rope) | ![license](https://img.shields.io/badge/license-non_commercial-red) |
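As a practical aside to the "Models Hashsum" list above, a downloaded model file can be checked before use. Below is a minimal, hypothetical Python sketch (it is not part of this extension): it uses only the standard library, the expected hash is copied from the list above, and the file path is just an example based on the folders mentioned earlier in this README.

```python
import hashlib
from pathlib import Path

# Expected SHA256 copied from the "Models Hashsum" list above
EXPECTED = {
    "inswapper_128.onnx": "e4a3f08c753cb72d04e10aa0f7dbe3deebbf39567d4ead6dce08e98aa49e16af",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in chunks so large .onnx files are not loaded into RAM at once
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example location (adjust to wherever you actually placed the model)
model = Path("ComfyUI/models/insightface/inswapper_128.onnx")
print("OK" if sha256_of(model) == EXPECTED[model.name] else "HASH MISMATCH - do not use this file")
```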
[BasicSR](https://github.com/XPixelGroup/BasicSR) - [@XPixelGroup](https://github.com/XPixelGroup) [facexlib](https://github.com/xinntao/facexlib) - [@xinntao](https://github.com/xinntao) [@s0md3v](https://github.com/s0md3v), [@henryruhs](https://github.com/henryruhs) - the original Roop App [@ssitu](https://github.com/ssitu) - the first version of [ComfyUI_roop](https://github.com/ssitu/ComfyUI_roop) extension Note! If you encounter any errors when you use ReActor Node - don't rush to open an issue, first try to remove current ReActor node in your workflow and add it again ReActor Node gets updates from time to time, new functions appear and old node can work with errors or not work at all;Fast and Simple Face Swap Extension Node for ComfyUI;comfyui,face-swapping,stable-diffusion,stable-diffusion-webui | Gourieff/comfyui-reactor-node |
popcar2/GodotOS;GodotOS | Trailer Welcome to GodotOS, an operating system interface created entirely in Godot! Browse folders, edit text files, view images, play games, and more in one cohesive polished interface that can even be used on the web! GodotOS is more of a toy than a serious project. It's meant to push the limits on UI design in Godot while creating a desktop that is minimalist, distraction-free, and aesthetically pleasing. Aside from that, GodotOS is also meant to be a hub for small games and experiences that can easily be bundled in. Want to add your own game to the start menu? Check the contributing guide! Download Try the web version on Itch.io page Download all versions from the releases page Credits GodotOS was made by me, popcar2. Default wallpaper was made by Haseeb Jamil. Misc icons are from game-icons . Folder icons are from flaticon "Godotris" by MrakDun-desu "Snake" by jean-philippe-martin Note: GodotOS is not actually an operating system, it's an application with an interface resembling one. GodotOS is not affiliated with Godot Engine developers or the Godot Foundation.;A Fake Operating System Interface made in Godot!;godot-engine,application,godot4,interface,desktop-environment | popcar2/GodotOS |
MewPurPur/GodSVG;GodSVG GodSVG is an editor for Scalable Vector Graphics (SVG) files. Unlike other editors, it represents the SVG code directly, doesn't add any metadata, and even lets you edit the SVG code in real time. GodSVG is inspired by the need for an SVG editor without abstractions that produces clean and optimized files. [!IMPORTANT]
GodSVG is not officially released, it's currently in late alpha. GodSVG is almost entirely made by my work in my free time. If you like this project and want to help secure it through its development, you can donate on one of the platforms listed to the right and make it a less financially stupid endeavor for me. Features Interactive SVG editing: Modify individual elements of an SVG file using a user-friendly interface. Real-time code: As you manipulate elements in the UI, code is instantly generated and can be edited. Optimized SVGs: The generated SVG files are small and efficient, and there are many options to assist with optimization. How to get it Download the version you want from the list of GodSVG releases . Note that if you're on MacOS, you need to disable Gatekeeper if you haven't yet. I don't have the time or money to deal with Apple's gatekeeping. Link to the web build: https://godsvg.com/editor To run the latest unreleased version, you can download Godot from https://godotengine.org (development is currently happening in v4.3.beta1). After getting the repository files on your machine, you must open Godot, click on the "Import" button, and import the project.godot folder. If there are a lot of errors as some people have reported, it's Godot's fault. Try closing and opening the project a few times, changing small things on the code that errors out, etc. until the errors hopefully clear. How to use it Documentation for GodSVG is likely eventually going to be built-in. Meanwhile, the basics of using it will be outlined below. GodSVG is something between an SVG editor and an assisted code editor for SVG files. SVGs are a text-based format, and to understand how to be efficient with the tool, it would really help to first familiarize with the SVG basics (Check out the first few tutorials here . If you want to import an existing graphic from scratch, use the Import button on top of the code editor or drag-and-drop an SVG file into the app. To add new shapes, press the "+ Add new tag" button, right-click inside the viewport, or right-click inside the tags container. You can then select your shape from the dropdown. After your shape is added, you can drag its handles in the viewport to change its geometry, or modify the attributes in the inspector to change its other properties, like fill and stroke. You can also always modify the SVG code directly. In the inspector, you can hover each tag's fields to see which attribute they represent. You may select tags in the viewport on the right or the inspector on the left, and right-click to do operations on them such as deleting or moving them. Some of these actions shortcuts, you can find them in the settings menu. Pathdata attributes have a very complex editor that allows for selecting individual path commands with a lot of similarities to tags. You can right-click the path command and click "Insert After", then pick the one you want. If you're used to SVG paths, you can also use the M, L, H, V, Z, A, Q, T, C, S keys to insert a new path command after a selected one; pressing Shift will also make the new command absolute instead of relative. Multiple tags or path commands can be selected as usual with Ctrl+Click and Shift+Click. Additionally, double-clicking a path command will select the whole subpath it's in. To export the graphic, use the Export button on top of the code editor. Community and contributing Contributions are very welcome! GodSVG is built in Godot . For code contributions, read Contributing Guidelines . 
Before starting work on features, first propose them by using the issue form and wait for approval. To report bugs or propose features, use Github's issue form. For more casual discussion around the tool or contributing to it, find me on GodSVG's Discord . GodSVG is free for everyone to use however they want. This is not to say that the official communities are anarchy: All of them, including this Github repository, are actively moderated. License GodSVG is licensed under the MIT License: You are free to use GodSVG for any purpose. GodSVG's license terms and copyright do not apply to the content created with it. You can study how GodSVG works and change it. You may distribute modified versions of GodSVG. Derivative products may use a different license, but they must still document that they derive from the MIT-licensed GodSVG. The above explanation reflects my understanding of my own license terms and does not constitute legal advice.;An application in early development for creating simple vector graphics. Built in Godot.;gdscript,godot,godot-engine,godotengine,open-source,svg-editor,svg-parser,vector-graphics,thorvg,svg | MewPurPur/GodSVG |
adrianhajdin/social_media_app;A Social Media Application Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family! 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Snippets 🔗 Links 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Explore social media with this user-friendly platform that has a nice look and lots of features. Easily create and explore posts, and enjoy a strong authentication system and quick data fetching using React Query for a smooth user experience. If you're getting started and need assistance or face any bugs, join our active Discord community with over 27k+ members. It's a place where people help each other out. ⚙️ Tech Stack React.js Appwrite React Query TypeScript Shadcn Tailwind CSS 🔋 Features 👉 Authentication System : A robust authentication system ensuring security and user privacy 👉 Explore Page : Homepage for users to explore posts, with a featured section for top creators 👉 Like and Save Functionality : Enable users to like and save posts, with dedicated pages for managing liked and saved content 👉 Detailed Post Page : A detailed post page displaying content and related posts for an immersive user experience 👉 Profile Page : A user profile page showcasing liked posts and providing options to edit the profile 👉 Browse Other Users : Allow users to browse and explore other users' profiles and posts 👉 Create Post Page : Implement a user-friendly create post page with effortless file management, storage, and drag-drop feature 👉 Edit Post Functionality : Provide users with the ability to edit the content of their posts at any time 👉 Responsive UI with Bottom Bar : A responsive UI with a bottom bar, enhancing the mobile app feel for seamless navigation 👉 React Query Integration : Incorporate the React Query (Tanstack Query) data fetching library for, Auto caching to enhance performance, Parallel queries for efficient data retrieval, First-class Mutations, etc 👉 Backend as a Service (BaaS) - Appwrite : Utilize Appwrite as a Backend as a Service solution for streamlined backend development, offering features like authentication, database, file storage, and more and many more, including code architecture and reusability 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/adrianhajdin/social_media_app.git
cd social_media_app Installation Install the project dependencies using npm: bash
npm install Set Up Environment Variables Create a new file named .env in the root of your project and add the following content: env
VITE_APPWRITE_URL=
VITE_APPWRITE_PROJECT_ID=
VITE_APPWRITE_DATABASE_ID=
VITE_APPWRITE_STORAGE_ID=
VITE_APPWRITE_USER_COLLECTION_ID=
VITE_APPWRITE_POST_COLLECTION_ID=
VITE_APPWRITE_SAVES_COLLECTION_ID= Replace the placeholder values with your actual Appwrite credentials. You can obtain these credentials by signing up on the Appwrite website . Running the Project bash
npm start Open http://localhost:3000 in your browser to view the project. 🕸️ Snippets constants.index.ts ```typescript
export const sidebarLinks = [
{
imgURL: "/assets/icons/home.svg",
route: "/",
label: "Home",
},
{
imgURL: "/assets/icons/wallpaper.svg",
route: "/explore",
label: "Explore",
},
{
imgURL: "/assets/icons/people.svg",
route: "/all-users",
label: "People",
},
{
imgURL: "/assets/icons/bookmark.svg",
route: "/saved",
label: "Saved",
},
{
imgURL: "/assets/icons/gallery-add.svg",
route: "/create-post",
label: "Create Post",
},
];
export const bottombarLinks = [
{
imgURL: "/assets/icons/home.svg",
route: "/",
label: "Home",
},
{
imgURL: "/assets/icons/wallpaper.svg",
route: "/explore",
label: "Explore",
},
{
imgURL: "/assets/icons/bookmark.svg",
route: "/saved",
label: "Saved",
},
{
imgURL: "/assets/icons/gallery-add.svg",
route: "/create-post",
label: "Create",
},
];
``` globals.css ```css
@import url("https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&display=swap");
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
* {
@apply box-border list-none p-0 m-0 scroll-smooth;
}
body {
@apply bg-dark-1 text-white min-h-screen font-inter;
}
}
@layer utilities {
/* TYPOGRAPHY */
.h1-bold {
@apply text-[36px] font-bold leading-[140%] tracking-tighter;
}
.h1-semibold {
@apply text-[36px] font-semibold leading-[140%] tracking-tighter;
}
.h2-bold {
@apply text-[30px] font-bold leading-[140%] tracking-tighter;
}
.h3-bold {
@apply text-[24px] font-bold leading-[140%] tracking-tighter;
}
.base-semibold {
@apply text-[16px] font-semibold leading-[140%] tracking-tighter;
}
.base-medium {
@apply text-[16px] font-medium leading-[140%];
}
.base-regular {
@apply text-[16px] font-normal leading-[140%];
}
.body-bold {
@apply text-[18px] font-bold leading-[140%];
}
.body-medium {
@apply text-[18px] font-medium leading-[140%];
}
.small-semibold {
@apply text-[14px] font-semibold leading-[140%] tracking-tighter;
}
.small-medium {
@apply text-[14px] font-medium leading-[140%];
}
.small-regular {
@apply text-[14px] font-normal leading-[140%];
}
.subtle-semibold {
@apply text-[12px] font-semibold leading-[140%];
}
.tiny-medium {
@apply text-[10px] font-medium leading-[140%];
}
/* UTILITIES */
.invert-white {
@apply invert brightness-0 transition;
}
.flex-center {
@apply flex justify-center items-center;
}
.flex-between {
@apply flex justify-between items-center;
}
.flex-start {
@apply flex justify-start items-center;
}
.custom-scrollbar::-webkit-scrollbar {
width: 3px;
height: 3px;
border-radius: 2px;
}
.custom-scrollbar::-webkit-scrollbar-track {
background: #09090a;
}
.custom-scrollbar::-webkit-scrollbar-thumb {
background: #5c5c7b;
border-radius: 50px;
}
.custom-scrollbar::-webkit-scrollbar-thumb:hover {
background: #7878a3;
}
.common-container {
@apply flex flex-col flex-1 items-center gap-10 overflow-scroll py-10 px-5 md:px-8 lg:p-14 custom-scrollbar;
}
/* All Users */
.user-container {
@apply max-w-5xl flex flex-col items-start w-full gap-6 md:gap-9;
}
.user-grid {
@apply w-full grid grid-cols-1 xs:grid-cols-2 md:grid-cols-2 lg:grid-cols-2 xl:grid-cols-3 gap-7 max-w-5xl;
}
/* Explore */
.explore-container {
@apply flex flex-col flex-1 items-center overflow-scroll py-10 px-5 md:p-14 custom-scrollbar;
}
.explore-inner_container {
@apply max-w-5xl flex flex-col items-center w-full gap-6 md:gap-9;
}
.explore-search {
@apply h-12 bg-dark-4 border-none placeholder:text-light-4 focus-visible:ring-0 focus-visible:ring-offset-0 ring-offset-0 !important;
}
/* Home */
.home-container {
@apply flex flex-col flex-1 items-center gap-10 overflow-scroll py-10 px-5 md:px-8 lg:p-14 custom-scrollbar;
}
.home-posts {
@apply max-w-screen-sm flex flex-col items-center w-full gap-6 md:gap-9;
}
.home-creators {
@apply hidden xl:flex flex-col w-72 2xl:w-465 px-6 py-10 gap-10 overflow-scroll custom-scrollbar;
}
/* Post Details */
.post_details-container {
@apply flex flex-col flex-1 gap-10 overflow-scroll py-10 px-5 md:p-14 custom-scrollbar items-center;
}
.post_details-card {
@apply bg-dark-2 w-full max-w-5xl rounded-[30px] flex-col flex xl:flex-row border border-dark-4 xl:rounded-l-[24px];
}
.post_details-img {
@apply h-80 lg:h-[480px] xl:w-[48%] rounded-t-[30px] xl:rounded-l-[24px] xl:rounded-tr-none object-cover p-5 bg-dark-1;
}
.post_details-info {
@apply bg-dark-2 flex flex-col gap-5 lg:gap-7 flex-1 items-start p-8 rounded-[30px];
}
.post_details-delete_btn {
@apply p-0 flex gap-3 hover:bg-transparent hover:text-light-1 text-light-1 small-medium lg:base-medium;
}
/* Profile */
.profile-container {
@apply flex flex-col items-center flex-1 gap-10 overflow-scroll py-10 px-5 md:p-14 custom-scrollbar;
}
.profile-inner_container {
@apply flex items-center md:mb-8 xl:items-start gap-8 flex-col xl:flex-row relative max-w-5xl w-full;
}
.profile-tab {
@apply flex-center gap-3 py-4 w-48 bg-dark-2 transition flex-1 xl:flex-initial;
}
/* Saved */
.saved-container {
@apply flex flex-col flex-1 items-center gap-10 overflow-scroll py-10 px-5 md:p-14 custom-scrollbar;
}
/* Bottom bar */
.bottom-bar {
@apply z-50 flex-between w-full sticky bottom-0 rounded-t-[20px] bg-dark-2 px-5 py-4 md:hidden;
}
/* File uploader */
.file_uploader-img {
@apply h-80 lg:h-[480px] w-full rounded-[24px] object-cover object-top;
}
.file_uploader-label {
@apply text-light-4 text-center small-regular w-full p-4 border-t border-t-dark-4;
}
.file_uploader-box {
@apply flex-center flex-col p-7 h-80 lg:h-[612px];
}
/* Grid Post List */
.grid-container {
@apply w-full grid grid-cols-1 sm:grid-cols-2 md:grid-cols-1 lg:grid-cols-2 xl:grid-cols-3 gap-7 max-w-5xl;
}
.grid-post_link {
@apply flex rounded-[24px] border border-dark-4 overflow-hidden cursor-pointer w-full h-full;
}
.grid-post_user {
@apply absolute bottom-0 p-5 flex-between w-full bg-gradient-to-t from-dark-3 to-transparent rounded-b-[24px] gap-2;
}
/* Left sidebar */
.leftsidebar {
@apply hidden md:flex px-6 py-10 flex-col justify-between min-w-[270px] bg-dark-2;
}
.leftsidebar-link {
@apply rounded-lg base-medium hover:bg-primary-500 transition;
}
/* Post Card */
.post-card {
@apply bg-dark-2 rounded-3xl border border-dark-4 p-5 lg:p-7 w-full max-w-screen-sm;
}
.post-card_img {
@apply h-64 xs:h-[400px] lg:h-[450px] w-full rounded-[24px] object-cover mb-5;
}
/* Topbar */
.topbar {
@apply sticky top-0 z-50 md:hidden bg-dark-2 w-full;
}
/* User card */
.user-card {
@apply flex-center flex-col gap-4 border border-dark-4 rounded-[20px] px-5 py-8;
}
}
@layer components {
/* SHADCN COMPONENTS */
/* Form */
.shad-form_label {
@apply text-white !important;
}
.shad-form_message {
@apply text-red !important;
}
.shad-input {
@apply h-12 bg-dark-4 border-none placeholder:text-light-4 focus-visible:ring-1 focus-visible:ring-offset-1 ring-offset-light-3 !important;
}
.shad-textarea {
@apply h-36 bg-dark-3 rounded-xl border-none focus-visible:ring-1 focus-visible:ring-offset-1 ring-offset-light-3 !important;
}
/* Button */
.shad-button_primary {
@apply bg-primary-500 hover:bg-primary-500 text-light-1 flex gap-2 !important;
}
.shad-button_dark_4 {
@apply h-12 bg-dark-4 px-5 text-light-1 flex gap-2 !important;
}
.shad-button_ghost {
@apply flex gap-4 items-center justify-start hover:bg-transparent hover:text-white !important;
}
}
``` queryKeys.ts ```typescript
export enum QUERY_KEYS {
// AUTH KEYS
CREATE_USER_ACCOUNT = "createUserAccount",
// USER KEYS
GET_CURRENT_USER = "getCurrentUser",
GET_USERS = "getUsers",
GET_USER_BY_ID = "getUserById",
// POST KEYS
GET_POSTS = "getPosts",
GET_INFINITE_POSTS = "getInfinitePosts",
GET_RECENT_POSTS = "getRecentPosts",
GET_POST_BY_ID = "getPostById",
GET_USER_POSTS = "getUserPosts",
GET_FILE_PREVIEW = "getFilePreview",
// SEARCH KEYS
SEARCH_POSTS = "getSearchPosts",
}
``` tailwind.config.js ```javascript
/** @type {import('tailwindcss').Config} */
const defaultTheme = require('tailwindcss/defaultTheme')
module.exports = {
darkMode: ['class'],
content: [
'./pages/**/*.{ts,tsx}',
'./components/**/*.{ts,tsx}',
'./app/**/*.{ts,tsx}',
'./src/**/*.{ts,tsx}',
],
theme: {
container: {
center: true,
padding: '2rem',
screens: {
'2xl': '1400px',
},
},
extend: {
colors: {
'primary-500': '#877EFF',
'primary-600': '#5D5FEF',
'secondary-500': '#FFB620',
'off-white': '#D0DFFF',
'red': '#FF5A5A',
'dark-1': '#000000',
'dark-2': '#09090A',
'dark-3': '#101012',
'dark-4': '#1F1F22',
'light-1': '#FFFFFF',
'light-2': '#EFEFEF',
'light-3': '#7878A3',
'light-4': '#5C5C7B',
},
screens: {
'xs': '480px',
},
width: {
'420': '420px',
'465': '465px',
},
fontFamily: {
inter: ['Inter', 'sans-serif'],
},
keyframes: {
'accordion-down': {
from: { height: 0 },
to: { height: 'var(--radix-accordion-content-height)' },
},
'accordion-up': {
from: { height: 'var(--radix-accordion-content-height)' },
to: { height: 0 },
},
},
animation: {
'accordion-down': 'accordion-down 0.2s ease-out',
'accordion-up': 'accordion-up 0.2s ease-out',
},
},
},
plugins: [require('tailwindcss-animate')],
};
``` types.index.ts ```typescript
export type INavLink = {
imgURL: string;
route: string;
label: string;
};
export type IUpdateUser = {
userId: string;
name: string;
bio: string;
imageId: string;
imageUrl: URL | string;
file: File[];
};
export type INewPost = {
userId: string;
caption: string;
file: File[];
location?: string;
tags?: string;
};
export type IUpdatePost = {
postId: string;
caption: string;
imageId: string;
imageUrl: URL;
file: File[];
location?: string;
tags?: string;
};
export type IUser = {
id: string;
name: string;
username: string;
email: string;
imageUrl: string;
bio: string;
};
export type INewUser = {
name: string;
email: string;
username: string;
password: string;
};
``` useDebounce.ts ```typescript
import { useEffect, useState } from "react";
// https://codesandbox.io/s/react-query-debounce-ted8o?file=/src/useDebounce.js
export default function useDebounce<T>(value: T, delay: number): T {
// State and setters for debounced value
const [debouncedValue, setDebouncedValue] = useState<T>(value);
useEffect(() => {
// Update debounced value after delay
const handler = setTimeout(() => {
setDebouncedValue(value);
}, delay);
// Cancel the timeout if value changes (also on delay change or unmount)
// This is how we prevent debounced value from updating if value is changed ...
// .. within the delay period. Timeout gets cleared and restarted.
return () => {
clearTimeout(handler);
};
}, [value, delay]); // Only re-call effect if value or delay changes
return debouncedValue;
}
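
// Example usage (hypothetical, not part of the original snippet):
//   const debouncedSearch = useDebounce(searchTerm, 500);
//   // feed `debouncedSearch` into the search query so it only fires after typing pauses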
``` utils.ts ```typescript
import { type ClassValue, clsx } from "clsx";
import { twMerge } from "tailwind-merge";
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
export const convertFileToUrl = (file: File) => URL.createObjectURL(file);
export function formatDateString(dateString: string) {
const options: Intl.DateTimeFormatOptions = {
year: "numeric",
month: "short",
day: "numeric",
};
const date = new Date(dateString);
const formattedDate = date.toLocaleDateString("en-US", options);
const time = date.toLocaleTimeString([], {
hour: "numeric",
minute: "2-digit",
});
return `${formattedDate} at ${time}`;
}
// Returns a human-friendly relative timestamp ("Just now", "5 minutes ago", ...), falling back to the full date after ~30 days
export const multiFormatDateString = (timestamp: string = ""): string => {
const timestampNum = Math.round(new Date(timestamp).getTime() / 1000);
const date: Date = new Date(timestampNum * 1000);
const now: Date = new Date();
const diff: number = now.getTime() - date.getTime();
const diffInSeconds: number = diff / 1000;
const diffInMinutes: number = diffInSeconds / 60;
const diffInHours: number = diffInMinutes / 60;
const diffInDays: number = diffInHours / 24;
switch (true) {
case Math.floor(diffInDays) >= 30:
return formatDateString(timestamp);
case Math.floor(diffInDays) === 1:
return `${Math.floor(diffInDays)} day ago`;
case Math.floor(diffInDays) > 1 && diffInDays < 30:
return `${Math.floor(diffInDays)} days ago`;
case Math.floor(diffInHours) >= 1:
return `${Math.floor(diffInHours)} hours ago`;
case Math.floor(diffInMinutes) >= 1:
return `${Math.floor(diffInMinutes)} minutes ago`;
default:
return "Just now";
}
};
export const checkIsLiked = (likeList: string[], userId: string) => {
return likeList.includes(userId);
};
``` 🔗 Links Assets used in the project are here 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Build a modern social app with a stunning UI with a native mobile feel, a special tech stack, an infinite scroll feature, and amazing performance using React JS, Appwrite, TypeScript, and more.;appwrite,react,reactjs | adrianhajdin/social_media_app |
gnobitab/InstaFlow;## [ICLR2024] ⚡InstaFlow! One-Step Stable Diffusion with Rectified Flow
[[Paper]](https://arxiv.org/abs/2309.06380) [[Demo in 🤗Hugging Face Space]](https://huggingface.co/spaces/XCLiu/InstaFlow) [[Code and Pre-trained Models](https://github.com/gnobitab/InstaFlow/tree/main/code)][[Colab Notebook](https://colab.research.google.com/drive/1mXvIrkbWFwHcZl0sMNjrR3zGtYlrI6re?usp=sharing)]
by *Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, Qiang Liu* News (New) 2024/06/07 Our large-scale Rectified Flow is extended to text-to-3D and image inversion/editing! Check out the amazing work from Xiaofeng Yang et al. ( paper and code )! 2024/05/17 Try our new few-step model PeRFlow at here ! 2023/12/04 We updated the demo in 🤗Hugging Face Space with InstaFlow+ dreamshaper-7 . Image quality significantly improves! We also provide the Gradio demo for you to run locally here . 2023/12/04 One-step InstaFlow is compatible with pre-trained LoRAs! See here . Code is available here . (We thank individual contributor Dr. Hanshu Yan ) 2023/12/04 ONNX support is available now! [ONNX InstaFlow] [ONNX 2-Rectified Flow] [ONNXStack UI] (We thank saddam213 ) 2023/11/23 Colab notebook is online now. Try it here . (We thank individual contributor xaviviro ) 2023/11/22 One-step InstaFlow is compatible with pre-trained ControlNets. See here . (We thank individual contributor Dr. Hanshu Yan ) 2023/11/22 We release the pre-trained models and inference codes here . 2023/09/26 We provide a demo of InstaFlow-0.9B in 🤗Hugging Face Space. Try it here . Introduction Diffusion models have demonstrated remarkable promises in text-to-image generation. However, their efficacy is still largely hindered by computational constraints stemming from the need of iterative numerical solvers at the inference time for solving the diffusion/flow processes. InstaFlow is an ultra-fast , one-step image generator that achieves image quality close to Stable Diffusion, significantly reducing the demand of computational resources. This efficiency is made possible through a recent Rectified Flow technique, which trains probability flows with straight trajectories, hence inherently requiring only a single step for fast inference. InstaFlow has several advantages:
- Ultra-Fast Inference : InstaFlow models are one-step generators , which directly map noises to images and avoid multi-step sampling of diffusion models. On our machine with A100 GPU, the inference time is around 0.1 second, saving ~90% of the inference time compared to the original Stable Diffusion.
- High-Quality : InstaFlow generates images with intricate details like Stable Diffusion, and have similar FID on MS COCO 2014 as state-of-the-art text-to-image GANs, like StyleGAN-T .
- Simple and Efficient Training : The training process of InstaFlow merely involves supervised training . Leveraging pre-trained Stable Diffusion, it only takes 199 A100 GPU days to get InstaFlow-0.9B . Gallery One-step generation with InstaFlow-0.9B (0.09s per image, $512 \times 512$) One-step generation with InstaFlow-1.7B (0.12s per image, $512 \times 512$) One-step generation with InstaFlow-0.9B (0.09s) + SDXL-Refiner ($1024 \times 1024$) Latent space interpolation of one-step InstaFlow-0.9B (0.09s per image, $512 \times 512$) https://github.com/gnobitab/InstaFlow/assets/1157982/e8c41d7c-aa1d-4ac3-b96f-5cda847331fe LoRA One-step InstaFlow is compatible with pre-trained LoRAs. We thank individual contributor Dr. Hanshu Yan for providing and testing the Rectified Flow+LoRA pipeline! InstaFlow seems to have higher diversity than SDXL-Turbo. https://github.com/gnobitab/InstaFlow/assets/1157982/8f12960e-116d-486a-a2e9-448d745394c2 ControlNet One-step InstaFlow is fully compatible with pre-trained ControlNets. We thank individual contributor Dr. Hanshu Yan for providing and testing the Rectified Flow+ControlNet pipeline! Below are One-Step Generation with InstaFlow-0.9B + ControlNet: Comparison with SD 1.5 on our A100 machine For an intuitive understanding, we used the same A100 server and took screenshots from the Gridio interface of random generation with different models. InstaFlow-0.9B is one-step, while SD 1.5 adopts 25-step DPMSolver . It takes around 0.3 second to download the image from the server. The text prompt is "A photograph of a snowy mountain near a beautiful lake under sunshine." | InstaFlow-0.9B | Stable Diffusion 1.5 |
|:-:|:-:|

Method: Straightening Generative Probability Flows with Text-Conditioned Reflow

https://github.com/gnobitab/InstaFlow/assets/1157982/897e2d1a-eff9-44bf-ab89-bc26bbc0d8a7

Our pipeline consists of three steps:

1. Generate (text, noise, image) triplets from pre-trained Stable Diffusion.
2. Apply text-conditioned reflow to yield 2-Rectified Flow, which is a straightened generative probability flow.
3. Distill from 2-Rectified Flow to get One-Step InstaFlow. Note that distillation and reflow are orthogonal techniques.

As captured in the video and the image, straight flows have the following advantages:

- Straight flows require fewer steps to simulate.
- Straight flows give better coupling between the noise distribution and the image distribution, thus allowing successful distillation.

Related Materials

We provide several related links and readings here:

- The official Rectified Flow github repo (https://github.com/gnobitab/RectifiedFlow)
- An introduction of Rectified Flow (https://www.cs.utexas.edu/~lqiang/rectflow/html/intro.html)
- An introduction of Rectified Flow in Chinese--Zhihu (https://zhuanlan.zhihu.com/p/603740431)
- FlowGrad: Controlling the Output of Generative ODEs With Gradients (https://github.com/gnobitab/FlowGrad)
- Fast Point Cloud Generation with Straight Flows (https://github.com/klightz/PSF)
- Piecewise Rectified Flow (https://github.com/magic-research/piecewise-rectified-flow)
- Text-to-Image Rectified Flow as Plug-and-Play Priors (https://github.com/yangxiaofeng/rectified_flow_prior)

Citation

@inproceedings{liu2023instaflow,
title={Instaflow: One step is enough for high-quality diffusion-based text-to-image generation},
author={Liu, Xingchao and Zhang, Xiwen and Ma, Jianzhu and Peng, Jian and Liu, Qiang},
booktitle={International Conference on Learning Representations},
year={2024}
}

Thanks

Our training scripts are modified from one of the fine-tuning examples in Diffusers.
Other parts of our work also heavily rely on the 🤗 Diffusers library.;:zap: InstaFlow! One-Step Stable Diffusion with Rectified Flow (ICLR 2024);[] | gnobitab/InstaFlow